Instruction: Is there a role for Excimer laser in the treatment of keratoconus?
Abstracts:
abstract_id: PUBMED:33573798
Excimer laser in keratoconus management. Visual rehabilitation in keratoconus is a challenge, notably because of the significant irregular astigmatism and optical aberrations that it induces. Many surgical techniques have been developed in addition to, or in case of failure of, spectacles and rigid gas permeable contact lenses: intracorneal ring segments, intraocular lenses, excimer laser and, as a last resort, keratoplasty. The excimer laser photoablates the cornea, allowing remodeling of its surface. There are various treatment modes (wavefront-optimized, wavefront-guided and topography-guided), allowing a customized treatment to be performed if needed. Its use in keratoconus has been described since the 2000s, alone or in combination with other procedures. For example, the combination of photoablation and corneal cross-linking, a technique that increases corneal rigidity and in so doing can slow or even stop the progression of keratoconus, has proven its efficacy and safety in many studies, and various protocols have been described. A triple procedure, combining intracorneal ring segments, excimer laser and cross-linking, has also given some very promising results in progressive keratoconus, providing a significant improvement in visual acuity and topographic data. The combination of excimer laser and intraocular lenses remains a poorly explored avenue that might provide satisfactory results. The objective of this review is to summarize the recent data on the excimer laser in keratoconus management.
abstract_id: PUBMED:9587590
Is there a role for Excimer laser in the treatment of keratoconus? Purpose: In advanced keratoconus, when contact lenses are no longer tolerated or provide insufficient vision, the only currently available treatment is penetrating keratoplasty. However, obtaining a graft takes a long time; we therefore evaluated the potential role of the excimer laser in the management of these patients.
Material And Methods: We present a series of 8 eyes with advanced keratoconus that were, at the time, intolerant of contact lenses because of major ectasia and/or apical opacities. Excimer photoablation, in therapeutic or, more rarely, refractive mode, was performed to flatten or resurface the cornea, allowing a new contact lens fitting.
Results: We noted no complications, except for denser and longer-lasting postoperative haze. Thanks to good compliance, 4 patients were successfully refitted with contact lenses, leading to optimal visual recovery. On the other hand, in 3 cases, a corneal graft had to be performed, without any adverse event. Histopathologic analysis of the 3 corneal caps provided details about the tissue changes.
Conclusion: The excimer laser, used therapeutically, may be an effective treatment for advanced keratoconus, provided that indications are selected cautiously and that treatment is always combined with contact lens fitting. This surgery could then delay or even avoid penetrating keratoplasty.
abstract_id: PUBMED:33630150
Excimer laser-assisted DALK: a case report from the Homburg Keratoconus Center (HKC). Indications: The aim of excimer laser-assisted deep anterior lamellar keratoplasty (excimer-DALK) is, as in mechanical DALK, the treatment of keratectasia (keratoconus and pellucid marginal degeneration), stromal scars or stromal corneal dystrophy. A prerequisite for surgery is the absence of (pre-) Descemet's scars and an intact endothelium.
Surgical Technique: After excimer laser-assisted trephination to 80% of the corneal thickness at the trephination site, intrastromal air injection (so-called big bubble) and lamellar corneal preparation, a lamellar anterior transplantation of the endothelium-free donor tissue is performed. The technique combines the advantages of DALK and excimer laser trephination. We describe the steps of an excimer-DALK from the Homburg Keratoconus Center (HKC).
Conclusion: Excimer-DALK is a viable treatment option for patients with intact endothelium. In cases of intraoperative perforation, conversion to excimer-perforating keratoplasty (PKP) with all the advantages of excimer laser trephination remains feasible.
abstract_id: PUBMED:28932339
Penetrating Keratoplasty for Keratoconus - Excimer Versus Femtosecond Laser Trephination. Background: In keratoconus, rigid gas-permeable contact lenses as the correction method of first choice allow for good visual acuity for quite some time. In a severe stage of the disease with major cone-shaped protrusion of the cornea, even specially designed keratoconus contact lenses are no longer tolerated. If there are contraindications to intrastromal ring segments, corneal transplantation typically has a very good prognosis.
Methods: In advanced keratoconus - especially after corneal hydrops due to rupture of Descemet's membrane - penetrating keratoplasty (PKP) is still the surgical method of first choice. Noncontact excimer laser trephination seems to be especially beneficial for eyes with iatrogenic keratectasia after LASIK and for repeat grafts in cases of "keratoconus recurrence" due to small grafts with a thin host cornea. For donor trephination from the epithelial side, an artificial chamber is used. Wound closure is achieved with a double running cross-stitch suture according to Hoffmann. Graft size is adapted individually depending on corneal size ("as large as possible - as small as necessary"). Limbal centration is preferred intraoperatively due to optical displacement of the pupil. During the last 10 years, femtosecond laser trephination has been introduced from the USA as a potentially advantageous approach.
Results: Prospective clinical studies have shown that the technique of non-contact excimer laser PKP improves donor and recipient centration and reduces "vertical tilt" and "horizontal torsion" of the graft in the recipient bed, resulting in significantly less "all-sutures-out" keratometric astigmatism (2.8 vs. 5.7 D), higher regularity of the topography (SRI 0.80 vs. 0.98) and better visual acuity (0.80 vs. 0.63) compared with the motor trephine. The stage of the disease does not influence functional outcome after excimer laser PKP. Refractive outcomes of femtosecond laser keratoplasty, however, resemble those of the motor trephine.
Conclusions: In contrast to the undisputed clinical advantages of excimer laser keratoplasty with orientation teeth/notches in keratoconus, the major disadvantage of femtosecond laser application remains the necessity of suction and applanation of the cone during trephination, with intraoperative pitfalls and high postoperative astigmatism.
abstract_id: PUBMED:31650512
Comparison of Excimer Laser Versus Femtosecond Laser Assisted Trephination in Penetrating Keratoplasty: A Retrospective Study. Introduction: To compare the impact of non-mechanical excimer-assisted (EXCIMER) and femtosecond laser-assisted (FEMTO) trephination on outcomes after penetrating keratoplasty (PK).
Methods: In this retrospective study, 68 eyes from 23 females and 45 males (mean age at time of surgery, 53.3 ± 19.8 years) were included. Inclusion criteria were one surgeon (BS), primary central PK, Fuchs' dystrophy (FUCHS) or keratoconus (KC), no previous intraocular surgery, graft oversize 0.1 mm and 16-bite double running suture. Trephination was performed using a manually guided 193-nm Zeiss Meditec MEL70 excimer laser (EXCIMER group: 18 FUCHS, 17 KC) or 60-kHz IntraLase™ femtosecond laser (FEMTO group: 16 FUCHS, 17 KC). Subjective refractometry (trial glasses) and corneal topography analysis (Pentacam HR; Casia SS-1000 AS-OCT; TMS-5) were performed preoperatively, before removal of the first suture (11.4 ± 1.9 months) and after removal of the second suture (22.6 ± 3.8 months).
Results: Before suture removal, mean refractive/AS-OCT topographic astigmatism did not differ significantly between EXCIMER and FEMTO. After suture removal, mean refractive/Pentacam/AS-OCT topographic astigmatism was significantly higher in the FEMTO (6.2 ± 2.9 D/7.1 ± 3.2 D/7.4 ± 3.3 D) than in the EXCIMER patients (4.3 ± 3.0 D/4.4 ± 3.1 D/4.0 ± 2.9 D) (p ≤ 0.005). Mean corrected distance visual acuity increased from 0.22 and 0.23 preoperatively to 0.55 and 0.53 before or 0.7 and 0.6 after suture removal in the EXCIMER and FEMTO groups, respectively. Differences between EXCIMER and FEMTO were only pronounced in the KC subgroup.
Conclusion: Non-mechanical EXCIMER trephination seems to have advantages regarding postoperative corneal astigmatism and visual acuity compared with FEMTO trephination, especially in KC. A bigger sample size and longer follow-up are needed to evaluate the long-term impact of EXCIMER and FEMTO trephination on postoperative topographic and visual outcomes.
abstract_id: PUBMED:31696253
Excimer laser-assisted penetrating keratoplasty: on 1 July 2019, excimer laser penetrating keratoplasty celebrates its 30th anniversary (video article). Background And Objective: Since 1986, Naumann and Lang have developed and optimized the technique of nonmechanical corneal trephination using a 193-nm excimer laser along metal masks for penetrating keratoplasty (PKP). The aim of this paper is to demonstrate the technique of excimer laser-assisted keratoplasty.
Methods: After beginning with elliptical transplants, a major improvement was the introduction of eight "orientation cogs/notches" at the edges of round metal masks for the reduction of "horizontal torsion". For noncontact donor trephination from the epithelial side, an artificial anterior chamber is used. The surgical technique is demonstrated in detail and at almost full length in a video of the operation, which is available online.
Results: Prospective clinical studies have shown that the technique of noncontact excimer laser PKP reduces donor and recipient misalignment as well as "vertical tilt" and "horizontal torsion" of the graft in the recipient bed. This results in significantly less keratometric astigmatism, greater regularity of the topography and better spectacle-corrected visual acuity after removal of the sutures. Besides causing less perioperative disturbance of the blood-intraocular fluid barrier, excimer laser trephination does not induce enhanced cataract formation and does not impair the graft endothelium. Likewise, the rate of immune reactions is not adversely affected by the excimer laser. Furthermore, trephination of an unstable cornea, such as in (nearly) perforated corneal ulcers or after radial keratotomy or LASIK, is facilitated.
Conclusion: Because of these undisputed clinical advantages, especially in eyes with advanced keratoconus, excimer laser trephination with orientation cogs/notches is currently favored in routine daily practice in Homburg/Saar.
abstract_id: PUBMED:9682106
Excimer laser surgery for keratoconus. Purpose: To determine whether there is increased risk associated with excimer laser surgery of primary keratoconus.
Setting: Department of Ophthalmology, County Council Hospital, Ryhov, Jönköping, Sweden.
Methods: Twenty-four eyes in 23 patients with keratoconus had photorefractive keratectomy (PRK) to reduce the steepness of the cone. A VISX Twenty-Twenty B laser (193 nm) was used for the treatments. Spherical ablations, cylindrical ablations, or both were used. Patients were followed for a mean of 22 months (range 6 to 46 months).
Results: Fourteen patients (58%) had improved visual acuity. Eleven (46%) could manage with only spectacles, and three (13%) were fitted with contact lenses to get adequate visual function. All treated corneas healed, and no acceleration of the keratoconus was seen.
Conclusion: No increased risk was associated with treating primary keratoconus with excimer laser PRK. We found that excimer laser surgery can improve vision and the ability to wear contact lenses, and it did not interfere with subsequent corneal transplantation surgery.
abstract_id: PUBMED:33832048
Thoughts on corneal collagen cross-linking combined with excimer laser ablation for irregular corneas in keratoconus. The concept of the diagnosis and treatment of keratoconus is constantly being updated. Today, we are concerned not only with how to delay the progression of the disease, but also with preserving useful vision for patients and improving visual quality. With the precise and individualized application of excimer laser and femtosecond laser technology in ophthalmology, corneal cross-linking combined with excimer laser ablation of the irregular cornea has become a new strategy for keratoconus. However, questions have been raised that have prompted reflection among ophthalmologists. Are patients with keratoconus whose corneas have progressively thinned suitable for excimer laser ablation? When the combined strategy is applied, which is better: simultaneous or sequential surgery? Based on domestic and international research data, we comprehensively review the various treatment methods for these key issues. It is hoped that this article can provide guidance for the rational selection of an optimal clinical solution for keratoconus. (Chin J Ophthalmol, 2021, 57:251-253).
abstract_id: PUBMED:21786270
Collagen crosslinking for ectasia following PRK performed in excimer laser-assisted keratoplasty for keratoconus. Purpose: To report the results of corneal collagen crosslinking (CXL) in a patient with corneal ectasia developed after excimer laser-assisted lamellar keratoplasty for keratoconus and a secondary photorefractive keratectomy (PRK) for residual refractive error.
Methods: A 33-year-old woman, who had originally been treated for keratoconus in the right eye by excimer laser-assisted lamellar keratoplasty, subsequently had her residual ametropia treated by topographically guided, transepithelial excimer laser PRK. Five years after PRK, the patient developed corneal ectasia with concomitant visual changes: best spectacle-corrected visual acuity (BSCVA) was reduced to 20/33 with a refraction of -6.00 +6.00 × 30. The minimum corneal thickness at the ectasia apex was 406 µm. Riboflavin-UVA-induced corneal CXL was performed on the right eye.
Results: Two years after the CXL treatment, the right eye improved to 20/20 BSCVA with a refraction of plano +1.00 × 50 while exhibiting a clear lamellar graft.
Conclusions: Corneal CXL provided safe and effective management of ectasia developed after excimer laser-assisted lamellar keratoplasty and PRK.
abstract_id: PUBMED:7522096
Excimer laser photorefractive keratectomy for treatment of keratoconus. Background: Five eyes with keratoconus that were scheduled for penetrating keratoplasty were treated instead with excimer laser photorefractive keratectomy to reduce the steepness of the cone. The follow-up after 6 to 12 months is reported here.
Methods: A 193-nanometer excimer laser system VISX 20/20 was used for correction of myopia or astigmatism. The patients had a complete ophthalmological examination including slit-lamp microscopy and videokeratography. The patients were followed with six examinations during a period of at least 6 months.
Results: In four eyes, a reduction of the astigmatism was achieved, with an increase in visual acuity. There were no problems with wound healing or any signs that the excimer laser adversely affected the cornea or activated the keratoconus disease process.
Conclusion: The treatment of keratoconus with excimer laser has been regarded as seriously contraindicated on a speculative basis. This risk seems to be exaggerated.
Answer: Yes, there is a role for Excimer laser in the treatment of keratoconus. Excimer laser photoablation can remodel the corneal surface, and its use in keratoconus has been described since the 2000s, either alone or in combination with other procedures (PUBMED:33573798). It has been used to flatten or resurface the cornea, allowing for new contact lens fitting in cases of advanced keratoconus where contact lenses become intolerant or provide insufficient vision, potentially delaying or even avoiding penetrating keratoplasty (PUBMED:9587590).
Excimer laser-assisted deep anterior lamellar keratoplasty (excimer-DALK) is also a treatment option for patients with keratectasia, stromal scars, or stromal corneal dystrophy, provided they have an intact endothelium (PUBMED:33630150). In penetrating keratoplasty (PKP) for keratoconus, noncontact excimer laser trephination has been shown to improve donor and recipient centration, reduce graft misalignment, and result in less keratometric astigmatism and better visual acuity compared to motor trephine (PUBMED:28932339, PUBMED:31650512, PUBMED:31696253).
Furthermore, excimer laser surgery has been used to treat primary keratoconus, with some patients experiencing improved visual acuity and the ability to wear contact lenses, without interfering with subsequent corneal transplantation surgery (PUBMED:9682106). The combination of corneal cross-linking with excimer laser ablation for irregular corneas in keratoconus is a new strategy that is being explored, although questions remain regarding the suitability of patients with progressively thinned corneas for excimer laser ablation and the timing of combined procedures (PUBMED:33832048).
In summary, the excimer laser plays a role in the management of keratoconus, offering various treatment options that can improve visual outcomes and potentially delay the need for more invasive procedures such as penetrating keratoplasty.
Instruction: Predictors of alternative and complementary medicine use in inflammatory bowel disease: do measures of conventional health care utilization relate to use?
Abstracts:
abstract_id: PUBMED:15128356
Predictors of alternative and complementary medicine use in inflammatory bowel disease: do measures of conventional health care utilization relate to use? Objective: Alternative and complementary therapies (ACM) have gained increasing attention in the past few years. It was our purpose to determine whether increased ACM use is associated with increased use of conventional health care resources. Additionally, demographics of use, subjective benefit, and cost were analyzed.
Methods: We enrolled 150 inflammatory bowel disease (IBD) patients from a tertiary care center and performed a phone survey of their ACM use in the past year. A population-based administrative database was accessed to extract data regarding use of conventional medicine (hospitalizations, doctor visits, and GI specific doctor visits). Patients were divided into three groups: (i) no ACM (n = 60) (ii) users of exercise, diet, and prayer (EDP) exclusively (n = 47) (iii) other ACM use (n = 43) which included those who may have used EDP as well as any of acupuncture, chiropractic, homeopathy, naturopathy, herbology, massage, relaxation, reflexology, hypnotherapy, aromatherapy, meditation, or support group.
Results: ACM was used by 60% (EDP 31%, other ACM 29%). There were no significant differences in use between the three groups by disease diagnosis, education level, employment status, use of IBD medications, number of hospitalizations, doctor visits, or GI specific doctor visits. The EDP group was more likely to be married (p = 0.006) and female (p = 0.04) compared to no ACM. The EDP group tended to be older than the no ACM (p = 0.001) and other ACM (p = 0.01). The other ACM had shorter disease duration than EDP (p = 0.04) and no ACM (p = 0.04). The most commonly used therapies were diet (45%), herbal (17%), exercise (15%), prayer (11%), and relaxation (10%). ACM was sought for pain/cramps (64%), diarrhea (60%), and gas/bloating (21%). Seventy-three percent of EDP interventions incurred no cost compared to 33% with other ACM (p < 0.0001). The median annual amount spent on other ACM was $56 (range $0-$4800). Subjectively, patients felt helped by trials of EDP 95% of the time whereas other ACM helped 67% of the time (p < 0.0001).
Conclusions: ACM use could not be predicted by either greater or fewer hospitalizations, conventional doctor visits, or GI specific visits. ACM was sought mostly to palliate pain or diarrhea. Those using EDP are more likely to be older married women. Subjectively, other ACM is of less benefit (67%) than EDP (95%). If doctor visits or hospitalizations reflect the degree of disease activity, then this too is not predictive of ACM use.
abstract_id: PUBMED:29173531
The Practical Pros and Cons of Complementary and Alternative Medicine in Practice: Integrating Complementary and Alternative Medicine into Clinical Care. Complementary and alternative medicine (CAM) is changing health care for individuals with inflammatory bowel disease. The move toward increasing patient autonomy and addressing lifestyle and psychosocial factors contributes to this shift. Numerous clinics and centers are offering new models to incorporate these elements. There is need for better and more robust data regarding CAM efficacy and safety. CAM offers a test kitchen for new approaches to care and care delivery, which are now being developed and studied, and has the possibility to affect patient quality of life, disease morbidity, cost, and use of health care.
abstract_id: PUBMED:29173516
Use of Complementary and Alternative Medicine in Inflammatory Bowel Disease Around the World. Use of complementary and alternative medicine (CAM) is common among patients with inflammatory bowel disease (IBD). CAM can be broadly categorized as whole medical systems, mind-body interventions, biologically based therapies, manipulative and body-based methods, and energy therapies. Most do not use it to treat IBD specifically, and most take it as an adjunct to conventional therapy rather than in place of it. However, patients are frequently uncomfortable initiating a discussion of CAM with their physicians, which may impact adherence to conventional therapy. A greater emphasis on CAM in medical education may facilitate patient-physician discussions regarding CAM.
abstract_id: PUBMED:18514908
Provider-based complementary and alternative medicine use among three chronic illness groups: associations with psychosocial factors and concurrent use of conventional health-care services. Objective: The focus of this study was to examine the patterns of provider-based complementary and alternative medicine (CAM) use across three chronic illness groups, and to identify the socio-demographic, health-related, and psychosocial factors associated with CAM use.
Design: Cross-sectional international survey administered on the Internet to individuals with arthritis, inflammatory bowel disease (IBD), and mixed chronic conditions.
Main Outcome Measures: Self-reported consultations to CAM providers and to a variety of conventional health-care services made in the previous 6 months.
Results: 365 surveys were received from people with arthritis (N=140), IBD (N=110), and other chronic conditions (N=115). Overall 38.1% of respondents had used CAM, with rates ranging from 31.8 to 46.1% across the three illness groups. Backward step-wise logistic regression revealed that being female, having more than high school education, a greater number of comorbid conditions, higher perceived control over health and reward motivations, lower stress and less belief that health is governed by chance, were the best predictors of CAM consultations. CAM clients also used a greater variety of conventional health-care services and made more consultations relative to non-CAM clients.
Conclusions: In this study the socio-demographic and health status factors associated with CAM consultations in three different chronic illness groups were similar to those found in the general population. CAM use in the study population was also related to higher use and a greater variety of use of conventional health-care services, and with stronger beliefs in the controllability of health and an enduring motivation to seek out rewards.
abstract_id: PUBMED:30166957
The Use of Complementary and Alternative Medicine in Patients With Inflammatory Bowel Disease. Complementary and alternative medicine (CAM) includes products or medical practices that encompass herbal and dietary supplements, probiotics, traditional Chinese medicines, and a variety of mind-body techniques. The use of CAM in patients with inflammatory bowel disease (IBD) is increasing as patients seek ways beyond conventional therapy to treat their chronic illnesses. The literature behind CAM therapies and their application, efficacy, and safety is limited when compared to studies of conventional, allopathic therapies. Thus, gastroenterologists are often ill equipped to engage with their patients in informed and meaningful discussions about the role of CAM in IBD. The aims of this article are to provide a comprehensive summary and discussion of various CAM modalities and to appraise the evidence for their use.
abstract_id: PUBMED:35595418
Complementary and Alternative Medicine in Crohn's Disease. Complementary and alternative medicine (CAM) is a growing entity within inflammatory bowel disease (IBD). CAM includes mind-based therapies, body-based therapies, supplements, vitamins, and probiotics. Limitations currently exist for health care providers as it pertains to IBD and CAM that stem from knowledge gaps, conflicting reports, limited oversight, and a lack of well-organized clinical data. Even without well-described data, patients are turning to these forms of therapy at increasing rates. It is imperative that the ongoing review of CAM therapies is performed, and future trials are performed to better understand efficacy as well as adverse effects related to these therapies.
abstract_id: PUBMED:30499887
Complementary and Alternative Medicine Use in Children With Inflammatory Bowel Disease. Complementary and alternative medicine (CAM) consists of products and practices that are not considered to be a part of conventional medicine. This article reviews pediatric studies on CAM in inflammatory bowel disease (IBD) along with relevant adult studies. Prevalence of CAM use ranges from 22% to 84% in children with IBD all over the world. CAM use in IBD includes diet changes, supplements, herbals, botanicals, and mind-body therapies. Common reasons for using CAM include severe disease and concern for adverse effects of conventional medicines. Despite widespread use, there are limited studies on efficacy and safety of CAM in children. Small studies suggest a favorable evidence for use of probiotics, fish oil, marijuana, and mind-body therapy in IBD. Adverse effects of CAM are reported but are rare. The article provides current state of knowledge on the topic and provides guidance to physicians to address CAM use in pediatric patients with IBD.
abstract_id: PUBMED:25834335
Doctor communication quality and Friends' attitudes influence complementary medicine use in inflammatory bowel disease. Aim: To examine the frequency of regular complementary and alternative therapy (CAM) use in three Australian cohorts of contrasting care setting and geography, and identify independent attitudinal and psychological predictors of CAM use across all cohorts.
Methods: A cross sectional questionnaire was administered to inflammatory bowel disease (IBD) patients in 3 separate cohorts which differed by geographical region and care setting. Demographics and frequency of regular CAM use were assessed, along with attitudes towards IBD medication and psychological parameters such as anxiety, depression, personality traits and quality of life (QOL), and compared across cohorts. Independent attitudinal and psychological predictors of CAM use were determined using binary logistic regression analysis.
Results: In 473 respondents (mean age 50.3 years, 60.2% female) regular CAM use was reported by 45.4%, and did not vary between cohorts. Only 54.1% of users disclosed CAM use to their doctor. Independent predictors of CAM use which confirm those reported previously were: covert conventional medication dose reduction (P < 0.001), seeking psychological treatment (P < 0.001), adverse effects of conventional medication (P = 0.043), and higher QOL (P < 0.001). Newly identified predictors were CAM use by family or friends (P < 0.001), dissatisfaction with patient-doctor communication (P < 0.001), and lower depression scores (P < 0.001).
Conclusion: In addition to previously identified predictors of CAM use, these data show that physician attention to communication and the patient-doctor relationship is important as these factors influence CAM use. Patient reluctance to discuss CAM with physicians may promote greater reliance on social contacts to influence CAM decisions.
abstract_id: PUBMED:24559822
Predictive factors of complementary and alternative medicine use for patients with inflammatory bowel disease in Korea. Objectives: The aim of this study was to assess characteristics and predictive factors of complementary and alternative medicine (CAM) use for patients with inflammatory bowel disease (IBD) in Korea.
Design: Prospective, questionnaire based study for patients with IBD in Korea.
Setting: Six university hospitals and one primary IBD clinic.
Main Outcome Measure: Overall characteristics and predictors of CAM use were compared between CAM users and non-users.
Results: During the study period, 366 patients with IBD (ulcerative colitis=228, Crohn's disease=138) completed the full questionnaire; 29.5% (n=108) reported CAM use and 70.5% (n=258) reported no CAM use after diagnosis of IBD. In total, 64.0% were male, the mean patient age was 42.3±15.5 years, and the mean duration of IBD was 5.5±5.8 years. Using logistic regression analysis, university education (p=0.040), higher income levels (p=0.009), and longer duration of IBD (p=0.003) were found to be independent predictors of CAM use. Among CAM users, 65% of CAM was attained within 2 years of IBD diagnosis and only 28.7% discussed CAM use with their physician. Furthermore, 13.9% of CAM users discontinued conventional IBD therapy while using CAM.
Conclusions: The overall use of CAM in Korea was comparable with those in the West. Physicians should be aware of the high prevalence of CAM use by patients with IBD, especially among those with higher education levels, higher income levels, and longer IBD duration. Furthermore, physicians should ask about CAM use, and help their IBD patients make a more informed choice about CAM use.
abstract_id: PUBMED:12737442
Complementary alternative medicine in patients with inflammatory bowel disease: use and attitudes. Background: A widespread increase in the use of complementary alternative medicine (CAM) by patients with inflammatory bowel disease (IBD) has been recognized. The aim of our study was to evaluate both the extent and the determinants of CAM use by outpatients with IBD.
Methods: Outpatients of the IBD centre at the University Hospital of Berne and patients of two gastroenterology private practices in Olten (Switzerland) completed a mailed self-administered questionnaire regarding alternative medicine. The questionnaire addressed the following topics: demographic variables; disease-related data; the use of 16 types of complementary medicine; a comparison of attitudes towards alternative versus conventional medicine; and out-of-pocket expenses.
Results: Alternative medicine has been used by 47% of the patients. Diagnosis, duration and activity of disease, gender, age, previous surgery were not predictive for the use of CAM. The most commonly used CAM methods were: homeopathy, acupuncture and traditional Chinese medicine. Reasons cited for the use of CAM were: lack of satisfaction with and side effects of conventional therapy and the perceived safety of CAM. Sixty-one percent of patients noted that their IBD had improved with the use of CAM. By contrast, 16% noted a flare during CAM therapies. Forty-seven percent of patients paid more than Euro400 per year for CAM.
Conclusions: Complementary medicine use is common in patients with IBD. Frequently cited reasons for the use of complementary therapies were safety of CAM; dissatisfaction with conventional therapies, including their side effects; and that CAM can be used in addition to conventional therapy.
Answer: The use of alternative and complementary medicine (ACM) in patients with inflammatory bowel disease (IBD) does not appear to be predicted by measures of conventional health care utilization, such as the number of hospitalizations, doctor visits, or GI specific doctor visits (PUBMED:15128356). The study found no significant differences in ACM use between groups categorized by their use of conventional medicine. However, certain demographic factors were associated with ACM use; for example, the group using exercise, diet, and prayer (EDP) exclusively tended to be older, married women (PUBMED:15128356).
Other predictors of ACM use include higher education levels, higher income levels, and longer duration of IBD (PUBMED:24559822). Additionally, dissatisfaction with patient-doctor communication, the influence of family or friends' attitudes towards ACM, and seeking psychological treatment were identified as predictors of ACM use (PUBMED:25834335). Patients who used ACM also tended to use a greater variety of conventional health-care services and had stronger beliefs in the controllability of health (PUBMED:18514908).
It is also noted that patients are frequently uncomfortable initiating a discussion of CAM with their physicians, which may impact adherence to conventional therapy (PUBMED:29173516). Furthermore, the literature suggests that there is a need for better data regarding CAM efficacy and safety, and that CAM offers a test kitchen for new approaches to care which could affect patient quality of life, disease morbidity, cost, and use of health care (PUBMED:29173531).
In summary, while conventional health care utilization does not seem to predict ACM use in IBD patients, a variety of demographic, psychosocial, and attitudinal factors do influence the likelihood of a patient using ACM.
Instruction: Deep venous thrombosis after saphenous endovenous radiofrequency ablation: is it predictable?
Abstracts:
abstract_id: PUBMED:28018495
Postoperative Venous Thromboembolism in Patients Undergoing Endovenous Laser and Radiofrequency Ablation of the Saphenous Vein. Objective: Endovenous laser ablation (EVLA) and radiofrequency ablation (RFA) are safe and effective treatments for varicose veins caused by saphenous reflux. Deep venous thrombosis (DVT) and endovenous heat-induced thrombosis (EHIT) are known complications of these procedures. The purpose of this article is to investigate the incidence of postoperative DVT and EHIT in patients undergoing EVLA and RFA. Methods: Patients were assessed by clinical examination and venous duplex ultrasonography before the operation and at 24-72 hours, 1 month, and 1 year after the operation. Endovenous ablation (EVA) was performed in 1026 limbs (835 patients) using RFA; in 1174 limbs (954 patients) using a 1470-nm wavelength diode laser with a radial two-ring fiber (1470R); and in 6118 limbs (5513 patients) using a 980-nm wavelength diode laser with a bare-tip fiber (980B). Results: DVT was detected in 3 legs (0.3%) treated with RFA, 5 legs (0.4%) treated with 1470R, and 27 legs (0.4%) treated with 980B. One of the three patients with symptomatic DVT treated with 980B developed an asymptomatic pulmonary embolus. In all, 31 of the 35 DVTs were confined to the calf veins. The incidence of EHIT classes 2 and 3 was 2.7% following the RFA procedure, 6.7% after 1470R, and 7.5% after 980B. Conclusion: The incidence of EHIT following EVA was low, especially with the RFA procedure. EHIT resolves within 2-4 weeks in most patients. DVT rates after EVA were compared with those published for saphenous vein stripping. (This is a translation of J Jpn Coll Angiol 2015; 55: 153-161.)
abstract_id: PUBMED:33618065
Systematic review on the incidence and management of endovenous heat-induced thrombosis following endovenous thermal ablation of the great saphenous vein. Objective: A systematic review and meta-analysis was performed to determine the incidence of endovenous heat-induced thrombosis (EHIT) and evaluate its management after endovenous thermal ablation of the great saphenous vein (GSV).
Methods: MEDLINE and Embase were searched for studies with at least 100 patients who underwent great saphenous vein endovenous thermal ablation and had duplex ultrasound follow-up within 30 days. Data were gathered on the incidence of thrombotic complications and on the management of cases of EHIT. The primary outcome for the meta-analysis was EHIT types 2 to 4 and secondary outcomes were deep venous thrombotic events (which we defined as types 2-4 EHIT plus deep vein thrombosis [DVT]), DVT, and pulmonary embolism (PE). Pooled proportions were calculated using random effects modelling.
Results: We included 75 studies (23,265 patients). EHIT types 2 to 4 occurred in 1.27% of cases (95% confidence interval [CI], 0.74%-1.93%). Deep venous thrombotic events occurred in 1.59% (95% CI, 0.95%-2.4%). DVT occurred in 0.28% (95% CI, 0.18%-0.4%). Pulmonary embolism occurred in 0.11% (95% CI, 0.06%-0.18%). Of the 75 studies, 24 gave a description of the management strategy and outcomes for EHIT and there was inconsistency regarding its management. Asymmetrical funnel plots of studies that reported incidence of EHIT 2 to 4 and DVT suggest publication bias.
Conclusions: The recently published guidelines on EHIT from the Society for Vascular Surgery/American Venous Forum provide a framework to direct clinical decision-making. EHIT and other thrombotic complications occur infrequently and have a benign course.
abstract_id: PUBMED:27422781
A randomized prospective long-term (>1 year) clinical trial comparing the efficacy and safety of radiofrequency ablation to 980 nm laser ablation of the great saphenous vein. Purpose To compare the short- and long-term (>1 year) efficacy and safety of radiofrequency ablation (ClosureFAST™) versus endovenous laser ablation (980 nm diode laser) for the treatment of superficial venous insufficiency of the great saphenous vein. Materials and methods Two hundred patients with superficial venous insufficiency of the great saphenous vein were randomized to receive either radiofrequency ablation or endovenous laser ablation (and simultaneous adjunctive therapies for surface varicosities when appropriate). Post-treatment sonographic and clinical assessment was conducted at one week, six weeks, and six months for closure, complications, and patient satisfaction. Clinical assessment of each patient was conducted at one year and then at yearly intervals for patient satisfaction. Results Post-procedure pain (p < 0.0001) and objective post-procedure bruising (p = 0.0114) were significantly lower in the radiofrequency ablation group. Improvements in venous clinical severity score were noted through six months in both groups (endovenous laser ablation 6.6 to 1; radiofrequency ablation 6.2 to 1) with no significant difference in venous clinical severity score (p = 0.4066) or measured adverse effects; 89 endovenous laser ablation and 87 radiofrequency patients were interviewed at least 12 months out with a mean long-term follow-up of 44 and 42 months (p = 0.1096), respectively. There were four treatment failures in each group, and every case was correctable with further treatment. Overall, there were no significant differences with regard to patient satisfaction between radiofrequency ablation and endovenous laser ablation (p = 0.3009). There were no cases of deep venous thrombosis in either group at any time during this study. Conclusions Radiofrequency ablation and endovenous laser ablation are highly effective and safe from both anatomic and clinical standpoints over a multi-year period and neither modality achieved superiority over the other.
abstract_id: PUBMED:26231185
Risk factors for endovenous heat-induced thrombosis after endovenous radiofrequency ablation performed in Thailand. Objective: We aimed to determine the incidence of and associated risk factors for endovenous heat-induced thrombosis (EHIT) after endovenous radiofrequency ablation (RFA).
Methods: We retrospectively reviewed the medical records of 82 patients with 97 great saphenous veins undergoing RFA from 2012 to 2014.
Results: The incidence of EHIT was 10.3%. Class 1, 2, and 3 EHIT was found in 50%, 30%, and 20% of legs, respectively. No class 4 EHIT, deep vein thrombosis, or pulmonary emboli occurred. Univariate analysis revealed that the associated risk factors for EHIT were a vein diameter of >10 mm, operative time of >40 min, and Caprini score of >6. Multivariate analysis revealed that the independent risk factors associated with EHIT were a vein diameter of >10 mm and operative time of >40 min.
Conclusions: A vein diameter of >10 mm and operative time of >40 min might be predictive factors for EHIT following RFA.
abstract_id: PUBMED:37458188
Endovenous thermal ablation in the treatment of large great saphenous veins of diameters > 12 mm: A systematic review meta-analysis and meta-regression. Background: We sought to assess the safety and efficacy of endovenous thermal ablation (EVTA) in treating large great saphenous veins (GSV) > 12 mm in diameter.
Methods: We performed a systematic review according to the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) 2020 for comparative and noncomparative studies depicting EVTA in the treatment of GSV > 12 mm. Primary endpoints included GSV occlusion, technical success, deep vein thrombosis (DVT), and endovenous heat-induced thrombosis (EHIT). We conducted a comparative analysis between GSV > 12 mm and < 12 mm and a meta-regression analysis for two sets of studies, one including the whole dataset, containing treatment arms of comparative studies with GSV < 12 mm and one exclusively for GSV > 12 mm.
Results: Seven studies, including 2564 GSVs, depicting radiofrequency ablation (RFA) and endovenous laser ablation (EVLA) were included. GSV > 12 mm occlusion, technical success, DVT, and EHIT estimates were 95.9% (95% CI: 93.6-97.8), 99.9% (95% CI: 98.9-100.0), 0.04% (95% CI: 0.0-3.4), and 1.6% (95% CI: 0.3-3.5), respectively. Meta-regression revealed a negative association between GSV diameter and occlusion for both the whole dataset (p < 0.01) and the > 12 mm group (p = 0.04), between GSV diameter and technical success for both groups (p < 0.01 and p = 0.016, respectively), and between GSV diameter and EHIT only for the whole dataset (p = 0.02). The comparative analysis between GSV < 12 mm and GSV > 12 mm displayed an occlusion estimate of OR 1.79 (95% CI: 1.25-2.56) favoring smaller GSVs.
Conclusion: Whereas we have displayed excellent occlusion and technical success results for the EVTA of GSV > 12 mm, our analysis has illustrated the unfavorable impact of GSV diameter on occlusion, technical success, and EHIT outcomes regardless of the 12 mm threshold. Potential parameter or device adjustments in a diameter-oriented fashion could further enhance outcomes.
abstract_id: PUBMED:35872143
Direct oral anticoagulant agents might be safe for patients undergoing endovenous radiofrequency and laser ablation. Objective: Studies assessing the effect of the use of anticoagulant agents on endovenous thermal ablation (ETA) have been limited to patients taking warfarin. Thus, the aim of the present study was to assess the efficacy and safety of ETA for patients taking direct oral anticoagulants (DOACs). We hypothesized that the outcomes of ETA for patients taking DOACs would not be inferior to the outcomes for patients not taking DOACs.
Methods: We performed a retrospective review to identify patients who had undergone radiofrequency ablation or endovenous laser ablation with 1470-nm diode laser fibers for symptomatic great or small saphenous venous reflux from 2018 to 2020. The patients were dichotomized into those who had received a therapeutic dose of DOACs periprocedurally and those who had not (control group). The outcomes of interest included the rates of treated vein closure at 7 days and 9 months and the incidence of deep vein thrombosis (DVT), endothermal heat-induced thrombosis (EHIT), and bleeding periprocedurally.
Results: Of the 301 patients (382 procedures), 69 patients (87 procedures) had received DOACs and 232 control patients (295 procedures) had not received DOACs. The patients receiving DOACs were more often older (mean age, 65 years vs 55 years; P < .001) and male (70% vs 37%; P < .001), with a higher prevalence of venous thromboembolism and more severe CEAP (clinical, etiologic, anatomic, pathophysiologic) classification (5 or 6), than the control patients. Those receiving DOACs were more likely to have had a history of DVT (44% vs 6%; P < .001), pulmonary embolism (13% vs 0%; P < .001), and phlebitis (32% vs 15%; P < .001). Procedurally, radiofrequency ablation had been used more frequently in the control group (92% vs 84%; P = .029), with longer segments of treated veins (mean, 38 mm vs 35 mm, respectively; P = .028). No major or minor bleeding events or EHIT occurred in either group. Two patients in the control group (0.7%) developed DVT; however, no DVT was observed in the DOAC group (P = .441). At 9 months, the treated vein had remained ablated after 94.4% of procedures for patients receiving DOACs and 98.4% of procedures in the control group (P = .163). On multivariable analysis, DOAC usage was not associated with an increased risk of vein recanalization (hazard ratio, 5.76; 95% confidence interval, 0.57-58.64; P = .139). An increased preprocedural vein diameter and the use of endovenous laser ablation were associated with an increased risk of recanalization.
Conclusions: In our study of patients who had undergone ETA for symptomatic saphenous venous reflux, the periprocedural use of DOACs did not adversely affect the efficacy of endovenous ablation to ≥9 months. Furthermore, DOAC use did not confer an additional risk of bleeding, DVT, or EHIT periprocedurally. DOACs may be safely continued without affecting the efficacy and durability of ETA.
abstract_id: PUBMED:25940645
Endovenous laser ablation is an effective treatment for great saphenous vein incompetence in teenagers. Objectives: The current knowledge of chronic venous disease in teenagers and its treatment is very limited. The aim of the study is to present our experience and the available literature data on the treatment of varicose veins in teenagers with endovenous laser ablation of the great saphenous vein.
Methods: Five patients, aged 15-17 years, qualified for surgery based on typical signs and symptoms of chronic venous disease. Minimally invasive treatment with endovenous laser ablation of the great saphenous vein was applied.
Results: Technical success of the surgery was achieved in all patients. Over a 2-year follow-up, we did not observe any case of recanalisation of the great saphenous vein, recurrence of varicose veins, or serious complications such as deep vein thrombosis or pulmonary embolism. One patient had post-operative bruising that resolved, and two cases of local numbness were transient.
Conclusions: Endovenous laser ablation of the great saphenous vein in the treatment of chronic venous disease in teenagers is effective and safe. The method provides excellent cosmetic effects, very short recovery time and high levels of patient satisfaction.
abstract_id: PUBMED:28403687
Mechanochemical endovenous ablation of saphenous veins using the ClariVein: A systematic review. Objective To systematically review all available English literature on mechanochemical endovenous ablation and to report on the anatomical, technical, and clinical success. Methods A systematic literature search was performed in PubMed, EMBASE, and the Cochrane Library on mechanochemical endovenous ablation for the treatment of insufficient great and/or small saphenous vein. Methodological quality of the included studies was evaluated using the MINORS score. The primary outcome measure was anatomical success, defined as closure of the treated vein on follow-up duplex ultrasound imaging. Secondary outcomes were technical and clinical success, and major complications defined as deep venous thrombosis, pulmonary embolisms or paresthesia. Results The literature search identified 759 records, of which 13 were included, describing 10 unique cohorts. A total of 1521 veins (1267 great saphenous vein and 254 small saphenous vein) were included, with cohort sizes ranging from 30 to 570 veins. The pooled anatomical success rate after short-term follow up was 92% (95% CI 90-94%) ( n = 1314 veins). After 6 and 12 months these numbers were 92% (95% CI 88-95%) ( n = 284) and 91% (95% CI 86-94%) ( n = 228), respectively. The long-term anatomical success rates at 2 and 3 years were 91% (95% CI 85-95%) ( n = 136) and 87% (95% CI 75-94%) ( n = 48), respectively. Major complications and especially nerve injury were very rare (≤ 0.2%). All studies were of moderate or good quality using the MINORS scoring scale. Conclusions Mechanochemical endovenous ablation using the ClariVein in combination with liquid sclerosant is associated with an anatomical success rate ranging from 87% to 92% and good clinical success. To date, no randomized controlled trials are available studying the anatomical success after mechanochemical ablation, compared to the endothermal ablation. The risk of major complications is very low after the procedure.
abstract_id: PUBMED:31808378
Perforator vein endovenous heat induced thrombosis after laser ablation of the great saphenous vein. Endovenous heat induced thrombosis at the sapheno-femoral or sapheno-popliteal junction is a well-known complication after superficial truncal vein endovenous laser ablation (EVLA). This report describes a rare thigh perforator vein thrombus propagation into the femoral vein after EVLA of the great saphenous vein.
abstract_id: PUBMED:24554352
Endovenous laser ablation for saphenous vein insufficiency: short- and mid-term results of 230 procedures. Objective: The present study aimed to evaluate the efficacy of endovenous laser ablation with a 1470-nm laser and to analyze the short- to mid-term results of endovenous laser ablation procedures to treat great saphenous vein insufficiency.
Method: In this retrospective study, 200 patients (230 limbs) with symptomatic varicose veins secondary to great saphenous vein insufficiency treated with 1470-nm endovenous laser ablation were studied. Patients were evaluated clinically on the first day, first week, first month, and sixth month after the operation. Treated limbs were evaluated as separate treatment events.
Results: The short-term occlusion rate was 99% and the mid-term occlusion rate was 100%. Induration or swelling was the most common minor complication (13%). No major complications such as deep venous thrombosis or pulmonary embolism occurred. The preoperatively documented mean venous clinical severity score was significantly reduced from 4.9±2.3 to 2.5±1.1 (p<0.05).
Conclusion: Endovenous laser ablation procedure of great saphenous vein with a 1470-nm diode laser is a minimally invasive, safe, and efficient treatment option in all-suitable patients with high short- and mid-term success rate.
Answer: Deep venous thrombosis (DVT) is a known complication following endovenous radiofrequency ablation (RFA) for the treatment of varicose veins caused by saphenous reflux. However, predicting its occurrence can be challenging. According to the available literature, the incidence of DVT following RFA is relatively low. In one study, DVT was detected in 0.3% of legs treated with RFA (PUBMED:28018495). Another study did not report any cases of DVT in either the RFA or endovenous laser ablation (EVLA) groups during the study period (PUBMED:27422781).
Risk factors for endovenous heat-induced thrombosis (EHIT), which can be a precursor to DVT, have been identified in some studies. For instance, a vein diameter of >10 mm and operative time of >40 min were found to be predictive factors for EHIT following RFA (PUBMED:26231185). However, it is important to note that EHIT and DVT are not synonymous, and EHIT does not always progress to DVT.
The systematic review and meta-analysis on the incidence and management of EHIT following endovenous thermal ablation of the great saphenous vein (GSV) reported that DVT occurred in 0.28% of cases (PUBMED:33618065). This suggests that while DVT is a possible complication, it is not a common one.
Furthermore, a study on the treatment of large GSVs with diameters > 12 mm using endovenous thermal ablation (EVTA) found that DVT occurred in only 0.04% of cases (PUBMED:37458188). This indicates that even in cases of large vein diameters, the incidence of DVT remains low.
In summary, while certain risk factors such as larger vein diameter and longer operative time may be associated with an increased risk of thrombotic complications like EHIT, the overall incidence of DVT following saphenous endovenous RFA is low and may not be easily predictable. Each patient's risk profile should be assessed individually, and the presence of risk factors should be taken into consideration when planning and performing RFA procedures.
Instruction: Common trend: move to enucleation - Is there a case for GreenLight enucleation?
Abstracts:
abstract_id: PUBMED:32662906
En bloc GreenLight laser enucleation of the prostate (GreenLEP): An in-depth look at the anatomical endoscopic enucleation of the prostate using a 532-nm lithium triborate laser. GreenLight laser enucleation of the prostate (GreenLEP) is an alternative endoscopic enucleation of the prostate (EEP) technique for the treatment of lower urinary tract symptoms (LUTS) secondary to benign prostatic obstruction (BPO). GreenLEP is an 'en bloc' EEP technique that removes the transitional zone tissue in one piece. The procedure is a combination of laser enucleation and gentle blunt mechanical dissection using the tip of the resectoscope. The advantage of mechanical dissection is that it allows for better visualisation of the capsule and in some cases makes the dissection a little faster. This procedure is performed with a 532-nm lithium triborate laser (GreenLight™ XPS 180 W generator, AMS), a 2090 side-firing fibre and a Piranha™ morcellator (Richard Wolf GmbH). We offer a review of the evolution of the technique, including the most important technical aspects, complications, advantages/disadvantages, tips and tricks, and a visual step-by-step guide to performing the GreenLEP technique. GreenLEP is one of the latest additions to the armamentarium of EEP techniques for the treatment of BPO. GreenLEP has previously demonstrated its feasibility and safety, with short- to mid-term functional outcomes similar to surgical gold standards in the literature.
abstract_id: PUBMED:29717908
Comparison of GreenLight Laser Photoselective Vaporization and Thulium Laser Enucleation for Nonmuscle Invasive Bladder Cancer. Objective: To evaluate the safety and efficacy of GreenLight laser vaporization and thulium laser enucleation for nonmuscle invasive bladder cancer (NMIBC).
Patients And Methods: A total of 150 patients with NMIBC who underwent either GreenLight laser vaporization (Group A, n = 78) or thulium laser enucleation (Group B, n = 72) were analyzed. The preoperative, intraoperative, and postoperative clinical data were recorded and compared between the two groups.
Results: All patients were successfully treated with GreenLight laser vaporization or thulium laser enucleation. No significant difference was observed in operation time, catheterization time, or postoperative hospital stay between the two groups. No complications such as obturator nerve reflex, bladder perforation, overhydration, or intraoperative or postoperative bleeding occurred in any patient. The patients were followed up for 12 months; during this period, the recurrence rates were 10.26% (8/78) in Group A and 9.72% (7/72) in Group B.
Conclusions: Both GreenLight laser vaporization and thulium laser enucleation are effective and safe treatments for patients with NMIBC. Long-term clinical trials are necessary to increase the current scientific evidence base.
abstract_id: PUBMED:31489477
En bloc greenlight laser enucleation of prostate (GreenLEP): about the first hundred cases. Purpose: To report the functional outcomes, perioperative morbidity and surgical learning curve key points using "en bloc" greenlight enucleation of prostate (EB-GreenLEP) for patients with refractory lower urinary tract symptoms (LUTS) due to benign prostatic hyperplasia (BPH).
Methods: Between December, 2015 and May, 2018, all consecutive patients with refractory LUTS due to BPH in our institution were included and underwent EB-GreenLEP by a single surgeon. Perioperative data, complications and functional outcomes at 1-, 6- and 12-month follow-ups were collected and retrospectively analyzed.
Results: One hundred patients were included whose median age was 69 years. The median prostate volume (PV) was 84 mL and median enucleated PV was 45.5 mL. Mean irrigation, catheterization and hospitalization times were 1.3, 1.4 and 1.6 days, respectively. Average follow-up was 9.3 months. A single high-grade Clavien-Dindo complication occurred. No urinary retention was reported. Two conversions to conventional resection of the prostate were noted. Three patients had postoperative urinary incontinence at 6 months, only one at 1 year (1%). At 1, 6 and 12 months, there was a significant improvement in IPSS score, QoL and Qmax. Enucleation and energy efficiency ratios were shorter after the 30th procedure. We demonstrated a linear correlation between enucleation time and PV (r = 0.53, p < 0.0001).
Conclusion: Our study shows that the mid-term functional results of EB-GreenLEP are comparable to those of other laser sources for the endoscopic enucleation of the prostate, but with a shorter learning curve. We showed that, with low rates of complications and a short hospital stay, EB-GreenLEP can manage medium-size glands (60-90 mL).
abstract_id: PUBMED:32420160
GreenLight Laser photoselective vapo-enucleation of the prostate with front-firing emission versus plasmakinetic resection of the prostate for benign prostate hyperplasia. Background: Although the conventional, monopolar transurethral resection of the prostate (TURP) has proven to be an effective and relatively safe treatment for patients with benign prostatic hyperplasia (BPH), many new endoscopic technologies have been introduced to treat BPH. With the development of laser technology, several alternative transurethral procedures based on laser therapy have become available. This study sought to explore the efficacy, safety and follow-up of GreenLight laser photoselective vapo-enucleation of the prostate (PVEP) with front-firing emission compared with plasmakinetic resection of the prostate (PKRP) for the surgical management of BPH.
Methods: Data from patients who underwent either GreenLight laser PVEP or PKRP were retrospectively collected from March 2013 to May 2018. Perioperative data from both groups were compared.
Results: In total, 43 and 45 patients were included in the PVEP and PKRP groups, respectively. No significant difference was observed in excision efficiency ratio (resected prostate weight/operation time) between the two groups (P=0.372). The efficiency ratio of the first 20 PVEP procedures (0.36±0.09 g/min) was significantly lower than that of the second 23 PVEP procedures (0.45±0.18 g/min) (P=0.042). The PVEP group experienced a shorter duration of catheterization, postoperative hospital stay and irrigation time than the PKRP group (P<0.001, P=0.001 and P<0.001, respectively). There was no statistically significant difference between the two groups (P=0.937) in terms of overall postoperative complications. Three months after surgery, the international prostate symptoms (IPSS) score, quality of life (QOL) score, postvoid residual (PVR) volume and maximum urinary flow rate (Qmax) were decreased in both groups (P<0.001 for all) and were comparable between both groups (P=0.635, 0.662, 0.671 and 0.924, respectively).
Conclusions: GreenLight laser PVEP with front-firing emission was safe and effective modality in treating patients with BPH with short-term follow-up. PVEP was associated with shorter catheterization and postoperative hospital stay time compared with PKRP.
abstract_id: PUBMED:35880419
Network Meta-Analysis of the Treatment Safety and Efficacy of Different Lasers in Prostate Enucleation. Purpose: This study aimed to compare different laser systems for the enucleation of benign prostate hyperplasia. Methods: Randomized controlled trials (RCTs) on different lasers for prostate enucleation were searched from PubMed, Embase, and CNKI databases. Pairwise and network meta-analysis (NMA) were performed to analyze the outcomes regarding surgery time, complications, short-term postvoid residual (PVR), long-term PVR, short-term international prostate symptom score (IPSS), long-term IPSS, short-term maximum urine flow rate (Qmax), and long-term Qmax. RevMan software was used for the pairwise meta-analysis. Considering the variance uncertainty caused by the different source regions of the RCTs and the different baseline conditions of surgeons and patients, this study used Bayesian NMA conducted with ADDIS software to compare the different treatment methods indirectly. Node-splitting analysis was used to test inconsistency for closed-loop indirect comparisons. Results: Nine studies were included, involving four types of lasers: diode laser, holmium laser, thulium laser, and greenlight laser. In the pairwise safety meta-analysis, holmium laser carried a higher complication risk than thulium laser (odds ratio: 2.70, 95% confidence interval [CI]: 1.79-4.00, p < 0.001), and no other significant result was detected. In the efficacy comparisons, holmium laser offered better postoperative long-term PVR (standardized mean difference [SMD]: -0.35, 95% CI: -0.62, -0.09, p = 0.011), better postoperative long-term IPSS (SMD: -0.30, 95% CI: -0.57, -0.04, p = 0.011), and better postoperative short-term Qmax (SMD: 0.44, 95% CI: 0.17, 0.70, p = 0.001) compared with greenlight laser. According to the NMA results, greenlight laser may carry a higher complication risk when applied to prostate enucleation than the other three lasers. Thulium laser may be the recommended laser system for prostate enucleation. Conclusion: Thulium laser may be the recommended laser system since it carries less complication risk with comparable efficacy. More RCTs are still needed to validate this study.
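As background for how the pairwise comparisons above are expressed, an odds ratio and its 95% confidence interval can be computed from a 2x2 table of complication counts. The following Python sketch is illustrative only; the counts are invented placeholders and do not come from the trials pooled in this meta-analysis.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table:
    a = events in group 1, b = non-events in group 1,
    c = events in group 2, d = non-events in group 2."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts: 30/100 complications with laser A versus 13/100 with laser B.
print(odds_ratio_ci(30, 70, 13, 87))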
abstract_id: PUBMED:29464687
A Case of Self-Enucleation in an Incarcerated Patient: Case Report and Review of Literature. Self-enucleation is a severe form of self-injurious behavior which presents as an ophthalmologic and psychiatric emergency. It is usually known to occur with untreated psychosis; however, there have been reports of self-enucleation across various psychopathologies. We review a case documenting self-enucleation in the forensic setting in a patient with an unusual presentation and cluster of psychotic symptoms. Literature was reviewed using PubMed/Medline databases with key terms: "forensic science," "forensic psychiatry," "auto-enucleation," "self-enucleation," "Oedipism," "self-harm." This case is unique as it offers an alternative presentation to those most commonly depicted in current literature, helps highlight the sparsity of literature depicting self-enucleation in the forensic setting, and stimulates discussion around various potential differential diagnoses, management strategies and complications of self-enucleation within the forensic setting. It is prudent to emphasize the need for aggressive and collaborative treatment for the forensic population regardless of psychopathology, presentation, or propensity for secondary gain.
abstract_id: PUBMED:24929643
Common trend: move to enucleation-Is there a case for GreenLight enucleation? Development and description of the technique. Background: Transurethral laser prostatectomy has evolved as a viable alternative for the management of benign prostate enlargement. Since the renaissance of laser prostatectomy with the advent of the holmium:yttrium-aluminum-garnet laser in the 1990s, various lasers and subsequent procedures have been introduced. These techniques can be categorized as vaporizing, resecting, and enucleating approaches. Photoselective vaporization of the prostate (PVP) is dominated by high-power lithium triborate (LBO) crystal lasers (GreenLight XPS). The mainstay of this technique is for the treatment of small to medium prostate volumes whereas enucleating techniques, such as holmium laser enucleation of the prostate and thulium enucleation of the prostate, focus on large-volume glands. In order to perspectively "delimit" LBO into the field of large-volume prostates, we developed LBO en bloc enucleation to render it as a competing transurethral enucleating approach.
Materials And Methods: We present a detailed stepwise progressive technique developed in Madrid, Spain, for the complete removal of the transitional zone by vapoenucleation. The steps include exposition of the prostatic capsule by PVP toward the peripheral zone, thereby identifying the anatomical limits of enucleation. Subsequently, the transitional zone is excised in a single bloc and morcellated after its placement into the bladder.
Conclusion: This new GreenLight en bloc enucleation technique makes it possible to treat larger prostates than those previously treated with the PVP technique.
abstract_id: PUBMED:21584146
Self-enucleation in depression: a case report. Self-enucleation is a rare and extreme form of self-mutilation, most commonly reported in schizophrenia. Many forms of self-injury have been described in depression. However, severe forms of self-mutilation without suicidal intention, especially self-enucleation, are rarely reported. In the present case self-enucleation is described as an expression of aggression in a depressive patient threatened with inter-personal loss. Aggression turned inwards is one of the major psychodynamic explanations in depression. In depressive syndrome self-mutilation, whether mild or severe, is an expression of loss of affective (aggression) control.
abstract_id: PUBMED:32557842
The Evolution of Endoscopic Prostate Enucleation: A historical perspective. Here, we review the evolution of prostate enucleation in chronological order based on energy technology, starting with holmium laser, then bipolar electrocautery, followed by thulium laser, and finally GreenLight and diode laser enucleation of the prostate.
abstract_id: PUBMED:28781478
Common Causes of Eye Enucleation among Patients. Background: We investigated the contributing causes and factors leading to the eye enucleation procedure among patients referred to the ophthalmology clinic of the Center Eye Hospital in Tehran, Iran.
Patients And Methods: We retrospectively analyzed the clinical records of patients referred to the ophthalmology clinic of the Center Eye Hospital in Tehran, Iran, in the past 3 years. The data retrieved were analyzed.
Results: A total of 313 patients underwent eye enucleation during the past 3 years. The average age of the patients was 36.7 years (range: 20-44 years). Two hundred and eight patients (66.5%) were males and 105 (33.5%) were females. One hundred and sixty-nine patients had the right eye enucleated, while 144 patients had the left eye enucleated. The various procedures included enucleation (83%), evisceration (13.4%), and exenteration (3.6%). The factors leading to these procedures included trauma (33.2%), phthisis bulbi (20.7%), tumors (15.7%), and infections (10.5%).
Conclusion: In our study population, trauma was the main factor leading to eye enucleation. Hence, measures to increase social awareness and knowledge to reduce trauma are necessary as a preventive approach to reduce eye enucleation procedures. Concurrently, ophthalmologists should pay more attention to early diagnosis and screening of eye health, especially the detection of tumors such as retinoblastoma; improved postsurgical care to reduce secondary infections would also be an effective way to reduce the overall rate of enucleation.
Answer: The trend towards enucleation in the treatment of benign prostatic hyperplasia (BPH) and other prostate conditions has included the development of various laser technologies. GreenLight laser enucleation, also known as GreenLEP, is one such technology that has been developed as an alternative endoscopic enucleation of the prostate (EEP) technique. GreenLEP is an 'en bloc' EEP technique designed to remove the transitional zone tissue in one piece, combining laser enucleation with blunt mechanical dissection using the tip of the resectoscope. This method is performed with a 532-nm lithium triborate laser and has demonstrated feasibility, safety, and similar short- to mid-term functional outcomes compared to surgical gold standards (PUBMED:32662906).
GreenLight laser has also been compared to other laser technologies, such as thulium laser enucleation, for the treatment of nonmuscle invasive bladder cancer (NMIBC). Both GreenLight laser vaporization and thulium laser enucleation were found to be effective and safe treatments for NMIBC, with no significant differences in operation time, catheterization time, postoperative hospital stay, or recurrence rates (PUBMED:29717908).
In terms of functional outcomes, perioperative morbidity, and surgical learning curve, 'en bloc' GreenLight enucleation of the prostate (EB-GreenLEP) has shown comparable mid-term functional results to other laser sources for endoscopic enucleation of the prostate, with a shorter learning curve and low rates of complications (PUBMED:31489477).
GreenLight laser photoselective vapo-enucleation of the prostate (PVEP) with front-firing emission has been compared with plasmakinetic resection of the prostate (PKRP) for BPH, showing that PVEP was associated with shorter catheterization and postoperative hospital stay times, and was safe and effective with short-term follow-up (PUBMED:32420160).
A network meta-analysis comparing different laser systems for prostate enucleation suggested that thulium laser might be the recommended system due to lower complication risks with comparable efficacy, whereas GreenLight laser was associated with a higher complication risk than the other lasers when applied to prostate enucleation (PUBMED:35880419).
Instruction: Does providing previous results change the accuracy of cervical cytology?
Abstracts:
abstract_id: PUBMED:20014553
Does providing previous results change the accuracy of cervical cytology? Objective: To determine whether providing previous cytology and histology findings alters the accuracy of conventional cervical cytology reading or changes reading times.
Study Design: Each of 9 cytologists read 9 batches of 8 routinely referred Pap smears (total, 648 slides), with history (H) and without history (NH), at an interval of no less than 4 weeks. Each batch was read blind to the result of reading under the other strategy and to histology. Histologic cervical intraepithelial neoplasia 2 or more severe was the reference standard. Accuracy of reading was assessed across all thresholds using receiver operating characteristic (ROC) curves and by sensitivity and specificity at a cytology threshold of possible low-grade squamous intraepithelial lesion (consistent with atypical squamous cells of undetermined significance).
Results: Areas under the ROC curve, sensitivities and specificities were similar if read with or without history, except for 1 reader for whom reading with history increased the area under the ROC curve from 0.716 to 0.833 (increase of 0.117, p = 0.017) and the sensitivity from 0.57 to 0.79 (increase of 0.22, p = 0.014), without any significant change in specificity. Accuracy varied between subgroups defined by age and by the severity and timing of previous abnormalities, but the results of the comparison of accuracy in H and NH did not vary by subgroup. Mean reading times were 8.2 (H) and 7.9 (NH) minutes per slide, a difference of 0.34 minutes (p = 0.083). Differences in mean batch times (H-NH) between readers ranged from -0.08 to 1.0 minutes, the largest difference being for the reader whose accuracy increased.
Conclusion: An accurate history might improve accuracy for some cytologists.
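For context on the accuracy measures used in this abstract, the area under the ROC curve and the sensitivity/specificity at a chosen cytology threshold can be computed as below. This is a minimal, dependency-free Python sketch for illustration only; the labels and scores are invented and are not the study data.

def auc(labels, scores):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation:
    the probability that a randomly chosen positive case is scored higher
    than a randomly chosen negative case (ties count 0.5)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sens_spec(labels, scores, threshold):
    """Sensitivity and specificity when calling 'abnormal' at or above threshold."""
    tp = sum(1 for l, s in zip(labels, scores) if l == 1 and s >= threshold)
    fn = sum(1 for l, s in zip(labels, scores) if l == 1 and s < threshold)
    tn = sum(1 for l, s in zip(labels, scores) if l == 0 and s < threshold)
    fp = sum(1 for l, s in zip(labels, scores) if l == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical example: 1 = CIN2+ on histology, scores = ordinal cytology grade.
labels = [1, 1, 1, 0, 0, 0, 0, 1]
scores = [3, 2, 1, 1, 0, 0, 2, 3]
print(auc(labels, scores), sens_spec(labels, scores, threshold=1))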
abstract_id: PUBMED:36010299
The Accuracy of Cytology, Colposcopy and Pathology in Evaluating Precancerous Cervical Lesions. Introduction: Cervical cancer (CC) is the third most common cancer in the world, and Romania has the highest incidence of cervical cancer in Europe. The aim of this study was to evaluate the correlation between cytology, colposcopy, and pathology for the early detection of premalignant cervical lesions in a group of Romanian patients. Methods: This observational type 2 cohort study included 128 women from our unit, "Bucur" Maternity, who were referred for cervical cancer screening. Age, clinical diagnosis, cytology results, colposcopy impression, and biopsy results were considered. Colposcopy was performed by two experienced examiners. The pathological examination was performed by an experienced pathologist. Results: The cytology found high-grade squamous intraepithelial lesions in 60.9% of patients, low-grade squamous intraepithelial lesions in 28.1%, atypical squamous cells for which a high-grade lesion could not be excluded in 9.4%, and atypical squamous cells of undetermined significance, known as repeated LSIL, in 1.6%. The first evaluator identified low-grade lesions in 56.3%, high-grade lesions in 40.6%, and invasion in 3.1% of patients. The second evaluator identified low-grade lesions in 59.4%, high-grade lesions in 32.0%, and invasion in 8.6% of patients. The pathological exam identified low-grade lesions in 64.1%, high-grade lesions in 25%, and carcinoma in 14% of patients. The colposcopic accuracy was greater than the cytologic accuracy. Conclusions: Colposcopy remains an essential tool for the identification of cervical premalignant cancer cells. Standardization of the protocol provided an insignificant interobserver variability and can serve as support for further postgraduate teaching.
abstract_id: PUBMED:25834538
Diagnostic accuracy of fine needle aspiration cytology in providing a diagnosis of cervical lymphadenopathy among HIV-infected patients. Background: Opportunistic infections and malignancies cause lymphadenopathy in HIV-infected patients. The use and accuracy of fine needle aspiration cytology in the diagnosis of cervical lymphadenopathy among HIV-infected patients is not well studied in Uganda.
Objective: The aim of this study was to determine the diagnostic accuracy of fine needle aspiration cytology in providing a diagnosis of cervical lymphadenopathy among HIV-infected patients in Uganda.
Methods: We consecutively recruited adult HIV-infected patients with cervical lymphadenopathy admitted to Mulago Hospital medical wards. Clinical examination, fine needle aspiration and lymph node biopsy were performed. We estimated the sensitivity, specificity, and negative and positive predictive values using histology as the gold standard.
Results: We enrolled 108 patients with a mean age of 33 years (range, 18-60); 59% were men and the mean CD4 count was 83 (range, 22-375) cells/mm3. The major causes of cervical lymphadenopathy were: tuberculosis (69.4%), Kaposi's sarcoma-KS (10.2%) and reactive adenitis (7.4%). Overall, fine needle aspiration cytology accurately predicted the histological findings in 65 out of 73 cases (89%) and missed 7 cases (9.5%). The sensitivity, specificity, positive predictive value and negative predictive value were 93.1%, 100%, 100% and 78.7%, respectively, for tuberculosis, and 80%, 98.4%, 88.9% and 98.9%, respectively, for KS. No fine needle aspiration complications were noted.
Conclusions: Fine needle aspiration cytology is safe and accurate in the diagnosis of tuberculosis and KS cervical lymphadenopathy among HIV-positive patients.
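The diagnostic indices quoted above (sensitivity, specificity, positive and negative predictive values against a histology gold standard) all come from a 2x2 table of cytology result versus histology result. A short Python sketch of the arithmetic follows; the counts are hypothetical, chosen only to approximate the tuberculosis figures reported above rather than taken from the study.

def diagnostic_indices(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 table in which the
    gold standard (e.g. histology) defines disease status."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sensitivity, specificity, ppv, npv

# Hypothetical counts: 54 true positives, 0 false positives,
# 4 false negatives, 15 true negatives.
print(diagnostic_indices(tp=54, fp=0, fn=4, tn=15))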
abstract_id: PUBMED:25132656
Interobserver reproducibility and accuracy of p16/Ki-67 dual-stain cytology in cervical cancer screening. Background: Dual-stain cytology for p16 and Ki-67 has been proposed as a biomarker in cervical cancer screening. The authors evaluated the reproducibility and accuracy of dual-stain cytology among 10 newly trained evaluators.
Methods: In total, 480 p16/Ki-67-stained slides from human papillomavirus-positive women were evaluated in masked fashion by 10 evaluators. None of the evaluators had previous experience with p16 or p16/Ki-67 cytology. All participants underwent p16/Ki-67 training and subsequent proficiency testing. Reproducibility of dual-stain cytology was measured using the percentage agreement, individual and aggregate κ values, as well as McNemar statistics. Clinical performance for the detection of cervical intraepithelial neoplasia grade 2 or greater (CIN2+) was evaluated for each individual evaluator and for all evaluators combined compared with the reference evaluation by a cytotechnologist who had extensive experience with dual-stain cytology.
Results: The percentage agreement of individual evaluators with the reference evaluation ranged from 83% to 91%, and the κ values ranged from 0.65 to 0.81. The combined κ value was 0.71 for all evaluators and 0.73 for cytotechnologists. The average sensitivity and specificity for the detection of CIN2+ among novice evaluators was 82% and 64%, respectively; whereas the reference evaluation had 84% sensitivity and 63% specificity, respectively. Agreement on dual-stain positivity increased with greater numbers of p16/Ki-67-positive cells on the slides.
Conclusions: Good to excellent reproducibility of p16/Ki-67 dual-stain cytology was observed with almost identical clinical performance of novice evaluators compared with reference evaluations. The current findings suggest that p16/Ki-67 dual-stain evaluation can be implemented in routine cytology practice with limited training.
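Agreement statistics such as the κ values reported above correct raw percentage agreement for the agreement expected by chance. A minimal Python sketch of unweighted Cohen's kappa for two readers making positive/negative dual-stain calls is shown below for illustration; the ratings are invented and not the study data.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two raters scoring the same slides."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in categories) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical dual-stain calls (1 = positive, 0 = negative) on ten slides.
reference = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]
evaluator = [1, 0, 0, 0, 1, 0, 1, 0, 1, 1]
print(cohens_kappa(reference, evaluator))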
abstract_id: PUBMED:12583415
Accuracy of cytological findings in abnormal cervical smears by cytohistologic comparison. To investigate the accuracy rates of cytology in abnormal cervical smears and the factors contributing to a discrepant diagnosis between cytology and histology reports of cervical intraepithelial and invasive neoplasms. During the four-year period 1993 to 1996, abnormal cervical smear findings, which were followed by cervical biopsy, were available in 709 patients. The cytology and histology slides were reviewed in each case. The accuracy rates of cytology before and after review were investigated. The accuracy rate of cytology was 48%. Following review it became 56%, mainly due to a reduction in the number of cases in which the smear showed a lesser degree of CIN than did the biopsy. The proportion of cases in which the cytological impression of CIN was more severe than the histology was minimally altered. The results suggest that difficulty in the interpretation of cervical smears, as well as sampling errors, is responsible for reduced accuracy even in smears which are considered representative of the pathological process.
abstract_id: PUBMED:8629405
Early cervical neoplasia confirmed by conization: diagnostic accuracy of cytology, colposcopy and punch biopsy. Objective: To investigate the accuracy rates of cytology, colposcopy and punch biopsy in early cervical neoplasia confirmed by conization.
Study Design: During the 10 years from 1984 to 1993, cold knife conization was performed on 151 patients with early cervical neoplasia proven by punch biopsy at our department. The accuracy rates of cytology, colposcopy and punch biopsy were investigated.
Results: The accuracy rates of cytology, colposcopy and punch biopsy were 52% (78 of 151), 66% (100 of 151) and 66% (100 of 151), respectively.
Conclusion: These results suggest that a composite diagnosis with cytology, colposcopy and punch biopsy is necessary for a correct evaluation. Early cervical neoplasms are frequently seen in young women, and conservative procedures, such as conization, cryosurgery and laser vaporization, are the treatments of choice in order to preserve reproductive function. We recommend conization as the best conservative procedure, with preservation of reproductive function.
abstract_id: PUBMED:32213171
Evaluating the performance of a low-cost mobile phone attachable microscope in cervical cytology. Background: Cervical cancer remains a global health problem especially in remote areas of developing countries which have limited resources for cervical cancer screening. In this study, we evaluated the performance of a low-cost, smartphone attachable paper-based microscope when used for classifying images of cervical cytology.
Methods: Cervical cytology samples included: 10 Normal, 10 Low-grade squamous intraepithelial lesion (LSIL), 10 High-grade squamous intraepithelial lesion (HSIL), and 10 Malignant Pap Smears. The agreement between conventional microscopy vs. Foldscope imaging was calculated using a weighted kappa coefficient. A confusion matrix was created with three classes: Normal, LSIL, and HSIL/malignant, to evaluate the performance of the Foldscope by calculating the accuracy, sensitivity, and specificity.
Results: We observed a kappa statistic of 0.68 for the agreement. This translates into a substantial agreement between the cytological classifications by the Foldscope vs. conventional microscopy. The accuracy of the Foldscope was 80%, with a sensitivity and specificity of 85% and 90% for the HSIL/Mal category, 80% and 83.3% for LSIL, and 70% and 96.7% for Normal.
Conclusions: This study highlights the usefulness of the Foldscope in cervical cytology, demonstrating it has substantial agreement with conventional microscopy. Its use could improve cytologic interpretations in underserved areas and, thus, improve the quality of cervical cancer screening. Improvements in existing limitations of the device, such as ability to focus, could potentially increase its accuracy.
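The accuracy, sensitivity and specificity per category quoted above follow from the three-class confusion matrix treated in a one-versus-rest fashion. Below is a small Python sketch of that derivation; the matrix entries are hypothetical (the study's actual matrix is not given in the abstract) but are chosen so the derived figures approximate the reported values.

def per_class_metrics(confusion):
    """One-vs-rest sensitivity and specificity for each class of a square
    confusion matrix (rows = true class, columns = predicted class),
    plus overall accuracy."""
    n_classes = len(confusion)
    total = sum(sum(row) for row in confusion)
    accuracy = sum(confusion[i][i] for i in range(n_classes)) / total
    metrics = {}
    for k in range(n_classes):
        tp = confusion[k][k]
        fn = sum(confusion[k]) - tp
        fp = sum(confusion[i][k] for i in range(n_classes)) - tp
        tn = total - tp - fn - fp
        metrics[k] = (tp / (tp + fn), tn / (tn + fp))  # (sensitivity, specificity)
    return accuracy, metrics

# Hypothetical 3-class matrix: rows/columns = Normal, LSIL, HSIL/Malignant.
confusion = [
    [7, 2, 1],
    [1, 8, 1],
    [0, 3, 17],
]
print(per_class_metrics(confusion))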
abstract_id: PUBMED:34319039
Prevalence of Abnormal Anal Cytology in Women with Abnormal Cervical Cytology. Objective: The aim of this study was to evaluate the prevalence of abnormal anal cytology in women presenting with abnormal cervical cytology (intraepithelial lesion or cervical cancer) at the largest tertiary university hospital in Thailand.
Methods: A cross-sectional prospective study design was used. Anal cytology was performed on 145 women with abnormal cervical cytology between June 2014 and October 2014. If abnormal anal cytology was detected, anoscopy was performed with biopsy of any suspicious area of precancerous change.
Results: Prevalence of abnormal anal cytology was 5.5% (8 patients). Of these 8 patients, six presented with low-grade squamous intraepithelial lesion, one with high-grade squamous intraepithelial lesion, and one with atypical squamous cells, cannot exclude high-grade squamous intraepithelial lesion. Abnormal anoscopic impression was found in 3 cases, as follows: the first case showed a faint acetowhite lesion and the anoscopic impression was low-grade squamous intraepithelial lesion; the second case was reported as human papillomavirus (HPV) change by anoscopic impression; and the third case showed a dense acetowhite lesion with multiple punctation, and pathologic examination showed anal intraepithelial neoplasia III (AIN3). The last patient underwent wide local excision of the AIN3 with split-thickness skin graft reconstruction. Final pathology confirmed AIN3 with a free resection margin.
Conclusion: Prevalence of abnormal anal cytology was 5.5% in patients with abnormal cervical cytology. This prevalence might support anal cytology screening in this group of patients.
abstract_id: PUBMED:2686324
Statistical measurements of accuracy in cervical cytology. Statistical measures that can be used to monitor the level of accuracy of cervical cytologic screening are examined: sensitivity, the false-negative rate, the interval cancer rate among women with negative reports and the predictive value of a positive test. Sensitivity and the false-negative rate are subject to biases and problems of determination that may make them less useful as measures of accuracy. The interval cancer rate and the positive predictive value may be better methods of assessing the frequency of serious abnormalities following negative cytologic reports and the accuracy of cytologic reports of serious abnormalities, respectively. It is important to recognize that no single measure of accuracy can adequately define a laboratory's performance. The use of statistical analyses can be invaluable in placing in context the criticisms of cervical cytology as an appropriate screening test for the prevention of cervical cancer; statistical analysis should be used as one component of quality control.
abstract_id: PUBMED:7607854
A comparison between referral and hospital cervical cytology laboratory reports. The accuracy of reports from various cytology laboratories has hitherto not been evaluated in Israel. The purpose of our study was to assess the degree of concordance between cervical cytology reports from laboratories outside our hospital with hospital cytology reports, and of both, with biopsy reports. The study group comprised 95 women referred because of an abnormal cytology smear, and who had a repeat cytology smear in the hospital within 3 months without intervening treatment. Fifty-one of these women had a colposcopically directed biopsy. We found a poor concordance between referral and hospital cervical cytology reports, and a better concordance of hospital cytology reports with biopsy findings than of referral cytology reports. There was a tendency for overdiagnosis in both the referral and the hospital cytology reports. A mechanism for continuous quality control of cervical cytology smear reports is needed.
Answer: Providing previous results may change the accuracy of cervical cytology for some cytologists. A study found that for one out of nine cytologists, reading with history increased the area under the receiver operating characteristic (ROC) curve and sensitivity without any significant change in specificity. However, the accuracy varied between subgroups defined by age and by the severity and timing of previous abnormalities, but the comparison of accuracy in reading with history (H) and without history (NH) did not vary by subgroup. The mean reading times were slightly longer when history was provided, but the difference was not statistically significant (PUBMED:20014553).
Overall, the evidence suggests that an accurate history might improve accuracy for some cytologists, but the effect is not uniform across all readers. It is important to note that the accuracy of cervical cytology can also be influenced by other factors such as the experience of the cytologist, the quality of the sample, and the use of adjunctive tools like colposcopy and biomarkers (PUBMED:36010299, PUBMED:25132656). Therefore, while previous results can be a factor, they are just one of many that can impact the accuracy of cervical cytology readings. |
Instruction: Is the Readability of Spine-Related Patient Education Material Improving?
Abstracts:
abstract_id: PUBMED:35342026
Readability of Online Spine Patient Education Resources. Objective: We assessed the readability of spine-related patient education materials on professional society websites to determine whether this had improved since last studied. We also compared the readability of these materials to a more patient-centered source, such as WebMD.
Methods: Patient education pages from the American Association of Neurological Surgeons (AANS), North American Spine Society (NASS), and spine-related pages from the American Academy of Orthopaedic Surgeons (AAOS), and WebMD were reviewed. Readability was evaluated using the Flesch Kincaid Grade Level (FKGL) and Flesch Reading Ease (FRE) formulas. The mean FKGL and FRE scores of the societies were compared using one-way analysis of variance. The rate of a reading level at or below an eighth grade level was compared using the χ2 test.
Results: We analyzed a total of 156 sites. The mean FKGL score for the professional society sites was 11.4. The mean FRE score for the professional societies was 45.8, with 14.4% written at or below an eighth grade reading level. We found a significant difference in the FKGL scores and in the proportion of materials at or below the eighth grade level between the AAOS and the AANS, and between the AAOS and the NASS. The mean FKGL and FRE scores for WebMD were 7.57 and 68.1, respectively, with a significant difference compared with the scores for the AAOS, NASS, and AANS. In addition, 80% of the WebMD materials had been written at or below the eighth grade reading level, a significant difference compared with the AANS and NASS (P < 0.0001) but not the AAOS (P = 0.059).
Conclusions: The average readability of spine-related topics exceeded the eighth grade reading level. The AAOS resources had better readability compared with the NASS and AANS. We found no improvement in readability since last studied. The readability of professional societies' materials was significantly worse than those from WebMD.
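Both readability scores used above are simple closed-form functions of average sentence length and average syllables per word. The sketch below implements the two standard formulas in Python; the word, sentence and syllable counts passed in are assumed to come from an upstream text-processing step, which is not shown, and the example numbers are invented.

def flesch_kincaid_grade(total_words, total_sentences, total_syllables):
    """Flesch-Kincaid Grade Level: higher values mean harder-to-read text."""
    return (0.39 * (total_words / total_sentences)
            + 11.8 * (total_syllables / total_words)
            - 15.59)

def flesch_reading_ease(total_words, total_sentences, total_syllables):
    """Flesch Reading Ease: 0-100 scale, higher values mean easier-to-read text."""
    return (206.835
            - 1.015 * (total_words / total_sentences)
            - 84.6 * (total_syllables / total_words))

# Hypothetical counts for a short patient-education page.
words, sentences, syllables = 600, 40, 1020
print(flesch_kincaid_grade(words, sentences, syllables))
print(flesch_reading_ease(words, sentences, syllables))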
abstract_id: PUBMED:37466719
Readability of spine-related patient education materials: a standard method for improvement. Purpose: Orthopaedic patient education materials (PEMs) have repeatedly been shown to be well above the reading level recommended by the National Institutes of Health and the American Medical Association. The purpose of this study is to create a standardized method to improve the readability of PEMs describing spine-related conditions and injuries. It is hypothesized that reducing the usage of complex words (≥ 3 syllables) and reducing sentence length to < 15 words per sentence improves readability of PEMs as measured by all seven readability formulas used.
Methods: OrthoInfo.org was queried for spine-related PEMs. The objective readability of PEMs was evaluated using seven unique readability formulas before and after applying a standardized method to improve readability while preserving critical content. This method involved reducing the use of > 3 syllable words and ensuring sentence length is < 15 words. Paired samples t-tests were conducted to assess relationships with the cut-off for statistical significance set at p < 0.05.
Results: A total of 20 spine-related PEM articles were used in this study. When comparing original PEMs to edited PEMs, significant differences were seen among all seven readability scores and all six numerical descriptive statistics used. Per the Flesch Kincaid Grade level readability formula, one original PEM (5%) versus 15 edited PEMs (75%) met recommendations of a sixth-grade reading level.
Conclusion: The current study shows that using this standardized method significantly improves the readability of spine-related PEMs and significantly increased the likelihood that PEMs will meet recommendations for being at or below the sixth-grade reading level.
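The standardized method described above hinges on two mechanical checks: flagging words of three or more syllables and sentences longer than 15 words. A rough Python sketch of such a screening pass is shown below for illustration; the vowel-group syllable counter is a crude heuristic assumed here for simplicity, not the counting rule used by the authors, and the sample sentence is invented.

import re

def count_syllables(word):
    """Very rough syllable estimate: count groups of consecutive vowels.
    Real readability tools use dictionaries or better heuristics."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flag_for_editing(text, max_sentence_words=15, complex_syllables=3):
    """Return sentences that exceed the word limit, with the complex
    (3+ syllable) words they contain, as candidates for simplification."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    flagged = []
    for sentence in sentences:
        words = re.findall(r"[A-Za-z']+", sentence)
        complex_words = [w for w in words if count_syllables(w) >= complex_syllables]
        if len(words) > max_sentence_words or complex_words:
            flagged.append((sentence, len(words), complex_words))
    return flagged

sample = ("Degenerative conditions of the spine frequently necessitate "
          "multidisciplinary evaluation before any operative intervention is considered.")
print(flag_for_editing(sample))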
abstract_id: PUBMED:37724783
Readability of patient education material in stroke: a systematic literature review. Background: Stroke education materials are crucial for the recovery of stroke patients, but their effectiveness depends on their readability. The American Medical Association (AMA) recommends patient education materials be written at a sixth-grade level. Studies show existing paper and online materials exceed patients' reading levels and undermine their health literacy. Low health literacy among stroke patients is associated with worse health outcomes and decreased efficacy of stroke rehabilitation.
Objective: We reviewed the readability of paper (i.e., brochures, factsheets, posters) and online (i.e., American Stroke Association, Google, Yahoo!) stroke patient education materials, the reading level of stroke patients, the accessibility of online health information, and patients' perceptions of gaps in stroke information, and provided recommendations for improving readability.
Method: A PRISMA-guided systematic literature review was conducted using the PubMed, Google Scholar, and EBSCOhost databases with the search terms "stroke", "readability of stroke patient education", and "stroke readability" to identify English-language articles. A total of 12 articles were reviewed.
Results: SMOG scores for paper and online materials ranged from grade level 11.0-12.0 and grade level 7.8-13.95, respectively. The reading level of stroke patients ranged from the 3rd grade to the 9th grade level or above. Accessibility of online stroke information was high. Structured patient interviews illustrated gaps in patient education materials and difficulty with comprehension.
Conclusion: Paper and online patient education materials exceed the reading level of stroke patients and the AMA recommended 6th grade level. Due to limitations in readability, stroke patients are not being adequately educated about their condition.
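The SMOG grades cited in this review are computed from the number of polysyllabic (three or more syllable) words, normalised to a 30-sentence sample. A minimal Python rendering of the standard SMOG formula is given below for illustration; the counts in the example are invented.

import math

def smog_grade(polysyllable_count, sentence_count):
    """SMOG readability grade from the count of 3+ syllable words in a text
    sample, normalised to a 30-sentence sample."""
    return 1.0430 * math.sqrt(polysyllable_count * (30.0 / sentence_count)) + 3.1291

# Hypothetical sample: 90 polysyllabic words across 30 sentences.
print(smog_grade(polysyllable_count=90, sentence_count=30))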
abstract_id: PUBMED:27294810
Is the Readability of Spine-Related Patient Education Material Improving?: An Assessment of Subspecialty Websites. Study Design: Analysis of spine-related patient education materials (PEMs) from subspecialty websites.
Objective: The aim of this study was to assess the readability of spine-related PEMs and compare to readability data from 2008.
Summary Of Background Data: Many spine patients use the Internet for health information. Several agencies recommend that the readability of online PEMs should be no greater than a sixth-grade reading level, as health literacy predicts health-related quality of life outcomes. This study evaluated whether the North American Spine Society (NASS), American Association of Neurological Surgeons (AANS), and American Academy of Orthopaedic Surgeons (AAOS) online PEMs meet recommended readability guidelines for medical information.
Methods: All publicly accessible spine-related entries within the patient education section of the NASS, AANS, and AAOS websites were analyzed for grade level readability using the Flesch-Kincaid formula. Readability scores were also compared with a similar 2008 analysis. Comparative statistics were performed.
Results: A total of 125 entries from the subspecialty websites were analyzed. The average (SD) readability of the online articles was grade level 10.7 (2.3). Of the articles, 117 (93.6%) had a readability score above the sixth-grade level. The readability of the articles exceeded the maximum recommended level by an average of 4.7 grade levels (95% CI, 4.292-5.103; P < 0.001). Compared with 2008, the three societies published more spine-related patient education articles (61 vs. 125, P = 0.045) and the average readability level improved from 11.5 to 10.7 (P = 0.018). Of three examined societies, only one showed significant improvement over time.
Conclusion: Our findings suggest that the spine-related PEMs on the NASS, AAOS, and AANS websites have readability levels that may make comprehension difficult for a substantial portion of the patient population. Although some progress has been made in the readability of PEMs over the past 7 years, additional improvement is necessary.
Level Of Evidence: 2.
abstract_id: PUBMED:19910867
Readability of spine-related patient education materials from subspecialty organization and spine practitioner websites. Study Design: Analysis of spine-related websites available to the general public.
Objective: To assess the readability of spine-related patient educational materials available on professional society and individual surgeon or practice based websites.
Summary Of Background Data: The Internet has become a valuable source of patient education material. A significant percentage of patients, however, find this Internet based information confusing. Healthcare experts recommend that the readability of patient education material be less than the sixth grade level. The Flesch-Kincaid grade level is the most widely used method to evaluate the readability score of textual material, with lower scores suggesting easier readability.
Methods: We conducted an Internet search of all patient education documents on the North American Spine Society (NASS), American Association of Neurological Surgeons (AANS), the American Academy of Orthopaedic Surgeons (AAOS), and a sample of 10 individual surgeon or practice-based websites. The Flesch-Kincaid grade level of each article was calculated using widely available Microsoft Office Word software. The mean grade levels of articles on the various professional society and individual/practice-based websites were compared.
Results: A total of 121 articles from the various websites were available and analyzed. All 4 categories of websites had mean Flesch-Kincaid grade levels greater than 10. Only 3 articles (2.5%) were found to be at or below the sixth grade level, the recommended readability level for adult patients in the United States. There were no significant differences among the mean Flesch-Kincaid grade levels from the AAOS, NASS, AANS, and practice-based websites (P = 0.065, ANOVA).
Conclusion: Our findings suggest that most of the spine-related patient education materials on professional society and practice-based websites have readability scores that may be too high, making comprehension difficult for a substantial portion of the United States adult population.
abstract_id: PUBMED:29151111
Readability Assessment of Patient Education Material Published by German-Speaking Associations of Urology. Objective: To assess the readability and comprehensibility of web-based German-language patient education material (PEM) issued by urological associations.
Materials And Methods: German PEM available in June 2017 was obtained from the European Association of Urology (EAU), German (DGU), Swiss (SGU) and Austrian (ÖGU) Association of Urology websites. Each educational text was analyzed separately using 4 well-established readability assessment tools: the Amstad Test (AT), G-SMOG (SMOG), Wiener Sachtextformel (WS) and the Lesbarkeitsindex (LIX).
Results: The EAU has issued PEM on 8 topics, the DGU 22 and the SGU 5. The ÖGU refers to the PEMs published by the DGU. Calculation of grade levels (SMOG, WS, LIX) showed readability scores of the 7th-14th grades. The easiest readability was found for materials on Nocturia and Urinary Incontinence issued by the EAU. Kidney Cancer and Infertility, issued by the DGU had the hardest readability. The EAU achieved the best median AT score, followed by the SGU, and the DGU.
Conclusion: Remarkable differences between readability were found for the PEMs issued by EAU, DGU and SGU. Materials published by the EAU were the easiest to read. Improving the readability of certain PEMs is of crucial importance to meet patient needs and act in the interests of a growing, self-informing German-speaking patient community.
abstract_id: PUBMED:25239196
Improving the readability of online foot and ankle patient education materials. Background: Previous studies have shown the need for improving the readability of many patient education materials to increase patient comprehension. This study's purpose was to determine the readability of foot and ankle patient education materials and to determine the extent readability can be improved. We hypothesized that the reading levels would be above the recommended guidelines and that decreasing the sentence length would also decrease the reading level of these patient educational materials.
Methods: Patient education materials from online public sources were collected. The readability of these articles was assessed by a readability software program. The detailed instructions provided by the National Institutes of Health (NIH) were then used as a guideline for performing edits to help improve the readability of selected articles. The most quantitative guideline, lowering all sentences to less than 15 words, was chosen to show the effect of following the NIH recommendations.
Results: The reading levels of the sampled articles were above the sixth to seventh grade recommendations of the NIH. The MedlinePlus website, which is a part of the NIH website, had the lowest reading level (8.1). The articles edited had an average reduction of 1.41 grade levels, with the lowest reduction in the Medline articles of 0.65.
Conclusion: Providing detailed instructions to the authors writing these patient education articles and implementing editing techniques based on previous recommendations could lead to an improvement in the readability of patient education materials.
Clinical Relevance: This study provides authors of patient education materials with simple editing techniques that will allow for the improvement in the readability of online patient educational materials. The improvement in readability will provide patients with more comprehendible education materials that can strengthen patient awareness of medical problems and treatments.
abstract_id: PUBMED:38367003
A standardised method for improving patient education material readability for orthopaedic trauma patients. Purpose: While the National Institutes of Health and American Medical Association recommend patient education materials (PEMs) should be written at the sixth-grade reading level or below, many patient education materials related to traumatic orthopaedic injuries do not meet these recommendations. The purpose of this study is to create a standardised method for enhancing the readability of trauma-related orthopaedic PEMs by reducing the use of ≥ three syllable words and reducing the use of sentences >15 words in length. We hypothesise that applying this standardized method will significantly improve the objective readability of orthopaedic trauma PEMs.
Methods: A patient education website was queried for PEMs relevant to traumatic orthopaedic injuries. Orthopaedic trauma PEMs included (N = 40) were unique, written in a prose format, and <3500 words. PEM statistics, including scores for seven independent readability formulae, were determined for each PEM before and after applying this standard method.
Results: All PEMs had significantly different readability scores when comparing original and edited PEMs (p < 0.01). The mean Flesch Kincaid Grade Level of the original PEMs (10.0 ± 1.0) was significantly higher than that of edited PEMs (5.8 ± 1.1) (p < 0.01). None of the original PEMs met recommendations of a sixth-grade reading level compared with 31 (77.5%) of edited PEMs.
Conclusions: This standard method that reduces the use of ≥ three syllable words and <15 word sentences has been shown to significantly reduce the reading-grade level of PEMs for traumatic orthopaedic injuries. Improving the readability of PEMs may lead to enhanced health literacy and improved health outcomes.
abstract_id: PUBMED:25912728
Readability evaluation of Internet-based patient education materials related to the anesthesiology field. Study Objective: The main objective of the current investigation was to assess the readability of Internet-based patient education materials related to the field of anesthesiology. We hypothesized that the majority of patient education materials would not be written according to current recommended readability grade level.
Setting: Online patient education materials describing procedures, risks, and management of anesthesia-related topics were identified using the search engine Google (available at www.google.com) using the terms anesthesia, anesthesiology, anesthesia risks, and anesthesia care.
Design: Cross-sectional evaluation.
Interventions: None.
Measurements: Assessments of content readability were performed using validated instruments (Flesch-Kincaid Grade Formulae, the Gunning Frequency of Gobbledygook, the New Dale-Chall Test, the Fry graph, and the Flesch Reading Ease score).
Main Results: Ninety-six Web sites containing Internet patient education materials (IPEMs) were evaluated. The median (interquartile range) readability grade level for all evaluated IPEMs was 13.5 (12.0-14.6). All the evaluated documents were classified at a greater readability level than the current recommended readability grade, P < .001. Readability grades were not significantly different among different IPEM sources. Assessment by the Flesch Reading Ease test classified all but 4 IPEMs as at least fairly difficult to read.
Conclusions: Internet-based patient education materials related to the field of anesthesiology are currently written far above the recommended readability grade level. High complexity of written education materials likely limits access of information to millions of American patients. Redesign of online content of Web sites that provide patient education material regarding anesthesia could be an important step in improving access to information for patients with poor health literacy.
abstract_id: PUBMED:34139351
Readability of the Most Commonly Accessed Online Patient Education Materials Pertaining to Surgical Treatments of the Spine. Objective: The American Medical Association and National Institutes of Health have recommended that patient education materials should be written at the sixth-grade reading level to maximize patient comprehension. The objective of the present study was to evaluate the readability of Internet information for the 9 most common spinal surgeries.
Methods: We reviewed 90 online patient educational materials regarding the 9 most common spinal surgeries as reported by the North American Spine Society. A Google search was performed on March 23, 2019 for each surgery, and the top 10 most visited websites for each surgery were assessed for reading level using the Flesch-Kincaid formula.
Results: Using the Flesch-Kincaid formula, the average grade reading level of the 90 websites included was 12.82, with a reading ease of 37.04 ("difficult college"). Only 6 websites relayed information to patients at or below the national average of an eighth-grade reading level. The websites for bone morphogenic protein had the highest average grade reading level at 15.88 ± 2.6. Lumbar microscopic discectomy had the lowest average grade reading level at 10.37 ± 2.89. All surgical options discussed had an average readability above the recommended sixth-grade reading level.
Conclusions: The most accessed online materials for common spinal surgeries, not only exceeded the readability limits recommended by both the American Medical Association and the National Institutes of Health, but they also exceeded the average reading ability of most adults in the United States. Patients, therefore, might not fully comprehend the information from commonly accessed websites regarding surgical spine treatment options.
Answer: The readability of spine-related patient education materials (PEMs) has been a subject of concern, as these materials often exceed the recommended reading levels for patient comprehension. The American Medical Association (AMA) and other agencies recommend that PEMs be written at a sixth-grade reading level to ensure that they are accessible to a wide audience, including those with lower literacy skills. However, studies have consistently shown that the readability of spine-related PEMs is above this recommended level.
An analysis of spine-related PEMs from subspecialty websites found that the average readability level was at a 10th-grade level, with 93.6% of the articles having a readability score above the sixth-grade level. This study noted that while there was some improvement in readability from a previous analysis in 2008, the materials still did not meet the recommended guidelines (PUBMED:27294810).
Another study assessing the readability of online spine patient education resources from professional society websites and WebMD found that the mean Flesch Kincaid Grade Level (FKGL) score for the professional society sites was 11.4, significantly higher than the recommended level. In contrast, WebMD's materials had a mean FKGL of 7.57, with 80% written at or below the eighth-grade reading level, making them more accessible (PUBMED:35342026).
Efforts to improve the readability of spine-related PEMs have been made, with one study creating a standardized method to enhance readability by reducing the use of complex words and sentence length. This method significantly improved the readability of the materials, increasing the likelihood of meeting the sixth-grade reading level recommendations (PUBMED:37466719).
Despite these efforts, the overall trend suggests that the readability of spine-related PEMs has improved only modestly, if at all, over time, with most materials still written above the recommended reading level. This indicates that more work is needed to ensure that these educational resources are accessible to patients with varying levels of health literacy (PUBMED:19910867, PUBMED:34139351).
Instruction: Unexpected reduction in the incidence of birth trauma and birth asphyxia related to instrumental deliveries during the study period: was this the Hawthorne effect?
Abstracts:
abstract_id: PUBMED:12628276
Unexpected reduction in the incidence of birth trauma and birth asphyxia related to instrumental deliveries during the study period: was this the Hawthorne effect? Objective: The study was originally designed to identify the risk factors that could predict those difficult instrumental deliveries resulting in birth trauma and birth asphyxia.
Design: A prospective study on all singleton deliveries in cephalic presentation with an attempt of instrumental delivery over a 12-month period (13 March 2000 to 12 March 2001).
Setting: A local teaching hospital.
Sample: Six hundred and seventy deliveries.
Methods: A codesheet was designed to record the demographic data, characteristics of first and second stages of labour and neonatal outcome. In particular, the doctor had to enter the pelvic examination findings before the attempt of instrumental delivery.
Main Outcome Measures: Birth trauma and birth asphyxia.
Results: There was a significant reduction in the incidence of birth trauma and birth asphyxia related to instrumental deliveries during the study period (0.6%) when compared with that (2.8%) in the pre-study period (1998 and 1999) (RR 0.27, 95% CI 0.11-0.70). There were more trials of instrumental delivery in the operating theatre, although this was not statistically significant (RR 1.19, 95% CI 0.88-1.60). The instrumental delivery rate decreased during the study period (RR 0.88, 95% CI 0.82-0.94). The caesarean section rate for no progress of labour, the incidence of direct second-stage caesarean section and the incidence of failed instrumental delivery did not increase during the study period.
Conclusions: Apart from the merits of regular audit exercise and increasing experience of the staff, the Hawthorne effect might be the major contributing factor in the reduction of birth trauma and birth asphyxia related to instrumental deliveries during the study period.
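The relative risks and confidence intervals quoted in this abstract (e.g. RR 0.27, 95% CI 0.11-0.70) follow from the event counts in the two periods via the usual log-relative-risk approximation. The Python sketch below shows the arithmetic; the counts are illustrative placeholders chosen so the point estimate lands near the published RR, not the actual hospital figures, which are not given in the abstract.

import math

def relative_risk_ci(events_1, total_1, events_2, total_2, z=1.96):
    """Relative risk of group 1 versus group 2 with a Wald-type 95% CI
    computed on the log scale."""
    risk_1 = events_1 / total_1
    risk_2 = events_2 / total_2
    rr = risk_1 / risk_2
    se_log_rr = math.sqrt(1/events_1 - 1/total_1 + 1/events_2 - 1/total_2)
    lo = math.exp(math.log(rr) - z * se_log_rr)
    hi = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lo, hi

# Hypothetical counts: 4 adverse outcomes in 670 study-period deliveries
# versus 30 in 1350 pre-study deliveries.
print(relative_risk_ci(4, 670, 30, 1350))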
abstract_id: PUBMED:16567034
Continued reduction in the incidence of birth trauma and birth asphyxia related to instrumental deliveries after the study period: was this the Hawthorne effect? Background: The incidence of birth trauma and birth asphyxia related to instrumental deliveries in our obstetric unit was high (2.8%) in 1998-1999. A study was performed in 2000 to identify the risk factors. Unexpectedly, the incidence (0.6%) was reduced significantly during the study period. We attributed this phenomenon to the famous Hawthorne effect (tendency to improve performance because of awareness of being studied).
Objectives: The objectives were to study whether there is a continued reduction in the incidence of birth trauma and birth asphyxia related to instrumental deliveries in the post-study period (2001-2003) and to investigate the presence of underlying confounding factors apart from the Hawthorne effect.
Method: To compare the hospital obstetric statistics among the pre-study period (1998-1999), the study period (2000) and the post-study period (2001-2003), in particular the incidence of birth trauma and birth asphyxia related to instrumental deliveries, the instrumental delivery rate, the overall Caesarean section rate, the Caesarean section rate for no progress of labour, the incidence of failed instrumental delivery, the incidence of attempted instrumental delivery in the operating theatre, and incidence of direct second-stage Caesarean sections.
Results: The incidence of birth trauma and birth asphyxia related to instrumental deliveries (0.6%) during the study period (2000) was significantly lower than that (2.8%) during the pre-study period (1998-1999; RR 0.27, 95% CI 0.11-0.70). This phenomenon continued into the post-study period (2001-2003) when the incidence of 1.0% was similarly lower than that in the pre-study period (RR 0.35, 95% CI 0.20-0.64). The instrumental delivery rate decreased further in the post-study period (13.5%) compared with those in the study (16.6%) and pre-study (19.5%) periods (RR 0.81, 95% CI 0.75-0.89 and RR 0.69, 95% CI 0.65-0.74, respectively). There was a marked increase in the direct second-stage Caesarean section rate in the post-study period (7.1%) compared to those in the study (0.4%) and pre-study (0.7%) periods (RR 15.9, 95% CI 5.05-49.73 and RR 9.77, 95% CI 5.28-18.08, respectively).
Conclusion: A change in obstetric practice was identified that may explain the continued reduction in the incidence of birth trauma and birth asphyxia related to instrumental deliveries in the post-study period.
abstract_id: PUBMED:6866358
Prognosis for future childbearing after midcavity instrumental deliveries in primigravidas. The frequency of subsequent childbearing and the method of subsequent delivery among 149 primigravidas who required instrumental delivery for midcavity arrest of the fetal head in the second stage of labor and 1258 primigravidas who delivered spontaneously were compared. The frequency of subsequent childbearing was similar in the two groups, but operative delivery for cephalopelvic disproportion (CPD) in a second pregnancy was six times greater in the instrumentally delivered group (11.2% versus 2%; P < .005). Nevertheless, more than 75% of instrumentally delivered primigravidas who delivered heavier infants in their second pregnancy did so spontaneously. It is concluded that relative CPD is not a common factor necessitating midcavity deliveries, even if cases in which peridural anesthesia is used and deliveries for fetal bradycardia are excluded from consideration. This probably accounts for the fact that over 97% of instrumentally delivered infants suffered no birth trauma or birth asphyxia.
abstract_id: PUBMED:20708523
Incidence of and risk factors for birth trauma in Iran. Objective: Birth trauma at delivery is a rare but significant prenatal complication. The aim of this study was to determine the incidence of birth trauma and risk factors related to fetal injury.
Materials And Methods: Birth trauma was evaluated in singleton fetuses with no major anomalies and with vertex presentations over a 3-year period from 2002 to 2005. One hundred and forty-eight neonates, who experienced birth trauma, were prospectively identified and compared with 280 normal neonates. Both groups were delivered vaginally. Maternal and infant characteristics were evaluated as possible risk factors for fetal injury.
Results: Among the 148 infants with birth trauma, nine had multiple injuries. The most common injury was cephalohematoma (n = 77). Other injuries included clavicle fractures (n = 56), brachial plexus paralysis (n = 13), asphyxia (n = 7), facial lacerations (n = 4), brain hemorrhage (n = 1), and skin hematoma (n = 2). Multiple regression analysis identified premature rupture of membranes, instrumental delivery, birth weight, gestational age, induction of labor, and academic degree of attendant physician at delivery as the most significant risk factors for birth trauma.
Conclusion: The incidence of birth trauma was 41.16 per 1,000 vaginal deliveries. Induction of labor, premature rupture of membranes, academic degree of attendant physician at delivery, higher birth weight, and gestational age were associated with fetal injuries.
abstract_id: PUBMED:36251717
Birth trauma in preterm spontaneous vaginal and cesarean section deliveries: A 10-year retrospective study. Objective: We compared birth injuries for spontaneous vaginal (VD) and caesarean section (CS) deliveries in preterm and term pregnancies.
Methods: A retrospective cohort study was conducted in a single tertiary center, between January 1st, 2007, and December 31st, 2017. The study included 62330 singleton pregnancies delivered after 24 0/7 weeks gestation. Multivariable analyses compared trauma at birth, birth hypoxia and birth asphyxia in term and preterm deliveries, stratified by mode of birth, VD versus CS. Main outcome measure was trauma at birth including intracranial laceration and haemorrhage, injuries to scalp, injuries to central and peripheral nervous system, fractures to skeleton, facial and eye injury.
Results: The incidence of preterm deliveries was 10.9%. Delivery of preterm babies by CS increased from 37.0% in 2007 to 60.0% in 2017. The overall incidence of all birth trauma was 16.2%. When stratified by mode of delivery, birth trauma was recorded in 23.4% of spontaneous vaginal deliveries and 7.5% of CS deliveries (aOR 3.3, 95%CI 3.1-3.5). When considered all types of birth trauma, incidence of trauma at birth was higher after 28 weeks gestation in VD compared to CS (28-31 weeks, aOR 1.7, 95% CI 1.3-2.3; 32-36 weeks, aOR 4.2, 95% CI 3.6-4.9; >37 weeks, aOR 3.3, 95% CI 3.1-3.5). There was no difference in the incidence of birth trauma before 28 weeks gestation between VD and CS (aOR 0.8, 95% CI 0.5-1.2). Regarding overall life-threatening birth trauma or injuries at birth with severe consequences such as cerebral and intraventricular haemorrhage, cranial and brachial nerve injury, fractures of long bones and clavicle, eye and facial injury, there was no difference in vaginal preterm deliveries compared to CS deliveries (p > 0.05 for all).
Conclusion: CS is not protective of injury at birth. When all types of birth trauma are considered, these are more common in spontaneous VD, thus favoring CS as preferred method of delivery to avoid trauma at birth. However, when stratified by severity of birth trauma, preterm babies delivered vaginally are not at higher risk of major birth trauma than those delivered by CS.
abstract_id: PUBMED:17978120
Management of shoulder dystocia: trends in incidence and maternal and neonatal morbidity. Objective: To investigate trends in the incidence of shoulder dystocia, methods used to overcome the obstruction, and rates of maternal and neonatal morbidity.
Methods: Cases of shoulder dystocia and of neonatal brachial plexus injury occurring from 1991 to 2005 in our unit were identified. The obstetric notes of cases were examined, and the management of the shoulder dystocia was recorded. Demographic data, labor management with outcome, and neonatal outcome were also recorded for all vaginal deliveries over the same period. Incidence rates of shoulder dystocia and associated morbidity related to the methods used for overcoming the obstruction to labor were determined.
Results: There were 514 cases of shoulder dystocia among 79,781 (0.6%) vaginal deliveries with 44 cases of neonatal brachial plexus injury and 36 asphyxiated neonates; two neonates with cerebral palsy died. The McRoberts' maneuver was used increasingly to overcome the obstruction, from 3% during the first 5 years to 91% during the last 5 years. The incidence of shoulder dystocia, brachial plexus injury, and neonatal asphyxia all increased over the study period without change in maternal morbidity frequency.
Conclusion: The explanation for the increase in shoulder dystocia is unclear but the introduction of the McRoberts' maneuver has not improved outcomes compared with the earlier results.
Level Of Evidence: II.
abstract_id: PUBMED:23310941
Ten-year review of vacuum assisted vaginal deliveries at a district hospital in Ghana. Objective: To find the incidence, indications, failure rate and the maternal and neonatal morbidity associated with the use of the vacuum extractor in a district hospital.
Methods: A retrospective study of vacuum assisted vaginal deliveries.
Setting: Holy Family Hospital, Nkawkaw from 1st January 2000 to 31st December 2009.
Results: There were a total of 22,947 deliveries at the Holy Family Hospital over the ten-year period of the study. There were 180 cases of vacuum extraction (0.78% of the total deliveries), of which 164 (91.1%) resulted in successful vacuum assisted vaginal deliveries. The incidence of successful vacuum assisted vaginal delivery was 0.71% of the total number of deliveries. The failure rate of vacuum extraction was 8.9%. The commonest indications for vacuum assisted vaginal delivery were delayed second stage (40.5%) and poor maternal effort (29.3%). The maternal complication rate was 3.1%, while 16 (9.7%) babies were admitted to the babies' unit with birth asphyxia and other complications, which were mainly minor injuries.
Conclusion: The vacuum extractor is an effective and safe device for assisted vaginal delivery with high success rate even in a district hospital. Steps should be taken to encourage the safe use of vacuum assisted vaginal deliveries and it should be made more accessible.
abstract_id: PUBMED:9605443
Perinatal outcome of singleton term breech deliveries. Objective: To assess neonatal morbidity and mortality in singleton term infants delivered in breech presentation and to find a possible correlation between outcome and mode of delivery.
Study Design: Case study of 306 singleton, term (37-42 weeks), breech deliveries, that took place between 1989 and 1994 in one perinatal centre.
Results: 170 infants were delivered vaginally, 72 by elective and 64 by secondary cesarean section. Even after application of strict selection criteria -- i.e. prior pelvic assessment by staff obstetricians, an estimated birth weight of 2500-4000 g -- and with staff supervision, vaginal delivery turned out to be associated with a significantly higher incidence of low umbilical artery pH values and neonatal care unit admissions as compared to elective cesarean section. Five infants suffered mechanical trauma. One neonatal death occurred in the vaginal delivery group.
Conclusion: The results of this retrospective study of 306 singleton term breech deliveries imply that even after strict selection of patients, vaginal delivery is associated with increased neonatal morbidity in comparison to elective cesarean section.
abstract_id: PUBMED:27188481
Perinatal outcomes of vacuum assisted versus cesarean deliveries for prolonged second stage of delivery at term. Introduction: To compare perinatal outcomes of interventions for prolonged second stage of labor.
Materials And Methods: Retrospective cohort study in a single, university-affiliated medical center (2007-2014). Eligibility: singleton gestations at term, diagnosed with prolonged second stage of labor and head station of S + 1 and lower. We compared perinatal outcomes of cesarean deliveries (CD) with vacuum assisted deliveries (VAD).
Results: Of 62 102 deliveries, 3449 (5.6%) were eligible: 356 (10.3%) underwent CD and 3093 (89.7%) underwent VAD. The rates of five-minute Apgar scores <7, NICU admission, neonatal asphyxia and composite neonatal adverse outcome were all higher in the CD group. After adjusting for different confounders, CD was associated with the adverse neonatal composite outcome (aOR 1.57, 95% CI 1.21-2.05, p = 0.001) and VAD with cephalhematoma (aOR 4.06, 95% CI 2.64-6.25, p < 0.001). No other differences were found between the groups with regard to other traumatic outcomes.
Conclusion: Our data suggests that in deliveries complicated by prolonged second stage, CD yield poorer neonatal outcome than VAD, with no apparent major difference in traumatic composite outcome.
abstract_id: PUBMED:11441684
Features of shoulder dystocia in a busy obstetric unit. Objective: To assess the incidence and complications of shoulder dystocia and whether those complications could be avoided.
Study Design: Retrospective analysis of shoulder dystocia between 1996 and 1999 to determine whether macrosomia, diabetes, height of head at full dilatation, length of second stage or instrumental delivery could predict shoulder dystocia. Fetal asphyxia, brachial plexus injury, maneuvers used to free the shoulders and experience of the attendant were also assessed.
Results: There were 56 cases of shoulder dystocia in 24,000 births, 59% after spontaneous delivery. McRoberts maneuver was used in 48 deliveries but sufficed as a solitary procedure in nine cases. The addition of suprapubic pressure was sufficient for 25 patients and 27 when bilateral episiotomy was also used. Corkscrew procedures were required in 12 patients. Midwives were involved in 35 cases and required assistance in 27. Macrosomia > 4,000 g was a feature in 20 infants and diabetes in 6. Neither the height of the head nor the length of the second stage was helpful. There were 13 cases of Erb's palsy, seven after vacuum delivery and six after spontaneous delivery. Eight of these cases were associated with McRoberts procedure and suprapubic pressure, two with no procedure and three with the corkscrew procedure.
Conclusion: If all infants > 4,000 g had been delivered by cesarean section, there still would have been 36 cases of shoulder dystocia. If the ultrasonically estimated weight were used to select patients for cesarean section, seven cases would have been diagnosed. To lessen the degree and incidence of fetal injury, labor ward staff are urged to become as familiar as possible with the techniques of freeing the shoulders.
Answer: The unexpected reduction in the incidence of birth trauma and birth asphyxia related to instrumental deliveries during the study period was indeed attributed to the Hawthorne effect (PUBMED:12628276). The Hawthorne effect refers to the phenomenon where individuals modify their behavior in response to their awareness of being observed. In the context of the study, the healthcare providers' knowledge that their performance was being monitored as part of the study likely led to improved practices and outcomes.
The study originally aimed to identify risk factors predicting difficult instrumental deliveries resulting in birth trauma and asphyxia. It was a prospective study conducted over a 12-month period and included 670 deliveries. The results showed a significant reduction in the incidence of birth trauma and asphyxia from 2.8% in the pre-study period to 0.6% during the study period. The authors concluded that the Hawthorne effect might have been a major contributing factor to this reduction, alongside the benefits of regular audit exercises and increasing staff experience (PUBMED:12628276).
Furthermore, a follow-up study aimed to determine if the reduction in the incidence of birth trauma and asphyxia related to instrumental deliveries continued post-study (2001-2003) and to investigate other confounding factors apart from the Hawthorne effect. The results indicated that the reduced incidence persisted into the post-study period, with an incidence of 1.0%, which was still lower than the pre-study period. The instrumental delivery rate decreased further in the post-study period. The study concluded that a change in obstetric practice might explain the continued reduction in the incidence of birth trauma and asphyxia related to instrumental deliveries in the post-study period (PUBMED:16567034).
These findings suggest that the Hawthorne effect, along with changes in obstetric practice, contributed to the reduction in birth trauma and asphyxia associated with instrumental deliveries. |
Instruction: Does the exposure of urine samples to air affect diagnostic tests for urine acidification?
Abstracts:
abstract_id: PUBMED:22700881
Does the exposure of urine samples to air affect diagnostic tests for urine acidification? Background And Objectives: For accurate measurement of pH, urine collection under oil to limit the escape of CO2 on air exposure is recommended. This study aims to test the hypothesis that urine collection under oil is not necessary in acidic urine in which bicarbonate and CO2 are minor buffers, because loss of CO2 would have little effect on its pH.
Design, Setting, Participants, & Measurements: One hundred consecutive random urine samples were collected under oil and analyzed for pH, pCO2, and HCO3- immediately and after 5 minutes of vigorous shaking in uncovered flasks to allow CO2 escape.
Results: The pH values in 97 unshaken samples ranged from 5.03 to 6.83. With shaking, urine pCO2 decreased by 76%, whereas urine HCO3- decreased by 60%. Meanwhile, urine baseline median pH (interquartile range) of 5.84 (5.44-6.25) increased to 5.93 (5.50-6.54) after shaking (ΔpH=0.12 [0.07-0.29], P<0.001). ΔpH with pH≤6.0 was significantly lower than the ΔpH with pH>6.0 (0.08 [0.05-0.12] versus 0.36 [0.23-0.51], P<0.001). Overall, the lower the baseline pH, the smaller the ΔpH.
Conclusions: The calculation of buffer reactions in a hypothetical acidic urine predicted a negligible effect on urine pH on loss of CO2 by air exposure, which was empirically proven by the experimental study. Therefore, exposure of urine to air does not substantially alter the results of diagnostic tests for urine acidification, and urine collection under oil is not necessary.
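Editorial aside: a rough back-of-the-envelope check of the buffer argument above, using the standard Henderson-Hasselbalch relation for the bicarbonate pair (pK' of about 6.1, CO2 solubility coefficient 0.03 mmol/L per mmHg) and the reported average changes; this is an illustrative sketch, not a reproduction of the authors' calculation.

\[
\mathrm{pH} = \mathrm{p}K' + \log_{10}\frac{[\mathrm{HCO_3^-}]}{0.03\,p\mathrm{CO_2}}
\quad\Rightarrow\quad
\Delta\mathrm{pH} = \log_{10}\frac{[\mathrm{HCO_3^-}]'/[\mathrm{HCO_3^-}]}{p\mathrm{CO_2}'/p\mathrm{CO_2}}
\approx \log_{10}\frac{0.40}{0.24} \approx 0.22 .
\]

Even this estimate from the bicarbonate pair alone is small, and the measured median rise of 0.12 pH units is smaller still, consistent with non-bicarbonate buffers (phosphate, ammonium, organic acids) dominating in acidic urine.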
abstract_id: PUBMED:31539348
Pre-, post- or no acidification of urine samples for calcium analysis: does it matter? Background: Measuring 24-h urine calcium concentration is essential to evaluate calcium metabolism and excretion. Manufacturers recommend acidifying the urine before a measurement to ensure calcium solubility, but the literature offers controversial information on this pre-analytical treatment. The objectives of the study were (1) to compare pre-acidification (during urine collection) versus post-acidification (in the laboratory), and (2) to evaluate the impact of acidification on urinary calcium measurements in a large cohort. Methods: We evaluated the effects of pre- and post-acidification on 24-h urine samples collected from 10 healthy volunteers. We further studied the impact of acidification on the calcium results for 567 urine samples from routine laboratory practice, including 46 hypercalciuria (≥7.5 mmol/24 h) samples. Results: Calciuria values in healthy volunteers ranged from 0.6 to 12.5 mmol/24 h, and no statistically significant difference was found between non-acidified, pre-acidified and post-acidified conditions. A comparison of the values (ranging from 0.21 to 29.32 mmol/L) for 567 urine samples before and after acidification indicated 25 samples (4.4%) with analytical differences outside limits of acceptance. The bias observed for these deviant values ranged from -3.07 to 1.32 mmol/L; no patient was re-classified as hypercalciuric after acidification, and three patients with hypercalciuria were classified as normocalciuric after acidification. These three deviant patients represent 6.5% of hypercalciuric patients. Conclusions: Our results indicate that pre- and post-acidification of urine is not necessary prior to routine calcium analysis.
abstract_id: PUBMED:19760830
Development of renal function tests for measurement of urine concentrating ability, urine acidification, and glomerular filtration rate in female cynomolgus monkeys. This study was conducted to develop tests for evaluation of urine concentrating ability, urine acidification ability, and glomerular filtration rate in cynomolgus macaques. In female cynomolgus macaques, baseline urine specific gravity ranged from 1.005 to 1.031, and urine osmolality ranged from 182 to 1081 mOsm/kg. A dose of 0.4 microg/kg desmopressin acetate resulted in a urine specific gravity that ranged from 1.019 to 1.043 and an osmolality that ranged from 432 to 1298 mOsm/kg. Desmopressin acetate administration increased urine specific gravity and osmolality in each animal evaluated. Baseline urine pH in these animals ranged from 6.4 to 8.2. A dose of orally administered ammonium chloride (0.1 g/kg) resulted in a urine pH that ranged from 4.1 to 7.1. Ammonium chloride administration decreased urine pH in each animal tested. Evaluation of glomerular filtration rate was accomplished through urine collection and timed blood and urine samples after oral hydration with 0.45% sodium chloride. Blood samples were analyzed for creatinine, osmolality, sodium, potassium, and chloride. Urine samples were analyzed for volume, creatinine, osmolality, sodium, potassium, and chloride. Creatinine clearance, osmolality clearance, and electrolyte fractional clearance were calculated from the values. Osmolality clearance ranged from 0.03 to 0.07 ml/kg/min, creatinine clearance ranged from 1.84 to 2.53 ml/kg/min, fractional excretion of sodium ranged from 0.17% to 0.77%, fractional excretion of potassium ranged from 4.46% to 19.87%, and fractional excretion of chloride ranged from 0.25% to 1.08%. The response to desmopressin acetate and ammonium chloride and the osmolality, creatinine concentration, and fractional electrolyte excretion were consistent in individual animals with repeat testing. No adverse events were associated with the tests. The results of these tests are consistent with repeat testing in individual animals. These tests likely can be used for clinical assessment of renal function in cynomolgus macaques or for collection of experimental data.
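Editorial aside: the abstract does not spell out the clearance and fractional excretion formulas; the standard forms generally used for such calculations, for a substance x with urine concentration U_x, plasma concentration P_x, urine flow rate V, and creatinine (cr) as the reference, are:

\[
C_x = \frac{U_x \cdot \dot{V}}{P_x},
\qquad
\mathrm{FE}_x = \frac{U_x \cdot P_{\mathrm{cr}}}{P_x \cdot U_{\mathrm{cr}}} \times 100\%,
\]

with clearance usually normalized to body weight (ml/kg/min, as reported above).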
abstract_id: PUBMED:2382123
Ethylenethiourea in air and in urine as an indicator of exposure to ethylenebisdithiocarbamate fungicides. Ethylenethiourea (ETU) is a ubiquitous impurity of the ethylenebisdithiocarbamate (EBDC) fungicides widely used in agriculture and forestry. In the present study, ETU was used as a measure of the exposure to EBDC on potato farms and in pine nurseries during the application of EBDC fungicides and the weeding of the sprayed vegetation. Biological and hygienic monitoring was carried out through the analysis of ETU in the breathing zone and the urine of exposed workers. Although the concentrations of ETU in the ambient air of pine nurseries exceeded those of potato farms, the concentrations of ETU in the urine of potato farmers exceeded those of pine nursery workers. This result may have been due to better protective equipment in the pine nurseries. The excretion rate was 6-10 ng/h during the first 60 h after the cessation of exposure, and it diminished thereafter to 0.2 ng/h over a 22-d observation period.
abstract_id: PUBMED:19137151
Quantification of 1-aminopyrene in human urine after a controlled exposure to diesel exhaust. Diesel exhaust (DE) is a significant source of air pollution that has been linked to respiratory and cardiovascular morbidity and mortality. Many components in DE, such as polycyclic aromatic hydrocarbons, are present in the environment from other sources. 1-Nitropyrene appears to be a more specific marker of DE exposure. 1-Nitropyrene is partially metabolized to 1-aminopyrene and excreted in urine. We developed a practical, sensitive method for measuring 1-aminopyrene in human urine using an HPLC-fluorescence technique. We measured 1-aminopyrene concentrations in spot urine samples collected prior to and during 24 h following the start of 1 h controlled exposures to DE (target concentration 300 microg/m3 as PM10) and clean air control. Time-weighted-average concentrations of urinary 1-aminopyrene were significantly greater following the DE exposure compared to the control (median 138.7 ng/g creatinine vs. 21.7 ng/g creatinine, p < 0.0001). Comparing DE to control exposures, we observed significant increases in 1-aminopyrene concentration from pre-exposure to either the first post-exposure void or the peak spot urine concentration following exposure (p = 0.027 and p = 0.0026, respectively). Large inter-individual variability, in both the concentration of urinary 1-aminopyrene and the time course of appearance in the urine following the standardized exposure to DE, suggests the need to explore subject variables that may affect conversion of inhaled 1-nitropyrene to urinary excretion of 1-aminopyrene.
abstract_id: PUBMED:36124920
Agreement among rapid diagnostic tests, urine malaria tests, and microscopy in malaria diagnosis of adult patients in southwestern Nigeria. Objective: We determined the malaria prevalence and ascertained the degree of agreement among rapid diagnostic tests (RDTs), urine malaria tests, and microscopy in malaria diagnosis of adults in Nigeria.
Methods: This was a cross-sectional study among 384 consenting patients recruited at a tertiary health facility in southwestern Nigeria. We used standardized interviewer-administered questionnaires to collect patients' sociodemographic information. Venous blood samples were collected and processed for malaria parasite detection using microscopy, RDTs, and urine malaria tests. The degree of agreement was determined using Cohen's kappa statistic.
Results: The malaria prevalence was 58.3% (95% confidence interval [CI]: 53.0-63.1), 20.6% (95% CI: 16.6-25.0), and 54.2% (95% CI: 49.0-59.2) for microscopy, RDTs, and urine malaria test, respectively. The percent agreement between microscopy and RDTs was 50.8%; the expected agreement was 45.1% and Cohen's kappa was 0.104. The percent agreement between microscopy and urine malaria tests was 52.1%; the expected agreement was 50.7% and Cohen's kappa was 0.03.
Conclusion: The malaria prevalence was dependent on the method of diagnosis. This study revealed that RDTs are a promising diagnostic tool for malaria in resource-limited settings. However, urine malaria test kits require further improvement in sensitivity prior to field use in malaria-endemic settings.
abstract_id: PUBMED:10511251
Solvents in urine as exposure markers. Possible pitfalls and solutions are reviewed in the development and application of methods of urinalysis for organic solvents as biological exposure markers. When head-space gas chromatography (HS-GC) is applied for analysis, loss of the solvent in the urine sample should be below 5% if the transfer of the sample into an HS-GC vial is completed within 5 min. When urine samples are stored in a bottle, organic solvent may partly move into the air phase in the bottle, but such loss from the water phase can be compensated for by calculation when the volumes of the air and water phases are available. Solvents in urine generally show closer correlation with the exposure intensity than the corresponding metabolite(s). When the lowest vapor exposure concentration is defined as the concentration at which exposed subjects can be statistically separated from the non-exposed, solvent in urine achieves this separation at a lower exposure concentration than its metabolite(s). Compared with the solvent in blood, however, solvent in urine correlates less closely with the solvent vapor exposure, especially when the vapor concentration is low, e.g. < 10 ppm toluene.
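Editorial aside: one way the compensation mentioned above can be set up, assuming equilibrium partitioning between headspace and liquid (an illustrative sketch, not necessarily the authors' exact procedure). With an air/water partition coefficient K, a measured aqueous concentration C_w, and air and water volumes V_a and V_w, a mass balance over the closed bottle recovers the concentration before storage:

\[
C_w^{0} V_w = C_w V_w + C_a V_a, \qquad C_a = K\,C_w
\quad\Rightarrow\quad
C_w^{0} = C_w\left(1 + K\,\frac{V_a}{V_w}\right).
\]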
abstract_id: PUBMED:38275774
Does Acidification Affect Urinary Creatinine in Dairy Cattle? Nitrogen content in urine plays a crucial role in assessing the environmental impact of dairy farming. Urine acidification avoids urinary nitrogen volatilization but potentially leads to degradation of creatinine, the most dependable marker for quantifying total urine excretion volume, affecting its measurement. This study aimed to assess how acidifying urine samples affects the concentration and detection of creatinine in dairy cattle. In this trial, individual urine samples from 20 Holstein lactating dairy cows were divided into three subsamples, allocated to 1 of 3 groups consisting of 20 samples each. Samples were immediately treated as follows: acidification with H2SO4 (1 mL of acid in 30 mL of sample) to achieve a pH < 2 (Group 1); addition of an equal volume of distilled water (1 mL of distilled water in 30 mL of sample) to investigate dilution effects (Group 2); or storage without any acid or water treatment (Group 3). An analysis of creatinine levels was carried out using the Jaffe method. The Friedman test was employed to compare urine groups across treatments, and the Bland-Altman test was used to assess the agreement between measurements in Group 1 and Group 3. Urinary creatinine values were statistically different (p < 0.001) between Group 1 (median 48.5 mg/dL; range 36.9-83 mg/dL), Group 2 (median 47.5 mg/dL; range 36.5-80.7 mg/dL), and Group 3 (median 48.9 mg/dL; range 37.2-84 mg/dL). Bland-Altman analysis demonstrated agreement between Group 3 and Group 1. The measurement of urinary creatinine using the Jaffe method is affected by sample acidification, but the use of creatinine as a marker for total urine output could remain a viable tool when urine samples are acidified.
abstract_id: PUBMED:6862648
Urine chromium as an estimator of air exposure to stainless steel welding fumes. Welding stainless steel with covered electrodes, also called manual metal arc welding, generates hexavalent airborne chromium. Chromium concentrations in air and post-shift urine samples, collected on the same arbitrarily chosen working day, showed a linear relationship. Since post-shift urine samples reflect chromium concentrations of both current and previous stainless steel welding fume exposure, individual urine measurements are suggested as approximate although not exact estimators of current exposure. This study evaluates the practical importance of such measurements by means of confidence limits and tests of validity.
abstract_id: PUBMED:29643270
Comparison of personal air benzene and urine t,t-muconic acid as a benzene exposure surrogate during turnaround maintenance in petrochemical plants. Previous studies have shown that biomarkers of chemicals with long half-lives may be better surrogates of exposure for epidemiological analyses, leading to less attenuation of the exposure-disease association, than personal air samples. However, chemicals with short half-lives have shown inconsistent results. In the present study, we compared pairs of personal air benzene and its short-half-life urinary metabolite trans,trans-muconic acid (t,t-MA), and predicted the attenuation bias of a theoretical exposure-disease association. In total, 669 pairs of personal air benzene and urine t,t-MA samples were taken from 474 male workers during turnaround maintenance operations held in seven petrochemical plants. Maintenance jobs were classified into 13 groups. Variance components were calculated for personal air benzene and urine t,t-MA separately to estimate the attenuation of the theoretical exposure-disease association. Personal air benzene and urine t,t-MA showed similar attenuation of the theoretical exposure-disease association. Analyses for repeated measurements showed similar results, while in analyses for values above the limits of detection (LODs), urine t,t-MA showed less attenuation of the theoretical exposure-disease association than personal air benzene. Our findings suggest that there may be no significant difference in attenuation bias when personal air benzene or urine t,t-MA is used as a surrogate for benzene exposure.
Answer: The exposure of urine samples to air does not substantially alter the results of diagnostic tests for urine acidification. A study tested the hypothesis that urine collection under oil is not necessary in acidic urine, where bicarbonate and CO2 are minor buffers, because the loss of CO2 would have little effect on its pH. The results showed that after vigorous shaking of urine samples to allow CO2 escape, the pH increased only slightly, with a median change (ΔpH) of 0.12. The change in pH was significantly lower in samples with a pH ≤ 6.0 compared to those with a pH > 6.0. Overall, the lower the baseline pH, the smaller the ΔpH, indicating that air exposure has a negligible effect on urine pH in acidic urine, and therefore, urine collection under oil is not necessary for such diagnostic tests (PUBMED:22700881).
Instruction: Heat development at nasal endoscopes' tips: danger of tissue damage?
Abstracts:
abstract_id: PUBMED:22555994
Heat development at nasal endoscopes' tips: danger of tissue damage? A laboratory study. Objectives/hypothesis: The aim of this study was to measure the temperature generated by light sources in conjunction with angulated endoscopes and to analyze whether the heat generated at the endoscopes' tips poses a potential risk of damage to mucosa or highly sensitive structures such as the optic nerve or brain when in direct contact, given that necrosis of human protein begins at approximately 40°C.
Study Design: Laboratory setting, prospective.
Methods: Brand new 4-mm, 0° and 30° rigid nasal endoscopes were measured each with halogen, xenon, and light-emitting diode (LED) light sources, respectively, at different power levels for tip contact temperature.
Results: The highest temperatures were reached with a xenon light source at a maximum of 44.3°C, 65.8°C, and 91.4°C at 33%, 66%, and 100% power levels, respectively, for 4-mm, 0° endoscopes. For 30° endoscopes, temperatures of 47.0°C, 75.1°C, and 95.5°C were measured at 33%, 66%, and 100% power levels (P < .001; 0° vs. 30°), respectively. At 5-mm distance from the tip, temperatures were below body temperature for all light sources (<36°C) at all power settings. Within 2 minutes after switching off light sources, temperatures dropped to room temperature (22°C).
Conclusions: Xenon light sources have the greatest illumination potential; however, at only 33% power level, potentially harmful temperatures can be reached at the tips of the endoscopes. Power LED and halogen have the highest safety; however, only LED has very good illumination. In narrow corridors, direct contact to tissues or vital structures should be avoided, or endoscopes should be cooled during surgical procedures.
abstract_id: PUBMED:25119327
Decontamination methods for flexible nasal endoscopes. A national survey was carried out to investigate the current UK practice for decontaminating flexible nasal endoscopes. A postal questionnaire was sent to Sisters in Charge of 200 ear, nose and throat (ENT) outpatient departments in the UK, with an overall response rate of 60.5%. Decontamination with chlorine dioxide wipes was the most favoured method, used in 58% of the hospitals that participated in this survey. Automated machines were also used in many places (34%). Only a few hospitals used flexible sheaths (7%). Many departments do not use a separate protocol for high-risk patients.
abstract_id: PUBMED:19033060
Sterilization and disinfection of endoscopes in urology. Sterilization and disinfection of endoscopes take into account the risk of transmitted and nosocomial infections. These risks are governed by legal texts. Urology is a high-risk speciality. The material used must be single-use or at least sterilisable (18 min at 134°C). Flexible endoscopes are sensitive to high temperatures; they require disinfection and immediate use. These steps are subject to quality control rules and marking.
abstract_id: PUBMED:9343769
The integrated light endoscope: a technical simplification of nasal endoscopy. Objective: Nasal endoscopes depend on cumbersome light generators and fibre-optic cables. This results in restriction of the operator's movements, impairment of tactile sensation, and visual field limitation during the examination. More importantly, its use is difficult outside a clinic setting. A system which integrates a readily available, portable, and inexpensive light source with a nasal endoscope was tested in our department.
Method: Twenty patients underwent endoscopic examination of their nasal cavities using this simple endoscopic system followed by the traditional light-cable technique.
Results: In one patient, the visual information was insufficient with the new system. In all other cases, no additional information was demonstrated by the use of light cables.
Conclusions: The advantages and disadvantages of the system are discussed, as well as the future possibilities suggested by this development.
abstract_id: PUBMED:20818917
Combined application of oto-endoscopes and nasal endoscopes for resection of dermoid tumor in eustachian tube. Dermoid tumor in the eustachian tube (DTIET) is a congenital disease. Our patient presented with discharge of pus from the left ear when he was about 1 year old. Owing to the limited understanding of the disease at that time, he was misdiagnosed as having cholesteatoma, for which he underwent three surgical operations on the middle ear. Recently, with detailed preoperative imaging assessment and full consideration of the surgical risks, and under video monitoring with combined use of oto-endoscopes and nasal endoscopes, the DTIET was resected completely via a combined approach through the pharyngeal and tympanic openings of the eustachian tube. This case again presented as a space-occupying lesion in the eustachian tube. On the basis of the characteristic CT and MRI appearances, the possibility of DTIET should be fully considered. Endoscopic technology has advantages for surgery within the eustachian tube, allowing removal of the mass with minimal trauma.
abstract_id: PUBMED:23578363
Evaluation of a storage cabinet for heat-sensitive endoscopes in a clinical setting. Background: In most countries, endoscopes must be disinfected or fully reprocessed before the beginning of each session, even if they were cleaned and disinfected after their last use. Several storage cabinets for heat-sensitive endoscopes (SCHE) are commercially available. They are designed to maintain the microbiological quality of reprocessed endoscopes for a predefined period of time validated by the SCHE manufacturer. Use of an SCHE increases the acceptable storage time before it is necessary to re-disinfect the endoscope.
Aim: To evaluate the efficacy of an SCHE (DSC8000, Soluscope, SAS Marseilles, France) in a clinical setting.
Method: The microbiological quality of endoscopes was assessed after 72 h of storage in an SCHE (Group I), and compared with the microbiological quality of endoscopes stored for 72 h in a clean, dry, dedicated cupboard without morning disinfection (Group II) and the microbiological quality of endoscopes stored for 72 h in a clean, dry, dedicated cupboard with morning disinfection (Group III). Forty-one endoscopes in each group were sampled for microbiological quality. Endoscope contamination levels were analysed according to guidelines published by the National Technical Committee on Nosocomial Infection in 2007.
Findings/conclusion: Use of an SCHE helps to maintain the microbiological quality of endoscopes, provided that staff members are well trained and all practices are framed by a proven quality assurance process.
abstract_id: PUBMED:35861136
Thermal tissue damage caused by new endoscope model due to light absorption. Background And Aim: Bright endoscopic light sources improve the visibility of the intestinal mucosa. A newly launched endoscopic system developed by Olympus Corporation (Tokyo, Japan) in 2020 required modification to prevent heat-induced tissue damage, which reportedly occurs during magnifying chromoendoscopy. We investigated the mechanism of this phenomenon by evaluating the rise in temperature of stained and unstained porcine mucosa using the new and previous endoscopic systems.
Methods: Surface temperatures of stained (India ink, 0.05% crystal violet, 0.5% methylene blue, or 0.2% indigo carmine) and unstained porcine mucosa were evaluated using infrared imaging after contact with the new endoscopic system before it was modified (system-EVIS X1; scope-GIF-EZ1500) and compared with a previous endoscopic system (system-EVIS EXERAIII; scope-GIF-H190). We performed histological analysis of the porcine mucosa stained with 0.05% crystal violet after contact with the new endoscope to evaluate the degree of tissue damage.
Results: Surface temperatures remained < 40°C when the new endoscope was in contact with the unstained mucosa. However, the maximum surface temperature rose to > 70°C when the new endoscope was in contact with stained mucosa (with stains other than indigo carmine). Histological analysis revealed cavity formation in porcine epithelium stained with crystal violet where the endoscope made contact for ≥5 s. Using the previous endoscope, the maximum surface temperature of stained mucosa remained below approximately 60°C, and the surface temperature of the unstained mucosa remained below 30°C.
Conclusions: Heat transfer by light absorption could cause heat-induced tissue damage during magnifying chromoendoscopy using the new endoscope.
abstract_id: PUBMED:34420601
The effect of heat flux distribution and internal heat generation on the thermal damage in multilayer tissue in thermotherapy. Proper analysis of the temperature distribution during heat therapy, both in the target tissue and around it, helps prevent damage to adjacent healthy cells. In this study, exact solutions of the steady and unsteady hyperbolic bioheat equations are obtained for multilayer skin containing a tumor, for different heat fluxes applied at the skin surface and for internal heat generation within the tumor. By determining the temperature distribution for three modes (constant surface heat flux, parabolic surface heat flux, and internal heat generation in the tumor tissue), the extent of the burn in each mode is evaluated. The results indicated that the Fourier or non-Fourier behavior of the tissue plays no role in the extent of burns in thermotherapy processes. At equal powers applied to the tissue, internal heat generation in the tumor, a constant surface flux, and a parabolic surface flux yield, in that order, the most uniform to the most non-uniform temperature distributions and cause the least to the most thermal damage.
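Editorial aside: for background on the terms used above, two relations commonly encountered in this literature are the classical (parabolic) Pennes bioheat equation and the Arrhenius damage integral often used to quantify thermal injury. Both are standard textbook forms, not equations quoted from the paper; the study itself uses a hyperbolic, non-Fourier generalization whose exact formulation is not given in the abstract.

\[
\rho c\,\frac{\partial T}{\partial t} = \nabla\cdot\left(k\,\nabla T\right) + \omega_b \rho_b c_b\,(T_a - T) + Q_m + Q_{ext},
\qquad
\Omega(t) = \int_0^{t} A\,\exp\!\left(-\frac{E_a}{R\,T(\tau)}\right)\mathrm{d}\tau,
\]

where a damage parameter of \(\Omega = 1\) is commonly taken as the threshold for irreversible injury.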
abstract_id: PUBMED:22753271
Guidelines for reprocessing nonlumened heat-sensitive ear/nose/throat endoscopes. Endoscopes have become indispensable instruments in the daily activity of the ear/nose/throat (ENT) department, but their use has introduced potential health risks such as the transmission of infection. Over the years, scientific knowledge has been consolidated regarding the most appropriate ways to carry out correct disinfection, and numerous guidelines have been issued for both digestive and respiratory endoscopes, whereas to date specific references for ENT endoscopes do not exist. The diagnostic ENT endoscope does not generally have an operative channel; it is shorter and thinner and is used much more frequently, including in the outpatient setting. As a consequence, the guidelines for digestive or respiratory endoscopes are not always functional for the ENT department in that they do not take into account the dynamics or the intensity of the work performed therein. This article proposes: 1) to standardize the correct way to carry out the disinfection procedure for heat-sensitive nonlumened ENT endoscopes, to reduce to a minimum the possibility of errors or oversights; and 2) to guarantee disinfection within a limited time frame, appropriate for an ENT outpatient department. In the initial phase, the critical areas encountered in ENT endoscopy are determined. This is followed by an examination of the literature to identify existing guidelines for the reprocessing of endoscopes (mainly digestive and respiratory), with a view to establishing a common disinfection procedure for nonlumened ENT endoscopes. Finally, the new methods of disinfection developed specifically for the reprocessing of ENT endoscopes are examined and discussed.
abstract_id: PUBMED:9385391
DNA damage in the nasal passageway: a literature review. The purpose of this review is to provide a compilation of work examining DNA damage in the nasal cavity. There are numerous methods to identify and quantify damage to DNA, and the diversity of methods and toxicologic endpoints is illustrated by the range of studies presented here. There are a large number of independent studies measuring endpoints in the upper respiratory tract; however, with regard to toxicant-induced DNA damage in the nasal passageway, the effects of two compounds, 4-(N-Methyl-N-nitrosamino)-1-(3-pyridyl)-1-butanone (NNK) and formaldehyde (HCHO), appear to have been extensively characterized. The body of work on NNK and formaldehyde has provided insights into molecular mechanisms of DNA damage and repair and induced cell replication and its relationship to nasal cancer. New technologies and molecular techniques, with the sensitivity to enable evaluation of the minute quantities of nasal tissue available from test species and human biopsies, are shaping the study of nasal-toxicant interactions. As methods used to characterize DNA damage increase in sensitivity, the importance of both exogenous and endogenous sources of DNA damage, steady-state levels of cellular damage, repair, and resulting mutations, low-dose exposure assessments and inter-species extrapolation will become increasingly complex. Additional studies of DNA damage in the nasal passage will undoubtedly challenge future estimations of risk and impact what are perceived to be acceptable levels of exposure to known and predicted carcinogens. The aim of this paper is to provide the interested scientist with literature relevant to the effects of agents on nasal DNA, so that areas of insufficient information can be identified and used to further develop and expand the knowledge base for nasal DNA-toxicant interactions.
Answer: The study of heat development at nasal endoscopes' tips indicates that there is indeed a danger of tissue damage due to the temperatures reached during their use. The highest temperatures were observed with a xenon light source, which at 100% power level reached potentially harmful temperatures of 91.4°C for 4-mm, 0° endoscopes and 95.5°C for 30° endoscopes (PUBMED:22555994). These temperatures are well above the threshold for the beginning of necrosis of human protein, which starts at 40°C, suggesting that direct contact with mucosa or sensitive structures like the optic nerve or brain could lead to tissue damage. The study recommends avoiding direct contact in narrow corridors or cooling the endoscopes during surgical procedures to mitigate this risk.
In contrast, power LED and halogen light sources were found to have the highest safety, with LED also providing very good illumination. At a 5-mm distance from the tip, temperatures remained below body temperature for all light sources at all power settings, and temperatures dropped to room temperature within 2 minutes after switching off the light sources (PUBMED:22555994). This suggests that while xenon light sources pose a risk of thermal damage, other light sources such as LED and halogen may be safer alternatives for endoscopic procedures. |
Instruction: Does the number of rescuers affect the survival rate from out-of-hospital cardiac arrests?
Abstracts:
abstract_id: PUBMED:22705832
Does the number of rescuers affect the survival rate from out-of-hospital cardiac arrests? Two or more rescuers are not always better than one. Review: An increased number of rescuers may improve the survival rate from out-of-hospital cardiac arrests (OHCAs). The majority of OHCAs occur at home and are handled by family members.
Materials And Methods: Data from 5078 OHCAs that were witnessed by citizens and unwitnessed by citizens or emergency medical technicians from January 2004 to March 2010 were prospectively collected. The number of rescuers was identified in 4338 OHCAs and was classified into two (single rescuer (N=2468) and multiple rescuers (N=1870)) or three (single rescuer, two rescuers (N=887) and three or more rescuers (N=983)) groups. The backgrounds, characteristics and outcomes of OHCAs were compared between the two groups and among the three groups.
Results: When all OHCAs were collectively analysed, an increased number of rescuers was associated with better outcomes (one-year survival and one-year survival with favourable neurological outcomes were 3.1% and 1.9% for single rescuers, 4.1% and 2.0% for two rescuers, and 6.0% and 4.6% for three or more rescuers, respectively (p=0.0006 and p<0.0001)). A multiple logistic regression analysis showed that the presence of multiple rescuers is an independent factor that is associated with one-year survival (odds ratio (95% confidence interval): 1.539 (1.088-2.183)). When only OHCAs that occurred at home were analysed (N=2902), the OHCAs that were handled by multiple rescuers were associated with higher incidences of bystander CPR but were not associated with better outcomes.
Conclusions: In summary, an increased number of rescuers improves the outcomes of OHCAs. However, this beneficial effect is absent in OHCAs that occur at home.
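Editorial aside: to make the reported adjusted odds ratio concrete, the following is a minimal Python sketch (statsmodels on synthetic data) of the kind of logistic regression described above. The variable names, the single covariate and the assumed effect sizes are hypothetical and do not reproduce the study's actual model or data.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 4000
multiple_rescuers = rng.integers(0, 2, n)   # 0 = single rescuer, 1 = multiple rescuers
witnessed = rng.integers(0, 2, n)           # hypothetical adjustment covariate
# Simulate one-year survival with an assumed modest benefit for multiple rescuers.
logit_p = -3.5 + 0.43 * multiple_rescuers + 0.8 * witnessed
survival = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([multiple_rescuers, witnessed]))
fit = sm.Logit(survival, X).fit(disp=False)
print(np.exp(fit.params))      # exp(coefficient) = adjusted odds ratio
print(np.exp(fit.conf_int()))  # 95% confidence intervals on the odds-ratio scale

By construction, the fitted odds ratio for the multiple-rescuer indicator should land near exp(0.43), roughly 1.5, i.e. in the same range as the 1.539 reported above.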
abstract_id: PUBMED:25957944
Psychological impact on dispatched local lay rescuers performing bystander cardiopulmonary resuscitation. Aim: We studied the short-term psychological impact and post-traumatic stress disorder (PTSD)-related symptoms in lay rescuers performing cardiopulmonary resuscitation (CPR) after a text message (TM)-alert for out-of-hospital-cardiac arrest, and assessed which factors contribute to a higher level of PTSD-related symptoms.
Methods: The lay rescuers received a TM-alert and simultaneously an email with a link to an online questionnaire. We analyzed all questionnaires from February 2013 until October 2014 measuring the short-term psychological impact. We interviewed by telephone all first arriving lay rescuers performing bystander CPR and assessed PTSD-related symptoms with the Impact of Event Scale (IES) 4-6 weeks after the resuscitation. IES-scores 0-8 reflected no stress, 9-25 mild, 26-43 moderate, and 44-75 severe stress. A score ≥ 26 indicated PTSD symptomatology.
Results: Of all alerted lay rescuers, 6572 completed the online questionnaire. Of these, 1955 responded to the alert and 507 assisted in the resuscitation. We interviewed 203 first arriving rescuers of whom 189 completed the IES. Of these, 41% perceived no/mild short-term impact, 46% bearable impact and 13% severe impact. On the IES, 81% scored no stress and 19% scored mild stress. None scored moderate or severe stress. Using a multivariable logistic regression model we identified three factors with an independent impact on mild stress level: no automated external defibrillator connected by the lay rescuer, severe short-term impact, and no (very) positive experience.
Conclusion: Lay rescuers alerted by text messages, do not show PTSD-related symptoms 4-6 weeks after performing bystander CPR, even if they perceive severe short-term psychological impact.
abstract_id: PUBMED:9044494
Automated external versus blind manual defibrillation by untrained lay rescuers. Introduction: Sudden cardiac death is an important cause of mortality in the United States today. A major determinant of survival from sudden cardiac death is rapid defibrillation. Communities with high rates of bystander cardiopulmonary resuscitation (CPR) and early defibrillation enjoy the highest survival rates from out-of-hospital cardiac arrest. First responders and emergency medical technicians (EMTs) have been trained to use automated external defibrillators (AEDs). The period of instruction for successful use of the AED remains to be determined. It was the purpose of this study to compare AED versus blind manual defibrillation (BMD) by untrained lay rescuers using a simple instruction sheet and following a 20-min training period.
Methods: 50 employed volunteers were confronted with a simulated cardiac arrest and asked to attempt defibrillation using either AED or BMD by following a written instruction sheet. Success was defined as delivery of three countershocks during the simulated resuscitation. Times to the first and third shocks were recorded.
Results: 24 of 25 volunteers (96%) were successful in operating the AED compared to none in the BMD group. Time to delivery of first shock averaged 119.5 +/- 45.0 s and time to third shock averaged 158.7 +/- 46.3 s. A 95% confidence interval for time to first shock for untrained lay rescuers was 100.5-138.4 s.
Conclusions: untrained lay rescuers demonstrated a very high success rate using the AED during simulated cardiac arrest. Success with BMD by untrained rescuers is poor. This study suggests that prehospital personnel can be successfully trained in the use of AED in a substantially shorter period of time than in current practice. Strategic placement of AEDs like fire hoses and pool-side life preservers could result in improved survival from sudden cardiac death.
abstract_id: PUBMED:24384508
Factors associated with quality of bystander CPR: the presence of multiple rescuers and bystander-initiated CPR without instruction. Aims: To identify the factors associated with good-quality bystander cardiopulmonary resuscitation (BCPR).
Methods: Data were prospectively collected from 553 out-of-hospital cardiac arrests (OHCAs) managed with BCPR in the absence of emergency medical technicians (EMT) during 2012. The quality of BCPR was evaluated by EMTs at the scene and was assessed according to the standard recommendations for chest compressions, including proper hand positions, rates and depths.
Results: Good-quality BCPR was more frequently confirmed in OHCAs that occurred in the central/urban region (56.3% [251/446] vs. 39.3% [42/107], p=0.0015), had multiple rescuers (31.8% [142/446] vs. 11.2% [12/107], p<0.0001) and received bystander-initiated BCPR (22.0% [98/446] vs. 5.6% [6/107], p<0.0001). Good-quality BCPR was less frequently performed by family members (46.9% [209/446] vs. 67.3% [72/107], p=0.0001), elderly bystanders (13.5% [60/446] vs. 28.0% [30/107], p=0.0005) and in at-home OHCAs (51.1% [228/446] vs. 72.9% [78/107], p<0.0001). BCPR duration was significantly longer in the good-quality group (median, 8 vs. 6min, p=0.0015). Multiple logistic regression analysis indicated that multiple rescuers (odds ratio=2.8, 95% CI 1.5-5.6), bystander-initiated BCPR (2.7, 1.1-7.3), non-elderly bystanders (1.9, 1.1-3.2), occurrence in the central region (2.1, 1.3-3.3) and duration of BCPR (1.1, 1.0-1.1) were associated with good-quality BCPR. Moreover, good-quality BCPR was initiated earlier after recognition/witness of cardiac arrest compared with poor-quality BCPR (3 vs. 4min, p=0.0052). The rate of neurologically favourable survival at one year was 2.7 and 0% in the good-quality and poor-quality groups, respectively (p=0.1357).
Conclusions: The presence of multiple rescuers and bystander-initiated CPR are predominantly associated with good-quality BCPR.
abstract_id: PUBMED:36963560
Impact of number of defibrillation attempts on neurologically favourable survival rate in patients with Out-of-Hospital cardiac arrest. Aim Of The Study: Defibrillation plays a crucial role in early return of spontaneous circulation (ROSC) and survival of patients with out-of-hospital cardiac arrest (OHCA) and shockable rhythm. Prehospital adrenaline administration increases the probability of prehospital ROSC. However, little is known about the relationship between number of prehospital defibrillation attempts and neurologically favourable survival in patients treated with and without adrenaline.
Methods: Using a nationwide Japanese OHCA registry database from 2006 to 2020, 1,802,084 patients with OHCA were retrospectively analysed, among whom 81,056 with witnessed OHCA and initial shockable rhythm were included. The relationship between the number of defibrillation attempts before hospital admission and neurologically favourable survival rate (cerebral performance category score of 1 or 2) at 1 month was evaluated with subgroup analysis for patients treated with and without adrenaline.
Results: At 1 month, 18,080 (22.3%) patients had a cerebral performance category score of 1 or 2. In the study population, the probability of prehospital ROSC and favourable neurological survival rate were inversely associated with number of defibrillation attempts. Similar trends were observed in patients treated without adrenaline, whereas a greater number of defibrillation attempts was counterintuitively associated with favourable neurological survival rate in patients treated with prehospital adrenaline.
Conclusions: Overall, a greater number of prehospital defibrillation attempts was associated with lower neurologically favourable survival at 1 month in patients with OHCA and shockable rhythm. However, an increasing number of shocks (up to the 4th shock) was associated with better neurological outcomes when considering only patients treated with adrenaline.
abstract_id: PUBMED:23509061
Duration of ventilations during cardiopulmonary resuscitation by lay rescuers and first responders: relationship between delivering chest compressions and outcomes. Background: The 2010 guidelines for cardiopulmonary resuscitation allow 5 seconds to give 2 breaths to deliver sufficient chest compressions and to keep perfusion pressure high. This study aims to determine whether the recommended short interruption for ventilations by trained lay rescuers and first responders can be achieved and to evaluate its consequence for chest compressions and survival.
Methods And Results: From a prospective data collection of out-of-hospital cardiac arrest, we used automatic external defibrillator recordings of cardiopulmonary resuscitation by rescuers who had received a standard European Resuscitation Council basic life support and automatic external defibrillator course. Ventilation periods and total compressions delivered per minute during each 2 minutes of cardiopulmonary resuscitation cycle were measured, and the chest compression fraction was calculated. Neurological intact survival to discharge was studied in relation to these factors and covariates. We included 199 automatic external defibrillator recordings. The median interruption time for 2 ventilations was 7 seconds (25th-75th percentile, 6-9 seconds). Of all rescuers, 21% took <5 seconds and 83% took <10 seconds for a ventilation period; 97%, 88%, and 63% of rescuers were able to deliver >60, >70, and >80 chest compressions per minute, respectively. The median chest compression fraction was 65% (25th-75th percentile, 59%-71%). Survival was 25% (49 of 199), not associated with long or short ventilation pauses when controlled for covariates.
Conclusions: The great majority of rescuers can give 2 rescue breaths in <10 seconds and deliver at least 70 compressions in a minute. Longer pauses for ventilations are not associated with worse outcome. Guidelines may allow longer pauses for ventilations with no detriment to survival.
abstract_id: PUBMED:25034496
Could the survival and outcome benefit of adrenaline also be dependent upon the presence of gasping upon arrival of emergency rescuers? A recent systematic review and meta-analysis of randomized controlled trials of adrenaline use during resuscitation of out-of-hospital cardiac arrest found no benefit of adrenaline in survival to discharge or neurological outcomes. It did, however, find an advantage of standard dose adrenaline (SDA) over placebo and high dose adrenaline over SDA in overall survival to admission and return of spontaneous circulation (ROSC), which was also consistent with previous reviews. As a result, the question that remains is "Why is there no difference in the rate of survival to discharge when there are increased rates of ROSC and survival to admission in patients who receive adrenaline?" It was suggested that the lack of efficacy and effectiveness of adrenaline may be confounded by the quality of cardiopulmonary resuscitation (CPR) during cardiac arrest, which has been demonstrated in animal models. CPR quality was not measured or reported in the included randomized controlled trials. However, the survival and outcome benefit of adrenaline may also depend upon the presence of witnessed gasping and/or gasping upon arrival of emergency rescuers, which is a critical factor not accounted for in the analyses of the cited animal studies that allowed gasping but showed the survival and neurological outcome benefits of adrenaline use. Moreover, without the aid of gasping, very few rescuers can provide high-quality CPR. Also, age and the absence of gasping observed by bystanders and/or upon arrival of emergency rescuers may be important factors in the determination of whether vasopressin instead of adrenaline should be used first.
abstract_id: PUBMED:34223318
Willingness to perform bystander cardiopulmonary resuscitation: A scoping review. Background: Despite the proven effectiveness of rapid initiation of cardiopulmonary resuscitation (CPR) for patients with out-of-hospital cardiac arrest (OHCA) by bystanders, fewer than half of the victims actually receive bystander CPR. We aimed to review the evidence of the barriers and facilitators for bystanders to perform CPR.
Methods: This scoping review was conducted as part of the continuous evidence evaluation process of the International Liaison Committee on Resuscitation (ILCOR) and followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews. This review included studies assessing barriers or facilitators for lay rescuers to perform CPR in actual emergency settings and excluded studies that overlapped with other ILCOR systematic reviews/scoping reviews (e.g., dispatcher-instructed CPR). The key findings were classified into three kinds of factors: personal factors, CPR knowledge, and procedural issues.
Results: We identified 18 eligible studies. Of the studies addressing reduced willingness to respond to cardiac arrest, 14 related to "personal factors", 3 to "CPR knowledge", and 2 to "procedural issues". We also identified 5 articles assessing factors that increase bystanders' willingness to perform CPR. However, we observed significant heterogeneity among the study populations, methodologies, factor definitions, outcome measures utilized, and outcomes reported.
Conclusions: We found that a number of factors were present in actual settings which either inhibit or facilitate lay rescuers' performance of CPR. Interventional strategies to improve CPR performance of lay rescuers in the actual settings should be established, taking these factors into consideration.
abstract_id: PUBMED:21482014
Mobile phone in the chain of survival. Each day, approximately 750 Europeans suffer from an out-of-hospital cardiac arrest, which presents a large public health problem. In such circumstances, rapid activation of the Chain of Survival with effective and continuous realisation of its four links can have a large impact on survival. Mobile phones, which have become the most ubiquitous piece of modern technology, possess a strong potential to strengthen each link of the chain. Initially, they can be used to educate rescuers about appropriate actions performed in each step of the resuscitation process. However, mobile phones can also assume a more active role of helping the rescuer in a real medical emergency. They have a potential to allow for a faster and superior emergency medical services contact, assure a higher quality of cardiopulmonary resuscitation (CPR) and quicker retrieval of an automated external defibrillator and facilitate a finer post-resuscitation care through telemedical and clinical decision support systems. Smartphones, mobile phones with advanced computing abilities and connectivity, should be considered as medical devices, and their use, among lay rescuers and medical professionals in cardiovascular emergencies, further investigated and strongly encouraged.
abstract_id: PUBMED:30560444
Characteristics of a novel citizen rescue system for out-of-hospital cardiac arrest in the Dutch province of Limburg: relation to incidence and survival. Background: We evaluated the characteristics of a novel text message system notifying citizen rescuers in cases of out-of-hospital circulatory arrest (OHCA) in the Dutch province of Limburg, including their relation to incidence and survival.
Methods And Results: The study area comprised 2,153 km2 (831 mi2) with 1.12 million inhabitants. During the 2-year study period, approximately 9,000 volunteers were registered, about 60% male, 59% with no experience in actual resuscitation, and 27.4% healthcare professionals. The system was not activated in 557 of 1,085 (51.3%) OHCAs, frequently because there was no resuscitation setting present yet at the time of the emergency call. Rescuers were notified on 1,076 occasions, with no resuscitation setting being present in 548 of 1,076 (50.9%) notifications. OHCA incidence rates were 67 per 100,000 inhabitants per year, and 95 per 100,000 men and 39 per 100,000 women standardised for age with the European Standard Population. The mean number of notifications per volunteer was 1.3 times per year. Higher volunteer density was related to increased survival if at least one volunteer attended the cardiac arrest. If the density exceeded 0.75%, survival increased to 34.8% compared with 20.6% at a density below 0.25%.
Conclusion: In about half of OHCAs needing resuscitation the system was activated and in approximately half of the notifications resuscitation proved to be justified. Volunteers are notified 1.3 times per year on average. Survival was related to volunteer density, suggesting that further improvement can be achieved by increasing the number of citizen rescuers.
Answer: Yes, the number of rescuers does affect the survival rate from out-of-hospital cardiac arrests (OHCAs). An increased number of rescuers is associated with better outcomes when all OHCAs are collectively analyzed. Specifically, one-year survival and one-year survival with favorable neurological outcomes were higher with an increased number of rescuers: 3.1% and 1.9% for single rescuers, 4.1% and 2.0% for two rescuers, and 6.0% and 4.6% for three or more rescuers, respectively. The presence of multiple rescuers is an independent factor associated with one-year survival (PUBMED:22705832). Additionally, good-quality bystander cardiopulmonary resuscitation (BCPR) was more frequently confirmed in OHCAs that had multiple rescuers, and the presence of multiple rescuers was associated with good-quality BCPR (PUBMED:24384508).
However, it is important to note that while an increased number of rescuers improves the outcomes of OHCAs in general, this beneficial effect is absent in OHCAs that occur at home. In such cases, the OHCAs handled by multiple rescuers were associated with higher incidences of bystander CPR but were not associated with better outcomes (PUBMED:22705832).
Moreover, regarding defibrillation, a study of the impact of the number of defibrillation attempts on neurologically favorable survival in patients with OHCA found that a greater number of prehospital defibrillation attempts was associated with lower neurologically favorable survival at 1 month. However, when considering only patients treated with adrenaline, an increasing number of shocks (up to the 4th shock) was associated with better neurological outcomes (PUBMED:36963560).
In summary, the number of rescuers can have a positive impact on the survival rate from OHCAs, particularly when the cardiac arrest occurs in public settings. However, this effect may not be as pronounced in home settings, and the quality of CPR and defibrillation attempts also play significant roles in survival outcomes. |
Instruction: Does radiographic beam angle affect the radiocapitellar ratio measurement of subluxation in the elbow?
Abstracts:
abstract_id: PUBMED:23653100
Does radiographic beam angle affect the radiocapitellar ratio measurement of subluxation in the elbow? Background: Radial head alignment is the key to determine elbow reduction after treatment of subluxations or Monteggia fractures. The radiocapitellar ratio (RCR) quantifies the degree of subluxation, by evaluating radial head alignment with the capitellum of the humerus; this ratio is reproducible when measured on true lateral radiographs of nonsubluxated elbows. However, the impact of beam angulation on RCR measurement is unknown.
Questions/purposes: Our hypotheses were that the RCR of the nonsubluxated elbow would remain in the normal range as the beam angle changed and that the RCR variability would increase for the subluxated elbow with small deviations in the beam angle.
Methods: Radiographs were taken of six healthy cadaveric extremities using beam angles ranging from -20° to 20° along the inferosuperior axis and from -20° to 20° along the dorsoventral axis. The same views were then taken of the six arms with anterior radiocapitellar subluxation followed by posterior radiocapitellar subluxation. RCRs were measured by one observer. As a reference value, the RCR was measured in the 0° to 0° position, and the delta-RCR was obtained as the difference between each RCR measured in a nonreference position and this reference RCR. An ANOVA was performed to assess the main and interactive effects on the RCR measured in each C-arm position compared with the RCR measured on a true lateral radiograph.
Results: The RCR remained in the normal range even as the beam angle of the C-arm varied between -20° and 20°. The position of the beam did not affect the RCR in anteriorly subluxated elbows (p = 0.777), whereas RCR variation increased especially in the presence of posterior radial head subluxation when the C-arm position was 10° or more out of plane (p = 0.006). The inferosuperior malposition of the C-arm had a greater impact on quantification of radial head alignment measurement. Despite that, the RCR measurement is reliable in reduced and subluxated elbows on lateral radiographs with a C-arm position deviation of as much as 20°.
Conclusions: Identification of a subluxated elbow could be made on any lateral radiograph with a beam angulation deviation of as much as 20°. This suggests that the RCR is a useful diagnostic tool for clinical and research purposes, although for subluxated elbows, it is important to pay careful attention to the inferosuperior position of the C-arm.
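For context, the RCR quantifies the alignment of the radial head with the capitellum as the displacement of the radial head expressed as a fraction of the capitellar diameter, and the delta-RCR above is the difference from the value on the true (0°/0°) lateral view. A minimal sketch under those assumptions is shown below; the measurements and the sign convention are illustrative, not the cadaveric data.

```python
# Sketch: radiocapitellar ratio (RCR) and delta-RCR relative to the true
# lateral (0 deg/0 deg) reference view. All values are illustrative.
def rcr(displacement_mm: float, capitellum_diameter_mm: float) -> float:
    # Radial head displacement as a fraction of the capitellar diameter
    # (sign convention assumed: negative = anterior, positive = posterior).
    return displacement_mm / capitellum_diameter_mm

reference_rcr = rcr(-0.4, 22.0)  # measured on the 0/0 true lateral view

# beam angle (inferosuperior deg, dorsoventral deg) -> (displacement mm, capitellum mm)
measurements = {
    (10, 0): (-0.9, 22.0),
    (20, 0): (-1.6, 22.0),
    (0, 20): (-0.6, 22.0),
}

for angle, (disp, diam) in measurements.items():
    value = rcr(disp, diam)
    delta = value - reference_rcr  # delta-RCR as used in the study
    print(f"beam {angle}: RCR = {value:+.3f}, delta-RCR = {delta:+.3f}")
```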
abstract_id: PUBMED:26925383
Effect of elbow position on radiographic measurements of radio-capitellar alignment. Aim: To evaluate the effect of different elbow and forearm positions on radiocapitellar alignment.
Methods: Fifty-one healthy volunteers were recruited and bilateral elbow radiographs were taken to form a radiologic database. Lateral elbow radiographs were taken with the elbow in five different positions: maximal extension and forearm in neutral, maximal flexion and forearm in neutral, elbow at 90° and forearm in neutral, elbow at 90° and forearm in supination, and elbow at 90° and forearm in pronation. A goniometer was used to verify the accuracy of the elbow's position for the radiographs at a 90° angle. The radiocapitellar ratio (RCR) measurements were then taken on the collected radiographs using the SliceOmatic software. An orthopedic resident performed the radiographic measurements on the 102 elbows, for a total of 510 lateral elbow radiographic measures. ANOVA, paired t-tests, and Pearson coefficients were used to assess the differences and correlations between the RCR in each position.
Results: Mean RCR values were -2% ± 7% (maximal extension), -5% ± 9% (maximal flexion), and, with the elbow at 90°, -2% ± 5% (forearm neutral), 1% ± 6% (supination) and 1% ± 5% (pronation). ANOVA demonstrated significant differences between the RCR values in the different elbow and forearm positions. Paired t-tests confirmed significant differences between the RCR at maximal flexion and at 90° of flexion, and between maximal extension and maximal flexion. The Pearson coefficient showed significant correlations between the RCR at 90° and at maximal flexion, between forearm neutral and supination, and between forearm neutral and pronation.
Conclusion: Overall, 95% of the RCR values are included in the normal range (obtained at 90° of flexion) and a value outside this range, in any position, should raise suspicion for instability.
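The position comparisons summarized above rest on paired tests and correlations between positions within the same volunteers. A minimal sketch of that analysis step, with placeholder RCR values rather than the study data:

```python
# Sketch: paired comparison and correlation of RCR between two elbow positions.
# Each volunteer contributes one RCR per position; the arrays are placeholders.
import numpy as np
from scipy import stats

rcr_flexion_90 = np.array([-0.02, -0.04, 0.01, -0.03, 0.00, -0.05])
rcr_max_flexion = np.array([-0.06, -0.09, -0.02, -0.07, -0.04, -0.10])

t_stat, p_paired = stats.ttest_rel(rcr_flexion_90, rcr_max_flexion)  # paired t-test
r, p_corr = stats.pearsonr(rcr_flexion_90, rcr_max_flexion)          # Pearson correlation

print(f"paired t-test: t = {t_stat:.2f}, p = {p_paired:.3f}")
print(f"Pearson:       r = {r:.2f}, p = {p_corr:.3f}")
```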
abstract_id: PUBMED:38147072
Anterior radial head subluxation in primary elbow osteoarthritis. Purpose: To investigate elbows with primary osteoarthritis (OA) for the presence of anterior radial head subluxation.
Methods: A total of 71 patients with elbow osteoarthritis and 45 with lateral epicondylitis were initially identified. The baseline characteristics and preoperative elbow X-rays of consecutive patients with clinically confirmed elbow OA or lateral epicondylitis between March 2011 and January 2020 were then retrospectively reviewed. The radiocapitellar ratio (RCR; the ratio of the displacement of the radial head to the diameter of the capitellum) was calculated using lateral views. These RCR values were compared between the OA and lateral epicondylitis cases.
Results: RCR values were significantly higher in patients with elbow OA than in the control group (13.2% ± 10.6 vs -1.2% ± 6.8, P < 0.001). Based on receiver operating characteristic (ROC) curves, RCR values had an excellent area under the curve (0.89) for the detection of elbow OA (Youden index, 0.69; sensitivity, 89%; specificity, 80%). Based on the ROC curve, the cutoff value of the RCR was 0.04. Patients with RCR ≥ 0.04 had a significantly higher proportion of elbow OA (risk ratio, 31.50 [95% CI, 11.17-88.82]) than those with RCR < 0.04 (P < 0.001).
Conclusion: Radial head subluxation is a radiographic finding associated with elbow OA and may be an aetiological factor; an RCR ≥ 0.04 could be used to support the diagnosis of elbow OA.
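The 0.04 cutoff above follows from a standard ROC analysis in which the Youden index (sensitivity + specificity - 1) is maximized over candidate thresholds. A minimal sketch of that procedure with synthetic RCR values (not the study data):

```python
# Sketch: ROC analysis and Youden-index cutoff for RCR as a marker of elbow OA.
# Labels and RCR values are synthetic; 1 = elbow osteoarthritis, 0 = control.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rcr = np.array([0.15, 0.08, 0.02, 0.21, -0.01, 0.05, 0.12, -0.03, 0.06, 0.18])
has_oa = np.array([1, 1, 0, 1, 0, 1, 1, 0, 0, 1])

fpr, tpr, thresholds = roc_curve(has_oa, rcr)
youden_j = tpr - fpr              # Youden index at each candidate threshold
best = int(np.argmax(youden_j))

print(f"AUC = {roc_auc_score(has_oa, rcr):.2f}")
print(f"cutoff = {thresholds[best]:.2f}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```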
abstract_id: PUBMED:27727059
Radiocapitellar contact characteristics during prosthetic radial head subluxation. Background: Metallic radial head prostheses are often used in the management of comminuted radial head fractures and elbow instability. We hypothesized that during radiocapitellar subluxation, the contact pressure characteristics of an anatomic radial head prosthesis will more closely mimic those of the native radial head compared with a monopolar circular or a bipolar circular radial head design.
Materials And Methods: With use of 6 fresh frozen cadaver elbows, mean radiocapitellar contact pressures, contact areas, and peak pressures of the native radial head were assessed at 0, 2, 4, and 6 mm of posterior subluxation. These assessments were repeated after the native radial head was replaced with anatomic, monopolar circular and bipolar circular prostheses.
Results: The joint contact pressures increased with the native and the prosthetic radial head subluxation. The mean contact pressures for the native radial head and anatomic prosthesis increased progressively and significantly from 0 to 6 mm of subluxation (native, 0.6 ± 0.0 MPa to 1.9 ± 0.2 MPa; anatomic, 0.7 ± 0.0 MPa to 2.1 ± 0.3 MPa; P < .0001). The contact pressures with the monopolar and bipolar prostheses were significantly higher at baseline and did not change significantly further with subluxation (monopolar, 2.0 ± 0.1 MPa to 2.2 ± 0.2 MPa [P = .31]; bipolar, 1.7 ± 0.1 MPa to 1.9 ± 0.1 MPa [P = .12]). The pattern of increase in contact pressures with the anatomic prosthesis mimicked that of the native radial head. Conversely, the circular prostheses started out with higher contact pressures that stayed elevated.
Conclusion: The articular surface design of a radial head prosthesis is an important determinant of joint contact pressures.
abstract_id: PUBMED:31029518
Joint contact areas after radial head arthroplasty: a comparative study of 3 prostheses. Background: Contact stresses of radial head prostheses remain a concern, potentially leading to early capitellar cartilage wear and erosion. In particular, point contact or edge loading could have a detrimental effect. The purpose of this study was to compare 3 different types of radial head prostheses in terms of joint contact areas with each other and with the native situation. The hypothesis was that the joint contact areas would be lower after monopolar arthroplasty.
Methods: Seven fresh-frozen cadaveric upper limbs were used. Radiocapitellar contact areas of a monopolar design, a straight-neck bipolar design, and an angled-neck bipolar design were compared with each other and with the native joint. After standardized preparation, polysiloxane was injected into the loaded radiocapitellar joint to create a cast from which the joint contact area was measured. Measurements were performed at 3 angles of elbow flexion and in 3 different forearm positions.
Results: In the native elbow, contact areas were highest in supination. Elbow flexion had no significant effect on native and prosthetic joint contact areas. Contact areas were decreased for all types of arthroplasties compared with the native joint (from 11% to 53%). No significant contact area difference was found between the 3 designs. However, bipolar prostheses showed lateral subluxation in neutral forearm rotation, resulting in a significant decrease in the contact areas from pronation to the neutral position.
Conclusions: All types of radial head prostheses tested showed a significant decrease in radiocapitellar contact area compared with the native joint. Bipolar designs led to subluxation of the radial head, further decreasing radiocapitellar contact.
abstract_id: PUBMED:21276926
Radiocapitellar stability: the effect of soft tissue integrity on bipolar versus monopolar radial head prostheses. Introduction: Radiocapitellar stability depends, in part, on concavity-compression mechanics. This study was conducted to examine the effects of the soft tissues on radiocapitellar stability with radial head prostheses.
Hypothesis: Monopolar radial head implants are more effective in stabilizing the radiocapitellar joint than bipolar radial head prostheses, with the soft tissues intact or repaired.
Materials And Methods: Twelve fresh frozen elbow specimens were used to evaluate radiocapitellar stability with monopolar and bipolar radial heads. The study variables focused on varying soft tissue conditions and examined the mean peak subluxation forces put forth by each prosthesis design.
Results: With the soft tissues intact, the mean peak force resisting posterior subluxation depended significantly on the radial head used (P = .03). Peak force was greatest for the native radial head (32 ± 7 N) and least with the bipolar prosthesis (12 ± 3 N), with the monopolar prosthesis falling in between (21 ± 4 N). The presence of soft tissues significantly affected the bipolar implant's ability to resist subluxation, though it did not significantly impact the native or monopolar radial heads.
Discussion: This study reveals the dependence of radiocapitellar stability on soft tissue integrity, particularly for bipolar prostheses. Overall, monopolar prostheses have a better capacity to resist radiocapitellar subluxation.
Conclusion: From a biomechanical perspective, the enhancement of elbow stability with a monopolar radial head prosthesis is superior to that with a bipolar design. This is especially true when the integrity of the soft tissues has been compromised, such as in trauma.
abstract_id: PUBMED:25510156
The effect of capitellar impaction fractures on radiocapitellar stability. Purpose: To determine the effect of capitellar impaction fractures on radiocapitellar stability in a model that simulated a terrible triad injury.
Methods: Six cadaveric elbows were dissected free of skin and muscles. Tendons were preserved. The lateral collateral ligament was released and repaired (surgical control). Two sizes of capitellar impaction defects were created. After lateral collateral ligament release and repair, we then sequentially created osseous components of a terrible triad injury (partial radial head resection and coronoid fracture) through an olecranon osteotomy that was fixed with a plate. Radiocapitellar stability was recorded after the creation of each new condition.
Results: Significantly less force was required for radiocapitellar subluxation after the creation of 20° and 40° capitellar defects compared with the surgical control (intact capitellum). After the addition of a Mason type II radial head defect and then a coronoid defect, stability decreased significantly further.
Conclusions: Impaction fractures of the distal portion of the capitellum may contribute to a loss of radiocapitellar stability, particularly in an elbow fracture-dislocation.
Clinical Relevance: Because these injuries may be unrecognized, consideration should be given to diagnosing and addressing them.
abstract_id: PUBMED:22377508
The biomechanical effect of prosthetic design on radiocapitellar stability in a terrible triad model. Objectives: The integrity of elbow soft tissues affects radiocapitellar joint stability in the presence of bipolar radial head (RH) prostheses. This study examined the effect on radiocapitellar stability of monopolar designs versus bipolar RH prostheses in an elbow model with a surgically controlled terrible triad injury.
Methods: In each of 8 fresh-frozen elbow specimens (4 male and 4 female), a terrible triad fracture-dislocation was created through soft tissue releases, coronoid fracture, and RH resection. Radiocapitellar stability was recorded under the following 4 sets of conditions: (1) surgical control (native RH), (2) RH replacement (circular monopolar or bipolar), (3) replacement with the alternate circular RH not used in condition 2, and (4) replacement with the anatomic RH.
Results: The type of RH used significantly impacted the mean peak force required to resist posterior subluxation (p = 0.0001). The mean peak subluxation force of the bipolar prosthesis (1 ± 1 N) was significantly less than both the anatomic (16 ± 1 N) and nonanatomic circular (12 ± 1 N) implants (p = 0.0002). The peak subluxation force of the native RH (18 ± 2 N) was not different than the anatomic implant (p = 0.09) but was greater than the nonanatomic circular design (p = 0.0006).
Conclusions: Monopolar RHs confer greater radiocapitellar stability than bipolar implants in the setting of terrible triad injuries. Of the 2 monopolar designs tested, the anatomic design provided more stability than the non-anatomic RH prosthesis.
abstract_id: PUBMED:2065495
The shoulder, elbow, and forearm in Apert syndrome. Clinical and radiographic follow-up study of the elbow and shoulder in 19 children and adults with Apert syndrome demonstrates that these two joints are affected to varying degrees. Shoulder motion is never normal and may be quite restricted, with the progression of growth abnormalities of the proximal humerus and accompanying glenoid dysplasia. Limitation of abduction is secondary to an impingement of an overgrown greater tuberosity upon the acromion. Clinically, these children appear to have anterior subluxations. Elbow motion is limited in a much smaller percentage of patients and was not a significant problem unless elbow fusion occurred. A radiocapitellar dysplasia was associated with a mean 30-degree extension loss in 7 of 19 patients. Two patients had no elbow motion.
abstract_id: PUBMED:11818812
Simultaneous dislocations of the radiocapitellar and distal radioulnar joints. Posttraumatic instability of the distal radioulnar joint (DRUJ), occurring in isolation or in association with fractures of the radius, is well-documented. Few reports have identified simultaneous ipsilateral DRUJ instability and elbow dislocation. The authors of the present study describe a case of persistent radiocapitellar subluxation after closed reduction of an elbow dislocation, secondary to irreducible volar subluxation of the DRUJ.
Answer: Yes, the radiographic beam angle can affect the radiocapitellar ratio (RCR) measurement of subluxation in the elbow, although its influence is limited. In nonsubluxated elbows, the RCR remained in the normal range even as the beam angle of the C-arm varied between -20° and 20°, and beam position did not affect the RCR in anteriorly subluxated elbows. In contrast, RCR variation increased in the presence of posterior radial head subluxation when the C-arm position was 10° or more out of plane, and inferosuperior malposition of the C-arm had the greatest impact on quantification of radial head alignment. Despite this, the RCR measurement remains reliable in reduced and subluxated elbows on lateral radiographs with a C-arm position deviation of as much as 20° (PUBMED:23653100).
Instruction: Does outreach case management improve patients' quality of life?
Abstracts:
abstract_id: PUBMED:9525795
Does outreach case management improve patients' quality of life? Objective: This study examined whether enhancing standard aftercare with an outreach case management intervention would improve patients' quality of life.
Methods: A sample of 292 patients discharged from an inpatient psychiatry service at an urban general hospital were randomly assigned either to an intervention group (N = 147), which received outreach case management services in addition to standard aftercare service, or to a control group (N = 145), which received only standard aftercare services. The follow-up period was 15 to 52 months. Individuals in both groups were reinterviewed by an independent research team about 21.6 months after discharge. The groups were compared using 39 measures of quality of life. The interviews elicited information about patients' physical well-being and competence in performing activities of daily living; their emotional well-being as shown in emotional expressiveness, sadness, suicidal thoughts, and substance abuse; and their interpersonal relationships, living arrangements, friendships, income maintenance, and employment.
Results: No difference was found between the groups on any of the quality-of-life variables.
Conclusions: Outreach case management was not associated with improved quality of life.
abstract_id: PUBMED:33019431
Can case management improve cancer patients' quality of life? A systematic review following PRISMA. Background: Cancer patients undergo a series of long-lasting and stressful treatments, and case management (CM) has been widely used and developed with the aim of increasing the quality of treatment and improving patient care services. The purpose of this review is to identify and synthesize the evidence from randomized controlled trials on whether case management could be one way to improve the quality of life of cancer patients.
Methods: We performed a literature search in 4 electronic bibliographic databases and conducted snowball searches to ensure a complete collection. Two review authors independently extracted and analyzed data. A data extraction form was used to collect the characteristics of the case management interventions, reported outcomes, and quality assessment.
Results: Our searches identified 3080 articles, of which 7 randomized controlled trials met the inclusion criteria. The interventions varied in target population, measurement tools, duration, and other features; 5 studies consistently showed improvement in the intervention group compared with control groups, and no significant difference was found between the health care costs of case management services and routine care services.
Conclusion: There is some evidence that case management can be effective in improving cancer patients' quality of life. However, because of the heterogeneity in target populations, measurement tools, and reported results, no conclusion could be drawn from a meta-analysis at present. Future research should provide more rigorous, multi-centered randomized controlled studies with detailed information about the intervention.
abstract_id: PUBMED:28351354
Case management to increase quality of life after cancer treatment: a randomized controlled trial. Background: Case management has been shown to be beneficial in phases of cancer screening and treatment. After treatment is completed, patients experience a loss of support due to reduced contact with medical professionals. Case management has the potential to offer continuity of care and ease re-entry to normal life. We therefore aim to investigate the effect of case management on quality of life in early cancer survivors.
Methods: Between 06/2010 and 07/2012, we randomized 95 patients who had just completed cancer treatment in 11 cancer centres in the canton of Zurich, Switzerland. Patients in the case management group met with a case manager at least three times over 12 months. Patient-reported outcomes were assessed after 3, 6 and 12 months using the Functional Assessment of Cancer Therapy (FACT-G) scale, the Patient Assessment of Chronic Illness Care (PACIC) and the Self-Efficacy scale.
Results: The change in FACT-G over 12 months was significantly greater in the case management group than in the control group (16.2 (SE 2.0) vs. 9.2 (SE 1.5) points, P = 0.006). The PACIC score increased by 0.20 (SE 0.14) in the case management group and decreased by 0.29 (SE 0.12) points in the control group (P = 0.009). Self-Efficacy increased by 3.1 points (SE 0.9) in the case management group and by 0.7 (SE 0.8) points in the control group (P = 0.049).
Conclusions: Case management has the potential to improve quality of life, to ease re-entry to normal life and to address needs for continuity of care in early cancer survivors.
Trial Registration: The study has been submitted to the ISRCTN register under the name "Case Management in Oncology Rehabilitation" on the 12th of October 2010 and retrospectively registered under the number ISRCTN41474586 on the 24th of November 2010.
abstract_id: PUBMED:28295784
Health-related quality of life and satisfaction with case management in cancer survivors. Aims And Objectives: To (i) investigate the characteristics of health-related quality of life and satisfaction with case management and (ii) identify factors associated with health-related quality of life in cancer survivors.
Background: The level of health-related quality of life can reflect treatment efficacy and satisfaction with cancer care.
Design: A cross-sectional study design was adopted.
Methods: Subjects from the outpatient setting of a cancer centre in northern Taiwan were recruited by consecutive sampling. A set of questionnaires was employed, including a background information form, the case management service satisfaction survey (CMSS), and the European Quality of Life Scale (EQ-5D). Descriptive statistics were used to examine levels of health-related quality of life and satisfaction with case management. Pearson's correlation was used to identify relationships between treatment characteristics, satisfaction with case management, and health-related quality of life. Multiple stepwise regression was used to identify factors associated with health-related quality of life.
Results: A total of 252 cancer patients were recruited. The three lowest scores for items of health-related quality of life were mobility, self-care and usual activities. Cancer survivors with higher mobility, less pain and discomfort, and lower anxiety and depression were more likely to have better health-related quality of life.
Conclusion: Mobility, pain and discomfort, and anxiety and depression are important predictive factors of high health-related quality of life in cancer survivors.
Relevance To Clinical Practice: In clinical care, patients' physical mobility, pain and discomfort, and anxiety and depression are important indicators of health-related quality of life. Case managers should include self-care and symptom management into survivorship care plans to improve health-related quality of life during survival after treatment concludes.
abstract_id: PUBMED:36242021
The effectiveness of case management for cancer patients: an umbrella review. Background: Case management (CM) is widely utilized to improve health outcomes of cancer patients, enhance their experience of health care, and reduce the cost of care. While a number of systematic reviews are available on the effectiveness of CM for cancer patients, they often arrive at discordant conclusions that may confuse or mislead future case management development for cancer patients and relevant policy making. We aimed to summarize the existing systematic reviews on the effectiveness of CM in health-related outcomes and health care utilization outcomes for cancer patient care, and to highlight the consistent and contradictory findings.
Methods: An umbrella review was conducted following the Joanna Briggs Institute (JBI) Umbrella Review methodology. We searched MEDLINE (Ovid), EMBASE (Ovid), PsycINFO, CINAHL, and Scopus for reviews published up to July 8th, 2022. The quality of each review was appraised with the JBI Critical Appraisal Checklist for Systematic Reviews and Research Syntheses. A narrative synthesis was performed, and the corrected covered area was calculated as a measure of overlap for the primary studies in each review. The results were reported following the Preferred Reporting Items for Overviews of Systematic Reviews checklist.
Results: Eight systematic reviews were included. The average quality of the reviews was high. Overall, the primary studies had a slight overlap across the eight reviews (corrected covered area = 4.5%). No universal tools were used to measure the effect of CM on each outcome. The summarized results revealed that CM was more likely to improve symptom management, cognitive function, hospital (re)admission, compliance with treatment received, and provision of timely treatment for cancer patients. An overall equivocal effect was reported on cancer patients' quality of life, self-efficacy, survivor status, and satisfaction. Significant effects on cost and length of stay were rarely reported.
Conclusions: CM showed mixed effects in cancer patient care. Future research should use standard guidelines to clearly describe the details of the CM intervention and its implementation. More primary studies using high-quality, well-powered designs are needed to provide solid evidence on the effectiveness of CM. Case managers should consider applying validated and reliable tools to evaluate the effect of CM on the multifaceted outcomes of cancer patient care.
abstract_id: PUBMED:24285618
Cluster RCT of case management on patients' quality of life and caregiver strain in ALS. Objectives: To study the effect of case management on quality of life, caregiver strain, and perceived quality of care (QOC) in patients with amyotrophic lateral sclerosis (ALS) and their caregivers.
Methods: We conducted a multicenter cluster randomized controlled trial with the multidisciplinary ALS care team as the unit of randomization. During 12 months, patients with ALS and their caregivers received case management plus usual care or usual care alone. Outcome measures were the 40-item ALS Assessment Questionnaire (ALSAQ-40), Emotional Functioning domain (EF); the Caregiver Strain Index (CSI); and the QOC score. These measures were assessed at baseline and at 4, 8, and 12 months.
Results: Case management resulted in no changes in ALSAQ-40 EF, CSI, or QOC from baseline to 12 months. ALSAQ-40 EF scores in both groups were similar at baseline and did not change over time (p = 0.331). CSI scores in both groups increased significantly (p < 0.0001). Patients with ALS from both groups rated their perceived QOC at baseline with a median score of 8, which did not change significantly during follow-up.
Conclusion: Within the context of multidisciplinary ALS care teams, case management appears to confer no benefit for patients with ALS or their caregivers.
Classification Of Evidence: This study provides Class III evidence that case management in addition to multidisciplinary ALS care does not significantly improve health-related quality of life of patients with ALS.
abstract_id: PUBMED:26232570
The effect of self-management training on health-related quality of life in patients with epilepsy. Purpose: Epilepsy is the most common chronic neurological disease after headache. Health-related quality of life in patients with epilepsy is disturbed by psychosocial factors, seizures, and treatment side effects. This study was conducted to determine the effect of a self-management training program on quality of life in patients with epilepsy.
Methods: In this controlled clinical trial, 60 patients with epilepsy attending the Zanjan Neurology Clinic were examined. The sample was selected using convenience sampling and divided randomly into a case group (30 people) and a control group (30 people) using a table of random numbers. Four training sessions on the nature of epilepsy and self-management were run for the case group. All the patients completed a quality-of-life inventory twice: before and one month after the intervention. The data were analyzed using the chi-square test, independent t-test, and paired t-test.
Results: Before the intervention, there was no statistically significant difference between the two groups in terms of personal characteristics or quality-of-life scores and dimensions. One month after the intervention, a statistically significant difference in quality-of-life scores and dimensions was observed between the two groups, indicating improved quality of life in the case group (P<0.001).
Conclusion: The self-management training program improved the quality of life in patients with epilepsy. The present findings highlight that psychosocial variables can have incremental significance over biomedical variables in the health-related quality of life of patients with epilepsy.
abstract_id: PUBMED:1427679
Case management, quality of life, and satisfaction with services of long-term psychiatric patients. Two scales developed in Great Britain, the QOL Profile and the General Satisfaction Questionnaire, were used to examine the relationship between type of case management services and quality of life and satisfaction with treatment of 68 long-term psychiatric patients in Colorado. Factor analysis identified three types of case management activities that tended to occur together: assertive outreach (direct help, out-of-office visits, and monitoring), brokerage (referral to other agencies), and counseling and assessment. Monitoring was the only variable positively associated with quality of life for all patients; brokerage was the only variable negatively associated with acceptability of services. The number of case management contacts was negatively associated with treatment satisfaction.
abstract_id: PUBMED:21526985
Case management in oncology rehabilitation (CAMON): the effect of case management on the quality of life in patients with cancer after one year of ambulant rehabilitation. A study protocol for a randomized controlled clinical trial in oncology rehabilitation. Background: Cancer and its therapies have negative effects on quality of life. The aim of this study is to assess the effectiveness of case management in a sample of oncological outpatients undergoing rehabilitation after cancer treatment. Case management aims to support the complex information needs of patients within the fragmented structure of the health care system. Emphasis is put on support for self-management in order to enhance health-conscious behaviour, learning to deal with the burden of the illness, and providing the opportunity for regular contact with care providers. We present a study protocol to investigate the efficacy of case management in patients undergoing oncology rehabilitation after cancer treatment.
Methods: The trial is a multicentre, two-arm randomised controlled study. Patients are randomised in parallel to either 'usual care' plus case management or 'usual care' alone. Patients with all types of cancer can be included in the study if they have completed therapy with chemotherapy and/or radiotherapy/surgery with curative intention and are expected to have a survival time >1 year. To determine health-related quality of life, the general questionnaire FACT-G is used. The direct correlation between self-management and perceived self-efficacy is measured with the Jerusalem & Schwarzer questionnaire. Patients' satisfaction with the care received is measured using the Patient Assessment of Chronic Illness Care 5 As (PACIC-5A). Data are collected at the beginning of the trial and after 3, 6 and 12 months. The power analysis revealed a sample size of 102 patients. The recruitment of the centres began in 2009. The inclusion of patients began in May 2010.
Discussion: Case management has proved to be effective regarding the quality of life of patients with chronic diseases. When it comes to oncology, case management is mainly used during cancer treatment, but it is not yet common in the rehabilitation of cancer patients. Case management in oncology rehabilitation is not well established in Switzerland. A major challenge of the study will therefore probably be patient recruitment, owing to the limited awareness of this issue among physicians and patients. Trial registration: ISRCTN41474586
abstract_id: PUBMED:36206119
Professional Case Management in Outpatient Settings for Patients With Myeloproliferative Neoplasms. Purpose And Objectives: The purpose of this discussion is to review the management and symptomatic burden of patients with myeloproliferative neoplasms (MPNs). Patients with MPNs are at increased risk for thromboembolic events; thus, cardiovascular complications are not uncommon. Professional case managers can coordinate outpatient services with the health care team and assist patients to mitigate cardiovascular complications.
Primary Practice Setting: This discussion is relevant to professional case managers who manage patients with chronic conditions in outpatient settings.
Findings/conclusions: Professional case managers can intervene for patients with MPNs to improve (a) coordination of scheduled therapeutic phlebotomy procedures, and (b) education about subcutaneous injections and frequent monitoring of laboratory reports. In the outpatient setting, professional case managers can improve health care utilization for patients with MPNs and promote high-quality self-care and symptom management to prevent adverse complications.
Implications For Case Management Practice: As a member of the outpatient health care team, the professional case manager can serve as a key clinician with comprehensive knowledge of the patient's experience of living with an MPN, their usual state of health and symptoms, and the factors affecting their quality of life. This article explores the role of the professional case manager in the outpatient setting and suggests specific interventions to improve delivery of care and contribute to better self-care management. Professional case managers can assess and validate symptom burden, coordinate and manage routine laboratory testing, support careful medication adjustment, and contribute to improved health outcomes.
Answer: The evidence on whether outreach case management improves patients' quality of life is mixed and seems to depend on the patient population and the context of the case management intervention.
For patients discharged from an inpatient psychiatry service, outreach case management did not show an improvement in quality of life when compared to standard aftercare services. A study with a sample of 292 such patients found no difference between the intervention and control groups on any quality-of-life variables (PUBMED:9525795).
In contrast, for cancer patients, some evidence suggests that case management can be effective in improving quality of life. A systematic review following PRISMA guidelines, which included 7 randomized controlled trials, found that 5 studies consistently showed improvement in the intervention group compared to control groups (PUBMED:33019431). Another randomized controlled trial specifically looking at early cancer survivors found that case management had the potential to improve quality of life, ease re-entry to normal life, and address needs for continuity of care (PUBMED:28351354). However, an umbrella review highlighted that the effects of case management on cancer patients' quality of life were overall equivocal, with some studies reporting improvements while others did not (PUBMED:36242021).
For patients with amyotrophic lateral sclerosis (ALS), case management added to multidisciplinary care did not significantly improve health-related quality of life (PUBMED:24285618).
In the context of epilepsy, a controlled clinical trial found that a self-management training program, which can be considered a form of case management, improved the quality of life in patients with epilepsy (PUBMED:26232570).
Overall, the effectiveness of outreach case management in improving patients' quality of life appears to vary based on the disease context, the structure of the case management intervention, and the specific needs of the patient population. While some studies show benefits, others do not, indicating that more research is needed to identify the conditions under which case management is most effective. |
Instruction: Does the decision in a validation process of a surrogate endpoint change with level of significance of treatment effect?
Abstracts:
abstract_id: PUBMED:18809512
Does the decision in a validation process of a surrogate endpoint change with level of significance of treatment effect? A proposal on validation of surrogate endpoints. Background: In recent years the use of surrogate endpoints (S) has become a topic of considerable interest. In clinical trials, it is important to obtain treatment outcomes as early as possible; for this reason there is a need for surrogate endpoints (S), which are measured earlier than the true endpoint (T). However, before a surrogate endpoint can be used it must be validated. For a candidate surrogate endpoint, for example time to recurrence, the validation result may change dramatically between clinical trials. The aim of this study is to show how the validation criterion R²(trial) proposed by Buyse et al. is influenced by the magnitude of the treatment effect, with an application using real data.
Methods: The criterion R²(trial) proposed by Buyse et al. (2000) is applied to four data sets from colon cancer clinical trials (C-01, C-02, C-03 and C-04). Each clinical trial is analyzed separately for the treatment effect on survival (true endpoint) and recurrence-free survival (surrogate endpoint), and this analysis is also done for each center in each trial. The results are used for a standard validation analysis. The centers were grouped by the Wald statistic into 3 equal groups.
Results: The validation criterion R²(trial) was 0.641 (95% CI, 0.432-0.782), 0.223 (95% CI, 0.008-0.503), 0.761 (95% CI, 0.550-0.872) and 0.560 (95% CI, 0.404-0.687) for C-01, C-02, C-03 and C-04, respectively. The R²(trial) criterion changed with the Wald statistics observed for the centers used in the validation process: the higher the Wald statistic group, the higher the observed R²(trial) value.
Conclusion: Recurrence-free survival is not a good surrogate for overall survival in clinical trials with non-significant treatment effects and is only a moderate surrogate when treatment effects are significant. This shows that the level of significance of the treatment effect should be taken into account in the validation process of surrogate endpoints.
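At the trial (or center) level, the R²(trial) criterion referred to above is the coefficient of determination of a regression of the estimated treatment effects on the true endpoint against the estimated treatment effects on the surrogate across units. A minimal sketch of that step with illustrative log hazard ratios is shown below; this is the simplified, aggregate-data version rather than the full two-stage model of Buyse et al., and all numbers are placeholders.

```python
# Sketch: trial-level surrogacy (R^2_trial) as a weighted regression of the
# treatment effect on the true endpoint (log HR, overall survival) on the
# treatment effect on the surrogate (log HR, recurrence-free survival).
# Per-center effects and weights are illustrative placeholders.
import numpy as np
import statsmodels.api as sm

effect_rfs = np.array([-0.42, -0.10, -0.55, -0.25, 0.05, -0.35])  # surrogate (RFS)
effect_os  = np.array([-0.38, -0.02, -0.50, -0.20, 0.10, -0.28])  # true endpoint (OS)
weights    = np.array([120, 80, 150, 95, 60, 110])                # e.g. center sample sizes

X = sm.add_constant(effect_rfs)
fit = sm.WLS(effect_os, X, weights=weights).fit()

print(f"R^2_trial (simplified, weighted): {fit.rsquared:.3f}")
print("intercept, slope:", fit.params)  # used to predict the OS effect from the RFS effect
```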
abstract_id: PUBMED:26522510
Differences in surrogate threshold effect estimates between original and simplified correlation-based validation approaches. Surrogate endpoint validation has been well established by the meta-analytical correlation-based approach as outlined in the seminal work of Buyse et al. (Biostatistics, 2000). Surrogacy can be assumed if strong associations on individual and study levels can be demonstrated. Alternatively, if an effect on a true endpoint is to be predicted from a surrogate endpoint in a new study, the surrogate threshold effect (STE, Burzykowski and Buyse, Pharmaceutical Statistics, 2006) can be used. In practice, as individual patient data (IPD) are hard to obtain, some authors use only aggregate data and perform simplified regression analyses. We are interested in the extent to which such simplified analyses are biased compared with those from a full model with IPD. To this end, we conduct a simulation study with IPD and compute STEs from full and simplified analyses for varying data situations in terms of the number of studies, correlations, variances, and so on. In the scenarios considered, we show that, for normally distributed patient data, STEs derived from ordinary (weighted) linear regression generally underestimate STEs derived from the original model, whereas meta-regression often results in overestimation. Therefore, if individual data cannot be obtained, STEs from meta-regression may be used as conservative alternatives, but ordinary (weighted) linear regression should not be used for surrogate endpoint validation.
abstract_id: PUBMED:23565041
The risky reliance on small surrogate endpoint studies when planning a large prevention trial. The definitive evaluation of treatment to prevent a chronic disease with low incidence in middle age, such as cancer or cardiovascular disease, requires a trial with a large sample size of perhaps 20,000 or more. To help decide whether to implement a large true endpoint trial, investigators first typically estimate the effect of treatment on a surrogate endpoint in a trial with a greatly reduced sample size of perhaps 200 subjects. If investigators reject the null hypothesis of no treatment effect in the surrogate endpoint trial, they implicitly assume they would likely correctly reject the null hypothesis of no treatment effect for the true endpoint. Surrogate endpoint trials are generally designed with adequate power to detect an effect of treatment on the surrogate endpoint. However, we show that a small surrogate endpoint trial is more likely than a large surrogate endpoint trial to give a misleading conclusion about the beneficial effect of treatment on the true endpoint, which can lead to a faulty (and costly) decision about implementing a large true endpoint prevention trial. If a small surrogate endpoint trial rejects the null hypothesis of no treatment effect, an intermediate-sized surrogate endpoint trial could be a useful next step in the decision-making process for launching a large true endpoint prevention trial.
abstract_id: PUBMED:36998240
Using Bayesian Evidence Synthesis Methods to Incorporate Real-World Evidence in Surrogate Endpoint Evaluation. Objective: Traditionally, validation of surrogate endpoints has been carried out using randomized controlled trial (RCT) data. However, RCT data may be too limited to validate surrogate endpoints. In this article, we sought to improve the validation of surrogate endpoints with the inclusion of real-world evidence (RWE).
Methods: We use data from comparative RWE (cRWE) and single-arm RWE (sRWE) to supplement RCT evidence for the evaluation of progression-free survival (PFS) as a surrogate endpoint to overall survival (OS) in metastatic colorectal cancer (mCRC). Treatment effect estimates from RCTs, cRWE, and matched sRWE, comparing antiangiogenic treatments with chemotherapy, were used to inform surrogacy patterns and predictions of the treatment effect on OS from the treatment effect on PFS.
Results: Seven RCTs, 4 cRWE studies, and 2 matched sRWE studies were identified. The addition of RWE to RCTs reduced the uncertainty around the estimates of the parameters for the surrogate relationship. The addition of RWE to RCTs also improved the accuracy and precision of predictions of the treatment effect on OS obtained using data on the observed effect on PFS.
Conclusion: The addition of RWE to RCT data improved the precision of the parameters describing the surrogate relationship between treatment effects on PFS and OS and the predicted clinical benefit of antiangiogenic therapies in mCRC.
Highlights: Regulatory agencies increasingly rely on surrogate endpoints when making licensing decisions, and for the decisions to be robust, surrogate endpoints need to be validated. In the era of precision medicine, when surrogacy patterns may depend on the drug's mechanism of action and trials of targeted therapies may be small, data from randomized controlled trials may be limited. Real-world evidence (RWE) is increasingly used at different stages of the drug development process. When used to enhance the evidence base for surrogate endpoint evaluation, RWE can improve inferences about the strength of surrogate relationships and the precision of predicted treatment effect on the final clinical outcome based on the observed effect on the surrogate endpoint in a new trial. Careful selection of RWE is needed to reduce risk of bias.
abstract_id: PUBMED:25682941
Statistical evaluation of surrogate endpoints with examples from cancer clinical trials. A surrogate endpoint is intended to replace a clinical endpoint for the evaluation of new treatments when it can be measured more cheaply, more conveniently, more frequently, or earlier than that clinical endpoint. A surrogate endpoint is expected to predict clinical benefit, harm, or lack of these. Besides the biological plausibility of a surrogate, a quantitative assessment of the strength of evidence for surrogacy requires the demonstration of the prognostic value of the surrogate for the clinical outcome, and evidence that treatment effects on the surrogate reliably predict treatment effects on the clinical outcome. We focus on these two conditions, and outline the statistical approaches that have been proposed to assess the extent to which these conditions are fulfilled. When data are available from a single trial, one can assess the "individual level association" between the surrogate and the true endpoint. When data are available from several trials, one can additionally assess the "trial level association" between the treatment effect on the surrogate and the treatment effect on the true endpoint. In the latter case, the "surrogate threshold effect" can be estimated as the minimum effect on the surrogate endpoint that predicts a statistically significant effect on the clinical endpoint. All these concepts are discussed in the context of randomized clinical trials in oncology, and illustrated with two meta-analyses in gastric cancer.
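The surrogate threshold effect mentioned above can be read off the trial-level regression: it is the smallest treatment effect on the surrogate for which the prediction interval of the predicted true-endpoint effect excludes zero. A minimal sketch of that calculation on top of a simple trial-level regression is given below; the data are illustrative, and the original derivation uses the meta-analytic mixed model rather than plain OLS.

```python
# Sketch: surrogate threshold effect (STE) from a trial-level regression.
# STE = smallest surrogate effect whose predicted true-endpoint effect has a
# 95% prediction interval excluding zero. All effects are illustrative log HRs.
import numpy as np
import statsmodels.api as sm

effect_surrogate = np.array([-0.60, -0.45, -0.30, -0.20, -0.05, 0.10])
effect_true      = np.array([-0.50, -0.40, -0.22, -0.15, 0.02, 0.12])

fit = sm.OLS(effect_true, sm.add_constant(effect_surrogate)).fit()

grid = np.linspace(-1.0, 0.0, 201)                     # candidate surrogate effects
pred = fit.get_prediction(sm.add_constant(grid))
lower, upper = pred.conf_int(obs=True, alpha=0.05).T   # prediction (not confidence) interval

significant = upper < 0                                # predicted true effect significantly < 0
ste = grid[significant].max() if significant.any() else None
print(f"surrogate threshold effect (approx.): {ste:.3f}" if ste is not None else "no STE in range")
```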
abstract_id: PUBMED:25616958
Inference for Surrogate Endpoint Validation in the Binary Case. Surrogate endpoint validation for a binary surrogate endpoint and a binary true endpoint is investigated using the criteria of proportion explained (PE) and the relative effect (RE). The concepts of generalized confidence intervals and fiducial intervals are used for computing confidence intervals for PE and RE. The numerical results indicate that the proposed confidence intervals are satisfactory in terms of coverage probability, whereas the intervals based on Fieller's theorem and the delta method fall short in this regard. Our methodology can also be applied to interval estimation problems in a causal inference-based approach to surrogate endpoint validation.
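For context, the two quantities whose intervals are studied above are usually defined from paired regression models: the relative effect (RE) is the ratio of the treatment effect on the true endpoint to the effect on the surrogate, and the proportion explained (PE) is one minus the ratio of the surrogate-adjusted to the unadjusted treatment effect. A minimal sketch of the point estimates for binary endpoints using logistic regressions is shown below; the data are synthetic, and the paper's generalized confidence and fiducial intervals are not reproduced.

```python
# Sketch: point estimates of proportion explained (PE) and relative effect (RE)
# for a binary surrogate S and binary true endpoint T. Data are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
z = rng.integers(0, 2, n)                              # treatment indicator
s = rng.binomial(1, 0.30 + 0.25 * z)                   # surrogate responds to treatment
t = rng.binomial(1, 0.20 + 0.15 * z + 0.30 * s)        # true endpoint depends on both

def logit_slope(y, X):
    """Log-odds coefficients (excluding the intercept) from a logistic regression."""
    return sm.Logit(y, sm.add_constant(X)).fit(disp=0).params[1:]

beta_t     = logit_slope(t, z)[0]                         # effect of Z on T
beta_s     = logit_slope(s, z)[0]                         # effect of Z on S
beta_t_adj = logit_slope(t, np.column_stack([z, s]))[0]   # effect of Z on T adjusted for S

re = beta_t / beta_s                                   # relative effect
pe = 1 - beta_t_adj / beta_t                           # proportion explained (Freedman)
print(f"RE = {re:.2f}, PE = {pe:.2f}")
```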
abstract_id: PUBMED:29164641
Five criteria for using a surrogate endpoint to predict treatment effect based on data from multiple previous trials. A surrogate endpoint in a randomized clinical trial is an endpoint that occurs after randomization and before the true, clinically meaningful, endpoint that yields conclusions about the effect of treatment on true endpoint. A surrogate endpoint can accelerate the evaluation of new treatments but at the risk of misleading conclusions. Therefore, criteria are needed for deciding whether to use a surrogate endpoint in a new trial. For the meta-analytic setting of multiple previous trials, each with the same pair of surrogate and true endpoints, this article formulates 5 criteria for using a surrogate endpoint in a new trial to predict the effect of treatment on the true endpoint in the new trial. The first 2 criteria, which are easily computed from a zero-intercept linear random effects model, involve statistical considerations: an acceptable sample size multiplier and an acceptable prediction separation score. The remaining 3 criteria involve clinical and biological considerations: similarity of biological mechanisms of treatments between the new trial and previous trials, similarity of secondary treatments following the surrogate endpoint between the new trial and previous trials, and a negligible risk of harmful side effects arising after the observation of the surrogate endpoint in the new trial. These 5 criteria constitute an appropriately high bar for using a surrogate endpoint to make a definitive treatment recommendation.
abstract_id: PUBMED:35608044
Development of a framework and decision tool for the evaluation of health technologies based on surrogate endpoint evidence. In the drive toward faster patient access to treatments, health technology assessment (HTA) agencies and payers are increasingly faced with reliance on evidence based on surrogate endpoints, increasing decision uncertainty. Despite the development of a small number of evaluation frameworks, there remains no consensus on the detailed methodology for handling surrogate endpoints in HTA practice. This research overviews the methods and findings of four empirical studies undertaken as part of the COMED (Pushing the Boundaries of Cost and Outcome Analysis of Medical Technologies) program work package 2, with the aim of analyzing international HTA practice in the handling of, and considerations around, the use of surrogate endpoint evidence. We have synthesized the findings of these empirical studies, in the context of the wider contemporary body of methodological and policy-related literature on surrogate endpoints, to develop a web-based decision tool to support HTA agencies and payers when faced with surrogate endpoint evidence. Our decision tool is intended for use by HTA agencies and their decision-making committees together with the wider community of HTA stakeholders (including clinicians, patient groups, and healthcare manufacturers). Having developed this tool, we will monitor its use and we welcome feedback on its utility.
abstract_id: PUBMED:15972889
A simple meta-analytic approach for using a binary surrogate endpoint to predict the effect of intervention on true endpoint. A surrogate endpoint is an endpoint that is obtained sooner, at lower cost, or less invasively than the true endpoint for a health outcome and is used to make conclusions about the effect of intervention on the true endpoint. In this approach, each previous trial with surrogate and true endpoints contributes an estimated predicted effect of intervention on true endpoint in the trial of interest based on the surrogate endpoint in the trial of interest. These predicted quantities are combined in a simple random-effects meta-analysis to estimate the predicted effect of intervention on true endpoint in the trial of interest. Validation involves comparing the average prediction error of the aforementioned approach with (i) the average prediction error of a standard meta-analysis using only true endpoints in the other trials and (ii) the average clinically meaningful difference in true endpoints implicit in the trials. Validation is illustrated using data from multiple randomized trials of patients with advanced colorectal cancer in which the surrogate endpoint was tumor response and the true endpoint was median survival time.
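The combination step described in the abstract above lends itself to a short worked sketch. The Python code below is not taken from the cited paper; it is a minimal, hypothetical illustration of pooling per-trial predicted effects with a simple DerSimonian-Laird random-effects meta-analysis, and all input numbers are invented for the example.

```python
import math

# Hypothetical per-trial predicted effects of intervention on the true endpoint
# (e.g., predicted log hazard ratios derived from the surrogate endpoint),
# together with their estimated variances.
predicted_effects = [-0.20, -0.35, -0.10, -0.28]
variances = [0.010, 0.020, 0.015, 0.012]

# Fixed-effect (inverse-variance) pooling, used here only to obtain Cochran's Q.
w_fixed = [1.0 / v for v in variances]
pooled_fixed = sum(w * y for w, y in zip(w_fixed, predicted_effects)) / sum(w_fixed)
q = sum(w * (y - pooled_fixed) ** 2 for w, y in zip(w_fixed, predicted_effects))

# DerSimonian-Laird estimate of the between-trial variance tau^2.
k = len(predicted_effects)
c = sum(w_fixed) - sum(w ** 2 for w in w_fixed) / sum(w_fixed)
tau2 = max(0.0, (q - (k - 1)) / c)

# Random-effects pooling: each weight incorporates tau^2.
w_random = [1.0 / (v + tau2) for v in variances]
pooled = sum(w * y for w, y in zip(w_random, predicted_effects)) / sum(w_random)
se = math.sqrt(1.0 / sum(w_random))
print(f"Pooled predicted effect: {pooled:.3f} "
      f"(95% CI {pooled - 1.96 * se:.3f} to {pooled + 1.96 * se:.3f})")
```

In practice the per-trial predictions and their variances would come from the surrogate-true endpoint relationship estimated in each previous trial, which this sketch does not attempt to reproduce.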
abstract_id: PUBMED:29163828
Surrogate endpoint for overall survival in assessment of adjuvant therapies after curative treatment for hepatocellular carcinoma: a re-analysis of meta-analyses of individual patients' data. The gold standard endpoint to evaluate the effect of treatment for hepatocellular carcinoma (HCC) is overall survival (OS), but it requires a longer follow-up period to observe. This study aimed to identify whether disease-free survival (DFS) could be used as a surrogate endpoint for OS to assess the efficacy of adjuvant therapies after curative treatment (surgical resection and ablation) for HCC patients. A systematic review was conducted to identify trials of curative treatment with or without adjuvant therapies (interferon, IFN; or transarterial chemoembolization, TACE) for HCC. A total of 2211 patients' data from 17 trials were analyzed. At the individual study level, DFS was strongly correlated with OS (ρ = 0.988 and 0.930, 95% CI: 0.965-0.996 and 0.806-0.976 for the studies comparing radiofrequency ablation (RFA) + TACE to RFA alone, and for the studies comparing curative treatment + IFN to curative treatment alone, respectively). At the trial level, the effects of treatment on DFS and OS were also strongly correlated with each other (R = 0.815 and 0.854, 95% CI: 0.536-0.934 and 0.621-0.948, respectively). In conclusion, DFS could be used as a potential surrogate endpoint for OS to assess the effect of adjuvant therapies after curative treatment for HCC.
Answer: Yes, the decision in a validation process of a surrogate endpoint can change with the level of significance of the treatment effect. The study by Mrozikiewicz-Rakowska et al. (PUBMED:18809512) found that the validation criterion R²(trial), which is used to assess the validity of a surrogate endpoint, was influenced by the magnitude of the treatment effect. They observed that higher Wald statistic groups, which indicate a stronger treatment effect, were associated with higher R²(trial) values. This suggests that recurrence-free survival is not a good surrogate for overall survival in clinical trials with non-significant treatment effects and is only a moderate surrogate when treatment effects are significant. Therefore, the level of significance of the treatment effect should be considered in the validation process of surrogate endpoints.
Instruction: Do multiple ureteroscopies alter long-term renal function?
Abstracts:
abstract_id: PUBMED:24915495
Do multiple ureteroscopies alter long-term renal function? A study using estimated glomerular filtration rate. Background And Purpose: Although considered standard of care for symptomatic urolithiasis, few data are available that evaluate the effects of multiple ureteroscopies (URS) with laser lithotripsies on long-term renal function. We investigated this relationship in a population with preexisting mild to moderate kidney disease. Previous studies have been limited by estimates of glomerular filtration rate (eGFR) calculated from creatinine level during acute stone obstruction, and inclusion of patients with a history of other stone procedures, such as shockwave lithotripsy (SWL) or percutaneous nephrolithotomy (PCNL).
Methods: Charts were reviewed for patients with a baseline eGFR below 90 mL/min/1.73 m² who underwent at least two URS for nephrolithiasis at our institution from 2004 to 2012. Patients undergoing SWL or PCNL at any point in their history were excluded. A total of 26 patients, with a mean of 2.3±0.6 URS procedures, were included. The eGFR was recorded at baseline before acute stone presentation and surgery, and at the last recorded follow-up visit. Stone location, total stone burden, and comorbidities were also recorded.
Results: The mean eGFR changed from 68.0±13.3 to 75.4±23.0 mL/min/1.73 m² (mean increase of 10.1±25.0%; mean annual increase of 3.8±15.3%) over a mean follow-up period of 28.1 months (range 5-75 mos). There was no significant difference in eGFR change between patients with stones treated in the kidney alone vs the ureter and kidney combined (12.1% vs 8.3% mean increase; P=0.74). Age, presence of diabetes mellitus or hypertension, baseline creatinine level, total stone burden, and number of URS performed were not significantly associated with change in eGFR.
Conclusions: Using eGFR measured before acute stone presentation, our results suggest that multiple ureteroscopies for stones are not detrimental to long-term renal function, even in patients with preexisting stage 2-3 chronic kidney disease.
abstract_id: PUBMED:31058118
Renal Function in Children on Long Term Home Parenteral Nutrition. Objectives: To assess renal function in pediatric intestinal failure (IF) patients on long term home parenteral nutrition (HPN). Methods: Children who received HPN for a minimum of 3 years between 2007 and 2017 were identified from the IF clinic of a large tertiary referral center. Estimated glomerular filtration rate (eGFR) was calculated using the Schwartz formula at discharge on HPN, after 6 months, 1, 2, and 3 years. Results: Twenty five patients (40% male) fulfilled the inclusion criteria. The indications for HPN were due to an underlying motility disorder in 56% (14/25), enteropathy in 24% (6/25), and short bowel syndrome in 20% (5/25). At the start of HPN 80% (20/25) had a normal eGFR. Four children (17%) had an abnormal eGFR. In the group of patients with normal eGFR at the start of HPN 30% (6/20) had at least one episode of decreased eGFR in the following 3 years, however there was no significant decline in eGFR at the end of the 3 year study period. Overall there was no statistically significant deterioration of eGFR in the study population (p = 0.7898). Conclusion: In our cohort of children on long term HPN no significant decline of eGFR could be demonstrated within 3 years of starting PN.
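The Schwartz formula mentioned in the abstract above estimates GFR in children from height and serum creatinine. The study does not state which version of the formula was applied, so the sketch below assumes the updated "bedside" Schwartz constant (k = 0.413, with creatinine in mg/dL) purely for illustration; the function name and example values are hypothetical.

```python
def bedside_schwartz_egfr(height_cm: float, serum_creatinine_mg_dl: float) -> float:
    """Estimated GFR (mL/min/1.73 m^2) in children via the bedside Schwartz formula.

    eGFR = k * height (cm) / serum creatinine (mg/dL); k = 0.413 is assumed here,
    since the cited study does not report which constant it used.
    """
    K = 0.413
    return K * height_cm / serum_creatinine_mg_dl


# Hypothetical example: a child 120 cm tall with a serum creatinine of 0.5 mg/dL.
print(f"eGFR = {bedside_schwartz_egfr(120, 0.5):.1f} mL/min/1.73 m^2")
```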
abstract_id: PUBMED:24175190
Long-term renal function, complications and life expectancy in living kidney donors. Living kidney transplantation is now a widely accepted treatment for end stage renal disease (ESRD) because it provides excellent outcomes for recipients. However, long-term outcomes of living kidney donors have not been well understood. Because securing the safety of the donor is essential to the continued success of this procedure, we reviewed articles discussing long-term outcomes of living kidney donors. Most studies found no decrease in long-term survival or progressive renal dysfunction in previous kidney donors. Moreover, the prevalence of hypertension was comparable to that expected in the general population, although some did report otherwise. Urinary protein showed small increases in this population and was associated with hypertension and a lower glomerular filtration rate. Quality of life following living kidney donation seems to be better than the national norm. We also encountered several reports of ESRD in previous living kidney donors. Regular follow-up of kidney donors is recommended and future controlled, prospective studies will better delineate risk factors which cause health problems following living kidney donation.
abstract_id: PUBMED:34913764
Influence of Robot-Assisted Partial Nephrectomy on Long-Term Renal Function as Assessed Using 99m-Tc DTPA Renal Scintigraphy. Background: The long-term split renal function after robot-assisted partial nephrectomy (RAPN) is yet to be elucidated. This study aimed to assess long-term renal function of RAPN, using renal scintigraphy, and to identify clinical factors related to deterioration of renal function on the affected side of the kidney. Patients and Methods: RAPN for small tumors was performed, and split renal function was evaluated using 99m-Tc DTPA renal scintigraphy before and 1 year after surgery. Clinical factors (age, gender, body mass index, tumor side, presence of urinary protein, diabetes, hypertension, and dyslipidemia), perioperative factors (renal nephrectomy score [RNS], tumor diameter, overall surgery duration, console time, warm ischemic time, and amount of bleeding), and renal function (estimated glomerular filtration rate [eGFR] and glomerular filtration rate [GFR] measured using scintigraphy on both the affected and contralateral kidneys) were analyzed. Results: Sixty-six patients were included in the study. The median eGFR decreased from 71.9 to 63.9 mL/min after 1 year (p < 0.001), accounting for a mean loss of 10.1%. In scintigraphy examination, the median GFR on the affected kidney side decreased from 41.1 to 33.7 mL/min after 1 year (p < 0.001), accounting for a mean loss of 16.8%. RNS was significantly associated with renal function. Among RNS factors, the N factor is associated with renal function after RAPN. Conclusion: RNS, particularly the N factor, possibly influences the long-term deterioration of renal function after RAPN.
abstract_id: PUBMED:30064370
Off-clamp partial nephrectomy has a positive impact on short- and long-term renal function: a systematic review and meta-analysis. Background: Ongoing efforts are focused on shortening ischemia intervals as much as possible during partial nephrectomy to preserve renal function. Off-clamp partial nephrectomy (off-PN) has been a common strategy to avoid ischemia in small renal tumors. Although studies comparing the advantages of off-PN with those of conventional on-clamp partial nephrectomy (on-PN) have been reported, the impact of the two surgical methods on short- and especially long-term renal function has not been seriously examined and remains unclear. Our purpose is to evaluate the impact on short-term (within 3 months postoperatively) and long-term (6 months postoperatively or longer) renal function of off-PN compared with that of on-PN.
Methods: We comprehensively searched databases, including PubMed, EMBASE, and the Cochrane Library, without restrictions on language or region. A systematic review and cumulative meta-analysis of the included studies were performed to assess the impact of the two techniques on short- and long-term renal function.
Results: A total of 23 retrospective studies and 2 prospective cohort studies were included. The pooled postoperative short-term decrease of estimated glomerular filtration rate (eGFR) was significantly less in the off-PN group (weighted mean difference [WMD]: 4.81 ml/min/1.73 m²; 95% confidence interval [CI]: 3.53 to 6.08; p < 0.00001). The short-term increase in creatinine (Cr) level in the on-PN group was also significant (WMD: -0.05 mg/dl; 95% CI: -0.09 to -0.00; p = 0.04). Significant differences between groups were observed for the long-term change and percent (%) change of eGFR (p = 0.04 and p < 0.00001, respectively) but not for long-term Cr change (p = 0.40). The postoperative short-term eGFR and Cr levels, but not the postoperative long-term eGFR, differed significantly between the two groups. The pooled odds ratios for acute renal failure and postoperative progress to chronic kidney disease (stage ≥3) in the off-PN group were found to be 0.25 (p = 0.003) and 0.73 (p = 0.34), respectively, compared with the on-PN group.
Conclusions: Off-PN exerts a positive impact on short- and long-term renal function compared with conventional on-PN. Given the inherent limitations of our included studies, large-volume and well-designed RCTs with extensive follow-up are needed to confirm and update the conclusion of this analysis.
abstract_id: PUBMED:37404730
Safety and efficacy of renal sympathetic denervation: a 9-year long-term follow-up of 24-hour ambulatory blood pressure measurements. Background: Renal sympathetic denervation (RDN) has been shown to lower arterial blood pressure both in the presence and in the absence of antihypertensive medication in an observation period of up to 3 years. However, long-term results beyond 3 years are scarcely reported.
Methods: We performed a long-term follow-up on patients who were previously enrolled in a local renal denervation registry and who underwent radiofrequency RDN with the Symplicity Flex® renal denervation system between 2011 and 2014. The patients were assessed to evaluate their renal function by performing 24-hour ambulatory blood pressure measurement (ABPM), recording their medical history, and conducting laboratory tests.
Results: Ambulatory blood pressure readings for 24 h were available for 72 patients at long-term follow-up (FU) [9.3 years (IQR: 8.5-10.1)]. We found a significant reduction of ABP from 150.1/86.1 ± 16.9/12.0 mmHg at baseline to 138.3/77.1 ± 16.5/11.1 mmHg at long-term FU (P < 0.001 for both systolic and diastolic ABP). The number of antihypertensive medications used by the patients significantly decreased from 5.4 ± 1.5 at baseline to 4.8 ± 1.6 at long-term FU (P < 0.01). Renal function showed a significant but expected age-associated decrease in the eGFR from 87.8 (IQR: 81.0-100.0) to 72.5 (IQR: 55.8-86.8) ml/min/1.73 m2 (P < 0.01) in patients with an initial eGFR > 60 ml/min/1.73 m2, while a non-significant decrease was observed in patients with an initial eGFR < 60 ml/min/1.73 m2 at long-term FU [56.0 (IQR: 40.9-58.4) vs. 39.0 (IQR: 13.5-56.3) ml/min/1.73 m2].
Conclusions: RDN was accompanied by a long-lasting reduction in blood pressure with a concomitant reduction in antihypertensive medication. No negative effects could be detected, especially with regard to renal function.
abstract_id: PUBMED:3147581
The impact of long-term lithium treatment on renal function and structure. The effects of long-term lithium treatment on kidney function have been intensively studied. Both retrospective and prospective functional studies have shown that prolonged lithium treatment has a profound influence on renal tubular function, and may result in reduced renal concentrating capacity and increased urine volumes. More importantly, long-term lithium administration at non-toxic serum levels has little or no demonstrable effect on glomerular function. Prospective studies found no evidence of a progressive impairment of renal clearance. Renal biopsy studies have shown significant histopathological changes in patients with acute lithium intoxication, but only a slight degree of morphological injury in patients with no episodes of lithium poisoning. Factors other than lithium may contribute (e.g. analgesic abuse) or predispose (e.g. pyelonephritis) to similar changes. The lithium dosage schedule may represent an important factor.
abstract_id: PUBMED:33416196
Long-term renal function after venoarterial extracorporeal membrane oxygenation. Background: The utilization of venoarterial extracorporeal membrane oxygenation (VA-ECMO) as a life-supporting therapy has increased exponentially over the last decade. As more patients receive and survive ECMO, there are a number of unanswered clinical questions about their long-term prognosis and organ function including the need for long-term dialysis.
Methods: We aimed to utilize over 208 patient-years of follow-up data from our large institutional cohort of VA-ECMO patients to determine the incidence of needing renal replacement therapy after discharge (LT-dialysis) among patients who required VA-ECMO support. This retrospective review included all adult VA-ECMO patients at our institution from January 2014 to October 2018 (N = 283).
Results: Out of the 99 (35%) survivors, 88 (89%) did not require LT-dialysis of any duration after discharge from the index hospitalization. Patients who required VA-ECMO for decompensated cardiogenic shock were more likely to need LT-dialysis (p = .034), and those who required renal replacement therapy during VA-ECMO (N = 27) also had a higher incidence of LT-dialysis (33%).
Conclusion: Overall, these data suggest there is a low incidence of long-term dialysis dependence among survivors of VA-ECMO support. Worries about the potential long-term detrimental effect of VA-ECMO should not preclude patients from receiving this life-saving support.
abstract_id: PUBMED:22092145
Long-term outcome after acute renal replacement therapy: a narrative review. Background: Acute kidney injury (AKI) necessitating renal replacement therapy (RRT) is associated with high short-term mortality; relatively little, however, is known about the long-term outcome in these patients. This narrative review describes renal recovery, long-term mortality, and quality of life in RRT patients with acute kidney injury.
Methods: A literature search using the PubMed search engine from the year 2000 to present with the MeSH terms 1) acute kidney injury, renal replacement therapy, prognosis, and 2) acute kidney injury, quality of life, prognosis, was performed, including studies addressing long-term outcome (over 60 days) in adults with AKI on RRT.
Results: According to the inclusion criteria, twenty-two studies were eligible. Outcome varied depending on AKI aetiology, setting, co-morbidity and pre-morbid renal function. Five-year survival was between 15% and 35%, with dialysis dependence in less than 10% of survivors. Renal recovery, even if incomplete, occurred during the first year. Quality of life assessment amongst survivors indicated moderate physical impairment and reduced mental health scores. A majority of patients returned to employment and self-sustainability and reported acceptable to good quality of life. Over 90% of patients indicated that they would undergo the same treatment again.
Discussion And Conclusions: Early initiation of treatment and fine-tuning of the RRT technique may improve outcome. Consensus regarding AKI definitions, renal function measurement and standardised follow-up regimens are required. Further long-term studies are needed.
abstract_id: PUBMED:7686085
Effects of long-term therapy with arotinolol on blood pressure and renal function in hypertensive patients with chronic renal failure. The effects on blood pressure and renal function of long-term treatment with the alpha- and beta-blocker arotinolol at a dose of 20 mg/day were studied in 10 hypertensive patients with chronic renal failure. Patients received low-protein therapy in which the protein and phosphorus intakes were controlled at a certain level. The average duration of arotinolol treatment was 18.4 months. A significant decrease in blood pressure was seen after the second month of treatment, and this effect continued up to the 20th month. The progression rates of renal failure (creatinine clearance/month) before and after treatment were -0.377 ± 0.344 and -0.164 ± 0.172 ml/min/month, respectively. No side effects attributable to the drug were observed. These findings indicate that arotinolol has a stable antihypertensive effect and no adverse effects on renal function. Arotinolol appears to be a useful drug in the long-term treatment of hypertension in patients with chronic renal failure.
Answer: Multiple ureteroscopies do not appear to be detrimental to long-term renal function, even in patients with preexisting stage 2-3 chronic kidney disease. A study that investigated the relationship between multiple ureteroscopies with laser lithotripsies and long-term renal function in a population with preexisting mild to moderate kidney disease found that the mean estimated glomerular filtration rate (eGFR) actually increased over a mean follow-up period of 28.1 months. The study included patients who had undergone at least two ureteroscopies for nephrolithiasis and excluded those who had undergone other stone procedures such as shockwave lithotripsy (SWL) or percutaneous nephrolithotomy (PCNL). The results suggested that multiple ureteroscopies for stones were not associated with a significant change in eGFR, indicating no adverse impact on long-term renal function (PUBMED:24915495).
Instruction: Remission in rheumatoid arthritis: wishful thinking or clinical reality?
Abstracts:
abstract_id: PUBMED:16325659
Remission in rheumatoid arthritis: wishful thinking or clinical reality? Objectives: To review the concept of remission in rheumatoid arthritis (RA), as defined by the Food and Drug Administration (FDA), the American College of Rheumatology (ACR), and the European League Against Rheumatism (EULAR). To delineate differences between significant clinical improvements, very low disease activity, and the achievement of true remission. To evaluate the prevalence of these outcomes with biologic therapy and traditional disease-modifying antirheumatic drugs (DMARD) regimens.
Methods: The MEDLINE database was searched for the key words "remission" and "rheumatoid arthritis." Efficacy data of RA clinical trials from 1985 to 2004 are based on a literature review of medical journals and abstracts from rheumatology meetings. We review 3 well-defined sets of criteria established by the ACR, EULAR, and the FDA for measuring remission.
Results: Defining remissions in clinical trials and clinical practice requires appropriate standardized and objective outcome measures, such as the ACR and EULAR remission criteria. Traditional DMARDs often provide symptom relief, improvements in physical function, and the slowing of radiographic progression in patients with RA, but rarely lead to the complete cessation of RA activity. Remission, as defined by the ACR criteria, has been observed in 7 to 22% of patients treated with traditional DMARD monotherapy (ie, gold, penicillamine, methotrexate [MTX], cyclosporine A, or sulfasalazine), but these remissions have often been short-lived. Treatments with DMARD combinations, biologic monotherapy, and biologic combination therapy with MTX offer greater hope and may facilitate the higher rates of remission. Clinical trial results have shown that newer DMARDs such as leflunomide or the combination of multiple DMARDs can generally elicit greater EULAR remission rates (ranging from 13 to 42%) than monotherapies. Biologic combinations with MTX have also been shown to induce significant remission (as defined by the EULAR criteria) in RA patients, with a 31% rate observed with infliximab plus MTX at 54 weeks, a 50% rate observed for adalimumab plus MTX after 2 years of therapy, and a 41% rate observed for etanercept plus MTX after 2 years of therapy.
Conclusions: In the era of biologics and combination therapy, identifying remission or at least very low disease activity as the ultimate goal in RA therapy should become the new standard for the outcome of all RA trials. The criteria established by the FDA, the ACR, and the EULAR represent an important step toward achieving this goal.
abstract_id: PUBMED:32613582
The Use of Augmented Reality to Raise Awareness of the Differences Between Osteoarthritis and Rheumatoid Arthritis. Arthritis is one of the most common disease states worldwide but is still publicly misunderstood and lacks engaging public awareness materials. Within the UK, the most prevalent types of arthritis are osteoarthritis (OA) and rheumatoid arthritis (RA). The two are commonly mistaken for the same disease but, in fact, have very different pathogenesis, symptoms and treatments. This chapter describes a study which aimed to assess whether an augmented reality (AR) application could be used to raise awareness about the difference between OA and RA. An application was created for Android tablets that included labelled 3D models, animations and AR scenes triggered from a poster. In total, 11 adult participants tested the application, taking part in a pretest and posttest that aimed to measure the usability of the application and the acquisition of knowledge on OA and RA. A t-test was performed to assess the effectiveness of the application from the pretest and posttest questionnaire outcomes. Overall, results were encouraging, reporting a very significant acquisition of knowledge and a highly satisfactory user experience.
abstract_id: PUBMED:31894573
Innovative Education and Engagement Tools for Rheumatology and Immunology Public Engagement with Augmented Reality. Rheumatoid arthritis (RA) affects around 1% of the population, which places a heavy burden on society and has severe consequences for the individuals affected. The early diagnosis and implementation of disease-modifying anti-rheumatic drugs significantly increase the chance of achieving long-term sustained remission. Therefore, raising awareness of RA amongst the general public is important in order to decrease the time of diagnosis of the disease. Augmented reality (AR) can be tremendously valuable in a teaching and learning context, as the coexistence of real and virtual objects aids learners in understanding abstract ideas and complicated spatial relationships. It has also been suggested that it raises motivation in users through interactivity and novelty. In this chapter we explore the use AR in public engagement, and detail the design, development and evaluation of a blended learning experience utilising AR. A set of informative printed posters was produced, enhanced by an accompanying interactive AR application. The main user testing was conducted with 27 participants at a science outreach event at the Glasgow Science Centre. Findings report mean positive attitudes regarding all aspects of the study, highlighting the potential of AR for public engagement with topics such as RA.
abstract_id: PUBMED:31343702
Is Virtual Reality Effective in Orthopedic Rehabilitation? A Systematic Review and Meta-Analysis. Background: Virtual reality (VR) is an interactive technology that allows customized treatment and may help in delivering effective person-centered rehabilitation.
Purpose: The purpose of this review was to systematically review and critically appraise the controlled clinical trials that investigated VR effectiveness in orthopedic rehabilitation.
Data Sources: Pubmed, CINAHL, Embase, PEDro, REHABDATA, and Sage publications were searched up to September 2018. In addition, manual searching and snowballing using Scopus and Web of Science were done.
Study Selection: Two reviewers screened studies for eligibility first by title and abstract and then full text.
Data Extraction: Articles were categorized into general or region-specific (upper limbs, lower limbs, and spine) orthopedic disorders. Study quality was assessed using the Evaluation Guidelines for Rating the Quality of an Intervention Study scoring. Meta-analysis quantified VR effectiveness, compared with no treatment, in back pain.
Data Synthesis: Nineteen studies were included in the quality assessment. The majority of the studies were of moderate quality. Fourteen studies showed that VR did not differ compared with exercises. Compared with the no-treatment control, 5 studies favored VR and 3 other studies showed no differences. For low back pain, the meta-analysis revealed no significant difference between VR and no-treatment control (n = 116; standardized mean difference = -0.21; 95% confidence interval = -0.58 to 0.15).
Limitations: Limitations included heterogeneity in interventions and the outcome measures of reviewed studies. Only articles in English were included.
Conclusion: The evidence of VR effectiveness is promising in chronic neck pain and shoulder impingement syndrome. VR and exercises have similar effects in rheumatoid arthritis, knee arthritis, ankle instability, and post-anterior cruciate reconstruction. For fibromyalgia and back pain, as well as after knee arthroplasty, the evidence of VR effectiveness compared with exercise is absent or inconclusive.
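The low back pain result above is reported as a standardized mean difference (SMD). As a point of reference only, the sketch below shows one common way an SMD (Cohen's d with Hedges' small-sample correction) and its approximate 95% confidence interval are computed from two group summaries; the group means, standard deviations, and sample sizes are hypothetical and are not taken from the review.

```python
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference (Hedges' g) with an approximate 95% CI."""
    # Pooled standard deviation across the two groups.
    sp = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sp
    # Hedges' small-sample correction factor.
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)
    g = j * d
    # Large-sample approximation to the variance of g.
    var_g = (n1 + n2) / (n1 * n2) + g ** 2 / (2 * (n1 + n2))
    se = math.sqrt(var_g)
    return g, g - 1.96 * se, g + 1.96 * se


# Hypothetical pain scores (lower is better): VR group vs. no-treatment control.
g, lo, hi = hedges_g(mean1=3.8, sd1=1.9, n1=30, mean2=4.2, sd2=2.0, n2=28)
print(f"SMD = {g:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```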
abstract_id: PUBMED:37847542
Virtual Reality Meditation for Fatigue in Persons With Rheumatoid Arthritis: Mixed Methods Pilot Study. Background: Effective symptom management is crucial to enhancing the quality of life for individuals with chronic diseases. Health care has changed markedly over the past decade as immersive, stand-alone, and wearable technologies including virtual reality have become available. One chronic pain population that could benefit from such an intervention is individuals with rheumatoid arthritis (RA). Recent pharmacologic advances in the management of RA have led to a decrease in inflammatory symptoms (eg, chronic pain) or even disease remission, yet up to 70% of patients with RA still suffer from fatigue. While VR-delivered behavior, meditation, and biofeedback programs show promise for pain and anxiety management, there is little information on the use of virtual reality meditation (VRM) for fatigue management among individuals with RA.
Objective: This study aims to (1) examine the feasibility of implementing a study protocol that uses VRM, (2) determine the acceptability of using VRM for fatigue management in an outpatient population, and (3) identify barriers and contextual factors that might impact VRM use for fatigue management in outpatients with RA.
Methods: We used a convergent, mixed methods design and enrolled adults aged 18 years or older with a clinical diagnosis of RA. Patient-Reported Outcome Measure Information System (PROMIS) measures of fatigue, depression, anxiety, pain behavior, and physical function were assessed alongside the brief mood introspection scale at baseline and weekly for 4 weeks. VRM use across the 4-week study period was automatically stored on headsets and later extracted for analysis. Semistructured interview questions focused on feedback regarding the participant's experience with RA, previous experience of fatigue, strategies participants use for fatigue management, and the participant's experience using VRM and recommendations for future use.
Results: A total of 13 participants completed this study. Most participants completed all study surveys and measures (11/13, 84% and 13/13, 100%, respectively) and were active participants in interviews at the beginning and end of the program. Participants used VRM an average of 8.9 (SD 8.5) times over the course of the 4-week program. Most participants enjoyed VRM, found it relaxing, or recommended its use (12/13, 92%), but 8 (62%) noted barriers and conceptual factors that impacted VRM use. On average, participants saw decreases in PROMIS fatigue (-6.4, SD 5.1), depression (-5.6, SD 5.7), anxiety (-4.5, SD 6), and pain behavior (-3.9, SD 5.3), and improvements in PROMIS physical function (1.5, SD 2.7) and Brief Mood Introspection Scale mood (5.3, SD 6.7) over the course of this 4-week study.
Conclusions: While this study's implementation was feasible, VRM's acceptability as an adjunctive modality for symptom management in RA is contingent on effectively overcoming barriers to use and thoughtfully addressing the contextual factors of those with RA to ensure successful intervention deployment.
Trial Registration: ClinicalTrials.gov NCT04804462; https://classic.clinicaltrials.gov/ct2/show/NCT04804462.
abstract_id: PUBMED:33973858
A Virtual Reality-Based App to Educate Health Care Professionals and Medical Students About Inflammatory Arthritis: Feasibility Study. Background: Inflammatory arthritides (IA) such as rheumatoid arthritis or psoriatic arthritis are disorders that can be difficult to comprehend for health professionals and students in terms of the heterogeneity of clinical symptoms and pathologies. New didactic approaches using innovative technologies such as virtual reality (VR) apps could be helpful to demonstrate disease manifestations as well as joint pathologies in a more comprehensive manner. However, the potential of using a VR education concept in IA has not yet been evaluated.
Objective: We evaluated the feasibility of a VR app to educate health care professionals and medical students about IA.
Methods: We developed a VR app using data from IA patients as well as 2D and 3D-visualized pathological joints from X-ray and computed tomography-generated images. This VR app (Rheumality) allows the user to interact with representative arthritic joint and bone pathologies of patients with IA. In a consensus meeting, an online questionnaire was designed to collect basic demographic data (age, sex); profession of the participants; and their feedback on the general impression, knowledge gain, and potential areas of application of the VR app. The VR app was subsequently tested and evaluated by health care professionals (physicians, researchers, and other professionals) and medical students at predefined events (two annual rheumatology conferences and academic teaching seminars at two sites in Germany). To explore associations between categorical variables, the χ2 or Fisher test was used as appropriate. Two-sided P values ≤.05 were regarded as significant.
Results: A total of 125 individuals participated in this study. Among them, 56% of the participants identified as female, 43% identified as male, and 1% identified as nonbinary; 59% of the participants were 18-30 years of age, 18% were 31-40 years old, 10% were 41-50 years old, 8% were 51-60 years old, and 5% were 61-70 years old. The participants (N=125) rated the VR app as excellent, with a mean rating of 9.0 (SD 1.2) out of 10, and many participants would recommend use of the app, with a mean recommendation score of 3.2 (SD 1.1) out of 4. A large majority (120/125, 96.0%) stated that the presentation of pathological bone formation improves understanding of the disease. We did not find any association between participant characteristics and evaluation of the VR experience or recommendation scores.
Conclusions: The data show that IA-targeting innovative teaching approaches based on VR technology are feasible.
abstract_id: PUBMED:37740288
Enhancing student understanding of rheumatic disease pathologies through augmented reality: findings from a multicentre trial. Objective: The possibility of combining real and virtual environments is driving the increased use of augmented reality (AR) in education, including medical training. The aim of this multicentre study was to evaluate the students' perspective on the AR-based Rheumality GO!® app as a new teaching concept, presenting six real anonymised patient cases with rheumatoid arthritis (RA), psoriatic arthritis (PsA), and axial spondyloarthritis (axSpA).
Methods: The study encompassed 347 undergraduate medical students (232 women and 115 men) from four medical universities in Germany (Jena, Bad Nauheim/Gießen, Nuremberg, Erlangen). The course was divided into a theoretical refresher lecture followed by six AR-based cases in each of the three indications presented in the Rheumality GO!® app. All participants evaluated the course after completion, assessing the benefit of the app from a student´s perspective using a questionnaire with 16 questions covering six subject areas.
Results: The use of the AR-based app Rheumality GO!® improved the understanding of pathologies in RA, PsA, and axSpA for 99% of the participants. For 98% of respondents, the concept of AR with real patient data has made a positive impact on the teaching environment. On the other hand, 82% were in favour of the use of virtual tools (e.g. AR) in addition to this conventional approach.
Conclusion: The results of our survey showed that from medical students' perspective, an AR-based concept like the Rheumality GO!® app can complement rheumatology teaching in medical school as an effective and attractive tool though not replace bedside teaching.
abstract_id: PUBMED:36862072
Does Augmented Reality-based Portable Navigation Improve the Accuracy of Cup Placement in THA Compared With Accelerometer-based Portable Navigation? A Randomized Controlled Trial. Background: Previous studies reported good outcomes of acetabular cup placement using portable navigation systems during THA. However, we are aware of no prospective studies comparing inexpensive portable navigation systems using augmented reality (AR) technology with accelerometer-based portable navigation systems in THA.
Questions/purposes: (1) Is the placement accuracy of the acetabular cup using the AR-based portable navigation system superior to that of an accelerometer-based portable navigation system? (2) Do the frequencies of surgical complications differ between the two groups?
Methods: We conducted a prospective, two-arm, parallel-group, randomized controlled trial involving patients scheduled for unilateral THA. Between August and December 2021, we treated 148 patients who had a diagnosis of osteoarthritis, idiopathic osteonecrosis, rheumatoid arthritis, or femoral neck fracture and were scheduled to undergo unilateral primary THA. Of these patients, 100% (148) were eligible, 90% (133) were approached for inclusion in the study, and 85% (126) were finally randomized into either the AR group (62 patients) or the accelerometer group (64 patients). An intention-to-treat analysis was performed, and there was no crossover between groups and no dropouts; all patients in both groups were included in the analysis. There were no differences in any key covariates, including age, sex, and BMI, between the two groups. All THAs were performed via the modified Watson-Jones approach with the patient in the lateral decubitus position. The primary outcome was the absolute difference between the cup placement angle displayed on the screen of the navigation system and that measured on postoperative radiographs. The secondary outcome was intraoperative or postoperative complications recorded during the study period for the two portable navigation systems.
Results: There were no differences between the AR and accelerometer groups in terms of the mean absolute difference in radiographic inclination angle (3° ± 2° versus 3° ± 2° [95% CI -1.2° to 0.3°]; p = 0.22). The mean absolute difference in radiographic anteversion angle displayed on the navigation screen during surgery compared with that measured on postoperative radiographs was smaller in the AR group than that in the accelerometer group (2° ± 2° versus 5° ± 4° [95% CI -4.2° to -2.0°]; p < 0.001). There were few complications in either group. In the AR group, there was one patient each with a surgical site infection, intraoperative fracture, distal deep vein thrombosis, and intraoperative pin loosening; in the accelerometer group, there was one patient each with an intraoperative fracture and intraoperative loosening of pins.
Conclusion: Although the AR-based portable navigation system demonstrated slight improvements in radiographic anteversion of cup placement compared with the accelerometer-based portable navigation system in THA, whether those small differences will prove clinically important is unknown. Until or unless future studies demonstrate clinical advantages that patients can perceive that are associated with such small radiographic differences, because of the costs and the unquantified risks associated with novel devices, we recommend against the widespread use of these systems in clinical practice.
Level Of Evidence: Level I, therapeutic study.
abstract_id: PUBMED:2765011
Pain control and rational thinking. Implications for rheumatoid arthritis. In this study, we examined the factor structure of the Coping Strategies Questionnaire and studied the relevance of the coping process to health status in rheumatoid arthritis patients. The 2 factors of the questionnaire that were analyzed were Coping Attempts and Pain Control and Rational Thinking. The Pain Control and Rational Thinking factor was related to pain and psychological status, even after demographic variables and disease severity were statistically controlled. In addition, increases in Pain Control and Rational Thinking scores were related to improvements in pain, psychological status, and health status. Implications for the psychological care of rheumatoid arthritis patients are discussed.
abstract_id: PUBMED:6551776
An exploratory investigation of causal thinking of arthritics. Research in academic achievement situations suggests that the causes people give for achievement events are linked to subsequent behaviors, emotions, and expectations. An attributional analysis of the causes arthritics gave for their condition tested the limits of the attributional model in the situation of chronic illness. Results indicated that the assumption that causal thinking occurs needs further testing. Fifteen percent of the subjects did not give causes. Those not giving causes were significantly more anxious, more depressed, and more hostile than those who had constructed causes. In terms of the dimensions of attribution theory, the results suggest that when causes are given, they do not easily fit within the classification scheme currently proposed. Suggestions for testing of the attributional model in clinical situations are offered.
Answer: Remission in rheumatoid arthritis (RA) is a clinical reality, although it has been a challenging goal to achieve. The concept of remission in RA has been defined by various organizations, including the Food and Drug Administration (FDA), the American College of Rheumatology (ACR), and the European League Against Rheumatism (EULAR). These criteria are used to measure remission in clinical trials and practice (PUBMED:16325659).
Traditional disease-modifying antirheumatic drugs (DMARDs) often provide symptom relief and improvements in physical function, but they rarely lead to the complete cessation of RA activity. Remission, as defined by the ACR criteria, has been observed in 7 to 22% of patients treated with traditional DMARD monotherapy, but these remissions are often short-lived. However, with the advent of biologic therapies and combination therapy regimens, there is greater hope for achieving remission. Clinical trial results have shown that newer DMARDs, such as leflunomide or combinations of multiple DMARDs, can elicit greater EULAR remission rates (ranging from 13 to 42%) than monotherapies. Biologic combinations with methotrexate (MTX) have also been shown to induce significant remission in RA patients, with rates observed up to 50% after 2 years of therapy with adalimumab plus MTX (PUBMED:16325659).
In conclusion, while remission in RA was once considered wishful thinking, it has become a clinical reality with the development of biologic therapies and combination treatment approaches. The establishment of standardized remission criteria by the FDA, ACR, and EULAR, along with advancements in RA treatments, have made remission or very low disease activity an achievable goal and the new standard for the outcome of RA trials (PUBMED:16325659). |
Instruction: Nasal bacterial carriage in adult epistaxis: is neomycin the answer?
Abstracts:
abstract_id: PUBMED:18761771
Nasal bacterial carriage in adult epistaxis: is neomycin the answer? Introduction: After treatment of epistaxis, patients are routinely supplied with an intranasal bactericidal cream containing neomycin. Neomycin cream is effective in preventing recurrent paediatric epistaxis. This study aimed to assess whether there is an increased rate of nasal bacterial infections in adult epistaxis patients.
Methods: Between October 2004 and April 2005, nasal swabs were taken from adult patients presenting with epistaxis, and from a control group comprising elective ENT patients.
Results: There were 23 controls and 26 epistaxis patients. Staphylococcus aureus was grown in 21 per cent and 23 per cent, respectively. There was no significant difference in bacterial carriage rates between the epistaxis and control groups.
Conclusions: The epistaxis and control groups demonstrated the same bacterial species and the same proportion of bacterial carriage. Although the majority of bacterial species encountered were sensitive to neomycin, a significant proportion was not. These results do not support the routine use of neomycin in the prevention of recurrent adult epistaxis.
abstract_id: PUBMED:32998847
The addition of silver nitrate cautery to antiseptic nasal cream for patients with epistaxis: A systematic review and meta-analysis. Objective: To compare the outcomes of the addition of silver nitrate cautery versus antiseptic cream alone in paediatric patients with recurrent epistaxis.
Methods: A systematic review and meta-analysis were performed as per the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) Guidelines and a search of electronic information was conducted to identify all Randomised Controlled Trials (RCTs) and non-randomised studies comparing the outcomes of the addition of silver nitrate cautery versus antiseptic cream alone in paediatric patients with recurrent epistaxis. Treatment success and persistence of bleeding were primary outcome measures. Secondary outcome measures included treatment side effects. Fixed effects modelling was used for the analysis.
Results: Four studies enrolling 240 patients were identified. There was no significant difference between the silver nitrate cautery group and the antiseptic cream alone group in terms of complete resolution (Odds Ratio [OR] = 1.07, P = 0.81), partial resolution (OR = 1.02, P = 0.96) and persistence of bleeding (OR = 0.91, P = 0.71). For secondary outcomes, antiseptic nasal cream was associated with a few side effects, such as a rash in one case and several complaints of a bad smell or taste.
Conclusions: The addition of silver nitrate cautery is not superior to the use of antiseptic cream alone in paediatric patients with recurrent epistaxis as it does not improve treatment success or persistence of bleeding.
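The meta-analysis above pooled odds ratios with fixed-effects modelling. The per-study 2x2 tables are not reported in the abstract, so the sketch below uses invented counts purely to illustrate the usual inverse-variance pooling of log odds ratios that such an analysis typically performs.

```python
import math

# Hypothetical per-study counts: (events_treated, n_treated, events_control, n_control),
# where "treated" stands for cautery plus cream and "control" for cream alone.
studies = [(18, 30, 20, 30), (25, 40, 24, 38), (12, 25, 15, 27), (30, 45, 28, 44)]

log_ors, weights = [], []
for a, n1, c, n2 in studies:
    b, d = n1 - a, n2 - c  # non-events in each arm
    log_or = math.log((a * d) / (b * c))
    var = 1 / a + 1 / b + 1 / c + 1 / d  # Woolf variance of the log odds ratio
    log_ors.append(log_or)
    weights.append(1 / var)

# Fixed-effect (inverse-variance) pooled odds ratio with a 95% confidence interval.
pooled = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
se = math.sqrt(1 / sum(weights))
print(f"Pooled OR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96 * se):.2f} to {math.exp(pooled + 1.96 * se):.2f})")
```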
abstract_id: PUBMED:30895432
Bilateral nasal septal chemical cautery: a safe and effective outpatient procedure for control of recurrent epistaxis, our experience in 134 patients. Purpose: To assess the effectiveness and complications of bilateral nasal septal cautery using silver nitrate in anterior nasal epistaxis.
Methods: This prospective study was carried out on 180 consecutive patients presenting with epistaxis to a general ENT clinic. Local anaesthetic cautery was performed using 5% lidocaine hydrochloride and 0.5% phenylephrine hydrochloride spray in all patients except eight children aged 4 years or younger, who were treated under general anaesthetic. Visible vessels in Little's areas were cauterised using two silver nitrate sticks on each side. Patients were prescribed Naseptin cream and followed up. We classified re-bleeds as follows: 0-1 episodes: significant improvement; 2-3 episodes: moderate improvement; 4+ episodes: no improvement.
Results: We analysed 134 (74%) patients who were seen at follow-up. The age range was 5-88 years (mean 25, median 15), and 89 (67%) were male. Children (aged 16 years and under) made up 60% (81) of the study population; of these, 56 (69%) were male. Significant improvement was seen in 93% (124) of the study population, but there were relapses in two children (1.5%) and only moderate improvement in eight patients (6%). There were no significant complications in the study population, but 11 patients had crusting at the sites of cautery at follow-up.
Conclusions: Bilateral silver nitrate cauterisation is an effective method of treating recurrent epistaxis with low risk of complications.
abstract_id: PUBMED:9438931
Analysis of the intranasal distribution of ointment. Objective: The use of intranasal ointment sniffed into the nasal cavity is widely recommended as a method to lubricate the nose and to prevent drying and crusting of the nasal mucosa and secretions. This therapy is often prescribed to patients with problems with minor episodes of epistaxis, after nasal packing has been removed, and in patients complaining of excessive dryness or crusting within the nose. Various preparations have been used for this purpose. At our institution Polysporin ointment is one of the commonly used preparations. This study evaluated the distribution of Polysporin ointment within the nasal cavities of subjects with no symptoms related to the nasal cavity.
Conclusions: The results of the study raise doubts about the effectiveness of this therapy.
abstract_id: PUBMED:24290872
Epistaxis management at Guy's Hospital, 2009-2011: full audit cycles. Objective: To assess management of epistaxis at a tertiary ENT referral hospital against a recently published standard of best practice.
Methods: Fifty consecutive cases of acute epistaxis that required admission to Guy's Hospital in 2009 were evaluated. Epistaxis education sessions were held to introduce our algorithm of best practice in tandem with an emphasis on emergency department care. Similar retrospective reviews were carried out in both 2010 and 2011 (on groups of 50 patients).
Results And Conclusion: The first audit cycle demonstrated that only 8 per cent of patients underwent a suitable nasal examination in the emergency department prior to transfer, with no documented attempts at nasal cautery. Surgical intervention procedures were performed on only 40 per cent of eligible patients. The audit cycles that followed the introduction of the epistaxis algorithm demonstrated continued improvement in initial evaluation and management of epistaxis. In select patients, sphenopalatine artery ligation can provide timely, definitive management of refractory epistaxis.
abstract_id: PUBMED:10384851
A randomised clinical trial of antiseptic nasal carrier cream and silver nitrate cautery in the treatment of recurrent anterior epistaxis. Sixty-four consecutive patients with a history of recurrent epistaxis were randomly assigned in the outpatient clinic to receive treatment with either Naseptin antiseptic nasal carrier cream alone (Group A) or a combination of Naseptin cream and silver nitrate cautery (Group B). Results were available on 50 patients, 22 in Group A and 28 in Group B. Twenty patients (91%) in Group A and 25 patients (89%) in Group B demonstrated improvement in their symptoms. There was no statistically significant difference in outcome between the two treatment arms (P = 0.7569). On comparing the different age groups (under and over 16 years) in the two treatment arms, once again there was no statistically significant difference in the treatment outcome (P = 1.000). In conclusion, silver nitrate cautery offers no added advantage to the management of simple epistaxis in both children and adults.
abstract_id: PUBMED:4208399
Staphylococcal endocarditis following septorhinoplasty. N/A
abstract_id: PUBMED:25035314
Fifteen-minute consultation: investigation and management of childhood epistaxis. Recurrent epistaxis is very common in children, and the majority of cases are self-limiting with simple first aid measures. However, recurrent episodes are a source of distress and anxiety for child and parent alike, and commonly result in hospital referral. We present a structured approach highlighting initial assessment, examination and management including when to refer to ear, nose and throat (ENT) surgery.
abstract_id: PUBMED:1889950
Management of epistaxis in children. A randomised clinical trial of antiseptic nasal carrier cream (Naseptin) and silver nitrate cautery in the treatment of epistaxis in children was carried out. Fifty-four percent responded to cautery with silver nitrate and 50% to treatment with Naseptin antiseptic carrier cream. There were 24 patients in each group. No statistically significant difference was observed between the groups when tested with the chi-squared (χ²) test. We believe that the first-line treatment of epistaxis in children should be the use of antiseptic nasal cream.
abstract_id: PUBMED:14738623
Naseptin nasal cream 'contains peanut oil'. N/A
Answer: Based on the provided abstracts, neomycin does not appear to be the definitive answer for nasal bacterial carriage in adult epistaxis. A study aimed at assessing the rate of nasal bacterial infections in adult epistaxis patients found no significant difference in bacterial carriage rates between the epistaxis and control groups. Staphylococcus aureus was grown in 21% and 23% of the control and epistaxis groups, respectively. Although the majority of bacterial species encountered were sensitive to neomycin, a significant proportion was not. These results do not support the routine use of neomycin in the prevention of recurrent adult epistaxis (PUBMED:18761771).
Furthermore, other studies have focused on the effectiveness of treatments such as silver nitrate cautery and antiseptic creams in managing recurrent epistaxis, particularly in pediatric patients. One systematic review and meta-analysis found that the addition of silver nitrate cautery is not superior to the use of antiseptic cream alone in pediatric patients with recurrent epistaxis, as it does not improve treatment success or persistence of bleeding (PUBMED:32998847). Another study on bilateral nasal septal cautery using silver nitrate in anterior nasal epistaxis reported it as an effective method of treating recurrent epistaxis with a low risk of complications (PUBMED:30895432).
In conclusion, while neomycin cream may have some effectiveness against certain bacterial species, the evidence does not support its routine use for preventing recurrent adult epistaxis due to the presence of neomycin-resistant bacteria and the lack of a significant difference in bacterial carriage rates between epistaxis patients and controls. Alternative treatments, such as silver nitrate cautery and antiseptic creams, are also used in the management of recurrent epistaxis, particularly in children, but their effectiveness compared to neomycin is not directly addressed in the provided abstracts. |
Instruction: Is Illinois heeding the call to regionalize pancreatic surgery?
Abstracts:
abstract_id: PUBMED:23335035
Is Illinois heeding the call to regionalize pancreatic surgery? Background And Objectives: Recommendations to refer pancreatic procedures to high-volume centers have been in place for a decade. We sought to determine whether regionalization of pancreatic procedures to high-volume centers is occurring in Illinois.
Methods: We compared pancreatic procedures performed in Illinois hospitals from 2000 to 2004 [time period (TP) 1] versus 2005-2009 (TP2) for changes in inpatient mortality and hospital volume. Hospitals were categorized into low- (LVH), intermediate- (IVH), or high-volume (HVH).
Results: From TP1 to TP2, there was a 23% increase in absolute case volume (from 2,232 to 2,737), despite fewer hospitals performing pancreatic procedures (from 114 to 95). In-hospital mortality decreased (from 5.5% to 3.3%, P < 0.01) and was lowest at HVHs. LVHs and IVHs were associated with 4.7- and 3.0-fold higher odds of mortality, respectively (both P < 0.001). Overall, HVHs performed 659 (+73%) more procedures, whereas cumulative procedure volume dropped by 154 cases at LVHs (+1%) and IVHs (-18%).
Conclusions: We observed limited evidence of regionalization of pancreatic procedures in Illinois. The increase in HVH case volume cannot be solely attributed to regionalization, given the corresponding modest decrease seen at non-HVHs. There is opportunity for Illinois hospitals to implement strategies such as selective referral to improve mortality after pancreatic resection.
abstract_id: PUBMED:24008085
Trends in thyroid surgery in Illinois. Background: Endocrine surgery is an evolving subspecialty in general surgery. To determine whether this subspecialty is having an effect on practice patterns of thyroid surgery, we reviewed all thyroidectomies performed in Illinois over an 11-year period.
Methods: The Illinois COMPdata database from the Illinois Hospital Association was used to retrieve all the thyroid operations performed in the state of Illinois from 1999 to 2009. Univariate and multivariate logistic regression analyses were performed to assess the effects of surgeon and hospital type on practice patterns of thyroidectomies.
Results: In the early period (1999-2004), 5,824 operations were identified compared with 8,454 in the late period (2005-2009; P < .001). Total thyroidectomy represented 2,679 (46%) of the thyroid operations done in the early period compared with 4,976 (59%) in the late period (P < .001). Sixty-two percent of all the thyroid operations were done at community hospitals in the early period compared with 56% in the late period. Endocrine surgeons (ES) performed thyroidectomies at the greatest rate, 0.7 and 0.6 per 10⁵ population, in the early and late periods, respectively.
Conclusion: In Illinois, the volume of thyroid operations has increased significantly over the past 10 years with a shift toward total thyroidectomy. Although most thyroidectomies are still performed in community hospitals, this percentage has decreased. ES perform a minority of thyroid operations, but they have the greatest volume of thyroidectomies per surgeon. These findings may represent broader trends in thyroid surgery throughout the United States.
abstract_id: PUBMED:33972140
'Hello, MaxFax on-call?' - maxillofacial 'bleep sheet' proforma for on-call referrals. The on-call component of a dental core training (DCT) post in oral and maxillofacial surgery (OMFS) is considered to be the most daunting and challenging aspect of the job. The average trainee is a singly-qualified dentist with limited knowledge and experience of managing OMFS presentations. Given the short duration of DCT posts, there is a continual rotation of junior staff through OMFS departments with a varying skillset and knowledge mix. As such, the consistent recording of appropriate information remains a constant challenge. The coronavirus pandemic has presented a unique situation in which the majority of dental foundation trainees (DFTs) entering OMFS DCT posts will only have around six months experience of independent practice. This lack of experience and onerous on-call workload could be a potentially dangerous combination, especially during nightshift patterns of on-call. We demonstrate that implementation of an on-call bleep-sheet proforma provides a validated, standardised, systematic, and chronological method of record keeping that exceeds the minimum required standard of clinical governance, in an era where junior trainees entering OMFS will have had even more limited experience than normal.
abstract_id: PUBMED:19785578
Perspectives on rural health workforce issues: Illinois-Arkansas comparison. Context: Past research has documented rural physician and health care professional shortages.
Purpose: Rural hospital chief executive officers' (CEOs') reported shortages of health professionals and perceptions about recruiting and retention are compared in Illinois and Arkansas.
Methods: A survey, previously developed and sent to 28 CEOs in Illinois, was mailed to 110 CEOs in Arkansas. Only responses from rural CEOs are presented (Arkansas n = 39 and Illinois n = 22).
Findings: Physician shortages were reported by 51 CEOs (83.6%). Most reported physician shortages in Arkansas were for family medicine, internal medicine, cardiology, obstetrics-gynecology, general surgery, and psychiatry. Most reported physician shortages in Illinois were for family medicine, obstetrics-gynecology, orthopedic surgery, internal medicine, cardiology, and general surgery. Additionally, registered nurses and pharmacists were the top 2 allied health professions shortages. Multivariate analysis (factor and discriminant analyses) examined community attributes associated with ease of recruiting physicians. Six factors were identified and assessed as to their importance in influencing ease of recruitment, with the state included in the model. Three factors were identified as discriminating whether or not physician recruitment was easy: community supportive for family, community cooperates and perceives a good future, and community attractiveness.
Conclusions: Similarities in shortages and attributes influencing recruitment in both states suggest that efforts and policies in health professions workforce development can be generalized between regions. This study further reinforces some important known issues concerning retention and recruitment, such as the importance of identifying providers whose preferences are matched to the characteristics and lifestyle of a given area.
abstract_id: PUBMED:26852707
Neurosurgical Defensive Medicine in Texas and Illinois: A Tale of 2 States. Objective: To compare the self-reported liability characteristics and defensive medicine practices of neurosurgeons in Texas with neurosurgeons in Illinois in an effort to describe the effect of medicolegal environment on defensive behavior.
Methods: An online survey was sent to 3344 members of the American Board of Neurological Surgery. Respondents were asked questions in 8 domains, and responses were compared between Illinois, the state with the highest reported average malpractice insurance premium, and Texas, a state with a relatively low average malpractice insurance premium.
Results: In Illinois, 85 of 146 (58.2%) neurosurgeons surveyed responded to the survey. In Texas, 65 of 265 (24.5%) neurosurgeons surveyed responded. In Illinois, neurosurgeons were more likely to rate the overall burden of liability insurance premiums to be an extreme/major burden (odds ratio [OR] = 7.398, P < 0.001) and to have >$2 million in total coverage (OR = 9.814, P < 0.001) than neurosurgeons from Texas. Annual malpractice insurance premiums in Illinois were more likely to be higher than $50,000 than in Texas (OR = 9.936, P < 0.001), and survey respondents from Illinois were more likely to believe that there is an ongoing medical liability crisis in the United States (OR = 9.505, P < 0.001). Neurosurgeons from Illinois were more likely to report that they very often/always order additional imaging (OR = 2.514, P = 0.011) or very often/always request additional consultations (OR = 2.385, P = 0.014) compared with neurosurgeons in Texas.
Conclusions: Neurosurgeons in Illinois are more likely to believe that there is an ongoing medical liability crisis and more likely to practice defensively than neurosurgeons in Texas.
abstract_id: PUBMED:21547133
Study on chronic pancreatitis and pancreatic cancer using MRS and pancreatic juice samples. Aim: To investigate the markers of pancreatic diseases and provide basic data and experimental methods for the diagnosis of pancreatic diseases.
Methods: There were 15 patients in the present study, among whom 10 had pancreatic cancer and 5 had chronic pancreatitis. In all patients, the pancreatic cancer or chronic pancreatitis was located in the head of the pancreas. Pathology data for all patients were confirmed by biopsy and surgery. Among the 10 patients with pancreatic cancer, 3 had a medical history of long-term alcohol consumption. Of the 5 patients with chronic pancreatitis, 4 men suffered from alcoholic chronic pancreatitis. Pancreatic juice samples were obtained from patients by endoscopic retrograde cholangio-pancreatography. Magnetic resonance spectroscopy was performed on an 11.7-T scanner (Bruker DRX-500) using Carr-Purcell-Meiboom-Gill pulse sequences. The parameters were as follows: spectral width, 15 kHz; time domain, 64 K; number of scans, 512; and acquisition time, 2.128 s.
Results: The main components of pancreatic juice included leucine, isoleucine, valine, lactate, alanine, acetate, aspartate, lysine, glycine, threonine, tyrosine, histidine, tryptophan, and phenylalanine. On performing 1D ¹H and 2D total correlation spectroscopy, we found a triplet peak at the chemical shift of 1.19 ppm, which only appeared in the spectra of pancreatic juice obtained from patients with alcoholic chronic pancreatitis. This triplet peak was considered the resonance of the methyl of the ethoxy group, which may be associated with the metabolism of alcohol in the pancreas.
Conclusion: The triplet peak at the chemical shift of 1.19 ppm is likely to be the characteristic metabolite of alcoholic chronic pancreatitis.
abstract_id: PUBMED:34756619
Hand Call Practices and Satisfaction: Survey Results From Hand Surgeons in the United States. Purpose: To describe current hand call practices in the United States (US) and identify aspects of call practices that lead to surgeon satisfaction.
Methods: An anonymous survey was administered to practicing members of the American Society for Surgery of the Hand, and responses were filtered to US surgeons taking hand call. Hand call was considered: (A) hand-specific call including replantation or microvascular services or (B) hand-specific call without replantation or microvascular responsibilities. Data were collected pertaining to practices, compensation, assistance, frequency, and satisfaction. Descriptive analyses were performed and regionally subdivided. Pearson correlations were used to determine aspects of a call that influenced surgeon satisfaction.
Results: A total of 662 US hand surgeons from 49 states responded. Among the respondents, 38% (251) participate in replantation or microvascular call, 34% (225) participate in hand-specific call excluding replantation, and 28% (186) do not participate in hand-specific call. Of those practicing hand call (476), 60% take 6 or fewer days of call per month, 62% have assistance with staffing consultations, 65% have assistance with surgical procedures, and 49% are financially incentivized to take call. More than half (51%) reported that they have a protected time for call aside from their elective practice, and 10% of the surgeons reported that they have a dedicated operating room (OR) time after a call to care for cases. Two percent reported that the day following call is free from clinical duties. Only 46% of the surgeons were satisfied with their call schedule, with the top concerns among unsatisfied respondents relating to pay, OR availability, and burnout. The factors correlating to surgeon satisfaction included less frequent call, assistance with performing consultations and surgery, pay for call, and OR availability.
Conclusions: The majority of US hand surgeons are not satisfied with their current call practices, with frequent concerns relating to pay, OR availability, and burnout.
Clinical Relevance: These findings may promote awareness regarding aspects of hand call that correlate with surgeon satisfaction and highlight practice patterns that may reduce burnout.
abstract_id: PUBMED:19561950
On-call emergency workload of a general surgical team. Background: To examine the on-call emergency workload of a general surgical team at a tertiary care teaching hospital to guide planning and provision of better surgical services.
Patients And Methods: During a six-month period from August to January 2007, all emergency calls attended by the general surgical team of Surgical Unit II in the Accident and Emergency department (A and E) and in other units of Civil Hospital Karachi, Pakistan were prospectively recorded. Data recorded included timing of call, diagnosis, operation performed, and outcome, in addition to demographics.
Results: A total of 456 patients (326 males and 130 females) were attended by the on-call general surgery team during 30 emergency days. Most of the calls, 191 (41.9%), were received from 8 am to 5 pm. A total of 224 (49.1%) calls were for abdominal pain, with acute appendicitis being the most common specific pathology, in 41 (9.0%) patients. Seventy-three (16.0%) calls were received for trauma. A total of 131 (28.7%) patients were admitted to the surgical unit for urgent operation or observation, while 212 (46.5%) patients were discharged from A and E. Ninety-two (20.1%) patients were referred to other units, with medical referrals accounting for 45 (9.8%) patients. In all, 104 (22.8%) emergency surgeries were performed, and the most common procedure was appendicectomy, in 34 (32.7%) patients.
Conclusion: The major workload of the on-call surgical emergency team is dealing with acute abdominal conditions. However, a significant proportion of patients suffer from other conditions, including trauma, that require a holistic approach to care and a wide range of skills and experience. These results have important implications for future healthcare planning and for the better training of general surgical residents.
abstract_id: PUBMED:34866033
Resident Night Float or 24-hour Call Hospital Coverage: Impact on Training, Patient Outcome, and Length of Stay. Objective: The impact of neurosurgical resident hospital coverage system, performed via a night float (12-hour shifts overnight) or a 24-hour call, on neurological surgery resident training and patient care is unknown.
Design: Retrospective review comparing night float and 24-hour call coverage on trainee surgical experience, elective time, annual program surveys, patient outcomes, and length of stay.
Setting: The Ohio State Wexner Medical Center Neurosurgery residency program, Columbus, Ohio.
Participants: The neurosurgical residents from 2016 to 2019.
Results: Monthly cases performed by junior residents significantly increased after transitioning to a 24-hour call schedule (18 versus 30, p < 0.001). There were no differences for total cases among program graduates during this time (p = 0.7). Trainee elective time significantly increased after switching to 24-hour call coverage (18 versus 24 months after the transition; p = 0.004). Risk-adjusted mortality and length of stay indices were not different (0.5 versus 0.3, p = 0.1; 0.9 versus 0.9; p = 0.3). Program surveys had minimal change after the transition to 24-hour call.
Conclusions: Transitioning from a night float to a 24-hour call coverage system led to improved junior resident case volume and elective time without detrimental effect on patient-related outcomes.
abstract_id: PUBMED:19276278
Screening for pancreatic cancer in high-risk individuals: a call for endoscopic ultrasound. Pancreatic cancer has a dismal prognosis, and early detection through screening is likely to be our best hope to improve survival. The relatively low incidence of pancreatic cancer and the insensitive screening techniques available currently render this approach prohibitively expensive and inefficient in the general population. Screening has begun, however, in the subset of patients at the highest risk of disease, such as those with inherited risk due to familial multiorgan cancer syndromes or in familial groupings of pancreatic cancer with yet unidentified genetic abnormalities, termed familial pancreatic cancer. Screening is currently done at several large centers in the world, each with a unique multidisciplinary approach and series of screening tests. Endoscopic ultrasound has emerged as the most promising imaging test given its high sensitivity and potential for tissue sampling. However, this potential to detect and cure early lesions should be carefully balanced with the risk of overtreatment, especially in view of the morbidity and mortality of pancreatic surgery. Additional experience to help determine the best screening strategy is greatly needed. Screening should therefore be done at experienced centers with multidisciplinary teams of specialists and in the context of research protocols.
Answer: Based on the abstract from PUBMED:23335035, Illinois has shown limited evidence of regionalizing pancreatic procedures. Although there was a 23% increase in absolute case volume of pancreatic procedures from time period 1 (2000-2004) to time period 2 (2005-2009), and inpatient mortality decreased, the increase in high-volume hospital (HVH) case volume cannot be solely attributed to regionalization. This is because there was only a modest decrease in procedure volume at non-HVHs. The study concludes that there is an opportunity for Illinois hospitals to implement strategies such as selective referral to improve mortality after pancreatic resection, suggesting that regionalization efforts could be further optimized.
Instruction: Death of terminally ill patients on a stretcher in the emergency department: a French speciality?
Abstracts:
abstract_id: PUBMED:12415451
Death of terminally ill patients on a stretcher in the emergency department: a French speciality? Objectives: To determine the frequency, modalities of admission and management of terminally ill patients who died on a stretcher in an emergency department (ED).
Design And Setting: Retrospective study in an ED of a university hospital.
Methods: Current place of residence, modalities of admission in the ED, mortality probability scores and type of management were extracted for each patient in the terminal stage of chronic disease who died on a stretcher in our ED during a 3-year period.
Results: Of 159 deaths observed in the ED, 56 (35%) concerned terminally ill patients. The illness was a malignancy in 22 cases, a neurological disease in 22 cases and a cardiopulmonary disease in 12 cases. Most of the patients were referred by their regular doctor. Seventy-two percent of the malignancy patients were living at home, 55% of the neurological patients came from nursing facilities and 58% of the cardio-respiratory patients came from the hospital. In 73%, 83% and 23% of the patients with malignancy, cardiopulmonary and neurological diseases, respectively, admission was related to the evolution of the chronic disease. Severity of illness on admission was similar whatever the disease. Request for compassionate end-of-life care was expressed in only 12.5%. At the ED, 91% of patients with neurological diseases received palliative support care. Supportive therapy was undertaken in one third of patients with malignancy or cardiopulmonary disease.
Conclusion: An ED may be used as a place for dying for some terminally ill patients. This could be related to the legal opposition to withdrawal or withholding of life-support therapies as well as the absence of guidelines from scientific bodies.
abstract_id: PUBMED:25007797
The preference of place of death and its predictors among terminally ill patients with cancer and their caregivers in China. Purpose: To describe the preference of place of death among Chinese patients with cancer and their caregivers and to identify factors associated with the preference.
Methods: A prospective questionnaire study was conducted in terminally ill patients with cancer and their caregivers. Questions covered sociodemographic characteristics, information about patients' diseases, and patients' preference of place of death.
Results: Home was the first choice for 53.64% of the 522 patients; 51.34% of participating caregivers chose home as the preferred place of death, and patient-caregiver dyads achieved 84.10% agreement. Patients who lived in rural areas, had a lower education level, and lived with relatives expressed a stronger preference to die at home.
Conclusion: This study described the preference of place of death and its potential predictive factors in terminally ill patients with cancer in mainland China.
abstract_id: PUBMED:36269286
Quality of death and its related factors in terminally ill patients, as perceived by nurses. Background: Little is known about the quality of death of terminally ill patients in hospitals in Thailand.
Aim: To examine the quality of death of terminally ill patients and investigate correlations between the quality of death and the organisational climate; nurses' palliative care knowledge; nurses' palliative care practice; and nurses' perceptions of barriers in providing palliative care.
Methods: A cross-sectional survey design was used. Data collected among 281 nurses were analysed by descriptive statistics, Pearson correlation and Spearman's rank correlation.
Results: The overall quality of death of terminally ill patients in the hospital was moderate. Organisational climate and nurses' palliative care practice positively correlate with terminally ill patients' quality of death. Nurses' difficulty in providing palliative care negatively correlates with terminally ill patients' quality of death.
Conclusion: Promoting an organisational climate and enhancing nurses' palliative care practice may improve the quality of death of terminally ill patients in this hospital.
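The survey above is analysed with Pearson correlation and Spearman's rank correlation. A purely illustrative sketch of computing both on the same paired ratings (the scores below are invented, not the survey data):

    # Illustrative only: invented ratings, not data from the nurse survey.
    from scipy.stats import pearsonr, spearmanr

    organisational_climate = [3.1, 3.5, 2.8, 4.0, 3.7, 2.5, 3.9, 3.3]
    quality_of_death = [2.9, 3.4, 2.6, 3.8, 3.9, 2.4, 3.6, 3.1]

    r, p_r = pearsonr(organisational_climate, quality_of_death)        # linear association
    rho, p_rho = spearmanr(organisational_climate, quality_of_death)   # rank-based association
    print(f"Pearson r = {r:.2f} (p = {p_r:.3f}); Spearman rho = {rho:.2f} (p = {p_rho:.3f})")

Spearman's rank version is often preferred for ordinal questionnaire scores, since it only assumes a monotonic relationship.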
abstract_id: PUBMED:29845917
The Understanding of Death in Terminally Ill Cancer Patients in China: An Initial Study. Patients' needs and rights are key to delivering state-of-the-art modern nursing care. It is especially challenging to provide proper nursing care for patients who are reaching the end of life (EOL). In Chinese nursing practice, the perceptions and expectations of these EOL patients are not well known. This article explores the feelings and wishes of 16 terminally ill Chinese cancer patients who were going through the dying process. An open-ended questionnaire with eight items was used to interview the 16 patients, and the responses were then analyzed by a combined approach employing grounded theory and interpretive phenomenological analysis. Four dimensions were explored: first, patients' attitudes towards death, such as accepting the fact calmly, striving to survive, and the desire for control; second, the care desired during the dying process, including avoiding excessive treatment and dying with dignity; third, the degree of the patient's acceptance of death; and fourth, the consequences of death. This cognitive study offers a fundamental understanding of the perceptions of death of terminally ill cancer patients from the Chinese culture. Their attitude toward death was complex. They did not prefer aggressive treatment, and most of them had given a great deal of thought to their death.
abstract_id: PUBMED:34622494
'Who would even want to talk about death?' A qualitative study on nursing students' experiences of talking about death with terminally ill patients with cancer. Objectives: This study aimed to describe nursing students' experiences of talking about death with terminally ill patients with cancer.
Methods: The study adopted a qualitative design, and participants (n = 28) were final-year undergraduate nursing students. Data were collected by conducting in-depth semi-structured face-to-face interviews using a pilot-tested interview guide. The researchers followed a systematic data analysis procedure which is an appropriate method of analysis when aiming to create knowledge based on experiences and meanings from cross-case analysis.
Results: The responses of the nursing students were subsumed under the following three themes: (1) 'balance on the rope', (2) 'who would even want to talk about death' and (3) 'need to talk but …'. The findings suggest that many nursing students do not believe that they are competent enough to talk about death with terminally ill patients with cancer, even though they believe it is essential to end-of-life care.
Conclusion: The findings underscore the importance of examining students' perspectives on death, which not only shapes their experiences of caring for terminally ill patients but also influences the quality of care. Further, students feel unprepared for talking to terminally ill patients with cancer and require support to avoid ignoring calls to speak about death.
abstract_id: PUBMED:29622389
Dying in hospital: Qualitative study among caregivers of terminally ill patients who are transferred to the emergency department. Introduction: Most people in France die in the hospital, even though a majority would like to die at home. These end-of-life hospital admissions sometimes occur in the emergency setting, in the hours preceding death.
Objective: To understand the motives that incite main natural caregivers to transfer terminally ill patients at the end of life to the emergency department.
Methods: A qualitative study was performed among caregivers of terminally ill patients receiving palliative care and living at home, and who died within 72hours of being admitted to the emergency department of the University Hospital of Besançon, France.
Results: Eight interviews were performed; average duration 48 minutes. The caregivers described the difficult conditions of daily life, characterised by marked anguish about what the future might hold. Although they were aware that the patient was approaching the end of life, the caregivers did not imagine the death at all. The transfer to the emergency department was considered a logical event, occurring in continuity with the home care, and was not in any way criticised, even long after death had occurred. Overall, the caregivers had a positive opinion of how the end-of-life accompaniment went.
Discussion: Difficulty in imagining death at home is underpinned by its unpredictable nature, and by the accumulation of suffering and anguish in the caregiver. Hospital admission and medicalisation of death help to channel the caregiver's anguish. In order to improve end-of-life accompaniment, it is mandatory to make home management more reassuring for the patient and their family.
abstract_id: PUBMED:35861203
Denying Death: The Terminally Critically Ill. We describe a subgroup of the Chronically Critically Ill (CCI) we call the Terminally Critically Ill as demonstrated by terminally ill cancer patients. These cancer patients, though clearly terminally ill and with relatively short prognoses, can be kept alive for extended periods with medical interventions aimed at treating the complications of the cancer and cancer treatment. Such interventions can be painful, exhausting, costly and may interfere with attending to end of life concerns. We present a typical (composite) case and discuss ethical concerns regarding this growing subgroup of the chronically critically ill patients for whom death is routinely denied and delayed for extended periods.
abstract_id: PUBMED:14593687
Relationships with death: the terminally ill talk about dying. This article describes a qualitative study exploring the experiences of terminally ill patients and their families as they lived with the inevitability of death. Frustrated by the dominant discourse surrounding the culture of dying--namely that of Elisabeth Kübler-Ross's stage theory--I sought to revisit the experiences of the terminally ill by talking directly with them. Instead of focusing on how people reacted to the introduction of death into their lives, this research attended to how the dying began relating to life and death differently as a result of death's presence. Through an analysis of ethnographically collected data, the meanings participants constructed around their experiences were explored--culminating in the creation of seven "relationships" that participants shared with death.
abstract_id: PUBMED:27413014
Cancer Transitional Care for Terminally Ill Cancer Patients Can Reduce the Number of Emergency Admissions and Emergency Department Visits. Background: Emergency admissions and emergency department visits (EAs/EDVs) have been used as quality indicators of home care in terminally ill cancer patients. We established a cancer transitional care (CTC) program to monitor and manage terminally ill cancer patients receiving care at home. The purpose of this study was to evaluate the effectiveness of CTC by the frequency of EAs/EDVs.
Methods: In a retrospective chart review, we identified 133 patients with cancer admitted to our department, of whom 56 met study eligibility criteria. The CTC consisted of 1 or more of the following components: (1) a 24-hour hotline for general physicians or home care nurses to reach hospital-based physicians, (2) periodic phone calls from an expert hospital-based oncology nurse to home care medical staff, and (3) reports sent to our department from home care medical staff. The primary outcome variable was the frequency of EAs/EDVs.
Results: There were 32 EAs/EDVs and 69 planned admissions during the observation period. In the last 30 days of life, 16 patients (28.6%) had 1 EA/EDV and none had multiple EAs/EDVs. Compared with previous studies, our study found a similar or lower frequency of EAs/EDVs.
Conclusion: Our findings suggest that the implementation of CTC reduces the number of EAs/EDVs by replacing them with planned admissions. Further prospective studies to evaluate CTC are warranted.
abstract_id: PUBMED:12832958
When death is imminent: where terminally ill patients with cancer prefer to die and why. End-of-life care strives to honor terminally ill patients' preferences regarding the way of dying. Scholars defined one domain of quality of dying and death as dying at the place of one's choice. Despite efforts over more than two decades and more than 40 studies to investigate the influencing factors associated with the place of death for terminally ill patients with cancer, there is a notable lack of empirical data examining the reasons why terminally ill patients with cancer choose a specific setting as their preferred place of death. An exploratory and descriptive study was conducted to explore the preferences of terminally ill patients with cancer for the place of death, to identify the reasons for selecting a preferred place of death, and to examine the importance of dying at a place one prefers. A convenience sample of 180 terminally ill patients with cancer was recruited from four tertiary care hospitals and two home care programs in Connecticut. Nearly 90% of the subjects preferred to die at home. Quality of life, availability and ability of family caregivers, concerns of being a burden to others, long-standing relationships with healthcare providers, and quality of healthcare were the major considerations in decision making regarding the place of death. Terminally ill patients with cancer acknowledged dying at their preferred place of death as highly important. Effective nursing interventions need developing to facilitate death at a place that is in accord with dying patients' preferences.
Answer: The abstract from PUBMED:12415451 discusses a retrospective study conducted in a French university hospital's emergency department (ED) to determine the frequency and management of terminally ill patients who died on a stretcher in the ED. The study found that 35% of the 159 deaths observed in the ED during a 3-year period involved terminally ill patients. The majority of these patients were referred by their regular doctor, and the illnesses included malignancies, neurological diseases, and cardiopulmonary diseases. The study suggests that the ED may be used as a place for dying for some terminally ill patients, which could be related to legal opposition to withdrawal or withholding of life-support therapies as well as the absence of guidelines from scientific bodies.
Whether this amounts to a "French specialty" cannot be determined from the study itself, since it does not directly compare the situation in France with that in other countries. What it does show is that the ED is used as a place of dying for some terminally ill patients in France, and it suggests that this may be related to country-specific factors, notably the legal opposition to withdrawing or withholding life-support therapies and the absence of guidelines from scientific bodies.
Instruction: Can opportunities be enhanced for vaccinating children in home visiting programs?
Abstracts:
abstract_id: PUBMED:25379052
Home visiting programs for HIV-affected families: a comparison of service quality between volunteer-driven and paraprofessional models. Home visiting is a popular component of programs for HIV-affected children in sub-Saharan Africa, but its implementation varies widely. While some home visitors are lay volunteers, other programs invest in more highly trained paraprofessional staff. This paper describes a study investigating whether additional investment in paraprofessional staffing translated into higher quality service delivery in one program context. Beneficiary children and caregivers at sites in KwaZulu-Natal, South Africa were interviewed after 2 years of program enrollment and asked to report about their experiences with home visiting. Analysis focused on intervention exposure, including visit intensity, duration and the kinds of emotional, informational and tangible support provided. Few beneficiaries reported receiving home visits in program models primarily driven by lay volunteers; when visits did occur, they were shorter and more infrequent. Paraprofessional-driven programs not only provided significantly more home visits, but also provided greater interaction with the child, communication on a larger variety of topics, and more tangible support to caregivers. These results suggest that programs that invest in compensation and extensive training for home visitors are better able to serve and retain beneficiaries, and they support a move toward establishing a professional workforce of home visitors to support vulnerable children and families in South Africa.
abstract_id: PUBMED:30911204
Parent Engagement in a Head Start Home Visiting Program Predicts Sustained Growth in Children's School Readiness. This study examined three components of parent engagement in an enriched Head Start home visiting program: intervention attendance, the working alliance between parents and home visitors, and parents' use of program materials between sessions. The study identified those family and child characteristics that predicted the different components of parent engagement, and the study tested whether those components predicted sustained growth in children's school readiness skills across four years, from preschool through second grade. Ninety-five low-income parents with four year-old children attending Head Start (56% white; 26% black; 20% Latino; 44% girls) were randomly assigned to receive the home visiting program. Assessments included home visitor, parent, and teacher ratings, as well as interviewer observations and direct testing of children; data analyses relied on correlations and hierarchical multiple regression equations. Results showed that baseline family characteristics, like warm parent-child interactions, and child functioning predicted both working alliance and use of program materials, but only race/ethnicity predicted intervention attendance. The use of program materials was the strongest predictor of growth in children's literacy skills and social adjustment at home during the intervention period itself. In contrast, working alliance emerged as the strongest predictor of growth in children's language arts skills, attention skills, and social adjustment at school through second grade, two years after the end of the home visiting intervention. To maximize intervention effectiveness across school readiness domains over time, home visiting programs need to support multiple components of parent engagement, particularly working alliance and the use of program materials between sessions.
abstract_id: PUBMED:31598130
Innovative Research Methods to Advance Precision in Home Visiting for More Efficient and Effective Programs. Home visiting during early childhood can improve a range of outcomes for children and families. As evidence-based models are implemented across the nation, two questions have emerged. First, can home visiting improve outcomes more efficiently? Second, can overall effects be strengthened for specific subgroups of families? For the past several decades, research focused on testing the average effects of home visiting models on short- to long-term outcomes has found small impacts. These effects are not the same for all families. The field needs new evidence produced in new ways to overcome these challenges. In this article, we provide an overview of the evidence in this field, including what works and for whom. Next, we explain precision approaches to various fields and how this approach could be used in home visiting programs. Research on precision home visiting focuses on the ingredients of home visiting models, collaborating with practitioners to identify the ingredients and testing them on near-term outcomes, and using innovative study designs to learn more quickly what works best for which families. We conclude by proposing four pillars of research that will help achieve precision home visiting services.
abstract_id: PUBMED:36737527
Referrals to Home Visiting: Current Practice and Unrealized Opportunities. Introduction: Evidence supports ongoing investment in maternal and early childhood home visiting in the US. Yet, a small fraction of eligible families accesses these services, and little is known about how families are referred. This report describes priority populations for home visiting programs, the capacity of programs to enroll more families, common sources of referrals to home visiting, and sources from which programs want to receive more referrals.
Methods: We conducted a secondary analysis of data from a national web-based survey of members of the Home Visiting Applied Research Collaborative (HARC), focusing on a small set of items that directly addressed study aims. Survey respondents (N = 87) represented local programs implementing varying home visiting models diverse in size and geographic context.
Results: Programs prioritized enrollment of pregnant women; parents with mental health, substance abuse or intimate partner violence concerns; teen parents; and children with developmental delays or child welfare involvement. Most respondents reported capacity to enroll more families in their programs. Few reported receiving any referrals from pediatric providers, child welfare, early care and education, or TANF/other social services. Most desired more referrals, especially from healthcare providers, WIC, and TANF/other social services.
Discussion: Given that most programs have the capacity to serve more families, this study provides insights regarding providers with whom home visiting programs might strengthen their referral systems.
abstract_id: PUBMED:26149681
Can opportunities be enhanced for vaccinating children in home visiting programs? A population-based cohort study. Background: Home visiting programs focused on improving early childhood environments are commonplace in North America. A goal of many of these programs is to improve the overall health of children, including promotion of age appropriate vaccination. In this study, population-based data are used to examine the effect of a home visiting program on vaccination rates in children.
Methods: Home visiting program data from Manitoba, Canada were linked to several databases, including a provincial vaccination registry to examine vaccination rates in a cohort of children born between 2003 and 2009. Propensity score weights were used to balance potential confounders between a group of children enrolled in the program (n = 4,562) and those who were eligible but not enrolled (n = 5,184). Complete and partial vaccination rates for one and two year old children were compared between groups, including stratification into area-level income quintiles.
Results: Complete vaccination rates from birth to age 1 and 2 were higher for those enrolled in the Families First program [Average Treatment Effect Risk Ratio (ATE RR) 1.06 (95% CI 1.03-1.08) and 1.10 (95% CI 1.05-1.15), respectively]. No significant differences were found between groups having at least one vaccination at age 1 or 2 [ATE RR 1.01 (95% CI 1.00-1.02) and 1.00 (95% CI 1.00-1.01), respectively]. The interaction between program and income quintiles was not statistically significant, suggesting that the program effect did not differ by income quintile.
Conclusions: Home visiting programs have the potential to increase vaccination rates for children enrolled, despite limited program content directed towards this end. Evidence-based program enhancements have the potential to increase these rates further, however more research is needed to inform policy makers of optimal approaches in this regard, especially with respect to cost-effectiveness.
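The Manitoba study balances confounders with propensity score weights before comparing vaccination rates between enrolled and eligible-but-not-enrolled children. The following is a minimal sketch of one common implementation, inverse-probability-of-treatment weighting for an average treatment effect (ATE) risk ratio; the column names and simulated data are assumptions for illustration, not the registry variables, and the study's exact weighting and confidence-interval methods may differ:

    # Hedged sketch of IPTW for an ATE risk ratio; columns and data are simulated,
    # not the Manitoba registry variables, and the study's weighting may differ.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "enrolled": rng.integers(0, 2, 1000),        # home visiting program enrollment (0/1)
        "income_q": rng.integers(1, 6, 1000),        # area-level income quintile
        "maternal_age": rng.normal(27, 5, 1000),
        "vaccinated": rng.integers(0, 2, 1000),      # complete vaccination by age 2 (0/1)
    })

    # 1) propensity score: P(enrolled | confounders)
    X = df[["income_q", "maternal_age"]]
    ps = LogisticRegression().fit(X, df["enrolled"]).predict_proba(X)[:, 1]

    # 2) ATE weights: 1/ps for enrolled children, 1/(1-ps) for the comparison group
    w = np.where(df["enrolled"] == 1, 1 / ps, 1 / (1 - ps))

    # 3) weighted vaccination rates and their ratio (the ATE risk ratio)
    enr = df["enrolled"] == 1
    rr = (np.average(df.loc[enr, "vaccinated"], weights=w[enr])
          / np.average(df.loc[~enr, "vaccinated"], weights=w[~enr]))
    print("ATE risk ratio:", round(rr, 2))

Confidence intervals for the weighted risk ratio would normally come from robust or bootstrap standard errors, which this sketch omits.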
abstract_id: PUBMED:32292264
Engagement in home visiting: An overview of the problem and how a coalition of researchers worked to address this cross-model concern. Home visiting is a widely supported intervention strategy for parents of young children who are in need of parenting skill improvement. However, parental engagement limits the potential public health impact of home visiting, as these programs often have low enrollment rates, as well as high attrition and low completion rates for those who enroll in these programs. The Coalition for Research on Engagement and Well-being (CREW) provided support for three pilot projects representing different home visiting models and aspects of engagement. The results of these pilot projects are presented in this special section. The purpose of this commentary is to introduce CREW and highlight the importance of a cross-model project to improve engagement among home visiting programs.
abstract_id: PUBMED:26576591
Assessing the Deployment of Home Visiting: Learning from a State-Wide Survey of Home Visiting Programs. Objectives: Large-scale planning for health and human services programming is required to inform effective public policy as well as deliver services to meet community needs. The present study demonstrates the value of collecting data directly from deliverers of home visiting programs across a state. This study was conducted in response to the Patient Protection and Affordable Care Act, which requires states to conduct a needs assessment of home visiting programs for pregnant women and young children to receive federal funding. In this paper, we provide a descriptive analysis of a needs assessment of home visiting programs in Ohio.
Methods: All programs in the state that met the federal definition of home visiting were included in this study. Program staff completed a web-based survey with open- and close-ended questions covering program management, content, goals, and characteristics of the families served.
Results: Consistent with the research literature, program representatives reported great diversity with regard to program management, reach, eligibility, goals, content, and services delivered, yet consistently conveyed great need for home visiting services across the state.
Conclusions: Results demonstrate quantitative and qualitative assessments of need have direct implications for public policy. Given the lack of consistency highlighted in Ohio, other states are encouraged to conduct a similar needs assessment to facilitate cross-program and cross-state comparisons. Data could be used to outline a capacity-building and technical assistance agenda to ensure states can effectively meet the need for home visiting in their state.
abstract_id: PUBMED:36683604
Policy measures to expand home visiting programs in the postpartum period. The postpartum period is characterized by a myriad of emotional, physical, and spiritual changes, while the psychosocial health of new parents is also at risk. More alarmingly, the majority of pregnancy-related deaths in the U.S. occur during this critical period. The higher maternal mortality rate is further stratified by dramatic racial and ethnic variations: Black, brown, and American Indian/Alaska Native indigenous people have 3-4x higher rates of pregnancy-related deaths and severe morbidity than their White, non-Hispanic, and Asian/Pacific Islander counterparts. This policy brief explores how expanding evidence-based home visiting programs (HVPs) and strengthening reimbursement policies that invest in such programs can optimize the scope of care in the postpartum period.
abstract_id: PUBMED:35982773
Maternal Warning Signs Education During Home Visiting: Results from a Formative Evaluation in Maryland. Background: Maternal mortality rate reviews have identified the need for improved patient education regarding the warning signs of maternal complications to reduce preventable deaths. Maternal and child home visiting programs have the potential to deliver this education in communities.
Aims: This study sought to evaluate the baseline provision of warning signs education among home visiting programs in Maryland and to assess the acceptability of and preferences for warning signs education materials among program staff.
Materials And Methods: This sequential exploratory, mixed-methods study included qualitative interviews and focus group discussions followed by a web-based survey of all home visiting programs providing early postpartum visits in Maryland.
Results: Twenty-five home visiting program staff took part in qualitative data collection, and survey responses were submitted by a manager from 40 of 58 eligible home visiting programs (69% response rate). All survey respondents agreed that home visiting programs should provide warning signs education and more than 80% of programs provided some warning signs education during pregnancy and the postpartum period. Printed pamphlets were provided by 68% of programs for pregnancy complications and 43% for postpartum complications. Only 33% of respondents were satisfied with their existing warnings signs education materials; 98% were interested in new illustrated pamphlets and 88% were interested in education videos. Qualitative participants considered pamphlets with simple designs, limited text, and visuals, as the most accessible for home visiting clients.
Conclusions: There are opportunities to strengthen and expand warning signs education in Maryland through home visiting programs using new printed and video education materials.
abstract_id: PUBMED:24187122
Enhancing home visiting with mental health consultation. Home visiting programs have been successful in engaging and enrolling families who are at high risk for stress, depression, and substance abuse. However, many of these mothers may not be receiving mental health services because home visitors lack the knowledge and skills to identify mental health or determine how to appropriately address these problems. In response, a growing number of home visiting programs are expanding their capacity by integrating a mental health provider into their ongoing operations. This approach, referred to as early childhood mental health consultation, involves a partnership between a professional consultant with early childhood mental health expertise and home visiting or family support programs, staff, and families. This integrated model holds the promise of promoting parent and child behavioral health by enhancing the capacity of home visitors to identify and appropriately address the unmet mental health needs of children and families. The article highlights efforts under way in several federally funded Linking Actions for Unmet Needs in Children's Health Project sites where local programs are testing the effectiveness of this model.
Answer: Yes, opportunities can be enhanced for vaccinating children in home visiting programs. A study conducted in Manitoba, Canada, which linked home visiting program data with a provincial vaccination registry, found that complete vaccination rates for children up to age 1 and 2 were higher for those enrolled in the Families First program. This suggests that home visiting programs have the potential to increase vaccination rates for children enrolled, even though the program content may not be directly focused on vaccination promotion. The study also indicates that evidence-based program enhancements could potentially increase vaccination rates further, but more research is needed to inform policymakers of the most effective and cost-efficient approaches (PUBMED:26149681).
Moreover, integrating mental health consultation into home visiting programs can enhance the capacity of home visitors to identify and address unmet health needs, including potentially the need for vaccinations. This integrated model, which involves a partnership between a mental health consultant with expertise in early childhood and home visiting staff, can promote parent and child behavioral health and may indirectly support increased vaccination rates by addressing overall health and well-being (PUBMED:24187122). |
Instruction: Can the unenhanced phase be eliminated from dual-phase CT angiography for chest pain?
Abstracts:
abstract_id: PUBMED:25415694
Can the unenhanced phase be eliminated from dual-phase CT angiography for chest pain? Implications for diagnostic accuracy in acute aortic intramural hematoma. Objective: The purposes of this study were to retrospectively assess the frequency of acute aortic intramural hematoma and evaluate whether the elimination of the unenhanced imaging acquisition series from the dual-phase MDCT angiography (CTA) protocol for chest pain might affect diagnostic accuracy in detecting intramural hematoma and justify the reduced radiation dose.
Materials And Methods: From October 2006 to November 2012, 306 patients (mean age, 65.0 years) with acute chest pain underwent emergency CTA with a 64-MDCT scanner. Two experienced cardiovascular radiologists, blinded to the diagnosis, assessed the images in two different sessions in which enhanced (single-phase CTA) and combined unenhanced and contrast-enhanced (dual-phase CTA) findings were evaluated. Sensitivity, specificity, and accuracy along with 95% CIs were calculated. Surgical and pathologic diagnoses, including findings at clinical follow-up and any subsequent imaging modality, were used as reference standards.
Results: Thirty-six patients were suspected of having intramural hematoma; 16 patients underwent both surgery and transesophageal echocardiography (TEE), and the remaining 20 underwent TEE. Single-phase CTA showed a higher number of false-negative and false-positive results than dual-phase CTA. With intramural hematoma frequency of 12% (95% CI, 8.38-15.91%), sensitivity, specificity, and accuracy were 94.4% (81.3-99.3%), 99.3% (97.4-99.9%), and 98.7% (96.7-99.6%) for combined dual-phase CTA and 68.4% (51.4-82.5%), 96.3% (93.2-98.2%), and 92.8% (89.3-95.4%) for single-phase CTA. Dual-phase was significantly better than single-phase CTA with respect to sensitivity (p=0.002), specificity (p=0.008), overall accuracy (p<0.001), and interrater agreement (p=0.001).
Conclusion: The frequency of acute aortic intramural hematoma is similar to that previously reported. The acquisition of unenhanced images during the chest pain dual-phase CTA protocol significantly improves diagnostic accuracy over single-phase CTA.
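The sensitivity, specificity and accuracy quoted above are proportions from the 2x2 table of CTA reads against the reference standard, each reported with a 95% confidence interval. A minimal sketch of that calculation follows; the cell counts are back-calculated guesses that roughly reproduce the reported dual-phase proportions, not the study's published table, and the exact (Clopper-Pearson) interval is an assumption about the method used:

    # Minimal sketch; counts are approximate back-calculations, not the published 2x2 table.
    from statsmodels.stats.proportion import proportion_confint

    tp, fn, fp, tn = 34, 2, 2, 268   # hypothetical dual-phase CTA reads vs reference standard

    def prop_with_ci(k, n):
        lo, hi = proportion_confint(k, n, alpha=0.05, method="beta")  # Clopper-Pearson (exact)
        return k / n, lo, hi

    for name, k, n in [("sensitivity", tp, tp + fn),
                       ("specificity", tn, tn + fp),
                       ("accuracy", tp + tn, tp + fn + fp + tn)]:
        est, lo, hi = prop_with_ci(k, n)
        print(f"{name}: {est:.1%} (95% CI {lo:.1%}-{hi:.1%})")

Exact binomial intervals are a common choice when one cell is small, as with the 36 suspected intramural hematomas here.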
abstract_id: PUBMED:31679380
Coronary CT Angiography and Dual-Energy Computed Tomography in Ischemic Heart Disease Suspected Patients. Background: Advanced computed tomography (CT) scanners enable concurrent assessment of coronary artery anatomy and myocardial perfusion. The purpose of this study was to assess dual-energy CT images in a group of patients suspected for ischemic heart disease and to evaluate agreement of cardiac computed tomography perfusion (CTP) images with CT angiography results in a single dual-energy computed tomography (DECT) acquisition.
Methods: Thirty patients (mean age: 53.8 ± 12.9 years, 60% male) with angina pectoris or atypical chest pain, suspected of ischemic heart disease, were investigated using a 384-row detector CT scanner in dual-energy mode (DECT). Firstly, resting CTP images were acquired, and then from the same raw data, computed tomography angiography (CTA) studies were reconstructed for stenosis detection. CT-based dipyridamole-stress myocardial perfusion imaging was then performed in patients who exhibited coronary stenosis >50% or had a myocardial bridge (MB). A color-coded iodine map was used for evaluation of myocardial perfusion defects using the 17-segment model. Two independent blinded readers analyzed all images for stenosis and myocardial perfusion defects. Differences in myocardial iodine content (mg/mL) were assessed with parametric tests. Kappa agreement was calculated between the results of the two methods in cardiac scans.
Results: All 30 CT angiograms were evaluated and assessment ability was 100% for combined CTA/CTP. According to the combined CT examination, 17 patients (56.7%) exhibited significant coronary stenosis and/or deep MB (DMB). A total of 510 myocardial segments and 90 vascular territories were analyzed. Coronary CTA demonstrated significant stenosis in 22 vessels (24.4% of all main coronary arteries) among 12 patients (40%), DMB in 6 vessels (6.7% of all main coronary arteries) in 17 out of 30 patients (56.7%). Twenty-eight out of 90 vascular territories (31.1%) and 41 out of 510 segments (8%) showed reversible perfusion defects on stress DECT. Kappa agreement between CTA and CTP results in whole heart was 0.79 (95% confidence interval=0.57-1). There were significant differences in mean iodine concentration between ischemic (0.59 ± 0.07 mg/mL) and normal segments (2.2 ± 0.15) with P < 0.001.
Conclusion: Agreement of CTA and CTP in whole heart and in LAD considering DMB and significant CAD together were good to excellent; however, considering sole pathologies, most of the agreements were weak (<0.5). DECT with iodine quantification may provide a valuable method in comparison with previous methods for identifying both coronary stenosis and myocardial ischemia.
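The agreement between CTA and CTP above is summarised with a kappa statistic, which corrects raw agreement for the agreement expected by chance. A short illustrative sketch (the per-territory calls below are invented, not the study's reads):

    # Illustrative per-territory calls; invented labels, not the study data.
    from sklearn.metrics import cohen_kappa_score

    # 1 = abnormal (significant stenosis / perfusion defect), 0 = normal
    cta = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0]
    ctp = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1]

    print("kappa =", round(cohen_kappa_score(cta, ctp), 2))

By the usual conventions, values around 0.8 indicate good to excellent agreement and values below 0.5 weak agreement, which matches how the authors read their whole-heart and single-pathology results.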
abstract_id: PUBMED:28168514
Limits of the possible: diagnostic image quality in coronary angiography with third-generation dual-source CT. Background: The usage of coronary CT angiography (CTA) is appropriate in patients with acute or chronic chest pain; however the diagnostic accuracy may be challenged with increased Agatston score (AS), increased heart rate, arrhythmia and severe obesity. Thus, we aim to determine the potential of the recently introduced third-generation dual-source CT (DSCT) for CTA in a 'real-life' clinical setting.
Methods: Two hundred and sixty-eight consecutive patients (age: 67 ± 10 years; BMI: 27 ± 5 kg/m²; 61% male) undergoing clinically indicated CTA with DSCT were included in the retrospective single-center analysis. A contrast-enhanced volume dataset was acquired in sequential (SSM) (n = 151) or helical scan mode (HSM) (n = 117). Coronary segments were classified in diagnostic or non-diagnostic image quality. A subset underwent invasive angiography to determine the diagnostic accuracy of CTA.
Results: SSM (96.8 ± 6%) and HSM (97.5 ± 8%) provided no significant differences in the overall diagnostic image quality. However, AS had significant influence on diagnostic image quality exclusively in SSM (B = 0.003; p = 0.0001), but not in HSM. Diagnostic image quality significantly decreased in SSM in patients with AS ≥2,000 (p = 0.03). SSM (sensitivity: 93.9%; specificity: 96.7%; PPV: 88.6%; NPV: 98.3%) and HSM (sensitivity: 97.4%; specificity: 94.3%; PPV: 86.0%; NPV: 99.0%) provided comparable diagnostic accuracy (p = n.s.). SSM yielded significantly lower radiation doses as compared to HSM (2.1 ± 2.0 vs. 5.1 ± 3.3 mSv; p = 0.0001) in age and BMI-matched cohorts.
Conclusion: SSM in third-generation DSCT enables significant dose savings and provides robust diagnostic image quality in patients with AS ≤2000 independent of heart rate, heart rhythm or obesity.
abstract_id: PUBMED:25121042
Comparison of 128-Slice Dual Source CT Coronary Angiography with Invasive Coronary Angiography. Background: Coronary artery disease (CAD) is one of the leading causes of morbidity and mortality in India as well as worldwide, and the last decade has seen a steep rise in the incidence of CAD in India. Direct visualization of the coronary arteries by invasive catheterization still represents the cornerstone of the evaluation of CAD. Cardiac imaging is a challenge of the 21st century and is being answered by 128-slice dual-source CT, as it has good temporal resolution, high scanning speed, as well as low radiation dose.
Aim: To assess the diagnostic accuracy of 128-slice dual source CT Cardiac Angiography in comparison with Conventional Catheter Cardiac Angiography.
Materials And Methods: Forty patients attending the cardiology OPD with complaints of chest pain and suspected of having CAD were evaluated by CT coronary angiography and conventional invasive catheter coronary angiography, and the results were compared. All patients were checked for serum creatinine and ECG before the angiography. Computed tomography (CT) coronary angiography was done using a SIEMENS 128-slice Dual Source Flash Definition CT scanner in either retrospective or prospective mode depending on the heart rate of the patient. Oral/IV beta-blockers were used whenever required.
Results: Coronary arteries were assessed as per the 17-segment AHA model. A total of 600/609 segments were evaluable in the 40 suspected patients on CT coronary angiography, of which 21 were false positives and 8 were false negatives, yielding a specificity of 95.12%, a sensitivity of 95.26%, and a positive predictive value of 88.46%.
Conclusion: Non-invasive assessment of CAD is now possible with high accuracy on 128-slice dual source CT scanner.
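Unlike sensitivity and specificity, the positive and negative predictive values reported above depend on how common disease is in the scanned population. A hedged worked example applying Bayes' rule with the reported sensitivity and specificity and an assumed 40% per-segment prevalence (the prevalence and the ppv_npv helper are assumptions for illustration, not figures from the study):

    # Hedged sketch: PPV/NPV from sensitivity, specificity and an assumed prevalence.
    # The 40% prevalence is an assumption for illustration, not taken from the study.
    def ppv_npv(sens, spec, prev):
        ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
        npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
        return ppv, npv

    ppv, npv = ppv_npv(sens=0.9526, spec=0.9512, prev=0.40)
    print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")

At lower prevalence the PPV falls and the NPV rises, which is one reason a per-segment PPV can sit well below the sensitivity and specificity.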
abstract_id: PUBMED:35220940
Clinical value of resting cardiac dual-energy CT in patients suspected of coronary artery disease. Background: Rest/stress myocardial CT perfusion (CTP) has high diagnostic value for coronary artery disease (CAD), but the additional value of resting CTP especially dual-energy CTP (DE-CTP) beyond coronary CT angiography (CCTA) in chest pain triage remains unclear. We aimed to evaluate the diagnostic accuracy of resting myocardial DE-CTP, and additional value in detecting CAD beyond CCTA (obstructive stenosis: ≥ 50%) in patients suspected of CAD.
Methods: In this prespecified subanalysis of 54 patients, we included patients suspected of CAD referred to invasive coronary angiography (ICA). Diagnostic accuracy of resting myocardial DE-CTP in detecting myocardial perfusion defects was assessed using resting 13N-ammonia positron emission tomography (PET) as the gold standard. Diagnostic accuracy of cardiac dual-energy CT in detecting flow-limiting stenoses (justifying revascularization) was assessed for CCTA combined with resting myocardial DE-CTP, using ICA plus resting 13N-ammonia PET as the gold standard. The CCTA and DE-CTP datasets were derived from a single-phase scan performed in dual-energy mode.
Results: For detecting myocardial perfusion defects, DE-CTP demonstrated high diagnostic accuracy with a sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) of 95.52%, 85.93%, and 0.907 on a per-segment basis. For detecting flow-limiting stenoses by CCTA alone, sensitivity, specificity, and AUC were 100%, 56.47%, and 0.777 respectively on a per-vessel basis. For detecting flow-limiting stenoses by CCTA combined with resting myocardial DE-CTP, sensitivity, specificity, and AUC were 96.10%, 95.29% and 0.956 respectively on a per-vessel basis. Additionally, CCTA combined with resting myocardial DE-CTP detected five patients (9%) with no obstructive stenosis but with myocardial perfusion defects confirmed by ICA plus 13N-ammonia PET.
Conclusions: Resting cardiac DE-CTP demonstrates a high diagnostic accuracy in detecting myocardial perfusion defects and provides an additional clinical value by reducing rates of false-positive and false-negative patients beyond CCTA in patients suspected of CAD.
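The AUC values in the abstract above summarise discrimination across all possible cut-offs, whereas the sensitivity and specificity figures refer to one chosen cut-off. The sketch below illustrates that distinction with scikit-learn on synthetic placeholder data; the variable names and the 1.0 cut-off are assumptions for illustration, not the study's actual data or thresholds.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Synthetic per-segment data: 1 = perfusion defect on the PET reference, 0 = normal.
rng = np.random.default_rng(0)
reference = rng.integers(0, 2, size=200)
ct_score = reference * 2.0 + rng.normal(0.0, 0.8, size=200)  # placeholder CT-derived score

auc = roc_auc_score(reference, ct_score)  # cut-off-free measure of discrimination

# Dichotomise at an arbitrary cut-off to obtain sensitivity and specificity.
ct_positive = (ct_score >= 1.0).astype(int)
tn, fp, fn, tp = confusion_matrix(reference, ct_positive).ravel()
print(f"AUC {auc:.3f}, sensitivity {tp / (tp + fn):.2%}, specificity {tn / (tn + fp):.2%}")
```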
abstract_id: PUBMED:30539378
Evaluation of the proximal coronary arteries in suspected pulmonary embolism: diagnostic images in 51% of patients using non-gated, dual-source CT pulmonary angiography. Purpose: This retrospective study reports the frequency and severity of coronary artery motion on dual-source high-pitch (DSHP), conventional pitch single-source (SS), and dual-source dual-energy (DE) CT pulmonary angiography (CTPA) studies.
Methods: Two hundred eighty-eight consecutive patients underwent CTPA scans for suspected pulmonary embolism between September 1, 2013 and January 31, 2014. One hundred ninety-four DSHP scans, 57 SS scans, and 37 DE scans were analyzed. Coronary arteries were separated into nine segments, and coronary artery motion was qualitatively scored using a scale from 1 to 4 (non-interpretable to diagnostic with no motion artifacts). Signal intensity, noise, and signal to noise ratio (SNR) of the aorta, main pulmonary artery, and paraspinal muscles were also assessed.
Results: DSHP CTPA images had significantly less coronary artery motion, with 30.1% of coronary segments being fully evaluable compared to 4.2% of SS segments and 7.9% of DE segments (p < 0.05 for all comparisons). When imaging with DSHP, the proximal coronary arteries were more frequently evaluable than distal coronary arteries (51% versus 11.3%, p < 0.001). Without ECG synchronization and heart rate control, the distal left anterior descending coronary artery and mid right coronary artery remain infrequently interpretable (7% and 9%, respectively) on DSHP images.
Conclusions: DSHP CTPA decreases coronary artery motion artifacts and allows for full evaluation of the proximal coronary arteries in 51% of cases. The study highlights the increasing importance of proximal coronary artery review when interpreting CTPA for acute chest pain.
abstract_id: PUBMED:35747220
FFR-CT strengthens multi-disciplinary reporting of CT coronary angiography. The utility of computed tomography (CT) coronary angiography (CTCA) is underpinned by its excellent sensitivity and negative-predictive value for coronary artery disease (CAD), although it lacks specificity. Invasive coronary angiography (ICA) and invasive fractional flow reserve (FFR) are gold-standard investigations for coronary artery disease; however, they are resource intensive and associated with a small risk of serious complications. FFR-CT has been shown to have comparable performance to FFR measurements and has the potential to reduce unnecessary ICAs. The aim of this study is to briefly review FFR-CT, as an investigational modality for stable angina, and to share 'real-world' UK data, in consecutive patients, following the initial adoption of FFR-CT in our district general hospital in 2016. A retrospective analysis was performed of a previously published consecutive series of 157 patients referred for CTCA by our group in a single, non-interventional, district general hospital. Our multi-disciplinary team (MDT) recorded the likely definitive outcome following CTCA, namely intervention or optimised medical management. FFR-CT analysis was performed on 24 consecutive patients where the MDT recommendation was for ICA. The CTCA + MDT findings, FFR-CT and ICA ± FFR were correlated along with the definitive outcome. In comparing CTCA + MDT, FFR-CT and definitive outcome, in terms of whether a percutaneous coronary intervention was performed, FFR-CT was significantly correlated with definitive outcome (r=0.471, p=0.036) as opposed to CTCA + MDT (r=0.378, p=0.07). In five cases (21%, 5/24), FFR-CT could have altered the management plan by reclassification of coronary stenosis. FFR-CT of 60 coronary artery vessels (83%, 60/72) (mean FFR-CT ratio 0.82 ± 0.10) compared well with FFR performed on 18 coronary vessels (mean 0.80 ± 0.11) (r=0.758, p=0.0013). In conclusion, FFR-CT potentially adds value to MDT outcome of CTCA, increasing the specificity and predictive accuracy of CTCA. FFR-CT may be best utilised to investigate CTCAs where there is potentially prognostically significant moderate disease or severe disease to maximise cost-effectiveness. These data could be used by other NHS trusts to best incorporate FFR-CT into their diagnostic pathways for the investigation of stable chest pain.
abstract_id: PUBMED:18042339
Innovations in emergency management of chest pain with CT angiography Recent technological innovations have expanded the diagnostic capabilities of multislice CT angiography. Emergency chest pain management is thereby optimised, while remaining guided by the clinical presentation. Aortic CT angiography allows the diagnosis and classification of aortic dissection or intramural haematoma. It also shows extension to the thoracoabdominal aortic branches and visceral involvement. The diagnosis of pulmonary embolism is complemented by CT evaluation of its severity. Chest pain caused by pulmonary or digestive diseases can also be documented. Late-phase imaging appears useful for diagnosing acute myocardial pathology.
abstract_id: PUBMED:29255687
Coronary CT angiography-future directions. Clinical applications of coronary CT angiography (CTA) will typically be based on the method's very high sensitivity to identify coronary stenosis if image quality is good and if the pre-test likelihood of the patients is in the lower range. Guidelines of national and international cardiac societies are starting to incorporate coronary CTA into their recommendations for the management of patients with stable and acute chest pain. Initial data show that in the future, the use of coronary CTA may not only be able to replace other forms of diagnostic testing, but, in fact, may improve patient outcome. In this article, a perspective is provided on the future directions of coronary CTA.
abstract_id: PUBMED:17285271
Dual-source CT in chest pain diagnosis Background: With the depiction of pulmonary arteries, coronary arteries, and the aorta, CT angiography of the chest offers a comprehensive diagnostic work-up of unclear chest pain. The aim of this study was to assess the diagnostic accuracy of dual-source CT in this patient group.
Materials And Methods: A total of 47 patients suffering from unclear chest pain were examined with a Siemens Somatom Definition. Volume and flow of contrast media (Ultravist, Schering) were adapted to the body weight. The examinations were evaluated with regard to image quality and contrast opacification and to the diagnostic accuracy with reference to the final clinical diagnosis.
Results: Adequate contrast opacification was achieved in all examinations. The depiction of the coronary arteries was diagnostic in all cases. The cause of chest pain could be identified in 41 cases. Among the diagnoses were coronary and myocardial pathologies, valvular disease, aortic aneurysms and dissections, pulmonary embolism, and pneumonic consolidation.
Conclusion: DSCT angiography of the chest offers a very good image quality even at high heart rates so that a high diagnostic accuracy is achieved in patients with acute chest pain.
Answer: Based on the provided abstracts, the unenhanced phase in dual-phase CT angiography for chest pain plays a significant role in certain diagnostic scenarios. Specifically, in the context of acute aortic intramural hematoma, the inclusion of unenhanced images significantly improves diagnostic accuracy over single-phase CT angiography. The study by PUBMED:25415694 found that dual-phase CT angiography, which includes both unenhanced and contrast-enhanced series, had significantly better sensitivity, specificity, overall accuracy, and interrater agreement compared to single-phase CT angiography. Therefore, the elimination of the unenhanced phase could potentially affect the diagnostic accuracy for detecting intramural hematoma.
However, the other abstracts focus on different aspects of CT angiography and do not directly address the necessity of the unenhanced phase for chest pain evaluation. For instance, PUBMED:31679380 discusses the agreement between coronary CT angiography and computed tomography perfusion images in dual-energy CT, PUBMED:28168514 evaluates the diagnostic image quality in coronary angiography with third-generation dual-source CT, PUBMED:25121042 compares 128-slice dual-source CT coronary angiography with invasive coronary angiography, PUBMED:35220940 assesses the clinical value of resting cardiac dual-energy CT in patients suspected of coronary artery disease, PUBMED:30539378 reports on the evaluation of proximal coronary arteries in suspected pulmonary embolism using non-gated, dual-source CT pulmonary angiography, PUBMED:35747220 discusses the role of FFR-CT in multi-disciplinary reporting of CT coronary angiography, PUBMED:18042339 mentions innovations in emergency management of chest pain with CT angiography, PUBMED:29255687 provides a perspective on future directions of coronary CT angiography, and PUBMED:17285271 assesses the diagnostic accuracy of dual-source CT in patients with unclear chest pain.
In conclusion, while the unenhanced phase may not be necessary for all types of evaluations in CT angiography for chest pain, it appears to be crucial for the accurate diagnosis of acute aortic intramural hematoma (PUBMED:25415694). Therefore, the decision to eliminate the unenhanced phase should be made with consideration of the specific clinical scenario and the potential impact on diagnostic accuracy. |
Instruction: "Is this a dagger I see before me?
Abstracts:
abstract_id: PUBMED:37837107
Soft DAgger: Sample-Efficient Imitation Learning for Control of Soft Robots. This paper presents Soft DAgger, an efficient imitation learning-based approach for training control solutions for soft robots. To demonstrate the effectiveness of the proposed algorithm, we implement it on a two-module soft robotic arm involved in the task of writing letters in 3D space. Soft DAgger uses a dynamic behavioral map of the soft robot, which maps the robot's task space to its actuation space. The map acts as a teacher and is responsible for predicting the optimal actions for the soft robot based on its previous state action history, expert demonstrations, and current position. This algorithm achieves generalization ability without depending on costly exploration techniques or reinforcement learning-based synthetic agents. We propose two variants of the control algorithm and demonstrate that good generalization capabilities and improved task reproducibility can be achieved, along with a consistent decrease in the optimization time and samples. Overall, Soft DAgger provides a practical control solution to perform complex tasks in fewer samples with soft robots. To the best of our knowledge, our study is an initial exploration of imitation learning with online optimization for soft robot control.
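For readers unfamiliar with DAgger, the Soft DAgger abstract above builds on the general idea of dataset aggregation: roll out the current learner, have a teacher (here, the dynamic behavioral map that links the robot's task space to its actuation space) relabel the states the learner actually visited, and retrain on the growing dataset. The sketch below is a generic DAgger-style loop, not the authors' Soft DAgger implementation; `env_rollout` and `expert_action` are hypothetical placeholders standing in for a simulator and a teacher.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def dagger(env_rollout, expert_action, n_iters=5, horizon=100):
    """Generic DAgger-style imitation learning loop (illustrative only).

    env_rollout(policy, horizon) -> list of visited states (1-D arrays)
    expert_action(state)         -> teacher's action label for that state
    """
    states, actions = [], []
    policy = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)

    # Iteration 0: bootstrap the policy purely from expert demonstrations.
    demo_states = env_rollout(expert_action, horizon)
    states.extend(demo_states)
    actions.extend(expert_action(s) for s in demo_states)
    policy.fit(np.array(states), np.array(actions))

    for _ in range(n_iters):
        # Roll out the current learner, then ask the teacher to relabel the
        # states the learner actually visited -- the core DAgger idea.
        visited = env_rollout(lambda s: policy.predict(np.array([s]))[0], horizon)
        states.extend(visited)
        actions.extend(expert_action(s) for s in visited)
        policy.fit(np.array(states), np.array(actions))  # retrain on aggregated data
    return policy
```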
abstract_id: PUBMED:37363060
The genome sequence of the Grey Dagger, Acronicta psi (Linnaeus, 1758). We present a genome assembly from an individual male Acronicta psi (the Grey Dagger; Arthropoda; Insecta; Lepidoptera; Noctuidae). The genome sequence is 405 megabases in span. The whole assembly is scaffolded into 31 chromosomal pseudomolecules, including the assembled Z sex chromosome. The mitochondrial genome has also been assembled and is 15.4 kilobases long.
abstract_id: PUBMED:28775658
Representation of [Formula: see text]-Bernstein polynomials in terms of [Formula: see text]-Jacobi polynomials. A representation of [Formula: see text]-Bernstein polynomials in terms of [Formula: see text]-Jacobi polynomials is obtained.
abstract_id: PUBMED:27026912
Convergence in [Formula: see text]-quasicontinuous posets. In this paper, we present one way to generalize [Formula: see text]-convergence and [Formula: see text]-convergence of nets for arbitrary posets by use of the cut operator instead of joins. Some convergence theoretical characterizations of [Formula: see text]-continuity and [Formula: see text]-quasicontinuity of posets are given. The main results are: (1) a poset P is [Formula: see text]-continuous if and only if the [Formula: see text]-convergence in P is topological; (2) P is [Formula: see text]-quasicontinuous if and only if the [Formula: see text]-convergence in P is topological.
abstract_id: PUBMED:29670323
A note on [Formula: see text]-Bernstein polynomials and their applications based on [Formula: see text]-calculus. Nowadays [Formula: see text]-Bernstein polynomials have been studied in many different fields such as operator theory, CAGD, and number theory. In order to obtain the fundamental properties and results of Bernstein polynomials by using [Formula: see text]-calculus, we give basic definitions and results related to [Formula: see text]-calculus. The main purpose of this study is to investigate a generating function for [Formula: see text]-Bernstein polynomials. By using an approach similar to that of Goldman et al. in (SIAM J. Discrete Math. 28(3):1009-1025, 2014), we derive some new identities, relations, and formulas for the [Formula: see text]-Bernstein polynomials. Also, we plot the generating function of [Formula: see text]-Bernstein polynomials for some selected p and q values.
abstract_id: PUBMED:32282936
Experimental and theoretical studies of frequency response function of dagger-shaped atomic force microscope cantilever in different immersion environments. In this study, the amplitude of the frequency response functions of vertical and rotational displacements and the resonant frequency of a dagger-shaped atomic force microscope cantilever have been investigated. To increase the accuracy of the theoretical model, all necessary details of the cantilever and sample surface have been taken into account. Carbon tetrachloride (CCl4), methanol, acetone, water and air have been considered as the immersion environments. In most cases, both the presence and the absence of the tip-sample interaction force have been studied. For a sample cantilever immersed in air, the Euler-Bernoulli and Timoshenko beam theories have been compared. The results indicate that the tip-sample interaction force raises the resonant frequency. Increasing the liquid viscosity leads to a decrease in the resonant frequency and the amplitude of the frequency response functions of vertical and rotational displacements. Increasing the lengths of the rectangular and tapered parts decreases the resonant frequency and the amplitude of the frequency response functions of vertical and rotational displacements. Increasing the cantilever thickness increases the resonant frequency and the amplitude of the frequency response functions of vertical and rotational displacements. The theoretical model for air and water has been compared with experimental work, and the results show good agreement.
abstract_id: PUBMED:34433726
Dagger Sign in Ankylosing Spondylitis. N/A
abstract_id: PUBMED:28824262
Approximation properties of Chlodowsky variant of [Formula: see text] Bernstein-Stancu-Schurer operators. In the present paper, we introduce the Chlodowsky variant of [Formula: see text] Bernstein-Stancu-Schurer operators which is a generalization of [Formula: see text] Bernstein-Stancu-Schurer operators. We also discuss its Korovkin-type approximation properties and rate of convergence.
abstract_id: PUBMED:23776306
Spectrum of the [Formula: see text]-Neumann Laplacian on the Fock space. The spectrum of the [Formula: see text]-Neumann Laplacian on the Fock space [Formula: see text] is explicitly computed. It turns out that it consists of positive integer eigenvalues, each of which is of infinite multiplicity. Spectral analysis of the [Formula: see text]-Neumann Laplacian on the Fock space is closely related to Schrödinger operators with magnetic fields and to the complex Witten Laplacian.
abstract_id: PUBMED:29290666
k-fractional integral trapezium-like inequalities through [Formula: see text]-convex and [Formula: see text]-convex mappings. In this paper, a new general identity for differentiable mappings via k-fractional integrals is derived. By using the concept of [Formula: see text]-convexity, [Formula: see text]-convexity and the obtained equation, some new trapezium-like integral inequalities are established. The results presented provide extensions of those given in earlier works.
Answer: The question "Is this a dagger I see before me?" is a famous line from William Shakespeare's play "Macbeth." However, in the context of the provided abstracts, the term "dagger" appears in various scientific contexts unrelated to the literary reference.
For instance, the term "dagger" is used in the context of a specific shape of an atomic force microscope cantilever, as mentioned in the study of frequency response functions of such a cantilever in different immersion environments (PUBMED:32282936). Additionally, "Dagger Sign" is mentioned in relation to Ankylosing Spondylitis, although no further information is provided in the abstract (PUBMED:34433726).
The term also appears in the context of the Grey Dagger, a species of moth (Acronicta psi) whose genome sequence is presented in one abstract (PUBMED:37363060), and in the name of "Soft DAgger", an imitation learning algorithm for controlling soft robots (PUBMED:37837107).
Other abstracts concern mathematical topics, such as parameterized Bernstein and Jacobi polynomials, convergence in quasicontinuous posets, the Neumann Laplacian on the Fock space, k-fractional integral inequalities, and generalized convex mappings; in several of these abstracts the parameter symbol is not rendered and appears only as "[Formula: see text]" (PUBMED:28775658, PUBMED:27026912, PUBMED:29670323, PUBMED:28824262, PUBMED:23776306, PUBMED:29290666).
None of the abstracts provided are related to the Shakespearean use of the word "dagger" as a physical or hallucinated weapon. Instead, they all pertain to scientific research in various fields, including robotics, genomics, mathematics, and physics. |
Instruction: Are nursing home patients with dementia diagnosis at increased risk for inadequate pain treatment?
Abstracts:
abstract_id: PUBMED:16035124
Are nursing home patients with dementia diagnosis at increased risk for inadequate pain treatment? Background: Mentally impaired and demented nursing home patients are at increased risk of undertreatment for pain. We wanted to examine pain assessment and complaints and pain treatment of nursing home patients according to mental state, and with special regard to treatment of patients with dementia diagnosis and cognitively impaired patients who did not have a dementia diagnosis.
Methods: Cross sectional study from three nursing homes in Bergen, Norway including 125 persons (median age 84 years), living permanently in a nursing home. Diagnoses and prescribed and administered analgesic drugs were recorded. An experienced nurse interviewed nurses in charge and patients regarding presence of pain during the last week. Patients who were able to answer whether they had experienced pain during the last week were categorised as communicative. Cognitive function was assessed by means of the Abbreviated Mental Test.
Results: Seventeen percent of the patients were cognitively intact, 30% cognitively impaired and 54% had a dementia diagnosis. Forty-seven percent of communicative patients complained of pain, and nurses reported pain in 67% of patients. Twenty-nine percent of the patients had received scheduled analgesics during the last week: cognitively intact patients 38%, cognitively impaired 30%, demented 25% (p = 0.53). Twenty percent were given analgesics PRN: cognitively intact patients 33%, cognitively impaired 27%, demented 12% (p = 0.05). Logistic regression analyses revealed that patients with dementia diagnosis were less likely to receive PRN medication [adjusted odds ratio (AOR) 0.22, 95% confidence interval (CI) 0.06-0.76] compared to mentally impaired patients. Regarding scheduled medication there was no difference between the groups. Nurses' opinion of pain was a significant factor for receiving analgesic drugs (scheduled: AOR 3.95, 95% CI 1.48-10.5; PRN: AOR 3.80, 95% CI 1.28-11.3).
Conclusions: A label of dementia may bias the interpretation of pain cues of demented patients, while complaints from cognitively impaired patients may be taken for granted. This may contribute to lower use of PRN medication in demented patients compared to cognitively impaired patients.
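The adjusted odds ratios above come from a multivariable logistic regression in which receipt of analgesia is modelled on cognitive-status group together with covariates such as nurses' opinion of pain. A minimal sketch of that kind of analysis in Python (statsmodels) follows; the data frame, column names, and covariates are hypothetical placeholders rather than the study's actual dataset, and the simulated data will of course not reproduce the reported estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical patient-level data; all column names and values are placeholders.
rng = np.random.default_rng(1)
n = 125
df = pd.DataFrame({
    "prn_analgesic": rng.integers(0, 2, n),                      # 1 = received PRN analgesia
    "group": rng.choice(["intact", "impaired", "dementia"], n),  # cognitive status
    "nurse_reports_pain": rng.integers(0, 2, n),
    "age": rng.normal(84, 7, n),
})

# Logistic regression with 'impaired' as the reference group; exponentiated
# coefficients are adjusted odds ratios, exponentiated confidence limits their 95% CIs.
model = smf.logit(
    "prn_analgesic ~ C(group, Treatment(reference='impaired')) + nurse_reports_pain + age",
    data=df,
).fit(disp=False)
print(pd.concat([np.exp(model.params).rename("OR"), np.exp(model.conf_int())], axis=1))
```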
abstract_id: PUBMED:28456075
Associations between pain and depression in nursing home patients at different stages of dementia. Background: Pain is associated with depression in nursing home patients with dementia. It is, however, unclear whether pain increases depression. Therefore we evaluated the prospective associations between pain and depressive symptoms in nursing home patients at different stages of cognitive impairment.
Methods: Two longitudinal studies were combined, including 931 patients (≥65 years) from 65 nursing homes. One study assessed patients at admission, with 6-month follow-up (2012-2014). The other study assessed residents with varying lengths of stay, with 4-month follow-up (2014-2015). Patients were assessed with the Mini-Mental State Examination, the Mobilisation-Observation-Behaviour-Intensity-Dementia-2 Pain Scale, and the Cornell Scale for Depression in Dementia.
Results: At baseline, 343 patients (40% of 858 assessed) had moderate to severe pain, and 347 (38% of 924) had depression. Pain increased the risk of depression (OR 2.35, 95% CI 1.76-3.12). Using mixed model analyses, we found that a 1-point increase in pain was associated with a .48 increase in depression (p<.001). This association persisted in mild, moderate, and severe cognitive impairment. In those recently admitted, depressive symptoms decreased over time, and having less pain at follow-up was associated with a decrease in depressive symptoms (within-subject effect; p=.042).
Limitations: The two cohorts had different inclusion criteria, which may reduce generalisability. The study design does not allow conclusions on causality.
Conclusions: Pain and depressive symptoms are associated in patients with dementia. Because reduced pain is associated with less depressive symptoms, these patients should be assessed regularly for untreated pain. The benefit of analgesic treatment should be weighed carefully against the potential for adverse effects.
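The "1-point increase in pain was associated with a .48 increase in depression" estimate above is the fixed-effect slope from a mixed model fitted to repeated measurements clustered within patients. The sketch below shows the general form of such a model with statsmodels; the variable names and simulated values are placeholders chosen only so the example runs, and the slope of roughly 0.5 is built into the fake data rather than taken from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical longitudinal data: two visits per patient with MOBID-2-like pain
# scores and CSDD-like depression scores; all names and values are placeholders.
rng = np.random.default_rng(2)
n_patients, n_visits = 100, 2
df = pd.DataFrame({
    "patient_id": np.repeat(np.arange(n_patients), n_visits),
    "visit": np.tile(np.arange(n_visits), n_patients),
    "pain": rng.uniform(0, 10, n_patients * n_visits),
})
df["depression"] = 5 + 0.5 * df["pain"] + rng.normal(0, 2, len(df))  # built-in slope ~0.5

# Random-intercept model: the fixed-effect coefficient on 'pain' estimates the
# change in depression score per 1-point increase in pain, accounting for
# repeated measurements within each patient.
mixed = smf.mixedlm("depression ~ pain + visit", data=df, groups=df["patient_id"]).fit()
print(mixed.summary())
```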
abstract_id: PUBMED:24703946
Cancer-related pain and symptoms among nursing home residents: a systematic review. Context: Many older nursing home (NH) residents with cancer experience pain and distressing symptoms. Although some develop cancer during their time in the institution, an increasing number are admitted during their final stages of their lives. Numerous studies have evaluated various treatment approaches, but how pain and symptoms are assessed and managed in people with cancer with and without dementia is unclear.
Objectives: The objective of this review was to summarize the evidence on cancer-related symptoms among NH residents with and without dementia.
Methods: We systematically searched the PubMed (1946-2012), Embase (1974-2012), CINAHL (1981-2012), AgeLine, and Cochrane Library (1998-2012) databases using the search terms neoplasms, cancer, tumor, and nursing home. The inclusion criteria were studies including NH residents with a diagnosis of cancer and outcome measures including pain and cancer-related symptoms.
Results: We identified 11 studies (cross-sectional, longitudinal, clinical trial, and qualitative studies). Ten studies investigated the prevalence and treatment of cancer-related symptoms such as vomiting, nausea, urinary tract infections, and depression. Studies clearly report a high prevalence of pain and reduced prescribing and treatment, regardless of the cognitive status. Only one small study included people with cancer and a diagnosis of dementia. Studies of new cancer diagnoses in NHs could not be identified.
Conclusion: This review clearly reports a high prevalence of pain and reduced drug prescribing and treatment among NH residents with cancer. This issue appears to be most critical among people with severe dementia, emphasizing the need for better guidance and evidence on pain assessment for these individuals.
abstract_id: PUBMED:29487556
Long-Term Pain Treatment Did Not Improve Sleep in Nursing Home Patients with Comorbid Dementia and Depression: A 13-Week Randomized Placebo-Controlled Trial. Objective: Previous research indicates that pain treatment may improve sleep among nursing home patients. We aimed to investigate the long-term effect of pain treatment on 24-h sleep patterns in patients with comorbid depression and dementia. Design: A 13-week, multicenter, parallel-group, double-blind, placebo-controlled randomized clinical trial conducted between August 2014 and September 2016. Setting: Long-term patients from 47 nursing homes in Norway. Participants: We included 106 patients with comorbid dementia and depression according to the Mini Mental Status Examination (MMSE) and the Cornell Scale for Depression in Dementia (CSDD). Intervention: Patients who were not using analgesics were randomized to receive either paracetamol (3 g/day) or placebo tablets. Those who already received pain treatment were randomized to buprenorphine transdermal system (maximum 10 μg/h/7 days) or placebo transdermal patches. Measurements: Sleep was assessed continuously for 7 days by actigraphy, at baseline and in week 13. Total sleep time (TST), sleep efficiency (SE), sleep onset latency (SOL), wake after sleep onset (WASO), early morning awakening (EMA), and number of wake bouts (NoW) were evaluated. In addition, daytime total sleep time (DTS) was estimated. Pain was assessed with Mobilization-Observation-Behavior-Intensity-Dementia-2 Pain Scale (MOBID-2). Results: The linear mixed model analyses for TST, SE, SOL, WASO, EMA, NoW and DTS showed no statistically significant differences between patients who received active pain treatment and those who received placebo. Post hoc subgroup analyses showed that there were no statistically significant differences between active treatment and placebo from baseline to week 13 in patients who were in pain (MOBID-2 ≥ 3) at baseline, or in patients who had poor sleep (defined as SE < 85%) at baseline. Patients who received active buprenorphine showed an increase in TST and SE compared to those who received active paracetamol. Conclusion: The main analyses showed that long-term pain treatment did not improve sleep as measured with actigraphy. Compared to paracetamol, TST and SE increased among patients who received buprenorphine. This could indicate that some patients had beneficial effects from the most potent pain treatment. However, based on the present findings, long-term pain treatment is not recommended as a strategy to improve sleep. Clinical Trial https://clinicaltrials.gov/ct2/show/NCT02267057.
abstract_id: PUBMED:32039556
Nursing Staff Needs in Providing Palliative Care for Persons With Dementia at Home or in Nursing Homes: A Survey. Purpose: This study aimed to evaluate what types and forms of support nursing staff need in providing palliative care for persons with dementia. Another aim was to compare the needs of nursing staff with different educational levels and working in home care or in nursing homes.
Design: A cross-sectional, descriptive survey design was used.
Methods: A questionnaire was administered to a convenience sample of Dutch nursing staff working in the home care or nursing home setting. Data were collected from July through October 2018. Quantitative survey data were analyzed using descriptive statistics. Data from two open-ended survey questions were investigated using content analysis.
Findings: The sample comprised 416 respondents. Nursing staff with different educational levels and working in different settings indicated largely similar needs. The highest-ranking needs for support were in dealing with family disagreement in end-of-life decision making (58%), dealing with challenging behaviors (41%), and recognizing and managing pain (38%). The highest-ranking form of support was peer-to-peer learning (51%). If respondents had more time to do their work, devoting personal attention would be a priority.
Conclusions: Nursing staff with different educational levels and working in home care or in nursing homes endorsed similar needs in providing palliative care for persons with dementia and their loved ones.
Clinical Relevance: It is critical to understand the specific needs of nursing staff in order to develop tailored strategies. Interventions aimed at increasing the competence of nursing staff in providing palliative care for persons with dementia may target similar areas to support a heterogeneous group of nurses and nurse assistants, working in home care or in a nursing home.
abstract_id: PUBMED:27321869
Signs of Imminent Dying and Change in Symptom Intensity During Pharmacological Treatment in Dying Nursing Home Patients: A Prospective Trajectory Study. Objectives: To investigate whether it is possible to determine signs of imminent dying and change in pain and symptom intensity during pharmacological treatment in nursing home patients, from day perceived as dying and to day of death.
Design: Prospective, longitudinal trajectory trial.
Setting: Forty-seven nursing homes within 35 municipalities of Norway.
Participants: A total of 691 nursing home patients were followed during the first year after admission and 152 were assessed carefully in their last days of life.
Measurements: Time between admission and day of death, and symptom severity by Edmonton symptom assessment system (ESAS), pain (mobilization-observation-behavior-intensity-dementia-2), level of dementia (clinical dementia rating scale), physical function (Karnofsky performance scale), and activities of daily living (physical self-maintenance scale).
Results: Twenty-five percent died during the first year after admission. Increased fatigue (logistic regression, odds ratio [OR] 1.8, P = .009) and poor appetite (OR 1.2, P = .005) were significantly associated with being able to identify the day a person was imminently dying, which was possible in 61% of the dying (n = 82). On that day, the administration of opioids, midazolam, and anticholinergics increased significantly (P < .001), and was associated with amelioration of symptoms, such as pain (mixed-models linear regression, 60% vs 46%, P < .001), anxiety (44% vs 31%, P < .001), and depression (33% vs 15%, P < .001). However, most symptoms were still prevalent at day of death, and moderate to severe dyspnea and death rattle increased from 44% to 53% (P = .040) and 8% to 19% (P < .001), respectively. Respiratory symptoms were not associated with opioids or anticholinergics.
Conclusion: Pharmacological treatment ameliorated distressing symptoms in dying nursing home patients; however, most symptoms, including pain and dyspnea, were still common at day of death. Results emphasize critical needs for better implementation of guidelines and staff education.
Trial Registration: ClinicalTrials.govNCT01920100.
abstract_id: PUBMED:11899483
Treatment of insomnia in demented nursing home patients: a review Insomnia and sleep fragmentation are characteristic of the sleep of these patients. The literature was reviewed to identify the factors disturbing the sleep of demented nursing home patients and the interventions improving their sleep quality. A Medline search over the period 1966-2000 was performed, yielding 22 research articles. Admission to a nursing home is associated with sleep disturbances caused by patient problems (e.g. pain), care routines (e.g. the nightly nursing round) and the environment (e.g. noise). There are indications that the use of hypnotics in nursing home patients is not always effective and increases the risk of falls. There are several ways to reduce hypnotic consumption in nursing homes. Non-pharmacological interventions to decrease sleep disturbances caused by environmental factors have a favourable although weak effect on sleep itself; reducing nightly noise does not necessarily improve sleep quality. Light therapy seems to be the most effective non-pharmacological method to strengthen the circadian sleep/wake rhythm. Managing insomnia without medication probably requires a two-track approach: detection and elimination of disturbing environmental factors, and implementation of an adequate method to strengthen the circadian sleep/wake rhythm.
abstract_id: PUBMED:28793905
The Liverpool Care Pathway: discarded in cancer patients but good enough for dying nursing home patients? A systematic review. Background: The Liverpool Care Pathway (LCP) is an interdisciplinary protocol, aiming to ensure that dying patients receive dignified and individualized treatment and care at the end-of-life. LCP was originally developed in 1997 in the United Kingdom from a model of cancer care successfully established in hospices. It has since been introduced in many countries, including Norway. The method was withdrawn in the UK in 2013. This review investigates whether LCP has been adapted and validated for use in nursing homes and for dying people with dementia.
Methods: This systematic review is based on a systematic literature search of MEDLINE, CINAHL, EMBASE, and Web of Science.
Results: The search identified 12 studies, but none describing an evidence-based adaptation of LCP to nursing home patients and people with dementia. No studies described the LCP implementation procedure, including strategies for discontinuation of medications, procedures for nutrition and hydration, or the testing of such procedures in nursing homes. No effect studies addressing the assessment and treatment of pain and symptoms that include dying nursing home patients and people with dementia are available.
Conclusion: LCP has not been adapted to nursing home patients and people with dementia. Current evidence, i.e. studies investigating the validity and reliability in clinically relevant settings, is too limited for the LCP procedure to be recommended for the population at hand. There is a need to develop good practice in palliative medicine, Advance Care Planning, and disease-specific recommendations for people with dementia.
abstract_id: PUBMED:33959072
Sleep and its Association With Pain and Depression in Nursing Home Patients With Advanced Dementia - a Cross-Sectional Study. Objective: Previous research suggests a positive association between pain, depression and sleep. In this study, we investigate how sleep correlates with varying levels of pain and depression in nursing home (NH) patients with dementia. Materials and methods: Cross-sectional study (n = 141) with sleep-related data, derived from two multicenter studies conducted in Norway. We included NH patients with dementia according to the Mini-Mental State Examination (MMSE ≤ 20) from the COSMOS trial (n = 46) and the DEP.PAIN.DEM trial (n = 95) whose sleep was objectively measured with actigraphy. In the COSMOS trial, NH patients were included if they were ≥65 years of age and with life expectancy >6 months. In the DEP.PAIN.DEM trial, patients were included if they were ≥60 years and if they had depression according to the Cornell Scale for Depression in Dementia (CSDD ≥ 8). In both studies, pain was assessed with the Mobilization-Observation-Behavior-Intensity-Dementia-2 Pain Scale (MOBID-2), and depression with CSDD. Sleep parameters were total sleep time (TST), sleep efficiency (SE), sleep onset latency (SOL), wake after sleep onset (WASO), early morning awakening (EMA), daytime total sleep time (DTS) and time in bed (TiB). We registered use of sedatives, analgesics, opioids and antidepressants from patient health records and adjusted for these medications in the analyses. Results: Mean age was 86.2 years and 76.3% were female. Hierarchical regressions showed that pain was associated with higher TST and SE (p < 0.05), less WASO (p < 0.01) and more DTS (p < 0.01). More severe dementia was associated with more WASO (p < 0.05) and TiB (p < 0.01). More severe depression was associated with less TST (p < 0.05), less DTS (p < 0.01) and less TiB (p < 0.01). Use of sedative medications was associated with less TiB (p < 0.05). Conclusion: When sleep was measured with actigraphy, NH patients with dementia and pain slept more than patients without pain, in terms of higher total sleep time. Furthermore, their sleep efficiency was higher, indicating that the patients had more sleep within the time they spent in bed. Patients with more severe dementia spent more time awake during the time spent in bed. Furthermore, people with more severe depression slept less during the daytime and had less total sleep time. Controlling for concomitant medication use did not affect the obtained results.
abstract_id: PUBMED:25687926
Markers of Impaired Decision Making in Nursing Home Residents: Assessment by Nursing Home Staff in a Population-Based Study. Introduction: Many nursing home residents have cognitive impairment that affects their decision making. In order to identify potential markers of impaired decision making, we investigated the association between a range of nursing home resident characteristics and impaired decision making in a population-based sample.
Methods: Participants were 13,013 residents in the 2004 National Nursing Home Survey. We used logistic regression to determine the association between resident characteristics (ie, gender, age, race, mood, recent pain, falls, fractures, or hospitalizations, length of stay, number of activities of daily living (ADL) requiring help, and diagnoses of dementia, anxiety disorders, and depression) and impaired (vs independent) decision making.
Results: After controlling for depression and anxiety diagnoses, as well as gender, age, race, and recent hospitalization or pain, characteristics associated with impaired decision making included depressed, sad, or anxious mood ["mild" odds ratio (OR) = 1.39, 95% confidence interval (CI) = 1.23-1.58; "severe" OR = 2.69, 95% CI = 2.27-3.20]; diagnosed dementia or living on a dementia hall (OR = 5.07, 95% CI = 4.52-5.67); number of ADL requiring assistance (with 5 ADL, OR = 10.69, 95% CI = 6.82-16.75); length of nursing home stay [101-365 days (OR = 1.60, 95% CI = 1.36-1.89); 366 days-2 years (OR = 1.60, 95% CI = 1.34-1.90); >2 years (OR = 2.25, 95% CI = 1.92-2.63)]; and history of falls or fractures in the last 6 months (OR = 1.19, 95% CI = 1.07-1.32). Residents reporting pain in the last week were less likely to have impaired decision making (OR = 0.58, 95% CI = 0.52-0.66).
Conclusions: We found several independent markers of impaired decision making in nursing home residents, including depressed, sad, or anxious mood (independent of depression or anxiety diagnosis); dementia; and greater need for ADL assistance. Some of these factors, in particular mood, are modifiable and addressing them may help improve decision making. These markers should be explored further to help identify residents with impaired decision making.
Answer: Yes, nursing home patients with a dementia diagnosis are at increased risk for inadequate pain treatment. A study found that patients with dementia were less likely to receive PRN (as needed) medication compared to mentally impaired patients without a dementia diagnosis, suggesting that a label of dementia may bias the interpretation of pain cues, leading to lower use of PRN medication in these patients (PUBMED:16035124). Another study highlighted that pain is associated with depression in nursing home patients with dementia, and untreated pain could contribute to depressive symptoms, emphasizing the need for regular assessment and careful treatment of pain in this population (PUBMED:28456075).
Furthermore, a systematic review reported a high prevalence of pain and reduced drug prescribing and treatment among nursing home residents with cancer, which appears to be most critical among people with severe dementia, indicating a need for better guidance on pain assessment for individuals with dementia (PUBMED:24703946). Another study found that long-term pain treatment did not improve sleep in nursing home patients with comorbid dementia and depression, suggesting that pain management strategies need to be carefully considered and may not always lead to improvements in related symptoms such as sleep disturbances (PUBMED:29487556).
Additionally, nursing staff have indicated a need for support in recognizing and managing pain among persons with dementia, which suggests that there may be gaps in the provision of adequate pain treatment for this population (PUBMED:32039556). Lastly, a study on signs of imminent dying in nursing home patients found that pharmacological treatment ameliorated distressing symptoms, including pain, but most symptoms, including pain and dyspnea, were still common at the day of death, indicating a need for better implementation of guidelines and staff education in pain management (PUBMED:27321869).
In summary, the evidence suggests that nursing home patients with dementia are at risk of inadequate pain treatment, and there is a need for improved assessment, management, and education regarding pain in this vulnerable population. |
Instruction: Can murine diabetic nephropathy be separated from superimposed acute renal failure?
Abstracts:
abstract_id: PUBMED:15954931
Can murine diabetic nephropathy be separated from superimposed acute renal failure? Background: Streptozotocin (STZ) is commonly used to induce diabetes in experimental animal models, but not without accompanying cytotoxic effects. This study was undertaken to (1) determine an optimal dose and administration route of STZ to induce diabetic nephropathy in wild-type mice but without the concurrent acute renal injury resulting from cytotoxic effects of STZ and (2) evaluate the pattern of tubular injury and interstitial inflammation in this model.
Methods: Male Balb/c mice received either (1) STZ (225 mg/kg by intraperitoneal injection); or (2) two doses of STZ 5 days apart (150 mg/150 mg/kg; 75 mg/150 mg/kg; 75 mg/75 mg/kg; and 100 mg/100 mg/kg by intravenous injection). Another strain of mice, C57BL/6J, also received STZ (200 mg/kg intravenously or intraperitoneally). Renal function and histology were examined at weeks 1, 2, 4, and 8 after induction of diabetes. In initial optimization studies, animals were sacrificed at week 1 or week 2 and histology examined for acute renal injury.
Results: Following a single intraperitoneal injection of 225 mg/kg of STZ, only two thirds of animals developed hyperglycemia, yet the model was associated with focal areas of acute tubular necrosis (ATN) at week 2. ATN was also observed in C57BL/6J mice given a single intravenous or intraperitoneal dose of STZ (200 mg/kg), at week 2 post-diabetes. At an optimal diabetogenic dose and route (75 mg/150 mg/kg by intravenous injection 5 days apart), all mice developed diabetes and no ATN was observed histologically. However, even with this regimen, glomerular filtration rate (GFR) was significantly impaired from week 2. This regimen was accompanied by progressive histologic changes, including tubular and glomerular hypertrophy, mesangial area expansion, as well as interstitial macrophage, CD4+ and CD8+ T-cell accumulation.
Conclusion: By careful optimization of STZ dose, a stable and reproducible diabetic murine model was established. However, even in this optimized model, renal functional impairment was observed. The frequency of ATN and functional impairment casts doubt on conclusions about experimental diabetic nephropathy drawn from reports in which ATN has not been excluded rigorously.
abstract_id: PUBMED:25080521
eEOC-mediated modulation of endothelial autophagy, senescence, and EnMT in murine diabetic nephropathy. Diabetic nephropathy is the most frequent single cause of end-stage renal disease in our society. Microvascular damage is a key event in diabetes-associated organ malfunction. Early endothelial outgrowth cells (eEOCs) act protective in murine acute kidney injury. The aim of the present study was to analyze consequences of eEOC treatment of murine diabetic nephropathy with special attention on endothelial-to-mesenchymal transdifferentiation, autophagy, senescence, and apoptosis. Male C57/Bl6N mice (8-12 wk old) were treated with streptozotocin for 5 consecutive days. Animals were injected with untreated or bone morphogenetic protein (BMP)-5-pretreated syngeneic murine eEOCs on days 2 and 5 after the last streptozotocin administration. Four, eight, and twelve weeks later, animals were analyzed for renal function, proteinuria, interstitial fibrosis, endothelial-to-mesenchymal transition, endothelial autophagy, and senescence. In addition, cultured mature murine endothelial cells were investigated for autophagy, senescence, and apoptosis in the presence of glycated collagen. Diabetes-associated renal dysfunction (4 and 8 wk) and proteinuria (8 wk) were partly preserved by systemic cell treatment. At 8 wk, antiproteinuric effects were even more pronounced after the injection of BMP-5-pretreated cells. The latter also decreased mesenchymal transdifferentiation of the endothelium. At 8 wk, intrarenal endothelial autophagy (BMP-5-treated cells) and senescence (native and BMP-5-treated cells) were reduced. Autophagy and senescence in/of cultured mature endothelial cells were dramatically reduced by eEOC supernatant (native and BMP-5). Endothelial apoptosis decreased after incubation with eEOC medium (native and BMP-5). eEOCs act protective in diabetic nephropathy, and such effects are significantly stimulated by BMP-5. The cells modulate endothelial senescence, autophagy, and apoptosis in a protective manner. Thus, the renal endothelium could serve as a therapeutic target in diabetes-associated kidney dysfunction.
abstract_id: PUBMED:18651554
Subacute renal failure in diabetic nephropathy due to endocapillary glomerulonephritis and cholesterol embolization. Patients with established diabetic nephropathy could have other glomerular diseases superimposed on diabetic glomerulosclerosis. Cholesterol embolization syndrome (CES) is a systemic disorder caused by cholesterol crystal embolization from ulcerated atherosclerosis plaques in the aorta and its major branches. Curiously, there are few papers describing the association between diabetic nephropathy and CES. On the other hand, the clinical picture of CES resembles systemic vasculitis, and there is a controversy regarding the association between CES and glomerular or vascular inflammation. We report a case of atypical CES that developed after cardiac catheterization in a diabetic man; it presented as subacute renal failure with proliferative and exudative endocapillary glomerulonephritis.
abstract_id: PUBMED:36082725
Histological Spectrum of Clinical Kidney Disease in Type 2 Diabetes Mellitus Patients with special Reference to nonalbuminuric Diabetic Nephropathy: A Kidney Biopsy-based Study. Background: Diabetic nephropathy (DN) is an important and catastrophic complication of diabetes mellitus (DM). Kidney disease in diabetes patients is histologically heterogeneous and includes both diabetic kidney disease (DKD) (albuminuric or nonalbuminuric) and nondiabetic kidney disease (NDKD), either in isolation or in coexistence with DN. Diabetic nephropathy is difficult to reverse, whereas NDKD is treatable and reversible.
Materials And Methods: We enrolled a total of 50 type 2 diabetes mellitus (T2DM) patients with clinical kidney disease, of both genders and age >18 years, who underwent kidney biopsy from October 2016 to October 2018. Patients with proteinuria <30 mg per day were excluded from the study. The indications of the renal biopsy were nephrotic syndrome (NS), active urinary sediment, rapid decline in renal function, asymptomatic proteinuria, and hematuria.
Result: A total of 50 (males: 42 and females: eight) patients with T2DM who underwent kidney biopsy were enrolled. The clinical presentation was: NS 26 (52%), chronic kidney disease (CKD) 11 (22%), asymptomatic proteinuria and hematuria six (12%), acute kidney injury (AKI) four (8%), and acute nephritic syndrome (ANS) three (6%). Diabetic retinopathy (DR) was noted in 19 (38%) cases. Kidney biopsy revealed isolated DN, isolated NDKD, and NDKD superimposed on DN in 26 (52%), 14 (28%), and 10 (20%) cases, respectively. Idiopathic membranous nephropathy (MN) (4) and amyloidosis (2) were the most common forms of NDKD, whereas diffuse proliferative glomerulonephritis (DPGN) was the main form of NDKD superimposed on DN. Diabetic nephropathy was observed in 15 (79%) cases in the presence of DR and also in 11 (35.5%) cases even in the absence of DR. Of the eight patients with microalbuminuria, four (50%) had biopsy-proven DN.
Conclusion: About 48% of patients had NDKD either in isolation or in coexistence with DN. Diabetic nephropathy was found in the absence of DR and in patients with a low level of proteinuria. The level of proteinuria and the presence of DR do not help to distinguish DN from NDKD. Hence, renal biopsy may be useful in selected T2DM patients with clinical kidney disease to diagnose NDKD.
abstract_id: PUBMED:11762610
Non-diabetic renal disease in patients with type 2 diabetes mellitus. Objectives: A wide spectrum of non-diabetic renal diseases (NDRD) are reported to occur in patients with type 2 diabetes mellitus. However, the prevalence and nature of NDRD in type 2 diabetics is not widely documented in our country. Therefore, the objectives of this study were to analyse prevalence and spectrum of non-diabetic renal disease in type 2 diabetic patients.
Methods: Two hundred sixty type 2 diabetic patients with clinical renal disease were screened for evidence of NDRD between April 1997 and March 1999. Renal disease other than diabetic nephropathy was found in 32 (12.3%) patients (male 23; female 9). Their age ranged from 35 to 72 (mean 54.15 ± 10.3) years. The duration of diabetes was < 5 years in 14 (43.7%), between 5-9 years in 8 (25%) and > 10 years in 10 (31.2%) patients.
Results: The presenting clinical syndromes were: chronic renal failure in 15 (47%), acute nephritic syndrome in 6 (18.7%), nephrotic syndrome in 5 (15.6%), acute renal failure in 4 (12.5%) and rapidly progressive glomerulonephritis (RPGN) in 2 (6.2%) cases. Overall, the incidences of glomerular (46.8%) and tubulo-interstitial lesions (53.2%) were almost equal in type 2 diabetes patients. The spectrum of non-diabetic renal diseases included: primary isolated glomerulopathy in 12 (37.5%); mesangioproliferative GN superimposed on diabetic glomerulosclerosis (DGS) in 3 (9.3%); acute tubulo-interstitial nephropathy (TIN) in 4 (12.5%); chronic TIN in 10 (31.25%); and chronic pyelonephritis in three patients. Diabetic retinopathy was absent in 22 (69%) cases, whereas 10 (31%) patients had background diabetic retinopathy. None of the patients with non-diabetic glomerular disease had diabetic retinopathy, except two who had DGS in addition to mesangioproliferative GN on renal biopsy. Background diabetic retinopathy was seen in 47% of patients with TIN without clinical evidence of diabetic nephropathy. Recovery of renal function or clinical improvement was observed in 47% of patients with NDRD with institution of appropriate treatment.
Conclusion: The prevalence of NDRD was 12.3% in our type 2 diabetic patients. Both non-diabetic glomerulopathy (47%) and tubulo-interstitial nephropathy (53%) can occur with nearly equal frequency in such patients. It is also gratifying to diagnose and treat NDRD in type 2 diabetics in selected cases.
abstract_id: PUBMED:19229833
Adult-onset perinuclear antineutrophil cytoplasmic antibody-positive Henoch-Schönlein purpura in diabetic nephropathy. Adult-onset Henoch-Schönlein purpura (HSP) is a rare systemic vasculitis characterized by a leukocytoclastic vasculitis of small vessels with the deposition of IgA immune complexes involving skin, gastrointestinal tract, joints and kidneys. Antineutrophil cytoplasmic antibody (ANCA) detected by indirect immunofluorescence assay is commonly found in other vasculitic disorders but rarely discovered in HSP patients. ANCA with a perinuclear pattern has hardly ever been reported in HSP patients. The diagnostic importance of ANCA still remains controversial. In addition, the simultaneous presence of diabetic nephropathy and HSP is uncommon. We present a case of an adult patient with diabetic nephropathy and superimposed HSP, which resulted in acute renal failure. Perinuclear-pattern ANCA was detected in the acute phase of HSP but disappeared when the disease resolved. Further, we have reviewed ANCA-positive HSP in this article.
abstract_id: PUBMED:31059201
Predictors and histopathological characteristics of non-diabetic renal disorders in diabetes: a look from the tubulointerstitial point of view. Background: The prevalence and characteristics of non-diabetic renal diseases (NDRD) in patients with type 2 diabetes mellitus differ between populations and seem to be largely dependent on biopsy policies.
Aim: To investigate clinical clues for NDRD in patients with type 2 diabetes mellitus and to analyse renal prognosis of patients based on pathological diagnosis.
Methods: We retrospectively searched medical records of 115 patients with type 2 diabetes who underwent a renal biopsy between 2004 and 2018. Patients were divided into three groups as diabetic nephropathy (DN), NDRD + DN or NDRD based on histopathological examination.
Results: Thirty-six (31.3%) patients had DN, 33 (28.7%) had DN + NDRD and 46 (40%) had NDRD. The absence of diabetic retinopathy, recent onset of diabetes, abnormal disease chronology, and blood haemoglobin were associated with the presence of NDRD in univariate analysis. Abnormal disease chronology, defined as the presence of acute proteinuria and/or acute kidney injury that would not be expected from the evolution of diabetic nephropathy (odds ratio 4.65, 95% confidence interval 1.44-15.00; P = 0.010), and absence of diabetic retinopathy (odds ratio 3.44, 95% confidence interval 1.32-8.98; P = 0.012) were independently associated with the presence of NDRD in multivariate analysis. Focal segmental glomerulosclerosis was the most frequent type of NDRD. Diseases that affect the tubulointerstitial area were more prevalent in the DN + NDRD group compared to the NDRD group (P = 0.001). Renal survival, which was defined as evolution to end-stage renal disease, was 59.5 ± 14.4 months, 93.7 ± 11.7 months and 87.2 ± 2.6 months for the DN, DN + NDRD and NDRD groups, respectively (P = 0.005).
Conclusions: Renal biopsy is essential in certain clinical conditions as diagnosis of NDRD is vital for favourable renal survival. DN may facilitate superimposed tubular injury in the presence of toxic insults.
abstract_id: PUBMED:14691907
IgA-dominant acute poststaphylococcal glomerulonephritis complicating diabetic nephropathy. Two pathological patterns of acute poststaphylococcal glomerulonephritis are well defined and include (1) an acute proliferative and exudative glomerulonephritis closely resembling classical acute poststreptococcal glomerulonephritis in patients with Staphylococcus aureus infection and (2) a membranoproliferative glomerulonephritis in patients with Staphylococcus epidermidis infection secondary to ventriculovascular shunts. In this study, we report a novel immunopathologic phenotype of immunoglobulin (Ig) A-dominant acute poststaphylococcal glomerulonephritis occurring in patients with underlying diabetic nephropathy. Five patients with type 2 diabetes presented with acute renal failure occurring after culture-positive staphylococcal infection. Renal biopsy disclosed an atypical pattern of acute endocapillary proliferative and exudative glomerulonephritis with intense deposits of IgA as the sole or dominant immunoglobulin, mimicking IgA nephropathy. The deposits were predominantly mesangial in distribution with few subepithelial humps. All five cases occurred superimposed on well-established diabetic nephropathy. Outcome was poor with irreversible renal failure in four of five (80%) cases. The possible pathophysiological basis of this atypical form of acute poststaphylococcal glomerulonephritis in diabetic patients is explored. Proper recognition of this entity is needed to avoid an erroneous diagnosis of IgA nephropathy, with corresponding therapeutic and prognostic implications.
abstract_id: PUBMED:37916289
β2-Adrenergic receptor agonists as a treatment for diabetic kidney disease. We have previously shown that the long-acting β2-adrenergic receptor (β2-AR) agonist formoterol induced recovery from acute kidney injury in mice. To determine whether formoterol protected against diabetic nephropathy, the most common cause of end-stage kidney disease (ESKD), we used a high-fat diet (HFD), a murine type 2 diabetes model, and streptozotocin, a murine type 1 diabetes model. Following formoterol treatment, there was a marked recovery from and reversal of diabetic nephropathy in HFD mice compared with those treated with vehicle alone at the ultrastructural, histological, and functional levels. Similar results were seen after formoterol treatment in mice receiving streptozotocin. To investigate effects in humans, we performed a competing risk regression analysis, with death as a competing risk, comparing progression to ESKD between Veterans with chronic kidney disease (CKD) and chronic obstructive pulmonary disease (COPD), who use β2-AR agonists, and Veterans with CKD but no COPD, in a large national cohort of Veterans with stage 4 CKD between 2011 and 2013. Veterans were followed until 2016 or death. ESKD was defined as the initiation of dialysis and/or receipt of kidney transplant. We found that COPD was associated with a 25.6% reduction in progression from stage 4 CKD to ESKD compared with no COPD after adjusting for age, diabetes, sex, race-ethnicity, comorbidities, and medication use. Sensitivity analysis showed a 33.2% reduction in ESKD in Veterans with COPD taking long-acting formoterol and a 20.8% reduction in ESKD in Veterans taking other β2-AR agonists compared with those with no COPD. These data indicate that β2-AR agonists, especially formoterol, could be a treatment for diabetic nephropathy and perhaps other forms of CKD. NEW & NOTEWORTHY: Diabetic nephropathy is the most common cause of ESKD. Formoterol, a long-acting β2-adrenergic receptor (β2-AR) agonist, reversed diabetic nephropathy in murine models of type 1 and 2 diabetes. In humans, there was an association with protection from progression of CKD in patients with COPD, who use β2-AR agonists, compared with those without COPD. These data indicate that β2-AR agonists, especially formoterol, could be a new treatment for diabetic nephropathy and other forms of CKD.
abstract_id: PUBMED:3536200
Clinical diabetic nephropathy: natural history and complications. Diabetic nephropathy develops in about 45% of insulin dependent diabetics of whom two-thirds will develop renal failure, the rest dying from cardiovascular disease. Most of the excess mortality of insulin dependent diabetics occurs in those with proteinuria. Among non-insulin dependent diabetics nephropathy is also an important cause of increased mortality but this is mainly from cardiovascular disease. Once diabetic nephropathy is established it progresses relentlessly to end-stage renal failure over about seven years, but ranging from five to 20 years. The explanation for the different rates of progression in individual patients is not understood. Hypertension accompanies diabetic nephropathy and its treatment may retard the progression of renal failure. Other forms of intervention include glycaemic control which has not been shown to have any effect, and protein restriction for which no conclusions can be drawn at present. The diagnosis of diabetic nephropathy is straightforward in the presence of a typical history and clinical features. Non-diabetic renal disease is sometimes the cause of renal failure and may require specific treatment; prognosis for renal failure treatment may be better than for nephropathy patients with other diabetic complications. Other diabetic complications develop as diabetic nephropathy progresses, most notably cardiac and peripheral vascular disease. Proliferative retinopathy and neuropathy are considerable problems and their management needs attention both before and after renal failure treatment.
Answer: Yes, murine diabetic nephropathy can be separated from superimposed acute renal failure, but it requires careful optimization of experimental conditions. In a study using male Balb/c mice and C57BL/6J mice treated with streptozotocin (STZ) to induce diabetes, it was found that certain doses and administration routes of STZ could induce diabetic nephropathy without concurrent acute renal injury resulting from the cytotoxic effects of STZ. Specifically, a regimen of 75 mg/150 mg/kg STZ administered by intravenous injection 5 days apart was optimal for inducing diabetes without histological evidence of acute tubular necrosis (ATN) at week 2. However, even with this optimized regimen, renal functional impairment was observed from week 2, indicating that renal functional impairment is a component of the diabetic nephropathy model (PUBMED:15954931).
The study concluded that by carefully optimizing the STZ dose, a stable and reproducible diabetic murine model could be established. However, the presence of ATN and functional impairment in some models casts doubt on conclusions about experimental diabetic nephropathy drawn from reports where ATN has not been rigorously excluded. Therefore, it is crucial to distinguish between the chronic changes of diabetic nephropathy and the acute changes due to superimposed renal injuries such as ATN when studying murine models of diabetic nephropathy (PUBMED:15954931). |
Instruction: Does IQ predict total and cardiovascular disease mortality as strongly as other risk factors?
Abstracts:
abstract_id: PUBMED:18801778
Does IQ predict total and cardiovascular disease mortality as strongly as other risk factors? Comparison of effect estimates using the Vietnam Experience Study. Objective: To compare the strength of the relation of two measurements of IQ and 11 established risk factors with total and cardiovascular disease (CVD) mortality.
Methods: Cohort study of 4166 US male former army personnel with data on IQ test scores (in early adulthood and middle age), a range of established risk factors and 15-year mortality surveillance.
Results: When CVD mortality (n = 61) was the outcome of interest, the relative index of inequality (RII: hazard ratio; 95% CI) for the most disadvantaged relative to the advantaged (in descending order of magnitude of the first six based on age-adjusted analyses) was: 6.58 (2.54 to 17.1) for family income; 5.55 (2.16 to 14.2) for total cholesterol; 5.12 (2.01 to 13.0) for body mass index; 4.70 (1.89 to 11.7) for IQ in middle age; 4.29 (1.70 to 10.8) for blood glucose and 4.08 (1.63 to 10.2) for high-density lipoprotein cholesterol (the RII for IQ in early adulthood was ranked tenth: 2.88; 1.19 to 6.97). In analyses featuring all deaths (n = 233), the RII for risk factors most strongly related to this outcome was 7.46 (4.54 to 12.3) for family income; 4.41 (2.77 to 7.03) for IQ in middle age; 4.02 (2.37 to 6.83) for smoking; 3.81 (2.35 to 6.17) for educational attainment; 3.40 (2.14 to 5.41) for pulse rate and 3.26 (2.06 to 5.15) for IQ in early adulthood. Multivariable adjustment led to marked attenuation of these relations, particularly those for IQ.
Conclusions: Lower scores on measures of IQ at two time points were associated with CVD and, particularly, total mortality, at a level of magnitude greater than several other established risk factors.
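The RII figures above (and in the following abstracts) are hazard ratios contrasting the hypothetical extremes of a risk factor's distribution. As a rough illustration only, and not the authors' analysis, the sketch below shows one common way to estimate an RII: replace each person's exposure value with its cumulative (ridit) rank scaled to 0-1 and fit a Cox proportional hazards model, so that exp(coef) for that term compares one end of the distribution with the other. It assumes the lifelines package is available, and the data frame, column names, and simulated values are invented for the example.

```python
# Sketch: relative index of inequality (RII) for IQ, estimated as the Cox
# hazard ratio over the ridit-scored (0-1) rank of the exposure.
# All data and column names below are hypothetical.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "iq":        rng.normal(100, 15, 500),
    "age":       rng.uniform(30, 50, 500),
    "follow_up": rng.uniform(1, 15, 500),    # years at risk
    "died":      rng.binomial(1, 0.08, 500), # event indicator
})

# Ridit score: midpoint of each subject's cumulative rank, scaled to [0, 1].
# Orient the rank so that higher values mean greater disadvantage if the RII
# is meant to compare most disadvantaged vs. most advantaged.
df["iq_ridit"] = (df["iq"].rank(method="average") - 0.5) / len(df)

cph = CoxPHFitter()
cph.fit(df[["iq_ridit", "age", "follow_up", "died"]],
        duration_col="follow_up", event_col="died")

# exp(coef) for iq_ridit is the RII-style hazard ratio across the whole
# distribution of the exposure.
print(cph.summary)
```

With real cohort data the same recipe is applied separately to each risk factor, which is what allows the RIIs in the abstract to be ranked against one another.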
abstract_id: PUBMED:20101181
Does IQ predict cardiovascular disease mortality as strongly as established risk factors? Comparison of effect estimates using the West of Scotland Twenty-07 cohort study. Objective: To compare the strength of the association between intelligence quotient (IQ) and cardiovascular disease (CVD) mortality with the predictive power for established risk factors.
Design: Population-based cohort study of 1145 men and women with IQ test scores, a range of established risk factors, and 20-year mortality surveillance.
Results: When CVD mortality was the outcome of interest, the relative index of inequality (sex-adjusted hazard ratio, 95% confidence interval) for the most disadvantaged relative to the advantaged persons was (in descending order of magnitude for the top five risk factors): 5.58 (2.89, 10.8) for cigarette smoking; 3.76 (2.14, 6.61) for IQ; 3.20 (1.85, 5.54) for income; 2.61 (1.49, 4.57) for systolic blood pressure and 2.06 (1.07, 3.99) for physical activity. Mutual adjustment led to some attenuation of these relationships. Similar observations were made in the analyses featuring all deaths where, again, IQ was the second most powerful predictor of mortality risk.
Conclusion: In this cohort, lower intelligence scores were associated with increased rates of CVD and total mortality at a level of magnitude greater than most established risk factors.
abstract_id: PUBMED:19602715
Does IQ explain socio-economic differentials in total and cardiovascular disease mortality? Comparison with the explanatory power of traditional cardiovascular disease risk factors in the Vietnam Experience Study. Aims: The aim of this study was to examine the explanatory power of intelligence (IQ) compared with traditional cardiovascular disease (CVD) risk factors in the relationship of socio-economic disadvantage with total and CVD mortality, that is the extent to which IQ may account for the variance in this well-documented association.
Methods And Results: Cohort study of 4289 US male former military personnel with data on four widely used markers of socio-economic position (early adulthood and current income, occupational prestige, and education), IQ test scores (early adulthood and middle-age), a range of nine established CVD risk factors (systolic and diastolic blood pressure, total blood cholesterol, HDL cholesterol, body mass index, smoking, blood glucose, resting heart rate, and forced expiratory volume in 1 s), and later mortality. We used the relative index of inequality (RII) to quantify the relation between each index of socio-economic position and mortality. Fifteen years of mortality surveillance gave rise to 237 deaths (62 from CVD and 175 from 'other' causes). In age-adjusted analyses, as expected, each of the four indices of socio-economic position was inversely associated with total, CVD, and 'other' causes of mortality, such that elevated rates were evident in the most socio-economically disadvantaged men. When IQ in middle-age was introduced to the age-adjusted model, there was marked attenuation in the RII across the socio-economic predictors for total mortality (average 50% attenuation in RII), CVD (55%), and 'other' causes of death (49%). When the nine traditional risk factors were added to the age-adjusted model, the comparable reduction in RII was less marked than that seen after IQ adjustment: all-causes (40%), CVD (40%), and 'other' mortality (43%). Adding IQ to the latter model resulted in marked, additional explanatory power for all outcomes in comparison to the age-adjusted analyses: all-causes (63%), CVD (63%), and 'other' mortality (65%). When we utilized IQ in early adulthood rather than middle-age as an explanatory variable, the attenuating effect on the socio-economic gradient was less pronounced although the same pattern was still present.
Conclusion: In the present analyses of socio-economic gradients in total and CVD mortality, IQ appeared to offer greater explanatory power than that apparent for traditional CVD risk factors.
abstract_id: PUBMED:18525394
IQ in late adolescence/early adulthood, risk factors in middle-age and later coronary heart disease mortality in men: the Vietnam Experience Study. Objective: Examine the relation between IQ in early adulthood and later coronary heart disease (CHD) mortality, and assess the extent to which established risk factors measured in middle-age might explain this gradient.
Design: Cohort study of 4316 male former Vietnam-era US army personnel with IQ scores (mean age 20.4 years), risk factor data (mean age 38.3 years) and 15 years mortality surveillance.
Results: In age-adjusted analyses, lower IQ scores were associated with an increased rate of CHD mortality (hazard ratio per SD decrease in IQ; 95% confidence interval: 1.34; 1.00, 1.79). Adjustment for later chronic disease (1.22; 0.91, 1.64), behavioural (1.29; 0.95, 1.74) and physiological risk factors (1.19; 0.88, 1.62) led to some attenuation of this gradient. This attenuation was particularly pronounced on adding socioeconomic indices to the multivariable model when the IQ-CHD relation was eliminated (1.05; 0.73, 1.52). A similar pattern of association was apparent when cardiovascular disease was the outcome of interest.
Conclusion: High IQ may lead to educational success, well remunerated and higher prestige employment, and this pathway may confer cardio-protection.
abstract_id: PUBMED:25928436
Association Between Low IQ Scores and Early Mortality in Men and Women: Evidence From a Population-Based Cohort Study. Lower (versus higher) IQ scores have been shown to increase the risk of early mortality; however, the underlying mechanisms are poorly understood, and previous studies underrepresent individuals with intellectual disability (ID) and women. This study followed one third of all senior-year students (approximately aged 17) attending public high school in Wisconsin, U.S. in 1957 (n = 10,317) until 2011. Men and women with the lowest IQ test scores (i.e., IQ scores ≤ 85) had increased rates of mortality compared to people with the highest IQ test scores, particularly for cardiovascular disease. Importantly, when educational attainment was held constant, people with lower IQ test scores did not have higher mortality by age 70 than people with higher IQ test scores. Individuals with lower IQ test scores likely experience multiple disadvantages throughout life that contribute to increased risk of early mortality.
abstract_id: PUBMED:25501686
Lifetime cumulative risk factors predict cardiovascular disease mortality in a 50-year follow-up study in Finland. Background: Systolic blood pressure, total cholesterol and smoking are known predictors of cardiovascular disease (CVD) mortality. Less is known about the effect of lifetime accumulation and changes of risk factors over time as predictors of CVD mortality, especially in very long follow-up studies.
Methods: Data from the Finnish cohorts of the Seven Countries Study were used. The baseline examination was in 1959 and seven re-examinations were carried out at approximately 5-year intervals. Cohorts were followed up for mortality until the end of 2011. Time-dependent Cox models with regular time-updated risk factors, time-dependent averages of risk factors and latest changes in risk factors, using smoothing splines to discover nonlinear effects, were used to analyse the predictive effect of risk factors for CVD mortality.
Results: A model using cumulative risk factors, modelled as the individual-level averages of several risk factor measurements over time, predicted CVD mortality better than a model using the most recent measurement information. This difference seemed to be most prominent for systolic blood pressure. U-shaped effects of the original predictors can be explained by partitioning a risk factor effect between the recent level and the change trajectory. The change in body mass index predicted the risk although body mass index itself did not.
Conclusions: The lifetime accumulation of risk factors and the observed changes in risk factor levels over time are strong predictors of CVD mortality. It is important to investigate different ways of using the longitudinal risk factor measurements to take full advantage of them.
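The contrast drawn above is between models fed the most recent measurement and models fed each subject's time-updated average of all measurements so far (plus the latest change). A minimal pandas sketch of constructing those two derived covariates from long-format repeated measurements is given below; the column names and values are invented for illustration and are not taken from the Seven Countries data.

```python
# Sketch: per-subject cumulative mean and latest change of a repeatedly
# measured risk factor, as used for time-updated covariates in a
# time-dependent Cox model. Data are illustrative only.
import pandas as pd

long = pd.DataFrame({
    "subject_id": [1, 1, 1, 2, 2],
    "exam_year":  [1959, 1964, 1969, 1959, 1964],
    "sbp":        [138, 146, 150, 124, 128],  # systolic blood pressure
})

long = long.sort_values(["subject_id", "exam_year"])

# Cumulative average of all measurements up to and including each exam.
long["sbp_cum_avg"] = (
    long.groupby("subject_id")["sbp"].expanding().mean()
        .reset_index(level=0, drop=True)
)

# Change since the previous exam (the "latest change" covariate).
long["sbp_change"] = long.groupby("subject_id")["sbp"].diff()

print(long)
```

Each row of the resulting table can then be supplied to a time-dependent Cox model as the covariate value that applies from that exam until the next one.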
abstract_id: PUBMED:28523821
Risk factors for early cardiovascular mortality in patients with bipolar disorder. Aim: We attempted to determine risk factors, particularly pathophysiological changes, for early cardiovascular mortality in bipolar disorder (BD).
Methods: A total of 5416 inpatients with bipolar I disorder were retrospectively followed through record linkage for cause of death. A total of 35 patients dying from cardiovascular disease (CVD; ICD 9: 401-443) before the age of 65 years were identified. Two living BD patients and two mentally healthy adults were matched with each deceased patient as control subjects according to age (±2 years), sex, and date (±3 years) of the final/index admission or the date of general health screening. Data were obtained through medical record reviews.
Results: Eighty percent of CVD deaths occurred within 10 years following the index admission. When the deceased BD patients were compared with the living BD controls, conditional logistic regression revealed that the variables most strongly associated with CVD mortality were the leukocyte count and heart rate on the first day of the index hospitalization. Systolic pressure on the first day of the index hospitalization can be substituted for heart rate as another risk factor for CVD mortality.
Conclusion: It is suggested that systemic inflammation and sympathetic overactivity during the acute phase of BD may be risk factors for early CVD mortality.
abstract_id: PUBMED:8557446
Do cardiovascular disease risk factors predict all-cause mortality? Background: The purpose of this study is to describe associations between a number of standard cardiovascular risk factors and all-cause mortality.
Method: Mortality data were collected for a randomly selected cohort of 1029 New Zealand men aged 35-64 years, followed up over a 9-year period. A proportional hazards regression model was used to estimate the relative risks (RR) for all-cause mortality associated with a number of cardiovascular risk factors.
Results: In all, 96 deaths occurred over the 9-year period, of which 50% were due to cardiovascular causes. All-cause mortality was positively associated with cigarette smoking (age-adjusted RR = 2.01, 95% CI:1.15-3.53, current versus never), systolic blood pressure (age-adjusted RR = 2.18, 95% CI:1.23-4.44, upper versus lower tertile), and body mass index (age-adjusted RR = 1.59, 95% CI:0.94-2.66, upper versus lower tertile) and inversely associated with high density lipoprotein (HDL)-cholesterol (age-adjusted RR = 0.45, 95% CI:0.25-0.80, upper versus lower tertile). All-cause mortality was only weakly associated with serum total cholesterol (age-adjusted RR = 1.19, 95% CI:0.70-1.99, upper versus lower tertile), and there was no evidence of a U-shaped relationship for this risk factor. There was an inverse association between all-cause mortality and socioeconomic status (age-adjusted RR = 1.70, 95% CI:1.03-2.80, lower versus upper). Light alcohol consumption was associated with reduced all-cause mortality (age-adjusted RR = 0.63, 95% CI:0.37-1.05, light versus teetotal), but this benefit did not persist for alcohol consumption above about three standard drinks per day.
Conclusions: The findings of this study indicate that the standard cardiovascular risk factors are likely to have a beneficial impact on all-cause mortality as well as cardiovascular disease in middle-aged and older men.
abstract_id: PUBMED:15969847
Childhood IQ and all-cause mortality before and after age 65: prospective observational study linking the Scottish Mental Survey 1932 and the Midspan studies. Objectives: The objective was to investigate how childhood IQ related to all-cause mortality before and after age 65.
Design: The Midspan prospective cohort studies, followed-up for mortality for 25 years, were linked to individuals' childhood IQ from the Scottish Mental Survey 1932.
Methods: The Midspan studies collected data on risk factors for cardiorespiratory disease from a questionnaire and at a screening examination, and were conducted on adults in Scotland in the 1970s. An age 11 IQ from the Scottish Mental Survey 1932, a cognitive ability test conducted on 1921-born children attending schools in Scotland, was found for 938 Midspan participants. The relationship between childhood IQ and mortality risk, adjusting for adulthood socio-economic confounders, was analysed. The effect of adjustment for childhood IQ on the relationship between established risk factors (blood pressure, smoking, height and respiratory function) and mortality was also investigated.
Results: For deaths occurring up to age 65, there was a 36% increased risk per standard deviation decrease (15 points) in childhood IQ which was reduced to 29% after adjusting for social class and deprivation category. There was no statistically significant relationship between childhood IQ and deaths occurring after the age of 65. Adjustment for childhood IQ attenuated the risk factor-mortality relationship in deaths occurring up to age 65, but had no effect in deaths occurring after age 65.
Conclusions: Childhood IQ was significantly related to deaths occurring up to age 65, but not to deaths occurring after age 65.
abstract_id: PUBMED:29576250
Dietary inflammatory index or Mediterranean diet score as risk factors for total and cardiovascular mortality. Background And Aims: Dietary patterns are associated with risk of cardiovascular disease (CVD). We aimed to examine associations of the Dietary Inflammatory Index (DII) and the Mediterranean Diet Score (MDS) with total, cardiovascular disease (CVD) and coronary heart disease (CHD) mortality in the Melbourne Collaborative Cohort Study; and compare the strengths of the associations.
Methods And Results: In our prospective cohort study of 41,513 men and women aged 40-69 years, a food frequency questionnaire was completed at baseline and mortality data were obtained via linkage with local and national registries over an average of 19 years follow up. At baseline, questionnaires were completed and physical measures and blood samples taken. Cox proportional hazards models, adjusting for age, alcohol consumption, sex, region of origin, personal history of CVD or diabetes and family history of CVD, were used to assess associations between dietary scores and mortality. More Mediterranean or less inflammatory diets were associated with lower total, CVD and CHD mortality. The hazard ratio for total mortality comparing the highest and lowest quintiles was 1.16 (95%CI: 1.08-1.24) for DII; and 0.86 (95%CI: 0.80-0.93) comparing the highest and lowest three categories of MDS. Using the Bayesian information criterion, there was no evidence that the DII score was more strongly associated with total and CVD mortality than was the MDS.
Conclusions: The MDS and the DII show similar associations with total and cardiovascular mortality, consistent with the consensus that plant-based diets are beneficial for health.
Answer: The relationship between IQ and mortality, particularly cardiovascular disease (CVD) mortality, has been explored in several studies, and the findings suggest that IQ does indeed predict total and CVD mortality, often at a magnitude greater than several other established risk factors.
In the Vietnam Experience Study, lower scores on measures of IQ at two time points were associated with CVD and, particularly, total mortality at a level of magnitude greater than several other established risk factors, such as blood glucose, high-density lipoprotein cholesterol, and smoking, after adjusting for age (PUBMED:18801778). Similarly, in the West of Scotland Twenty-07 cohort study, lower intelligence scores were associated with increased rates of CVD and total mortality at a level of magnitude greater than most established risk factors, with IQ being the second most powerful predictor of mortality risk (PUBMED:20101181).
Further analysis of the Vietnam Experience Study indicated that IQ might account for a significant portion of the variance in the association between socio-economic disadvantage and total and CVD mortality, offering greater explanatory power than traditional CVD risk factors (PUBMED:19602715). Another study from the Vietnam Experience Study cohort found that high IQ may lead to educational success and higher prestige employment, which may confer cardio-protection (PUBMED:18525394).
A population-based cohort study in Wisconsin, U.S., showed that men and women with the lowest IQ scores had increased rates of mortality, particularly for cardiovascular disease. However, when educational attainment was held constant, lower IQ scores did not predict higher mortality by age 70 compared to higher IQ scores, suggesting that the disadvantages associated with lower IQ might be mediated through lower educational attainment (PUBMED:25928436).
In summary, these studies suggest that IQ is a significant predictor of total and CVD mortality, often with a predictive power comparable to or greater than other established risk factors. However, the relationship between IQ and mortality may be influenced by socio-economic factors and educational attainment, indicating a complex interplay between cognitive ability, socio-economic status, and health outcomes. |
Instruction: Occurrence and effects of personality disorders in depression: are they the same in the old and young?
Abstracts:
abstract_id: PUBMED:8726787
Occurrence and effects of personality disorders in depression: are they the same in the old and young? Objectives: To determine the frequency and effects of personality disorders on episodes of depression in elderly and young inpatients. Personality disorders are common and may affect the prognosis of Axis I disorders.
Methods: Clinical records of 89 elderly inpatients and a matched comparison group of 119 young inpatients were reviewed to confirm the diagnosis of a major depressive episode according to the DSM-III-R criteria. The frequency of personality disorder diagnoses in the 2 groups was determined. Within each group, severity, functioning, and treatment were compared between those with and without personality disorders.
Results: Personality disorders were diagnosed more frequently in the young (40.3%) than in the elderly (27%). Both rates were similar to previous reports. Cluster C disorders were the most common personality disorders found in the elderly, compared to cluster B disorders in the young. Personality disorder in the young was associated with longer episodes of depression (P = 0.035) and poorer family relations (P < 0.001); whereas in the elderly, personality disorder was associated with more severe episodes (P = 0.014).
Conclusions: These findings suggest that the frequency and effects of personality disorders on the depressed patient may differ according to age.
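The frequencies reported above (personality disorders in 40.3% of 119 young versus 27% of 89 elderly inpatients) are the kind of 2x2 contrast typically tested with a chi-square statistic. The sketch below back-calculates approximate counts from the reported percentages (48/119 and 24/89), so it is an illustrative reconstruction rather than the authors' actual analysis.

```python
# Sketch: chi-square test comparing personality-disorder frequency in young
# vs. elderly depressed inpatients. Cell counts are reconstructed from the
# reported percentages, not taken from the raw data.
from scipy.stats import chi2_contingency

table = [
    [48, 119 - 48],  # young: with PD, without PD (~40.3%)
    [24,  89 - 24],  # elderly: with PD, without PD (~27%)
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}, dof = {dof}")
```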
abstract_id: PUBMED:24002993
The moderating effects of impulsivity on Chinese rural young suicide. Objectives: As only about 50% of Chinese suicides have mental disorders, nonpsychiatric factors such as social environment and personality may account for the variance that is not explained by mental problems. We aimed to explore the effects of impulsivity on Chinese suicides and the role impulsivity plays in the relationship between negative life events (NLEs) and suicidal behavior.
Method: A total of 392 suicide cases (178 female and 214 male, aged 15-34 years) and 416 community controls (202 males and 214 females) of the same age range were sampled in China. The case-control data were obtained using psychological autopsy method with structured and semistructured instruments.
Results: Impulsivity was an important predictor of Chinese rural young suicides and it was a moderator between NLEs and suicide.
Conclusions: Findings of the study may be translated into practical measures in suicide prevention in China as well as elsewhere in the world.
abstract_id: PUBMED:16563082
Co-occurrence of personality disorders with mood, anxiety, and substance use disorders in a young adult population. The purpose of this study was to determine the co-occurrence of DSM-III-R personality disorders (PDs) with mood, anxiety, and substance use disorders in a young adult population. The members of the Northern Finland 1966 Birth Cohort Project living in the city of Oulu at the age of 31 years (N = 1,609) were invited to participate in a two-phase field study. The SCID I and II were used as diagnostic instruments. One hundred and seventy-seven out of 321 interviewed subjects met the criteria for mood, anxiety, or substance use disorders. Altogether 72 (41%) of the subjects with an Axis I disorder met the criteria for at least one PD. The weighted co-occurrence rate of any PD varied from 28% for mood disorders to 47% for anxiety disorders. PDs, especially those in Cluster C, are highly associated with Axis I psychiatric disorders in the population.
abstract_id: PUBMED:31556784
The clinical relevance of asking young psychiatric patients about childhood ADHD symptoms. Aim: The aim of this study was to explore the relevance of asking young psychiatric patients about childhood symptoms of attention deficit hyperactivity disorder (ADHD). Method: A total of 180 young adults (18-25 years of age) from a general psychiatric out-patient clinic in Uppsala filled in the Child and Adolescent Psychiatric Screening Inventory-Retrospect (CAPSI-R) as part of the diagnostic procedure. The study population was divided into groups based on the number and subtype of reported ADHD symptoms, inattention (IN) or hyperactivity/impulsivity (HI). The clinical characteristics associated with different symptoms of ADHD were explored. Results: The groups with five or more self-reported ADHD childhood symptoms, of either IN or HI, had more psychiatric comorbid conditions, a significantly higher co-occurrence of substance use disorders and personality disorders, and experienced more psychosocial and environmental problems. Conclusion: A high level of self-reported ADHD childhood symptoms in young psychiatric patients identified a group more burdened with psychiatric comorbid conditions and more psychosocial problems. This group should be offered a thorough diagnostic assessment of ADHD.
abstract_id: PUBMED:16582065
Personality and substance use disorders in young adults. Background: There have been no studies of the co-occurrence of personality and substance use disorders in young community-dwelling adults.
Aims: To examine the association between DSM-IV personality disorders and substance use disorders in a large representative sample of young community-dwelling participants.
Method: Young Australian adults (n=1520, mean age=24.1 years) were interviewed to determine the prevalence of substance use disorders; 1145 also had an assessment for personality disorder.
Results: The prevalence of personality disorder was 18.6% (95% CI 16.5-20.7). Personality disorder was associated with indices of social disadvantage and the likely presence of common mental disorders. Independent associations were found between cluster B personality disorders and substance use disorders. There was little evidence for strong confounding or mediating effects of these associations.
Conclusions: In young adults, there are independent associations between cluster B personality disorders and substance use disorders.
abstract_id: PUBMED:21851441
Relationship between personality change and the onset and course of alcohol dependence in young adulthood. Aims: To examine the reciprocal effects between the onset and course of alcohol use disorder (AUD) and normative changes in personality traits of behavioral disinhibition and negative emotionality during the transition between adolescence and young adulthood.
Design: Longitudinal-epidemiological study assessing AUD and personality at ages 17 and 24 years.
Setting: Participants were recruited from the community and took part in a day-long, in-person assessment.
Participants: Male (n = 1161) and female (n = 1022) twins participating in the Minnesota Twin Family Study.
Measurements: The effects of onset (adolescent versus young adult) and course (persistent versus desistent) of AUD on change in personality traits of behavioral disinhibition and negative emotionality from ages 17 to 24 years.
Findings: Onset and course of AUD moderated personality change from ages 17 to 24 years. Adolescent onset AUD was associated with greater decreases in behavioral disinhibition. Those with an adolescent onset and persistent course failed to exhibit normative declines in negative emotionality. Desistence was associated with a 'recovery' towards psychological maturity in young adulthood, while persistence was associated with continued personality dysfunction. Personality traits at age 11 predicted onset and course of AUD, indicating personality differences were not due to active substance abuse.
Conclusions: Personality differences present prior to initiation of alcohol use increase risk for alcohol use disorder, but the course of alcohol use disorder affects the rate of personality change during emerging adulthood. Examining the reciprocal effects of personality and alcohol use disorder within a developmental context is necessary to improve understanding for theory and intervention.
abstract_id: PUBMED:30229710
Beyond not bad or just okay: social predictors of young adults' wellbeing and functioning (a TRAILS study). Background: Various childhood social experiences have been reported to predict adult outcomes. However, it is unclear how different social contexts may influence each other's effects in the long run. This study examined the joint contribution of adolescent family and peer experiences to young adult wellbeing and functioning.
Methods: Participants came from the TRacking Adolescents' Individual Lives Survey (TRAILS) study (n = 2230). We measured family and peer relations at ages 11 and 16 (i.e. family functioning, perceived parenting, peer status, peer relationship quality), and functioning as the combination of subjective wellbeing, physical and mental health, and socio-academic functioning at age 22. Using structural equation modelling, overall functioning was indicated by two latent variables for positive and negative functioning. Positive, negative and overall functioning at young adulthood were regressed on adolescent family experiences, peer experiences and interactions between the two.
Results: Family experiences during early and mid-adolescence were most predictive for later functioning; peer experiences did not independently predict functioning. Interactions between family and peer experiences showed that both protective and risk factors can have context-dependent effects, being exacerbated or overshadowed by negative experiences or buffered by positive experiences in other contexts. Overall the effect sizes were modest at best.
Conclusions: Adolescent family relations as well as the interplay with peer experiences predict young adult functioning. This emphasizes the importance of considering the relative effects of one context in relation to the other.
abstract_id: PUBMED:28211556
Occurrence of selected lower urinary tract symptoms in patientsof a day hospital for neurotic disorders. Objectives: To assess the occurrence of selected lower urinary tract symptoms in the population of patients with neurotic and personality disorders.
Methods: This was a retrospective analysis of occurrence, co-existence and severity of two selected lower urinary tract symptoms in 3,929 patients in a day hospital for neurotic disorders. The KO "O" symptom checklist was used to measure the study variables.
Results: Although the symptoms associated with micturition are neither the most prevalent nor the most typical symptoms of neurotic disorders, the prevalence of urinary frequency during the last week before psychotherapy, evaluated among the patients of a day hospital, was approximately 50%. Involuntary micturition, a symptom with significant implications for self-esteem and social functioning, was much less common; it was reported by approximately 5% of this relatively healthy and young group of patients. Major bother from urinary frequency was reported by 9-14% of patients, whereas major bother from involuntary micturition was reported by only 0.6%-1% of the surveyed patients.
Conclusions: Selected urological symptoms seem to be prevalent among the patients with neurotic and personality disorders, and are independent of the specific diagnosis or patients' gender. Their co-existence with other symptoms of neurotic disorders reported by the patients indicates their strongest relationship with the somatoform, dissociative, sexual and agoraphobic disorders.
abstract_id: PUBMED:34616334
Mediators and Theories of Change in Psychotherapy for Young People With Personality Disorders: A Systematic Review Protocol. Background: Personality disorders (PDs) are a severe health issue already prevalent among adolescents and young adults. Early detection and intervention offer the opportunity to reduce disease burden and chronicity of symptoms and to enhance long-term functional outcomes. While psychological treatments for PDs have been shown to be effective for young people, the mediators and specific change mechanisms of treatment are still unclear. Aim: As part of the "European Network of Individualized Psychotherapy Treatment of Young People with Mental Disorders" (TREATme), funded by the European Cooperation in Science and Technology (COST), we will conduct a systematic review to summarize the existing knowledge on mediators of treatment outcome and theories of change in psychotherapy for young people with personality disorders. In particular, we will evaluate whether mediators appear to be common or specific to particular age groups, treatment models, or outcome domains (e.g., psychosocial functioning, life quality, and adverse treatment effects). Method: We will follow the reporting guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement recommendations. Electronic databases (PubMed and PsycINFO) have been systematically searched for prospective, longitudinal, and case-control designs of psychological treatment studies, which examine mediators published in English. Participants will be young people between 10 and 30years of age who suffer from subclinical personality symptoms or have a personality disorder diagnosis and receive an intervention that aims at preventing, ameliorating, and/or treating psychological problems. Results: The results will be published in a peer-reviewed journal and at conference presentations and will be shared with relevant stakeholder groups. The data set will be made available to other research groups following recommendations of the open science initiative. Databases with the systematic search will be made openly available following open science initiatives. The review has been registered in PROSPERO (evaluation is pending, registration number ID 248959). Implications: This review will deliver a comprehensive overview on the empirical basis to contribute to the further development of psychological treatments for young people with personality disorders.
abstract_id: PUBMED:27269006
Functioning of Young Adults With ADHD in the Military. Objective: To assess the functioning of young adults with ADHD in a military setting. Method: In all, 14,655 young adults with ADHD (mean age at first examination 17.8 ± 0.7) attending mandatory service in the Israeli military were compared with matched controls on several functioning domains. Results: Young adults with ADHD had more sessions with mental health care professionals, physician appointments, sick days, and disqualifications of professions than controls (p < .001). Young adults with ADHD were also less medically fit for combat service (odds ratio [OR] = 0.75, 95% confidence interval [CI] = [0.72, 0.79]), more medically unfit for military service in general (OR = 1.26, 95% CI = [1.13, 1.40]), and had higher rates of overweight and obesity (p < .001). In addition, they were more likely to be diagnosed with a personality disorder (OR = 1.29, 95% CI = [1.07, 1.53]) or with minor affective and anxiety disorders (OR = 1.33, 95% CI = [1.06, 1.67]) than matched controls. Conclusion: These results support a negative effect of ADHD on the functioning of young adults in a military setting.
Answer: The occurrence and effects of personality disorders in depression are not the same in the old and young. In a study comparing elderly and young inpatients with major depressive episodes, personality disorders were diagnosed more frequently in the young (40.3%) than in the elderly (27%). Cluster C disorders were the most common personality disorders found in the elderly, while cluster B disorders were more common in the young. In the young, the presence of a personality disorder was associated with longer episodes of depression and poorer family relations. In contrast, in the elderly, personality disorder was associated with more severe episodes of depression (PUBMED:8726787).
This suggests that the frequency and effects of personality disorders on depressed patients may differ according to age. Additionally, personality disorders, especially those in Cluster C, are highly associated with Axis I psychiatric disorders in the population (PUBMED:16563082).
Moreover, young adults with personality disorders have been found to have independent associations with substance use disorders, particularly cluster B personality disorders (PUBMED:16582065). The onset and course of alcohol use disorder (AUD) can also affect the rate of personality change during emerging adulthood, with personality traits present prior to initiation of alcohol use increasing the risk for AUD (PUBMED:21851441).
In summary, while personality disorders are common in both elderly and young patients with depression, their frequency, associated types of personality disorders, and the effects on the course and severity of depression differ between these age groups. |
Instruction: The Neonatal Neurobiologic Risk Score: does it predict outcome in very premature infants?
Abstracts:
abstract_id: PUBMED:8772924
The Neonatal Neurobiologic Risk Score: does it predict outcome in very premature infants? Objective: To examine the validity of the Neonatal Neurobiologic Risk Score (NBRS) for predicting neurodevelopmental outcome to 3 years in infants born at < 28 weeks gestation. Methodology: The NBRS was retrospectively determined for 56 consecutive infants cared for in our NICU and prospectively followed to 3 years. Neurodevelopmental assessments performed at 3 years were correlated with the NBRS, and the predictive powers of individual items in the NBRS were determined.
Results: The mean (range) birth weight was 908 (514-1295) g and gestational age was 26 (24-27) weeks. Three-year outcome was abnormal in 12 (21%) infants. A high NBRS at discharge was associated with an increased risk of abnormal 3-year outcome (odds ratio 2.56; 95% C.I. 1.4-4.7, p = 0.002). A modified NBRS using only significantly predictive items (acidosis, hypoxemia, hypotension, intraventricular hemorrhage, infection and hypoglycemia) demonstrated high sensitivity (1.00), specificity (0.98), positive predictive value (0.92) and negative predictive value (1.00) for abnormal 3-year outcome.
Conclusions: This study confirms the validity of the NBRS as a simple and objective means of identifying very premature infants at highest risk of abnormal neurodevelopmental outcome, and of identifying specific events which may contribute to such outcomes.
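The Results above quote sensitivity, specificity, and positive and negative predictive values for the modified NBRS. As a worked illustration of how those figures follow from a 2x2 classification table, the sketch below uses cell counts chosen to be consistent with the reported cohort (56 infants, 12 with an abnormal outcome) and metrics; the individual counts are reconstructed assumptions, not data taken from the paper.

```python
# Sketch: predictive-value arithmetic for a binary risk score vs. outcome.
# Hypothetical 2x2 counts consistent with 56 infants, 12 abnormal outcomes.
true_pos  = 12  # high modified NBRS, abnormal 3-year outcome
false_neg = 0   # low modified NBRS, abnormal 3-year outcome
false_pos = 1   # high modified NBRS, normal 3-year outcome
true_neg  = 43  # low modified NBRS, normal 3-year outcome

sensitivity = true_pos / (true_pos + false_neg)  # 1.00
specificity = true_neg / (true_neg + false_pos)  # ~0.98
ppv         = true_pos / (true_pos + false_pos)  # ~0.92
npv         = true_neg / (true_neg + false_neg)  # 1.00

print(f"sensitivity={sensitivity:.2f}  specificity={specificity:.2f}  "
      f"PPV={ppv:.2f}  NPV={npv:.2f}")
```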
abstract_id: PUBMED:32442448
Association between Transport Risk Index of Physiologic Stability in Extremely Premature Infants and Mortality or Neurodevelopmental Impairment at 18 to 24 Months. Objectives: To examine the association between mortality or neurodevelopmental impairment at 18-24 months of corrected age and the Transport Risk Index of Physiologic Stability (TRIPS) score on admission to the neonatal intensive care unit (NICU) in extremely premature infants.
Study Design: Retrospective cohort study of extremely premature infants (inborn and outborn) born at 22-28 weeks of gestational age and admitted to NICUs in the Canadian Neonatal Network between April 2009 and September 2011. TRIPS scores and clinical data were collected from the Canadian Neonatal Network database. Follow-up data at 18-24 months of corrected age were retrieved from the Canadian Neonatal Follow-Up Network database. Neurodevelopment was assessed using the Bayley Scales of Infant and Toddler Development, Edition III. The primary outcome was death or significant neurodevelopmental impairment at 18-24 months of corrected age. The secondary outcomes were individual components of the Bayley Scales of Infant and Toddler Development, Edition III assessment.
Results: A total of 1686 eligible infants were included. A TRIPS score of ≥20 on admission to the NICU was significantly associated with mortality (aOR 2.71 [95% CI, 2.02-3.62]) and mortality or significant neurodevelopmental impairment (aOR 1.91 [95% CI, 1.52-2.41]) at 18-24 months of corrected age across all gestational age groups of extremely premature infants.
Conclusion: The TRIPS score on admission to the NICU can be used as an adjunctive, objective tool for counselling the parents of extremely premature infants early after their admission to the NICU.
abstract_id: PUBMED:32671275
Developmental status of human immunodeficiency virus-exposed uninfected premature infants compared with premature infants who are human immunodeficiency virus unexposed and uninfected. Background: There is growing concern about the developmental outcome of infants exposed to HIV in utero. HIV-infected women are at greater risk of premature delivery which poses a further developmental risk factor.
Objectives: To determine whether there is a difference between the development of premature infants born at 28-37 weeks gestational age that are HIV exposed but uninfected (HEU) compared with HIV-unexposed uninfected infants (HUU).
Method: A cross-sectional study was conducted in a Johannesburg state hospital. Thirty HEU and 30 HUU infants, aged between 16 days and six months, were assessed using the Bayley Scales of Infant and Toddler Development III.
Results: The two groups were well matched for gestational age and birth weight; however, more HUU infants presented with neonatal complications. HUU infants had lower developmental scores than HEU infants in the language (p = 0.003) and motor (p = 0.037) subscales. Expressive language was more affected in the HUU infants (p = 0.001), and fine (p = 0.001) and gross motor (p = 0.03) were affected as well. HUU infants with neonatal complications such as meningitis (p = 0.02) and neonatal jaundice (NNJ) (p = 0.01) are more likely to present with language and motor delay.
Conclusion: Meningitis and NNJ have more impact on infant development than in-utero HIV and ARV exposure.
Clinical Implications: It is important for all premature infants to be screened regularly in order to diagnose developmental delays early so as to ensure early intervention and improved quality of life.
abstract_id: PUBMED:9722248
Nursery Neurobiologic Risk Score and outcome at 18 months. The aim of this study was to confirm the predictive value of the nursery Neurobiologic Risk Score. Prospectively, 121 infants (mean birthweight 961 ± 179 g, gestation 27.0 ± 1.2 weeks) were followed at 18 months. The nursery Neurobiologic Risk Score was correlated with the developmental quotient (r = -0.54). From low (scores 0-4) to moderate (scores 5-7) to high (scores ≥ 8) risk groups, respectively, significant differences were found in mean developmental quotient (101 ± 9 vs 92 ± 19 vs 76 ± 24) and in the prevalence of developmental quotients < 90 (12 vs 24 vs 71%), of cerebral palsy (4 vs 19 vs 41%), of severe disabilities (0 vs 24 vs 50%) and of any disability (16 vs 30 vs 71%). Sensitivity, specificity, positive and negative predictive values for any disability were 81, 54, 49 and 84% for a score ≥ 5 and 56, 87, 71 and 78% for a score ≥ 8. The nursery Neurobiologic Risk Score was useful in predicting the 18-month outcome of very premature infants.
abstract_id: PUBMED:32025257
Respiratory Outcome of the Former Premature Infants. The research aims to identify the respiratory pathology occurring during the first two years of life in premature infants with gestational ages between 30 and 34 weeks and the risk factors for these conditions (familial, prenatal, and neonatal). Thirty-one premature infants with gestational ages between 30 and 34 weeks were investigated, and the incidence of bronchopulmonary dysplasia, infections with the respiratory syncytial virus or other viral infections requiring hospitalization, recurrent wheezing, and nasal colonization with pathogenic bacteria was noted. Regression models were also built for each type of respiratory pathology as a function of antenatal (smoking in the family, atopy, mother's age) and neonatal (gestational age, respiratory distress syndrome, duration of antibiotic treatment, use of reserve antibiotics) factors. Respiratory distress syndrome was present in 20 premature infants, and 19 infants received respiratory support. Two former premature infants presented with bronchopulmonary dysplasia, 3 with severe respiratory syncytial virus infections, 7 with recurrent wheezing, and 16 with viral infections requiring hospitalization. Respiratory distress syndrome and severe viral infections were more frequently found in families of smokers. Low gestational age and familial atopy were identified as good predictors of severe respiratory syncytial virus infections (p < 0.03). Premature infants with gestational ages between 30 and 34 weeks are at risk of developing respiratory diseases, especially airway disorders, during the first two years of life. Familial atopy and low gestational age represent independent risk factors for severe respiratory syncytial virus infections.
abstract_id: PUBMED:8085449
Nursery neurobiologic risk score as a predictor of mortality in premature infants. Severity-of-illness scales have proven valuable in assessing clinical outcomes, and several risk scores have been used to evaluate neonatal illness severity. At present, we use the nursery Neurobiologic Risk Score (NBRS) to evaluate the short-term outcome of preterm newborns and its correlation with length of stay and morbidity. The NBRS was determined at the neonatal intensive care unit (NICU) of Changhua Christian Hospital in 183 preterm infants with birth weight < 2500 g, gestational age < 37 weeks and age less than 7 days between July 1991 and June 1992. The NBRS correlated significantly with birth weight and gestational age (p < 0.0001); it was also a significant predictor of NICU mortality (r = 0.68, p < 0.0001) and morbidity (bronchopulmonary dysplasia, BPD: r = 0.46, p < 0.0001; retinopathy of prematurity, ROP: r = 0.20, p < 0.01), and there was a significant correlation with length of stay (r = 0.53, p < 0.0001). We also created a revised NBRS, which was a significant predictor of NICU mortality and morbidity independent of birth weight. Further validation of the revised NBRS in the prediction of NICU mortality and morbidity is needed.
abstract_id: PUBMED:23899553
Effects of placental inflammation on neonatal outcome in preterm infants. Background: Intrauterine infection is the most commonly identified cause of preterm birth. In this study, our aim was to determine the association between placental inflammation and neonatal outcome in a prospective observational cohort of preterm infants of less than 34 weeks gestational age. We especially focused on the distinct effects of maternal inflammatory response (MIR) with and without fetal inflammatory response (FIR).
Methods: Clinical characteristics and placental histological results were prospectively collected from 216 singleton infants born at a gestational age of less than 34 weeks.
Results: Of the 216 newborns, 104 (48.1%) infants had histological placental inflammation. Based on their pathological findings, the premature infants were divided into three groups: (1) the MIR negative-FIR negative (MIR-FIR-) group; (2) the MIR positive-FIR positive (MIR+FIR+) group; and (3) the MIR positive-FIR negative (MIR+FIR-) group. The incidence of neonatal respiratory distress syndrome (RDS) in the MIR+FIR- group (5.7%) and in the MIR+FIR+ group (2.0%) was significantly lower than in the MIR-FIR- group (19.6%) (p < 0.05). Logistic regression analysis showed that MIR+FIR+ group had a decreased incidence of neonatal RDS (OR = 0.076; 95% CI 0.009-0.624; p = 0.016). The incidence of intraventricular hemorrhage (IVH) Grade 2 or greater was significantly higher in the MIR+FIR+ group (42.3%) than in the MIR+FIR- group (13.0%) (p < 0.05) or in the MIR-FIR- group (15.2%) (p < 0.05). Logistic regression analysis also showed that MIR+FIR+ was associated with an increased incidence of IVH Grade 2 or greater (OR = 4.08; 95% CI 1.259-13.24; p = 0.019).
Conclusion: A positive MIR in association with a positive FIR decreases the risk of RDS, but increases the risk of IVH Grade 2 or greater in preterm infants with a gestational age of less than 34 weeks. However, a positive MIR alone has little effect on neonatal outcome.
abstract_id: PUBMED:24871362
The effect of placental abruption on the outcome of extremely premature infants. Objective: To determine the effect of placental abruption on the outcome of infants born between 22 and 26 weeks of gestation.
Methods: A retrospective study involving 32 cases of placental abruption. Controls were matched to cases according to gestational age and birth weight. Medical records were reviewed to confirm maternal background and neonatal outcome. We compared characteristics of maternal background and neonatal outcome between the two groups.
Results: There were no significant differences between the groups in the incidence of pregnancy-induced hypertension, low maternal fibrinogen (<200 mg/dl), premature rupture of membranes, intrauterine infection, ischemic changes of the placenta, or funisitis. Intrapartum non-reassuring fetal heart rate patterns (NRFHRs) were seen more frequently in the placental abruption group than in controls (75% versus 51%, p = 0.02). However, no differences were found in the incidence of low umbilical artery pH (<7.1), cerebral palsy, or neonatal death. The incidence of chronic lung disease (CLD, 66% versus 43%, p = 0.04) and of hemosiderin deposition on the placenta (16% versus 0%, p < 0.01) was higher in the abruption group than in controls.
Conclusion: Placental abruption carries a risk of NRFHRs and CLD in infants born between 22 and 26 weeks of gestation, but shows no effect on neonatal mortality.
abstract_id: PUBMED:22825762
Late preterms: the influence of foetal gender on neonatal outcome Background: The group of the so-called late preterms (infants born at 34 0/7-36 6/7 weeks gestational age) has been underestimated with respect to their neonatal outcome. Among infants born before the 29th week of pregnancy, a gender-specific difference in favour of females regarding morbidity became evident. The aim of this study is to investigate whether these findings are transferable to the group of late preterms.
Methods: The neonatal outcome of 528 consecutive singletons, born at 34 0/7-36 6/7 weeks gestational age and requiring intensive care, was examined.
Results: Neonatal complications were analysed with particular regard to gender-specific differences. Boys (n=292) were significantly more frequently affected by sepsis (3.8 vs. 0.9%; p=0.0314, chi-square test). Girls had significantly longer stays in the neonatal intensive care unit (median 12 (Q1: 8; Q3: 17) vs. 11 (6; 16) days; p=0.0149, t-test). In a multiple logistic regression model, male gender and premature rupture of membranes were borderline significant with respect to the occurrence of sepsis: boys had a 4.4-fold risk (OR=0.228 [95% CI: 0.050-1.041]; p=0.0564) and premature rupture of membranes a 3.5-fold risk (OR=3.462 [0.938-12.779]; p=0.0623). After adjustment, strong predictor variables for the length of stay in the neonatal intensive care unit were birth weight, gestational age and premature rupture of membranes.
Conclusion: In the late preterm group (34 0/7-36 6/7 weeks gestational age), the influence of foetal gender on neonatal outcome has to be put into perspective.
abstract_id: PUBMED:33603845
Risk factors for respiratory assistance in premature infants. Premature infants are prone to dyspnea after birth due to immature development, and some infants require respiratory assistance. However, the risk factors for respiratory assistance in premature infants are rarely reported. The present study enrolled 3,394 premature infants (665 infants had been provided with respiratory assistance and 2,729 had not used respiratory assistance) to retrospectively analyze the risk factors associated with respiratory aid. The multivariate logistic regression analysis demonstrated that placental abnormality [odds ratio (OR)=1.284; P=0.048], the male sex (OR=0.696; P=0.001), delivery via cesarean section (OR=1.538; P<0.001), low 1-min Apgar score (OR=0.727; P<0.001), low birth weight (OR=0.999; P=0.005) and low gestational age (OR=0.616; P<0.001) were independent risk factors for respiratory assistance in premature infants. Overall, a number of risk factors, including placental abnormality, cesarean section, low 1-min Apgar score, low birth weight and small gestational age, were identified for respiratory assistance in premature infants. By conducting a risk assessment of risk factors at birth and using this information to provide timely respiratory assistance, the survival rates of premature infants may increase.
Answer: Yes, the Neonatal Neurobiologic Risk Score (NBRS) does predict outcomes in very premature infants. A study examining the validity of the NBRS for predicting neurodevelopmental outcomes up to 3 years in infants born at less than 28 weeks gestation found that a high NBRS at discharge was associated with an increased risk of abnormal 3-year outcomes. The study confirmed the validity of the NBRS as a simple and objective means of identifying very premature infants at the highest risk of abnormal neurodevelopmental outcomes (PUBMED:8772924). Additionally, the study identified specific events, such as acidosis, hypoxemia, hypotension, intraventricular hemorrhage, infection, and hypoglycemia, which may contribute to such outcomes and demonstrated high sensitivity, specificity, positive predictive value, and negative predictive value for abnormal 3-year outcomes when using a modified NBRS that included only these significantly predictive items. |
Instruction: Is the pediatric service coordinated?
Abstracts:
abstract_id: PUBMED:25435655
The diabetic foot: the importance of coordinated care. Because of the severe morbidity and mortality associated with diabetes, diabetic foot care is an essential component of a peripheral vascular service. The goal of this article is to describe the vascular diabetic foot care pathway and how the coordinated foot care service for diabetic patients is delivered at King's College Hospital, London.
abstract_id: PUBMED:24919941
The LIFEspan model of transitional rehabilitative care for youth with disabilities: healthcare professionals' perspectives on service delivery. Purpose: LIFEspan is a service delivery model of continuous coordinated care developed and implemented by a cross-organization partnership between a pediatric and an adult rehabilitation hospital. Previous work explored enablers and barriers to establishing the partnership service. This paper examines healthcare professionals' (HCPs') experiences of 'real world' service delivery aimed at supporting transitional rehabilitative care for youth with disabilities.
Methods: This qualitative study - part of an ongoing mixed method longitudinal study - elicited HCPs' perspectives on their experiences of LIFEspan service delivery through in-depth interviews. Data were categorized into themes of service delivery activities, then interpreted from the lens of a service integration/coordination framework.
Results: Five main service delivery themes were identified: 1) addressing youth's transition readiness and capacities; 2) shifting responsibility for healthcare management from parents to youth; 3) determining services based on organizational resources; 4) linking between pediatric and adult rehabilitation services; and, 5) linking with multi-sector services.
Conclusions: LIFEspan contributed to service delivery activities that coordinated care for youth and families and integrated inter-hospital services. However, gaps in service integration with primary care, education, social, and community services limited coordinated care to the rehabilitation sector. Recommendations are made to enhance service delivery using a systems/sector-based approach.
abstract_id: PUBMED:22042550
Development of a pediatric hospitalist sedation service: training and implementation. Objective: There is growing demand for safe and effective procedural sedation in pediatric facilities nationally. Currently, these needs are being met by a variety of providers and sedation techniques, including anesthesiologists, pediatric intensivists, emergency medicine physicians, and pediatric hospitalists. There is currently no consensus regarding the training required by non-anesthesiologists to provide safe sedation. We will outline the training method developed at St. Louis Children's Hospital.
Methods: In 2003, the Division of Pediatric Anesthesia at St. Louis Children's Hospital approached the Division of Pediatric Hospitalist Medicine as a resource to provide pediatric sedation outside of the operating room. Over the last seven years, Pediatric Hospitalist Sedation services have evolved into a three-tiered system of sedation providers. The first tier provides sedation services in the emergency unit (EU) and the Center for After Hours Referral for Emergency Services (CARES). The second tier provides sedation throughout the hospital including the EU, CARES, inpatient units, Ambulatory Procedure Center (APC), and Pediatric Acute Wound Service (PAWS); it also provides night/weekend sedation call for urgent needs. The third tier provides sedation in all of the second-tier locations, as well as utilizing propofol in the APC.
Results: This training program has resulted in a successful pediatric hospitalist sedation service. Based on fiscal year 2009 billing data, the division performed 2,471 sedations. We currently have 43 hospitalists providing Tier-One sedation, 18 Tier-Two providers, and six Tier-Three providers.
Conclusions: A pediatric hospitalist sedation service with proper training and oversight can successfully augment sedation provided by anesthesiologists.
abstract_id: PUBMED:33856504
Strategies to optimize a pediatric magnetic resonance imaging service. A pediatric MRI service is a vital component of a successful radiology department. Building an efficient and effective pediatric MRI service is a multifaceted process that requires detailed planning for considerations related to finance, operations, quality and safety, and process improvement. These are compounded by the unique challenges of caring for pediatric patients, particularly in the setting of the recent coronavirus disease 2019 (COVID-19) pandemic. In addition to material resources, a successful pediatric MRI service depends on a collaborative team consisting of radiologists, physicists, technologists, nurses and vendor specialists, among others, to identify and resolve challenges and to strive for continued improvement. This article provides an overview of the factors involved in both starting and optimizing a pediatric MRI service, including commonly encountered obstacles and some proposed solutions to address them.
abstract_id: PUBMED:35090506
"…the availability of contraceptives is everywhere.": coordinated and integrated public family planning service delivery in Rwanda. Background: Contraceptive use in Rwanda tripled since 2005. This study aims to understand the role of coordinated and integrated public family planning service delivery in achieving this increase in contraceptive use in Rwanda.
Methods: This qualitative study in 2018 included eight focus group discussions with family planning providers and 32 in-depth interviews with experienced family planning users.
Results: Results indicate a well-coordinated public family planning service delivery system with community health workers and nurses filling different and complementary roles in meeting family planning client needs at the local level. In addition, integration of family planning into other maternal and child health services is the norm.
Conclusions: The coordination and integration of family planning across both providers and services may help explain the rapid increase in Rwanda's contraceptive use and has potential applications for enhancing family planning service delivery in other settings.
abstract_id: PUBMED:34345792
How the Integration of Telehealth and Coordinated Care Approaches Impact Health Care Service Organization Structure and Ethos: Mixed Methods Study. Background: Coordinated care and telehealth services have the potential to deliver quality care to chronically ill patients. They can both reduce the economic burden of chronic care and maximize the delivery of clinical services. Such services require new behaviors, routines, and ways of working to improve health outcomes, administrative efficiency, cost-effectiveness, and user (patient and health professional) experience.
Objective: The aim of this study was to assess how health care organization setup influences the perceptions and experience of service managers and frontline staff during the development and deployment of integrated care with and without telehealth.
Methods: As part of a multinational project exploring the use of coordinated care and telehealth, questionnaires were sent to service managers and frontline practitioners. These questionnaires gathered quantitative and qualitative data related to organizational issues in the implementation of coordinated care and telehealth. Three analytical stages were followed: (1) preliminary analysis for a direct comparison of the responses of service managers and frontline staff to a range of organizational issues, (2) secondary analysis to establish statistically significant relationships between baseline and follow-up questionnaires, and (3) thematic analysis of free-text responses of service managers and frontline staff.
Results: Both frontline practitioners and managers highlighted that training, tailored to the needs of different professional groups and staff grades, was a crucial element in the successful implementation of new services. Frontline staff were markedly less positive than managers in their views regarding the responsiveness of their organization and the pace of change.
Conclusions: The data provide evidence that the setup of health care services is positively associated with outcomes in several areas, particularly tailored staff training, rewards for good service, staff satisfaction, and patient involvement.
abstract_id: PUBMED:12887874
Neonatal visits to a pediatric emergency service Objective: To determine the profile of neonatal visits to a pediatric emergency service and to compare this profile with that of other pediatric age groups.
Method: We retrospectively reviewed the reports of all neonates who presented to the pediatric emergency service in 2000. Patients transferred from other hospitals were excluded. Age, sex, time of presentation, source of referral, presenting complaint, investigations, final diagnosis and hospitalization were analyzed.
Results: Three hundred and nine neonatal visits were identified. The mean age was 14.3 days and 57.3 % were male. Demand was greatest during evening and night shifts and on Sundays. The most common presenting complaints were irritability/crying (19.1 %), constipation (11.7 %) and jaundice (8.7 %). The most frequent diagnoses were infantile colic (16.8 %), constipation (9.7 %) and jaundice (8.7 %). No morbid processes were found in 12.0 % of the patients and complementary investigations were not required in 68.3 %. Fifty-one neonates (16.5 %) were admitted, mainly due to jaundice (9 patients) and sepsis (8 patients). Patients referred by physicians (29 patients, 9.4 %), especially those referred by pediatricians, were admitted and required investigations more often than self-referred patients. The admission rate was higher in neonates than in other pediatric age groups.
Conclusions: Most neonatal utilization of emergency services is due to trivial problems that could be solved in primary care. Appropriate training is required to avoid unnecessary tests without overlooking potentially serious conditions.
abstract_id: PUBMED:28695337
Development of pediatric neurosurgical service at Dr. Soetomo Hospital, Surabaya, Indonesia. Purpose: This review traces the history of pediatric neurosurgery at Dr. Soetomo General Hospital (DSGH) and its role in advancing the field of pediatric neurosurgery.
Methods: The history, the founding fathers, and the next generations of the pediatric neurosurgery in DSGH were traced back from original sources and authors' life stories.
Result: The pediatric neurosurgical service at DSGH offers a unique perspective as a pediatric service in a general hospital setting. It serves the second largest city of Indonesia, the fifth most populous country in the world. A historical vignette and future perspectives are narratively presented.
Conclusion: As a pediatric neurosurgical service at a general hospital in a developing country, its development deserves special mention.
abstract_id: PUBMED:33457305
Stressors, coping styles, and anxiety & depression in pediatric nurses with different lengths of service in six tertiary hospitals in Chengdu, China. Background: The level of stress experienced by nurses is related to their length of service. In the current study, we investigated the potential correlations among stressors, coping styles, and anxiety and depression in pediatric nurses with different lengths of service in six tertiary hospitals in Chengdu, Sichuan, China.
Methods: Between January and June 2018, we enrolled 500 pediatric nurses from 6 tertiary hospitals in Chengdu using a convenience sampling method. A cross-sectional survey was carried out using the Chinese Nurses' Occupational Stressor Scale (NOSS), the Simplified Coping Style Questionnaire (SCSQ), and Zung's Self-rating Anxiety Scale (SAS) and Self-rating Depression Scale (SDS).
Results: Statistically significant differences were found among pediatric nurses with different lengths of service in average NOSS scores and the scores of its dimensions, in SAS and SDS scores, and in coping style scores (all P<0.05). Nurses with 8-12 years of service had the highest average score for stressors. Anxiety and depression were both prevalent among nurses with 4-7 years of service. The average overall stress scores of nurses with different lengths of service were negatively correlated with positive coping style (P<0.05), were not significantly correlated with negative coping style (P>0.05), and were positively correlated with anxiety and depression scores (P<0.05). The positive coping style score showed negative correlations with the anxiety and depression scores (both P<0.05). The negative coping style score was positively correlated with the anxiety and depression scores (all P<0.05), except in nurses with 4-7 years of service, for whom the negative coping style score showed no significant correlation with the depression score.
Conclusions: Pediatric nurses with different lengths of service in tertiary hospitals in Chengdu experience different levels of workplace stress. Pediatric nurses with 4-7 and 8-12 years of service have higher levels of workplace stress and are more likely to experience anxiety and depression.
abstract_id: PUBMED:37150788
Establishing a pediatric interventional radiology inpatient consult service. Objective: To delineate pediatric interventional radiology (IR) inpatient consult growth and resulting collections after implementation of a pediatric IR consult service.
Methods: An inpatient IR consult process was created at a single academic children's hospital in October 2019. IR consult note templates were created in Epic (Epic Systems Corporation, Verona, Wisconsin) and utilized by 4 IR physicians. Automatic charge generation was linked to differing levels of evaluation and management (E&M) service relating to current procedural terminology (CPT) inpatient consult codes 99251-99255. The children's hospital informatics division identified IR consult notes entered from the implementation of the consult service: October 2019 to January 2022. The university radiology department billing office provided IR service E&M charge, payment, and relative value unit (RVU) information for this study period. A chart review was performed to determine the IR procedure conversion rate. Mann-Whitney and two-sample t-tests were used to compare use of the 25-modifier, monthly consult growth, and monthly payment growth; a P-value < 0.05 was considered statistically significant.
Results: Within this 27-month period, a total of 2153 inpatient IR consults were performed during 1757 Epic hospital encounters; the monthly consult peak was reached 5 months into the study period. Consult level breakdown by CPT codes was: 99251-8.7%, 99252-81.7%, and 99253-8.8%. 69.7% of IR consults had consult-specific billing, with payments in 96.4% resulting in $143,976 of new revenue. From 2020 to 2021, IR consult volume trended upward by 13.4% (P=0.069), and consult-specific payments increased by 84.1% (P<0.001). The IR consult procedure conversion rate was 96.5%.
Conclusion: An inpatient pediatric IR consult service was quickly established and maintained by four physicians over a 27-month study period. Annual IR consult volume trended upward and consult-specific payments increased, resulting in previously uncaptured IR service revenue.
Answer: Based on the provided abstracts, it appears that various pediatric services are indeed coordinated, although the degree and nature of coordination vary across different services and institutions.
For instance, the LIFEspan model of transitional rehabilitative care for youth with disabilities demonstrates a coordinated care approach developed by a partnership between a pediatric and an adult rehabilitation hospital, aiming to support transitional care for youth with disabilities (PUBMED:24919941). However, this model also highlights gaps in service integration with primary care and other sectors, suggesting that while coordination exists within the rehabilitation sector, it may not be fully integrated across all relevant services.
The development of a pediatric hospitalist sedation service at St. Louis Children's Hospital outlines a tiered system of sedation providers, indicating a structured and coordinated approach to delivering sedation services across various hospital units (PUBMED:22042550).
Similarly, strategies to optimize a pediatric magnetic resonance imaging (MRI) service emphasize the importance of a collaborative team, including radiologists, physicists, technologists, nurses, and vendor specialists, to address challenges and improve service delivery, which is indicative of a coordinated effort (PUBMED:33856504).
In Rwanda, the public family planning service delivery system is described as well-coordinated, with community health workers and nurses playing different and complementary roles, and family planning being integrated into other maternal and child health services (PUBMED:35090506).
The integration of telehealth and coordinated care approaches is discussed in the context of how they impact health care service organization structure and ethos, with an emphasis on the importance of tailored staff training and the positive association with various outcomes (PUBMED:34345792).
Lastly, the establishment of a pediatric interventional radiology inpatient consult service at a children's hospital demonstrates a coordinated effort to create and maintain a consult service that resulted in increased revenue and a high procedure conversion rate (PUBMED:37150788).
Overall, these abstracts suggest that pediatric services are coordinated to various extents, with efforts made to improve integration and collaboration among different healthcare professionals and across different services. |
Instruction: Can a good death and quality of life be achieved for patients with terminal cancer in a palliative care unit?
Abstracts:
abstract_id: PUBMED:21126195
Can a good death and quality of life be achieved for patients with terminal cancer in a palliative care unit? Background: There is a lack of evidence supporting the claim that palliative care can improve quality of life and promote a good death in patients with terminal cancer.
Objectives: This study was designed to evaluate the change of quality of life and quality of death over time and between patients of long and short survival in a palliative care unit.
Methods: Patient demography, cancer sites, Eastern Cooperative Oncology Group (ECOG) status were collected at admission. Quality of life, including physical and psychological symptoms, social support, and spirituality was assessed daily after admission. Quality of death was assessed by a Good Death Scale (GDS) at admission and retrospectively for 2 days before death.
Results: A total of 281 patients (52% women) were admitted and died during the study period. One hundred forty-five patients (51.6%) died within 3 weeks. Although those with short survival (<3 weeks) had more physical symptoms during the first week, there was no difference between survival groups in quality of life dimensions at admission, at 1 week, or at 2 days before death. Physical conditions deteriorated with time, but the other dimensions continued to improve until death. GDS scores and their subdimensions also continued to improve until death. Although those with long survival (≥3 weeks) had better scores for awareness, acceptance, timeliness, comfort, and GDS at admission, there was no difference between the two groups at 2 days before death.
Conclusion: Under comprehensive palliative care, patients with terminal cancer can have good quality of life and experience a good death even with short survival.
abstract_id: PUBMED:7518478
Dying in palliative care units and in hospital: a comparison of the quality of life of terminal cancer patients. A comparison of the quality of life of terminal cancer patients in two palliative care units with that of those in a general hospital is reported here. Quality of life was considered as a multidimensional concept. It was assessed for the 182 patients by applying content analysis scales to transcripts of their responses to part of a standardized interview. A personal construct model of dying provided the specific hypotheses about differences in quality of life. Patients in specialized palliative care units were, as predicted, found to differ from those dying in hospital, showing less indirectly expressed anger but more positive feelings. They also reported more anxiety about death but less anxiety about isolation and general anxiety, and fewer influential and nonspecified shared relationships. Against prediction, the patients in the two specialized units were also found to differ from each other, those in the smaller unit showing more directly expressed anger and helplessness than those in the larger unit.
abstract_id: PUBMED:24703943
The quality of dying and death in cancer and its relationship to palliative care and place of death. Context: Health care is increasingly focused on end-of-life care outcomes, but relatively little attention has been paid to how the dying experience is subjectively evaluated by those involved in the process.
Objectives: To assess the quality of death of patients with cancer and examine its relationship to receipt of specialized palliative care and place of death.
Methods: A total of 402 deaths of cancer patients treated at a university-affiliated hospital and home palliative care program in downtown Toronto, Ontario, Canada were evaluated by bereaved caregivers eight to 10 months after patient death with the Quality of Dying and Death (QODD) questionnaire. Caregivers also reported on bereavement distress, palliative care services received, and place of death.
Results: Overall quality of death was rated "good" to "almost perfect" by 39% and "neither good nor bad" by 61% of caregivers. The lowest QODD subscale scores assessed symptom control (rated "terrible" to "poor" by 15% of caregivers) and transcendence over death-related concerns (rated "terrible" to "poor" by 19% of caregivers). Multivariable analyses revealed that late or no specialized palliative care was associated with worse death preparation, and home deaths were associated with better symptom control, death preparation, and overall quality of death.
Conclusion: The overall quality of death was rated positively for the majority of these cancer patients. Ratings were highest for home deaths perhaps because they are associated with fewer complications and/or a more extensive support network. For a substantial minority, symptom control and death-related distress at the end of life were problematic, highlighting areas for intervention.
abstract_id: PUBMED:35174981
Association between temporary discharge from the inpatient palliative care unit and achievement of good death in end-of-life cancer patients: A nationwide survey of bereaved family members. Aim: To explore the unclear association between temporary discharge home from the palliative care unit and achievement of good death, in the background of increases in discharge from the palliative care unit. Association between experiences and circumstances of patient and family and duration of temporary discharge was also examined.
Methods: This study was a secondary analysis of data from a nationwide post-bereavement survey.
Results: Among 571 patients, 16% experienced temporary discharge home from the palliative care unit. The total good death inventory score (p < .05) and the sum of the 10 core attributes (p < .05) were significantly higher in the group that was temporarily discharged and stayed home for ≥2 weeks. Among all attributes, "Independent in daily activities" (p < .001) was significantly better in this group. Regarding the experiences and circumstances of patients and families, improvement of the patient's appetite (p < .05), and better sleep (p < .05) and peacefulness (p < .05) of family caregivers, compared with when the patient was hospitalized, were associated with a longer stay at home after discharge.
Conclusions: Patients' achievement of a good death was better in the group that was temporarily discharged and stayed home longer, but this seemed to be influenced by the patients' high levels of independence. Temporary discharge from the palliative care unit and a longer stay at home were associated with improvement of the patients' appetite and better sleep and mental health status of family caregivers. Discharging patients home from the palliative care unit is worth considering even if only temporary.
abstract_id: PUBMED:36125610
Factors associated with good death of patients with advanced cancer: a prospective study in Japan. Purpose: It is important for palliative care providers to identify what factors are associated with a "good death" for patients with advanced cancer. We aimed to identify factors associated with a "good death" evaluated by the Good Death Scale (GDS) score among inpatients with advanced cancer in palliative care units (PCUs) in Japan.
Methods: The study is a sub-analysis of a multicenter prospective cohort study conducted in Japan. All variables were recorded on a structured data collection sheet designed for the study. We classified each patient into a better GDS group or a worse GDS group and examined factors associated with a better GDS score using multivariate analysis.
Results: Between January and December 2017, 1896 patients were enrolled across 22 PCUs in Japan. Among them, a total of 1157 patients were evaluated. Five variables were significantly associated with a better GDS score in multivariate analysis: preferred place of death at a PCU (odds ratio [OR] 2.85; 95% confidence interval [CI] 1.72-4.71; p < 0.01), longer survival time (OR 1.02; 95% CI 1.00-1.03; p < 0.01), non-sudden death (OR 1.96; 95% CI 1.27-3.04; p < 0.01), better spiritual well-being in the last 3 days of life (OR 0.53; 95% CI 0.42-0.68; p < 0.01), and better communication between patient and family (OR 0.81; 95% CI 0.66-0.98; p = 0.03).
Conclusions: We identified factors associated with a "good death" using GDS among advanced cancer patients in Japanese PCUs. Recognition of factors associated with GDS could help to improve the quality of end-of-life care.
abstract_id: PUBMED:22962093
"A good death"--sequence (not stigma), to an enigma called life: case report on end-of-life decision making and care. Fear of death and the stigma associated with the terminal events of illness prevents us from dying well. Lack of recognition of palliative care as a speciality, in many countries, leads us to die a pathetic death in ICU rather than dying at home with near and dear ones around. Its time to break the taboo of death and to start talking about this terminal sequence (good death) of good living.
abstract_id: PUBMED:27837324
A good death from the perspective of palliative cancer patients. Purpose: Although previous research has indicated some recurrent themes and similarities between what patients from different cultures regard as a good death, the concept is complex and there is lack of studies from the Nordic countries. The aim of this study was to explore the perception of a good death in dying cancer patients in Sweden.
Methods: Interviews were conducted with 66 adult patients with cancer in the palliative phase who were recruited from home care and hospital care. Interviews were analysed using qualitative content analysis.
Results: Participants viewed death as a process. A good death was associated with living with the prospect of imminent death, preparing for death and dying comfortably, e.g., dying quickly, with independence, with minimised suffering and with social relations intact. Some were comforted by their belief that death is predetermined. Others felt uneasy as they considered death an end to existence. Past experiences of the death of others influenced participants' views of a good death.
Conclusions: Healthcare staff caring for palliative patients should consider asking them to describe what they consider a good death in order to identify goals for care. Exploring patients' personal experience of death and dying can help address their fears as death approaches.
abstract_id: PUBMED:33619220
Early palliative care and quality of dying and death in patients with advanced cancer. Objective: Early palliative care (EPC) in the outpatient setting improves quality of life for patients with advanced cancer, but its impact on quality of dying and death (QODD) and on quality of life at the end of life (QOL-EOL) has not been examined. Our study investigated the impact of EPC on patients' QODD and QOL-EOL and the moderating role of receiving inpatient or home palliative care.
Method: Bereaved family caregivers who had provided care for patients participating in a cluster-randomised trial of EPC completed a validated QODD scale and indicated whether patients had received additional home palliative care or care in an inpatient palliative care unit (PCU). We examined the effects of EPC, inpatient or home palliative care, and their interactions on the QODD total score and on QOL-EOL (last 7 days of life).
Results: A total of 157 caregivers participated. Receipt of EPC showed no association with QODD total score. However, when additional palliative care was included in the model, intervention patients demonstrated better QOL-EOL than controls (p=0.02). Further, the intervention by PCU interaction was significant (p=0.02): those receiving both EPC and palliative care in a PCU had better QOL-EOL than those receiving only palliative care in a PCU (mean difference=27.10, p=0.002) or only EPC (mean difference=20.59, p=0.02).
Conclusion: Although there was no association with QODD, EPC was associated with improved QOL-EOL, particularly for those who also received inpatient care in a PCU. This suggests a long-term benefit from early interdisciplinary palliative care on care throughout the illness.
Trial Registration Number: ClinicalTrials.gov Registry (#NCT01248624).
abstract_id: PUBMED:23788936
Selected aspects of palliative care and quality of life at the terminal stage of neoplastic disease. Neoplastic diseases are among the most common causes of death. The quality of life in neoplastic disease depends on the type of neoplasm, level of progression, location, treatment possibilities and prognosis. Cancer reduces quality of life at the advanced stage of disease, when patients experience pain and suffering. Palliative care is used in the terminal phase of neoplastic disease. It includes overall care of the incurable patient and her/his family. The main objective of palliative care is meeting somatic and psycho-social needs. Recently, more and more physicians dealing with oncological patients are moving away from treating the patient as an object. They pay attention to a subjective approach, focusing on treatment outcomes as experienced by the patient. The model of medicine is becoming holistic, and during examination physicians more often pay attention to the patient's physical and mental state, economic condition, social situation, and feelings or lack of complaints. Research on quality of life is becoming a vital part of medical examination.
abstract_id: PUBMED:38058004
Bereaved Family Caregivers Perception of Trust in Palliative Care Doctors by Patients with Terminal Cancer. Trust is a key factor in achieving a good death. However, few studies have focused on factors that help patients with terminal cancer establish trusting relationships with their palliative care doctors. This exploratory qualitative study, conducted in Japan, was designed to identify factors related to terminal cancer patients' trust in their doctors. Semi-structured interviews were conducted with 18 caregivers, and grounded theory was the research approach chosen to guide the study. The data revealed seven factors related to trust in palliative care doctors: a caring attitude, symptom management, courteous and specific explanations, long-term involvement in the patient's care, being faced with inevitable death, a good impression of the institution, and referral by a trusted doctor. These factors were categorized into three main themes: [1] palliative care doctors, [2] patients with terminal cancer, and [3] professional reputation. There is potential for improving end-of-life experiences by understanding the trust-related factors identified by these caregivers and implementing interventions to foster trust.
Answer: Yes, a good death and quality of life can be achieved for patients with terminal cancer in a palliative care unit. Studies have shown that under comprehensive palliative care, patients with terminal cancer can maintain good quality of life and experience a good death, even with short survival times. Quality of life dimensions such as physical conditions, psychological symptoms, social support, and spirituality can improve or be maintained until death, and the Good Death Scale (GDS) scores also tend to improve over time in a palliative care setting (PUBMED:21126195). Additionally, specialized palliative care units have been found to provide a better quality of life for terminal cancer patients compared to general hospital settings, with patients showing less indirectly expressed anger and more positive feelings (PUBMED:7518478).
The quality of dying and death in cancer patients has been positively rated by a majority of caregivers, especially for home deaths, which are associated with better symptom control, death preparation, and overall quality of death. However, symptom control and death-related distress at the end of life remain areas for improvement (PUBMED:24703943). Temporary discharge from the palliative care unit and staying home longer has been associated with better achievement of a good death, improved appetite in patients, and better sleep and mental health status of family caregivers (PUBMED:35174981).
Factors such as the preferred place of death, longer survival time, not experiencing a sudden death, better spiritual well-being, and better communication between patient and family have been associated with a better Good Death Scale score (PUBMED:36125610). Early palliative care has also been shown to improve the quality of life at the end of life, particularly when combined with inpatient care in a palliative care unit (PUBMED:33619220). Trust in palliative care doctors is another key factor in achieving a good death, with factors such as a caring attitude, effective symptom management, and a good impression of the institution playing a role in establishing trust (PUBMED:38058004). Overall, these studies suggest that palliative care units can indeed facilitate a good death and quality of life for patients with terminal cancer. |
Instruction: Erysipelothrix rhusiopathiae endocarditis: a preventable zoonosis?
Abstracts:
abstract_id: PUBMED:38293860
Infective endocarditis with bivalvular involvement due to Erysipelothrix rhusiopathiae. Report of one case The microbiology of infective endocarditis (IE) varies across populations and depends on public health conditions and socioeconomic status. In low-income countries, oral Streptococci affect hearts with rheumatic valve disease in patients with poor dentition. In high-income countries, Staphylococci are the most common cause, affecting elderly and immunocompromised patients or those with invasive devices. Gram-positive bacilli are unusual IE pathogens. Erysipelothrix rhusiopathiae is a Gram-positive bacillus. It causes skin diseases in domestic and farm animals, but in humans it is a very unusual pathogen. This infection is considered a zoonosis, since most cases are linked to direct contact with vector animals. We report a 62-year-old male patient with a history of exposure to animals, who developed infective endocarditis with severe bivalvular regurgitation and septic shock, requiring antimicrobials and surgical treatment. Erysipelothrix rhusiopathiae was isolated from blood and valve vegetation cultures. The patient recovered well and was discharged from the hospital.
abstract_id: PUBMED:16607829
Erysipelothrix rhusiopathiae--rare etiology of persistent febrile syndrome Erysipelothrix rhusiopathiae is a Gram-positive rod, carried by many domestic and pet animals and very resistant in the environment, causing an anthropozoonotic infection in humans. It most frequently causes a skin infection and may also cause septic arthritis or systemic infection, usually associated with aortic endocarditis. Bacteremia without endocarditis is a very rare presentation, generally seen in immunocompromised patients. We report such an unexpected diagnosis in a 75-year-old woman with mitral regurgitation who was investigated for a persistent febrile syndrome, with no evidence of vegetation on repeated echocardiographic examinations and no identified portal of entry, and who recovered successfully from E. rhusiopathiae bacteremia after 14 days of intravenous ampicillin therapy.
abstract_id: PUBMED:32499982
An Unusual Case of Tricuspid Valve Infective Endocarditis Caused by Erysipelothrix Rhusiopathiae. Erysipelothrix rhusiopathiae is an omnipresent commensal in the environment, studied for over a century. It is a zoonotic pathogen known to cause infections in animals and humans. Cases of Erysipelothrix rhusiopathiae in humans have been classified into three distinct entities: localized skin infections, diffuse skin infections, and systemic organ involvement. This particular pathogen is an uncommon cause of endocarditis, with an affinity for the aortic valve. We present a case of Erysipelothrix rhusiopathiae in a patient with involvement of the tricuspid valve.
abstract_id: PUBMED:25785050
Erysipelothrix rhusiopathiae-induced aortic valve endocarditis: case report and literature review. Erysipelothrix rhusiopathiae is a zoonotic pathogen often associated with occupational exposure. Although Erysipelothrix rhusiopathiae infection has a high mortality, the heart valves in humans are rarely involved. The clinical data of a 65-year-old male with Erysipelothrix rhusiopathiae-induced aortic valve endocarditis were summarized retrospectively and analyzed together with a literature review. Based on the literature review and our experience, cases of E. rhusiopathiae-induced aortic valve endocarditis are extremely rare, and surgical treatment for this condition is useful and recommended.
abstract_id: PUBMED:36284553
Sacroiliitis with Erysipelothrix Rhusiopathiae revealing tricuspid endocarditis, the first case reported on the Guiana Shield: clinical case and review of the literature We report here an atypical case of acute sacroiliitis caused by Erysipelothrix rhusiopathiae revealing tricuspid endocarditis in a 53-year-old woman with no medical history. She was admitted to Cayenne hospital because of intense right hip and thigh pain associated with fever. Right sacroiliitis was visible on the computed tomography (CT) scan and confirmed on MRI. Transesophageal echocardiography revealed a large mobile tricuspid vegetation. Blood cultures were positive for E. rhusiopathiae. The CT scan also showed pulmonary alveolar opacities consistent with septic emboli. Clinical improvement was obtained with ceftriaxone followed by ciprofloxacin, for a total of 6 weeks of treatment. We present a review of bone and joint infections caused by E. rhusiopathiae. So far, no such case had been reported in Latin America.
abstract_id: PUBMED:23988830
Erysipelothrix bacteremia without endocarditis: rare event or under-reported occurrence? A patient presented with inflamed hands and Erysipelothrix rhusiopathiae bacteremia. Because a high incidence of endocarditis has been reported with this organism, a transesophageal echocardiogram was obtained, which was normal. Treatment with oral moxifloxacin resolved all manifestations of illness. The association between E. rhusiopathiae bacteremia and endocarditis may be spurious.
abstract_id: PUBMED:30542523
Aortic valve endocarditis with Erysipelothrix rhusiopathiae: A rare zoonosis. Erysipelothrix rhusiopathiae has an economic impact on animal husbandry by causing infection in swine, sheep and poultry. E. rhusiopathiae is present in the surface mucoid slime on fish, although fish themselves do not seem to be affected. Humans can become infected, most often through occupational exposure, and may suffer typical erysipeloid infection on exposed skin such as the hands and fingers, deeper skin infections, and sometimes sepsis and endocarditis, which are associated with a high case-fatality rate. We describe a case of aortic valve endocarditis caused by E. rhusiopathiae in a 59-year-old man who enjoyed fishing in his spare time.
abstract_id: PUBMED:27873166
A dangerous hobby? Erysipelothrix rhusiopathiae bacteremia most probably acquired from freshwater aquarium fish handling. Erysipelothrix rhusiopathiae is a facultative anaerobic Gram-positive rod that occurs widely in nature and is best known in veterinary medicine for causing swine erysipelas. In humans, infections are rare and mainly considered an occupationally acquired zoonosis. A case of E. rhusiopathiae bacteremia most likely associated with handling a home freshwater aquarium is reported. The route of transmission was probably a cut from the dorsal fin of a dead pet fish. A short review of the clinical presentations, therapeutic considerations and pitfalls of E. rhusiopathiae infections in humans is presented.
abstract_id: PUBMED:23140319
A case of apparent canine erysipeloid associated with Erysipelothrix rhusiopathiae bacteraemia. Background: Erysipelothrix rhusiopathiae is a Gram-positive facultative anaerobe found worldwide and is most commonly associated with skin disease in swine, while anecdotal reports of cases in dogs have been associated with endocarditis.
Hypothesis/objectives: Clinicians should consider systemic infectious diseases as a potential cause of erythematous skin lesions.
Animals: A 5-year-old female spayed Labrador retriever presented with lethargy, anorexia and erythematous skin lesions while receiving immunosuppressive therapy for immune-mediated haemolytic anaemia. Four days prior to presentation, the dog had chewed on a raw turkey carcase.
Methods: Complete blood count, serum chemistry profile, urinalysis and blood cultures.
Results: Blood cultures yielded a pure growth of E. rhusiopathiae serotype 1b. Amoxicillin 22 mg/kg orally twice daily for 2 weeks and discontinuation of azathioprine resulted in remission of fever and skin lesions.
Conclusions And Clinical Importance: This report is the first documentation, to the best of the authors' knowledge, of Erysipelothrix infection, a known zoonosis, in an immunosuppressed dog, highlighting the need for infectious disease monitoring in patients receiving such therapy. This information may also help educate veterinarians to include Erysipelothrix infection as a differential diagnosis in dogs with fever and skin lesions, as well as the role of blood cultures in diagnosing this disease.
abstract_id: PUBMED:19733019
Erysipelothrix rhusiopathiae. Erysipelothrix rhusiopathiae is a facultative, non-spore-forming, non-acid-fast, small, Gram-positive bacillus. The organism was first established as a human pathogen late in the nineteenth century. Three forms of human disease have been recognised since then. These include a localised cutaneous lesion form, erysipeloid, a generalised cutaneous form and a septicaemic form often associated with endocarditis. The organism is ubiquitous and able to persist for a long period of time in the environment, including marine locations. It is a pathogen or a commensal in a wide variety of wild and domestic animals, birds and fish. Swine erysipelas caused by E. rhusiopathiae is the disease of greatest prevalence and economic importance. Diseases in other animals include erysipelas of farmed turkeys, chickens, ducks and emus, and polyarthritis in sheep and lambs. Infection due to E. rhusiopathiae in humans is occupationally related, principally occurring as a result of contact with contaminated animals, their products or wastes, or soil. Erysipeloid is the most common form of infections in humans. While it has been suggested that the incidence of human infection could be declining due to technological advances in animal industries, infection still occurs in specific environments. Additionally, infection by the organism is possibly under-diagnosed due to the resemblance it bears to other infections, and problems encountered in isolation and identification. Various virulence factors have been suggested as being involved in the pathogenicity of E. rhusiopathiae. The presence of a hyaluronidase and neuraminidase has been recognised, and it was shown that neuraminidase plays a significant role in bacterial attachment and subsequent invasion into host cells. The role of hyaluronidase in the disease process is controversial. The presence of a heat labile capsule has been reported as important in virulence. Control of animal disease by sound husbandry, herd management, good sanitation and immunization procedures is recommended.
Answer: Erysipelothrix rhusiopathiae endocarditis is indeed a zoonosis, as the infection is typically acquired through occupational or environmental exposure to animals or animal products harboring the bacterium (PUBMED:38293860, PUBMED:16607829, PUBMED:32499982, PUBMED:25785050, PUBMED:36284553, PUBMED:30542523, PUBMED:27873166, PUBMED:23140319, PUBMED:19733019). The bacterium is a Gram-positive rod that is carried by many domestic and pet animals and is very resistant in the environmental habitat (PUBMED:16607829). It is present in the surface mucoid slime on fish and is found worldwide, being a pathogen or a commensal in a wide variety of wild and domestic animals, birds, and fish (PUBMED:30542523, PUBMED:19733019).
Prevention of Erysipelothrix rhusiopathiae endocarditis could potentially be achieved through measures that limit exposure to the bacterium. This might include the use of protective equipment or practices to avoid cuts and abrasions when handling animals or animal products, especially for individuals with occupational risks such as farmers, veterinarians, butchers, and fish handlers (PUBMED:27873166, PUBMED:19733019). Additionally, maintaining good hygiene and sanitation in environments where animals are kept could reduce the risk of transmission (PUBMED:19733019).
However, despite potential preventive measures, cases of Erysipelothrix rhusiopathiae endocarditis still occur, and the infection can be serious, sometimes requiring surgical intervention (PUBMED:38293860, PUBMED:25785050). The association between E. rhusiopathiae bacteremia and endocarditis may not always be straightforward, as there have been reports of bacteremia without endocarditis (PUBMED:23988830). Nonetheless, given the zoonotic nature of the pathogen and its potential for causing severe disease, efforts to prevent infection, particularly in high-risk populations, are important.
In summary, while Erysipelothrix rhusiopathiae endocarditis is a preventable zoonosis to some extent, complete prevention may be challenging due to the ubiquity of the organism in the environment and its presence in a wide range of animal hosts. |
Instruction: An uncommon case of failed suicide in a 94-year-old woman: "masked" depression or rational decision?
Abstracts:
abstract_id: PUBMED:18852554
An uncommon case of failed suicide in a 94-year-old woman: "masked" depression or rational decision? Aim: We report an unusual case of "failed suicide" in an oldest old woman who was apparently "aging successfully".
Method: This case was analysed in the light of a careful literature review.
Results: This was an unusual case of failed suicide, attempted by a 94-year-old woman who had planned the suicide several days earlier.
Conclusions: The unusual features of this case relate to: 1) the person's female gender and very advanced age; 2) her apparently "successful aging" condition; 3) the violent method and unusual means she used; 4) the suicide note written several days beforehand.
abstract_id: PUBMED:31569542
Rational Suicide in Late Life: A Systematic Review of the Literature. Background and Objectives: The complex concept of rational suicide, defined as a well-thought-out decision to die by an individual who is mentally competent, is even more controversial in the case of older adults. Materials and Methods: With the aim of better understanding the concept of rational suicide in older adults, we performed a systematic review of the literature, searching PubMed and Scopus databases and eventually including 23 published studies. Results: The main related topics emerging from the papers were: depression, self-determination, mental competence; physicians' and population's perspectives; approach to rational suicide; ageism; slippery slope. Conclusions: Despite contrasting positions and inconsistencies of the studies, the need to carefully investigate and address the expression of suicidal thoughts in older adults, as well as behaviours suggesting "silent" suicidal attitudes, clearly emerges, even in those situations where there is no diagnosable mental disorder. While premature conclusions about the "rationality" of patients' decision to die should be avoided, the possibility of rational suicide cannot be precluded.
abstract_id: PUBMED:28807701
Late-Life Suicide in Terminal Cancer: A Rational Act or Underdiagnosed Depression? Context: Previous studies have reported significantly elevated standardized mortality rates in older people with cancer. Terminally ill people represent a unique group where suicide may be considered as rational.
Objectives: The aims of this study are to compare the sociodemographic and clinical characteristics of older people with and without terminal cancer who died by suicide and analyze the suicide motives of those with terminal cancer to determine whether they represent rational suicide.
Methods: The New Zealand Coronial Services provided records of all older people (aged 65 years and older) who died by suicide between July 2007 and December 2012. Sociodemographic and clinical data were extracted from the records. Using the characteristics for defining rational suicide, we determined whether the motives in terminal cancer cases represented rational suicide.
Results: Of the 214 suicide cases, 23 (10.7%) older people were diagnosed with a terminal cancer. Univariate analysis found that older people with terminal cancer who died by suicide were less likely to have a diagnosis of depression (8.7% vs. 46.6%; P = 0.001) or previous contact with mental health services (4.5% vs. 35.0%; P = 0.004) than those without terminal cancer. About 82.6% of the terminal cancer cases had a motivational basis that would be understandable to uninvolved observers.
Conclusion: A high proportion of those with terminal cancer had motives suggestive of rational suicide. Future studies are needed to clarify whether the low rate of depression is secondary to underdiagnosis of depression or people with terminal cancer choosing to end their life as a rational act to alleviate suffering.
abstract_id: PUBMED:17484848
Depression in old age Affecting 3% of the old-age population and 10-20% of elderly patients with chronic medical illness or dementia, depression is an important health problem in late life. Depression with first onset in late life differs from early-onset depression both clinically and by greater organic cerebral involvement. If untreated, depression in the elderly leads to severe disability and to excess mortality from suicide and from adverse outcomes of medical illness. The response to antidepressant drugs in old age is comparable to that in younger age groups, and since fewer than 1 in 5 elderly people with depression are diagnosed and treated, there is substantial room for improving the prognosis of old-age depression.
abstract_id: PUBMED:36011664
Factors Related to Suicidal Ideation and Prediction of High-Risk Groups among Youngest-Old Adults in South Korea. (1) Background: Suicide among older adults involves different factors in the youngest-old and the old-old. This study aimed to identify factors predicting suicidal ideation among youngest-old adults (ages 65 to 74 years) and to characterize high-risk groups. (2) Methods: The subjects of this study were 970 youngest-old adults who participated in the Korean National Health and Nutrition Examination Survey (KNHANES VIII Year 1, 2019). Logistic regression analysis identified factors related to suicidal ideation, and decision tree analysis identified combined characteristics of high-risk groups. Data were analyzed using SPSS 27.0. (3) Results: Suicidal ideation was more common among those with relatively lower income levels (OR = 1.48, 95% CI = 1.04−2.12), those who had experienced depression (OR = 9.28, 95% CI = 4.57−18.84), those with relatively higher stress levels (OR = 2.42, 95% CI = 1.11−5.28), and those reporting relatively worse perceived health (OR = 1.88, 95% CI = 1.23−3.11). The combination of depression, low personal income level, and low perceived health predicted a high risk of suicidal ideation (64.6%, p < 0.05). (4) Conclusions: The findings indicate that this high-risk group should be prioritized when developing suicide prevention strategies.
abstract_id: PUBMED:1224371
Family dynamics, childhood depression, and attempted suicide in a 7-year-old boy: a case study. Overt suicidal behavior in children is uncommon. The few papers discussing this area include only minimal descriptions of family dynamics and do not, in general, utilize the concept of childhood depression in the formulation of this behavior. This paper reviews the literature in the two areas of suicidal behavior and depression in children and presents in detail the family dynamics surrounding a suicidal attempt in a 7-year-old boy. The case is formulated in terms of childhood depression, and the course of treatment is discussed.
abstract_id: PUBMED:36141291
Prevalence and Predictive Factors of Masked Depression and Anxiety among Jordanian and Palestinian Couples: A Cross-Sectional Study. Although anxiety and depression are among the most prevalent mental disorders worldwide, they continue to receive less attention than their physical counterparts in terms of health care provision and population mentalisation. This cross-sectional study explores and compares the national prevalence of depression and anxiety signs/symptoms, as well as identifying associated socio-demographic factors, among Jordanian and Palestinian fertile couples. Four hundred and sixty-nine participants were eligible for inclusion and agreed to participate in the study. The mean scores for HAM-A and BDI-II were 12.3 ± 8.2 and 15.30 ± 10.0, respectively. According to the grading of HAM-A and BDI-II, the majority of participants rated themselves as mildly anxious (N = 323, 68.9%) and around one third of participants (N = 148, 31.6%) as moderately to severely depressed. Suicidal intent was notable and concerning: around 18.6% of participants reported suicidal thoughts and wishes. There was a significant correlation between both the HAM-A score and the BDI-II score and age [p = 0.01, p = 0.011, respectively], body weight [p = 0.01, p = 0.006, respectively], and total monthly income [p < 0.001, p < 0.001, respectively]. Our findings should alert healthcare professionals and other interested parties to the high burden of anxiety and depression symptoms among Jordanian and Palestinian couples. To support these couples' mental health, healthcare professionals, researchers, and educators should concentrate on creating effective and culturally relevant educational, preventive, and intervention procedures based on evidence-based guidelines.
abstract_id: PUBMED:17317474
Suicide in old age: illness or autonomous decision of the will? Depression, often accompanied by suicidal behavior or recurring thoughts about suicide, is one of the most common psychic impairments in old age. Statistics in Austria tell us clearly: suicidal candidates among the elderly are likely to succeed. Especially in men, suicide has become a significant cause of death. In an age where traditional family structures are beginning to fall apart, and where the elderly increasingly feel they are a "burden" to society, unable to find their place, we tend to look at suicide more and more as a voluntary and autonomous decision, thus rationalizing it as in: "This life I would not want to live either". But is it permissible for physicians to consider a patient who has acted suicidally to be "not ill," or to have acted "with good reason"? The present paper critically revisits the concept of "rational suicide." What I hope to illuminate is the tension between medical care for the patient and respect for the patient's autonomy that physicians have to negotiate in their work.
abstract_id: PUBMED:37778008
Clinical picture and differential diagnosis of depression in old age Depression in old age is often underdiagnosed, although it is the leading mental health problem at this age. The significance of assessment and adequate treatment of depression among elderly patients cannot be overstated, since it significantly impairs quality of life and increases morbidity and mortality in many chronic disease groups. In addition, it is a primary risk factor for completed suicide, which occurs up to three times more often among the elderly than in other age groups. In our non-systematic (narrative) summary study, we briefly review the clinical picture and differential diagnosis of depression in elderly patients, as well as the main aspects of screening and treatment. The clinical characteristics and the pathology of the disease at this age raise a number of methodological questions that could be the subject of further empirical studies. Orv Hetil. 2023; 164(39): 1537-1543.
abstract_id: PUBMED:27840559
Late-life Depression, Suicidal Ideation, and Attempted Suicide: The Role of Individual Differences in Maximizing, Regret, and Negative Decision Outcomes. Suicide rates are highest in adults of middle and older age. Research with psychiatric patients has shown that proneness to feel regret about past decisions can grow so intense that suicide becomes a tempting escape. Here, we examine the additional role of individual differences in maximizing, or the tendency to strive for the best decision, rather than one that is good enough. We provided individual-differences measures of maximizing, regret proneness, and negative life decision outcomes (as reported on the Decision Outcome Inventory or DOI) to a non-psychiatric control group, as well as three groups of psychiatric patients in treatment for suicide attempts, suicidal ideation, or non-suicidal depression. We found that scores on the three individual-differences measures were worse for psychiatric patients than for non-psychiatric controls, and were correlated to clinical assessments of depression, hopelessness, and suicidal ideation. More importantly, maximizing was associated with these clinical assessments, even after taking into account maximizers' worse life decision outcomes. Regret proneness significantly mediated those relationships, suggesting that maximizers could be at risk for clinical depression because of their proneness to regret. We discuss the theoretical relevance of our findings and their promise for clinical practice. Ultimately, late-life depression and suicidal ideation may be treated with interventions that promote better decision making and regret regulation.
Answer: The case of the 94-year-old woman who attempted suicide despite apparently "aging successfully" raises the question of whether her actions were due to "masked" depression or a rational decision (PUBMED:18852554). The concept of rational suicide, particularly in older adults, is controversial and complex, involving factors such as depression, self-determination, mental competence, and societal perspectives (PUBMED:31569542). In the context of terminal illness, such as cancer, some suicides may be considered rational acts to alleviate suffering, especially when there is a low rate of diagnosed depression or previous mental health service contact (PUBMED:28807701).
Depression in old age is a significant health problem, affecting a notable percentage of the elderly population, and can lead to severe disability and increased mortality due to suicide (PUBMED:17484848). Factors such as low income, depression, high stress levels, and poor perceived health have been associated with suicidal ideation among the elderly (PUBMED:36011664). Moreover, depression and anxiety can be underdiagnosed and may present with "masked" symptoms, which could contribute to suicidal thoughts (PUBMED:36141291).
The concept of "rational suicide" in old age is further complicated by the tension between medical care and patient autonomy. It is critical to consider whether a patient's suicidal action is a result of illness or an autonomous decision (PUBMED:17317474). Since depression in old age is often underdiagnosed and is a primary risk factor for completed suicide, it is essential to carefully assess and treat depression among elderly patients (PUBMED:37778008).
Individual differences in maximizing, regret, and negative decision outcomes have also been linked to late-life depression and suicidal ideation, suggesting that a tendency to strive for the best decision and a proneness to regret could be risk factors for depression and suicide in the elderly (PUBMED:27840559).
In conclusion, determining whether the 94-year-old woman's suicide attempt was due to "masked" depression or a rational decision requires a careful evaluation of her mental health, potential depressive symptoms, and her capacity for making autonomous decisions. It is important to avoid premature conclusions about the rationality of her decision and to consider the broader context of her life circumstances, mental state, and the complex interplay of factors that influence suicidal behavior in the elderly. |
Instruction: Do patients with sphincter of Oddi dysfunction benefit from endoscopic sphincterotomy?
Abstracts:
abstract_id: PUBMED:8724026
Do patients with sphincter of Oddi dysfunction benefit from endoscopic sphincterotomy? A 5-year prospective trial. Objectives: To assess the incidence of elevated sphincter of Oddi baseline pressure and the response to endoscopic sphincterotomy in patients with suspected sphincter of Oddi dysfunction.
Design: A 5-year prospective clinical trial.
Methods: One-hundred and eight patients with recurrent biliary-type pain after cholecystectomy were enrolled. After thorough investigation, 35 patients with suspected type II sphincter of Oddi dysfunction (SOD) and another 29 type III patients remained for further investigation. Both groups were similar with respect to demographic data and severity of pain. Biliary manometry was performed in all except three patients in either group. Endoscopic sphincterotomy was performed in all patients with abnormal sphincter of Oddi baseline pressure (> 40 mmHg). All patients were clinically re-evaluated after 4-6 weeks, and thereafter the sphincterotomized patients were followed for a median period of 2.5 years.
Results: An abnormal sphincter of Oddi baseline pressure was found in 62.5% of the type II patients and in 50% of the patients with suspected type III SOD (P = 0.66). At the 4-6 week follow-up none of those patients without abnormal manometry, but 70% of the patients with type II SOD, and 39% of the type III SOD patients, respectively, reported subjective benefit after sphincterotomy (P = 0.13 type II vs. type III). However, after a median follow-up of 2.5 years, sustained symptomatic improvement after sphincterotomy was found in 60% of the type II patients, but only in 8% of the patients with type III SOD (P < 0.01).
Conclusion: Despite the lack of a difference in the incidence of abnormal sphincter of Oddi baseline pressure between type II and type III SOD, the Geenen-Hogan classification helps to predict the clinical outcome after endoscopic sphincterotomy.
abstract_id: PUBMED:18090984
Is endoscopic sphincterotomy avoidable in patients with sphincter of Oddi dysfunction? Aim: Endoscopic sphincterotomy is an efficient means of treating sphincter of Oddi dysfunction (SOD), but it is associated with a morbidity rate of 20%. The aim of this study was to assess how frequently endoscopic sphincterotomy was performed to treat SOD in a group of patients with a 1-year history of medical management.
Methods: A total of 59 patients, who had been cholecystectomized 9.3 years previously on average, were included in this study and they all underwent biliary scintigraphy. Medical treatment was prescribed for 1 year. Endoscopic sphincterotomy was proposed for patients whose medical treatment had been unsuccessful.
Results: Eleven patients were rated group 1 on the Milwaukee classification scale, 34 group 2 and 14 group 3. The hilum-duodenum transit time (HDTT) was lengthened in 32 patients. The medical treatment was efficient or fairly efficient in 45% of the group 1 patients, 67% of the group 2 patients, and 71.4% of the group 3 patients (P=0.29). Only 14 patients out of the 21 whose medical treatment was unsuccessful agreed to undergo endoscopic sphincterotomy. HDTT was lengthened in 11 of the 14 patients undergoing endoscopic sphincterotomy and in 21 of the 45 non-endoscopic sphincterotomy patients (P=0.03). Twelve of the 14 patients who underwent endoscopic sphincterotomy were cured.
Conclusion: In this prospective series of patients with a 1-year history of medical management, only 23% of the patients with suspected SOD underwent endoscopic sphincterotomy although 54% had an abnormally long HDTT.
abstract_id: PUBMED:33161159
The Next EPISOD: Trends in Utilization of Endoscopic Sphincterotomy for Sphincter of Oddi Dysfunction from 2010-2019. Background & Aims: For years, the endoscopic management of the disorder formerly known as Type III Sphincter of Oddi Dysfunction (SOD) had been controversial. In 2013, the results of the Evaluating Predictors and Interventions in Sphincter of Oddi Dysfunction (EPISOD) trial demonstrated that there was no benefit associated with endoscopic sphincterotomy for patients with Type III SOD. We aimed to assess the utilization of endoscopic sphincterotomy for patients with SOD in a large population database from 2010-2019.
Methods: We searched a large electronic health record (EHR)-based dataset incorporating over 300 individual hospitals in the United States (Explorys, IBM Watson health, Armonk, NY). Using Systematized Nomenclature of Medicine Clinical Terms (SNOMED-CT) we identified patients with a first-ever diagnosis of "disorder of Sphincter of Oddi" annually from 2010-2019. Subclassification of SOD types was not feasible using SNOMED-CT codes. Stratified by year, we identified the proportion of patients with newly-diagnosed SOD undergoing endoscopic sphincterotomy and those receiving newly-prescribed medical therapy.
Results: A total of 39,950,800 individual patients were active in the database with 7,750 index diagnoses of SOD during the study period. The incidence rates of SOD increased from 2.4 to 12.8 per 100,000 persons from 2010-2019 (P < .001). In parallel, there were reductions in the rates of biliary (34.3% to 24.5%) and pancreatic sphincterotomy (25% to 16.4%), respectively (P < .001). Sphincter of Oddi manometry (SOM) was infrequently utilized, <20 times in any given year, throughout the study duration. There were no significant increases in new prescriptions for TCAs, nifedipine, or vasodilatory nitrates.
Conclusions: Among a wide range of practice settings which do not utilize routine SOM, a sudden and sustained decrease in rates of endoscopic sphincterotomy for newly-diagnosed SOD was observed beginning in 2013. These findings highlight the critical importance of high-quality, multi-center, randomized controlled trials in endoscopy to drive evidence-based changes in real-world clinical practice.
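The headline numbers in studies like this one come down to simple arithmetic on aggregate counts: first-ever diagnoses per year divided by the active patient population, and procedures divided by new diagnoses. A minimal sketch with made-up counts (placeholders, not Explorys values):

    import pandas as pd

    # Hypothetical aggregate counts per calendar year (placeholders, not Explorys data).
    counts = pd.DataFrame({
        "year":             [2010, 2013, 2016, 2019],
        "new_sod_cases":    [450, 700, 950, 1200],
        "active_patients":  [18_500_000, 21_000_000, 24_000_000, 26_500_000],
        "sphincterotomies": [155, 215, 250, 295],
    })

    counts["incidence_per_100k"] = 1e5 * counts["new_sod_cases"] / counts["active_patients"]
    counts["pct_sphincterotomy"] = 100 * counts["sphincterotomies"] / counts["new_sod_cases"]
    print(counts[["year", "incidence_per_100k", "pct_sphincterotomy"]].round(2))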
abstract_id: PUBMED:23419531
The effect of endoscopic sphincterotomy on the motility of the gallbladder and of the sphincter of Oddi in patients with acalculous biliary pain syndrome Introduction: Sphincter of Oddi dysfunction usually occurs after cholecystectomy, but it can sometimes be detected in patients with intact gallbladder too. The diagnostic value of the non-invasive functional tests is not established in this group of patients and the effects of sphincterotomy on transpapillary bile outflow and gallbladder motility are unknown.
Aims: The aim of this study was to determine the effect of endoscopic sphincterotomy on the gallbladder ejection fraction, transpapillary bile outflow and the clinical symptoms of patients with acalculous biliary pain syndrome.
Patients And Methods: 36 patients with acalculous biliary pain syndrome underwent quantitative hepatobiliary scintigraphy, and all of them had a decreased cholecystokinin-induced gallbladder ejection fraction. Endoscopic manometry of the sphincter of Oddi showed abnormal sphincter function in 26 patients, who were enrolled in the study. Before and after endoscopic sphincterotomy, all patients had ultrasonographic measurement of the cholecystokinin-induced gallbladder ejection fraction with and without nitroglycerin pretreatment, and scintigraphy was repeated as well. The effects of sphincterotomy on gallbladder ejection fraction and transpapillary biliary outflow were evaluated. In addition, changes in biliary pain score with a previously validated questionnaire were also determined.
Results: All 26 patients had a decreased gallbladder ejection fraction before sphincterotomy, measured with scintigraphy (19±18%) and ultrasound (16±9.7%), which improved after nitroglycerin pretreatment (48.2±17%; p<0.005). Detected with both methods, the ejection fraction was in the normal range after sphincterotomy (52±37% and 40.8±16.5%), but nitroglycerin pretreatment failed to produce further improvement (48.67±22.2%, NS). Based on the scintigraphic examination, sphincterotomy significantly improved transpapillary biliary outflow (common bile duct half time 63±33 min vs. 37±17 min; p<0.05). According to the questionnaire results, 22 of the 26 patients reported significant symptom improvement after sphincterotomy.
Conclusions: Endoscopic sphincterotomy improves the cholecystokinin-induced gallbladder ejection fraction, transpapillary biliary outflow, and biliary symptoms in patients with acalculous biliary pain syndrome and sphincter of Oddi dysfunction. The cholecystokinin-induced gallbladder ejection fraction with nitroglycerin pretreatment, measured with ultrasonography, can be useful to select a subgroup of patients who can benefit from sphincterotomy.
abstract_id: PUBMED:31448369
Clinical course of biliary-type sphincter of Oddi dysfunction: endoscopic sphincterotomy and functional dyspepsia as affecting factors. Background And Study Aims: The objective of this study was to clarify the effectiveness of treatment selection for biliary-type sphincter of Oddi dysfunction by severe pain frequency and the risk factors for recurrence including the history of functional gastrointestinal disorder.
Patients And Methods: Thirty-six sphincter of Oddi dysfunction patients confirmed by endoscopic retrograde cholangiopancreatography were enrolled in this study. Endoscopic sphincterotomy was performed for type I and manometry-confirmed type II sphincter of Oddi dysfunction patients with severe pain (⩾2 times/year; endoscopic sphincterotomy group). Others were treated medically (non-endoscopic sphincterotomy group).
Results: The short-term effectiveness rate of endoscopic sphincterotomy was 91%. The final remission rates of the endoscopic sphincterotomy and non-endoscopic sphincterotomy groups were 86% and 100%, respectively. Symptoms relapsed after endoscopic sphincterotomy in 32% of patients. Patients in the endoscopic sphincterotomy and non-endoscopic sphincterotomy groups had or developed functional dyspepsia in 41% and 14%, irritable bowel syndrome in 5% and 14%, and gastroesophageal reflux disorder in 14% and 0%, respectively. History or new onset of functional dyspepsia was related to recurrence on multivariate analysis. The frequency of occurrence of post-endoscopic retrograde cholangiopancreatography pancreatitis and post-endoscopic retrograde cholangiopancreatography cholangitis was high in both groups. Two new occurrences of bile duct stone cases were observed in each group.
Conclusion: According to the treatment criteria, endoscopic and medical treatment for biliary-type sphincter of Oddi dysfunction has high effectiveness, but recurrences are common. Recurrences may be related to new onset or a history of functional dyspepsia.
abstract_id: PUBMED:1587237
Evaluation of endoscopic sphincterotomy in sphincter of Oddi dysfunction. In this prospective study the efficacy of endoscopic sphincterotomy was evaluated in ten post-cholecystectomy patients with clinical and biliary manometric evidence of SO dysfunction. Ten patients (8 females, 2 males, median age 59 years) were assessed at a median period of 24 months (range 12-48) after endoscopic sphincterotomy. Eight of the ten patients (80%) were symptomatically improved after endoscopic sphincterotomy although only four were totally asymptomatic. The two patients who had unchanged symptoms after sphincterotomy have since had alternative diagnoses made and have improved on appropriate therapy. It is concluded that endoscopic sphincterotomy is effective in relieving symptoms in post-cholecystectomy patients with clinical and manometric evidence of SO dysfunction.
abstract_id: PUBMED:16842450
Systematic review: sphincter of Oddi dysfunction--non-invasive diagnostic methods and long-term outcome after endoscopic sphincterotomy. Background: Sphincter of Oddi dysfunction is a benign, functional gastrointestinal disorder for which invasive endoscopic therapy with potential complications is often recommended.
Aims: To review the available evidence regarding the diagnostic accuracy of non-invasive methods that have been used to establish the diagnosis and to estimate the long-term outcome after endoscopic sphincterotomy.
Methods: A systematic review of English language articles and abstracts containing relevant terms was performed.
Results: Non-invasive diagnostic methods are limited by their low sensitivity and specificity, especially in patients with Type III sphincter of Oddi dysfunction. Secretin-stimulated magnetic resonance cholangiopancreatography appears to be useful in excluding other potential causes of symptoms, and morphine-provoked hepatobiliary scintigraphy also warrants further study. Approximately 85%, 69% and 37% of patients with biliary Types I, II and III sphincter of Oddi dysfunction, respectively, experience sustained benefit after endoscopic sphincterotomy. In pancreatic sphincter of Oddi dysfunction, approximately 75% of patients report symptomatic improvement after pancreatic sphincterotomy, but the studies have been non-controlled and heterogeneous.
Conclusions: Patients with suspected sphincter of Oddi dysfunction, particularly those with biliary Type III, should be carefully evaluated before considering sphincter of Oddi manometry and endoscopic sphincterotomy. Further controlled trials are needed to justify the invasive management of patients with biliary Type III and pancreatic sphincter of Oddi dysfunction.
abstract_id: PUBMED:17129553
Endoscopic pancreatic sphincterotomy: when and how Endoscopic pancreatic sphincterotomy (EPS) has fallen into disuse for some time because of the risk of severe complications. More recently, EPS has been advocated as an effective treatment modality for several pancreatic disorders, including severe chronic pancreatitis, pancreatic pseudocyst, ampulloma, pancreas divisum, and pancreatic sphincter dysfunction. Favorable outcomes were achieved in 70% of patients undergoing EPS to facilitate further interventions in whom long-term follow-up was available; complications occurred in 14% and reintervention was required in 23%. The results were as good as those of surgery after long-term follow-up. Patients who underwent some form of pancreatic drainage after sphincterotomy had fewer complications (p = 0.03). Approximately 75% of patients with pancreas divisum who presented with idiopathic acute recurrent pancreatitis improved after endoscopic therapy, but only 25% of patients experienced pain reduction of at least 50%. The National Institutes of Health Consensus recommends EPS in patients with type I sphincter of Oddi dysfunction (SOD). In patients with type II SOD, prior manometry should be performed. In our series of 17 patients, we obtained results similar to those of other studies, although the number of patients was small. EPS appears to be a safe and effective technique, but further, well-designed, multicenter, prospective and long-term studies are required to evaluate these results and settle current controversies.
abstract_id: PUBMED:12665757
Long-term outcome of endoscopic dual pancreatobiliary sphincterotomy in patients with manometry-documented sphincter of Oddi dysfunction and normal pancreatogram. Background: For patients with sphincter of Oddi dysfunction and abnormal pancreatic basal sphincter pressure, additional pancreatic sphincterotomy has been recommended. The outcome of endoscopic dual pancreatobiliary sphincterotomy in patients with manometry-documented sphincter of Oddi dysfunction was evaluated.
Methods: An ERCP database was searched for data entered from January 1995 to November 2000 on patients with sphincter of Oddi dysfunction who met the following parameters: sphincter of Oddi manometry of both ducts, abnormal pressure for at least 1 sphincter (≥40 mm Hg), no evidence of chronic pancreatitis, and endoscopic dual pancreatobiliary sphincterotomy. Patients were offered reintervention by repeat ERCP if clinical symptoms were not improved. The frequency of reintervention was analyzed according to ducts with abnormal basal sphincter pressure, previous cholecystectomy, sphincter of Oddi dysfunction type, and endoscopic dual pancreatobiliary sphincterotomy method.
Results: A total of 313 patients were followed for a mean of 43.1 months (median, 41.0 months; interquartile range: 29.8-60.0 months). Immediate postendoscopic dual pancreatobiliary sphincterotomy complications occurred in 15% of patients. Reintervention was required in 24.6% of patients at a median follow-up (interquartile range) of 8.0 (5.5-22.5) months. The frequency of reintervention was similar irrespective of ducts with abnormal basal sphincter pressure, previous cholecystectomy, or endoscopic dual pancreatobiliary sphincterotomy method. Of patients with type III sphincter of Oddi dysfunction, 28.3% underwent reintervention compared with 20.4% with combined types I and II sphincter of Oddi dysfunction (p = 0.105). When compared with endoscopic biliary sphincterotomy alone in historical control patients from our unit, endoscopic dual pancreatobiliary sphincterotomy had a lower reintervention rate in patients with pancreatic sphincter of Oddi dysfunction alone and a comparable outcome in those with sphincter of Oddi dysfunction of both ducts.
Conclusion: Endoscopic dual pancreatobiliary sphincterotomy is useful in patients with pancreatic sphincter of Oddi dysfunction. Prospective randomized trials of endoscopic biliary sphincterotomy alone versus endoscopic dual pancreatobiliary sphincterotomy based on sphincter of Oddi manometry findings are in progress.
abstract_id: PUBMED:19721854
Endoscopic pancreatic sphincterotomy: indications and complications. Background/aims: Although a few recent studies have reported the effectiveness of endoscopic pancreatic sphincterotomy (EPST), none has compared physicians' skills and complications resulting from the procedure. Thus, we examined the indications, complications, and safety of EPST performed by a single physician at a single center.
Methods: Among 2,313 patients who underwent endoscopic retrograde cholangiopancreatography between January 1996 and March 2008, 46 patients who underwent EPST were included in this retrospective study. We examined the indications, complications, safety, and effectiveness of EPST, as well as the need for a pancreatic drainage procedure and the concomitant application of EPST and endoscopic sphincterotomy (EST).
Results: Diagnostic indications for EPST were chronic pancreatitis (26 cases), pancreatic divisum (4 cases), and pancreatic cancer (8 cases). Therapeutic indications for EPST were removal of a pancreaticolith (10 cases), stent insertion for pancreatic duct stenosis (9 cases), nasopancreatic drainage (7 cases), and treatment of sphincter of Oddi dysfunction (1 case). The success rate of EPST was 95.7% (44/46). Acute complications of EPST included five cases (10.9%) of pancreatitis and one of cholangitis (2.2%). EPST with EST did not reduce biliary complications. Endoscopic pancreatic drainage procedures following EPST did not reduce pancreatic complications.
Conclusions: EPST showed a low incidence of complications and a high rate of treatment success; thus, EPST is a relatively safe procedure that can be used to treat pancreatic diseases. Pancreatic drainage procedures and additional EST following EPST did not reduce the incidence of procedure-related complications.
Answer: Patients with sphincter of Oddi dysfunction (SOD) may benefit from endoscopic sphincterotomy, but the degree of benefit varies depending on the type of SOD and individual patient characteristics. A 5-year prospective trial found that 70% of patients with type II SOD and 39% of type III SOD patients reported subjective benefit after sphincterotomy at a 4-6 week follow-up. However, after a median follow-up of 2.5 years, sustained symptomatic improvement was found in 60% of type II patients but only in 8% of type III SOD patients (PUBMED:8724026). Another study showed that only 23% of patients with suspected SOD underwent endoscopic sphincterotomy after a 1-year history of medical management, although 54% had an abnormally long hile-duodenum transit time (HDTT) (PUBMED:18090984).
The EPISOD trial in 2013 demonstrated no benefit associated with endoscopic sphincterotomy for patients with Type III SOD, leading to a decrease in the rates of endoscopic sphincterotomy for newly-diagnosed SOD beginning in 2013 (PUBMED:33161159). In patients with acalculous biliary pain syndrome, endoscopic sphincterotomy improved gallbladder ejection fraction, transpapillary biliary outflow, and biliary symptoms (PUBMED:23419531). A study on the clinical course of biliary-type SOD found that endoscopic sphincterotomy had a high effectiveness rate, but recurrences were common, and recurrences may be related to new onset or a history of functional dyspepsia (PUBMED:31448369).
A systematic review indicated that approximately 85%, 69%, and 37% of patients with biliary Types I, II, and III SOD, respectively, experience sustained benefit after endoscopic sphincterotomy (PUBMED:16842450). Another study concluded that endoscopic sphincterotomy is effective in relieving symptoms in post-cholecystectomy patients with clinical and manometric evidence of SO dysfunction (PUBMED:1587237). However, the long-term outcome of endoscopic dual pancreatobiliary sphincterotomy in patients with manometry-documented SOD and normal pancreatogram showed that reintervention was required in 24.6% of patients at a median follow-up, with immediate postendoscopic dual pancreatobiliary sphincterotomy complications occurring in 15% of patients (PUBMED:12665757). Lastly, a retrospective study on endoscopic pancreatic sphincterotomy (EPST) found a low incidence of complications and a high rate of treatment success, suggesting that EPST is a relatively safe procedure that can be used to treat pancreatic diseases related to SOD (PUBMED:19721854).
In summary, while endoscopic sphincterotomy can provide benefit to patients with SOD, particularly those with type II, the effectiveness and long-term outcomes can vary. Type III SOD patients seem to benefit less from this intervention. Additionally, the presence of functional gastrointestinal disorders like functional dyspepsia may affect the recurrence of symptoms after the procedure. It is important to carefully evaluate patients before considering invasive treatments and to consider individual patient characteristics and SOD type when predicting the potential benefit from endoscopic sphincterotomy. |
Instruction: Does knee stability in the coronal plane in extension affect function and outcome after total knee arthroplasty?
Abstracts:
abstract_id: PUBMED:24923689
Does knee stability in the coronal plane in extension affect function and outcome after total knee arthroplasty? Purpose: The aim of the present study was to clarify whether varus-valgus laxities under static stress in extension, femoral condylar lift-off during walking, and patient-reported outcomes after total knee arthroplasty (TKA) were correlated with each other.
Methods: Ninety-four knees, which had undergone posterior-stabilized TKA, were analysed. The varus-valgus laxity during knee extension was measured using a stress radiograph. New Knee Society Score (KSS) questionnaires were mailed to all patients. Correlations between the values of stress radiographs and KSS were analysed. Additionally, continuous radiological images were taken of 15 patients while each walked on a treadmill to determine condylar lift-off from the tibial tray using a 3D-to-2D image-to-model registration technique. Correlations between the amount of lift-off and either the stress radiograph or the KSS were also analyzed.
Results: The mean angle measured was 5.9 ± 2.7° with varus stress and 5.0 ± 1.6° with valgus stress. The difference between them was 0.9 ± 2.8°. Varus-valgus laxities, or the differences between them, did not show any statistically significant correlation with either component of the KSS (p > 0.05). The average amount of femoral condylar lift-off during walking was 1.4 ± 0.8 mm (medial side) and 1.3 ± 0.6 mm (lateral side). The amount of lift-off did not correlate with either varus-valgus laxities or the KSS (p > 0.05).
Conclusions: No correlations were found among varus-valgus laxities under static stress in extension, femoral condylar lift-off during walking, or patient-reported outcomes after well-aligned TKA. This study suggests that small variations in coronal laxities do not influence lift-off during walking and the patient-reported outcomes.
Level Of Evidence: IV.
abstract_id: PUBMED:30865864
Coronal and Sagittal Balancing of Total Knee Arthroplasty Old Principles and New Technologies. The number of total knee arthroplasties performed in the United States is growing, and a leading cause of failure is postoperative knee instability from suboptimal coronal or sagittal balancing. This article reviews native knee anatomy as well as several guiding principles of total knee arthroplasty such as limb axis, femoral referencing, and implant constraint. Next, techniques that can be used by the surgeon to achieve ideal sagittal balance and coronal balance are discussed in detail. Finally, due to the growing use of computer and robotic technologies in knee replacement, the impact of advanced technologies on total knee arthroplasty balancing and alignment is reviewed. An in-depth understanding of these topics will enable surgeons to optimize the outcome of their total knee arthroplasty patients.
abstract_id: PUBMED:32025865
Variation in ligamentous laxity in well-functioning total knee arthroplasty is not associated with clinical outcomes or functional ability. Background: Around 20% of revision knee arthroplasty procedures are carried out for a diagnosis of instability. Clinical evaluation of instability is primarily through physical stress testing of knee ligamentous laxity and joint space opening. It is assumed that increased knee ligament laxity is associated with instability of the knee and, by association, reduced physical function. The range of knee ligament laxity in asymptomatic patients with total knee arthroplasty has however not been reported, nor has the association with measures of physical outcomes.
Methods: Patients who reported being happy with the outcomes of TKA and denied any feelings of knee instability were evaluated at routine follow-up clinics. Knee ligamentous stability was evaluated separately by 2 blinded assessors in both the coronal and sagittal planes. Assessors classified the ligamentous stability as 'tight', 'neutral' or 'loose'. Clinical outcome was evaluated by Oxford Knee Score, patient satisfaction metric, timed performance test, range of motion and lower limb power. Analysis of variance was employed to evaluate variables between groups with post hoc pairwise comparisons.
Results: In total, 42 patients were evaluated. Mean time since index surgery was 46 (SD 8) months. In the coronal plane, 11 (26.2%) were categorised as 'tight', 22 (52.4%) as 'neutral' and 9 (21.4%) as 'loose'. In the sagittal plane, 15 (35.7%) were categorised as 'tight', 17 (40.5%) as 'neutral' and 10 (23.8%) as 'loose'. There were no between-group differences in outcomes: Oxford Knee Score, range of motion, lower limb power, timed functional assessment score or in satisfaction response in either plane (p = 0.05).
Conclusions: We found a range of ligamentous laxity in asymptomatic patients satisfied with the outcome of their knee arthroplasty, and no association between knee laxity and physical ability.
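As a sketch of the analysis named in the Methods above (analysis of variance with post hoc pairwise comparisons), the code below simulates Oxford Knee Scores for the coronal-plane group sizes quoted in the Results; the scores are invented, and Tukey's HSD is used only as a representative post hoc test since the abstract does not name one.

    import numpy as np
    from scipy.stats import f_oneway
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    rng = np.random.default_rng(1)
    groups = np.repeat(["tight", "neutral", "loose"], [11, 22, 9])  # coronal-plane counts above
    oxford = rng.normal(40, 5, size=groups.size)                    # simulated Oxford Knee Scores

    # One-way ANOVA across the three laxity categories.
    f_stat, p_val = f_oneway(*(oxford[groups == g] for g in ["tight", "neutral", "loose"]))
    print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.3f}")

    # Post hoc pairwise comparisons.
    print(pairwise_tukeyhsd(oxford, groups))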
abstract_id: PUBMED:29798554
Research progress of a larger flexion gap than extension gap in total knee arthroplasty Objective: To summarize the research progress on a flexion gap larger than the extension gap in total knee arthroplasty (TKA).
Methods: The domestic and international literature on a flexion gap larger than the extension gap in TKA was reviewed and summarized with respect to influencing factors, biomechanical and kinematic features, and clinical results.
Results: Adjusting the relationship between the flexion gap and the extension gap during TKA is one of the key factors in a successful operation. Biomechanical, kinematic, and clinical studies show that a properly larger flexion gap than extension gap can improve both postoperative knee range of motion and patient satisfaction without affecting the stability of the knee joint. However, there are also contrary findings, so the adjustment of the flexion and extension gaps during TKA remains in dispute.
Conclusion: A flexion gap larger than the extension gap in TKA is a new joint-space concept, and its long-term clinical efficacy, surgical technique, and related complications still need further study.
abstract_id: PUBMED:28970125
Medial rather than lateral knee instability correlates with inferior patient satisfaction and knee function after total knee arthroplasty. Background: It is commonly thought that balanced medial and lateral tibiofemoral joint gaps are essential, but the effect of joint laxity on clinical outcome after total knee arthroplasty (TKA) is unclear. It was hypothesised that medial joint laxity correlates with inferior patient satisfaction and knee function, although lateral joint laxity is allowed to a certain degree in TKA.
Methods: This study included 50 knees that underwent primary TKA. Knee laxity was measured with postoperative stress radiographs in flexion and extension, and patient satisfaction and knee function were evaluated by the 2011 Knee Society Knee Scoring System.
Results: In a comparison of medially tight and medially loose knees in flexion, the scores for satisfaction, symptoms, standard activity, and advanced activity were significantly better in medially tight than in medially loose knees (satisfaction: 29.8, 22.2; symptoms: 20.3, 15.9; standard activities: 24.2, 19.1; and advanced activities: 15.3, 8.7, in the tight and loose knees, respectively). Neither lateral joint laxity during knee flexion nor medial joint laxity during knee extension was associated with a poor postoperative clinical outcome, whereas lateral joint laxity and the standard activity score in extension had a moderate positive correlation.
Conclusions: Knees with medial joint laxity during flexion resulted in an inferior postoperative outcome, and lateral joint laxity did not influence patient satisfaction or function. Care should be taken to maintain medial joint stability during the TKA procedure.
abstract_id: PUBMED:24900897
Results of revision surgery and causes of unstable total knee arthroplasty. Background: The aim of this study was to evaluate causes of unstable total knee arthroplasty and results of revision surgery.
Methods: We retrospectively reviewed 24 knees that underwent a revision arthroplasty for unstable total knee arthroplasty. The average follow-up period was 33.8 months. We classified the instability and analyzed the treatment results according to its cause. Stress radiographs, postoperative component position, and joint level were measured. Clinical outcomes were assessed using the Hospital for Special Surgery (HSS) score and range of motion.
Results: Causes of instability included coronal instability with posteromedial polyethylene wear and lateral laxity in 13 knees, coronal instability with posteromedial polyethylene wear in 6 knees, coronal and sagittal instability in 3 knees (including post breakage in 1 knee), global instability in 1 knee, and flexion instability in 1 knee. Mean preoperative/postoperative varus and valgus angles were 5.8°/3.2° (p = 0.713) and 22.5°/5.6° (p = 0.032). Mean postoperative α, β, γ, δ angles were 5.34°, 89.65°, 2.74°, 6.77°. The mean joint level changed from 14.1 mm to 13.6 mm from the fibular head (p = 0.82). The mean HSS score improved from 53.4 to 89.2 (p = 0.04). The average range of motion changed from 123° to 122° (p = 0.82).
Conclusions: Revision total knee arthroplasty with or without a more constrained prosthesis will be a definite solution for an unstable total knee arthroplasty. The solution according to cause is very important and seems to be helpful to avoid unnecessary over-constrained implant selection in revision surgery for total knee instability.
abstract_id: PUBMED:34095401
Trunnion Failure in Revision Total Knee Arthroplasty. In revision total knee arthroplasty, joint kinematics must be maintained amid bone and ligamentous insufficiency. Current modular designs address defects while allowing for intraoperative prosthesis customization through a variety of stem extensions and constraints. Additional constraint improves knee stability while increasing stress at the implant-host interface and modular junction of the implant. This renders the prosthetic stem-condyle junction more prone to fatigue failure. We report 2 cases of prosthetic stem-condyle junction failure in varus-valgus constrained revision total knee arthroplasty.
abstract_id: PUBMED:29656976
The Influence of Postoperative Knee Stability on Patient Satisfaction in Cruciate-Retaining Total Knee Arthroplasty. Background: Although knee stability is well known as an important element for the success of total knee arthroplasty (TKA), the direct relationship between clinical outcomes and knee stability is still unknown. The purpose of this study was to determine if postoperative knee stability and soft-tissue balance affect the functional outcomes and patient satisfaction after cruciate-retaining (CR) TKA.
Methods: Fifty-five patients with varus osteoarthritis of the knee who underwent CR TKA were included in this study, and their postoperative knee stability was assessed by stress radiography at extension and flexion 1 month postoperatively. Timed Up and Go test, patient-derived clinical scores using the 2011 Knee Society Score, and Forgotten Joint Score-12 were also assessed at 1 year postoperatively. The effects of stability parameters on clinical outcomes were analyzed using Spearman's rank correlation.
Results: Medial stability at both knee extension and flexion had significant correlations with the shorter Timed Up and Go test and the higher patient satisfaction. Moreover, lateral laxity at extension was significantly correlated with the better patient satisfaction and Forgotten Joint Score-12. However, these correlation coefficients in this study were low in the range of 0.32-0.51.
Conclusion: Medial stability and lateral laxity play an important role in influencing 1-year postoperative clinical outcomes after CR TKA. However, we should keep in mind that these correlations are weak with coefficients at 0.50 or less and the clinical results are also affected by various other factors.
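The correlations referred to in this abstract are plain Spearman rank correlations between a stability measurement and an outcome score. A hedged sketch with simulated values (the variable names and magnitudes are assumptions, not the 55-knee dataset):

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(2)
    n = 55                                            # number of knees reported above
    medial_laxity_flex = rng.normal(3.0, 1.0, n)      # degrees on stress radiographs, simulated
    satisfaction = 30 - 1.5 * medial_laxity_flex + rng.normal(0, 3, n)  # simulated KSS satisfaction

    rho, p = spearmanr(medial_laxity_flex, satisfaction)
    print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")

A coefficient in the 0.3-0.5 range, as reported above, corresponds to a weak-to-moderate monotonic association.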
abstract_id: PUBMED:26846656
Clinical outcome of increased flexion gap after total knee arthroplasty. Can controlled gap imbalance improve knee flexion? Purpose: Increased range of motion (ROM) while maintaining joint stability is the goal of modern total knee arthroplasty (TKA). A biomechanical study has shown that small increases in flexion gap result in decreased tibiofemoral force beyond 90° flexion. The purpose of this paper was to investigate clinical implications of controlled increased flexion gap.
Methods: Four hundred and four TKAs were allocated into one of two groups and analysed retrospectively. In the first group (n = 352), flexion gap exceeded extension gap by 2.5 mm, while in the second group (n = 52) flexion gap was equal to the extension gap. The procedures were performed from 2008 to 2012. The patients were reviewed 12 months postoperatively. Objective clinical results were assessed for ROM, mediolateral and sagittal stability. Patient-reported outcome measures were the WOMAC score and the Forgotten Joint Score (FJS-12).
Results: After categorizing postoperative flexion into three groups (poor < 90°, satisfactory 91°-119°, good ≥ 120°) significantly more patients in group 1 achieved satisfactory or good ROM (p = 0.006). Group 1 also showed a significantly higher mean FJS-12 (group 1: 73, group 2: 61, p = 0.02). The mean WOMAC score was 11 in the first and 14 in the second group (n.s.). Increase in flexion gap did not influence knee stability.
Conclusions: The clinical relevance of this study is that a controlled flexion gap increase of 2.5 mm may have a positive effect on postoperative flexion and patient satisfaction after TKA. Neither knee stability in the coronal and sagittal planes nor complications were influenced by a controlled increase in flexion gap.
Level Of Evidence: III.
abstract_id: PUBMED:29846753
Increased valgus laxity in flexion with greater tibial resection depth following total knee arthroplasty. Purpose: Soft tissue balancing is of central importance to outcome following total knee arthroplasty (TKA). However, there are lack of data analysing the effect of tibial bone cut thickness on valgus laxity. A cadaveric study was undertaken to assess the biomechanical consequences of tibial resection depth on through range knee joint valgus stability. We aimed to establish a maximum tibial resection depth, beyond which medial collateral ligament balancing becomes challenging, and a constrained implant should be considered.
Methods: Eleven cadaveric specimens were included for analysis. The biomechanical effects of increasing tibial resection were studied, with bone cuts made at 6, 10, 14, 18 and 24 mm from the lateral tibial articular surface. A computer navigation system was used to perform the tibial resection and to measure the valgus laxity resulting from a torque of 10 Nm. Measurements were taken in four knee positions: 0° or extension, 30°, 60° and 90° of flexion. Intra-observer reliability was assessed. A minimum sample size of eight cadavers was necessary. Statistical analysis was performed using a nonparametric Spearman's ranking correlation matrix at the different stages: in extension, at 30°, 60° and 90° of knee flexion. Significance was set at p < 0.05.
Results: There was no macroscopic injury to the dMCL or sMCL in any of the specimens during tibial resection. There was no significant correlation found between the degree of valgus laxity and the thickness of the tibial cut with the knee in extension. There was a statistically significant correlation between valgus laxity and the thickness of the tibial cut in all other knee flexion positions: 30° (p < 0.0001), 60° (p < 0.001) and 90° (p < 0.0001). We identified greater than 5° of valgus laxity, at 90° of knee flexion, after a tibial resection of 14 mm.
Conclusion: Increased tibial resection depth is associated with significantly greater valgus laxity when tested in positions from 30° to 90° of flexion, despite stability in extension. Greater than 5° of laxity was identified with a tibial resection of 14 mm. When a tibial bone cut of 14 mm or greater is necessary, as may occur with severe preoperative coronal plane deformity, it is recommended to consider the use of a constrained knee prosthesis.
Answer: Knee stability in the coronal plane in extension does not appear to significantly affect function and outcome after total knee arthroplasty (TKA). Studies have shown that varus-valgus laxities under static stress in extension, as well as femoral condylar lift-off during walking, do not correlate with patient-reported outcomes after well-aligned TKA (PUBMED:24923689). Additionally, variation in ligamentous laxity in well-functioning TKA is not associated with clinical outcomes or functional ability, suggesting that a range of knee ligament laxity can be present in asymptomatic patients who are satisfied with their TKA (PUBMED:32025865).
However, medial knee instability has been correlated with inferior patient satisfaction and knee function after TKA, indicating that maintaining medial joint stability during the TKA procedure is important (PUBMED:28970125). On the other hand, lateral knee instability does not seem to influence patient satisfaction or function to the same extent (PUBMED:28970125).
In the context of revision surgery for unstable TKA, addressing the cause of instability and selecting an appropriate level of constraint in the prosthesis is crucial for improving clinical outcomes (PUBMED:24900897). Moreover, in cruciate-retaining TKA, postoperative knee stability and soft-tissue balance have been shown to affect functional outcomes and patient satisfaction, although the correlations are weak (PUBMED:29656976).
Overall, while knee stability in the coronal plane in extension is a factor in TKA, it is not the sole determinant of function and outcome. Other factors such as medial stability, soft-tissue balance, and appropriate surgical technique also play significant roles in the success of TKA (PUBMED:24923689, PUBMED:32025865, PUBMED:28970125, PUBMED:24900897, PUBMED:29656976). |
Instruction: Rising incidence of intrahepatic cholangiocarcinoma in the United States: a true increase?
Abstracts:
abstract_id: PUBMED:15123362
Rising incidence of intrahepatic cholangiocarcinoma in the United States: a true increase? Background/aims: The incidence of intrahepatic cholangiocarcinoma (ICC) has been reported to be increasing in the USA. The aim of this study is to examine whether this is a true increase or a reflection of improved detection or reclassification.
Methods: Using data from the Surveillance Epidemiology and End Results (SEER) program, incidence rates for ICC between 1975 and 1999 were calculated. We also calculated the proportions of cases with each tumor stage, microscopically confirmed cases, and the survival rates.
Results: A total of 2864 patients with ICC were identified. The incidence of ICC increased by 165% during the study period. Most of this increase occurred after 1985. There were no significant changes in the proportion of patients with unstaged cancer, localized cancer, microscopic confirmation, or with tumor size <5 cm during the period of the most significant increase. The 1-year survival rate increased significantly from 15.8% in 1975-1979 to 26.3% in 1995-1999, while 5-year survival rate remained essentially the same (2.6 vs. 3.5%).
Conclusions: The incidence of ICC continues to rise in the USA. The stable proportions over time of patients with early stage disease, unstaged disease, tumor size <5 cm, and microscopic confirmation suggest a true increase of ICC.
abstract_id: PUBMED:11391522
Increasing incidence and mortality of primary intrahepatic cholangiocarcinoma in the United States. Clinical observations suggest a recent increase in intrahepatic biliary tract malignancies. Thus, our aim was to determine recent trends in the epidemiology of intrahepatic cholangiocarcinoma in the United States. Reported data from the Surveillance, Epidemiology, and End Results (SEER) program and the United States Vital Statistics databases were analyzed to determine the incidence, mortality, and survival rates of primary intrahepatic cholangiocarcinoma. Between 1973 and 1997, the incidence and mortality rates from intrahepatic cholangiocarcinoma markedly increased, with an estimated annual percent change (EAPC) of 9.11% (95% CI, 7.46 to 10.78) and 9.44% (95% CI, 8.46 to 10.41), respectively. The age-adjusted mortality rate per 100,000 persons for whites increased from 0.14 for the period 1975-1979 to 0.65 for the period 1993-1997, and that for blacks increased from 0.15 to 0.58 over the same period. The increase in mortality was similar across all age groups above age 45. The relative 1- and 2-year survival rates following diagnosis from 1989 to 1996 were 24.5% and 12.8%, respectively. In conclusion, there has been a marked increase in the incidence and mortality from intrahepatic cholangiocarcinoma in the United States in recent years. This tumor continues to be associated with a poor prognosis.
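The estimated annual percent change (EAPC) quoted here is conventionally obtained by regressing the natural logarithm of the annual age-adjusted rate on calendar year and transforming the slope, EAPC = 100·(exp(beta) − 1). The sketch below uses synthetic rates, not the SEER series behind the 9.11% figure:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    years = np.arange(1973, 1998)
    # Synthetic age-adjusted rates per 100,000 with roughly 9% annual growth plus noise.
    rates = 0.10 * np.exp(0.09 * (years - 1973)) * rng.lognormal(0.0, 0.05, years.size)

    X = sm.add_constant(years)
    fit = sm.OLS(np.log(rates), X).fit()
    slope = fit.params[1]
    lo, hi = fit.conf_int()[1]
    eapc = 100 * (np.exp(slope) - 1)
    print(f"EAPC = {eapc:.2f}% (95% CI {100*(np.exp(lo) - 1):.2f} to {100*(np.exp(hi) - 1):.2f})")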
abstract_id: PUBMED:29893702
Racial, Ethnic, and Age Disparities in Incidence and Survival of Intrahepatic Cholangiocarcinoma in the United States; 1995-2014. Introduction And Aim: Despite reports of increased incidence of intrahepatic cholangiocarcinoma (iCCA) in the United States, the impact of age or influences of race and ethnicity are not clear. Disparities in iCCA outcomes across various population subgroups also are not readily recognized due to the rarity of this cancer. We examined ethnic, race, age, and gender variations in iCCA incidence and survival using data from the Surveillance, Epidemiology, and End Results Program (1995-2014).
Material And Methods: We assessed age-adjusted incidence rates, average annual percentage change in incidence, and hazard ratios (HRs) with 95% confidence intervals (CIs) for all-cause and iCCA-specific mortality.
Results: Overall, 11,127 cases of iCCA were identified, with an age-adjusted incidence rate of 0.92 per 100,000. The incidence rate increased from 0.49 per 100,000 in 1995 to 1.49 per 100,000 in 2014, with an average annual rate of increase of 5.49%. The iCCA incidence rate was higher among persons age 45 years or older than those younger than 45 years (1.71 vs. 0.07 per 100,000), among males than females (0.97 vs. 0.88 per 100,000) and among Hispanics than non-Hispanics (1.18 vs. 0.89 per 100,000). Compared to non-Hispanics, Hispanics had poorer 5-year all-cause mortality (HR = 1.11, 95%CI: 1.05-1.19) and poorer iCCA-specific mortality (HR = 1.15, 95%CI: 1.07-1.24). Survival rates were also poor for individuals age 45 years or older, men, and Blacks and American Indians/Alaska Natives.
Conclusion: The results demonstrate ethnic, race, age and gender disparities in iCCA incidence and survival, and confirm continued increase in iCCA incidence in the United States.
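The hazard ratios with 95% confidence intervals reported above for all-cause and iCCA-specific mortality are the kind of estimate produced by a Cox proportional hazards model. The sketch below shows the general form only, using simulated follow-up data, a single hypothetical covariate, and the lifelines library; it is not the study's actual model or data.

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(4)
    n = 2000
    hispanic = rng.integers(0, 2, n)                          # 1 = Hispanic, 0 = non-Hispanic (simulated)
    time = rng.exponential(24, n) * np.exp(-0.12 * hispanic)  # months of follow-up, simulated
    event = rng.binomial(1, 0.8, n)                           # 1 = death observed, 0 = censored

    df = pd.DataFrame({"time": time, "event": event, "hispanic": hispanic})
    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event")
    cph.print_summary()   # the exp(coef) column is the hazard ratio with its 95% CI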
abstract_id: PUBMED:38398075
Projected Incidence of Hepatobiliary Cancers and Trends Based on Age, Race, and Gender in the United States. Background: Identifying the projected incidence of hepatobiliary cancers and recognizing patient cohorts at increased risk can help develop targeted interventions and resource allocation. The expected incidence of subtypes of hepatobiliary cancers in different age groups, races, and genders remains unknown.
Methods: Historical epidemiological data from the Surveillance, Epidemiology, and End Results (SEER) database were used to project the future incidence of hepatobiliary malignancies in the United States and identify trends by age, race, and gender. Patients ≥18 years of age diagnosed with a hepatobiliary malignancy between 2001 and 2017 were included. The US Census Bureau 2017 National Population Projections provided the projected population from 2017 to 2029. An Age-Period-Cohort forecasting model was used to estimate future birth-cohort-specific incidence. All analyses were completed using R Statistical Software.
Results: We included 110381 historical patients diagnosed with a hepatobiliary malignancy between 2001 and 2017 with the following subtypes: hepatocellular cancer (HCC) (68%), intrahepatic cholangiocarcinoma (iCCA) (11.5%), gallbladder cancer (GC) (8%), extrahepatic cholangiocarcinoma (eCCA) (7.6%), and ampullary cancer (AC) (4%). Our models predict the incidence of HCC to double (2001 to 2029) from 4.5 to 9.03 per 100,000, with the most significant increase anticipated in patients 70-79 years of age. In contrast, incidence is expected to continue to decline among the Asian population. Incidence of iCCA is projected to increase, especially in the white population, with rates in 2029 double those in 2001 (2.13 vs. 0.88 per 100,000, respectively; p < 0.001). The incidence of GC among the black population is expected to increase. The incidence of eCCA is expected to significantly increase, especially among the Hispanic population, while that of AC will remain stable.
Discussion: The overall incidence of hepatobiliary malignancies is expected to increase in the coming years, with certain groups at increased risk. These findings may help with resource allocation when considering screening, treatment, and research in the coming years.
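Full age-period-cohort forecasting is more involved than a short example allows, but the basic step of projecting a rate forward can be illustrated with a simple log-linear extrapolation; this is a deliberate simplification of the modelling described above, and the historical rates are invented rather than SEER values.

    import numpy as np
    import statsmodels.api as sm

    hist_years = np.arange(2001, 2018)
    hist_rates = 0.88 * np.exp(0.05 * (hist_years - 2001))   # invented iCCA-like rates per 100,000

    X = sm.add_constant(hist_years)
    fit = sm.OLS(np.log(hist_rates), X).fit()

    future_years = np.arange(2018, 2030)
    projected = np.exp(fit.predict(sm.add_constant(future_years)))
    print(dict(zip(future_years.tolist(), projected.round(2).tolist())))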
abstract_id: PUBMED:35972334
Temporal Changes in Cholangiocarcinoma Incidence and Mortality in the United States from 2001 to 2017. Background: Previous studies report increasing cholangiocarcinoma (CCA) incidence up to 2015. This contemporary retrospective analysis of CCA incidence and mortality in the US from 2001-2017 assessed whether CCA incidence continued to increase beyond 2015.
Patients And Methods: Patients (≥18 years) with CCA were identified in the National Cancer Institute Surveillance, Epidemiology, and End Results 18 cancer registry (International Classification of Disease for Oncology [ICD-O]-3 codes: intrahepatic [iCCA], C221; extrahepatic [eCCA], C240, C241, C249). Cancer of unknown primary (CUP) cases were identified (ICD-O-3: C809; 8140/2, 8140/3, 8141/3, 8143/3, 8147/3) because of potential misclassification as iCCA.
Results: A total of 40,030 CCA cases (iCCA, n=13,174; eCCA, n=26,821; iCCA and eCCA, n=35) and 32,980 CUP cases were analyzed. From 2001-2017, CCA, iCCA, and eCCA incidence (per 100 000 person-years) increased 43.8% (3.08 to 4.43), 148.8% (0.80 to 1.99), and 7.5% (2.28 to 2.45), respectively. In contrast, CUP incidence decreased 54.4% (4.65 to 2.12). CCA incidence increased with age, although the greatest relative increase occurred among younger patients (18-44 years, 81.0%). Median overall survival from diagnosis was 8, 6, 9, and 2 months for CCA, iCCA, eCCA, and CUP, respectively. From 2001-2016, the annual mortality rate declined for iCCA (57.1% to 41.2%) and generally remained stable for eCCA (40.9% to 37.0%) and for CUP (64.3% to 68.6%).
Conclusions: CCA incidence continued to increase from 2001-2017, with greater increase in iCCA versus eCCA, whereas CUP incidence decreased. The divergent CUP versus iCCA incidence trends, with overall greater absolute change in iCCA incidence, provide evidence for a true increase in iCCA incidence that may not be wholly attributable to CUP reclassification.
abstract_id: PUBMED:32294255
Have incidence rates of liver cancer peaked in the United States? Background: Liver cancer incidence has increased for several decades in the United States. Recently, reports have suggested that rates of hepatocellular carcinoma (HCC), the dominant form of liver cancer, had declined in certain groups. However, to the authors' knowledge, the most recent histology-specific liver cancer rates have not been reported to date.
Methods: The authors examined the incidence of HCC and intrahepatic cholangiocarcinoma (ICC) from 1992 through 2016 using data from the Surveillance, Epidemiology, and End Results registries. Age-standardized incidence rates were calculated by histology, sex, race and/or ethnicity, and age. Trends were analyzed using the National Cancer Institute's Joinpoint Regression Program to estimate the annual percent change.
Results: Between 2011 and 2016, HCC rates significantly declined (annual percent change, -1.9%), with more prominent declines noted among males, Asian/Pacific Islanders, and individuals aged <50 years. Conversely, ICC rates increased from 2002 through 2016.
Conclusions: Declining HCC rates may persist due to improved treatment of the hepatitis C virus and/or competing causes of mortality among individuals with fatty liver disease.
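Several abstracts in this set report trends as an annual percent change (APC) estimated with joinpoint regression. For reference only, the standard single-segment calculation regresses ln(rate) on calendar year and reports 100*(exp(slope) - 1); the sketch below applies it to synthetic rates and omits the change-point search that the Joinpoint software performs.

import numpy as np

years = np.arange(2002, 2017)
rates = 0.9 * 1.03 ** (years - 2002)   # hypothetical ICC rates per 100,000

# APC = 100 * (exp(slope of ln(rate) on year) - 1)
slope = np.polyfit(years, np.log(rates), 1)[0]
apc = 100 * (np.exp(slope) - 1)
print(round(apc, 2))                   # 3.0 for this synthetic series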
abstract_id: PUBMED:16788161
Impact of classification of hilar cholangiocarcinomas (Klatskin tumors) on the incidence of intra- and extrahepatic cholangiocarcinoma in the United States. Cholangiocarcinomas are topographically categorized as intrahepatic or extrahepatic by the International Classification of Diseases for Oncology (ICD-O). Although hilar cholangiocarcinomas (Klatskin tumors) are extrahepatic cholangiocarcinomas, the second edition of the ICD-O (ICD-O-2) assigned them a histology code 8162/3, Klatskin, which was cross-referenced to intrahepatic cholangiocarcinoma. Recent studies in the United States that included this code (8162/3, Klatskin) with intrahepatic cholangiocarcinoma reported an increasing incidence of intrahepatic cholangiocarcinoma and a decreasing incidence of extrahepatic cholangiocarcinoma. To investigate the impact of this misclassification on site-specific cholangiocarcinoma incidence rates, we calculated annual percent changes (APCs) with data from the Surveillance, Epidemiology, and End Results (SEER) program using a Poisson regression model that was age-adjusted to the year 2000 U.S. population. All statistical tests were two-sided. During 1992-2000, when SEER used ICD-O-2, 1710 intrahepatic cholangiocarcinomas, 1371 extrahepatic cholangiocarcinomas, and 269 hilar cholangiocarcinomas identified by code 8162/3, Klatskin were diagnosed. Ninety-one percent (246 of 269) of the hilar cholangiocarcinomas were incorrectly coded as intrahepatic cholangiocarcinomas, resulting in an overestimation of intrahepatic cholangiocarcinoma incidence by 13% and underestimation of extrahepatic cholangiocarcinomas incidence by 15%. However, even after the exclusion of tumors that were coded to the histology code 8162/3, Klatskin, age-adjusted annual intrahepatic cholangiocarcinoma incidence increased during this period (APC = 4%, 95% confidence interval = 2% to 6%, P<.001).
abstract_id: PUBMED:32359831
Epidemiology of Cholangiocarcinoma; United States Incidence and Mortality Trends. Background: Cholangiocarcinoma is an aggressive malignancy with few available studies assessing incidence and mortality. In this study, we aim to investigate trends of incidence and mortality in a large nation-wide epidemiologic study.
Methods: We used the SEER 18 database to study cholangiocarcinoma cases in the US during 2000-2015. Incidence and mortality rates of cholangiocarcinoma were calculated by race and expressed per 1,000,000 person-years. Annual percent change (APC) was calculated using joinpoint regression software.
Results: We reviewed 16,189 patients with cholangiocarcinoma, of which 64.4% were intrahepatic. Most patients were white (78.4%), male (51.3%), and older than 65 years (63%). A total of 13,121 patients died of cholangiocarcinoma during the study period. Cholangiocarcinoma incidence and mortality were 11.977 and 10.295 per 1,000,000 person-years, respectively, and both were higher among Asians, males, and individuals older than 65 years. Incidence rates increased significantly over the study period (APC=5.063%, P<.001); mortality also increased significantly over the study period (APC=5.964%, P<.001) but decreased after 2013 (APC=-25.029, P<.001).
Conclusion: The incidence and mortality of cholangiocarcinoma were increasing in the study period with significant observed disparities based on race and gender.
abstract_id: PUBMED:29632056
Trends in Incidence and Factors Affecting Survival of Patients With Cholangiocarcinoma in the United States. Background: Cholangiocarcinoma (CCA) includes cancers arising from the intrahepatic and extrahepatic bile ducts. The etiology and pathogenesis of CCA remain poorly understood. This is the first study investigating both incidence patterns of CCA from 1973 through 2012 and demographic, clinical, and treatment variables affecting survival of patients with CCA. Patients and Methods: Using the SEER database, age-adjusted incidence rates were evaluated from 1973-2012 using SEER*Stat software. A retrospective cohort of 26,994 patients diagnosed with CCA from 1973-2008 was identified for survival analysis. Cox proportional hazards models were used to perform multivariate survival analysis. Results: Overall incidence of CCA increased by 65% from 1973-2012. Extrahepatic CCA (ECC) remained more common than intrahepatic CCA (ICC), whereas the incidence rates for ICC increased by 350% compared with a 20% increase seen with ECC. Men belonging to non-African American and non-Caucasian ethnicities had the highest incidence rates of CCA. This trend persisted throughout the study period, although African Americans and Caucasians saw 50% and 59% increases in incidence rates, respectively, compared with a 9% increase among other races. Median overall survival (OS) was 8 months in patients with ECC compared with 4 months in those with ICC. Our survival analysis found Hispanic women to have the best 5-year survival outcome (P<.0001). OS diminished with age (P<.0001), and ECC had better survival outcomes compared with ICC (P<.0001). Patients who were married, were nonsmokers, belonged to a higher income class, and underwent surgery had better survival outcomes compared with others (P<.0001). Conclusions: This is the most up-to-date study of CCA from the SEER registry that shows temporal patterns of increasing incidence of CCA across different races, sexes, and ethnicities. We identified age, sex, race, marital status, income, smoking status, anatomic location of CCA, tumor grade, tumor stage, radiation, and surgery as independent prognostic factors for OS in patients with CCA.
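The survival analysis above uses Cox proportional hazards models. As a rough illustration only, and not the authors' analysis, the sketch below fits such a model with the lifelines package on a tiny synthetic data set; the column names and values are invented.

import pandas as pd
from lifelines import CoxPHFitter

# Synthetic survival data: follow-up in months, death indicator, two covariates.
df = pd.DataFrame({
    "months": [8, 4, 12, 6, 20, 3, 15, 9],
    "died":   [1, 1, 0, 1, 0, 1, 0, 1],
    "age":    [70, 65, 75, 58, 60, 80, 72, 55],
    "intrahepatic": [0, 1, 0, 1, 1, 1, 0, 0],   # 1 = ICC, 0 = ECC
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="died")
cph.print_summary()   # hazard ratios (exp(coef)) for age and tumour site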
abstract_id: PUBMED:23886131
Rising incidence of primary liver cancer in Brunei Darussalam. Background: Primary liver cancer (PLC) is the fifth most common malignancy worldwide and is still associated with high mortality. Hepatocellular carcinoma (HCC) and cholangiocarcinoma are the two most common PLCs, and their incidence varies across regions. Currently there are no published data available on the incidence of PLC in Brunei Darussalam.
Materials And Methods: All proven PLCs between 2000 and 2009 were identified from the National Cancer Registry and reviewed. Metastatic diseases were excluded. A total of 123 cases (male 65.8%, female 34.2%) were identified and their data collected for calculation of the age standardised rate (ASR).
Results: The most common type of PLC was HCC (87.8%), followed by cholangiocarcinoma (10.6%); there were two cases of hepatoblastoma. The mean age at diagnosis was 63.2 years. The overall ASR of PLC was 8.2/100,000, increasing from 4.5/100,000 population in 2000 to 11.4/100,000 population in 2009. Rates were higher among males (12.0/100,000) than females (4.7/100,000). Among the ethnic groups, the Chinese had the highest rates (13.1/100,000 overall, rising from none recorded in 2000 to 30.3/100,000 in 2009), compared with the Malays (8.5/100,000 overall, increasing from 4.5/100,000 in 2000 to 12.3/100,000 in 2009) and the indigenous groups. The incidence increased after the age of 50 and was highest in the 75-79 age group. An increase was seen for HCC but not for cholangiocarcinoma.
Conclusions: The most common type of PLC is HCC, and the annual incidence of PLC is increasing in Brunei Darussalam, with rates higher in males and in the Chinese population.
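The age standardised rate (ASR) reported above is a directly standardised rate: each age-specific rate is weighted by the proportion of a standard population in that age band and the products are summed. A minimal sketch with invented counts, person-years and weights (not the Brunei registry figures):

# Hypothetical age-specific data and standard-population weights.
age_bands    = ["0-49", "50-64", "65-74", "75+"]
cases        = [5, 30, 45, 43]
person_years = [300_000, 80_000, 30_000, 15_000]
std_weights  = [0.72, 0.17, 0.07, 0.04]   # illustrative standard-population proportions

# Weighted sum of age-specific rates, expressed per 100,000 population.
asr = sum(w * c / py for w, c, py in zip(std_weights, cases, person_years)) * 100_000
print(round(asr, 1))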
Answer: The rising incidence of intrahepatic cholangiocarcinoma (ICC) in the United States appears to be a true increase rather than solely a reflection of improved detection or reclassification. Several studies have analyzed data from the Surveillance, Epidemiology, and End Results (SEER) program and other databases to assess trends in ICC incidence and mortality.
One study found that the incidence of ICC increased by 165% between 1975 and 1999, with most of the increase occurring after 1985. The stable proportions over time of patients with early-stage disease, unstaged disease, tumor size <5 cm, and microscopic confirmation suggest a true increase in ICC incidence (PUBMED:15123362). Another study reported a marked increase in the incidence and mortality from ICC between 1973 and 1997, with the incidence and mortality rates increasing significantly during this period (PUBMED:11391522).
Further analysis of SEER data from 1995 to 2014 showed that the incidence rate of ICC increased twofold, with significant disparities in incidence and survival based on ethnicity, race, age, and gender (PUBMED:29893702). Projections based on historical epidemiological data indicate that the incidence of ICC is expected to continue to increase, particularly in the white population, with rates in 2029 projected to be double those in 2001 (PUBMED:38398075).
A contemporary retrospective analysis from 2001 to 2017 confirmed that ICC incidence continued to increase, with a greater increase in ICC versus extrahepatic cholangiocarcinoma (eCCA), while the incidence of cancer of unknown primary (CUP) decreased. This divergence in trends provides evidence for a true increase in ICC incidence that may not be wholly attributable to CUP reclassification (PUBMED:35972334).
Additionally, while hepatocellular carcinoma (HCC) rates have shown signs of decline in certain groups, ICC rates have continued to increase from 2002 through 2016 (PUBMED:32294255). The misclassification of hilar cholangiocarcinomas (Klatskin tumors) as intrahepatic cholangiocarcinomas in the past has been corrected, and even after excluding these misclassified cases, the incidence of ICC still showed an increase (PUBMED:16788161).
Overall, the evidence from multiple studies indicates that the incidence of ICC in the United States is truly rising, with various factors such as age, race, and gender affecting the incidence and survival rates of patients with ICC (PUBMED:32359831, PUBMED:29632056). |
Instruction: Do survivors of acute neurologic injury remember their stay in the neuroscience intensive care unit?
Abstracts:
abstract_id: PUBMED:23553251
Do survivors of acute neurologic injury remember their stay in the neuroscience intensive care unit? Background: Patients in medical, surgical, and trauma intensive care units (ICUs) are at risk for later development of symptoms of post-traumatic stress disorder (PTSD). Because acute brain injury can impair recall, we sought to show that patients undergoing prolonged neuroscience ICU admission have limited memory of their ICU stay and thus are less likely to develop symptoms of PTSD.
Methods: We surveyed patients >18 years admitted for 10 days or more to our neuroscience ICU over a 10-year period.
Results: The survey response rate was 50.5% (47/93). Forty percent (19/47) of respondents presented with coma. Recall of details of the ICU admission was limited. Fewer than 10% of patients who required mechanical ventilation recalled being on a ventilator. Only five patients (11%) had responses suggestive of possible post-traumatic stress syndrome. The most commonly experienced symptoms following discharge were difficulty sleeping, difficulty with concentration, and memory loss.
Conclusion: Patients requiring prolonged neuroscience ICU admission do not appear to be traumatized by their ICU stay.
abstract_id: PUBMED:30682351
Acute Neurologic Injury in Children Admitted to the Cardiac Intensive Care Unit. Background: Children with acquired and congenital heart disease both have low mortality but an increased risk of neurologic morbidity that is multifactorial. Our hypothesis was that acute neurologic injuries contribute to mortality in such children and are an important cause of death.
Methods: All admissions to the pediatric cardiac intensive care unit (CICU) from January 2011 through January 2015 were retrospectively reviewed. Patients were assessed for any acute neurologic events (ANEs) during admission, as defined by radiologic findings or seizures documented on an electroencephalogram.
Results: Of the 1,573 children admitted to the CICU, the incidence of ANEs was 8.6%. Mortality of the ANE group was 16.3% compared with 1.5% for those who did not have an ANE. The odds ratio for death with ANEs was 8.55 (95% confidence interval, 4.56 to 16.03). Patients with ANEs had a longer hospital length of stay than those without ANEs (41.4 ± 4 vs 14.2 ± 0.6 days; p < 0.001). Need for extracorporeal membrane oxygenation, previous cardiac arrest, and prematurity were independently associated with the presence of an ANE.
Conclusions: Neurologic injuries are common in pediatric CICUs and are associated with an increase in mortality and hospital length of stay. Children admitted to the CICU are likely to benefit from improved surveillance and neuroprotective strategies to prevent neurologic death.
abstract_id: PUBMED:20124891
Serum creatinine as stratified in the RIFLE score for acute kidney injury is associated with mortality and length of stay for children in the pediatric intensive care unit. Objective: To evaluate the ability of the RIFLE criteria to characterize acute kidney injury in critically ill children.
Design: Retrospective analysis of prospectively collected clinical data.
Setting: Multidisciplinary, tertiary care, 20-bed pediatric intensive care unit.
Patients: All 3396 admissions between July 2003 and March 2007.
Interventions: None.
Measurements And Main Results: A RIFLE score was calculated for each patient based on percent change of serum creatinine from baseline (risk = serum creatinine x1.5; injury = serum creatinine x2; failure = serum creatinine x3). Primary outcome measures were mortality and intensive care unit length of stay. Logistic and linear regressions were performed to control for potential confounders and determine the association between RIFLE score and mortality and length of stay, respectively. One hundred ninety-four (5.7%) patients had some degree of acute kidney injury at the time of admission, and 339 (10%) patients had acute kidney injury develop during the pediatric intensive care unit course. Almost half of all patients with acute kidney injury had their maximum RIFLE score within 24 hrs of intensive care unit admission, and approximately 75% achieved their maximum RIFLE score by the seventh intensive care unit day. After regression analysis, any acute kidney injury on admission and any development of or worsening of acute kidney injury during the pediatric intensive care unit stay were independently associated with increased mortality, with the odds of mortality increasing with each grade increase in RIFLE score (p < .01). Patients with acute kidney injury at the time of admission had a length of stay twice that of those with normal renal function, and those who had any acute kidney injury develop during the pediatric intensive care unit course had a four-fold increase in pediatric intensive care unit length of stay. Also, other than being admitted with a RIFLE risk score, an independent relationship between any acute kidney injury at the time of pediatric intensive care unit admission, any acute kidney injury present during the pediatric intensive care unit course, or any worsening RIFLE scores during the pediatric intensive care unit course and increased pediatric intensive care unit length of stay was identified after controlling for the same high-risk covariates (p < .01).
Conclusions: The RIFLE criteria serve well to describe acute kidney injury in critically ill pediatric patients.
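The creatinine arm of the RIFLE score described above is a simple fold-increase rule over the patient's baseline creatinine (Risk x1.5, Injury x2, Failure x3). A minimal helper capturing only that rule might look as follows; the function name is ours, and the full RIFLE criteria also include urine-output and GFR components not shown here.

def rifle_stage(baseline_creatinine, current_creatinine):
    # Stage AKI from the fold-increase of serum creatinine over baseline,
    # using the thresholds quoted in the abstract (x1.5 / x2 / x3).
    ratio = current_creatinine / baseline_creatinine
    if ratio >= 3.0:
        return "Failure"
    if ratio >= 2.0:
        return "Injury"
    if ratio >= 1.5:
        return "Risk"
    return "No AKI"

print(rifle_stage(0.4, 1.3))   # "Failure" (3.25-fold increase)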
abstract_id: PUBMED:34816755
Quantitative Electroencephalography (EEG) Predicting Acute Neurologic Deterioration in the Pediatric Intensive Care Unit: A Case Series. Introduction: Continuous neurologic assessment in the pediatric intensive care unit is challenging. Current electroencephalography (EEG) guidelines support monitoring status epilepticus, vasospasm detection, and cardiac arrest prognostication, but the scope of brain dysfunction in critically ill patients is larger. We explore quantitative EEG in pediatric intensive care unit patients with neurologic emergencies to identify quantitative EEG changes preceding clinical detection. Methods: From 2017 to 2020, we identified pediatric intensive care unit patients at a single quaternary children's hospital with EEG recording near or during acute neurologic deterioration. Quantitative EEG analysis was performed using Persyst P14 (Persyst Development Corporation). Included features were fast Fourier transform, asymmetry, and rhythmicity spectrograms, "from-baseline" patient-specific versions of the above features, and quantitative suppression ratio. Timing of quantitative EEG changes was determined by expert review and prespecified quantitative EEG alert thresholds. Clinical detection of neurologic deterioration was defined pre hoc and determined through electronic medical record documentation of examination change or intervention. Results: Ten patients were identified, age 23 months to 27 years, and 50% were female. Of 10 patients, 6 died, 1 had new morbidity, and 3 had good recovery; the most common cause of death was cerebral edema and herniation. The fastest changes were on "from-baseline" fast Fourier transform spectrograms, whereas persistent changes on asymmetry spectrograms and suppression ratio were most associated with morbidity and mortality. Median time from first quantitative EEG change to clinical detection was 332 minutes (interquartile range: 201-456 minutes). Conclusion: Quantitative EEG is potentially useful in earlier detection of neurologic deterioration in critically ill pediatric intensive care unit patients. Further work is required to quantify the predictive value, measure improvement in outcome, and automate the process.
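The quantitative EEG features named above (FFT power spectrograms, suppression ratio) are computed by proprietary Persyst software. As a generic, greatly simplified illustration only, the sketch below derives a power spectrogram and a crude suppression ratio from a synthetic single-channel signal with SciPy; the sampling rate, epoch length and amplitude threshold are arbitrary assumptions.

import numpy as np
from scipy.signal import spectrogram

fs = 256                                  # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)              # 60 s of synthetic "EEG"
eeg = np.sin(2 * np.pi * 10 * t) * (t < 30) + 0.005 * np.random.randn(t.size)

# FFT power spectrogram with 2-second windows.
freqs, times, power = spectrogram(eeg, fs=fs, nperseg=2 * fs)

# Crude suppression ratio: fraction of 1-second epochs whose peak-to-peak
# amplitude stays below an arbitrary threshold.
epochs = eeg[: (eeg.size // fs) * fs].reshape(-1, fs)
suppression_ratio = float(np.mean(np.ptp(epochs, axis=1) < 0.1))
print(round(suppression_ratio, 2))        # ~0.5 for this half-suppressed signal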
abstract_id: PUBMED:34745699
A Study of Acute Kidney Injury in a Tertiary Care Pediatric Intensive Care Unit. The objective of this study was to calculate the incidence, severity, and risk factors for acute kidney injury (AKI) in a tertiary care pediatric intensive care unit (PICU). Also, to assess the impact of AKI and its varying severity on mortality and length of hospital and PICU stays. A prospective observational study was performed in children between 1 month and 12 years of age admitted to the PICU between July 1, 2013, and July 31, 2014 (13 months). The change in creatinine clearance was considered to diagnose and stage AKI according to pediatric risk, injury, failure, loss, and end-stage renal disease criteria. The risk factors for AKI and its impact on PICU stay, hospital stay, and mortality were evaluated. Of the total 220 patients enrolled in the study, 161 (73.2%) developed AKI, and 59 cases without AKI served as the "no AKI" (control) group. Majority (57.1%) of children with AKI had Failure grade of AKI, whereas 26.1% had Risk grade and 16.8% had Injury grade of AKI. Infancy ( p = 0.000), hypovolemia ( p = 0.005), shock ( p = 0.008), and sepsis ( p = 0.022) were found to be significant risk factors for AKI. Mortality, PICU stay, and hospital stay were comparable in children with and without AKI as well as between the various grades of renal injury (i.e., Failure, Risk, and Injury ). An exceedingly high incidence of AKI, especially of the severe Failure grade was observed in critically ill children. Infancy and frequent PICU occurrences such as sepsis, hypovolemia, and shock predisposed to AKI.
abstract_id: PUBMED:30173167
Nonadherence to Geriatric-Focused Practices in Older Intensive Care Unit Survivors. Background: Older adults account for more than half of all admissions to intensive care units; most remain alive at 1 year, but with long-term sequelae.
Objective: To explore geriatric-focused practices and associated outcomes in older intensive care survivors.
Methods: In a 1-year, retrospective, cohort study of patients admitted to the medical intensive care unit and subsequently transferred to the medicine service, adherence to geriatric-focused practices and associated clinical outcomes during intensive care were determined.
Results: A total of 179 patients (mean age, 80.5 years) met inclusion criteria. Nonadherence to geriatric-focused practices, including nothing by mouth (P = .004), exposure to benzodiazepines (P = .007), and use of restraints (P < .001), were associated with longer stay in the intensive care unit. Nothing by mouth (P = .002) and restraint use (P = .003) were significantly associated with longer hospital stays. Bladder catheters were associated with hospital-acquired pressure injuries (odds ratio, 8.9; 95% CI, 1.2-67.9) and discharge to rehabilitation (odds ratio, 8.9; 95% CI, 1.2-67.9). Nothing by mouth (odds ratio, 3.2; 95% CI, 1.2-8.0) and restraints (odds ratio, 2.8; 95% CI, 1.4-5.8) were also associated with an increase in 30-day readmission. Although 95% of the patients were assessed at least once by using the Confusion Assessment Method for the Intensive Care Unit (overall 2334 assessments documented), only 3.4% had an assessment that indicated delirium; 54.6% of these assessments were inaccurate.
Conclusion: Although initiatives have increased awareness of the challenges, implementation of geriatric-focused practices in intensive care is inconsistent.
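The associations above are summarised as odds ratios with 95% confidence intervals. For orientation only, the sketch below shows the textbook calculation of an unadjusted odds ratio and its Wald 95% CI from a 2x2 table with invented counts; it does not reproduce the study's adjusted estimates.

import math

# Invented 2x2 table: exposure (rows) by outcome (columns).
a, b = 12, 28    # exposed:   outcome yes / outcome no
c, d = 10, 129   # unexposed: outcome yes / outcome no

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(round(odds_ratio, 1), (round(lower, 1), round(upper, 1)))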
abstract_id: PUBMED:19556412
Systemic inflammatory response syndrome score and race as predictors of length of stay in the intensive care unit. Background: Identifying predictors of length of stay in the intensive care unit can help critical care clinicians prioritize care in patients with acute, life-threatening injuries.
Objective: To determine if systemic inflammatory response syndrome scores are predictive of length of stay in the intensive care unit in patients with acute, life-threatening injuries.
Methods: Retrospective chart reviews were completed on patients with acute, life-threatening injuries admitted to the intensive care unit at a level I trauma center in the southeastern United States. All 246 eligible charts from the trauma registry database from 1998 to 2007 were included. Systemic inflammatory response syndrome scores measured on admission were correlated with length of stay in the intensive care unit. Data on race, sex, age, smoking status, and injury severity score also were collected. Univariate and multivariate regression modeling was used to analyze data.
Results: Severe systemic inflammatory response syndrome scores on admission to the intensive care unit were predictive of length of stay in the unit (F=15.83; P<.001), as was white race (F=9.7; P=.002), and injury severity score (F=20.23; P<.001).
Conclusions: Systemic inflammatory response syndrome scores can be measured quickly and easily at the bedside. Data support use of the score to predict length of stay in the intensive care unit.
abstract_id: PUBMED:29521453
Sleep on the ward in intensive care unit survivors: a case series of polysomnography. Background: Few studies have investigated sleep in patients after intensive care despite the possibility that inadequate sleep might further complicate an acute illness impeding recovery.
Aims: To assess the quality and quantity of a patient's sleep on the ward by polysomnography (PSG) within a week of intensive care unit (ICU) discharge and to explore the prevalence of key in-ICU risk factors for persistent sleep fragmentation.
Methods: We enrolled 20 patients after they had been mechanically ventilated for at least 3 days and survived to ICU discharge. We included all patients over the age of 16 years and excluded patients with advanced cognitive impairment, inability to follow simple commands before their acute illness, a primary admission diagnosis of neurological injury, or uncontrolled psychiatric illness, as well as those not fluent in English.
Results: Twenty patients underwent an overnight PSG recording on day 7 after ICU discharge (SD, 1 day). ICU survivors provided 292.8 h of PSG recording time with median recording times of 16.8 h (Interquartile range (IQR), 15.0-17.2 h). The median total sleep time per patient was 5.3 h (IQR, 2.6-6.3 h). In a multivariable regression model, postoperative admission diagnosis (P = 0.04) and patient report of poor ICU sleep (P = 0.001) were associated with less slow-wave (restorative) sleep on the wards after ICU discharge.
Conclusions: Patients reported poor sleep while in the ICU, and a postoperative admission diagnosis may identify a high-risk subgroup of patients who may derive greater benefit from interventions to improve sleep hygiene.
abstract_id: PUBMED:32476030
Feasibility of Nurse-Led Multidimensional Outcome Assessments in the Neuroscience Intensive Care Unit. Background: The outcome focus for survivors of critical care has shifted from mortality to patient-centered outcomes. Multidimensional outcome assessments performed in critically ill patients typically exclude those with primary neurological injuries.
Objective: To determine the feasibility of measurements of physical function, cognition, and quality of life in patients requiring neurocritical care.
Methods: This evaluation of a quality improvement initiative involved all patients admitted to the neuroscience intensive care unit at the University of Cincinnati Medical Center.
Interventions: Telephone assessments of physical function (Glasgow Outcome Scale-Extended and modified Rankin Scale scores), cognition (modified Telephone Interview for Cognitive Status), and quality of life (5-level EQ-5D) were conducted between 3 and 6 months after admission.
Results: During the 2-week pilot phase, the authors contacted and completed data entry for all patients admitted to the neuroscience intensive care unit over a 2-week period in approximately 11 hours. During the 18-month implementation phase, the authors followed 1324 patients at a mean (SD) time of 4.4 (0.8) months after admission. Mortality at follow-up was 38.9%; 74.8% of these patients underwent withdrawal of care. The overall loss to follow-up rate was 23.6%. Among all patients contacted, 94% were available by the second attempt to interview them by telephone.
Conclusions: Obtaining multidimensional outcome assessments by telephone across a diverse population of neurocritically ill patients was feasible and efficient. The sample was similar to those in other cohort studies in the neurocritical care population, and the loss to follow-up rate was comparable with that of the general critical care population.
abstract_id: PUBMED:36872058
Impact of Fluid Balance on Intensive Care Unit Length of Stay in Critically Ill Trauma Patients. Background: There are substantial data in the medical and surgical literature supporting correlations between a positive volume balance and negative outcomes such as acute kidney injury (AKI), prolonged mechanical ventilation, longer intensive care unit and hospital length of stay, and increased mortality.
Methods: This single-center, retrospective chart review included adult patients identified from a Trauma Registry database. The primary outcome was the total ICU LOS. Secondary outcomes include hospital LOS, ventilator-free days, incidence of compartment syndrome, acute respiratory distress syndrome (ARDS), renal replacement therapy (RRT), and days of vasopressor therapy.
Results: In general, baseline characteristics were similar between groups with the exception of mechanism of injury, FAST exam, and disposition from the ED. The ICU LOS was shortest in the negative fluid balance and longest in the positive fluid balance group (4 days vs 6 days, P = .001). Hospital LOS was also shorter in the negative balance group than that of the positive balance group (7 days vs 12 days, P < .001). More patients in the positive balance group experienced acute respiratory distress syndrome compared to the negative balance group (6.3% vs 0%, P = .004). There was no significant difference in the incidence of renal replacement therapy, days of vasopressor therapy, or ventilator-free days.
Discussion: A negative fluid balance at seventy-two hours was associated with a shorter ICU and hospital LOS in critically ill trauma patients. Our observed correlation between positive volume balance and total ICU days merits further exploration with prospective, comparative studies of lower volume resuscitation to key physiologic endpoints compared with routine standard of care.
Answer: Survivors of acute neurologic injury who have undergone prolonged admission in the neuroscience intensive care unit (ICU) appear to have limited memory of their ICU stay. A survey conducted on patients admitted for 10 days or more to a neuroscience ICU over a 10-year period found that recall of details of the ICU admission was limited. Fewer than 10% of patients who required mechanical ventilation recalled being on a ventilator. Only 11% of the respondents had responses suggestive of possible post-traumatic stress syndrome. The most commonly experienced symptoms following discharge were difficulty sleeping, difficulty with concentration, and memory loss. Therefore, the study suggests that patients requiring prolonged neuroscience ICU admission do not seem to be traumatized by their ICU stay (PUBMED:23553251). |
Instruction: Does visual impairment lead to additional disability in adults with intellectual disabilities?
Abstracts:
abstract_id: PUBMED:18771511
Does visual impairment lead to additional disability in adults with intellectual disabilities? Background: This study addresses the question to what extent visual impairment leads to additional disability in adults with intellectual disabilities (ID).
Method: In a multi-centre cross-sectional study of 269 adults with mild to profound ID, social and behavioural functioning was assessed with observant-based questionnaires, prior to expert assessment of visual function. With linear regression analysis the percentage of variance, explained by levels of visual function, was calculated for the total population and per ID level.
Results: A total of 107/269 participants were visually impaired or blind (WHO criteria). On top of the decrease attributable to ID, visual impairment significantly decreased daily living skills, communication & language, and recognition/communication. Visual impairment did not cause more self-absorbed and withdrawn behaviour or anxiety. Peculiar looking habits correlated with visual impairment and not with ID. In the groups with moderate and severe ID this effect appeared stronger than in the group with profound ID.
Conclusion: Although ID alone impairs daily functioning, visual impairment diminishes the daily functioning even more. Timely detection and treatment or rehabilitation of visual impairment may positively influence daily functioning, language development, initiative and persistence, social skills, communication skills and insecure movement.
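The "percentage of variance explained" by visual function in the study above is the R-squared of a linear regression. A generic sketch with synthetic scores and an assumed numeric coding of visual-function level:

import numpy as np

# Synthetic data: higher visual-function level, higher daily-living score.
visual_function = np.array([0, 0, 1, 1, 2, 2, 3, 3])        # coding assumed
daily_living    = np.array([20, 25, 30, 38, 45, 50, 58, 62])

slope, intercept = np.polyfit(visual_function, daily_living, 1)
predicted = intercept + slope * visual_function
ss_res = np.sum((daily_living - predicted) ** 2)
ss_tot = np.sum((daily_living - daily_living.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"{100 * r_squared:.1f}% of variance explained")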
abstract_id: PUBMED:24467811
Onset aging conditions of adults with an intellectual disability associated with primary caregiver depression. Caregivers of adults with an intellectual disability experience depressive symptoms, but the aging factors of the care recipients associated with the depressive symptoms are unknown. The objective of this study was to analyze the onset aging conditions of adults with an intellectual disability associated with the depression scores of their primary caregivers. A cross-sectional survey was administered to gather information from 455 caregivers of adults with an intellectual disability about their symptoms of depression, which were assessed by the 9-item Patient Health Questionnaire (PHQ-9). The 12 aging conditions of adults with an intellectual disability include physical and mental health. The results indicate that 78% of adults with an intellectual disability demonstrate aging conditions. Physical conditions associated with aging include hearing decline (66.3%), vision decline (63.6%), incontinence (44%), articulation and bone degeneration (57.9%), teeth loss (80.4%), physical strength decline (81.2%), sense of taste and smell decline (52.8%), and accompanying chronic illnesses (74.6%). Mental conditions associated with aging include memory loss (77%), language ability deterioration (74.4%), poor sleep quality (74.2%), and easy onset of depression and sadness (50.3%). Aging conditions of adults with an intellectual disability (p<0.001) were one factor that significantly affected the presence of depressive symptoms among caregivers after controlling for demographic characteristics. In particular, poor sleep quality of adults with an intellectual disability (yes vs. no, OR=3.807, p=0.002) was statistically correlated with the occurrence of significant depressive symptoms among their caregivers. This study suggests that the authorities should reorient community services and future policies toward the needs of family caregivers to decrease the burdens associated with caregiving.
abstract_id: PUBMED:31136093
The relationship between symptoms of autism spectrum disorder and visual impairment among adults with intellectual disability. The higher prevalence of autism reported in blind children has been commonly attributed to the confounding effects of an underlying intellectual disability. The aim of this study was to explore the relationship between symptoms of autism and blindness in adults with intellectual disability. We hypothesized that blindness can increase the probability of the autism phenotype, independent of known risk factors, that is, severity of intellectual disability and gender. A general population case register (population size of 0.7 million) was used to conduct two studies. The first study was on 3,138 adults with intellectual disability, using a validated autism risk indicator to study adults with visual impairment. This identified 386 adults with partial and complete visual impairment, both of which were associated with presence of high number of autistic traits (P < 0.001). The second study was only on those with congenital blindness using a standardized assessment tool, the Pervasive Developmental Disorder-Mental Retardation Scale. Those with hearing impairment or unilateral, partial, and acquired visual impairment were excluded. Control groups were randomly selected from those with normal hearing and vision. Prevalence of the autism phenotype was higher among those with congenital blindness (n = 46/60; 76.7%) than their controls (n = 36/67; 53.7%) and this association was statistically significant (adjusted odds ratio = 3.03; 95% confidence interval: 1.34-6.89; P = 0.008). Our results support the hypothesis that a congenital blindness independently affects psychosocial development and increases the probability of the autism phenotype. Early identification of autism could facilitate appropriate psychosocial interventions and educational opportunities to improve quality of life of people with blindness. Autism Res 2019, 12: 1411-1422. © 2019 International Society for Autism Research, Wiley Periodicals, Inc. LAY SUMMARY: Although autism has been commonly reported in those with blindness, it is generally attributed to an accompanying intellectual disability. Current study, however, revealed that congenital blindness is independently associated with symptoms of autism. In spite of its high prevalence, autism can be overlooked in those with intellectual disability and blindness. Improving diagnosis in this population should, therefore, be advocated through raising awareness of this association to facilitate early access to services.
abstract_id: PUBMED:30173164
Prevalence of long-term health conditions in adults with autism: observational study of a whole country population. Objectives: To investigate the prevalence of comorbid mental health conditions and physical disabilities in a whole country population of adults aged 25+ with and without reported autism.
Design: Secondary analysis of Scotland's Census, 2011 data. Cross-sectional study.
Setting: General population.
Participants: 94% of Scotland's population, including 6649/3 746 584 adults aged 25+ reported to have autism.
Main Outcome Measures: Prevalence of six comorbidities: deafness or partial hearing loss, blindness or partial sight loss, intellectual disabilities, mental health conditions, physical disability and other condition; ORs (95% CI) of autism predicting these comorbidities, adjusted for age and gender; and OR for age and gender in predicting comorbidities within the population with reported autism.
Results: Comorbidities were common: deafness/hearing loss-14.1%; blindness/sight loss-12.1%; intellectual disabilities-29.4%; mental health conditions-33.0%; physical disability-24.0%; other condition-34.1%. Autism statistically predicted all of the conditions: OR 3.3 (95% CI 3.1 to 3.6) for deafness or partial hearing loss, OR 8.5 (95% CI 7.9 to 9.2) for blindness or partial sight loss, OR 94.6 (95% CI 89.4 to 100.0) for intellectual disabilities, OR 8.6 (95% CI 8.2 to 9.1) for mental health conditions, OR 6.2 (95% CI 5.8 to 6.6) for physical disability and OR 2.6 (95% CI 2.5 to 2.8) for other condition. Contrary to findings within the general population, female gender predicted all conditions within the population with reported autism, including intellectual disabilities (OR=1.4).
Conclusions: Clinicians need heightened awareness of comorbidities in adults with autism to improve detection and suitable care, especially given the added complexity of assessment in this population and the fact that hearing and visual impairments may cause additional difficulties with reciprocal communication which are also a feature of autism; hence posing further challenges in assessment.
abstract_id: PUBMED:9373823
Diagnosis of sensory impairment in people with intellectual disability in general practice. The present authors have participated in the development of a Dutch consensus on the early detection, diagnosis and treatment of hearing and visual impairment in children and adults with intellectual disability. They argue that the early detection of sensory impairment in babies and children with intellectual disability should primarily be a responsibility of paediatricians and youth health physicians. General practitioners should be aware of the necessity of screening and should check whether this has been done when children visit the surgery. It is stressed that the general practitioner should play a more active role in the detection of age-related sensory loss in older adults with intellectual disability, and the assessment of younger adults whose sensory functions have never or incompletely been evaluated. Annual sensory screening is certainly not necessary, but annual otoscopy to detect impacted earwax or unidentified middle ear infection, as well as checks of the proper use of glasses and hearing aids, are suggested. Most adults with mild or moderate intellectual disability can be assessed with methods that are normally used by general practitioners. Uncooperative people should be referred for screening with specialized methods. A low-threshold referral system (e.g. via district expert teams) has been outlined.
abstract_id: PUBMED:32525419
Investigating similarities and differences in health needs according to disability type using the International Classification of Functioning, Disability and Health. Purpose: This study aimed to investigate the health needs of adults with disabilities in South Korea according to disability type using the International Classification of Functioning, Disability and Health (ICF).
Materials And Methods: An exploratory, qualitative approach using content analysis was employed. Five focus groups consisted of six to seven participants with visual impairment (PVI), hearing impairment (PHI), physical impairment (PPI), brain disorder (PBD), and intellectual disability (PID). Linking rules were used to identify how the health needs related to the ICF components of Body Functions, Activity & Participation, and Environmental Factors.
Results: The health needs related to the Environmental Factors were the most mentioned and were frequently perceived as causes of poor health conditions related to Activities & Participation and Body Function. According to what participants perceived as main health issues in the Environmental Factors, the five groups were classified into (1) Services, systems, and policies mainly affecting type (PVI and PPI); (2) Support and relationships mainly affecting type (PHI); and (3) Attitude mainly affecting type (PBD and PID).
Conclusions: Government officials and health professionals must tailor development and provision of healthcare for people with disabilities based on health need type. IMPLICATIONS FOR REHABILITATION: Few studies have investigated the health needs of people with disabilities, although many health indicators suggest that they are facing health inequalities in South Korea. The health issues related to the Environmental Factors were often perceived in this study as causes of poor health conditions related to the Activities & Participation and Body Function, indicating the need to preferentially solve the health issues related to the Environmental Factor. According to what people with each of the five types of disabilities perceived as main health issues and what kinds of actions they expected in the Environmental Factors, they could be classified into three health need types. It is recommended that government officials and health professionals develop and provide appropriate supply-side measures of healthcare considering these different demand-side health needs according to disability type.
abstract_id: PUBMED:22502859
Prevalence, associated factors and treatment of sleep problems in adults with intellectual disability: a systematic review. In people with intellectual disability (ID), impaired sleep is common. Life expectancy has increased in this group, and it is known that in general population sleep deteriorates with aging. Therefore the aims of this systematic review were to examine how sleep problems are defined in research among adults and older people with ID, and to collect information on the prevalence, associated factors and treatment of sleep problems in this population. PubMed, EMBase, PsycINFO and Web of Science were searched for studies published between January 1990 and August 2011. All empirical studies covering sleep problems in adults with ID were included, and assessed on quality (level of evidence), using a slightly modified version of the SIGN-50 methodology checklist for cohort studies. Of 50 studies that were included for systematic review, one was of high quality, 14 were well conducted, 14 were well conducted but with a high risk of bias, and 21 were non-analytical. The reported estimated prevalence rates of sleep problems in adults with ID ranged from 8.5% to 34.1%. A prevalence of 9.2% was reported for significant sleep problems. Sleep problems were associated with the following factors: challenging behavior; respiratory disease; visual impairment; psychiatric conditions; and using psychotropic, antiepileptic and/or antidepressant medication. Little information was found on older people specifically. Two studies reported treatment effects on sleep problems in larger populations; their findings suggest that non-pharmaceutical interventions are beneficial. Research on the prevalence, associated factors and treatment of sleep problems in adults and older people with ID has mainly focused on subjectively derived data. The definitions used to describe a sleep problem are not uniform, and associations are mainly described as correlations. In order to give recommendations for clinical practice further research is needed, involving objective measurements and multivariate analysis.
abstract_id: PUBMED:36554959
More than a Physical Problem: The Effects of Physical and Sensory Impairments on the Emotional Development of Adults with Intellectual Disabilities. With the introduction of the ICD-11 and DSM-5, indicators of adaptive behavior, including social-emotional skills, are in focus for a more comprehensive understanding of neurodevelopmental disorders. Emotional skills can be assessed with the Scale of Emotional Development-Short (SED-S). To date, little is known about the effects of physical disorders and sensory impairments on a person's developmental trajectory. The SED-S was applied in 724 adults with intellectual disabilities, of whom 246 persons had an additional physical and/or sensory impairment. Ordinal regression analyses revealed an association of movement disorders with more severe intellectual disability and lower levels of emotional development (ED) on the overall and domain levels (Others, Body, Material, and Communication). Visual impairments predicted lower levels of ED in the SED-S domains Material and Body, but not the overall level of ED. Hearing impairments were not associated with intellectual disability or ED. Epilepsy correlated only with the severity of intellectual disability. Multiple impairments predicted more severe intellectual disabilities and lower levels of overall ED. In conclusion, physical and sensory impairments may not only affect physical development but may also compromise intellectual and emotional development, which should be addressed in early interventions.
abstract_id: PUBMED:37892316
Cochlear Implantation in Children with Additional Disabilities: A Systematic Review. This study examines the last 10 years of medical literature on the benefits of cochlear implantation in children who are deaf or hard of hearing (DHH) with additional disabilities. The most recent literature concerning cochlear implants (CIs) in DHH children with additional disabilities was systematically explored through PubMed, Embase, Scopus, PsycINFO, and Web of Science from January 2012 to July 2023. Our two-stage search strategy selected a total of 61 articles concerning CI implantation in children with several forms of additional disabilities: autism spectrum disorder, cerebral palsy, visual impairment, motor disorders, developmental delay, genetic syndromes, and intellectual disability. Overall, many children with additional disabilities benefit from CIs by acquiring greater environmental sound awareness. This, in turn, improves non-verbal communication and adaptive skills, with greater possibilities to relate to others and to be connected with the environment. Instead, despite some improvement, expressive language tends to develop more slowly and to a lesser extent compared to children affected by hearing loss only. Further studies are needed to better appreciate the specificities of each single disability and to personalize interventions, not restricting the analysis to auditory and language skills, but rather applying or developing cross-culturally validated instruments able to reliably assess the developmental trajectory and the quality of life of DHH children with additional disabilities before and after CI.
abstract_id: PUBMED:32341043
Prevalence of mental health conditions, sensory impairments and physical disability in people with co-occurring intellectual disabilities and autism compared with other people: a cross-sectional total population study in Scotland. Objectives: To investigate prevalence of mental health conditions, sensory impairments and physical disability in children, adults and older adults with co-occurring intellectual disabilities and autism, given its frequent co-occurrence, compared with the general population.
Design: Whole country cohort study.
Setting: General community.
Participants: 5709 people with co-occurring intellectual disabilities and autism, compared with 5 289 694 other people.
Outcome Measures: Rates and ORs with 95% CIs for mental health conditions, visual impairment, hearing impairment and physical disability in people with co-occurring intellectual disabilities and autism compared with other people, adjusted for age, sex and interaction between age and co-occurring intellectual disabilities and autism.
Results: All four long-term conditions were markedly more common in children, adults and older adults with co-occurring intellectual disabilities and autism compared with other people. For mental health, OR=130.8 (95% CI 117.1 to 146.1); visual impairment OR=65.9 (95% CI 58.7 to 73.9); hearing impairment OR=22.0 (95% CI 19.2 to 25.2); and physical disability OR=157.5 (95% CI 144.6 to 171.7). These ratios are also greater than previously reported for people with either intellectual disabilities or autism rather than co-occurring intellectual disabilities and autism.
Conclusions: We have quantified the more than double disadvantage for people with co-occurring intellectual disabilities and autism in terms of additional long-term health conditions. This may well impact on quality of life. It raises challenges for staff working with these people in view of the additional complexity in assessing, diagnosing and treating additional health conditions, as sensory impairments and mental health conditions in particular compound the person's pre-existing communication and cognitive problems in this context. Planning is important, with staff being trained, equipped, resourced and prepared to address the challenge of working for people with these conditions.
Answer: Yes, visual impairment does lead to additional disability in adults with intellectual disabilities. A study found that visual impairment significantly decreased daily living skills, communication and language, and recognition/communication in adults with intellectual disabilities. This effect was stronger in groups with moderate and severe intellectual disabilities than in the group with profound intellectual disabilities. The study concluded that timely detection and treatment or rehabilitation of visual impairment may positively influence daily functioning, language development, initiative and persistence, social skills, communication skills, and insecure movement (PUBMED:18771511).
Furthermore, visual impairment has been associated with an increased probability of the autism phenotype in adults with intellectual disabilities, independent of known risk factors such as the severity of intellectual disability and gender. Congenital blindness, in particular, was found to be independently associated with symptoms of autism (PUBMED:31136093).
Additionally, the prevalence of blindness or partial sight loss was found to be significantly higher in adults with reported autism compared to the general population, indicating that visual impairment is a common comorbidity in adults with intellectual disabilities and autism (PUBMED:30173164).
Overall, these findings suggest that visual impairment exacerbates the challenges faced by adults with intellectual disabilities and can impact various aspects of their daily lives and development. Therefore, it is important for healthcare providers to be aware of the potential for additional disabilities caused by visual impairment in this population and to ensure appropriate interventions are in place. |
Instruction: Should infection control practitioners do follow-up of staff exposures to patient blood and body fluids?
Abstracts:
abstract_id: PUBMED:8731027
Should infection control practitioners do follow-up of staff exposures to patient blood and body fluids? Background: The purpose of this study was to determine the efficiency of a joint infection control/occupational health program for the follow-up of accidental blood or bloody body fluid exposures in health care workers.
Methods: A comprehensive staff follow-up program for all blood exposures with known patient sources was initiated in 1989, consisting of patient follow-up by the Infection Control Department (risk assessment for hepatitis B virus [HBV] and [HIV] infection and obtaining of consent for HIV testing) and staff follow-up by the Occupational Health Department. In 1992 a mailed survey was conducted to examine exposure follow-up policies and responsibilities in large teaching hospitals across Canada.
Results: A total of 924 blood exposures with known patient sources were reported between January 1989 and December 1993. HIV and HBV screening was obtained for 67.9% and 87.6% of patients assessed as at low risk and 82.3% and 92.2% of those assessed as at high risk for infection, respectively. Two previously unknown HIV-seropositive patients were identified, one of whom had been classified as at low risk (one of 530 [0.19%] patients at low risk who underwent screening). Primary reasons for screening being missed were patient discharge (46.3%) or communication problems (18.0%). The requirement for informed written consent before HIV screening accounted for the difference in completed HIV and HBV screens. Results of the hospital survey indicated that 40.8% of Canadian hospitals follow up all patients who are involved in blood exposures; however, most hospitals still rely on the physician to obtain consent (87.6%).
Conclusions: Use of ICPs to screen patients involved in staff blood exposures during regular hours may be the most efficient method of follow-up, particularly if supplemented by a backup team of health professionals on nights and weekends. Although screening all patients for HBV/HIV may detect patients with undisclosed high-risk behaviors, institutions must decide whether the practice is cost-effective in areas of low prevalence.
abstract_id: PUBMED:38434795
Splash of body fluids among healthcare support staff in Ghana: a cross-sectional study. Background: Exposure to splash of body fluids is one of the common ways of transmitting blood-borne infections from patients to healthcare practitioners. Globally, there is a paucity of evidence on exposure to splash of body fluids among hospital housekeepers. This study, therefore, investigated splash of body fluid and its predisposing factors among healthcare support staff in the Greater Accra region, Ghana.
Methods: An analytic cross-sectional survey was conducted among support staff in 10 major hospitals between 30 January and 31 May 2023. A multi-stage sampling procedure was the overarching technique employed, and study participants were recruited through simple random and probability proportional-to-size sampling techniques. The data analyses were conducted using STATA 15 software. The preliminary association between exposure to splash of body fluids and predisposing factors was established through Chi-square, Fisher's exact, and Mann-Whitney U tests. Log-binomial regression analyses were employed to validate the factors related to splash of body fluids at a significance level of p-value < 0.05.
Results: The investigation was conducted among 149 healthcare support staff. Exposure to splashes of body fluids over the preceding year was 53.7% (95% CI: 45.3%-61.9%). The body fluids most commonly encountered in these splash exposures were amniotic fluid (36.3%) and urine (23.8%). Several factors were significantly associated with splashes of body fluids, namely: being employed as a healthcare assistant [APR = 1.61 (1.16, 2.22)], holding a supervisory position [APR = 0.24 (0.11, 0.51)], having a system in place for reporting body fluid splashes [APR = 0.61 (0.44, 0.85)], being male [APR = 0.62 (0.41, 0.93)], and adhering to standard precautions most of the time [APR = 1.66 (1.11, 2.48)].
Conclusion: Healthcare support staff were highly exposed to splash of body fluids. Gender, supervisory role, category of worker, reporting systems, and adherence to standard precautions were associated with exposure to splash of body fluids. Facility managers are advised to enhance the efficiency of reporting systems.
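The adjusted prevalence ratios (APRs) above come from log-binomial regression, i.e. a binomial GLM with a log link whose exponentiated coefficients are prevalence ratios. Purely as an illustration of that model class (not the authors' analysis), the sketch below fits one to synthetic data with statsmodels; log-binomial models can fail to converge on real data and may need careful starting values.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
assistant = rng.integers(0, 2, n)          # 1 = healthcare assistant
male = rng.integers(0, 2, n)               # 1 = male
p = np.clip(0.25 + 0.15 * assistant - 0.05 * male, 0.05, 0.95)
splash = rng.binomial(1, p)                # 1 = exposed to a splash

X = sm.add_constant(np.column_stack([assistant, male]))
model = sm.GLM(splash, X, family=sm.families.Binomial(link=sm.families.links.Log()))
result = model.fit()
print(np.exp(result.params))               # prevalence ratios: baseline, assistant, male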
abstract_id: PUBMED:25729538
Prevalence and determinants of occupational exposures to blood and body fluids among health workers in two tertiary hospitals in Nigeria. Background: Healthcare associated infections among health workers commonly follow occupational exposures to pathogens infecting blood or body fluids of patients. We evaluated the prevalence and determinants of occupational exposures to blood/body fluids among health workers in two tertiary hospitals in Nigeria.
Methods: In a cross section study undertaken in two tertiary hospitals in North-central and South-south Nigeria in 2011, a structured self-administered questionnaire was used to obtain demographic data and occupational exposures to blood/body fluids in the previous year from doctors, nurses and laboratory scientists. Independent predictors of occupational exposures were determined in an unconditional logistic regression model.
Results: Of the 290 health workers studied, 75.8%, 44.7%, 32.9%, 33.9% and 84.4% reported skin contact with patients' blood, needle-stick injuries, cuts by sharps, blood/body fluid splashes to mucous membranes, and one or more types of exposure, respectively. Ninety-one percent, 86%, 71.1%, 87.6%, 81.3%, and 84.4% of house officers, resident doctors, consultant doctors, staff nurses, principal/chief nursing officers and laboratory scientists, respectively, had one or more types of exposure in the previous year (P>0.05). Professional group was found to be the only independent predictor of cuts by sharps. House officers and nurses had higher and more frequent occupational exposures than other professional groups.
Conclusion: Our results suggest high rates of occupational exposures to blood/body fluid among health workers in Nigeria, especially among newly qualified medical doctors and nurses. Health facilities in Nigeria ought to strengthen infection prevention and control practices while targeting high risk health workers such as house officers and nurses.
abstract_id: PUBMED:25636318
Healthcare worker adherence to follow-up after occupational exposure to blood and body fluids at a teaching hospital in Brazil. Healthcare workers (HCWs) are at a high risk for exposure to pathogens in the workplace. The objective of this study was to evaluate HCW adherence to follow-up after occupational exposure to blood and body fluids at a tertiary care university hospital in the city of São Paulo, Brazil. Data were collected from 2102 reports of occupational exposure to blood and body fluids, obtained from the Infection Control Division of the Universidade Federal de São Paulo/Escola Paulista de Medicina/Hospital São Paulo, in São Paulo, Brazil, occurring between January of 2005 and December of 2011. To evaluate adherence to post-exposure follow-up among the affected HCWs, we took into consideration follow-up visits for serological testing. For HCWs exposed to materials from source patients infected with human immunodeficiency virus (HIV), hepatitis B virus (HBV), or hepatitis C virus (HCV), as well as from source patients of unknown serological status, follow-up serological testing was scheduled for 3 and 6 months after the accident. For those exposed to materials from source patients co-infected with HIV and HCV, follow-up evaluations were scheduled for 3, 6, and 12 months after the accident. During the study period, there were 2056 accidental exposures for which data regarding the serology of the source patient were available. Follow-up evaluation of the affected HCW was recommended in 612 (29.8%) of those incidents. After the implementation of a post-exposure protocol involving telephone calls and official letters mailed to the affected HCW, adherence to follow-up increased significantly, from 30.5 to 54.0% (P = 0.028). Adherence was correlated positively with being female (P = 0.009), with the source of the exposure being known (P = 0.026), with the source patient being HIV positive (P = 0.029), and with the HCW having no history of such accidents (P = 0.047). Adherence to the recommended serological testing was better at the evaluation scheduled for 3 months after the exposure (the initial evaluation) than at those scheduled for 6 and 12 months after the exposure (P = 0.004). During the study period, there was one confirmed case of HCW seroconversion to HCV positivity. The establishment of a protocol that involves the immediate supervisors of the affected HCWs in formally summoning those HCWs is necessary in order to increase the rate of adherence to post-exposure follow-up.
abstract_id: PUBMED:17564978
Costs of management of occupational exposures to blood and body fluids. Objective: To determine the cost of management of occupational exposures to blood and body fluids.
Design: A convenience sample of 4 healthcare facilities provided information on the cost of management of occupational exposures that varied in type, severity, and exposure source infection status. Detailed information was collected on time spent reporting, managing, and following up the exposures; salaries (including benefits) for representative staff who sustained and who managed exposures; and costs (not charges) for laboratory testing of exposure sources and exposed healthcare personnel, as well as any postexposure prophylaxis taken by the exposed personnel. Resources used were stratified by the phase of exposure management: exposure reporting, initial management, and follow-up. Data for 31 exposure scenarios were analyzed. Costs were given in 2003 US dollars.
Setting: The 4 facilities providing data were a 600-bed public hospital, a 244-bed Veterans Affairs medical center, a 437-bed rural tertiary care hospital, and a 3,500-bed healthcare system.
Results: The overall range of costs to manage reported exposures was $71-$4,838. Mean total costs varied greatly by the infection status of the source patient. The overall mean cost for exposures to human immunodeficiency virus (HIV)-infected source patients (n=19, including those coinfected with hepatitis B or C virus) was $2,456 (range, $907-$4,838), whereas the overall mean cost for exposures to source patients with unknown or negative infection status (n=8) was $376 (range, $71-$860). Lastly, the overall mean cost of management of reported exposures for source patients infected with hepatitis C virus (n=4) was $650 (range, $186-$856).
Conclusions: Management of occupational exposures to blood and body fluids is costly; the best way to avoid these costs is by prevention of exposures.
abstract_id: PUBMED:15757989
Hepatitis B transmission through blood and body fluids exposure of school personnel. Background: Hepatitis B transmission from students to members of staff has been documented in schools, particularly nurseries and day care centres.
Aims: To investigate the frequency of exposure to blood and other body fluids within day schools and to document practices adopted by school personnel to avoid direct contact and decontaminate the environment.
Methods: Questionnaire survey among 21 public day schools in Malta.
Results: Episodes of significant blood exposure were rare, occurring at frequencies of 0.071 [95% confidence interval (CI): 0-0.148] incidents per thousand student days. Contact with larger volumes of other body fluids, namely urine and vomitus, was more likely: 0.12 (95% CI: 0.008-0.383) and 0.088 (95% CI: 0.048-0.128) episodes per 1000 student days, respectively. School personnel generally used correct personal protective equipment, particularly gloves, in cases of contact with blood and body fluids. Environmental disinfection methods varied considerably with only 38% of schools (95% CI: 21-59%) using recommended hypochlorite preparations.
Conclusions: Exposure to quantities of blood sufficient to result in HBV transmission in day schools is rare. Emphasis should be placed on risk assessment at individual school level, concentrating on correct management of body fluid exposures through effective staff education.
abstract_id: PUBMED:33010808
Occupational exposures to blood and body fluids among healthcare workers in Ethiopia: a systematic review and meta-analysis. Background: Occupational exposure to blood and body fluids is a major risk factor for the transmission of blood-borne infections to healthcare workers. There are several primary studies in Ethiopia yet they might not be at the national level to quantify the extent of occupational blood and body fluid exposures (splash of blood or other body fluids into the eyes, nose, or mouth) or blood contact with non-intact skin among the healthcare workers. This systematic review and meta-analysis aimed to estimate the pooled prevalence of occupational blood and body fluid exposure of healthcare workers in Ethiopia.
Methods: PubMed, Science Direct, Hinari, Google Scholar, and the Cochrane library were systematically searched; in addition, the reference lists of the included articles were also checked for further possible sources. The Cochrane Q statistic and I² test were used to assess the heterogeneity of the included studies. A random-effects meta-analysis model was used to estimate the lifetime and 12-month prevalence of occupational exposure to blood and body fluids among healthcare workers in Ethiopia.
Results: Of the 641 articles identified through the database search, 36 studies were included in the final analysis. The estimated pooled lifetime and 12-month prevalence of occupational exposure to blood and body fluids among healthcare workers were found to be 54.95% (95% confidence interval (CI), 48.25-61.65) and 44.24% (95% CI, 36.98-51.51), respectively. The study identified variation across Ethiopian regions in the proportion of healthcare workers exposed to blood and body fluids.
Conclusion: The finding of the present study revealed that there was a high level of annual and lifetime exposures to blood and body fluids among healthcare workers in Ethiopia.
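The pooling step described in the Methods can be illustrated with a hand-rolled DerSimonian-Laird random-effects calculation; the sketch below uses invented (cases, sample size) pairs rather than the 36 included studies, and real analyses usually pool logit- or double-arcsine-transformed proportions rather than raw ones:

# DerSimonian-Laird random-effects pooled prevalence, with Cochrane Q and I^2.
# The study data below are invented for illustration only.
import numpy as np

studies = [(120, 250), (90, 300), (200, 410), (60, 180), (150, 260)]  # (cases, n)
p = np.array([c / n for c, n in studies])                            # study prevalences
v = np.array([pi * (1 - pi) / n for pi, (c, n) in zip(p, studies)])  # within-study variances

w = 1.0 / v                                   # fixed-effect (inverse-variance) weights
p_fixed = np.sum(w * p) / np.sum(w)
Q = np.sum(w * (p - p_fixed) ** 2)            # Cochrane Q statistic
k = len(studies)
I2 = max(0.0, (Q - (k - 1)) / Q) * 100.0      # I^2 heterogeneity, in percent

# Between-study variance (tau^2), then random-effects weights and pooled estimate.
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
w_re = 1.0 / (v + tau2)
p_re = np.sum(w_re * p) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
lo, hi = p_re - 1.96 * se_re, p_re + 1.96 * se_re

print(f"Pooled prevalence {p_re:.1%} (95% CI {lo:.1%} to {hi:.1%}); Q = {Q:.1f}, I2 = {I2:.0f}%")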
abstract_id: PUBMED:19233830
Exposures to blood and body fluids in Brazilian primary health care. Background: Primary health care workers (HCWs) represent a growing occupational group worldwide. They are at risk of infection with blood-borne pathogens because of occupational exposures to blood and body fluids (BBF).
Aim: To investigate BBF exposure and its associated factors among primary HCWs.
Methods: Cross-sectional study among workers from municipal primary health care centres in Florianópolis, Southern Brazil. Workers who belonged to occupational categories that involved BBF exposures during the preceding 12 months were interviewed and included in the data analysis.
Results: A total of 1077 workers participated. The mean incidence rate of occupational BBF exposures was 11.9 per 100 full-time equivalent worker-years (95% confidence interval: 8.4-15.3). The cumulative prevalence was 7% during the 12 months preceding the interview. University-level education, employment as a nurse assistant, dental assistant or dentist, higher workload score, inadequate working conditions, having sustained a previous occupational accident and current smoking were associated with BBF exposures (P ≤ 0.05).
Conclusions: Primary Health Care Centres are working environments in which workers are at risk of BBF exposures. Exposure surveillance systems should be created to monitor their occurrence and to guide the implementation of preventive strategies.
abstract_id: PUBMED:10613940
Emergency department management of blood and body fluid exposures. Exposure to blood and body fluids that may be contaminated with infectious agents is a common occupational hazard for health care workers. Health care workers in the emergency department or out-of-hospital setting are at especially high risk for exposure to blood or body fluids. Nonemergency health care workers are frequently referred to hospital EDs for immediate treatment of occupational exposures. A series of recommendations by the Centers for Disease Control and Prevention evolved over the past decade, and changes are expected to continue. This state-of-the-art article reviews current recommendations for management of persons exposed to blood or body fluids and discusses the scientific basis for recommendations regarding hepatitis B virus, hepatitis C virus, and HIV.
abstract_id: PUBMED:1340944
Accidental exposure of health care workers to blood and body fluids from patients. Due to the growing concern of Health Care Workers (HCW) regarding the possibility of acquiring blood-borne infections through accidental occupational exposure, and the fact that HIV and Hepatitis B infections have occurred in that setting, an evaluation of the frequency and circumstances of accidents suffered by HCW involving blood or body fluids from patients was carried out. 1,340 self-administered questionnaires were given to HCW with direct contact with patients in a general 800-bed hospital, requesting information on accidental percutaneous, mucosal or cutaneous (if not intact) exposures to blood or certain (risky) body fluids ever and/or in the last 6 months. Sixty-five percent of HCW reported some exposure ever and 46% in the last 6 months. The rate of exposure ever ranged from 36.6% in medical students to 69.5% in doctors, 78.5% in nurses and 100% in dentists. Seventy-seven percent of the exposures were seen during routine care, 28.5% were perceived as due to personal carelessness, 19.9% due to patient agitation, 33.5% as inherent to the procedure and 8.9% to abandonment of material. Exposures were mostly to blood. One hundred eleven out of 331 (33.5%) exposures were produced during handling of syringes; 33.2% during invasive procedures and 13% during cleaning of material. Sixty out of 107 (56%) non-surgical doctors (NSD) and 61/67 (92.5%) of surgical doctors (SD) had had exposures ever (p < 0.01), and 16.8% and 65.6% had had one or more in the last 6 months, respectively. (ABSTRACT TRUNCATED AT 250 WORDS)
Answer: Based on the information provided in the abstracts, it appears that there is a strong argument for infection control practitioners (ICPs) to be involved in the follow-up of staff exposures to patient blood and body fluids.
The study described in abstract PUBMED:8731027 suggests that using ICPs to screen patients involved in staff blood exposures during regular hours may be the most efficient method of follow-up. This is particularly true if supplemented by a backup team of health professionals on nights and weekends. The study also highlights the importance of considering the cost-effectiveness of screening all patients for HBV/HIV in areas of low prevalence.
Abstract PUBMED:38434795 indicates that healthcare support staff are highly exposed to splash of body fluids, and factors such as gender, supervisory role, category of worker, reporting systems, and adherence to standard precautions are associated with exposure. This suggests that ICPs could play a crucial role in enhancing the efficiency of reporting systems and ensuring adherence to standard precautions to reduce exposure risks.
In abstract PUBMED:25729538, high rates of occupational exposures to blood/body fluid among health workers in Nigeria are reported, especially among newly qualified medical doctors and nurses. This implies that ICPs could be instrumental in strengthening infection prevention and control practices, targeting high-risk health workers.
Abstract PUBMED:25636318 shows that adherence to follow-up after occupational exposure increased significantly with the implementation of a post-exposure protocol involving telephone calls and official letters mailed to the affected HCW. This indicates that structured follow-up protocols, potentially managed by ICPs, can improve adherence to post-exposure follow-up.
Abstract PUBMED:17564978 discusses the costs associated with managing occupational exposures to blood and body fluids, emphasizing that prevention of exposures is the best way to avoid these costs. ICPs could contribute to prevention strategies and manage the financial aspects of post-exposure interventions.
Abstract PUBMED:15757989 highlights that exposure to blood sufficient to result in HBV transmission in day schools is rare, but emphasizes the importance of correct management of body fluid exposures through effective staff education, which could be facilitated by ICPs.
Abstract PUBMED:33010808 reveals a high level of annual and lifetime exposures to blood and body fluids among healthcare workers in Ethiopia, suggesting a need for systematic follow-up and prevention strategies where ICPs could be involved. |
Instruction: Are patients of Asian medicine active consumers?
Abstracts:
abstract_id: PUBMED:15138372
Are patients of Asian medicine active consumers? Background: This study examines the hypothesis that heterodox patients are active consumers.
Objective: What are the decisional criteria that lead patients to Asian medicine? Do they want to be involved in therapeutic decision-making? Are they well informed about diverse therapeutic options?
Methods: Semistructured interviews with 26 patients using Ayurveda or acupuncture.
Results: Active consumerism among the patients is limited to processes before the beginning of the treatment. They collect little information about the concepts of Asian medicine. They also tend to welcome a rather paternalistic therapeutic relationship in which medical decision-making is centered around the physician.
Conclusion: As only two of the 26 interviewed patients conform to the notion of active consumerism, we argue that the majority of patients engages in practices of passive consumerism.
abstract_id: PUBMED:34623617
Leveraging knowledge of Asian herbal medicine and its active compounds as COVID-19 treatment and prevention. The outbreak of COVID-19 disease has led to a search for effective vaccines or drugs. However, insufficient vaccine supplies to meet global demand and no effective approved prescribed drugs for COVID-19 have led some people to consider the use of alternative or complementary medicines, such as traditional herbal medicine. Medicinal plants have various therapeutic properties that depend on the active compounds they contain. Obviously, herbal medicine has had an essential role in treatment and prevention during COVID-19 outbreak, especially in Asian cultures. Hence, we reviewed the uses of herbal medicine in Asian cultures and described the prominent families and species that are sources of antiviral agents against COVID-19 on the basis of case reports, community surveys, and guidelines available in the literature databases. Antiviral efficacy as determined in laboratory testing was assessed, and several promising active compounds with their molecular targets in cell models against SARS-CoV-2 viral infection will be discussed. Our review findings revealed the highly frequent use of Lamiaceae family members, Zingiber officinale, and Glycyrrhiza spp. as medicinal sources for treatment of COVID-19. In addition, several plant bioactive compounds derived from traditional herbal medicine, including andrographolide, panduratin A, baicalein, digoxin, and digitoxin, have shown potent SARS-CoV-2 antiviral activity as compared with some repurposed FDA-approved drugs. These commonly used plants and promising compounds are recommended for further exploration of their safety and efficacy against COVID-19.
abstract_id: PUBMED:29491665
The rights of patients as consumers: An ancient view. As far as the rights of consumers are concerned, the International Organization of Consumer's Union (IOCU) specified eight rights of a consumer in 1983. The Consumer Protection Act (CPA), 1986 then prescribed six "Rights of Consumers," which are protected under the act. However, these rights can be observed in the ancient Indian texts such as Brihat-trayee, Narad Smruti, and Kautilya Arthashastra, in the form of rights given to patients. For the purpose of the present study, the implemented methodology includes - (1) study of the consumer rights described by IOCU and CPA, (2) detailed review of literature for observance of replication of these consumer rights in the ancient Indian texts and (3) a comparative study of the present consumer rights with the rights of patients observed in ancient Indian texts. This study shows that the substance of consumer rights is not a recent evolution; the foundation of these rights was laid well beforehand in ancient times, when they were provided to patients by the medical profession as well as by the rulers. The current scenario of protection of consumer rights is a replication of this ancient practice.
abstract_id: PUBMED:27770300
Cultivating Medical Intentionality: The Phenomenology of Diagnostic Virtuosity in East Asian Medicine. This study examines the perceptual basis of diagnostic virtuosity in East Asian medicine, combining Merleau-Ponty's phenomenology and an ethnographic investigation of Korean medicine in South Korea. A novice, being exposed to numerous clinical transactions during apprenticeship, organizes perceptual experience that occurs between him or herself and patients. In the process, the fledgling practitioner's body begins to set up a medically-tinged "intentionality" interconnecting his or her consciousness and medically significant qualities in patients. Diagnostic virtuosity is gained when the practitioner embodies a cultivated medical intentionality. In the process of becoming a practitioner imbued with virtuosity, this study focuses on the East Asian notion of "Image" that maximizes the body's perceptual capacity, and minimizes possible reductions by linguistic re-presentation. "Image" enables the practitioner to somatically conceptualize the core notions of East Asian medicine, such as Yin-Yang, and to use them as an embodied litmus as the practitioner's cultivated body instinctively conjures up medical notions at clinical encounters. In line with anthropological critiques of reductionist frameworks that congeal human existential and perceptual vitality within a "scientific" explanatory model, this article attempts to provide an example of various knowing and caring practices, institutionalized external to the culture of science.
abstract_id: PUBMED:35018351
Active ageing of elderly consumers: insights and opportunities for future business strategies. Recent studies have focused on the emerging scenario of 'active ageing' as a series of positive actions aimed at fostering elderly adaptability by supporting emotionally close relationships and removing age-related structural barriers. Active ageing may be stimulated not only by leveraging technological and scientific innovations but also by implementing new business strategies that reflect a better comprehension of elderly new roles and behaviours. To aid in that effort, through a literature review of marketing and management contributions across a five-decade period (1970-2020), this paper investigates elderly consumers' new roles and related implications for business strategies, from a consumer behaviour perspective. Results present a structured classification of the most prominent streams of research by highlighting five promising changes (5Cs): changes in elderly consumers' roles in markets and societies; changes in self-care resulting in fashion purchases and cosmetic surgery; changes in elderly consumers' expenditures on specifically designed products and services; changes in the perception of risks resulting in preferences for either extremely prudent or hazardous behaviours; and changes in general elderly characteristics due to the so-called 'ageless society'. We highlight the heterogeneity of elderly consumers' new values and lifestyles, and the importance of incorporating their needs into innovative business strategies, by describing for each section the main findings of extant research and practical implications.
abstract_id: PUBMED:21114786
Complementary and alternative medicine practices among Asian radiotherapy patients. Aim: To describe the prevalence, expectations and factors associated with the use of complementary and alternative medicine (CAM) in Asian radiotherapy patients.
Methods: Overall 65 consecutive patients in an Asian oncology department were surveyed from December 2004 to January 2005, using a modified and translated instrument capturing information on patients' characteristics, CAM use, treatment refusal and satisfaction.
Results: Some basic characteristics were: 86% Chinese; median age 56 years (range: 31-87 years); 57% women; cancer types - breast 42%, lung 20%, nasopharyngeal 11%. All had received prior radiotherapy (54%), chemotherapy (51%) or surgery (45%). The median diagnosis-to-survey time was 7.1 months (range 1-168 months). Fifty-six patients (86%) used CAM for cancer treatment. The two commonest categories were spiritual practices (48%) and traditional Chinese medicine (TCM) (37%). Significant factors in TCM use were being male (P = 0.007) and having advanced disease (P = 0.045). Overall 60% of patients using herbal treatment and 97% of patients using spiritual practices expected a cure, a longer life, symptomatic relief, improved immunity or a better quality of life. Satisfaction with western treatment correlated positively with satisfaction with CAM (Spearman's rank correlation coefficient = 0.4). Forty-six patients (71%) did not discuss their CAM use with their oncologists and 64% obtained advice from their friends or families. Fourteen patients refused previous western treatments (11 feared its side effects (79%), five preferred CAM (36%)).
Conclusion: This study highlights the prevalence of CAM practices among Asian radiotherapy patients, their high expectations of the outcome and the need for better doctor-patient communication.
abstract_id: PUBMED:28127673
Opportunities and Challenges in Precision Medicine: Improving Cancer Prevention and Treatment for Asian Americans. Cancer is the leading cause of death among Asian Americans, and cancer cases among Asian Americans, Pacific Islanders, and Native Americans are expected to rise by 132% by 2050. Yet, little is known about biologic and environmental factors that contribute to these higher rates of disease in this population. Precision medicine has the potential to contribute to a more comprehensive understanding of morbidity and mortality trends among Asian American subgroups and to reduce cancer-related health disparities by recognizing patients as individuals with unique genetic, environmental, and lifestyle characteristics; identifying ways in which these differences impact cancer expression; and developing tailored disease prevention and clinical treatment strategies to address them. Yet, substantial barriers to the recruitment and retention of Asian Americans in cancer research persist, threatening the success of precision medicine research in addressing these knowledge gaps. This commentary outlines the major challenges to recruiting and retaining Asian Americans in cancer trials, suggests ways of surmounting them, and offers recommendations to ensure that personalized medicine becomes a reality for all Americans.
abstract_id: PUBMED:31495823
Text and Practice in East Asian Medicine: The Structure of East Asian Medical Knowledge Examined by Donguibogam Currents in Contemporary South Korea. How do classical texts, such as Hwangdi Neijing and Shanghanlun, continuously play significant roles in medical practices in the history of East Asian medicine? Although this is a significant question in interpreting the position of written texts in the medical history and even for understanding the structure of East Asian medical knowledge, it has been conspicuously underexamined in the studies of East Asian medicine. In order to explore this underrepresented question, this study focuses on currents of tradition in contemporary South Korea. Drawing on anthropological fieldwork at three Donguibogam (Treasured Mirror of Eastern Medicine) currents, it delves into the interaction of text and practice in East Asian medicine. Even though all three currents (Hyun-dong, Byeong-in, Hyung-sang) are based on Donguibogam, their ways of reading the text and organizing clinical practices are diverse. Each current sets up a keyword, such as pulse diagnosis, cause of disease, and appearance-image, and attempts to penetrate the entire Donguibogam through the keyword. This means that the classical medical text is open to plural approaches. This study found that there is a visible gap between a medical text and the reader of the text in East Asia. Masters and currents of tradition are the actors who fill up the gap, continuously interpreting and reinterpreting classical texts, and guiding medical practices of new readers. Adding the history of practice to the body of literature that have focused on the history of written texts, this study will contribute to the history of East Asian medicine.
abstract_id: PUBMED:14724341
Asian traditional medicine (ATM) based on in vivo microcirculation evidence. Asian traditional medicine (ATM) (herbal medicine, acupuncture or moxibustion) has gained some popularity among communities in Asia, but there is still little evidence to prove the effectiveness of such therapeutic measures. A symposium was held focusing on the effectiveness of Asian traditional therapies from the viewpoint of in vivo microcirculation. This report is concerned with that symposium, including Asian activities related to ATM.
abstract_id: PUBMED:25992390
Patients, consumers, and caregivers: the original data stewards. Introduction: For patients and families, the issue of data quality is personal. When treatment decisions or research conclusions are based on incomplete or inaccurate information-a misdiagnosis, an incomplete medication list, the wrong demographic data-care safety, quality, and outcomes suffer-and patients and families suffer.
Opportunity For Change: Thanks in part to the power of health information technology, individuals have new opportunities to be part of the solution when it comes to assessing and improving data quality. Building on a framework for data quality, numerous opportunities for patients to be part of 'feedback loops' to understand and improve data quality are presented.
Foundation For The Future: As the source of patient-centered data, consumers have important roles to play in reviewing and confirming their own health information and should be engaged partners in efforts to understand and improve the quality of electronic health data.
Answer: Based on the provided abstracts, it appears that patients of Asian medicine are not typically active consumers. The study described in abstract PUBMED:15138372 found that active consumerism among patients of Asian medicine is limited to processes before the beginning of treatment. Patients tend to collect little information about the concepts of Asian medicine and often welcome a paternalistic therapeutic relationship where medical decision-making is centered around the physician. Only two out of the 26 interviewed patients conformed to the notion of active consumerism, suggesting that the majority of patients engage in passive consumerism.
While abstract PUBMED:34623617 does not directly address the consumer behavior of patients, it does indicate that there has been an interest in Asian herbal medicine as a form of treatment and prevention during the COVID-19 outbreak, which could imply some level of active consumerism in seeking alternative treatments due to the pandemic.
Abstract PUBMED:29491665 discusses the rights of patients as consumers from an ancient perspective, indicating that the concept of consumer rights in medicine is not new. However, this does not necessarily reflect current patient behavior in actively seeking and making informed decisions about their healthcare.
Abstract PUBMED:27770300 focuses on the development of diagnostic virtuosity in East Asian medicine and does not provide information on patient consumer behavior.
Abstract PUBMED:35018351 discusses active ageing and the roles of elderly consumers, but it does not specifically address the consumer behavior of patients in Asian medicine.
Abstract PUBMED:21114786 reports on the prevalence of complementary and alternative medicine (CAM) use among Asian radiotherapy patients, indicating a high rate of CAM use. However, a significant number of patients did not discuss their CAM use with their oncologists, which could suggest a lack of active engagement in the therapeutic decision-making process.
Abstract PUBMED:28127673 highlights the challenges in recruiting and retaining Asian Americans in cancer trials, which may reflect barriers to active consumerism in research participation.
Abstract PUBMED:31495823 examines the role of classical texts in East Asian medical practices and the interpretation of medical knowledge, but does not address patient consumer behavior.
Abstract PUBMED:14724341 discusses the effectiveness of Asian traditional therapies from the perspective of in vivo microcirculation, without reference to patient consumer behavior.
Finally, abstract PUBMED:25992390 emphasizes the role of patients, consumers, and caregivers as stewards of their own health data, suggesting that there are opportunities for patients to be more actively involved in their healthcare, although this is not specific to Asian medicine.
In summary, the evidence suggests that patients of Asian medicine are not typically active consumers, with most engaging in passive consumerism and showing limited involvement in therapeutic decision-making (PUBMED:15138372). |
Instruction: Is it possible to increase the survival of the transverse rectus abdominis musculocutaneous flap following previous abdominoplasty using a delay procedure?
Abstracts:
abstract_id: PUBMED:16327608
Is it possible to increase the survival of the transverse rectus abdominis musculocutaneous flap following previous abdominoplasty using a delay procedure? An experimental study in the rat. Background: Although, because of the disruption of perforators, abdominoplasty has been suggested as a major contraindication for patients undergoing autologous breast reconstruction with the transverse rectus abdominis musculocutaneous (TRAM) flap, many researchers encourage the search for a means of improving the survival of the skin paddle of the flap in patients who have undergone previous abdominoplasty. In this study, the effect of the surgical delay phenomenon on the survival of the TRAM flap following abdominoplasty was investigated.
Methods: Thirty adult Wistar rats were used: the control group (n = 6), the short-term group (n = 12), and the long-term group (n = 12). In the control group, a standard superior pedicled TRAM flap was harvested with no abdominoplasty procedure, and the flap was replaced in situ. In all other animals, an abdominoplasty procedure was performed initially. The short-term and long-term groups were divided into two subgroups: the abdominoplasty plus TRAM-only subgroup (n = 6), and the abdominoplasty plus delay plus TRAM subgroup (n = 6). In the short-term group, the experiment was performed 1 month after abdominoplasty, whereas the same surgical procedures were applied 6 months after abdominoplasty in the long-term group.
Results: The short-term abdominoplasty plus TRAM subgroup, the long-term abdominoplasty plus TRAM subgroup, the short-term abdominoplasty plus delay plus TRAM subgroup, the long-term abdominoplasty plus delay plus TRAM subgroup, and the conventional superior pedicled TRAM flap group showed 2.33 +/- 3.01 percent, 13.33 +/- 8.76 percent, 24.17 +/- 13.57 percent, 60 +/- 8.94 percent, and 70.83 +/- 9.70 percent survival rates for the skin paddle, respectively.
Conclusion: The data demonstrate that surgical delay after long-term abdominoplasty can enhance the survival rate of the skin paddle of the TRAM flap.
abstract_id: PUBMED:27085609
Increasing the survival of transverse rectus abdominis musculocutaneous flaps with a Botulinum toxin-A injection: A comparison of surgical and chemical flap delay methods. Background: Botulinum toxin type-A (Bot-A) is a commonly used drug for both cosmetic and therapeutic purposes. The effects of Bot-A on skin and muscle flaps and the related mechanisms have been described previously. In this study, we used a rat transverse rectus abdominis musculocutaneous (TRAM) flap model to examine the effects of Bot-A on the skin island, which is perfused by the rectus abdominis muscle according to the angiosome concept.
Methods: Forty female rats were divided into five groups, including control and sham groups. In the control group, a TRAM flap was raised and sutured back after inserting a silicone sheath underneath the flap. In the sham group, the flap was raised 1 month after injecting saline into the muscle. In the chemical delay group, the flap was raised 1 month after injecting 10 IU of Bot-A. In the surgical delay group, the flap was raised 2 weeks after ligating the cranial epigastric artery. In the surgical and chemical delay group, a Bot-A injection was performed initially, the cranial epigastric artery was ligated after 2 weeks, and a TRAM flap was raised after the first month. In all groups, laser Doppler examination, photographic documentation, and analysis of the flap survival rates were performed. In the histopathological evaluation, diameter measurements of the caudal epigastric vessels, vascular density measurements using CD31 staining, and apoptotic rate estimation using the TUNEL method were performed.
Results: The necrosis ratios, arterial cross-sectional diameters, and microvascular density measurements were significantly superior compared to those of control and sham groups; however, there was no significant difference between the delay groups. There was also no difference in the laser Doppler measurements between the groups and the zones of the TRAM flaps.
Conclusion: An injection of Bot-A increases muscular circulation and flap survival of TRAM flaps in rats.
abstract_id: PUBMED:30706950
Photobiomodulation with polychromatic light increases zone 4 survival of transverse rectus abdominis musculocutaneous flap. Objective: The aim of this study was to evaluate the effect of a relatively novel approach, the application of polychromatic light waves, on flap survival in an experimental musculocutaneous flap model, and to investigate the efficacy of this modality as a delay procedure to increase vascularization of zone 4 of the transverse rectus abdominis musculocutaneous (TRAM) flap.
Methods: Twenty-one Wistar rats were randomized and divided into 3 experimental groups (n = 7 each). In group 1 (control group), after being raised, the TRAM flap was sutured back to its bed without any further intervention. In group 2 (delay group), photobiomodulation (PBM) was applied for 7 days as a delay procedure, before elevation of the flap. In group 3 (PBM group), the TRAM flap was elevated, and PBM was administered immediately after the flap was sutured back to its bed, for therapeutic purposes. PBM was applied at 48-hour intervals from a distance of 10 cm to the whole abdominal wall in both groups 2 and 3 for one week. After 7 days of postoperative follow-up, once the demarcation of necrosis of the skin paddle was obvious, skin flap survival was further evaluated by macroscopic, histological and microangiographic analysis.
Results: The mean percentage of skin flap necrosis was 56.17 ± 23.68 for group 1, 30.92 ± 17.46 for group 2 and 22.73 ± 12.98 for group 3 PBM receiving groups 2 and 3 revealed less necrosis when compared to control group and this difference was statistically significant. Vascularization in zone 4 of PBM applied groups 2 and 3 was higher compared to group 1 (P = 0.001). Acute inflammation in zone 4 of group 1 was significantly higher compared to groups 2 and 3 (P = 0.025). Similarly, evaluation of zone 1 of the flaps reveled more inflammation and less vascularization among the samples of the control group (P = 0.006 and P = 0.007, respectively). Comparison of PBM receiving two groups did not demonstrate further difference in means of vascularization and inflammation density (P = 0.259).
Conclusion: Application of PBM in a polychromatic fashion enhances skin flap survival in an experimental TRAM flap model, both preoperatively as a delay procedure and postoperatively as a therapeutic approach.
abstract_id: PUBMED:35836467
Rectus Abdominis Musculocutaneous Flap With Supercharging for Reconstruction of Extensive Thoracic Defect Due to Deep Sternal Wound Infection: A Case Report. Deep sternal wound infection is a serious postoperative complication of cardiac surgery and often requires flap reconstruction. Herein, we report a case of deep sternal wound infection with an extensive thoracic defect that was successfully treated using a modified technique. This technique, defined as "supercharging," anastomoses the deep inferior epigastric artery and vein of pedicled rectus abdominis musculocutaneous flap to the transverse cervical artery and external jugular vein, respectively. The transverse cervical artery is an easily accessible and reliable recipient vessel. Therefore, we recommend that our technique be used, especially in cases of deep sternal wound infection with extensive thoracic defects.
abstract_id: PUBMED:25991893
Successful pregnancy "during" pedicled transverse rectus abdominis musculocutaneous flap for breast reconstruction with normal vaginal delivery. A transverse rectus abdominis myocutaneous (TRAM) flap is a popular choice for breast reconstruction. Pregnancies in women following a TRAM flap present concerns regarding both safety and the integrity of the abdominal wall. We report a case of a patient who was pregnant during immediate breast reconstruction with pedicled TRAM flap and had a successful spontaneous vaginal delivery. We also conducted a literature review using PubMed on pregnancy post TRAM flap, type of reconstruction, timing of pregnancy after TRAM flap, complication, and mode of delivery, which are summarised in this report. We concluded that patients may have safe pregnancies and normal deliveries following TRAM flap breast reconstruction regardless of the time frame of pregnancy after the procedure. Therefore, TRAM flaps can continue to be a reconstruction option, even in women of childbearing age.
abstract_id: PUBMED:25396186
Comparison of the complications in vertical rectus abdominis musculocutaneous flap with non-reconstructed cases after pelvic exenteration. Background: Perineal reconstruction following pelvic exenteration is a challenging area in plastic surgery. Its advantages include preventing complications by obliterating the pelvic dead space and minimizing the scar by using the previous abdominal incision and a vertical rectus abdominis musculocutaneous (VRAM) flap. However, only a few studies have compared the complications and the outcomes following pelvic exenteration between cases with and without a VRAM flap. In this study, we aimed to compare the complications and the outcomes following pelvic exenteration with or without VRAM flap coverage.
Methods: We retrospectively reviewed the cases of nine patients for whom transpelvic VRAM flaps were created following pelvic exenteration due to pelvic malignancy. The complications and outcomes in these patients were compared with those of another nine patients who did not undergo such reconstruction.
Results: Flap reconstruction was successful in eight cases, with minor complications such as wound infection and dehiscence. In all cases in the reconstructed group (n=9), structural integrity was maintained and major complications including bowel obstruction and infection were prevented by obliterating the pelvic dead space. In contrast, in the control group (n=9), peritonitis and bowel obstruction occurred in 1 case (11%).
Conclusions: Despite the possibility of flap failure and minor complications, a VRAM flap can result in adequate perineal reconstruction to prevent major complications of pelvic exenteration.
abstract_id: PUBMED:34527568
Breast reconstruction using delayed pedicled transverse rectus abdominis muscle flap with supercharging: reports of three cases. Breast reconstruction using a pedicled transverse rectus abdominis muscle (TRAM) flap is a well-established surgical procedure. Although studies suggest that transplanting this flap using a delayed method reduces the risk of partial flap necrosis, challenges persist. Hence, we present three cases of breast reconstruction using a pedicled TRAM flap with both delaying and supercharging. Patient age, excised tissue volume for mastectomy, and follow-up period were as follows: Case 1, 58 years, 429 cm3, 5 months; Case 2, 35 years, 910 cm3, 6 months; and Case 3, 56 years, 489 cm3, 4 months. One patient (Case 2) required a large flap tissue volume to achieve breast symmetry, whereas the other two (Cases 1 and 3) had long, longitudinal scars from previous cesarean sections. In the delayed surgery, the flap was partially elevated with partial dissection and no ligation of the deep inferior epigastric artery and vein (DIEAV). An artificial dermis with a silicone membrane (Teldermis®) was used to prevent adhesion of the rectus abdominal muscles and DIEAV to the surrounding tissue. Supercharging was performed by anastomosis between the ipsilateral DIEAV and internal thoracic AV. Flaps in zones I-III and in half of zone IV for Case 2, and zones I-III for Cases 1 and 3, were transferred; all survived without infection. This method allowed the transfer of a larger tissue volume compared with the conventional pedicled TRAM flap-transfer method. Thus, it may be useful for patients who require a larger tissue volume or for high-risk patients.
abstract_id: PUBMED:38387422
Comparison of complications and functional outcomes following total or subtotal glossectomy with laryngeal preservation using a deep inferior epigastric artery perforator free flap versus a rectus abdominis musculocutaneous free flap. Objective: Wide defects resulting from subtotal or total glossectomy are commonly reconstructed using a bulk flap to maintain oral and speech functions. The flap, including muscle tissue, diminishes with time. This study aimed to compare the surgical outcomes of deep inferior epigastric artery perforator and rectus abdominis musculocutaneous free flap reconstructions after glossectomy with laryngeal preservation.
Methods: Medical records of 13 and 26 patients who underwent deep inferior epigastric artery perforator and rectus abdominis musculocutaneous free flap reconstructions, respectively, from 2014 to 2022 at our institution were reviewed. Patients who underwent middle pharynx resection except for the base of the tongue, mandibular bone resection, and sensory reinnervation were excluded.
Results: The rectus abdominis musculocutaneous groups showed a higher number of lymph node dissection and shorter operative time than the deep inferior epigastric artery perforator groups. No significant differences in postoperative complications or functional oral intake scale scores at 6 months were observed. Volumetric changes on computed tomography images at 6 and 12 months were significantly lower in the deep inferior epigastric artery perforator group. Cancer recurrence was significantly associated with reduced oral function.
Conclusions: Oral function in patients with cancer is influenced by various other factors. However, the deep inferior epigastric artery perforator flap may be suitable for tongue reconstruction because of the minimal postoperative changes in flap volume, easy adjustment of flap thickness, elevation of multiple flaps, and minimal complications at the donor site.
abstract_id: PUBMED:34028299
Muscle-sparing transverse rectus abdominis musculocutaneous free flap breast reconstruction following cryolipolysis. Cryolipolysis refers to the sub-physiological cooling of regional body parts (typically the abdomen) in order to reduce the volume of adipose tissue. It provides a non-invasive alternative to procedures such as liposuction, which have traditionally been considered as relative contraindications for future abdominal free flap-based reconstructions. We describe the first case of a patient undergoing skin-sparing mastectomy and a muscle-sparing transverse rectus abdominis musculocutaneous (msTRAM) free flap breast reconstruction, following cryolipolysis therapy. Pre- and intraoperative radiological investigations and clinical examination showed no obvious adverse effect of cryolipolysis on the flap. The patient recovered well, with no vascular complications noted on follow up.
abstract_id: PUBMED:12567055
Effects of the abdominal midline incision on the survival of the transverse rectus abdominis musculocutaneous flap in a rat model. The transverse rectus abdominis musculocutaneous (TRAM) flap, as the standard for breast reconstruction, can be performed in selected patients with operation scars in the abdomen. It has been questioned why unilateral TRAM flaps raised from scarred abdomens seem to have more blood supply to the periphery. The effect of the abdominal midline incision on the survival of the TRAM flap was studied in a rat model. The survival of the inferiorly based unilateral TRAM flap was compared between a control group and three experimental groups in which an initial abdominal midline incision was made 1, 2, or 4 weeks before TRAM flap elevation. The control group had a mean survival of 47.05 +/- 4.52% of their original flap surface area. The viabilities of the other groups were 72.19 +/- 11.05, 76.96 +/- 7.79, 90.26 +/- 3.67%, respectively. Results from all three groups were significantly higher than that of the control group. In the microangiogram of the TRAM flap, the main pedicle (deep epigastric system) and muscle perforators remained unchanged, but the subdermal plexus of the abdominal wall showed an increase in size and density after incision. Histological examination also revealed corresponding findings. A midline incision in the abdomen made 1 or more weeks before unilateral TRAM flap formation was found to improve the survival of the flap. These data may be explained by an increase in the size and density of the subdermal plexus, which delivers blood flow from the muscle perforators to the expanded area of the abdominal wall. Therefore, the process of making the abdominal midline incision can be regarded as a minimal form of a prefabrication procedure, which is one of the well-known delay procedures.
Answer: Yes, it is possible to increase the survival of the transverse rectus abdominis musculocutaneous (TRAM) flap following previous abdominoplasty using a delay procedure. An experimental study in rats demonstrated that surgical delay after long-term abdominoplasty can enhance the survival rate of the skin paddle of the TRAM flap. Specifically, the study found that the survival rates for the skin paddle were significantly higher in the long-term abdominoplasty plus delay plus TRAM subgroup (60%) compared to the long-term abdominoplasty plus TRAM subgroup (13.33%) (PUBMED:16327608). This suggests that implementing a delay procedure can improve the viability of the TRAM flap in cases where abdominoplasty has previously disrupted the perforators that supply the flap. |
Instruction: Does caregiver well-being predict stroke survivor depressive symptoms?
Abstracts:
abstract_id: PUBMED:23340070
Does caregiver well-being predict stroke survivor depressive symptoms? A mediation analysis. Background And Purpose: Studies suggest that family caregiver well-being (ie, depressive symptoms and life satisfaction) may affect stroke survivor depressive symptoms. We used mediation analysis to assess whether caregiver well-being might be a factor explaining stroke survivor depressive symptoms, after controlling for demographic factors and stroke survivor impairments and problems.
Methods: Caregiver/stroke participant dyads (N = 146) completed measures of stroke survivor impairments and problems and depressive symptoms and caregiver depressive symptoms and life satisfaction. Mediation analysis was used to examine whether caregiver well-being mediated the relationship between stroke survivor impairments and problems and stroke survivor depressive symptoms.
Results: As expected, more stroke survivor problems and impairments were associated with higher levels of stroke survivor depressive symptoms (P < .0001). After controlling for demographic factors, we found that this relationship was partially mediated by caregiver life satisfaction (29.29%) and caregiver depressive symptoms (32.95%). Although these measures combined to account for 40.50% of the relationship between survivor problems and impairments and depressive symptoms, the direct effect remained significant.
Conclusions: Findings indicate that stroke survivor impairments and problems may affect family caregivers and stroke survivors and a high level of caregiver distress may result in poorer outcomes for stroke survivors. Results highlight the likely importance of intervening with both stroke survivors and family caregivers to optimize recovery after stroke.
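The percentages mediated reported in the Results come from comparing the total effect of survivor impairments on survivor depressive symptoms with the direct effect that remains once caregiver well-being is added to the model. A bare-bones sketch of that logic follows (illustrative only: the dyad file and column names are hypothetical, the published analysis also adjusted for demographics, and indirect effects are normally given bootstrapped confidence intervals):

# Simple mediation decomposition: how much of the effect of survivor impairments
# on survivor depressive symptoms runs through caregiver well-being?
# Hypothetical column names; demographic covariates are omitted for brevity.
import pandas as pd
import statsmodels.formula.api as smf

dyads = pd.read_csv("stroke_dyads.csv")  # one row per survivor-caregiver dyad

total = smf.ols("survivor_depression ~ survivor_impairments", data=dyads).fit()
direct = smf.ols(
    "survivor_depression ~ survivor_impairments"
    " + caregiver_depression + caregiver_life_satisfaction",
    data=dyads,
).fit()

c = total.params["survivor_impairments"]         # total effect
c_prime = direct.params["survivor_impairments"]  # direct effect after mediators
proportion_mediated = (c - c_prime) / c          # share carried by the mediators

print(f"total effect {c:.3f}, direct effect {c_prime:.3f}, "
      f"proportion mediated {proportion_mediated:.1%}")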
abstract_id: PUBMED:30207873
Association between incongruence about survivor function and outcomes among stroke survivors and family caregivers. Background: Stroke survivors and family caregivers often have incongruent appraisals of survivor cognitive, physical, and psychosocial function. Partner incongruence contributes to poor outcomes for survivors and caregivers.
Objectives: This study explored whether partner incongruence: (1) differs by function domain; (2) increases or decreases over time, and; (3) is associated with self-rated health, distress, stress, and depressive symptoms.
Methods: Structured surveys were administered to 32 survivors and caregivers at approximately 3 (enrollment) and 7 months (follow-up) post-stroke. Paired t-tests were used to examine partners' ratings of survivor function at enrollment and follow-up, and changes in incongruence over time. Partial correlations were used to examine the association between incongruence at enrollment and outcomes at follow-up.
Results: Survivors consistently rated their own memory and thinking as significantly better than caregivers rated their memory and thinking. At follow-up, survivors rated their own communication as significantly better than caregivers rated their communication. Incongruence about survivor memory and thinking was associated with survivor distress, as well as caregiver distress, stress, and depressive symptoms. Incongruence about survivor ADLs was associated with caregiver stress and depressive symptoms. Incongruence about survivor social participation was associated with caregiver distress.
Conclusions: Findings from this study suggest that survivors and caregivers often have incongruent appraisals of survivor function, that incongruence does not improve naturally over time, and that incongruence may be detrimental for survivor and caregiver outcomes. Further research should be directed at the mitigation of incongruence and strategies to improve outcomes for both survivors and family caregivers.
abstract_id: PUBMED:31303131
Associations between characteristics of stroke survivors and caregiver depressive symptoms: a critical review. Background: Poststroke depression is common in stroke survivors. Evidence suggests that caregivers of stroke survivors also experience depression, at rates similar to survivors (30-40%). While much research has focused on developing better understanding of poststroke depression in stroke survivors, stroke caregiver depression has received less attention. Available research suggests that characteristics of the survivor such as age, gender, relation to caregiver, mental health, and physical or cognitive deficits correlate with and may be contributing factors for caregiver depression. Knowledge of risk factors for stroke caregiver depression could translate to better screening, management, and prevention, but further investigation is needed. Objectives: To examine the existing literature and synthesize evidence surrounding survivor characteristics and their association with poststroke depressive symptoms in caregivers. Methods: Medline, PsychInfo, and CINAHL databases were searched with variations of keywords: "stroke," "caregiver" and "depression." Studies analyzing associations between at least one stroke survivor characteristic and caregiver depressive symptoms were included. Results: Seventeen studies met eligibility criteria. They analyzed a wide range of survivor characteristics. Many survivor characteristics lacked convincing evidence of an association with caregiver depressive symptoms. However, a trend emerged supporting an association between survivor depressive symptoms and caregiver depressive symptoms. Conclusions: Health-care providers should be aware that depressive symptoms in one member of a stroke survivor-caregiver dyad may indicate risk for depressive symptoms in the other. Screening both individuals may lead to earlier detection and provide information to guide interventions. Knowing risk factors for stroke caregiver depression may improve prevention/management, but further investigation is needed.
abstract_id: PUBMED:26061711
Implications of stroke for caregiver outcomes: findings from the ASPIRE-S study. Background: Informal caregivers are vital to the long-term care and rehabilitation of stroke survivors worldwide. However, caregiving has been associated with negative psychological outcomes such as anxiety and depression, which leads to concerns about caregiver as well as stroke survivor well-being. Furthermore, caregivers may not receive the support and service provision they require from the hospitals and community.
Aims: This study examines caregiver psychological well-being and satisfaction with service provision in the context of stroke.
Methods: Caregiver data were collected as part of the ASPIRE-S study, a prospective study of secondary prevention and rehabilitation which assessed stroke patients and their carers at six months post-stroke. Carer assessment included measurement of demographics, satisfaction with care (UK Healthcare Commission National Patient Survey of Stroke Care), psychological distress (Hospital Anxiety and Depression Scale), and vulnerability (Vulnerable Elders Scale). Logistic regression analyses and chi-squared tests were performed using Stata version 12.
Results: Analyses from 162 carers showed substantial levels of dissatisfaction (37.9%) with community and hospital services, as well as notable levels of anxiety (31.3%) and depressive symptoms (18.8%) among caregivers. Caregiver anxiety was predicted by stroke survivor anxiety (OR = 3.47, 95% CI 1.35-8.93), depression (OR = 5.17, 95% CI 1.83-14.58), and stroke survivor cognitive impairment (OR = 2.35, 95% CI 1.00-5.31). Caregiver depression was predicted by stroke survivor anxiety (OR = 4.41, 95% CI 1.53-12.72) and stroke survivor depression (OR = 6.91, 95% CI 2.26-21.17).
Conclusion: Findings indicate that caregiver and stroke survivor well-being are interdependent. Thus, early interventions, including increased training and support programs that include caregivers, are likely to reduce the risk of negative emotional outcomes.
abstract_id: PUBMED:36257927
Associations among disability, depression, anxiety, stress, and quality of life between stroke survivors and their family caregivers: An Actor-Partner Interdependence Model. Aim: To explore the effects of disability, depressive, anxiety and stress symptoms on patients' and their partners' quality of life (QoL) using the actor-partner interdependence model (APIM).
Design: A cross-sectional study using actor-partner interdependence model.
Methods: We recruited 183 dyads of stroke survivors and their family caregivers in Indonesia. The World Health Organization Disability Assessment Schedule (WHODAS 2.0), the Depression, Anxiety and Stress Scales (DASS-42) and the RAND Short Form Health Survey (SF-36) were used to measure disability, depressive, anxiety and stress symptoms, and QoL of stroke survivors and family caregivers. The actor-partner interdependence model was tested using multilevel modelling. The actor-partner interdependence mediation model (APIMeM) was applied to estimate the direct and indirect effects.
Results: Disability had an actor effect on stroke survivors' overall QoL and a partner effect on family caregivers' overall QoL: more severe disability in stroke survivors was associated with lower overall QoL for the survivors themselves and for their family caregivers. Depressive symptoms of stroke survivors had actor effects on stroke survivors' overall QoL and partner effects on family caregivers' overall QoL. Family caregivers' depressive symptoms likewise showed actor effects on their own overall QoL and partner effects on stroke survivors' overall QoL. Moreover, higher anxiety symptoms were associated with lower overall QoL for both the individual and their partner, in stroke survivors and family caregivers alike. Stroke survivors' stress symptoms were also negatively associated with their own and their family caregivers' overall QoL, whereas family caregivers' stress showed no partner effect on stroke survivors' overall QoL. The APIMeM analysis showed that disability of stroke survivors directly decreased their own overall, physical (PCS), and mental (MCS) QoL; disability also decreased both stroke survivors' and family caregivers' physical (PCS) and mental (MCS) QoL indirectly, mediated by the survivors' depression, anxiety, and stress symptoms.
Conclusion: The findings suggest that stroke survivors and family caregivers may influence each other during the caregiving process and in social life. The disability of stroke survivors, and the depression, anxiety and stress symptoms of stroke survivors and family caregivers, affect their own QoL and their partners' QoL. Disability of stroke survivors directly decreased their own overall, physical (PCS) and mental (MCS) QoL; it also decreased both stroke survivors' and family caregivers' physical (PCS) and mental (MCS) QoL indirectly, via the stroke survivors' depression, anxiety and stress symptoms.
Impact: Dyadic actor-partner interdependence models have shown promising potential to predict QoL among patients and family caregivers. The dyadic effects of disability, depression, anxiety and stress symptoms on the QoL of stroke survivors and family caregivers can be applied to guide the future development of nursing interventions aimed at decreasing depression, anxiety and stress symptoms to optimize health outcomes among stroke survivors and their family caregivers.
abstract_id: PUBMED:32740226
Moderator Role of Mutuality on the Association Between Depression and Quality of Life in Stroke Survivor-Caregiver Dyads. Background: Authors of previous research have not yet analyzed the role of potential moderators in the relationship between depressive symptoms and quality of life (QOL).
Aims: The aim of this study was to examine the moderating effect of mutuality between depressive symptoms and QOL in stroke survivor and caregiver dyads.
Methods: This study used a longitudinal design with 222 stroke survivor-caregiver dyads enrolled at survivor discharge from rehabilitation hospitals. Data collection was performed for 12 months. We examined survivor and caregiver QOL dimensions (physical, psychological, social, and environmental), depression, and mutuality at baseline and every 3 months. Hierarchical linear modeling was used to test 4 longitudinal dyadic moderation models (1 for each QOL domain).
Results: Survivors (50% male) and caregivers (65% female) were 70.8 (SD, 11.9) and 52.5 (SD, 13.1) years old, respectively. We observed no significant moderating effects of mutuality for survivors across the 4 dimensions of QOL over time. However, higher survivor mutuality was significantly associated with higher survivor psychological and social QOL at baseline. Regarding caregivers, caregiver mutuality significantly moderated the association between caregiver depressive symptoms and caregiver physical (B = 0.63, P < .05), psychological (B = 0.63, P < .01), and social (B = 0.95, P < .001) QOL at baseline, but not in environmental QOL. Higher caregiver mutuality was significantly associated with less improvement in caregiver physical QOL over time.
Conclusions: Mutuality is a positive variable on the association between depression and QOL for both members of the dyad at discharge but may lead to declines in physical health for caregivers over time. Further work is needed to understand the role of mutuality on long-term outcomes and associations with increased care strain.
abstract_id: PUBMED:36453001
Actor-partner effects of wellbeing, hope and self-esteem on depression in stroke survivor-caregiver dyads: A randomized controlled trial. Background: Stroke is a disabling, long-term condition that challenges the mental and physical health of stroke survivors concurrently with their primary family caregivers (the dyad). However, there has been a lack of emphasis on this dyadic need. Thus, this study aims to investigate the impacts of two interventions on hope, self-esteem and hedonic wellbeing, and in turn on depression, among stroke survivor-caregiver dyads.
Methods: This randomized controlled trial applied the actor-partner interdependence model to 100 randomly selected dyads (N = 200) of stroke survivors (mean [SD] age 73.63 [7.22] years) and family caregivers (mean [SD] age 62.49 [14.44] years) recruited from Hong Kong hospitals and rehabilitation centres. The intervention was eight weekly two-hour narrative therapy group sessions (n = 54 dyads), compared with the current model of a psychoeducational group offered to each dyad as needed. Outcomes were collected via questionnaires and interviews at four time points: baseline (T1), during the intervention (T2; 1 month), immediately post-intervention (T3; 2 months) and follow-up (T4; 6 months).
Results: The results demonstrated actor effects for stroke survivors (β = -0.353, p < 0.05) and caregivers (β = -0.383, p < 0.05), whereby higher levels of hedonic wellbeing were associated with fewer depressive symptoms. Partner effects were also observed: caregivers' depressive symptoms had a significant negative relationship with stroke survivors' wellbeing (β = -0.387, p < 0.05). Stroke survivors in the intervention group had a significantly higher level of self-esteem, which was associated with lower levels of depression (β = -0.314, p < 0.05).
Conclusions: Improving hope, self-esteem and wellbeing through narrative therapy significantly mediates depressive symptoms, strengthening the dyadic support of stroke survivors and family caregivers.
abstract_id: PUBMED:33156591
Caregiver Burden and Associated Factors Among Informal Caregivers of Stroke Survivors. Background: Informal caregiving for stroke survivors often begins with high intensity, in contrast to the more linear caregiving trajectories seen in progressive conditions. Informal caregivers of stroke survivors are often inadequately prepared for their caregiving role, which can have detrimental effects on their well-being. A greater depth of understanding of caregiving burden is needed to identify the caregivers most in need of intervention. The purpose of this study was to examine caregiver burden and associated factors among a cohort of informal caregivers of stroke survivors.
Methods: A cross-sectional study of 88 informal caregivers of stroke survivors was completed. Caregiver burden was determined with the Zarit Burden Interview, caregiver depressive symptoms were measured with the Patient Health Questionnaire-9, and stroke survivor functional disability was assessed with the Barthel Index. Ordinal logistic regression was used to identify independent factors associated with caregiver burden.
Results: Forty-three informal caregivers (49%) reported minimal or no caregiver burden, 30 (34%) reported mild to moderate caregiver burden, and 15 (17%) reported moderate to severe caregiver burden. Stroke survivor functional disability was associated with informal caregiver burden (P = .0387). The odds of having mild to moderate caregiver burden were 3.7 times higher for informal caregivers of stroke survivors with moderate to severe functional disability than for caregivers of stroke survivors with no functional disability. The presence of caregiver depressive symptoms was highly correlated with caregiver burden (P < .001).
Conclusion: Caregivers of stroke survivors with functional disabilities and caregivers experiencing depressive symptoms may have more severe caregiver burden. Trials of interventions aimed at decreasing informal caregiver burden should consider the potential impact of stroke survivors' functional disability and the presence of depressive symptoms.
abstract_id: PUBMED:35670198
The moderating role of caregiver preparedness on the relationship between depression and stroke-specific quality of life in stroke dyads: a longitudinal study. Aims: To examine the moderating role of caregiver preparedness on the association between stroke survivors' depression and stroke-specific quality of life dimensions.
Methods And Results: We used a multilevel modelling approach to analyse trajectories of change in the eight Stroke Impact Scale 3.0 subscales [i.e. strength, communication, mobility, activities of daily living (ADL)/instrumental activities of daily living (IADL), memory, emotion, hand function, participation] using Hierarchical Linear Modeling. Caregiver preparedness significantly moderated the association between survivor depressive symptoms and survivor communication (B = -0.95, P < 0.01), mobility (B = -0.60, P < 0.05), and ADL/IADL (B = -0.73, P < 0.01) at baseline; linear change for strength (B = 0.83, P < 0.05) and communication (B = 0.66, P < 0.05); and quadratic change for strength (B = -0.19, P < 0.01). Although caregiver preparedness did not significantly moderate the association between survivor depressive symptoms and strength at baseline, there was a significant moderating effect for change over time. Higher levels of caregiver preparedness were significantly associated with higher survivor scores of emotion, hand function, and participation at baseline.
Conclusions: Including immediate caregivers in the care process, through psycho-educational training, would mean having better-prepared caregivers and, consequently, healthier stroke survivors. Given that preparedness includes coping with stress, responding to and managing emergencies, and assessing help and information, tailored interventions may be required to improve caregivers' skills and knowledge about the management of stroke survivors.
abstract_id: PUBMED:22367269
Stroke survivor and informal caregiver perceptions of poststroke depressive symptoms. Poststroke depression is common but remains underdiagnosed and undertreated. One explanation for this is that depressive symptoms go unrecognized by stroke survivors and their family caregivers and thus go unreported to the stroke survivor's healthcare provider. Forty-four stroke survivor-informal caregiver dyads were interviewed using a depression knowledge scale, the Center for Epidemiologic Studies Depression Scale, and an adapted version of the Revised Illness Perception Questionnaire to determine their lay knowledge of depressive symptoms, recognition of poststroke depressive symptoms, and perception of the recognized symptoms. Caregivers and stroke survivors had moderate knowledge of depressive symptoms but had some misconceptions. The most commonly recognized poststroke depressive symptoms by stroke survivors were "feeling like everything was an effort," "restless sleep," "difficulty concentrating," and "talking less than usual." Caregivers identified that the stroke survivor was "feeling like everything was an effort," had "restless sleep," "felt sad," and "felt depressed." Three quarters of stroke survivors with high levels of depressive symptoms identified the cluster of symptoms as potentially being depression. Neuroscience nurses are in the unique position to educate stroke survivors and their caregivers about poststroke depressive symptoms, treatment options, and the importance of discussing depressive symptoms with their healthcare provider.
Answer: Yes, caregiver well-being does predict stroke survivor depressive symptoms. Studies have shown that the well-being of family caregivers, including their depressive symptoms and life satisfaction, can affect the depressive symptoms experienced by stroke survivors. A mediation analysis indicated that caregiver well-being partially mediated the relationship between stroke survivor impairments and problems and stroke survivor depressive symptoms. Specifically, caregiver life satisfaction accounted for 29.29% and caregiver depressive symptoms accounted for 32.95% of the relationship between survivor problems and impairments and depressive symptoms (PUBMED:23340070).
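As a rough illustration of how such "accounted for" percentages are conventionally derived in mediation analysis (the cited study's exact computation is not shown in the abstract, and the path labels a, b and c below are illustrative, not taken from the source): with a the effect of survivor problems/impairments on caregiver well-being, b the effect of caregiver well-being on survivor depressive symptoms, and c the total effect of survivor problems/impairments on survivor depressive symptoms, the proportion mediated is typically estimated as
proportion mediated = indirect effect / total effect = (a × b) / c
so a value of about 0.29 corresponds to the reported 29.29%.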
Furthermore, incongruence in appraisals of survivor function between survivors and caregivers has been associated with negative outcomes for both parties. Incongruence about survivor memory and thinking was linked to distress and depressive symptoms in both survivors and caregivers. Incongruence about survivor activities of daily living (ADLs) and social participation was associated with caregiver stress and depressive symptoms (PUBMED:30207873).
Additionally, characteristics of stroke survivors, such as depressive symptoms, have been found to correlate with caregiver depressive symptoms. This suggests that depressive symptoms in one member of a stroke survivor-caregiver dyad may indicate a risk for depressive symptoms in the other, highlighting the importance of screening both individuals (PUBMED:31303131).
Caregiver psychological well-being has also been shown to be interdependent with stroke survivor well-being. Caregiver anxiety and depression were predicted by stroke survivor anxiety, depression, and cognitive impairment (PUBMED:26061711).
Moreover, the disability of stroke survivors and the depression, anxiety, and stress symptoms of both stroke survivors and family caregivers have been found to affect their own quality of life (QoL) as well as their partners' QoL. The disability of stroke survivors directly decreased their own overall, physical, and mental QoL, and also indirectly decreased both their own and their family caregivers' QoL through mediation by depression, anxiety, and stress symptoms (PUBMED:36257927).
In summary, caregiver well-being is a significant predictor of stroke survivor depressive symptoms, and interventions aimed at improving the well-being of both stroke survivors and their caregivers are likely to be beneficial for optimizing recovery and quality of life post-stroke. |
Instruction: Is bone turnover of jawbone and its possible over suppression by bisphosphonates of etiologic importance in pathogenesis of bisphosphonate-related osteonecrosis?
Abstracts:
abstract_id: PUBMED:24485975
Is bone turnover of jawbone and its possible over suppression by bisphosphonates of etiologic importance in pathogenesis of bisphosphonate-related osteonecrosis? Purpose: The pathogenesis of bisphosphonate-related osteonecrosis of the jaw (BRONJ) is not completely understood. The most popular hypothesis has suggested that the bone turnover (BT) in the jawbone is greater than that in other sites and that this turnover will be overly suppressed by bisphosphonates. Using bone scintigraphy, a simple tool for the quantitative evaluation of bone metabolism and blood flow, the goals of the present study were to determine whether the rate of bone remodeling is greater in the jaw and whether the bone BT in the jaw is differentially altered after bisphosphonate intake compared with that in other skeletal sites.
Materials And Methods: The bone scintigraphies of 90 female patients with breast cancer were retrospectively analyzed (n = 45 with bisphosphonate intake; n = 45 without bisphosphonate intake [control group]). All patients in the study group had undergone bone scintigraphy before therapy and during the treatment (course after 12 and 24 months). The data were quantitatively analyzed using 6 predetermined regions of interest.
Results: The bone BT of the mandible was similar to that of the femur and significantly reduced compared with that of the maxilla (P < .01). None of the investigated bone regions (including the mandible and maxilla) were significantly altered after bisphosphonate administration (P > .05).
Conclusions: The finding that the mandible had significantly lower bone BT than that of the maxilla and that two thirds of BRONJ cases occur in the mandible were inconsistent with the investigated hypothesis. Furthermore, the bone BT in the jawbone was not overly suppressed by bisphosphonates. Thus, it is unlikely that over suppression of bone BT is the exclusive causation playing a role in the pathomechanism of BRONJ.
abstract_id: PUBMED:35199244
Impacts of bisphosphonates on the bone and its surrounding tissues: mechanistic insights into medication-related osteonecrosis of the jaw. Bisphosphonates are widely used as anti-resorptive agents for the treatment of various bone and joint diseases, including advanced osteoporosis, multiple myeloma, bone metastatic cancers, Paget's disease of bone, and rheumatoid arthritis. Bisphosphonates act as an anti-osteoclast via the induction of osteoclast apoptosis, resulting in a decreased rate of bone resorption. Unfortunately, there is much evidence to demonstrate that the long-term use of bisphosphonates is associated with osteonecrosis. The pathogenesis of osteonecrosis includes the death of osteoblasts, osteoclasts, and osteocytes. In addition, the functions of endothelial cells, epithelial cells, and fibroblasts are impaired in osteonecrosis, leading to disruptive angiogenesis, and delayed wound healing. Osteonecrosis is most commonly found in the jawbone and the term medication-related osteonecrosis of the jaw (MRONJ) has become the condition of greatest clinical concern among patients receiving bisphosphonates. Although surgical treatment is an effective strategy for the treatment of MRONJ, several non-surgical interventions for the attenuation of MRONJ have also been investigated. With the aim of increasing understanding around MRONJ, we set out to summarize and discuss the holistic effects of bisphosphonates on the bone and its surrounding tissues. In addition, non-surgical interventions for the attenuation of bisphosphonate-induced osteonecrosis were reviewed and discussed.
abstract_id: PUBMED:24794267
The role of bisphosphonates in medical oncology and their association with jaw bone necrosis. Bisphosphonates, synthetic analogues of inorganic pyrophosphates found in the bone matrix, inhibit bone resorption. The effects of bisphosphonates on the jaw have been recognized since 2001. The pathogenesis of bisphosphonate-related osteonecrosis of the jaw (BRONJ) is multifactorial and still under investigation. Currently, drugs with mechanisms of action involving remodeling suppression, osteoclast depression, and decreased angiogenesis are under investigation for causing BRONJ-like symptoms. Further studies are needed to determine the effective length of use of bisphosphonates and the efficacy of drug holidays to prevent BRONJ.
abstract_id: PUBMED:24582013
Effect of antiresorptive drugs on bony turnover in the jaw: denosumab compared with bisphosphonates. Osteonecrosis of the jaw as a result of treatment with receptor activators of nuclear factor kappa-B ligand (RANKL) inhibitors (denosumab) is a new type of bony necrosis, the exact pathogenesis of which is unknown. Our aim was to find out whether the turnover of bone in the jaw is increased after denosumab has been given compared with other skeletal sites, and if that turnover might have a role in denosumab-related osteonecrosis of the jaw (DRONJ). Bone scintigraphic images of 45 female patients with breast cancer and bone metastases were analysed retrospectively, and divided into 3 groups: those given denosumab, those given a bisphosphonate, and a control group (n=15 in each). All patients had bone scintigraphy before treatment (T0) and during the course of treatment after 12 (T1) and 24 (T2) months. The data were analysed quantitatively using 6 preset bony regions of interest. There was similar turnover of bone in the mandible compared with other skeletal sites (such as the femur), while the maxilla showed significantly higher turnover. None of the bony regions investigated showed any significant changes after the bisphosphonate had been given. There was a tendency to increase bone turnover in those patients taking denosumab. The bone turnover of the jawbone is not overtly changed either by a bisphosphonate or denosumab, so it seems unlikely that oversuppression of bony turnover in the jawbones plays an important part either in the pathogenesis of DRONJ or in the bisphosphonate-related osteonecrosis of the jaw (BRONJ).
abstract_id: PUBMED:28326724
Research progress on bisphosphonate-related osteonecrosis of the jaws. Bisphosphonates (BPs), as potent drugs inhibiting bone resorption, have been widely used for the treatment of several diseases. In recent years, dentists and oral and maxillofacial surgeons have reported continuously increasing numbers of cases of bisphosphonate-related osteonecrosis of the jaws (BRONJ). This disease is clinically characterized by exposed bone, sequestrum formation, pain, and halitosis. Given that the pathogenesis of BRONJ is unclear, effective treatments for this disease are currently unavailable. Thus, prevention plays an important role in the management of BRONJ. This review summarizes research progress on the pathogenesis, risk factors, clinical characteristics, treatment, and prevention of this condition.
abstract_id: PUBMED:31447218
Can children be affected by bisphosphonate-related osteonecrosis of the jaw? A systematic review. Knowledge of bisphosphonate-related osteonecrosis of the jaw (BRONJ) is mostly based on adult cases; however, bisphosphonates are also currently recommended for different paediatric diseases resulting in osteoporosis. The aim of this study was to review the literature on the risk of developing BRONJ in children and adolescents. The PubMed, LILACS, Web of Science, Scopus, and Cochrane databases were searched using the key words "bisphosphonates", "osteonecrosis", "jaw", and "children". Literature reviews, case reports, abstracts, theses, textbooks, and book chapters were excluded. Studies involving children and young adults (younger than 24 years of age) were included. A total of 56 publications were identified. After applying the eligibility criteria, only seven articles remained. Although no cases of osteonecrosis were identified, all studies had weaknesses such as a limited sample size or the absence of risk factors for the development of osteonecrosis. There is general consensus that this subject should be of concern and that further studies should be conducted before any definitive opinion is reached. It is believed that patients with secondary osteoporosis who use bisphosphonates continuously should be followed up during adulthood, since bone turnover decreases over the years.
abstract_id: PUBMED:24480759
Commentary--Is the bone turnover of the jawbone and its possible over suppression by bisphosphonates of etiological importance for the pathogenesis of the bisphosphonate-related osteonecrosis? N/A
abstract_id: PUBMED:20138420
Bisphosphonate-related osteonecrosis of the jaw: is pH the missing part in the pathogenesis puzzle? Bisphosphonate-related osteonecrosis of the jaw (BRONJ) is a side effect of bisphosphonate therapy, primarily diagnosed in patients with cancer and metastatic bone disease receiving intravenous administrations of nitrogen-containing bisphosphonates. If diagnosis or treatment is delayed, BRONJ can develop into a severe and devastating disease. Numerous studies have focused on BRONJ, with possible pathomechanisms identified as oversuppression of bone turnover, ischemia due to antiangiogenic effects, local infections, or soft tissue toxicity. However, the precise pathogenesis largely remains elusive, and questions of paramount importance remain to be answered, namely: 1) Why is only the jaw bone affected? 2) Why and how do the derivatives differ in their potency to induce BRONJ? and 3) Why and when is BRONJ manifested? The present perspective reflects on existing theories and introduces the hypothesis that local tissue acidosis in the jaw bone offers a conclusive pathogenesis model and may prove to be the missing link in BRONJ.
abstract_id: PUBMED:24001573
Bisphosphonate-related osteonecrosis of the jaw. The bisphosphonates are stable inorganic pyrophosphate analogs that have demonstrated efficacy in the treatment of osteolytic lesions associated with bony metastases and multiple myeloma, as well as in malignant hypercalcemia, Paget's disease, and osteoporosis. Several publications within the last few years have suggested that osteonecrosis of the jaw is associated with bisphosphonate therapy. The diagnosis and management of patients with bisphosphonate-related osteonecrosis of the jaw are very difficult. It is important for patients to be informed of the risk of this complication, so that they have the opportunity to assess the need for dental treatment before starting therapy. Preventive measures must be taken before, during, and after treatment with bisphosphonates. If osteonecrosis of the jaw is present, management should be conservative: oral chlorhexidine and antibiotics. Surgical treatment should be reserved for those patients who are symptomatic.
abstract_id: PUBMED:35357247
A case of bisphosphonate-related osteonecrosis of the maxilla with orbital cellulitis. Bisphosphonates are used for the treatment of bone metastases of cancer and prevention of osteoporosis. Bisphosphonate-related osteonecrosis of the jaw (BRONJ) has unclear mechanisms, but its elucidation, prevention, and treatment are essential to improve patient outcomes. An 88-year-old woman who was taking oral bisphosphonates for 4 years presented with a 5-day history of gradual orbital pain. Computed tomography (CT) revealed bilateral maxillary sinusitis and bony destruction at the base of the right maxillary sinus and inferior orbital wall. We diagnosed her with bisphosphonate-related osteonecrosis of the maxilla. In addition to intravenous antibiotic treatment, an endoscopic sinus surgery was performed. The patient's postoperative course was uneventful. To the best of our knowledge, this is the first case report of BRONJ with orbital cellulitis. This condition should be considered among patients taking bisphosphonates, who manifest with symptoms of orbital cellulitis and bone destruction on imaging.
Answer: The pathogenesis of bisphosphonate-related osteonecrosis of the jaw (BRONJ) is multifactorial and not fully understood. One hypothesis suggests that the high bone turnover (BT) in the jawbone might be overly suppressed by bisphosphonates, leading to BRONJ. However, studies have shown that the bone turnover of the mandible is similar to that of other skeletal sites like the femur and is significantly lower than that of the maxilla. Furthermore, bone turnover in the jawbone was not found to be significantly altered after bisphosphonate administration, indicating that over suppression of bone turnover is unlikely to be the exclusive cause of BRONJ (PUBMED:24485975).
Bisphosphonates act as anti-osteoclast agents by inducing osteoclast apoptosis, which leads to a decreased rate of bone resorption. The pathogenesis of osteonecrosis involves the death of osteoblasts, osteoclasts, and osteocytes, as well as impaired functions of endothelial cells, epithelial cells, and fibroblasts, leading to disrupted angiogenesis and delayed wound healing (PUBMED:35199244). The exact mechanisms by which bisphosphonates contribute to osteonecrosis in the jaw are still under investigation, and further studies are needed to determine the effective length of use of bisphosphonates and the efficacy of drug holidays to prevent BRONJ (PUBMED:24794267).
Comparative studies with other antiresorptive drugs, such as denosumab, have also shown that the bone turnover of the jawbone is not overtly changed by bisphosphonates or denosumab, suggesting that oversuppression of bone turnover in the jawbones may not play a significant role in the pathogenesis of BRONJ or denosumab-related osteonecrosis of the jaw (DRONJ) (PUBMED:24582013).
In summary, while bisphosphonates are associated with BRONJ, the evidence does not support the hypothesis that over suppression of bone turnover in the jawbone is the primary etiological factor in the pathogenesis of BRONJ. Other factors, such as medication-induced cell death and impaired tissue functions, may also play a role in the development of this condition (PUBMED:35199244; PUBMED:24794267). |
Instruction: Café-au-lait spots and early onset colorectal neoplasia: a variant of HNPCC?
Abstracts:
abstract_id: PUBMED:14574005
Café-au-lait spots and early onset colorectal neoplasia: a variant of HNPCC? Background: Café-au-lait spots (CALS) are classically found in neurocutaneous syndromes such as neurofibromatosis, but have not been associated with hereditary colorectal cancer. However, review of hereditary colorectal cancer case reports reveals occasional description of CALS on physical exam.
Methods: We describe the colonic and extracolonic phenotype in a family with CALS and early onset colorectal neoplasia (adenomas and/or cancer) and review 23 additional families reported in the literature.
Results: Among the 24 families, 32/59 (54.2%) individuals had colorectal adenomas diagnosed at a mean age of 15.7 +/- 1.1 (SE) years (range 5-38 years). The majority (24/32, 75.0%) of persons at first colorectal examination had oligopolyposis (< 100 polyps) versus polyposis (> or = 100 polyps). Forty-two of 59 (71.2%) individuals were affected with colorectal cancer, diagnosed at a mean age of 31.9 +/- 2.7 years (range 5-70 years). A brain tumor was found in 28/59 (47.5%) affected individuals (4 families with 2 or more cases) with an overall mean age of diagnosis of 16.5 +/- 1.2. Lymphoma and/or leukemia was found in 8/24 (33.3%) families (one family with 3 cases). Two families had mutation of the mismatch repair gene, hPMS2 (1 with homozygous germline mutation), while two carried homozygous germline mutations of another mismatch repair gene, hMLH1.
Conclusions: Café-au-lait spots with early onset colorectal neoplasia may identify families with a variant of HNPCC characterized by oligopolyposis, glioblastoma at young age, and lymphoma. This variant may be caused by homozygous mutation of the mismatch repair genes, such as hPMS2 or hMLH1.
abstract_id: PUBMED:22920205
Constitutional mismatch repair-deficiency syndrome (CMMR-D) - a case report of a family with biallelic MSH6 mutation. This work gives comprehensive information about a newly described, recessively inherited syndrome characterized by the development of childhood malignancies. This syndrome, called constitutional mismatch repair-deficiency syndrome (CMMR-D), is caused by biallelic mutations in the genes that, when mutated in the heterozygous state, cause the adult cancer syndrome termed Lynch syndrome (hereditary non-polyposis colorectal cancer, HNPCC). Biallelic germline mutations of the genes MLH1, MSH2, MSH6 and PMS2 in CMMR-D are characterized by an increased risk of hematological malignancies, atypical brain tumors and early-onset colorectal cancers. An accompanying manifestation of the disease is skin spots with diffuse margins and irregular pigmentation, reminiscent of the café-au-lait spots of NF1. This paper reports a family with CMMR-D caused by novel homozygous MSH6 mutations, leading to gliomatosis cerebri and T-ALL in an 11-year-old girl and glioblastoma multiforme in her 10-year-old brother, both with rapid disease progression. A literature review of brain tumors in CMMR-D families shows that they are treatment-resistant and lead to early death. This work therefore highlights the importance of early identification of patients with CMMR-D syndrome, both to initiate a screening program for early detection of malignancies and to allow early surgical intervention.
abstract_id: PUBMED:25883011
Phenotypic and genotypic characterisation of biallelic mismatch repair deficiency (BMMR-D) syndrome. Lynch syndrome, the most common inherited colorectal cancer syndrome in adults, is an autosomal dominant condition caused by heterozygous germ-line mutations in DNA mismatch repair (MMR) genes MLH1, MSH2, MSH6 and PMS2. Inheriting biallelic (homozygous) mutations in any of the MMR genes results in a different clinical syndrome termed biallelic mismatch repair deficiency (BMMR-D) that is characterised by gastrointestinal tumours, skin lesions, brain tumours and haematologic malignancies. This recently described and under-recognised syndrome can present with adenomatous polyps leading to early-onset small bowel and colorectal adenocarcinoma. An important clue in the family history that suggests underlying BMMR-D is consanguinity. Interestingly, pedigrees of BMMR-D patients typically show a paucity of Lynch syndrome cancers and most parents are unaffected. Therefore, a family history of cancers is often non-contributory. Detection of BMMR-D can lead to more appropriate genetic counselling and the implementation of targeted surveillance protocols to achieve earlier tumour detection that will allow surgical resection. This review describes an approach for diagnosis and management of these patients and their families.
abstract_id: PUBMED:18409202
Compound heterozygosity for two MSH6 mutations in a patient with early onset colorectal cancer, vitiligo and systemic lupus erythematosus. Lynch syndrome (hereditary non-polyposis colorectal cancer, HNPCC) is an autosomal dominant condition caused by heterozygous germline mutations in the DNA mismatch repair (MMR) genes MLH1, MSH2, MSH6, or PMS2. Rare cases have been reported of an inherited bi-allelic deficiency of MMR genes, associated with multiple café-au-lait spots, early onset CNS tumors, hematological malignancies, and early onset gastrointestinal neoplasia. We report on a patient with vitiligo in segments of the integument who developed systemic lupus erythematosus (SLE) at the age of 16, and four synchronous colorectal cancers at age 17 years. Examination of the colorectal cancer tissue showed high microsatellite instability (MSI-H) and an exclusive loss of expression of the MSH6 protein. Immunohistochemical analysis of normal colon tissue also showed loss of MSH6, pointing to a bi-allelic MSH6 mutation. Sequencing of the MSH6 gene showed the two germline mutations; c.1806_1809delAAAG;p.Glu604LeufsX5 and c.3226C > T;p.Arg1076Cys. We confirmed that the two mutations are on two different alleles by allele-specific PCR. To our knowledge, neither parent is clinically affected. They did not wish to be tested for the mutations identified in their daughter. These data suggest that bi-allelic mutations of one of the MMR genes should be considered in patients who develop early-onset multiple HNPCC-associated tumors and autoimmune disorders, even in absence of either hematological malignancies or brain tumors.
abstract_id: PUBMED:16341812
Syndrome of early onset colon cancers, hematologic malignancies & features of neurofibromatosis in HNPCC families with homozygous mismatch repair gene mutations. Hereditary nonpolyposis colon cancer (HNPCC) is the most common hereditary colon cancer syndrome. It is characterized by multiple colon as well as extracolonic cancers such as endometrial, ovarian and urinary tract cancers. In addition, it is well known that some cases of HNPCC can present with unique tumor spectrums such as sebaceous tumors, which is often referred to as the 'Muir-Torre' syndrome. In recent years there have been a few reports of families presenting with early onset of colon tumors along with café-au-lait spots and/or hematologic malignancies often associated with homozygous mutations involving one of the mismatch repair genes. In this article we have performed a comprehensive review of the entire medical literature to identify all cases with similar presentations reported in the literature and have summarized the clinical features and genetic test results of the same. The available data clearly highlight such presentations as a distinct clinical entity characterized by early onset of gastrointestinal tumors, hematologic malignancies as well as features of neurofibromatosis (easily remembered by the acronym 'CoLoN': Colon tumors and/or Leukemia/Lymphoma and/or Neurofibromatosis features). Furthermore, there has also been some evidence that the neurofibromatosis type-1 gene is a mutational target of the mismatch repair deficiency that is seen in families with HNPCC, and that mlh1 deficiency can accelerate the development of leukemia in neurofibromatosis (Nf1) heterozygous mice. Recognition of this syndrome has significant importance in terms of earlier detection of cancers, cancer screening recommendations as well as genetic counseling offered to such families.
abstract_id: PUBMED:17851451
Homozygous PMS2 germline mutations in two families with early-onset haematological malignancy, brain tumours, HNPCC-associated tumours, and signs of neurofibromatosis type 1. Heterozygous germline mutations in mismatch repair (MMR) genes MLH1, PMS2, MSH2, and MSH6 cause Lynch syndrome. New studies have indicated that biallelic mutations lead to a distinctive syndrome, childhood cancer syndrome (CCS), with haematological malignancies and tumours of brain and bowel early in childhood, often associated with signs of neurofibromatosis type 1. We provide further evidence for CCS reporting on six children from two consanguineous families carrying homozygous PMS2 germline mutations. In family 1, all four children had the homozygous p.I590Xfs mutation. Two had a glioblastoma at the age of 6 years and one of them had three additional Lynch-syndrome associated tumours at 15. Another sibling suffered from a glioblastoma at age 9, and the fourth sibling had infantile myofibromatosis at 1. In family 2, two of four siblings were homozygous for the p.G271V mutation. One had two colorectal cancers diagnosed at ages 13 and 14, the other had a Non-Hodgkin's lymphoma and a colorectal cancer at ages 10 and 11, respectively. All children with malignancies had multiple café-au-lait spots. After reviewing published cases of biallelic MMR gene mutations, we provide a concise description of CCS, revealing similarities in age distribution with carriers of heterozygous MMR gene mutations.
abstract_id: PUBMED:16418736
Compound heterozygosity for two MSH6 mutations in a patient with early onset of HNPCC-associated cancers, but without hematological malignancy and brain tumor. Heterozygous germline mutations in the human mismatch repair (MMR) genes MLH1, PMS2, MSH2 and MSH6 predispose to the hereditary non-polyposis colorectal cancer (HNPCC) syndrome. Biallelic mutations in these genes have been reported for a limited number of cases, resulting in hematological malignancies, brain tumors and gastrointestinal tumors early in childhood. These tumor phenotypes are frequently associated with café-au-lait spots (CALS), one of the clinical hallmarks of neurofibromatosis type 1 (NF1). We report the first case of compound heterozygosity for two MSH6 mutations, resulting in a nonconservative amino-acid change of a conserved residue and in a premature stop codon, in a patient who developed rectal and endometrial cancer at ages 19 and 24 years, respectively, and presented with a few CALS in a single body segment. Immunohistochemistry and Western blotting revealed only residual expression of the MSH6 protein in the normal cells. The disease history resembles the HNPCC phenotype rather than a phenotype associated with biallelic MMR gene mutations. We therefore assume that one or both mutations abolish protein function only partially, an assumption further supported by the parents, who each carry one of the mutations and are not affected by the disease at ages 57 and 58 years. Our data suggest considering biallelic mutations in MMR genes for patients who develop HNPCC-associated tumors at an unusually young age of onset, even in the absence of hematological or brain malignancies.
abstract_id: PUBMED:18709565
Constitutional mismatch repair-deficiency syndrome: have we so far seen only the tip of an iceberg? Heterozygous mutations in one of the mismatch repair (MMR) genes MLH1, MSH2, MSH6 and PMS2 cause the dominant adult cancer syndrome termed Lynch syndrome or hereditary non-polyposis colorectal cancer. During the past 10 years, some 35 reports have delineated the phenotype of patients with biallelic inheritance of mutations in one of these MMR genes. The patients suffer from a condition that is characterised by the development of childhood cancers, mainly haematological malignancies and/or brain tumours, as well as early-onset colorectal cancers. Almost all patients also show signs reminiscent of neurofibromatosis type 1, mainly café au lait spots. Alluding to the underlying mechanism, this condition may be termed "constitutional mismatch repair-deficiency (CMMR-D) syndrome". To give an overview of the current knowledge of this recessively inherited cancer syndrome and its implications, we summarise here the genetic, clinical and pathological findings of the 78 patients from 46 families reported so far.
abstract_id: PUBMED:17557300
Novel biallelic mutations in MSH6 and PMS2 genes: gene conversion as a likely cause of PMS2 gene inactivation. Since the first report by our group in 1999, more than 20 unrelated biallelic mutations in DNA mismatch repair genes (MMR) have been identified. In the present report, we describe two novel cases: one carrying compound heterozygous mutations in the MSH6 gene; and the other, compound heterozygous mutations in the PMS2 gene. Interestingly, the inactivation of one PMS2 allele was likely caused by gene conversion. Although gene conversion has been suggested to be a mutation mechanism underlying PMS2 inactivation, this is the first report of its involvement in a pathogenic mutation. The clinical features of biallelic mutation carriers were similar to other previously described patients, with the presence of café-au-lait spots (CALS), early onset of brain tumors, and colorectal neoplasia. Our data provide further evidence of the existence, although rare, of a distinct recessively inherited syndrome on the basis of MMR constitutional inactivation. The identification of this syndrome should be useful for genetic counseling, especially in families with atypical hereditary nonpolyposis colon cancer (HNPCC) associated with childhood cancers, and for the clinical surveillance of these mutation carriers.
abstract_id: PUBMED:30551794
Constitutional MMR deficiency: Genetic bases and clinical implications. An inherited mono-allelic mutation in one of the 4 major MMR genes results in Lynch syndrome, which predisposes, in adulthood, mainly to colorectal and endometrial tumors characterized by microsatellite instability (MSI phenotype). Individuals with bi-allelic mutations of one of these genes develop early and multiple malignancies, most often in childhood. This recessively inherited condition is named CMMRD, for Constitutional Mismatch Repair Deficiency. The spectrum of tumors is distinct from that of Lynch syndrome: malignant brain tumors are at least as frequent as gastrointestinal tumors, and in more than a third of cases haematological malignancies have also been reported. Patients also display clinical features similar to those of neurofibromatosis type 1, especially café au lait spots. The most commonly involved genes are PMS2 and MSH6, while bi-allelic MLH1 and MSH2 mutations are rare. The digestive tumors of these patients show MSI, whereas the brain tumors can be "microsatellite stable". Because of the variable clinical presentation and phenotypical overlaps with other cancer syndromes, CMMRD syndrome is frequently unrecognized by clinicians and its incidence is almost certainly underestimated. Better knowledge of clinical criteria and diagnostic methods should improve the identification of these patients, at the latest when they develop their first tumor or, ideally, even before. This would allow treatment modalities to be adjusted and surveillance strategies for other tumor risks to be offered, not only to patients themselves but also to their relatives.
Answer: Café-au-lait spots (CALS) are traditionally associated with neurocutaneous syndromes such as neurofibromatosis, but there is evidence to suggest that they may also be a feature of a variant of hereditary nonpolyposis colorectal cancer (HNPCC), also known as Lynch syndrome. This variant is characterized by early onset colorectal neoplasia, including adenomas and cancer, as well as other malignancies such as brain tumors and hematologic cancers. The presence of CALS in individuals with early onset colorectal neoplasia may indicate a condition known as constitutional mismatch repair-deficiency syndrome (CMMR-D), which is caused by biallelic mutations in mismatch repair (MMR) genes (PUBMED:14574005, PUBMED:22920205, PUBMED:25883011).
CMMR-D is a recessively inherited syndrome that can lead to childhood malignancies, atypical brain tumors, and early onset colorectal cancers. It is associated with mutations in the same genes that cause Lynch syndrome when mutated in a heterozygous state (PUBMED:22920205). The syndrome can present with adenomatous polyps leading to early-onset small bowel and colorectal adenocarcinoma, and it is characterized by gastrointestinal tumors, skin lesions, brain tumors, and hematologic malignancies (PUBMED:25883011).
Families with this variant may show homozygous mutations in MMR genes such as hPMS2 or hMLH1, and the presence of CALS, oligopolyposis, glioblastoma at a young age, and lymphoma are indicative of this condition (PUBMED:14574005). The identification of this syndrome is crucial for appropriate genetic counseling and the implementation of targeted surveillance protocols for earlier tumor detection and surgical intervention (PUBMED:25883011).
In summary, the presence of café-au-lait spots in individuals with early onset colorectal neoplasia may indeed suggest a variant of HNPCC, specifically the CMMR-D syndrome, which is associated with biallelic mutations in MMR genes and a distinct clinical presentation including a range of malignancies and neurofibromatosis-like features (PUBMED:14574005, PUBMED:22920205, PUBMED:25883011). |
Instruction: Evaluation of treatment thresholds for unconjugated hyperbilirubinemia in preterm infants: effects on serum bilirubin and on hearing loss?
Abstracts:
abstract_id: PUBMED:23667532
Evaluation of treatment thresholds for unconjugated hyperbilirubinemia in preterm infants: effects on serum bilirubin and on hearing loss? Background: Severe unconjugated hyperbilirubinemia may cause deafness. In the Netherlands, 25% lower total serum bilirubin (TSB) treatment thresholds were recently implemented for preterm infants.
Objective: To determine the rate of hearing loss in jaundiced preterms treated at high or at low TSB thresholds.
Design/methods: In this retrospective study conducted at two neonatal intensive care units in the Netherlands, we included preterms (gestational age <32 weeks) treated for unconjugated hyperbilirubinemia at high or low TSB thresholds. Infants with major congenital malformations, syndromes, chromosomal abnormalities, or toxoplasmosis, rubella, cytomegalovirus, herpes, syphilis, and human immunodeficiency virus infections were excluded. We analyzed clinical characteristics and TSB levels during the first ten postnatal days. After two failed automated Auditory Brainstem Response (ABR) tests, we used the results of the diagnostic ABR examination to define normal, unilateral, and bilateral hearing loss (>35 dB).
Results: There were 479 patients in the high and 144 in the low threshold group. Both groups had similar gestational ages (29.5 weeks) and birth weights (1300 g). Mean and mean peak TSB levels were significantly lower after the implementation of the novel thresholds: 152 ± 43 µmol/L and 212 ± 52 µmol/L versus 131 ± 37 µmol/L and 188 ± 46 µmol/L for the high versus low thresholds, respectively (P<0.001). The incidence of hearing loss was 2.7% (13/479) in the high and 0.7% (1/144) in the low TSB threshold group (NNT = 50, 95% CI, 25-3302).
Conclusions: Implementation of lower treatment thresholds resulted in reduced mean and peak TSB levels. The incidence of hearing impairment in preterms with a gestational age <32 weeks treated at low TSB thresholds was substantially lower compared to preterms treated at high TSB thresholds. Further research with larger sample sizes and power is needed to determine if this effect is statistically significant.
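As a brief worked check of the number needed to treat reported in the abstract above (PUBMED:23667532), based only on the incidences given there (an illustration, not an additional analysis):
absolute risk reduction (ARR) = 2.7% - 0.7% = 2.0% = 0.020
NNT = 1 / ARR = 1 / 0.020 = 50
That is, roughly 50 infants would need to be treated at the lower thresholds to prevent one additional case of hearing loss, consistent with the reported NNT of 50; the very wide 95% CI (25-3302) reflects the small number of hearing-loss events, particularly the single case in the low-threshold group.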
abstract_id: PUBMED:29906300
Hyperbilirubinemia in preterm infants in Japan: New treatment criteria. In 1992, Kobe University proposed treatment criteria for hyperbilirubinemia in newborns using total serum bilirubin and serum unbound bilirubin reference values. In the last decade, chronic bilirubin encephalopathy has been recognized to develop in preterm infants in Japan, because it can now be clinically diagnosed based on an abnormal signal of the globus pallidus on T2-weighted magnetic resonance imaging and an abnormal auditory brainstem response with or without apparent hearing loss, along with physical findings of kinetic disorders with athetosis. We therefore revised the Kobe University treatment criteria for preterm hyperbilirubinemic infants in 2017. The three revised points are as follows: (i) newborns are classified by gestational age at birth or corrected gestational age, not by birthweight; (ii) three treatment options were created: standard phototherapy, intensive phototherapy, and albumin therapy and/or exchange blood transfusion; and (iii) initiation of standard phototherapy, intensive phototherapy, and albumin therapy and/or exchange blood transfusion is decided based on the total serum bilirubin and serum unbound bilirubin reference values for gestational weeks at birth at <7 days of age, and on the reference values for corrected gestational age at ≥7 days of age. Studies are needed to establish whether chronic bilirubin encephalopathy can be prevented using the 2017 revised Kobe University treatment criteria for preterm infants in Japan.
abstract_id: PUBMED:29504491
Unbound bilirubin measurements by a novel probe in preterm infants. Background: Hyperbilirubinemia occurs in over 80% of newborns and severe bilirubin toxicity can lead to neurological dysfunction and death, especially in preterm infants. Currently, the risk of bilirubin toxicity is assessed by measuring the levels of total serum bilirubin (TSB), which are used to direct treatments including immunoglobulin administration, phototherapy, and exchange transfusion. However, free, unbound bilirubin levels (Bf) predict the risk of bilirubin neurotoxicity more accurately than TSB.
Objective: To examine Bf levels in preterm infants and determine the frequency with which they exceed reported neurotoxic thresholds.
Methods: One hundred thirty preterm infants (BW 500-2000 g; GA 23-34 weeks) were enrolled and Bf levels measured during the first week of life by the fluorescent Bf sensor BL22P1B11-Rh. TSB and plasma albumin were measured by standard techniques. Bilirubin-albumin dissociation constants (Kd) were calculated based on Bf and plasma albumin.
Results: Five hundred eighty samples were measured during the first week of life, with an overall mean Bf of 13.6 ± 9.0 nM. A substantial number of measurements exceeded potentially toxic threshold levels as reported in the literature. The correlation between Bf and TSB was statistically significant (r2 = 0.17), but this weak relationship was lost at high Bf levels. Infants of <28 weeks' gestation had more hearing screening failures than infants of ≥28 weeks' gestation.
Conclusions: Unbound (free) bilirubin values are extremely variable during the first week of life in preterm infants. A significant proportion of these values exceeded reported neurotoxic thresholds.
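The abstract above (PUBMED:29504491) states that bilirubin-albumin dissociation constants (Kd) were calculated from Bf and plasma albumin, but it does not give the formula. A minimal sketch under a common single-site mass-action assumption (an assumption for illustration, not necessarily the authors' exact method) treats bound bilirubin as TSB - Bf and free albumin as total albumin minus bound bilirubin, with all quantities converted to molar units:
Kd ≈ Bf × (Albumin_total - (TSB - Bf)) / (TSB - Bf)
Because albumin is normally present in large molar excess over bilirubin, this is often simplified to Kd ≈ Bf × Albumin_total / (TSB - Bf).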
abstract_id: PUBMED:25332491
Serum bilirubin and bilirubin/albumin ratio as predictors of bilirubin encephalopathy. Background And Objective: Bilirubin/albumin ratio (B/A) may provide a better estimate of free bilirubin than total serum bilirubin (TSB), thus improving identification of newborns at risk for bilirubin encephalopathy. The objective of the study was to identify thresholds and compare specificities of TSB and B/A in detecting patients with acute and posttreatment auditory and neurologic impairment.
Methods: A total of 193 term/near-term infants, admitted for severe jaundice to Cairo University Children's Hospital, were evaluated for neurologic status and auditory impairment (automated auditory brainstem response), both at admission and posttreatment by investigators blinded to laboratory results. The relationships of TSB and B/A to advancing stages of neurotoxicity were compared by using receiver operating characteristic curves.
Results: TSB and B/A ranged from 17 to 61 mg/dL and 5.4 to 21.0 mg/g, respectively; 58 (30%) of 193 subjects developed acute bilirubin encephalopathy, leading to kernicterus in 35 infants (13 lethal). Auditory impairment was identified in 86 (49%) of 173 infants at admission and in 22 of 128 at follow-up. In the absence of clinical risk factors, no residual neurologic or hearing impairment occurred unless TSB exceeded 31 mg/dl. However, transient auditory impairment occurred at lower TSB and B/A (22.9 mg/dL and 5.7 mg/g, respectively). Intervention values of TSB and B/A set at high sensitivity to detect different stages of neurotoxicity had nearly the same specificity.
Conclusions: Both TSB and B/A are strong predictors of neurotoxicity, but B/A does not improve prediction over TSB alone. Threshold values detecting all affected patients (100% sensitivity) increase with advancing severity of neurotoxicity.
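The bilirubin/albumin ratio (B/A) discussed in the abstract above (PUBMED:25332491) is simple arithmetic on two routinely reported laboratory values:
B/A (mg/g) = TSB (mg/dL) / serum albumin (g/dL)
For example, a TSB of 22.9 mg/dL with a serum albumin of 4.0 g/dL (the albumin value is assumed here for illustration, not taken from the abstract) gives B/A ≈ 5.7 mg/g, of the same order as the transient-impairment thresholds reported above (22.9 mg/dL and 5.7 mg/g).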
abstract_id: PUBMED:30404412
Hyperbilirubinemia and Follow-up Auditory Brainstem Responses in Preterm Infants. Objectives: Neonatal hyperbilirubinemia is considered one of the most common causative factors of hearing loss. Preterm infants are more vulnerable to neuronal damage caused by hyperbilirubinemia. This study aimed to evaluate the effect of hyperbilirubinemia on the hearing threshold and the auditory pathway in preterm infants using serial auditory brainstem response (ABR). In addition, we evaluated the usefulness of the unconjugated bilirubin (UCB) level, compared with total serum bilirubin (TSB), for predicting bilirubin-induced hearing loss.
Methods: This study was conducted on 70 preterm infants with hyperbilirubinemia who failed universal newborn hearing screening by automated ABR. The diagnostic ABR was performed within 3 months after birth. Follow-up ABR was conducted in patients with abnormal results (30 cases). TSB and UCB concentration were compared according to hearing threshold by ABR.
Results: The initial and maximal measured UCB concentrations in the preterm infants of the diagnostic ABR ≥40 dB nHL group (n=30) were significantly higher than those of the ABR ≤35 dB nHL group (n=40) (P=0.031 and P=0.003, respectively). On follow-up ABR examination, 13 of the ABR ≥40 dB nHL group showed complete recovery, but 17 had no change or worsened. There was no difference in bilirubin level between the recovery group and the non-recovery group.
Conclusion: UCB is a better predictor of bilirubin-induced hearing loss than TSB in preterm infants, as evaluated by serial ABR. Serial ABR testing can be a useful, noninvasive method to evaluate early, reversible bilirubin-induced hearing loss in preterm infants.
abstract_id: PUBMED:19419529
The significance of measurement of serum unbound bilirubin concentrations in high-risk infants. Background: In the management of neonatal hyperbilirubinemia, total bilirubin (TB) concentration is not specific enough to predict the brain damage caused by bilirubin toxicity. Unbound bilirubin (UB) easily passes the blood-brain barrier and causes neurotoxicity. We aimed to evaluate whether serum UB concentration would be a useful predictor of bilirubin encephalopathy in high-risk infants.
Methods: We measured the serum TB and UB concentrations of 388 newborn infants treated with phototherapy or exchange transfusion for hyperbilirubinemia at Takatsuki General Hospital between January 2002 and October 2003. Peak serum TB and UB levels and UB/TB ratios were studied in each birthweight group: below 1500 g (very low birthweight), 1500-2499 g (low birthweight), and 2500 g or above (normal birthweight); several clinical factors influencing hyperbilirubinemia were also studied.
Results: Peak serum TB and UB levels increased with increasing birthweight, while UB/TB ratios decreased. In the very low birthweight group, cases with intraventricular hemorrhage or severe infection showed higher UB levels and UB/TB ratios, despite lower TB levels, than the other cases. The low birthweight and normal birthweight groups showed higher TB and UB levels in cases of hemolytic disease of the newborn than in non-hemolytic cases. Eight of 44 cases showed high UB levels accompanied by abnormal auditory brainstem responses; one of these subsequently developed ataxic cerebral palsy with hearing loss, whereas the other seven showed transient abnormalities of auditory brainstem responses that resolved with exchange transfusion or phototherapy.
Conclusion: The UB measurement was considered to be significant for the assessment of the risk of bilirubin neurotoxicity and the appropriate intervention for hyperbilirubinemia in high-risk infants.
abstract_id: PUBMED:35588044
Bilirubin Encephalopathy. Purpose Of Review: Hyperbilirubinemia is commonly seen in neonates. Though hyperbilirubinemia is typically asymptomatic, severe elevation of bilirubin levels can lead to acute bilirubin encephalopathy and progress to kernicterus spectrum disorder, a chronic condition characterized by hearing loss, extrapyramidal dysfunction, ophthalmoplegia, and enamel hypoplasia. Epidemiological data show that the implementation of universal pre-discharge bilirubin screening programs has reduced the rates of hyperbilirubinemia-associated complications. However, acute bilirubin encephalopathy and kernicterus spectrum disorder are still particularly common in low- and middle-income countries.
Recent Findings: The understanding of the genetic and biochemical processes that increase the susceptibility of defined anatomical areas of the central nervous system to the deleterious effects of bilirubin may facilitate the development of effective treatments for acute bilirubin encephalopathy and kernicterus spectrum disorder. Scoring systems are available for the diagnosis and severity grading of these conditions. The treatment of hyperbilirubinemia in newborns relies on the use of phototherapy and exchange transfusion. However, novel therapeutic options including deep brain stimulation, brain-computer interface, and stem cell transplantation may alleviate the heavy disease burden associated with kernicterus spectrum disorder. Despite improved screening and treatment options, the prevalence of acute bilirubin encephalopathy and kernicterus spectrum disorder remains elevated in low- and middle-income countries. The continued presence and associated long-term disability of these conditions warrant further research to improve their prevention and management.
abstract_id: PUBMED:23273639
Taurine attenuates bilirubin-induced neurotoxicity in the auditory system in neonatal guinea pigs. Objectives: Previous work showed that taurine protects neurons against unconjugated bilirubin (UCB)-induced neurotoxicity by maintaining intracellular calcium homeostasis, membrane integrity, and mitochondrial function, thereby preventing apoptosis from occurring, in primary neuron cultures. In this study, we investigated whether taurine could protect the auditory system against the neurotoxicity associated with hyperbilirubinemia in an in vivo model.
Methods: Hyperbilirubinemia was established in neonatal guinea pigs by intraperitoneal injection of UCB. Hearing function was observed in electrocochleograms (ECochGs) and auditory brainstem responses (ABRs) recorded before and 1, 8, 24, and 72 h after UCB injection. For morphological evaluations, animals were sacrificed at 8h post-injection, and the afferent terminals beneath the inner hair cells (IHCs), spiral ganglion neurons (SGNs), and their fibers were examined.
Results: UCB injection significantly increased the latencies, inter-wave intervals, and thresholds of ABRs and compound action potentials, and caused marked damage to type I SGNs, their axons, and their terminals on cochlear IHCs. When neonatal guinea pigs were pretreated with taurine for 5 consecutive days and then injected with bilirubin, the electrophysiological abnormalities and morphological damage were significantly attenuated in both the peripheral and central auditory systems.
Conclusions: From these observations, it was concluded that taurine limited bilirubin-induced neural damage in the auditory system. These findings may contribute to the development of taurine as a broad-spectrum agent for preventing and/or treating hearing loss in neonatal jaundice.
abstract_id: PUBMED:24303432
The Relationship between the Behavioral Hearing Thresholds and Maximum Bilirubin Levels at Birth in Children with a History of Neonatal Hyperbilirubinemia. Introduction: Neonatal hyperbilirubinemia is one of the most important factors affecting the auditory system and can cause sensorineural hearing loss. This study investigated the relationship between behavioral hearing thresholds in children with a history of jaundice and the maximum level of bilirubin concentration in the blood.
Materials And Methods: This study was performed on 18 children with a mean age of 5.6 years and with a history of neonatal hyperbilirubinemia. Behavioral hearing thresholds, transient evoked emissions and brainstem evoked responses were evaluated in all children.
Results: Six children (33.3%) had normal hearing thresholds and the remaining 12 (66.7%) had some degree of hearing loss. There was no significant relationship (r=-0.28, P=0.09) between mean total bilirubin levels and behavioral hearing thresholds across the sample. Transient evoked emissions were seen only in children with normal hearing thresholds; however, in eight cases brainstem evoked responses were not detected.
Conclusion: Increased blood levels of bilirubin in the neonatal period are potentially one of the causes of hearing loss. The lack of a direct relationship between neonatal bilirubin levels and average hearing thresholds emphasizes the need for monitoring across the range of bilirubin levels.
abstract_id: PUBMED:18595994
Outcomes in a population of healthy term and near-term infants with serum bilirubin levels of ≥325 micromol/L (≥19 mg/dL) who were born in Nova Scotia, Canada, between 1994 and 2000. Objective: The goal was to study the incidence of kernicterus, developmental delay, autism, cerebral palsy, and hearing loss in infants with peak total serum bilirubin levels of ≥325 micromol/L (≥19 mg/dL), compared with infants with less-severe or no hyperbilirubinemia, in a population of healthy term and late preterm infants.
Methods: Prospectively gathered, standardized, maternal and neonatal data for infants at ≥35 weeks of gestation who were born between January 1, 1994, and December 31, 2000, were extracted from the Nova Scotia Atlee Perinatal Database. Infants with Rh factor isoimmunization, significant congenital or chromosomal abnormalities, or severe peripartum asphyxia were excluded. Comparisons were made on the basis of peak total serum bilirubin levels. Diagnoses were obtained through data linkage with the Medical Services Insurance Database for office visits and the Canadian Institute for Health Information Database for hospital admissions. The registration file provided information allowing calculation of follow-up times, which were determined for each separate outcome. Follow-up periods ranged from 2 to 9 years, with the end point being the first time the diagnostic code was encountered in either database. Cox proportional-hazards regression analyses were used to examine the relationships between outcomes and total serum bilirubin levels.
Results: Of 61238 infants included in the study cohort, 4010 (6.7%) did not have linkage data, which left 56019 infants for analysis. There were no cases of kernicterus and no significant differences in rates of cerebral palsy, deafness, developmental delay, or visual abnormalities between the groups. There were suggestions of associations with attention-deficit disorder in the severe hyperbilirubinemia group and with autism in the combined moderate and severe hyperbilirubinemia group.
Conclusions: There was no increase in adverse effects reported previously to be associated with bilirubin toxicity. Associations with developmental delay, attention-deficit disorder, and autism were observed.
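The Nova Scotia study above (PUBMED:18595994) relates outcomes to hyperbilirubinemia categories with Cox proportional-hazards regression over variable follow-up times. A minimal sketch of such a model using the lifelines package is shown below; the data frame is simulated for illustration, and the variable names (`severe_hyperbili`, `gestational_age`) are assumptions rather than the study's actual coding.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Simulated analysis frame: one row per infant (a real analysis would use linkage data).
rng = np.random.default_rng(1)
n = 500
severe = rng.binomial(1, 0.1, n)                     # peak TSB >= 325 micromol/L
gest_age = rng.normal(39, 1.2, n)
censor_time = rng.uniform(2, 9, n)                   # years of available follow-up
event_time = rng.exponential(60 / np.exp(0.3 * severe), n)
df = pd.DataFrame({
    "time": np.minimum(censor_time, event_time),
    "event": (event_time <= censor_time).astype(int),   # diagnostic code encountered
    "severe_hyperbili": severe,
    "gestational_age": gest_age,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()    # hazard ratios = exp(coef) with 95% confidence intervals
```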
Answer: The evaluation of treatment thresholds for unconjugated hyperbilirubinemia in preterm infants has shown that implementing lower treatment thresholds can lead to reduced mean and peak total serum bilirubin (TSB) levels and may also decrease the incidence of hearing impairment. A retrospective study in the Netherlands found that preterm infants treated at lower TSB thresholds had a substantially lower incidence of hearing loss compared to those treated at higher thresholds (2.7% vs. 0.7%), suggesting that more conservative treatment thresholds could be beneficial for hearing outcomes (PUBMED:23667532).
In Japan, revised treatment criteria for hyperbilirubinemia in preterm infants were proposed, taking into account gestational age and serum unbound bilirubin levels, with the aim of preventing chronic bilirubin encephalopathy, which can lead to hearing loss among other neurological issues (PUBMED:29906300).
Studies have indicated that unbound bilirubin levels (Bf) are more predictive of bilirubin neurotoxicity than TSB levels. Measurements of Bf in preterm infants have shown that a significant proportion of values exceeded reported neurotoxic thresholds, and infants with lower gestational ages had more hearing screening failures (PUBMED:29504491).
While the bilirubin/albumin ratio (B/A) may provide a better estimate of free bilirubin than TSB, a study found that B/A does not improve prediction over TSB alone for identifying newborns at risk for bilirubin encephalopathy (PUBMED:25332491).
Research has also suggested that unconjugated bilirubin (UCB) levels are better predictors of bilirubin-induced hearing loss than TSB in preterm infants, as evaluated by serial auditory brainstem response (ABR) testing (PUBMED:30404412).
Overall, the evaluation of treatment thresholds for unconjugated hyperbilirubinemia in preterm infants indicates that lower thresholds can lead to reduced TSB levels and may have a positive effect on reducing the risk of hearing loss. However, further research is needed to establish the most effective treatment criteria and to understand the role of unbound bilirubin in predicting neurotoxicity and hearing impairment in this vulnerable population. |
Instruction: Can prenatal malaria exposure produce an immune tolerant phenotype?
Abstracts:
abstract_id: PUBMED:19636353
Can prenatal malaria exposure produce an immune tolerant phenotype? A prospective birth cohort study in Kenya. Background: Malaria in pregnancy can expose the fetus to malaria-infected erythrocytes or their soluble products, thereby stimulating T and B cell immune responses to malaria blood stage antigens. We hypothesized that fetal immune priming, or malaria exposure in the absence of priming (putative tolerance), affects the child's susceptibility to subsequent malaria infections.
Methods And Findings: We conducted a prospective birth cohort study of 586 newborns residing in a malaria-holoendemic area of Kenya who were examined biannually to age 3 years for malaria infection, and whose malaria-specific cellular and humoral immune responses were assessed. Newborns were classified as (i) sensitized (and thus exposed), as demonstrated by IFNgamma, IL-2, IL-13, and/or IL-5 production by cord blood mononuclear cells (CBMCs) to malaria blood stage antigens, indicative of in utero priming (n = 246), (ii) exposed not sensitized (mother Plasmodium falciparum [Pf]+ and no CBMC production of IFNgamma, IL-2, IL-13, and/or IL-5, n = 120), or (iii) not exposed (mother Pf-, no CBMC reactivity, n = 220). Exposed not sensitized children had evidence for prenatal immune experience demonstrated by increased IL-10 production and partial reversal of malaria antigen-specific hyporesponsiveness with IL-2+IL-15, indicative of immune tolerance. Relative risk data showed that the putatively tolerant children had a 1.61 (95% confidence interval [CI] 1.10-2.43; p = 0.024) and 1.34 (95% CI 0.95-1.87; p = 0.097) greater risk for malaria infection based on light microscopy (LM) or PCR diagnosis, respectively, compared to the not-exposed group, and a 1.41 (95%CI 0.97-2.07, p = 0.074) and 1.39 (95%CI 0.99-2.07, p = 0.053) greater risk of infection based on LM or PCR diagnosis, respectively, compared to the sensitized group. Putatively tolerant children had an average of 0.5 g/dl lower hemoglobin levels (p = 0.01) compared to the other two groups. Exposed not sensitized children also had 2- to 3-fold lower frequency of malaria antigen-driven IFNgamma and/or IL-2 production (p<0.001) and higher IL-10 release (p<0.001) at 6-month follow-ups, when compared to sensitized and not-exposed children. Malaria blood stage-specific IgG antibody levels were similar among the three groups.
Conclusions: These results show that a subset of children exposed to malaria in utero acquire a tolerant phenotype to blood-stage antigens that persists into childhood and is associated with an increased susceptibility to malaria infection and anemia. This finding could have important implications for malaria vaccination of children residing in endemic areas.
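The relative risks reported above are ratios of infection risk between exposure groups, with confidence intervals computed on the log scale. A small helper shows the arithmetic; the counts passed in at the bottom are invented for illustration and are not the study's raw data.

```python
import math

def relative_risk(a, n1, b, n2, z=1.96):
    """Risk ratio of group 1 vs group 2 with a 95% CI on the log scale.
    a of n1 infected in group 1, b of n2 infected in group 2."""
    rr = (a / n1) / (b / n2)
    se_log = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Hypothetical counts: exposed-not-sensitized vs not-exposed, infection by light microscopy.
print(relative_risk(72, 120, 82, 220))
```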
abstract_id: PUBMED:23151722
Prenatal p,p´-DDE exposure and neurodevelopment among children 3.5-5 years of age. Background: The results of previous studies suggest that prenatal exposure to bis[p-chlorophenyl]-1,1,1-trichloroethane (DDT) and to its main metabolite, 2,2-bis(p-chlorophenyl)-1,1-dichloroethylene (DDE), impairs psychomotor development during the first year of life. However, information about the persistence of this association at later ages is limited.
Objectives: We assessed the association of prenatal DDE exposure with child neurodevelopment at 42-60 months of age.
Methods: Since 2001 we have been monitoring the neurodevelopment in children who were recruited at birth into a perinatal cohort exposed to DDT, in the state of Morelos, Mexico. We report McCarthy Scales of Children's Abilities for 203 children at 42, 48, 54, and 60 months of age. Maternal DDE serum levels were available for at least one trimester of pregnancy. The Home Observation for Measurement of the Environment scale and other covariables of interest were also available.
Results: After adjustment, a doubling of DDE during the third trimester of pregnancy was associated with statistically significant reductions of 1.37, 0.88, 0.84, and 0.80 points in the general cognitive index and the quantitative, verbal, and memory components, respectively. The association between prenatal DDE and the quantitative component was weaker at 42 months than at older ages. No significant statistical interactions with sex or breastfeeding were observed.
Conclusions: These findings support the hypothesis that prenatal DDE impairs early child neurodevelopment; the potential for adverse effects on development should be considered when using DDT for malaria control.
abstract_id: PUBMED:29648420
Prenatal Exposure to DDT and Pyrethroids for Malaria Control and Child Neurodevelopment: The VHEMBE Cohort, South Africa. Background: Although indoor residual spraying (IRS) with dichlorodiphenyltrichloroethane (DDT) and pyrethroids effectively controls malaria, it potentially increases human exposure to these insecticides. Previous studies suggest that prenatal exposure to these insecticides may impact human neurodevelopment.
Objectives: We aimed to estimate the effects of maternal insecticide exposure and neurodevelopment of toddlers living in a malaria-endemic region currently using IRS.
Methods: The Venda Health Examination of Mothers, Babies and their Environment (VHEMBE) is a birth cohort of 752 mother-child pairs in Limpopo, South Africa. We measured maternal exposure to DDT and its breakdown product, dichlorodiphenyldichloroethylene (DDE), in maternal serum, and measured pyrethroid metabolites in maternal urine. We assessed children's neurodevelopment at 1 and 2 y of age using the Bayley Scales of Infant Development, third edition (BSID-III), and examined associations with maternal exposure.
Results: DDT and DDE were not associated with significantly lower scores for any BSID-III scale. In contrast, each 10-fold increase in cis-DCCA, trans-DCCA, and 3-phenoxybenzoic acid were associated, respectively, with a -0.63 (95% CI: -1.14, -0.12), -0.48 (95% CI: -0.92, -0.05), and -0.58 (-1.11, -0.06) decrement in Social-Emotional scores at 1 y of age. In addition, each 10-fold increase in maternal cis-DBCA levels was associated with significant decrements at 2 y of age in Language Composite scores and Expressive Communication scores [β=-1.74 (95% CI: -3.34, -0.13) and β=-0.40 (95% CI: -0.77, -0.04), respectively, for a 10-fold increase]. Significant differences by sex were estimated for pyrethroid metabolites and motor function scores at 2 y of age, with higher scores for boys and lower scores for girls.
Conclusions: Prenatal exposure to pyrethroids may be associated at 1 y of age with poorer social-emotional development. At 2 y of age, poorer language development was observed with higher prenatal pyrethroid levels. Considering the widespread use of pyrethroids, these findings deserve further investigation. https://doi.org/10.1289/EHP2129.
abstract_id: PUBMED:30384846
Modulation of innate immune responses at birth by prenatal malaria exposure and association with malaria risk during the first year of life. Background: Factors driving inter-individual differences in immune responses upon different types of prenatal malaria exposure (PME) and subsequent risk of malaria in infancy remain poorly understood. In this study, we examined the impact of four types of PME (i.e., maternal peripheral infection and placental acute, chronic, and past infections) on both spontaneous and toll-like receptors (TLRs)-mediated cytokine production in cord blood and how these innate immune responses modulate the risk of malaria during the first year of life.
Methods: We conducted a birth cohort study of 313 mother-child pairs nested within the COSMIC clinical trial (NCT01941264), which was assessing malaria preventive interventions during pregnancy in Burkina Faso. Malaria infections during pregnancy and infants' clinical malaria episodes detected during the first year of life were recorded. Supernatant concentrations of 30 cytokines, chemokines, and growth factors induced by stimulation of cord blood with agonists of TLRs 3, 7/8, and 9 were measured by quantitative suspension array technology. Crude concentrations and ratios of TLR-mediated cytokine responses relative to background control were analyzed.
Results: Spontaneous production of innate immune biomarkers was significantly reduced in cord blood of infants exposed to malaria, with variation among PME groups, as compared to those from the non-exposed control group. However, following TLR7/8 stimulation, which showed higher induction of cytokines/chemokines/growth factors than TLRs 3 and 9, cord blood cells of infants with evidence of past placental malaria were hyper-responsive in comparison to those of non-exposed infants. In addition, certain biomarkers whose levels were significantly modified depending on the PME category were independent predictors of either malaria risk (GM-CSF TLR7/8 crude) or protection (IL-12 TLR7/8 ratio and IP-10 TLR3 crude, IL-1RA TLR7/8 ratio) during the first year of life.
Conclusions: These findings indicate that past placental malaria has a profound effect on fetal immune system and that the differential alterations of innate immune responses by PME categories might drive heterogeneity between individuals to clinical malaria susceptibility during the first year of life.
abstract_id: PUBMED:27448394
Prenatal exposure to Plasmodium falciparum increases frequency and shortens time from birth to first clinical malaria episodes during the first two years of life: prospective birth cohort study. Background: Prenatal exposure to Plasmodium falciparum affects development of protective immunity and susceptibility to subsequent natural challenges with similar parasite antigens. However, the nature of these effects has not been fully elucidated. The aim of this study was to determine the effect of prenatal exposure to P. falciparum on susceptibility to natural malaria infection, with a focus on median time from birth to first clinical malaria episode and frequency of clinical malaria episodes in the first 2 years of life.
Methods: A prospective birth cohort study was conducted in Rufiji district, Tanzania, between January 2013 and December 2015. Infants born to mothers with P. falciparum in the placenta at the time of delivery were defined as exposed, and infants born to mothers without P. falciparum parasites in the placenta were defined as unexposed. Placental infection was established by histological techniques. Out of 206 infants recruited, 41 were exposed in utero to P. falciparum and 165 infants were unexposed. All infants were monitored for onset of clinical malaria episodes in the first 2 years of life. The outcome measure was time from birth to first clinical malaria episode, defined by fever (≥37 °C) and microscopically determined parasitaemia. Median time to first clinical malaria episode in exposed and unexposed infants was assessed using Kaplan-Meier survival analysis and compared by the log-rank test. Association of clinical malaria episodes with prenatal exposure to P. falciparum was assessed by multivariate binary logistic regression. Mean numbers of clinical malaria episodes in exposed and unexposed infants were compared using an independent-samples t-test.
Results: The effect of prenatal exposure to P. falciparum infection on clinical malaria episodes remained statistically significant (odds ratio 4.79, 95% CI 2.21-10.38, p < 0.01) when other confounding factors were taken into account. Median time from birth to first clinical malaria episode for exposed and unexposed infants was 32 weeks (95% CI 30.88-33.12) and 37 weeks (95% CI 35.25-38.75), respectively, and the difference was statistically significant (p = 0.003). The mean number of clinical malaria episodes in exposed and unexposed infants was 0.51 and 0.30 episodes/infant, respectively, and the difference was statistically significant (p = 0.038).
Conclusions: Prenatal exposure to P. falciparum shortens time from birth to first clinical malaria episode and increases frequency of clinical malaria episodes in the first 2 years of life.
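The Tanzanian cohort above (PUBMED:27448394) compares median time to first clinical malaria episode with Kaplan-Meier estimates and a log-rank test. A minimal sketch of that comparison with the lifelines package follows; the follow-up times are simulated stand-ins, not the cohort's data.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(2)
# Simulated weeks to first clinical malaria episode; 104 weeks = administrative censoring.
t_exposed = np.minimum(rng.exponential(45, 41), 104)
t_unexposed = np.minimum(rng.exponential(70, 165), 104)
e_exposed = (t_exposed < 104).astype(int)        # 1 = episode observed, 0 = censored
e_unexposed = (t_unexposed < 104).astype(int)

km = KaplanMeierFitter()
km.fit(t_exposed, e_exposed, label="exposed")
print("median weeks to first episode (exposed):", km.median_survival_time_)
km.fit(t_unexposed, e_unexposed, label="unexposed")
print("median weeks to first episode (unexposed):", km.median_survival_time_)

res = logrank_test(t_exposed, t_unexposed,
                   event_observed_A=e_exposed, event_observed_B=e_unexposed)
print("log-rank p-value:", res.p_value)
```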
abstract_id: PUBMED:26414943
Prenatal DDT and DDE exposure and child IQ in the CHAMACOS cohort. Although banned in most countries, dichlorodiphenyl-trichloroethane (DDT) continues to be used for vector control in some malaria endemic areas. Previous findings from the Center for the Health Assessment of Mothers and Children of Salinas (CHAMACOS) cohort study found increased prenatal levels of DDT and its breakdown product dichlorodiphenyl-dichloroethylene (DDE) to be associated with altered neurodevelopment in children at 1 and 2years of age. In this study, we combined the measured maternal DDT/E concentrations during pregnancy obtained for the prospective birth cohort with predicted prenatal DDT and DDE levels estimated for a retrospective birth cohort. Using generalized estimating equation (GEE) and linear regression models, we evaluated the relationship of prenatal maternal DDT and DDE serum concentrations with children's cognition at ages 7 and 10.5years as assessed using the Full Scale Intelligence Quotient (IQ) and 4 subtest scores (Working Memory, Perceptual Reasoning, Verbal Comprehension, and Processing Speed) of the Wechsler Intelligence Scale for Children (WISC). In GEE analyses incorporating both age 7 and 10.5 scores (n=619), we found prenatal DDT and DDE levels were not associated with Full Scale IQ or any of the WISC subscales (p-value>0.05). In linear regression analyses assessing each time point separately, prenatal DDT levels were inversely associated with Processing Speed at age 7years (n=316), but prenatal DDT and DDE levels were not associated with Full Scale IQ or any of the WISC subscales at age 10.5years (n=595). We found evidence for effect modification by sex. In girls, but not boys, prenatal DDE levels were inversely associated with Full Scale IQ and Processing Speed at age 7years. We conclude that prenatal DDT levels may be associated with delayed Processing Speed in children at age 7years and the relationship between prenatal DDE levels and children's cognitive development may be modified by sex, with girls being more adversely affected.
abstract_id: PUBMED:32299029
Associations between prenatal exposure to DDT and DDE and allergy symptoms and diagnoses in the Venda Health Examination of Mothers, Babies and their Environment (VHEMBE), South Africa. Dichlorodiphenyl trichloroethane (DDT) is an organochlorine insecticide that is banned internationally except for use as part of Indoor Residual Spraying (IRS) programs to control malaria. Although animal studies show that DDT and its breakdown product dichlorodiphenyl dichloroethylene (DDE) affect the immune system and may cause allergies, no studies have examined this question in populations where IRS is conducted. The aim of our study was to investigate whether prenatal exposure to DDT and DDE is associated with allergy symptoms and diagnose among South African children living in an area where IRS is conducted. To accomplish this aim, we used data from the Venda Health Examination of Mothers, Babies and their Environment (VHEMBE), an ongoing birth cohort study of 752 children born between 2012 and 2013 in the rural Vhembe district of Limpopo, South Africa. We measured maternal peripartum serum concentrations of DDT and DDE, and administered a questionnaire to the caregivers of 658 children aged 3.5 years to collect information on allergy symptoms and diagnoses as well as potential confounders using validated instruments. Using multiple logistic regression models, we found positive associations between DDT and DDE serum concentrations and most of the allergy symptoms and diagnoses. Maternal DDT (Odds Ratio [OR] = 1.5 per 10-fold increase, 95% Confidence interval, CI = 1.0, 2.3) and DDE (OR = 1.4, 95% CI = 0.8, 2.4) serum concentrations were most strongly associated with caregiver report of wheezing or whistling in the chest. Concentrations of DDT and/or DDE were also associated with increased odds of children's chests sounding wheezy during or after exercise, itchy rashes coming and going for at least six months, diagnosis of food allergy, and diagnosis of dust or dust mites allergy but confidence intervals crossed the null. Results suggest that prenatal exposure to DDT, and possibly DDE, is associated with elevated odds of wheezing among children from an IRS area.
abstract_id: PUBMED:17082631
Prenatal malaria immune experience affects acquisition of Plasmodium falciparum merozoite surface protein-1 invasion inhibitory antibodies during infancy. African infants are often born of mothers infected with malaria during pregnancy. This can result in fetal exposure to malaria-infected erythrocytes or their soluble products with subsequent fetal immune priming or tolerance in utero. We performed a cohort study of 30 newborns from a malaria holoendemic area of Kenya to determine whether T cell sensitization to Plasmodium falciparum merozoite surface protein-1 (MSP-1) at birth correlates with infant development of anti-MSP-1 Abs acquired as a consequence of natural malaria infection. Abs to the 42- and 19-kDa C-terminal processed fragments of MSP-1 were determined by serology and by a functional assay that quantifies invasion inhibition Abs against the MSP-1(19) merozoite ligand (MSP-1(19) IIA). Infants had detectable IgG and IgM Abs to MSP-1(42) and MSP-1(19) at 6 mo of age with no significant change by age 24-30 mo. In contrast, MSP-1(19) IIA levels increased from 6 to 24-30 mo of age (16-29%, p < 0.01). Infants with evidence of prenatal exposure to malaria (defined by P. falciparum detection in maternal, placental, and/or cord blood compartments) and T cell sensitization at birth (defined by cord blood lymphocyte cytokine responses to MSP-1) showed the greatest age-related increase in MSP-1(19) IIA compared with infants with prenatal exposure to malaria but who lacked detectable T cell MSP-1 sensitization. These data suggest that fetal sensitization or tolerance to MSP-1, associated with maternal malaria infection during pregnancy, affects the development of functional Ab responses to MSP-1 during infancy.
abstract_id: PUBMED:33420104
Exposure to pesticides in utero impacts the fetal immune system and response to vaccination in infancy. The use of pesticides to reduce mosquito vector populations is a cornerstone of global malaria control efforts, but the biological impact of most pesticides on human populations, including pregnant women and infants, is not known. Some pesticides, including carbamates, have been shown to perturb the human immune system. We measure the systemic absorption and immunologic effects of bendiocarb, a commonly used carbamate pesticide, following household spraying in a cohort of pregnant Ugandan women and their infants. We find that bendiocarb is present at high levels in maternal, umbilical cord, and infant plasma of individuals exposed during pregnancy, indicating that it is systemically absorbed and trans-placentally transferred to the fetus. Moreover, bendiocarb exposure is associated with numerous changes in fetal immune cell homeostasis and function, including a dose-dependent decrease in regulatory CD4 T cells, increased cytokine production, and inhibition of antigen-driven proliferation. Additionally, prenatal bendiocarb exposure is associated with higher post-vaccination measles titers at one year of age, suggesting that its impact on functional immunity may persist for many months after birth. These data indicate that in utero bendiocarb exposure has multiple previously unrecognized biological effects on the fetal immune system.
abstract_id: PUBMED:35394964
Prenatal Exposure to Insecticides and Weight Trajectories Among South African Children in the VHEMBE Birth Cohort. Background: Dichlorodiphenyltrichloroethane (DDT) or pyrethroid insecticides are sprayed inside dwellings for malaria vector control, resulting in high exposure to millions of people, including pregnant women. These chemicals disrupt endocrine function and may affect child growth. To our knowledge, few studies have investigated the potential impact of prenatal exposure to DDT or pyrethroids on growth trajectories.
Methods: We investigated associations between gestational insecticide exposure and child growth trajectories in the Venda Health Examination of Mothers, Babies and their Environment, a birth cohort of 751 children born between 2012 and 2013 in South Africa. Based on child weight measured at follow-up and abstracted from medical records, we modeled weight trajectories from birth to 5 years using SuperImposition by Translation And Rotation (SITAR), which estimated two child-specific parameters: size (average weight) and tempo (age at peak weight velocity). We estimated associations between peripartum maternal concentrations of serum DDT, dichlorodiphenyldichloroethylene, or urinary pyrethroid metabolites and SITAR parameters using marginal structural models.
Results: We observed that a 10-fold increase in maternal concentrations of the pyrethroid metabolite trans-3-(2,2-dichlorovinyl)-2,2-dimethylcyclopropane carboxylic acid (trans-DCCA) was associated with a 21 g (95% confidence interval = -40, -1.6) smaller size among boys but found no association among girls (P-interaction = 0.07). Estimates suggested that pyrethroids may be associated with earlier tempo but were imprecise. We observed no association with serum DDT or dichlorodiphenyldichloroethylene.
Conclusions: Inverse associations between pyrethroids and weight trajectory parameters among boys are consistent with hypothesized disruption of androgen pathways and with our previous research in this population, and support the endocrine-disrupting potential of pyrethroids in humans.
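Marginal structural models of the kind mentioned above are commonly estimated with inverse-probability-of-exposure weights. The sketch below is a generic, purely illustrative weighting scheme for a continuous exposure, not the VHEMBE pipeline; the single confounder and all effect sizes are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import norm

rng = np.random.default_rng(5)
n = 751
confounder = rng.normal(0, 1, n)                      # e.g. a socioeconomic score
exposure = 0.5 * confounder + rng.normal(0, 1, n)     # log10 pyrethroid metabolite
size_param = -2.0 * exposure + 3.0 * confounder + rng.normal(0, 5, n)
df = pd.DataFrame({"x": exposure, "c": confounder, "size_param": size_param})

# Stabilized weights: marginal density of exposure / density given confounders.
denom_fit = smf.ols("x ~ c", data=df).fit()
num_fit = smf.ols("x ~ 1", data=df).fit()
dens_denom = norm.pdf(df["x"], denom_fit.fittedvalues, np.sqrt(denom_fit.scale))
dens_num = norm.pdf(df["x"], num_fit.fittedvalues, np.sqrt(num_fit.scale))
df["sw"] = dens_num / dens_denom

msm = smf.wls("size_param ~ x", data=df, weights=df["sw"]).fit()
print(msm.params["x"])   # weighted estimate of the exposure effect on the size parameter
```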
Answer: Yes, prenatal malaria exposure can produce an immune tolerant phenotype. A prospective birth cohort study in Kenya found that children who were exposed to malaria in utero but not sensitized (i.e., their mothers had Plasmodium falciparum infection but the children's cord blood mononuclear cells did not produce certain cytokines) demonstrated evidence of prenatal immune experience. This was indicated by increased IL-10 production and partial reversal of malaria antigen-specific hyporesponsiveness with IL-2+IL-15, which are indicative of immune tolerance. These children, classified as "exposed not sensitized," had a higher risk for malaria infection and lower hemoglobin levels compared to both not-exposed and sensitized groups. They also had a lower frequency of malaria antigen-driven IFNgamma and/or IL-2 production and higher IL-10 release at 6-month follow-ups. This suggests that a subset of children exposed to malaria in utero acquire a tolerant phenotype to blood-stage antigens that persists into childhood and is associated with increased susceptibility to malaria infection and anemia (PUBMED:19636353). |
Instruction: Brain-computer interfaces and communication in paralysis: extinction of goal directed thinking in completely paralysed patients?
Abstracts:
abstract_id: PUBMED:18824406
Brain-computer interfaces and communication in paralysis: extinction of goal directed thinking in completely paralysed patients? Objective: To investigate the relationship between physical impairment and brain-computer interface (BCI) performance.
Method: We present a meta-analysis of 29 patients with amyotrophic lateral sclerosis and six patients with other severe neurological diseases at different stages of physical impairment who were trained with a BCI. In most cases, voluntary regulation of slow cortical potentials was used as the input signal for BCI control. More recently, sensorimotor rhythms and the P300 event-related brain potential were recorded.
Results: A strong correlation has been found between physical impairment and BCI performance, indicating that performance worsens as impairment increases. Seven patients were in the complete locked-in state (CLIS) with no communication possible. After removal of these patients from the analysis, the relationship between physical impairment and BCI performance disappeared. The lack of a relation between physical impairment and BCI performance was confirmed when adding BCI data of patients from other BCI research groups.
Conclusions: Basic communication (yes/no) was not restored in any of the CLIS patients with a BCI. Whether locked-in patients can transfer learned brain control to the CLIS remains an open empirical question.
Significance: Voluntary brain regulation for communication is possible in all stages of paralysis except the CLIS.
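The meta-analytic result above is essentially a correlation between an impairment score and BCI accuracy that vanishes once CLIS patients are excluded. The toy numbers below are invented to mirror that reported pattern, not the study's data, and only illustrate the computation.

```python
import numpy as np
from scipy.stats import spearmanr

# Invented per-patient values: impairment score (10 = CLIS) and BCI accuracy.
impairment = np.array([2, 3, 4, 5, 5, 6, 7, 8, 9, 9, 10, 10, 10])
accuracy = np.array([0.84, 0.78, 0.88, 0.75, 0.86, 0.80,
                     0.83, 0.77, 0.85, 0.79, 0.52, 0.50, 0.55])
clis = impairment == 10

rho_all, p_all = spearmanr(impairment, accuracy)
rho_sub, p_sub = spearmanr(impairment[~clis], accuracy[~clis])
print(f"all patients:   rho = {rho_all:.2f}, p = {p_all:.3f}")
print(f"without CLIS:   rho = {rho_sub:.2f}, p = {p_sub:.3f}")
```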
abstract_id: PUBMED:32045022
Neuropsychological and neurophysiological aspects of brain-computer-interface (BCI) control in paralysis. Brain-computer interfaces (BCIs) aim to help paralysed patients to interact with their environment by controlling external devices using brain activity, thereby bypassing the dysfunctional motor system. Some neuronal disorders, such as amyotrophic lateral sclerosis (ALS), severely impair the communication capacity of patients. Several invasive and non-invasive brain-computer interfaces (BCIs), most notably using electroencephalography (EEG), have been developed to provide a means of communication to paralysed patients. However, except for a few reports, all available BCI literature for the paralysed (mostly ALS patients) describes patients with intact eye movement control, i.e. patients in a locked-in state (LIS) but not a completely locked-in state (CLIS). In this article we will discuss: (1) the fundamental neuropsychological learning factors and neurophysiological factors determining BCI performance in clinical applications; (2) the difference between LIS and CLIS; (3) recent development in BCIs for communication with patients in the completely locked-in state; (4) the effect of BCI-based communication on emotional well-being and quality of life; and (5) the outlook and the methodology needed to provide a means of communication for patients who have none. Thus, we present an overview of available studies and recent results and try to anticipate future developments which may open new doors for BCI communication with the completely paralysed.
abstract_id: PUBMED:17992083
Brain-computer interfaces in the continuum of consciousness. Purpose Of Review: To summarize recent developments and look at important future aspects of brain-computer interfaces.
Recent Findings: Recent brain-computer interface studies are largely targeted at helping severely or even completely paralysed patients. The former are only able to communicate yes or no via a single muscle twitch, and the latter are totally nonresponsive. Such patients can control brain-computer interfaces and use them to select letters, words or items on a computer screen, for neuroprosthesis control or for surfing the Internet. This condition of motor paralysis, in which cognition and consciousness appear to be unaffected, is traditionally opposed to nonresponsiveness due to disorders of consciousness. Although these groups of patients may appear to be very alike, numerous transition states between them are demonstrated by recent studies.
Summary: All nonresponsive patients can be regarded on a continuum of consciousness which may vary even within short time periods. As overt behaviour is lacking, cognitive functions in such patients can only be investigated using neurophysiological methods. We suggest that brain-computer interfaces may provide a new tool to investigate cognition in disorders of consciousness, and propose a hierarchical procedure entailing passive stimulation, active instructions, volitional paradigms, and brain-computer interface operation.
abstract_id: PUBMED:27590968
Brain-computer interfaces in the completely locked-in state and chronic stroke. Brain-computer interfaces (BCIs) use brain activity to control external devices, facilitating paralyzed patients to interact with the environment. In this chapter, we discuss the historical perspective of development of BCIs and the current advances of noninvasive BCIs for communication in patients with amyotrophic lateral sclerosis and for restoration of motor impairment after severe stroke. Distinct techniques have been explored to control a BCI in patient population especially electroencephalography (EEG) and more recently near-infrared spectroscopy (NIRS) because of their noninvasive nature and low cost. Previous studies demonstrated successful communication of patients with locked-in state (LIS) using EEG- and invasive electrocorticography-BCI and intracortical recordings when patients still showed residual eye control, but not with patients with complete LIS (ie, complete paralysis). Recently, a NIRS-BCI and classical conditioning procedure was introduced, allowing communication in patients in the complete locked-in state (CLIS). In severe chronic stroke without residual hand function first results indicate a possible superior motor rehabilitation to available treatment using BCI training. Here we present an overview of the available studies and recent results, which open new doors for communication, in the completely paralyzed and rehabilitation in severely affected stroke patients. We also reflect on and describe possible neuronal and learning mechanisms responsible for BCI control and perspective for future BMI research for communication in CLIS and stroke motor recovery.
abstract_id: PUBMED:17234696
Brain-computer interfaces: communication and restoration of movement in paralysis. The review describes the status of brain-computer or brain-machine interface research. We focus on non-invasive brain-computer interfaces (BCIs) and their clinical utility for direct brain communication in paralysis and motor restoration in stroke. A large gap between the promises of invasive animal and human BCI preparations and the clinical reality characterizes the literature: while intact monkeys learn to execute more or less complex upper limb movements with spike patterns from motor brain regions alone without concomitant peripheral motor activity usually after extensive training, clinical applications in human diseases such as amyotrophic lateral sclerosis and paralysis from stroke or spinal cord lesions show only limited success, with the exception of verbal communication in paralysed and locked-in patients. BCIs based on electroencephalographic potentials or oscillations are ready to undergo large clinical studies and commercial production as an adjunct or a major assisted communication device for paralysed and locked-in patients. However, attempts to train completely locked-in patients with BCI communication after entering the complete locked-in state with no remaining eye movement failed. We propose that a lack of contingencies between goal directed thoughts and intentions may be at the heart of this problem. Experiments with chronically curarized rats support our hypothesis; operant conditioning and voluntary control of autonomic physiological functions turned out to be impossible in this preparation. In addition to assisted communication, BCIs consisting of operant learning of EEG slow cortical potentials and sensorimotor rhythm were demonstrated to be successful in drug resistant focal epilepsy and attention deficit disorder. First studies of non-invasive BCIs using sensorimotor rhythm of the EEG and MEG in restoration of paralysed hand movements in chronic stroke and single cases of high spinal cord lesions show some promise, but need extensive evaluation in well-controlled experiments. Invasive BMIs based on neuronal spike patterns, local field potentials or electrocorticogram may constitute the strategy of choice in severe cases of stroke and spinal cord paralysis. Future directions of BCI research should include the regulation of brain metabolism and blood flow and electrical and magnetic stimulation of the human brain (invasive and non-invasive). A series of studies using BOLD response regulation with functional magnetic resonance imaging (fMRI) and near infrared spectroscopy demonstrated a tight correlation between voluntary changes in brain metabolism and behaviour.
abstract_id: PUBMED:23862678
Sensors and decoding for intracortical brain computer interfaces. Intracortical brain computer interfaces (iBCIs) are being developed to enable people to drive an output device, such as a computer cursor, directly from their neural activity. One goal of the technology is to help people with severe paralysis or limb loss. Key elements of an iBCI are the implanted sensor that records the neural signals and the software that decodes the user's intended movement from those signals. Here, we focus on recent advances in these two areas, placing special attention on contributions that are or may soon be adopted by the iBCI research community. We discuss how these innovations increase the technology's capability, accuracy, and longevity, all important steps that are expanding the range of possible future clinical applications.
abstract_id: PUBMED:32164869
Brain-computer interfaces for communication. Locked-in syndrome (LIS) is characterized by an inability to move or speak in the presence of intact cognition and can be caused by brainstem trauma or neuromuscular disease. Quality of life (QoL) in LIS is strongly impaired by the inability to communicate, which cannot always be remedied by traditional augmentative and alternative communication (AAC) solutions if residual muscle activity is insufficient to control the AAC device. Brain-computer interfaces (BCIs) may offer a solution by employing the person's neural signals instead of relying on muscle activity. Here, we review the latest communication BCI research using noninvasive signal acquisition approaches (electroencephalography, functional magnetic resonance imaging, functional near-infrared spectroscopy) and subdural and intracortical implanted electrodes, and we discuss current efforts to translate research knowledge into usable BCI-enabled communication solutions that aim to improve the QoL of individuals with LIS.
abstract_id: PUBMED:38018832
Endovascular Brain-Computer Interfaces in Poststroke Paralysis. Stroke is a leading cause of paralysis, most frequently affecting the upper limbs and vocal folds. Despite recent advances in care, stroke recovery invariably reaches a plateau, after which there are permanent neurological impairments. Implantable brain-computer interface devices offer the potential to bypass permanent neurological lesions. They function by (1) recording neural activity, (2) decoding the neural signal occurring in response to volitional motor intentions, and (3) generating digital control signals that may be used to control external devices. While brain-computer interface technology has the potential to revolutionize neurological care, clinical translation has been limited. Endovascular arrays present a novel form of minimally invasive brain-computer interface devices that have been deployed in human subjects during early feasibility studies. This article provides an overview of endovascular brain-computer interface devices and critically evaluates the patient with stroke as an implant candidate. Future opportunities are mapped, along with the challenges arising when decoding neural activity following infarction. Limitations arise when considering intracerebral hemorrhage and motor cortex lesions; however, future directions are outlined that aim to address these challenges.
abstract_id: PUBMED:34587854
Brain Computer Interfaces for Assisted Communication in Paralysis and Quality of Life. The rapid evolution of Brain-Computer Interface (BCI) technology and the exponential growth of BCI literature during the past 20 years is a consequence of increasing computational power and the achievements of statistical learning theory and machine learning since the 1960s. Despite this rapid scientific progress, the range of successful clinical and societal applications remained limited, with some notable exceptions in the rehabilitation of chronic stroke and first steps towards BCI-based assisted verbal communication in paralysis. In this contribution, we focus on the effects of noninvasive and invasive BCI-based verbal communication on the quality of life (QoL) of patients with amyotrophic lateral sclerosis (ALS) in the locked-in state (LIS) and the completely locked-in state (CLIS). Despite a substantial lack of replicated scientific data, this paper complements the existing methodological knowledge and focuses future investigators' attention on (1) Social determinants of QoL and (2) Brain reorganization and behavior. While it is not documented in controlled studies that the good QoL in these patients is a consequence of BCI-based neurorehabilitation, the proposed determinants of QoL might become the theoretical background needed to develop clinically more useful BCI systems and to evaluate the effects of BCI-based communication on QoL for advanced ALS patients and other forms of severe paralysis.
abstract_id: PUBMED:32548859
Classification of Facial Expressions for Intended Display of Emotions Using Brain-Computer Interfaces. Facial expressions are important for intentional display of emotions in social interaction. For people with severe paralysis, the ability to display emotions intentionally can be impaired. Current brain-computer interfaces (BCIs) allow for linguistic communication but are cumbersome for expressing emotions. Here, we investigated the feasibility of a BCI to display emotions by decoding facial expressions. We used electrocorticographic recordings from the sensorimotor cortex of people with refractory epilepsy and classified five facial expressions, based on neural activity. The mean classification accuracy was 72%. This approach could be a promising avenue for development of BCI-based solutions for fast communication of emotions. ANN NEUROL 2020;88:631-636.
Answer: Brain-computer interfaces (BCIs) have been developed to aid communication in paralyzed patients by allowing them to control external devices using brain activity, thereby bypassing the dysfunctional motor system. However, the effectiveness of BCIs varies depending on the extent of the patient's paralysis. For patients with amyotrophic lateral sclerosis (ALS) and other severe neurological diseases, BCIs have been used successfully in various stages of paralysis, but a strong correlation has been found between physical impairment and BCI performance, indicating that performance worsens as impairment increases (PUBMED:18824406).
Patients in a locked-in state (LIS), who retain some control over eye movements, have been able to use BCIs for communication. However, for patients in a completely locked-in state (CLIS), where no voluntary muscle control remains, including eye movements, BCIs have not been successful in restoring basic communication (yes/no responses) (PUBMED:18824406). This raises the question of whether goal-directed thinking is extinguished in completely paralyzed patients or if there are other factors at play.
The literature suggests that the lack of success in BCI communication with CLIS patients may be due to a lack of contingencies between goal-directed thoughts and intentions, as seen in experiments with chronically curarized rats where operant conditioning and voluntary control of autonomic physiological functions were impossible (PUBMED:17234696). This implies that the ability to engage in goal-directed thinking may still be present in CLIS patients, but the connection between intention and feedback is disrupted, making BCI control unattainable.
Recent developments in BCIs, such as the use of near-infrared spectroscopy (NIRS) and classical conditioning procedures, have shown some promise in allowing communication for CLIS patients (PUBMED:27590968). Additionally, the use of intracortical BCIs, which involve implanted sensors and decoding software, is being explored to increase the capability, accuracy, and longevity of BCIs, potentially expanding the range of clinical applications (PUBMED:23862678).
In summary, while BCIs have not yet been able to restore communication in completely paralyzed patients, the issue may not be the extinction of goal-directed thinking but rather the challenge of establishing a reliable method for CLIS patients to convey their intentions through BCI systems. Ongoing research and technological advancements continue to explore new methods for facilitating communication in this patient population (PUBMED:18824406, PUBMED:17234696, PUBMED:27590968, PUBMED:23862678). |
Instruction: Is sentinel node biopsy beneficial in melanoma patients?
Abstracts:
abstract_id: PUBMED:15887051
Sentinel node biopsy. What are the facts? Sentinel node biopsy (SNB) is increasingly being used as a minimally invasive staging procedure in patients with malignant melanoma. For decades elective lymph node dissection (ELND) was performed in many centers on patients at risk for lymph node metastasis but without clinically detectable lymph node involvement. Today, selective lymph node dissection (SLND) is offered only to patients with histologically proven metastasis in a SN (10-29%). A positive SN is one of the most important prognostic parameters. Ten years after the introduction of the technique, the role of SNB in the treatment of cutaneous melanoma still remains controversial. Issues include the usefulness of highly sensitive evaluation of SN using molecular biology or cytology techniques, as well as the therapeutic impact of the SNB per se and the associated combined surgical or medical adjuvant therapies.
abstract_id: PUBMED:15278236
Sentinel lymph node in melanoma. The procedure of sentinel node biopsy (SNB) has emerged as an important advance, especially with respect to staging of malignant melanoma. Elective (prophylactic) lymph node dissection, which had been practiced in primary melanoma with a suspected increased risk of (clinically occult) lymphatic metastasis, has been replaced by SNB. Patients with proven metastatic involvement of the sentinel node (12-25%) can be specifically selected for regional lymph node dissection. Metastatic involvement of the sentinel node (SN) is a significant independent prognostic factor. The value of detecting metastasis with highly sensitive diagnostic tools such as RT-PCR is just as uncertain as the therapeutic benefit, with respect to improving prognosis, of operative or conservative therapies in sentinel node-positive patients; both are currently under study.
abstract_id: PUBMED:38155145
LONG-TERM OUTCOMES OF SENTINEL LYMPH NODE BIOPSY VERSUS LYMPH NODE OBSERVATION IN MELANOMA PATIENTS. Objective: To evaluate the influence of sentinel lymph node biopsy, without subsequent completion lymph node dissection regardless of sentinel lymph node status, on the outcome of patients with skin melanoma.
Materials And Methods: Three hundred nine patients with a primary skin melanoma were randomly assigned to wide excision of the primary tumor plus sentinel lymph node biopsy (without subsequent completion lymph node dissection, regardless of sentinel lymph node status) or to wide excision alone. Low-dose interferon was administered in the adjuvant setting.
Results: 5-year disease-free survival rate was (85.1 ± 3.0) % in the wide excision and sentinel lymph node biopsy group and (78.4 ± 2.4) % in the wide excision group (hazard ratio, 0.69; p = 0.006). 5-year overall survival rates were similar in the two groups: (88.6 ± 3.0) % vs. (85.1 ± 2.4) %, respectively; hazard ratio, 0.97; p = 0.42.
Conclusion: Sentinel lymph node biopsy in patients with skin melanoma increases the disease-free survival rate without influencing overall survival, confirming the diagnostic, rather than therapeutic, value of this procedure.
abstract_id: PUBMED:26929285
Improved Survival in Male Melanoma Patients in the Era of Sentinel Node Biopsy. Background And Aims: Sentinel node biopsy is a standard method for nodal staging in patients with clinically localized cutaneous melanoma, but the survival advantage of sentinel node biopsy remains unresolved. The aim of this case-control study was to investigate the survival benefit of sentinel node biopsy.
Materials And Methods: A total of 305 prospective melanoma patients undergoing sentinel node biopsy were compared with 616 retrospective control patients with clinically localized melanoma who had not undergone sentinel node biopsy. Survival differences were calculated at a median follow-up time of 71 months in sentinel node biopsy patients and 74 months in control patients. Analyses were performed overall and separately in males and females.
Results: Overall, there were no differences in relapse-free survival or cancer-specific survival between sentinel node biopsy patients and control patients. Male sentinel node biopsy patients had significantly higher relapse-free survival (P = 0.021) and cancer-specific survival (P = 0.024) than control patients. In females, no differences were found. Cancer-specific survival rates at 5 years were 87.8% in sentinel node biopsy patients and 85.2% in controls overall, with 88.3% in male sentinel node biopsy patients versus 80.6% in male controls, and 87.3% in female sentinel node biopsy patients versus 89.8% in female controls.
Conclusion: Sentinel node biopsy did not improve survival in melanoma patients overall. While females had no differences in survival, males had significantly improved relapse-free survival and cancer-specific survival following sentinel node biopsy.
abstract_id: PUBMED:16824370
Prognosis after sentinel node biopsy in malignant melanoma Introduction: Sentinel node biopsy (SNB) is used in patients with cutaneous malignant melanoma (MM) to detect subclinical spread to the regional lymph nodes, after which a radical lymph node dissection can be performed. Since 2001, the Department of Plastic Surgery, Roskilde Amts Sygehus, has used SNB routinely in patients with cutaneous MM who have a statistical risk of at least 10% of harbouring subclinical lymph node metastasis.
Materials And Methods: In the four-year period from 2001 to 2004, 248 consecutive patients with primary MM underwent SNB at the time of radical surgery for their MM. If metastatic spread was found in the removed sentinel node, a radical lymph node dissection was performed shortly afterward. All patients were followed up after their operation in the department's outpatient clinic.
Results: Regional lymph node metastatic spread was found by SNB in 32% of the patients. At radical lymph node dissection, further metastatic lymph nodes were found in 24% of the dissected cases. The median follow-up time was 21 months (range 1-51 months). 7% of SN-negative cases developed recurrence during follow-up, in contrast to 23% of the SN-positive cases. The median time to recurrence was 14 months. The two-year and four-year disease-free survival rates were 93% and 85% in the SN-negative group and 73% and 55% in the SN-positive group, respectively. Risk factors for recurrence were: extracapsular SN growth, more than one metastatic SN and further lymph node metastases being found by formal node dissection. 18% of the SN-positive patients died during the follow-up period, in contrast to 3% of the SN-negative cases. The MM-specific two-year and four-year survival rates were 84% and 64% in the SN-positive group and 99% and 97% in the SN-negative group, respectively.
Conclusion: Sentinel node biopsy is a procedure that detects MM patients who have a very high risk of recurrence and death by MM within a few years after primary treatment. SNB status is a very strong prognostic factor, and SN-positive cases should be followed carefully.
abstract_id: PUBMED:12854268
Sentinel lymph node biopsy in patients with cutaneous malignant melanoma Introduction: In the Department of Plastic Surgery of Odense University Hospital, patients with cutaneous melanoma 1 to 4 mm thick underwent sentinel lymph node (SLN) biopsy. The aim of this study was to evaluate results and complications.
Material And Methods: During the first three years, one hundred and sixteen patients underwent SLN biopsy; one patient was excluded from the study. All patients were operated on under general anaesthesia and followed according to the recommendations of the Danish Melanoma Group.
Results: Median follow-up was 16 months. 76% had a negative SLN and 24% had a positive SLN. No significant difference in median thickness was recorded between the two groups. In two patients the SLN was false negative; in both, the primary melanoma was located in the face. The complication rate was 8.5%.
Discussion: We conclude that SLN biopsy is a reliable method for staging the regional lymph nodes and determining the need for elective lymphadenectomy, and that our results match international standards.
abstract_id: PUBMED:25370951
Sentinel lymph node biopsy necessary in melanoma patients Sentinel lymph node biopsy provides melanoma patients with important prognostic information and improves staging and regional tumour control, while the morbidity is limited and the subsequent quality of life is good. The most important result of the only randomised study is the improved ten-year melanoma-specific survival in node-positive patients with an intermediate thickness melanoma. We recommend sentinel lymph node biopsy as a standard diagnostic procedure for these patients. Sentinel lymph node biopsy can be considered in patients with a thinner or thicker melanoma.
abstract_id: PUBMED:11299621
Initial experiences with sentinel lymph node biopsy in malignant melanoma Elective versus therapeutic lymph node dissection has been a controversial area in the surgical treatment of cutaneous malignant melanoma for more than two decades. Identification and biopsy of the sentinel lymph node in different solid malignancies became feasible with the method described by Morton in 1992. The sentinel lymph node is the first tumor-draining lymph node in the regional lymph node basin. If metastasis is not demonstrated in the sentinel node by detailed histological study, metastases are unlikely in other regional lymph nodes and formal lymph node dissection can be omitted. Patients undergoing surgery for primary cutaneous (intermediate- or high-risk) melanoma were enrolled in this feasibility study. Of the 40 patients, sentinel lymph node biopsy was unsuccessful in two, and at least one positive sentinel lymph node was found in nine. The duration of the procedure was between five and 15 minutes. Following this feasibility study, further prospective and randomized studies are planned.
abstract_id: PUBMED:32661834
Sentinel lymph node biopsy for lung cancer. Sentinel lymph node biopsy is a technique to identify the first lymph node (or nodes) draining a tumor. The underlying principle is that as the first site of cancer spread, evaluation of the sentinel node will be most predictive for wider nodal involvement. The introduction of sentinel node biopsy revolutionized the surgical management of cutaneous melanoma and breast cancer, becoming a key component in the management of such patients. For over 20 years, thoracic surgeons have similarly worked to apply this technique to lung cancer but have thus far not had the same impact on lung surgery. In this review, we will summarize the ongoing discussions on the role of sentinel node biopsy in lung cancer, the methods for identifying the sentinel node, and the techniques for evaluating the sentinel node specimen. We will also highlight some of the pressing questions investigators should consider when designing a trial for sentinel node mapping. This will clarify the current status of sentinel node biopsy in lung cancer and thus highlight important future directions for research.
abstract_id: PUBMED:20677534
Discussing sentinel lymph node biopsy with your melanoma patients. Sentinel lymph node biopsy, indicated for stage 1B/2 melanoma, may be an underutilized diagnostic modality. Experts in the field agree that sentinel lymph node biopsy should be offered to patients with T1 melanomas with primary tumor ulceration, a mitotic rate greater than or equal to 1/mm2, and/or Clark level IV/V invasion, especially if tumor thickness exceeds 0.75 mm. It is the responsibility of practitioners to characterize patients as eligible or non-eligible for the sentinel node procedure. Furthermore, it is important to obtain fully informed consent and explain to patients the statistics of the prognostic information garnered by the test.
Answer: Sentinel node biopsy (SNB) is a minimally invasive staging procedure used in patients with malignant melanoma to detect subclinical spread to the regional lymph nodes. It has replaced elective lymph node dissection (ELND) in patients with primary melanoma who are at risk for lymph node metastasis but do not have clinically detectable lymph node involvement (PUBMED:15887051; PUBMED:15278236). The procedure is important for staging and provides significant prognostic information, as the presence of metastasis in the sentinel node (SN) is a strong independent prognostic factor (PUBMED:15278236; PUBMED:16824370).
Studies have shown that SNB can increase disease-free survival rates in melanoma patients, although it may not have a significant impact on overall survival (PUBMED:38155145). In male patients, SNB has been associated with improved relapse-free survival and cancer-specific survival, while no survival differences were observed in female patients (PUBMED:26929285). The procedure is considered reliable for staging regional lymph nodes and determining the need for elective lymphadenectomy (PUBMED:12854268).
The most important result from a randomized study is the improved ten-year melanoma-specific survival in node-positive patients with intermediate thickness melanoma, which supports the recommendation of SNB as a standard diagnostic procedure for these patients (PUBMED:25370951). SNB status is a very strong prognostic factor, and SN-positive cases should be followed carefully due to their higher risk of recurrence and death by melanoma within a few years after primary treatment (PUBMED:16824370).
In conclusion, SNB provides important prognostic information, improves staging and regional tumor control, and has limited morbidity with good subsequent quality of life. It is beneficial for melanoma patients, particularly for those with intermediate thickness melanoma and for male patients, in terms of relapse-free and cancer-specific survival (PUBMED:25370951; PUBMED:26929285). |
Instruction: Magnesium sulphate during transradial cardiac catheterization: a new use for an old drug?
Abstracts:
abstract_id: PUBMED:18829998
Magnesium sulphate during transradial cardiac catheterization: a new use for an old drug? Objective: To assess the effect of intra-arterial magnesium on the radial artery during transradial cardiac catheterization.
Background: Transradial coronary angiography has become popular in the last decade and offers several advantages over transfemoral angiography. Radial artery spasm is a major limitation of this approach, and a vasodilatory cocktail is usually given. The aim of this study was to examine the effect of magnesium sulphate on the radial artery during cardiac catheterization.
Methods: This was a prospective, double-blind, randomized trial of 86 patients undergoing radial catheterization. Patients were randomized to receive magnesium sulphate (150 mg) or verapamil (1 mg) into the radial sheath. Radial dimensions were assessed using Doppler ultrasound. The primary endpoint of the study was a change in radial artery diameter following administration. Secondary endpoints included operator-defined radial artery spasm and patient pain.
Results: Following administration of the study drug, there was an increase in radial artery diameter in both groups (p < 0.01), although the increase seen was greater in the group receiving magnesium (magnesium 0.36 +/- 0.03 mm; verapamil 0.27 +/- 0.03 mm; p < 0.05). Administration of verapamil resulted in a fall in mean arterial pressure (MAP) (change in MAP -6.6 +/- 1.4 mmHg; p < 0.01), whereas magnesium did not have a hemodynamic effect. Severe arm pain (pain score > 5) was observed in 14 (30%) patients receiving verapamil and 9 (27%) receiving magnesium (p = NS).
Conclusion: This study demonstrates that magnesium is a more effective vasodilator when compared to verapamil, with a reduced hemodynamic effect, and is equally effective at preventing radial artery spasm. As such, the use of this agent offers distinct advantages over verapamil during radial catheterization.
abstract_id: PUBMED:12688270
A survey of peri-operative use of magnesium sulphate in adult cardiac surgery in the UK. We conducted a postal survey of cardiac anaesthetists in the UK, to determine the extent of magnesium sulphate (MgSO4) use and the main indications for its administration. Questionnaires were sent to anaesthetists at 35 UK hospitals undertaking adult cardiac surgery. Responses were received from 24 hospitals (69%) totalling 124 individual responses. Twenty-five (20%) of the anaesthetists responding to the questionnaire routinely gave magnesium other than in cardioplegia. The most common indications for administration were arrhythmia prophylaxis and treatment, myocardial protection, and the treatment of hypomagnesaemia.
abstract_id: PUBMED:29878650
Is there benefit to continue magnesium sulphate postpartum in women receiving magnesium sulphate before delivery? A randomised controlled study. Objective: To determine if the use of magnesium sulphate postdelivery reduces the risk of eclampsia in women with severe pre-eclampsia exposed to at least 8 hours of magnesium sulphate before delivery.
Design: Randomised multicentre controlled trial.
Setting: Latin America.
Population: Women with severe pre-eclampsia that had received a 4-g loading dose followed by 1 g per hour for 8 hours as maintenance dose before delivery.
Methods: In all, 1113 women were randomised; 555 women were randomised to continue the infusion of magnesium sulphate for 24 hours postpartum and 558 were randomised to stopping the magnesium sulphate infusion immediately after delivery.
Outcome Measures: Primary outcome was the incidence of eclampsia in the first 24 hours postdelivery. Secondary outcomes included maternal death, maternal complications, time to start ambulation and time to start lactation.
Results: The maternal characteristics at randomisation between the groups were not different. There were no differences in the rate of eclampsia; 1/555 (0.18%) versus 2/558 (0.35%) [relative risk (RR 0.7, 95% CI 0.1-3.3; P = 0.50] or maternal complications between the groups (RR 1.0, 95% CI 0.8-1.2; P = 0.76). Time to start ambulation was significantly shorter in the no magnesium sulphate group (18.1 ± 10.6 versus 11.8 ± 10.8 hours; P = 0.0001) and time to start lactation was equally shorter in the no magnesium sulphate group (24.1 ± 17.1 versus 17.1 ± 16.8 hours; P = 0.0001).
Conclusions: Women with severe pre-eclampsia treated with a minimum of 8 hours of magnesium sulphate before delivery do not benefit from continuing the magnesium sulphate for 24 hours postpartum.
Tweetable Abstract: No benefit of continuing magnesium sulphate postpartum in severe pre-eclampsia exposed to this drug for a minimum of 8 hours before delivery.
abstract_id: PUBMED:36214549
Magnesium sulphate activates the L-arginine/NO/cGMP pathway to induce peripheral antinociception in mice In the present study, we investigated whether magnesium sulphate activates the L-arginine/NO/cGMP pathway and elicits peripheral antinociception. The paw pressure test was performed in male Swiss mice, with hyperalgesia induced by intraplantar injection of prostaglandin E2. All drugs were administered locally into the right hind paw of the animals. Magnesium sulphate (20, 40, 80 and 160 μg/paw) induced an antinociceptive effect. The dose of 80 μg/paw elicited a local antinociceptive effect that was antagonized by the non-selective NOS inhibitor L-NOArg and by the selective neuronal NOS inhibitor L-NPA. The inhibitors L-NIO and L-NIL, which selectively inhibit endothelial and inducible NOS, respectively, were ineffective against peripheral magnesium sulphate injection. The soluble guanylyl cyclase inhibitor ODQ blocked the action of magnesium sulphate, and the cGMP-phosphodiesterase inhibitor zaprinast enhanced the antinociceptive effect of an intermediate dose of magnesium sulphate. Our results suggest that magnesium sulphate stimulates the NO/cGMP pathway via neuronal NO synthase to induce peripheral antinociceptive effects.
abstract_id: PUBMED:30501993
Magnesium sulphate replacement therapy in cardiac surgery patients: A systematic review. Objective: The objective of this review was to identify evidence to inform clinical practice guidelines for magnesium sulphate (MgSO4) replacement therapy for postoperative cardiac surgery patients.
Data Sources: Three databases were systematically searched: CINAHL Complete, MEDLINE Complete, and EmBase.
Review Method Used: A systematic literature review method was used to locate, appraise, and synthesise available evidence for each step of the medication management cycle (indication, prescription, preparation, administration, and monitoring) for MgSO4 replacement therapy. Database searches used combinations of synonyms for postoperation or surgery, cardiac, heart, arrhythmia, atrial fibrillation, and magnesium sulphate. Search results were independently screened for inclusion by two researchers at title, abstract, and full-text stages with good statistical agreement (kappa scores of 0.99, 0.87, and 1.00, respectively).
Results: Twenty-four included studies reported varying methodologies, data collected, and medication management practices. Of these, 23 studies (95.8%) excluded patients with comorbidities commonly observed in clinical practice. This review identified low-level evidence for two practice recommendations: (i) concurrent administration of MgSO4 with medications recommended as the best practice for prevention of postoperative atrial fibrillation and (ii) clinical and laboratory monitoring of magnesium blood serum levels, vital signs, and electrocardiography should be performed during MgSO4 replacement therapy. Evidence to inform MgSO4 replacement therapy for each medication management cycle step was limited; therefore, a guideline could not be developed.
Conclusions: Although MgSO4 is routinely administered to prevent hypomagnesaemia in postoperative cardiac surgery patients, there was insufficient evidence to guide critical care nurses in each medication management cycle step for MgSO4 replacement therapy. These findings precluded the development of comprehensive recommendations to standardise this practice. Poor standardisation can increase the risk for patient harm related to variation in clinical processes and procedural errors. In light of this evidence gap, consensus of expert opinion should be used as a strategy to guide MgSO4 medication management.
abstract_id: PUBMED:36354241
Oral magnesium sulphate administration in rats with minimal hepatic encephalopathy: NMR-based metabolic characterization of the brain Objective: To investigate the metabolic changes in rats with minimal hepatic encephalopathy (MHE) treated with oral magnesium sulphate administration.
Materials And Methods: A total of 30 Sprague-Dawley rats were divided into a control group and an MHE group (the latter further divided into an MHE group and an MHE-Mg group treated with oral administration of 124 mg/kg/day magnesium sulphate). Morris water maze (MWM), Y maze and narrow beam walking (NBW) tests were used to evaluate cognitive and motor functions. Brain manganese and magnesium content were measured. The metabolic changes in rats with MHE were investigated using hydrogen (1H) nuclear magnetic resonance. Metabolomic signatures were identified with enrichment and pathway analysis.
Results: A significantly decreased number of entries into the MWM within the range of interest, longer latency and total time during NBW, and higher brain manganese content were found in rats with MHE. After magnesium sulphate treatment, the rats with MHE had better behavioural performance and lower brain manganese content. In total, 25 and 26 metabolomic signatures were identified in the cortex and striatum, respectively, of rats with MHE. The pathway analysis revealed alanine, aspartate and glutamate metabolism as the major abnormal metabolic pathways associated with these metabolomic signatures.
Conclusion: Alanine, aspartate and glutamate metabolism are major abnormal metabolic pathways in rats with MHE, which could be restored by magnesium sulphate treatment.
abstract_id: PUBMED:8988709
Use of magnesium sulphate in Scottish obstetric units. The greater efficacy of magnesium sulphate compared with diazepam or phenytoin in the treatment of eclampsia is now generally accepted. We used a postal questionnaire to establish the frequency and nature of use of magnesium sulphate in obstetric units in Scotland. Ninety percent of units responded and of these 90% were either using the drug or intended to do so. There was considerable heterogeneity in the indications for its use and some important omissions in the monitoring of patients receiving the drug. The introduction of magnesium sulphate for a rare indication to units with no previous experience of its use may be associated with significant risks to obstetric patients.
abstract_id: PUBMED:34095588
Emergency medicine: magnesium sulphate injections and their pharmaceutical quality concerns. Objectives: The World Health Organization has recognized magnesium sulphate as the drug of choice for the prevention and treatment of fits associated with preeclampsia and eclampsia, which are amongst the leading causes of maternal morbidity and mortality. In this study, the pharmaceutical quality of magnesium sulphate injections marketed in Anambra state was assessed.
Methods: Ninety samples of magnesium sulphate obtained from the 3 senatorial zones in Anambra state were subjected to identification tests and microbiological analysis consisting of growth promotion, sterility and endotoxin tests. Content analysis using a titrimetric method and pH analysis were also carried out on the samples.
Results: Twenty percent (20%) of samples obtained from Onitsha failed the identification test as they had no registration number in Nigeria. All samples subjected to the microbiology tests (sterility and endotoxin tests) passed. Twenty percent (20%) and thirty-three percent (33.3%) of samples sourced from Onitsha and Nnewi, respectively, failed the pH analysis test. All the samples passed the microbiological tests and had their Active Pharmaceutical Ingredient (API) content within the acceptable limit.
Conclusions: This study reveals that there are still some substandard magnesium sulphate injections in circulation in the locality. The supply chain of these drugs should be monitored to ensure a reduction in the incidences of substandard magnesium sulphate and positive therapeutic outcome which translates to reduced maternal mortality associated with pre-eclampsia and eclampsia in Nigeria.
abstract_id: PUBMED:29945665
Availability and use of magnesium sulphate at health care facilities in two selected districts of North Karnataka, India. Background: Pre-eclampsia and eclampsia are major causes of maternal morbidity and mortality. Magnesium sulphate is accepted as the anticonvulsant of choice in these conditions and is present on the WHO essential medicines list and the Indian National List of Essential Medicines, 2015. Despite this, magnesium sulphate is not widely used in India for pre-eclampsia and eclampsia. In addition to other factors, lack of availability may be a reason for sub-optimal usage. This study was undertaken to assess the availability and use of magnesium sulphate at public and private health care facilities in two districts of North Karnataka, India.
Methods: A facility assessment survey was undertaken as part of the Community Level Interventions for Pre-eclampsia (CLIP) Feasibility Study which was undertaken prior to the CLIP Trials (NCT01911494). This study was undertaken in 12 areas of Belagavi and Bagalkote districts of North Karnataka, India and included a survey of 88 facilities. Data were collected in all facilities by interviewing the health care providers and analysed using Excel.
Results: Of the 88 facilities, 28 were public, and 60 were private. In the public facilities, magnesium sulphate was available in six out of 10 Primary Health Centres (60%), in all eight taluka (sub-district) hospitals (100%), five of eight community health centres (63%) and both district hospitals (100%). Fifty-five of 60 private facilities (92%) reported availability of magnesium sulphate. Stock outs were reported in six facilities in the preceding six months - five public and one private. Twenty-five percent weight/volume and 50% weight/volume concentration formulations were available variably across the public and private facilities. Sixty-eight facilities (77%) used the drug for severe pre-eclampsia and 12 facilities (13.6%) did not use the drug even for eclampsia. Varied dosing schedules were reported from facility to facility.
Conclusions: Poor availability of magnesium sulphate was identified in many facilities, and stock outs in some. Individual differences in usage were identified. Ensuring a reliable supply of magnesium sulphate, standard formulations and recommendations of dosage schedules and training may help improve use; and decrease morbidity and mortality due to pre-eclampsia/ eclampsia.
Trial Registration: The CLIP trial was registered with ClinicalTrials.gov ( NCT01911494 ).
abstract_id: PUBMED:16406662
Magnesium sulphate for the management of preeclampsia In cases of eclampsia, and especially of preeclampsia, there is no consensus on treating or preventing convulsions by routine use of magnesium sulphate, at least in France. However, a large, multicentre, randomised trial compared the efficacy of magnesium sulphate with diazepam or phenytoin in eclamptic women. In this trial, magnesium sulphate was associated with a significantly lower rate of recurrent seizures and a lower rate of maternal death than observed with the other anticonvulsants. The primary objective of magnesium sulphate prophylaxis in women with preeclampsia is to prevent or reduce the rate of eclampsia and of complications associated with eclampsia. Three large randomised controlled trials have compared the use of magnesium sulphate to prevent convulsions in patients with severe preeclampsia: the first vs phenytoin, the second vs placebo, and the third vs nimodipine. Patients receiving magnesium sulphate had a significantly lower risk of eclampsia than observed in the comparison groups, probably through a decrease in cerebral perfusion pressure, thus avoiding cerebral barotrauma. However, several arguments weigh against widespread use of magnesium sulphate: the prevalence of eclampsia in the Western world is very low, the use of magnesium sulphate does not affect neonatal morbidity and mortality, and it is associated with a high rate of side effects, sometimes severe, such as respiratory depression. Thus, the benefit-to-risk ratio has to guide the use of magnesium sulphate and is directly correlated with the prevalence of eclampsia in the risk group considered. 1) The rate of seizures in women with mild preeclampsia not receiving magnesium sulphate is very low. Magnesium sulphate may potentially be associated with a higher number of adverse maternal effects. Therefore, the benefit-to-risk ratio does not support routine use of magnesium sulphate prophylaxis in this group. 2) On the other hand, the higher rate of seizures in women with severe preeclampsia (2.0%), especially in those with imminent eclampsia, justifies prophylaxis with magnesium sulphate.
Answer: Magnesium sulphate has been explored for a new application during transradial cardiac catheterization, which is a procedure that has gained popularity due to its advantages over transfemoral angiography. A major limitation of the transradial approach is radial artery spasm, which is typically addressed with a vasodilatory cocktail. A study aimed to assess the effect of intra-arterial magnesium on the radial artery during this procedure. The study was a prospective, double-blind, randomized trial involving 86 patients undergoing radial catheterization. These patients were randomized to receive either magnesium sulphate (150 mg) or verapamil (1 mg) into the radial sheath. The primary endpoint was a change in radial artery diameter following administration, and secondary endpoints included operator-defined radial artery spasm and patient pain.
The results showed that both magnesium sulphate and verapamil increased the radial artery diameter, but the increase was greater in the magnesium group. Additionally, verapamil caused a fall in mean arterial pressure, whereas magnesium did not have a hemodynamic effect. There was no significant difference in severe arm pain between the two groups. The study concluded that magnesium is a more effective vasodilator compared to verapamil, with reduced hemodynamic effects, and is equally effective at preventing radial artery spasm. Therefore, magnesium sulphate offers distinct advantages over verapamil during radial catheterization, suggesting a new use for this old drug (PUBMED:18829998). |
Instruction: Can waist circumference identify children with the metabolic syndrome?
Abstracts:
abstract_id: PUBMED:22226511
Body mass index correlates with waist circumference in school-aged Italian children. This study demonstrates the existence of a linear correlation between Body Mass Index (BMI) and waist circumference in Italian school-aged children and suggests an indirect method (from weight and height) to estimate waist circumference, whose increase may support the diagnosis of the metabolic syndrome.
abstract_id: PUBMED:27796813
Neck circumference as an effective measure for identifying cardio-metabolic syndrome: a comparison with waist circumference. Neck circumference is a new anthropometric index for estimating obesity. We aimed to determine the relationship between neck circumference and body fat content and distribution as well as the efficacy of neck circumference for identifying visceral adiposity and metabolic disorders. A total of 1943 subjects (783 men, 1160 women) with a mean age of 58 ± 7 years were enrolled in this cross-sectional study. Metabolic syndrome was defined according to the standard in the 2013 China Guideline. Analyses were conducted to determine optimal neck circumference cutoff points for visceral adiposity quantified by magnetic resonance imaging, and to compare the performance of neck circumference with that of waist circumference in identifying abdominal obesity and metabolic disorders. Visceral fat content was independently correlated with neck circumference. Receiver operating characteristic curves showed that the area under the curve for the ability of neck circumference to determine visceral adiposity was 0.781 for men and 0.777 for women. Moreover, in men a neck circumference value of 38.5 cm had a sensitivity of 56.1 % and specificity of 83.5 %, and in women, a neck circumference value of 34.5 cm had a sensitivity of 58.1 % and specificity of 82.5 %. These values were the optimal cutoffs for identifying visceral obesity. There were no statistically significant differences between the proportions of metabolic syndrome and its components identified by an increased neck circumference and waist circumference. Neck circumference has the same power as waist circumference for identifying metabolic disorders in a Chinese population.
abstract_id: PUBMED:21419020
Hyperinsulinemia and waist circumference in childhood metabolic syndrome. Objective: To determine the characteristics of obese children presenting at a tertiary care hospital and the frequency of metabolic syndrome (MS) in them using two paediatric definitions.
Study Design: Cross-sectional study.
Place And Duration Of Study: The Endocrine Clinic of National Institute of Child Health, Karachi, from November 2005 till May 2008.
Methodology: A total of 262 obese children aged 4-16 years, with BMI greater than 95th percentile were included. Children having obesity due to syndromes, medications causing weight gain, chronic illness and developmental disability were excluded. Blood pressure, waist circumference, fasting triglycerides, HDL, insulin and glucose levels were obtained. Obesity was defined as BMI > 95th percentile for age and gender according to the UK growth reference charts. The prevalence of metabolic syndrome was estimated using to the De Ferrantis and Lambert definitions.
Results: The frequency of MS varied between 16% and 52% depending on whether insulin levels were included in the definition. There was a significant positive correlation (r) when the metabolic parameters were correlated with waist circumference and insulin levels, except for HDL, which was negatively correlated. All the metabolic parameters, such as waist circumference, triglycerides, high-density lipoprotein cholesterol and systolic blood pressure, increased considerably across the insulin quartiles (p < 0.05). The most noteworthy anthropometric and metabolic abnormalities were the waist circumference (46.5%) and insulin levels (58%), respectively.
Conclusion: There was a marked difference in the frequency of metabolic syndrome according to the definition used. The waist circumference and hyperinsulinemia are significant correlates of MS in obese children. There is a need for establishing normal insulin ranges according to age, gender and pubertal status. The clinical examination and investigations ought to include waist circumference and insulin levels together as a part of the definition of MS, for early detection and intervention of childhood obesity.
abstract_id: PUBMED:18793503
Identifying adolescent metabolic syndrome using body mass index and waist circumference. Introduction: Metabolic syndrome is increasing among adolescents. We examined the utility of body mass index (BMI) and waist circumference to identify metabolic syndrome in adolescent girls.
Methods: We conducted a cross-sectional analysis of 185 predominantly African American girls who were a median age of 14 years. Participants were designated as having metabolic syndrome if they met criteria for 3 of 5 variables: 1) high blood pressure, 2) low high-density lipoprotein cholesterol level, 3) high fasting blood glucose level, 4) high waist circumference, and 5) high triglyceride level. We predicted the likelihood of the presence of metabolic syndrome by using previously established cutpoints of BMI and waist circumference. We used stepwise regression analysis to determine whether anthropometric measurements significantly predicted metabolic syndrome.
Results: Of total participants, 18% met the criteria for metabolic syndrome. BMI for 118 (64%) participants was above the cutpoint. Of these participants, 25% met the criteria for metabolic syndrome, whereas only 4% of participants with a BMI below the cutpoint met the criteria for metabolic syndrome (P <.001). Girls with a BMI above the cutpoint were more likely than girls with a BMI below the cutpoint to have metabolic syndrome (P = .002). The waist circumference for 104 (56%) participants was above the cutpoint. Of these participants, 28% met the criteria for metabolic syndrome, whereas only 1% of participants with a waist circumference below the cutpoint met the criteria for metabolic syndrome (P <.001). Girls with a waist circumference above the cutpoint were more likely than girls with a waist circumference below the cutpoint to have metabolic syndrome (P = .002). Stepwise regression showed that only waist circumference significantly predicted metabolic syndrome.
Conclusion: Both anthropometric measures were useful screening tools to identify metabolic syndrome. Waist circumference was a better predictor of metabolic syndrome than was BMI in our study sample of predominantly African American female adolescents living in an urban area.
abstract_id: PUBMED:16061781
Can waist circumference identify children with the metabolic syndrome? Objective: To determine in children the association between waist circumference (WC) and insulin resistance determined by homeostasis modeling (HOMA-IR) and proinsulinemia and components of the metabolic syndrome, including lipid profile and blood pressure (BP).
Methods: Eighty-four students (40 boys) aged 6 to 13 years and matched for sex and age underwent anthropometric measurements; 40 were obese; 28, overweight; and 16, nonobese. Body mass index (BMI), WC, BP, and Tanner stage were determined. An oral glucose tolerance test, lipid profile, and insulin and proinsulin assays were performed. Children were classified as nonobese (BMI < 85th percentile), overweight (BMI, 85th-94th percentile), and obese (BMI > or = 95th percentile).
Results: There was univariate association (P < .01) between WC and height (r = 0.73), BMI (r = 0.96), Tanner stage (r = 0.67), age (r = 0.56), systolic BP (r = 0.64), diastolic BP (r = 0.61), high-density lipoprotein cholesterol level (r = 0.45), triglyceride level (r = 0.28), proinsulin level (r = 0.59), and HOMA-IR (r = 0.59). Multiple linear regression analysis using HOMA-IR as the dependent variable showed that WC (beta coefficient = 0.050 [95% confidence interval, 0.028 to 0.073]; P = .001) and systolic BP (beta coefficient = 0.033 [95% confidence interval, 0.004 to 0.062]; P = .004) were significant independent predictors for insulin resistance adjusted for diastolic BP, height, BMI, acanthosis nigricans, and high-density lipoprotein cholesterol level.
Conclusion: Waist circumference is a predictor of insulin resistance syndrome in children and adolescents and could be included in clinical practice as a simple tool to help identify children at risk.
abstract_id: PUBMED:18682591
Waist circumference measurement in clinical practice. The obesity epidemic is a major public health problem worldwide. Adult obesity is associated with increased morbidity and mortality. Measurement of abdominal obesity is strongly associated with increased cardiometabolic risk, cardiovascular events, and mortality. Although waist circumference is a crude measurement, it correlates with obesity and visceral fat amount, and is a surrogate marker for insulin resistance. A normal waist circumference differs for specific ethnic groups due to different cardiometabolic risk. For example, Asians have increased cardiometabolic risk at lower body mass indexes and with lower waist circumferences than other populations. One criterion for the diagnosis of the metabolic syndrome, according to different study groups, includes measurement of abdominal obesity (waist circumference or waist-to-hip ratio) because visceral adipose tissue is a key component of the syndrome. The waist circumference measurement is a simple tool that should be widely implemented in clinical practice to improve cardiometabolic risk stratification.
abstract_id: PUBMED:27184997
Self-Measured vs Professionally Measured Waist Circumference. Purpose: Although waist circumference can provide important metabolic risk information, logistic issues inhibit its routine use in outpatient practice settings. We assessed whether self-measured waist circumference is sufficiently accurate to replace professionally measured waist circumference for identifying high-risk patients.
Methods: Medical outpatients and research participants self-measured their waist circumference at the same visit during which a professionally measured waist circumference was obtained. Participants were provided with standardized pictorial instructions on how to measure their waist circumference, and professionals underwent standard training.
Results: Self- and professionally measured waist circumference data were collected for 585 women (mean ± SD age = 40 ± 14 years, mean ± SD body mass index = 27.7 ± 6.0 kg/m(2)) and 165 men (mean ± SD age = 41 ± 14 years, mean ± SD body mass index = 29.3 ± 4.6 kg/m(2)). Although self- and professionally measured waist circumference did not differ significantly, we found a clinically important false-negative rate for the self-measurements. Eleven percent of normal-weight and 52% of overweight women had a professionally measured waist circumference putting them in a high-risk category for metabolic syndrome (ie, greater than 88 cm); however, 57% and 18% of these women, respectively, undermeasured their waist circumference as falling below that cutoff. Fifteen percent and 84% of overweight and class I obese men, respectively, had a professionally measured waist circumference putting them in the high-risk category (ie, greater than 102 cm); however, 23% and 16% of these men, respectively, undermeasured their waist circumference as falling below that cutoff.
Conclusions: Despite standardized pictorial instructions for self-measured waist circumference, the false-negative rate of self-measurements approached or exceeded 20% for some groups at high risk for poor health outcomes.
abstract_id: PUBMED:29259800
Accuracy of self-reported height, weight and waist circumference in a Japanese sample. Objective: Inconsistent results have been found in prior studies investigating the accuracy of self-reported waist circumference, and no study has investigated the validity of self-reported waist circumference among Japanese individuals. This study used the diagnostic standard of metabolic syndrome to assess the accuracy of individual's self-reported height, weight and waist circumference in a Japanese sample.
Methods: Study participants included 7,443 Japanese men and women aged 35-79 years. They participated in a cohort study's baseline survey between 2007 and 2011. Participants' height, weight and waist circumference were measured, and their body mass index was calculated. Self-reported values were collected through a questionnaire before the examination.
Results: Strong correlations between measured and self-reported values for height, weight and body mass index were detected. The correlation was lowest for waist circumference (men, 0.87; women, 0.73). Men significantly overestimated their waist circumference (mean difference, 0.8 cm), whereas women significantly underestimated theirs (mean difference, 5.1 cm). The sensitivity of self-reported waist circumference using the cut-off value of metabolic syndrome was 0.83 for men and 0.57 for women.
Conclusions: Due to systematic and random errors, the accuracy of self-reported waist circumference was low. Therefore, waist circumference should be measured without relying on self-reported values, particularly in the case of women.
abstract_id: PUBMED:17487506
Waist circumference percentiles for 7- to 17-year-old Turkish children and adolescents. Abdominal obesity is associated with risk of cardiovascular disease and type 2 diabetes mellitus. Waist circumference as a measure of obesity may be clinically useful as a predictor of metabolic syndrome in children. To develop age- and sex-specific reference values for waist circumference, we evaluated data obtained from Turkish children and adolescents. Waist circumference measurements from 4,770 healthy schoolchildren were obtained. Smoothed percentile curves were produced by the LMS method. The median curves of Turkish children were compared with those from four other countries: Australia, the UK, the USA (Bogalusa) and Japan. Smoothed percentile curves and values for the 3rd, 5th, 10th, 25th, 50th, 75th, 85th, 90th, 95th and 97th percentiles were calculated for boys and girls. We found that waist circumference increased with age in both boys and girls. The 50th percentile waist circumference curve of Turkish children was above that of British and Japanese children but below that of the Bogalusa children and adolescents. This study presents data and smoothed percentile curves for waist circumference of healthy Turkish children aged 7-17 years. The differences in waist circumference between countries can be explained by lifestyle and cultural characteristics. These data can be added to the existing international reference values for waist circumference of children and adolescents.
abstract_id: PUBMED:27295014
DIAGNOSTIC PERFORMANCE OF WAIST CIRCUMFERENCE MEASUREMENTS FOR PREDICTING CARDIOMETABOLIC RISK IN MEXICAN CHILDREN. Objective: The accumulation of abdominal fat is associated with cardiometabolic abnormalities. Waist circumference (WC) measurements allow an indirect evaluation of abdominal adiposity. However, controversy exists over which WC reference values are the most suitable for identifying the pediatric population at risk. The aim of the study was to evaluate the ability of various WC indices to identify abdominal obesity as diagnostic tools for predicting cardiometabolic risk in Mexican children and adolescents.
Methods: Anthropometric measurements were performed and biochemical profiles determined in a crosssectional study that included 366 children and adolescents. Four parameters were used to evaluate abdominal obesity in our study group: (1) WC >90th percentile, according to the Fernández reference in a Mexican-American population measured by the National Center for Health Statistics (NCHS) technique; (2) WC >90th percentile, according to the Klünder reference in a Mexican population (measured by the World Health Organization [WHO] technique); (3) waist-to-height ratio (WHtR) >0.5 according to WHO; and (4) WHtR >0.5 according to NCHS. The ability of each of the indices to discriminate cardiometabolic abnormalities (fasting plasma glucose, dyslipidemia, and hypertension) was assessed.
Results: WHtR >0.5 according to WHO or NCHS references showed greater sensitivity to detect metabolic abnormalities compared to percentile reference parameters (74.3 to 100% vs. 59.0 to 88.9%; P<.05). However, the percentiles displayed more specificity to identify these alterations (46.2 to 62.2 vs. 21.3 to 46.9; P<.05). Area under the curve analysis showed that WHtR >0.5 can more readily detect hypertriglyceridemia (0.642), hypoalphalipoproteinemia (0.700), and a combination of two or more metabolic abnormalities (0.661), whereas WC >90th percentile, according to Klünder, better detected hyperglycemia (0.555).
Conclusion: WHtR >0.5 is a sensitive measure to identify pediatric patients with cardiometabolic alterations, despite its low specificity, and is a useful diagnostic tool to detect populations at risk. Based on the results of this study, we recommend preferential use of the Klünder waist circumference references over the Fernández method in Mexican pediatric populations.
Abbreviations: AUC = area under the curve BMI = body mass index HDL = high-density lipoprotein IDF = International Diabetes Federation LDL = low-density lipoprotein MS = metabolic syndrome NCHS = National Center for Health Statistics ROC = receiver operating characteristic WC = waist circumference WHO = World Health Organization WHtR = waist-to-height ratio.
Answer: Yes, waist circumference can identify children with the metabolic syndrome. Multiple studies have shown that waist circumference is a significant correlate and predictor of metabolic syndrome (MS) in children. For instance, a study found that waist circumference was a better predictor of metabolic syndrome than body mass index (BMI) in a sample of predominantly African American female adolescents (PUBMED:18793503). Another study concluded that waist circumference is a predictor of insulin resistance syndrome in children and adolescents and could be included in clinical practice as a simple tool to help identify children at risk (PUBMED:16061781).
Furthermore, the accuracy of waist circumference as a measure for identifying cardiometabolic risk has been supported by research indicating that it correlates with obesity and visceral fat amount, and is a surrogate marker for insulin resistance (PUBMED:18682591). Additionally, waist circumference percentiles have been developed for children and adolescents in various countries, which can be used as reference values to assess abdominal obesity and predict metabolic syndrome (PUBMED:17487506).
The diagnostic performance of waist circumference measurements for predicting cardiometabolic risk has also been evaluated in Mexican children, with findings suggesting that waist-to-height ratio (WHtR) >0.5 is a sensitive measure to identify pediatric patients with cardiometabolic alterations (PUBMED:27295014). However, it's important to note that the accuracy of self-reported waist circumference is low, and therefore, waist circumference should be measured professionally rather than relying on self-reported values, especially in women (PUBMED:29259800).
In summary, waist circumference is a valuable and practical measure that can be used to identify children who may have metabolic syndrome, and it should be included in clinical assessments for early detection and intervention of childhood obesity and associated metabolic disorders. |
Instruction: Dialectical behavior therapy: is outpatient group psychotherapy an effective alternative to individual psychotherapy?
Abstracts:
abstract_id: PUBMED:26731172
Group therapy for university students: A randomized control trial of dialectical behavior therapy and positive psychotherapy. The present study examined the efficacy of two evidence-based group treatments for significant psychopathology in university students. Fifty-four treatment-seeking participants were randomized to a semester-long dialectical behavior therapy (DBT) or positive psychotherapy (PPT) group treatment. Mixed modeling was used to assess improvement over time and group differences on variables related to symptomatology, adaptive/maladaptive skill usage, and well-being/acceptability factors. All symptom and skill variables improved over the course of treatment. There were no statistically significant differences in rate of change between groups. The DBT group evidenced nearly all medium to large effect sizes for all measures from pre- to post-treatment, with mostly small to medium effect sizes for the PPT group. There was a significant difference in acceptability between treatments, with the DBT group demonstrating significantly lower attrition rates, higher attendance, and higher overall therapeutic alliance. While both groups demonstrated efficacy in this population, the DBT group appeared to be a more acceptable and efficacious treatment for implementation. Results may specifically apply to group therapy as an adjunctive treatment because a majority of participants had concurrent individual therapy.
abstract_id: PUBMED:22560774
Dialectical behavior therapy: is outpatient group psychotherapy an effective alternative to individual psychotherapy?: Preliminary conclusions. Objectives: This study evaluates a 12-month-duration adapted outpatient group dialectical behavior therapy (DBT) program for patients with a borderline personality disorder in an unselected, comorbid population. If the results of this approach are comparable with the outcome rates of a standard DBT program, the group approach can have several advantages over individual treatment. One advantage is the possibility of treating more patients per therapist.
Method: A pre-post design was used to measure the effectiveness of an outpatient group DBT. Data from the Beck Depression Inventory II, the Symptom Checklist 90-Revised, the State-Trait Anger Inventory, and the State and Trait Anxiety Inventory were collected from 34 female patients (mean age, 32.65 years) before and after a treatment period of 1 year.
Results: Overall, a significant reduction (P < .05) of depressive symptoms, suicidal thoughts, anxiety, and anger was experienced by the patients.
Conclusions: This study is a first attempt at showing that DBT in an outpatient group setting can be effective in reducing psychiatric complaints and therefore has several advantages, such as the opportunity to treat more patients at once.
abstract_id: PUBMED:2318559
Individual psychotherapy as an adjunct to group psychotherapy. This paper describes a form of combined psychotherapy in which the individual sessions are used as an adjunct to group therapy. Each group member is seen regularly in individual sessions to focus primarily on the member's ongoing group work. The individual sessions are scheduled on a rotating basis. Typically, each group member is seen in an individual session once every four weeks. Additional individual sessions are available only when immediate attention is appropriate and necessary. The group is viewed as the primary therapeutic component. A cost-effective therapeutic approach that uses both individual and group methods, this modality lends itself well to a clinic and to a private practice setting.
abstract_id: PUBMED:31246370
Clinical efficacy of a combined acceptance and commitment therapy, dialectical behavioural therapy, and functional analytic psychotherapy intervention in patients with borderline personality disorder. Objective: Borderline personality disorder (BPD) consists of a persistent pattern of instability in affective regulation, impulse control, interpersonal relationships, and self-image. Although certain forms of psychotherapy are effective, their effects are small to moderate. One of the strategies that have been proposed to improve interventions involves integrating the therapeutic elements of different psychotherapy modalities from a contextual behavioural perspective (ACT, DBT, and FAP).
Methods: Patients (n = 65) attending the BPD Clinic of the Instituto Nacional de Psiquiatría Ramón de la Fuente Muñíz in Mexico City who agreed to participate in the study were assigned to an ACT group (n = 22), a DBT group (n = 20), or a combined ACT + DBT + FAP therapy group (n = 23). Patients were assessed at baseline and after therapeutic trial on measures of BPD symptom severity, emotion dysregulation, experiential avoidance, attachment, control over experiences, and awareness of stimuli.
Results: ANOVA analyses showed no differences between the three therapeutic groups in baseline measures. Results of the MANOVA model showed significant differences in most dependent measures over time but not between therapeutic groups.
Conclusions: Three modalities of brief, contextual behavioural therapy proved to be useful in decreasing BPD symptom severity and emotional dysregulation, as well as negative interpersonal attachment. These changes were related to the reduction of experiential avoidance and the acquisition of mindfulness skills in all treatment groups, which may explain why no differences between the three different intervention modalities were observed.
Practitioner Points: Brief adaptations of acceptance and commitment therapy and dialectical behavioural therapy are effective interventions for BPD patients, in combined or isolated modalities, and with or without the inclusion of functional analytic psychotherapy. The reduction of experiential avoidance and the acquisition of mindfulness skills are related to the reduction in BPD symptom severity, including emotional dysregulation and negative interpersonal attachment.
abstract_id: PUBMED:9196787
The synergy of group and individual psychotherapy training. Although this paper has focussed on ways in which group-therapy experience differs from the experience of individual therapy for the beginning therapist, all of the differences discussed are only matters of degree and thus are important in both individual and group therapy. Group-therapy experience tends to enhance the visibility of important aspects of all forms of psychotherapy and therefore should be looked at as a useful tool in learning all forms of psychotherapy. Experience in group psychotherapy should ideally precede or occur simultaneously with initial exposure to individual therapy rather than following afterwards as an option, as is usually the case in current psychotherapy training programs. The most important skill to acquire in learning psychotherapy is the sophisticated ability to listen. To do so involves attending to content, affect, the patient's and one's own verbal language, body language, and meta-communications, conscious as well as unconscious. Group psychotherapy experience early in one's development as a psychotherapist can be a powerful tool in developing this ability to listen. Being part of the group process allows a unique level of intimacy that is probably more equal and vulnerable than in other forms of therapy. The use of the self, and the ability to trust in the process without using theory as a barrier between ourselves and our patients, but rather as a bridge to greater understanding, can all flow from the unique experience of leading a psychodynamic group.
abstract_id: PUBMED:27388259
Individual and group psychotherapy with people diagnosed with dementia: a systematic review of the literature. Objectives: Psychotherapy provides a means of helping participants to resolve emotional threats and play an active role in their lives. Consequently, psychotherapy is increasingly used within dementia care. This paper reviews the existing evidence base for individual and group psychotherapy with people affected by dementia.
Design: The protocol was registered. We searched electronic databases, relevant websites and reference lists for records of psychotherapy with people affected by Alzheimer's Disease, Vascular dementia, Lewy-body dementia or a mixed condition between 1997 and 2015. We included studies of therapies which met British Association of Counselling and Psychotherapy definitions (e.g. occurs regularly, focuses on talking about life events and facilitates understanding of the illness). Art therapy, Cognitive Stimulation and Rehabilitation, Life Review, Reminiscence Therapy and family therapy were excluded. Studies which included people with frontal-temporal dementia and mild cognitive impairment were excluded. Data were extracted using a bespoke form, and risk of bias assessments were carried out independently by both authors. Meta-analysis was not possible because of the heterogeneity of data.
Results: A total of 1397 papers were screened, with 26 papers using randomised or non-randomised controlled trials or repeated measures designs being included. A broad mix of therapeutic modalities, types, lengths and settings were described, focussing largely on people with mild levels of cognitive impairment living in the community.
Conclusions: This study was limited to only those studies published in English. The strongest evidence supported the use of short-term group therapy after diagnosis and an intensive, multi-faceted intervention for Nursing Home residents. Many areas of psychotherapy need further research.
abstract_id: PUBMED:29955449
Dialectical behavior therapy as treatment for borderline personality disorder. Dialectical behavior therapy (DBT) is a structured outpatient treatment developed by Dr Marsha Linehan for the treatment of borderline personality disorder (BPD). Dialectical behavior therapy is based on cognitive-behavioral principles and is currently the only empirically supported treatment for BPD. Randomized controlled trials have shown the efficacy of DBT not only in BPD but also in other psychiatric disorders, such as substance use disorders, mood disorders, posttraumatic stress disorder, and eating disorders. Traditional DBT is structured into 4 components, including skills training group, individual psychotherapy, telephone consultation, and therapist consultation team. These components work together to teach behavioral skills that target common symptoms of BPD, including an unstable sense of self, chaotic relationships, fear of abandonment, emotional lability, and impulsivity such as self-injurious behaviors. The skills include mindfulness, interpersonal effectiveness, emotion regulation, and distress tolerance. Given the often comorbid psychiatric symptoms with BPD in patients participating in DBT, psychopharmacologic interventions are oftentimes considered appropriate adjunctive care. This article aims to outline the basic principles of DBT as well as comment on the role of pharmacotherapy as adjunctive treatment for the symptoms of BPD.
abstract_id: PUBMED:3403914
Concurrent individual and individual-in-a-group psychoanalytic psychotherapy. This paper's thesis is that concurrent individual-in-a-group and individual psychoanalytic psychotherapy can be conducted in conformance with psychoanalytic principles of treatment as well as can individual psychoanalytic psychotherapy alone. American psychoanalysts have shown little interest in group psychotherapy, probably because of earlier criticism that transference is diluted by the greater reality of the therapist in group psychotherapy. This is a misconception extrapolated from the mirror model of dyadic analytic technique. The criticism was formulated during a period when that model was prominent and there was little awareness that the actual personal relationship between patient and analyst played an important facilitating role in the dyadic analytic process, including providing a basis for investiture of transference. Also, the criticism was based on one-session-per-week group psychotherapy, whereas concurrent individual-in-a-group and individual psychoanalytic psychotherapy utilizes two group sessions plus one or two individual sessions per week, enabling a more intensive patient-therapist relationship. Concurrent group and individual psychoanalytic psychotherapy constitutes a contribution to the widening scope of application of psychoanalytic treatment.
abstract_id: PUBMED:37712660
Developments in Group Transference-Focused Psychotherapy. Several evidence-based psychotherapies for personality disorders have been developed in recent decades, including transference-focused psychotherapy (TFP), a contemporary model of psychodynamic psychotherapy developed by Otto Kernberg. Kernberg established Group TFP (TFP-G) as an alternative or adjunct treatment to individual TFP. Although not yet manualized, TFP-G is used in publicly and privately funded mental health services, including outpatient clinics, subacute hospitals, therapeutic inpatient units, partial hospitalization services, and rehabilitation services serving people with borderline personality. Kernberg's model of TFP-G psychotherapy, its application in clinical settings, and what differentiates it from other group psychotherapy models are described and illustrated with examples useful to practitioners.
abstract_id: PUBMED:28953804
Guidelines for Individual and Group Psychodynamic Psychotherapy for the Treatment of Persons Diagnosed with Psychosis and/or Schizophrenia. The guidelines presented here for the use of psychodynamic psychotherapy are based on the literature and research in the field of individual and group therapy, and they refer to psychotherapy for patients suffering from a first psychotic episode, schizophrenia, schizoaffective psychosis, bipolar disorder and paranoid psychosis. The aim was to provide an overview of the present literature and to give recommendations based on current knowledge. Clinical experience and research on the outcomes of psychodynamic psychotherapy support positioning such treatments among the recommendations for treating various mental disorders, as well as in the field of psychotherapy for patients with psychotic disorders (PD).
Answer: The available abstracts suggest that dialectical behavior therapy (DBT) can be an effective form of outpatient group psychotherapy and may serve as an alternative to individual psychotherapy for certain populations. A study evaluating a 12-month adapted outpatient group DBT program for patients with borderline personality disorder found significant reductions in depressive symptoms, suicidal thoughts, anxiety, and anger, indicating that DBT in an outpatient group setting can be effective in reducing psychiatric complaints (PUBMED:22560774). This suggests that group DBT can be a viable alternative to individual therapy, with the added advantage of treating more patients per therapist.
Another study compared the efficacy of DBT group treatment to positive psychotherapy (PPT) group treatment for university students with significant psychopathology. The results showed that both treatments led to improvements in symptom and skill variables, but there were no statistically significant differences in the rate of change between groups. However, the DBT group demonstrated medium to large effect sizes for all measures from pre-to post-treatment and was found to be more acceptable with lower attrition rates, higher attendance, and higher overall therapeutic alliance compared to the PPT group (PUBMED:26731172). This suggests that DBT group therapy may be more efficacious and acceptable than PPT group therapy for university students.
Additionally, a study on the clinical efficacy of combined acceptance and commitment therapy, DBT, and functional analytic psychotherapy intervention in patients with borderline personality disorder found that all three modalities were useful in decreasing symptom severity and emotional dysregulation (PUBMED:31246370). This indicates that DBT, whether used alone or in combination with other therapies, can be effective in a group setting.
In summary, the evidence from the provided abstracts supports the notion that DBT can be an effective form of outpatient group psychotherapy and may serve as an alternative to individual psychotherapy, particularly for patients with borderline personality disorder and university students with significant psychopathology. |
Instruction: Is seed conditioning essential for Orobanche germination?
Abstracts:
abstract_id: PUBMED:19222050
Is seed conditioning essential for Orobanche germination? Background: Parasitic Orobanchaceae germinate only after receiving a chemical stimulus from roots of potential host plants. A preparatory phase of several days that follows seed imbibition, termed conditioning, is known to be required; thereafter the seeds can respond to germination stimulants. The aim of this study was to examine whether conditioning is essential for stimulant receptivity.
Results: Non-conditioned seeds of both Orobanche cumana Wallr. and O. aegyptiaca Pers. [syn. Phelipanche aegyptiaca (Pers.) Pomel] were able to germinate in response to chemical stimulation by GR24 even without prior conditioning. Stimulated seeds reached maximal germination rates about 2 weeks after the onset of imbibition, no matter whether the seeds had or had not been conditioned before stimulation. Whereas the lag time between stimulation and germination response of non-conditioned seeds was longer than for conditioned seeds, the total time between imbibition and germination was shorter for the non-conditioned seeds. Unlike the above two species, O. crenata Forsk. was found to require conditioning prior to stimulation.
Conclusions: Seeds of O. cumana and O. aegyptiaca are already receptive before conditioning. Thus, conditioning is not involved in stimulant receptivity. A hypothesis is put forward, suggesting that conditioning includes (a) a parasite-specific early phase that allows the imbibed seeds to overcome the stress caused by failing to receive an immediate germination stimulus, and (b) a non-specific later phase that is identical to the pregermination phase between seed imbibition and actual germination that is typical for all higher plants.
abstract_id: PUBMED:17390893
The effects of Fusarium oxysporum on broomrape (Orobanche egyptiaca) seed germination. Broomrape (Orobanche aegyptiaca L.), one of the most important parasitic weeds in Iran, is a root parasitic plant that can attack several crops such as tobacco, sunflower and tomato. Several methods have been used for Orobanche control; however, these methods are inefficient and very costly. Biological control is an additional, recent tool for the control of parasitic weeds. To study the effects of the fungus Fusarium oxysporum (a biocontrol agent) on broomrape seed germination, two laboratory studies were conducted at Tehran University. In the first experiment, different concentrations of GR60 (0, 1, 2 and 5 ppm) were tested as stimulants of Orobanche seed germination. Results showed that the GR60 concentration had a significant effect on seed germination; the highest germination percentage was obtained at 1 ppm. In the second experiment, the effect of Fusarium oxysporum on O. aegyptiaca seed germination was tested. The fungus was isolated from infested, juvenile O. aegyptiaca flower stalks in a tomato field in Karaj. Spore suspensions at different concentrations (0 (control), 10^5 (T1), 10^6 (T2), 10^7 (T3) and 3 x 10^7 (T4) spores/ml), prepared from potato dextrose agar (PDA) cultures, were tested on O. aegyptiaca seeds together with 1 ppm GR60. The highest inhibition of seed germination was obtained at 10^5 spores/ml; with increasing suspension concentration, the inhibition percentage decreased and germ-tube mortality increased. In this investigation, Fusarium oxysporum could be used to inhibit seed germination, stimulate the "suicidal germination" of seeds and reduce the Orobanche seed bank.
abstract_id: PUBMED:27397007
Effect of Jasmonates and Related Compounds on Seed Germination of Orobanche minor Smith and Striga hermonthica (Del.) Benth. Jasmonates and related compounds were found to elicit the seed germination of the important root parasites, clover broomrape (Orobanche minor Smith) and witchweed [Striga hermonthica (Del.) Benth]. The stimulation of seed germination by the esters was more effective than by the corresponding free acids, and methyl jasmonate (MJA) was the most active stimulant among the compounds tested.
abstract_id: PUBMED:22527522
Induction of seed germination in Orobanche spp. by extracts of traditional Chinese medicinal herbs. The co-evolution of Orobanche spp. and their hosts within the same environment has resulted in a high degree of adaptation and effective parasitism whereby the host releases parasite germination stimulants, which are likely to be unstable in the soil. Our objective was to investigate whether extracts from non-host plants, specifically, Chinese medicinal plants, could stimulate germination of Orobanche spp. Samples of 606 Chinese medicinal herb species were extracted with deionized water and methanol. The extracts were used to induce germination of three Orobanche species; Orobanche minor, Orobanche cumana, and Orobanche aegyptiaca. O. minor exhibited a wide range of germination responses to the various herbal extracts. O. cumana and O. aegyptiaca exhibited an intermediate germination response to the herbal extracts. O. minor, which has a narrow host spectrum, showed higher germination rates in response to different herbal extracts compared with those of O. cumana and O. aegyptiaca, which have a broader host spectrum. Methanolic extracts of many Chinese herbal species effectively stimulated seed germination among the Orobanche spp., even though they were not the typical hosts. The effective herbs represent interesting examples of potential trap crops. Different countries can also screen extracts from indigenous herbaceous plants for their ability to induce germination of Orobanche spp. seeds. The use of such species as trap plants could diminish the global soil seed bank of Orobanche.
abstract_id: PUBMED:15310074
IAA production during germination of Orobanche spp. seeds. Broomrapes (Orobanche spp.) are parasitic plants, whose growth and development fully depend on the nutritional connection established between the parasite and the roots of the respective host plant. Phytohormones are known to play a role in establishing the specific Orobanche-host plant interaction. The first step in the interaction is seed germination triggered by a germination stimulant secreted by the host-plant roots. We quantified indole-3-acetic acid (IAA) and abscisic acid (ABA) during the seed germination of tobacco broomrape (Orobanche ramosa) and sunflower broomrape (O. cumana). IAA was mainly released from Orobanche seeds in host-parasite interactions as compared to non-host-parasite interactions. Moreover, germinating seeds of O. ramosa released IAA as early as 24 h after the seeds were exposed to the germination stimulant, even before development of the germ tube. ABA levels remained unchanged during the germination of the parasites' seeds. The results presented here show that IAA production is probably part of a mechanism triggering germination upon the induction by the host factor, thus resulting in seed germination.
abstract_id: PUBMED:18763781
Stimulation of seed germination of Orobanche species by ophiobolin A and fusicoccin derivatives. Various Orobanche species (broomrapes) are serious weed problems and cause severe yield reductions in many important crops. Seeds of these parasitic weeds may remain dormant in the soil for many years until germination is stimulated by the release of a chemical signal by roots of a host plant. Some fungal metabolites, such as ophiobolin A and fusicoccin derivatives, were assayed to determine their capacity to stimulate the seed germination of several Orobanche species. The results obtained showed that the stimulation of seed germination is species-dependent and also affected by the concentration of the stimulant. Among ophiobolin A, fusicoccin, and its seven derivatives, tested in the concentration range of 10^-4-10^-7 M, the highest stimulatory effect was observed for ophiobolin A and the hexaacetyl and pentaacetyl isomers of 16-O-demethyl-de-tert-pentenylfusicoccin prepared by chemical modification of fusicoccin, while the other fusicoccin derivatives appeared to be practically inactive. The most sensitive species appeared to be O. aegyptiaca, O. cumana, O. minor, and to a lesser extent, O. ramosa.
abstract_id: PUBMED:34850875
Involvement of α-galactosidase OmAGAL2 in planteose hydrolysis during seed germination of Orobanche minor. Root parasitic weeds of the Orobanchaceae, such as witchweeds (Striga spp.) and broomrapes (Orobanche and Phelipanche spp.), cause serious losses in agriculture worldwide, and efforts have been made to control these parasitic weeds. Understanding the characteristic physiological processes in the life cycle of root parasitic weeds is particularly important to identify specific targets for growth modulators. In our previous study, planteose metabolism was revealed to be activated soon after the perception of strigolactones in germinating seeds of O. minor. Nojirimycin inhibited planteose metabolism and impeded seed germination of O. minor, indicating a possible target for root parasitic weed control. In the present study, we investigated the distribution of planteose in dry seeds of O. minor by matrix-assisted laser desorption/ionization-mass spectrometry imaging. Planteose was detected in tissues surrounding, but not within, the embryo, supporting its suggested role as a storage carbohydrate. Biochemical assays and molecular characterization of an α-galactosidase family member, OmAGAL2, indicated that the enzyme is involved in planteose hydrolysis in the apoplast around the embryo after the perception of strigolactones, to provide the embryo with essential hexoses for germination. These results indicate that OmAGAL2 is a potential molecular target for root parasitic weed control.
abstract_id: PUBMED:16706065
Seed germination characteristics of parasitic plants and their host recognition mechanisms. Parasitic plants are widely distributed in various ecological environments, with different growth habits and host recognition mechanisms. This paper discusses the distinctive seed germination characteristics of root parasitic plants such as Orobanche and Striga, summarizes the germination signals discovered to date, and reviews the effects of various germination signals, plant hormones and several fungal metabolites on host recognition by parasitic plants, as well as the respiration characteristics during conditioning and the mechanisms by which germination signals are activated. The induction of various differentiated calli in different Orobanche species, and the establishment of a novel in vitro aseptic infection system and its application to the study of host recognition by parasitic plants, are also discussed. Current problems in research on the recognition mechanisms between parasitic plants and their hosts are identified, and directions for further work are outlined.
abstract_id: PUBMED:16310229
Stimulation of Orobanche ramosa seed germination by fusicoccin derivatives: a structure-activity relationship study. A structure-activity relationship study was conducted by assaying 25 natural analogues and derivatives of fusicoccin (FC), and cotylenol, the aglycone of cotylenins, for their ability to stimulate seed germination of the parasitic species Orobanche ramosa. Some of the compounds tested proved to be highly active, with the 8,9-isopropylidene derivative of the corresponding FC aglycone and the dideacetyl derivative being the most active FC derivatives. In both groups of glucosides and aglycones (including cotylenol), the most important structural feature for activity appears to be the presence of the primary hydroxy group at C-19. Furthermore, the functionalities and the conformation of the carbotricyclic ring proved to play a significant role. The dideacetyl derivative of FC, being easily and rapidly obtainable in high yield starting from FC, could be of interest for practical application as a stimulant of Orobanche ramosa seed germination, inducing "suicidal germination", an interesting approach for parasitic plant management.
abstract_id: PUBMED:38450089
Cycloheximide in the nanomolar range inhibits seed germination of Orobanche minor. From 992 samples of culture extracts of microorganisms isolated from soil in Japan, we found that the extract of Streptomyces sp. no. 226 inhibited Orobanche minor seed germination without significantly affecting the seed germination of Trifolium pratense or the growth of Aspergillus oryzae and Escherichia coli. Using ESI-MS, 1H-NMR, and 13C-NMR, we identified the active compound as cycloheximide. Cycloheximide had half-maximum inhibitory concentrations of 2.6 ng/mL for the inhibition of seed germination of O. minor and 2.5 µg/mL for that of the conidial germination of A. oryzae. Since cycloheximide is known to inhibit translation by interacting with ribosomal protein L28 (RPL28) in yeast, we investigated whether the RPL protein of O. minor plays a critical role in the inhibition of O. minor seed germination. Comparison with the strain expressing S. cerevisiae RPL28 suggested that O. minor RPL27A is not sensitive to cycloheximide. These findings suggest the presence of an unidentified mechanism by which cycloheximide hinders O. minor seed germination.
Answer: Seed conditioning is not universally essential for Orobanche germination. Studies have shown that non-conditioned seeds of Orobanche cumana and O. aegyptiaca can germinate in response to chemical stimulation by GR24 even without prior conditioning. These seeds reached maximal germination rates about 2 weeks after the onset of imbibition, regardless of whether they had been conditioned before stimulation. However, the lag time between stimulation and germination response of non-conditioned seeds was longer than for conditioned seeds, but the total time between imbibition and germination was shorter for the non-conditioned seeds. In contrast, O. crenata was found to require conditioning prior to stimulation (PUBMED:19222050). This suggests that the requirement for seed conditioning in Orobanche species may be species-specific, and while some species can germinate without conditioning, others cannot. |
Instruction: Does the achievement of medical identity limit the ability of primary care practitioners to be patient-centred?
Abstracts:
abstract_id: PUBMED:16332470
Does the achievement of medical identity limit the ability of primary care practitioners to be patient-centred? A qualitative study. Objective: To explore primary care practitioners' approach to and management of menstrual disorders using a sociological perspective.
Methods: Semi-structured interviews of primary care practitioners with an iterative approach to recruitment and analysis informed by grounded theory.
Results: Two broad approaches to patient care were described: a biomedical approach, which concentrated on medical history taking and the search for disease, and a patient-as-person approach, where a patient's individual ideas and concerns were elicited. Practitioners believed they had a role in integrating these approaches. Activities intrinsic to the biomedical approach, such as the performance of examinations, the ordering of tests and making decisions about biomedical aspects of care, were, however, not available for shared decision-making. The exercise of these decisions by medical practitioners was necessary for them to achieve their professional identity.
Conclusion: While practitioners accepted the ideology of patient-centred care, the biomedical approach had the advantage of providing practitioners with a professional identity, which protected their status in relation to patients and colleagues.
Practice Implications: The adoption of shared decision-making by medically qualified primary care practitioners is limited by practitioners' need to achieve their medical identity. At present, this identity does not involve significant sharing of power and responsibility. A shift in the perception of medical identity is required before more shared decision-making is seen in practice.
abstract_id: PUBMED:34049489
Association between doctor-patient familiarity and patient-centred care during general practitioner's consultations: a direct observational study in Chinese primary care practice. Background: Patient-centred care is a core attribute of primary care. Not much is known about the relationship between patient-centred care and doctor-patient familiarity. This study aimed to explore the association between general practitioner (GP) perceived doctor-patient familiarity and the provision of patient-centred care during GP consultations.
Methods: This is a direct observational study conducted in eight community health centres in China. Level of familiarity was rated by GPs using a dichotomized variable (Yes/No). The provision of patient-centred care during GP consultations was measured by coding audiotapes using a modified Davis Observation Code (DOC) interactional instrument. Eight individual codes in the modified DOC were selected for measuring the provision of patient-centred care, including 'family information', 'treatment effects', 'nutrition guidance', 'exercise guidance', 'health knowledge', 'patient question', 'chatting', and 'counseling'. Multivariate analyses of covariance were adopted to evaluate the association between GP perceived doctor-patient familiarity and patient-centred care.
Results: A total of 445 audiotaped consultations were collected, with 243 in the familiar group and 202 in the unfamiliar group. No significant difference was detected in overall patient-centred care between the two groups. For components of patient-centred care, the number of intervals (1.36 vs. 0.88, p = 0.026) and time length (7.26 vs. 4.40 s, p = 0.030) that GPs spent in 'health knowledge', as well as the time length (13.0 vs. 8.34 s, p = 0.019) spent in 'patient question', were significantly higher in the unfamiliar group. The percentage of 'chatting' (11.9% vs. 7.34%, p = 0.012) was significantly higher in the familiar group.
Conclusions: This study suggested that GP perceived doctor-patient familiarity may not be associated with GPs' provision of patient-centred care during consultations in the context of China. Not unexpectedly, patients would show more health knowledge and ask more questions when GPs were not familiar with them. Further research is needed to confirm and expand on these findings.
abstract_id: PUBMED:33303622
Patient-centred care delivered by general practitioners: a qualitative investigation of the experiences and perceptions of patients and providers. Background: Patient-centred care (PCC) is care that is respectful and responsive to the wishes of patients. The body of literature on PCC delivered by general practitioners (GPs) has increased steadily over time. There is an opportunity to advance the work on GP-delivered PCC through qualitative research involving both patients and providers.
Aim: To explore the perceptions and experiences of PCC by patient advocates and GPs.
Design And Setting: Qualitative description in a social constructivist paradigm. Participants were sampled from six primary care organisations in south east Queensland/northern New South Wales, Australia.
Method: Purposive sampling was used to recruit English-speaking adult participants who were either practising GPs or patient advocates. Focus group sessions explored participants' perceptions and experiences of PCC. Data were analysed thematically using a constant-comparative approach.
Results: Three focus groups with 15 patient advocates and three focus groups with 12 practising GPs were conducted before thematic saturation was obtained. Five themes emerged: (1) understanding of PCC is varied and personal, (2) valuing humanistic care, (3) considering the system and collaborating in care, (4) optimising the general practice environment and (5) needing support for PCC that is embedded into training.
Conclusion: Patient advocates' and GPs' understandings of PCC are diverse, which can hinder strategies to implement and sustain PCC improvements. Future research should explore novel interventions that expose GPs to unique feedback from patients, assess the patient-centredness of the environment and promote GP self-reflection on PCC.
abstract_id: PUBMED:19538706
Trust, mistrust, racial identity and patient satisfaction in urban African American primary care patients of nurse practitioners. Purpose: To analyze relationships between cultural mistrust, medical mistrust, and racial identity and to predict patient satisfaction among African American adults who are cared for by primary-care nurse practitioners using Cox's Interaction Model of Client Health Behaviors.
Design: A descriptive-correlational study was conducted with a convenience sample of 100 community-dwelling adults.
Methods: Participants completed the Cultural Mistrust Inventory; Group Based Medical Mistrust Scale; Black Racial Identity Attitude Scale; Trust in Physician Scale; Michigan Academic Consortium Patient Satisfaction Questionnaire; and provided demographic and primary care data.
Analysis: Correlations and stepwise multiple regression techniques were used to examine the study aims and correlational links between the theoretical constructs of client singularity, client-professional interaction, and outcome.
Findings And Conclusions: Cox's model indicated a complex view of African American patients' perspectives on nurse practitioners. Participants simultaneously held moderate cultural mistrust of European American providers and mistrust of the health care system, and high levels of trust and satisfaction with their nurse practitioners. One racial identity schema (conformity) and trust of nurse-practitioner (NP) providers explained 41% of variance in satisfaction.
Clinical Relevance: An African American patient's own attitudes about racial identity and the client-professional relationship have a significant effect on satisfaction with primary care.
abstract_id: PUBMED:30997740
Beyond checkboxes: A qualitative assessment of physicians' experiences providing care in a patient-centred medical home. Rationale, Aims, And Objectives: The patient-centred medical home (PCMH) is an innovative approach to health care reform. Despite a well-established process for recognizing PCMH practices, fidelity to, and/or adaptation of, the PCMH model can limit health care and population health improvements. This study explored the connection between fidelity/adaptation to the PCMH model with implementation successes and challenges through the experiences of family and internal medicine PCMH physicians.
Methods: Interviews were conducted at two academic PCMH clinics with faculty and resident physicians. Data were transcribed and coded on the basis of an a priori code list. Together, the authors reviewed text and furthered the analysis process to reach final interpretation of the data.
Results: Ten faculty and nine resident physicians from the Family Care Centre (FCC; n = 11) and the Internal Medicine Clinic (IMC; n = 8) were interviewed. Both FCC and IMC physicians spoke positively about their clinic's adherence to the PCMH model of enhanced access to care, coordinated/integrated care, and improvements in quality and safety through data collection and documentation. However, physicians highlighted inadequate staffing and clinic hours. FCC physicians also discussed the challenge of providing high-quality care amidst differences in coverage between payers.
Conclusion: There remains significant variability in PCMH characteristics across the United States and Canada. This qualitative analysis uncovered factors contributing to fidelity/adaptation to the PCMH model in two academic PCMH clinics. For the PCMH to achieve the Triple Aim promise of improved patient health and experience at a reduced cost, policy must support fidelity to core elements of the PCMH.
abstract_id: PUBMED:27571642
Moving into the 'patient-centred medical home': reforming Australian general practice. The Australian healthcare system is a complex network of services and providers funded and administered by federal, state and territory governments, supplemented by private health insurance and patient contributions. The broad geographical range, complexity and increasing demand within the Australian healthcare sector mean health expenditure is high. Aspects of current funding for the healthcare system have attracted criticism from medical practitioners, patients, representative organisations and independent statutory agencies. In response to the problems in primary care funding in Australia, The Royal Australian College of General Practitioners developed the Vision for general practice and a sustainable healthcare system (the Vision). The Vision presents a plan to improve healthcare delivery in Australia through greater quality, access and efficiency by reorienting how general practice services are funded based on the 'patient-centred medical home' model.
abstract_id: PUBMED:32213972
Cohort Profile: Effectiveness of a 12-Month Patient-Centred Medical Home Model Versus Standard Care for Chronic Disease Management among Primary Care Patients in Sydney, Australia. Evidence suggests that patient-centred medical home (PCMH) is more effective than standard general practitioner care in improving patient outcomes in primary care. This paper reports on the design, early implementation experiences, and early findings of the 12-month PCMH model called 'WellNet' delivered across six primary care practices in Sydney, Australia. The WellNet study sample comprises 589 consented participants in the intervention group receiving enhanced primary care in the form of patient-tailored chronic disease management plan, improved self-management support, and regular monitoring by general practitioners (GPs) and trained clinical coordinators. The comparison group consisted of 7750 patients who were matched based on age, gender, type and number of chronic diseases who received standard GP care. Data collected include sociodemographic characteristics, clinical measures, and self-reported health assessments at baseline and 12 months. Early study findings show the mean age of the study participants was 70 years with nearly even gender distribution of males (49.7%) and females (50.3%). The most prevalent chronic diseases in descending order were circulatory system disorders (69.8%), diabetes (47.4%), musculoskeletal disorders (43.5%), respiratory diseases (28.7%), mental illness (18.8%), and cancer (13.6%). To our knowledge, the WellNet study is the first study in Australia to generate evidence on the feasibility of design, recruitment, and implementation of a comprehensive PCMH model. Lessons learned from WellNet study may inform other medical home models in Australian primary care settings.
abstract_id: PUBMED:23701178
The politics of patient-centred care. Background: Despite widespread belief in the importance of patient-centred care, it remains difficult to create a system in which all groups work together for the good of the patient. Part of the problem may be that the issue of patient-centred care itself can be used to prosecute intergroup conflict.
Objective: This qualitative study of texts examined the presence and nature of intergroup language within the discourse on patient-centred care.
Methods: A systematic SCOPUS and Google search identified 85 peer-reviewed and grey literature reports that engaged with the concept of patient-centred care. Discourse analysis, informed by the social identity approach, examined how writers defined and portrayed various groups.
Results: Managers, physicians and nurses all used the discourse of patient-centred care to imply that their own group was patient centred while other group(s) were not. Patient organizations tended to downplay or even deny the role of managers and providers in promoting patient centredness, and some used the concept to advocate for controversial health policies. Intergroup themes were even more obvious in the rhetoric of political groups across the ideological spectrum. In contrast to accounts that juxtaposed in-groups and out-groups, those from reportedly patient-centred organizations defined a 'mosaic' in-group that encompassed managers, providers and patients.
Conclusion: The seemingly benign concept of patient-centred care can easily become a weapon on an intergroup battlefield. Understanding this dimension may help organizations resolve the intergroup tensions that prevent collective achievement of a patient-centred system.
abstract_id: PUBMED:38020415
An Evaluation of the Relationship between Training of Health Practitioners in a Person-Centred Care Model and their Person-Centred Attitudes. Introduction: The Esther Network (EN) person-centred care (PCC) advocacy training aims to promote person-centred attitudes among health practitioners in Singapore. This study aimed to assess the relationship between the training and practitioners' PCC attributes over a 3-month period, and to explore power sharing by examining the PCC dimensions of "caring about the service user as a whole person" and the "sharing of power, control and information".
Methods: A repeated-measures study design utilising the Patient-Practitioner Orientation Scale (PPOS) was administered to 437 training participants at three time points: before training (T1), immediately after (T2) and three months after training (T3). A five-statement questionnaire captured knowledge of person-centred care at T1 and T2. An Overall score and Caring and Sharing sub-scores were derived from the PPOS. Scores were ranked and divided into three groups (high, medium and low). An ordinal generalised estimating equation (GEE) model analysed changes in PPOS scores over time.
Results: A single, short-term training appeared to result in measurable improvements in the person-centredness of health practitioners, with slight attenuation at T3. There was a greater tendency to "care" than to "share power" with service users across all three time points, but the degree of improvement after training was larger for sharing. The change in overall person-centred scores varied by sex and profession (females scored higher than males, and allied health practitioners showed a smaller attenuation at T3).
Conclusion: Training, as a specific intervention, appeared to have the potential to increase health practitioners' person-centredness, but the aspect of equalising power was harder to achieve within a hierarchical structure and clinician-centric culture. An ongoing network to build relationships, and a supportive system to facilitate individual and organisational reflexivity, can reinforce learning.
abstract_id: PUBMED:31670759
A new model of patient-centred care for general practitioners: results of an integrative review. Background: The provision of patient-centred care (PCC) by GPs is embedded in international health care policies due to its positive impact on patients and its potential to lower health care costs. However, the existing evidence on GP-delivered PCC has not been synthesized.
Objective: To synthesize literature investigating GP-delivered PCC and address 'what is currently known about GP-delivered PCC?'
Method: A systematic literature search was conducted between June and July 2018. Eligible articles were empirical, full-text studies published in English between January 2003 and July 2018, related to at least three of the four dimensions of PCC described by Hudon et al. (2011), and related to preventative, acute, and/or chronic care by GPs. Following screening, full-text articles were independently assessed for inclusion by two investigators. Data were extracted and quality assessed by two researchers. Findings on PCC were analysed thematically (meta-synthesis).
Results: Thirty medium- to high-quality studies met the inclusions criteria. Included studies utilized varied designs, with the most frequent being quantitative, cross-sectional. A theoretical model of PCC was synthesized from included studies and contained four major components: (i) understanding the whole person, (ii) finding common ground, (iii) experiencing time and (iv) aiming for positive outcomes. Harms of PCC were rarely reported.
Conclusions: Four overarching theoretical components of PCC relate to elements of the consultation and experience of time. These components can be used to inform the development of toolkits to support GPs and general practice organizations in pursuit of PCC as well as tools to measure patient-centredness.
Answer: The achievement of medical identity can indeed limit the ability of primary care practitioners to be patient-centred. According to a qualitative study exploring primary care practitioners' approach to and management of menstrual disorders, practitioners described two broad approaches to patient care: a biomedical approach, which focused on medical history taking and the search for disease, and a patient-as-person approach, where a patient's individual ideas and concerns were elicited. Practitioners recognized their role in integrating these approaches. However, activities intrinsic to the biomedical approach, such as performing examinations, ordering tests, and making decisions about biomedical aspects of care, were not available for shared decision-making. The exercise of these decisions by medical practitioners was necessary for them to achieve their professional identity. While practitioners accepted the ideology of patient-centred care, the biomedical approach provided them with a professional identity that protected their status in relation to patients and colleagues. This suggests that the need for practitioners to achieve their medical identity does not involve significant sharing of power and responsibility, which limits the adoption of shared decision-making and, consequently, patient-centred care (PUBMED:16332470).
Therefore, a shift in the perception of medical identity is required before more shared decision-making is seen in practice, as the current medical identity does not involve significant sharing of power and responsibility with patients (PUBMED:16332470). |
Instruction: Predicting future cardiovascular disease: do we need the oral glucose tolerance test?
Abstracts:
abstract_id: PUBMED:27699707
Heterogeneity in glucose response curves during an oral glucose tolerance test and associated cardiometabolic risk. We aimed to examine heterogeneity in glucose response curves during an oral glucose tolerance test with multiple measurements and to compare cardiometabolic risk profiles between identified glucose response curve groups. We analyzed data from 1,267 individuals without diabetes from five studies in Denmark, the Netherlands and the USA. Each study included between 5 and 11 measurements at different time points during a 2-h oral glucose tolerance test, resulting in 9,602 plasma glucose measurements. Latent class trajectories with a cubic specification for time were fitted to identify different patterns of plasma glucose change during the oral glucose tolerance test. Cardiometabolic risk factor profiles were compared between the identified groups. Using latent class trajectory analysis, five glucose response curves were identified. Despite similar fasting and 2-h values, glucose peaks and peak times varied greatly between groups, ranging from 7-12 mmol/L, and 35-70 min. The group with the lowest and earliest plasma glucose peak had the lowest estimated cardiovascular risk, while the group with the most delayed plasma glucose peak and the highest 2-h value had the highest estimated risk. One group, with normal fasting and 2-h values, exhibited an unusual profile, with the highest glucose peak and the highest proportion of smokers and men. The heterogeneity in glucose response curves and the distinct cardiometabolic risk profiles may reflect different underlying physiologies. Our results warrant more detailed studies to identify the source of the heterogeneity across the different phenotypes and whether these differences play a role in the development of type 2 diabetes and cardiovascular disease.
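As an aside for readers unfamiliar with the curve-shape analysis named above, the Python sketch below illustrates one simplified way to group OGTT glucose curves by shape. It is not the latent class trajectory model the authors fitted (which estimates class membership and cubic time trends jointly); instead it fits a cubic polynomial per subject and then clusters the coefficients, and every number, variable name and the two synthetic phenotypes are invented for illustration only.

```python
# Simplified stand-in for the curve-shape grouping described above.
# NOTE: the study fitted latent class trajectories (a joint mixture model with
# cubic time terms); this sketch instead fits a cubic polynomial per subject
# and clusters the coefficient vectors. All data below are synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
times = np.array([0, 30, 60, 90, 120])            # minutes into the OGTT

def synthetic_curve(peak_time, peak, fasting=5.0, two_hour=6.0):
    """Rough glucose curve (mmol/L) with a given peak time and height."""
    anchor_t = np.array([0, peak_time, 120])
    anchor_g = np.array([fasting, peak, two_hour])
    return np.interp(times, anchor_t, anchor_g) + rng.normal(0, 0.2, times.size)

# Two synthetic phenotypes: early/low peak versus late/high peak.
curves = np.array(
    [synthetic_curve(35, rng.normal(7.5, 0.3)) for _ in range(60)]
    + [synthetic_curve(70, rng.normal(10.5, 0.4)) for _ in range(60)]
)

# Summarise each curve by the coefficients of a cubic fit over scaled time.
coeffs = np.array([np.polyfit(times / 120.0, g, deg=3) for g in curves])

# Cluster the coefficient vectors (the study identified five classes;
# this toy data set only contains two).
labels = GaussianMixture(n_components=2, random_state=0).fit_predict(coeffs)

for k in range(2):
    members = curves[labels == k]
    peak_idx = members.argmax(axis=1)
    print(f"class {k}: n={len(members)}, "
          f"mean peak {members.max(axis=1).mean():.1f} mmol/L "
          f"at ~{times[peak_idx].mean():.0f} min")
```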
abstract_id: PUBMED:22686045
Clinical screening examination for impaired glucose tolerance: is it possible to diagnose impaired glucose tolerance without a 75 g oral glucose tolerance test? It is well known that diabetes mellitus is one of the most crucial risk factors in the pathogenesis of atherosclerosis, including cardiovascular diseases (CVD). Numerous epidemiological and clinical studies, such as the Funagata study and the Diabetes Epidemiology Collaborative analysis of Diagnostic criteria in Europe (DECODE) study, have established that even a prediabetic state, including impaired glucose tolerance (IGT), is strongly associated with the occurrence of CVD. For the diagnosis of IGT, the 75 g oral glucose tolerance test (75 g OGTT) is required clinically, but the test takes at least 2 hours and is costly. Therefore, for the prevention of atherosclerosis and subsequent CVD, other methods and/or useful parameters are needed to diagnose IGT without the 75 g OGTT. Recent studies using receiver operating characteristic (ROC) analyses have suggested that subjects with fasting plasma glucose (FPG) above approximately 100 mg/dl might be classified as having IGT, and that an FPG of 100 mg/dl is a suitable cut-off level between IGT and normal glucose tolerance (NGT). In contrast, although it is difficult to distinguish IGT from NGT by the HbA1c level alone, the combination of FPG with HbA1c is more useful for the diagnosis of IGT.
abstract_id: PUBMED:11224005
Should we do an oral glucose tolerance test in hypertensive men with normal fasting blood glucose? The objective of this study was to examine, using the new WHO criteria for diabetes mellitus, whether insulin and glucose before and after an oral glucose tolerance test would predict cardiovascular mortality in hypertensive men with normal fasting blood glucose. A standard oral glucose challenge was performed after an overnight fast in 113 hypertensive men with either hypercholesterolaemia or smoking. These patients were recruited from an ongoing risk factor intervention study. The mean observation time was 6.3 years. During follow-up there were 10 cardiovascular deaths. The Cox regression analyses showed an independent and significant association (P < 0.05) between blood glucose 120 min after the glucose ingestion and cardiovascular death during follow-up. Fasting glucose, fasting insulin and insulin 120 min after glucose ingestion were not related to cardiovascular death during follow-up. In conclusion, this is the first study using the current definition of diabetes mellitus to show that hyperglycaemia following an oral glucose load is an independent risk factor for cardiovascular death in hypertensive men with normal fasting glucose. In this type of hypertensive patient with normal fasting glucose, an oral glucose tolerance test may help to identify subjects at high cardiovascular risk.
abstract_id: PUBMED:11934216
Oral glucose tolerance test: to be or not to be performed? Type 2 diabetes is one of the most common diseases worldwide, and a large proportion of cases remain undiagnosed. In the RIAD (Risk factors in Impaired glucose tolerance for Atherosclerosis and Diabetes) study, previously undiagnosed diabetes was detected in 15.1% of a German population at risk for diabetes, and impaired glucose tolerance in a further 26%. Type 2 diabetes is associated with excessively high mortality and morbidity due to cardiovascular disease. Numerous studies have demonstrated the relevance of postprandial hyperglycemia for atherosclerosis. Moreover, isolated postprandial diabetes seems to be much more common than expected. Even mild postprandial hyperglycemia in the form of impaired glucose tolerance has been shown to be associated with an increased rate of cardiovascular disease. This indicates the necessity of using the oral glucose tolerance test (OGTT) in the screening of high-risk populations in order to detect asymptomatic diabetic subjects and enable timely treatment. Not using the OGTT would mean missing a large cohort of undiagnosed diabetic subjects, particularly among older people. Since an OGTT cannot be performed in everyone, we recommend its use in at-risk subjects and especially in elderly women. This would make it possible to institute preventive measures.
abstract_id: PUBMED:34024551
The reliability of an abbreviated fat tolerance test: A comparison to the oral glucose tolerance test. Background & Aims: Postprandial lipemia (PPL) is predictive of cardiovascular disease risk, but the current method for assessing PPL is a burdensome process. Recently, the validity of an abbreviated fat tolerance test (AFTT) has been demonstrated. As a continuation of this research, the purpose of this study was to determine the reliability of the AFTT and compare it to the reliability of the oral glucose tolerance test (OGTT).
Methods: In this randomized crossover trial, 20 healthy adults (10 male and 10 female) completed 2 AFTTs and 2 OGTTs, each separated by a 1-week washout. For the AFTT, triglycerides (TG) were measured at baseline and 4 h post-consumption of a high-fat meal, during which time participants were able to leave the lab. For the OGTT, we measured blood glucose at baseline and 2 h post-consumption of a 75-g pure glucose solution, and participants remained in the lab. To determine reliability, we calculated within-subject coefficient of variation (WCV) and intraclass correlation coefficient (ICC).
Results: The mean 4-h TG WCV for the AFTT was 12.6%, while the mean 2-h glucose WCV for the OGTT was 10.5%. ICC values for 4-h TG and TG change were 0.79 and 0.71, respectively, while ICC values for 2-h glucose and glucose change were 0.66 and 0.56, respectively.
Conclusions: Based on WCV and ICC, the TG response to an AFTT was similarly reliable to the glucose response to an OGTT in our sample of healthy adults, supporting the AFTT's potential as a standard clinical test for determining PPL. However, reliability of the AFTT needs to be further tested in individuals at greater risk for cardiometabolic disease.
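For readers who want to see the mechanics behind the reliability statistics quoted above, here is a hedged Python sketch of one common way to compute a within-subject coefficient of variation and a one-way random-effects ICC from two repeated measurements per subject. The abstract does not specify the exact formulas or software the authors used, so treat this as an illustration of the general approach; the triglyceride values are synthetic.

```python
# Hedged sketch: test-retest WCV and ICC from two replicates per subject.
# The exact formulas used in the study are not given in the abstract; this
# uses a pair-wise within-subject variance and a one-way ICC(1,1).
import numpy as np

rng = np.random.default_rng(1)
true_tg = rng.normal(1.8, 0.6, 20)            # subject-level 4-h TG (mmol/L), synthetic
visit1 = true_tg + rng.normal(0, 0.20, 20)    # first AFTT
visit2 = true_tg + rng.normal(0, 0.20, 20)    # repeat AFTT one week later

def within_subject_cv(x1, x2):
    """WCV from paired replicates: sqrt(mean(d^2 / 2)) / grand mean."""
    within_var = np.mean((x1 - x2) ** 2 / 2.0)
    return np.sqrt(within_var) / np.mean(np.concatenate([x1, x2]))

def icc_oneway(x1, x2):
    """One-way random-effects ICC(1,1) for k = 2 replicates per subject."""
    data = np.column_stack([x1, x2])
    n, k = data.shape
    ms_between = k * np.sum((data.mean(axis=1) - data.mean()) ** 2) / (n - 1)
    ms_within = np.sum((data - data.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

print(f"within-subject CV: {100 * within_subject_cv(visit1, visit2):.1f}%")
print(f"ICC:               {icc_oneway(visit1, visit2):.2f}")
```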
abstract_id: PUBMED:2265736
Diabetes in the Belgian province of Luxembourg: frequency, importance of the oral glucose tolerance test and of a modestly increased fasting blood glucose. A sample of 1,949 subjects aged 35-64 years was studied in the Belgian Province of Luxembourg in accordance with the MONICA project (MONItoring of Trends and Determinants in CArdiovascular Diseases) developed by the World Health Organization. The data collected included fasting glycaemia and glycaemia at the second hour of a 75 g oral glucose load. Analysis of these two parameters allowed the study population to be divided into 4.1% diabetic subjects (half of them previously undiagnosed), 5.2% with impaired glucose tolerance, 3.4% with early reactive hypoglycaemia and 87.3% normoglycaemic subjects. Measurement of fasting glycaemia alone revealed 15 glucose abnormalities (0.8%), whereas the complementary oral glucose tolerance test disclosed about 10% of additional abnormalities. A borderline fasting glycaemia (between 110 and 140 mg/dl in venous plasma) was associated with a greater probability of finding an abnormal blood glucose value at the second hour of the oral glucose tolerance test.
abstract_id: PUBMED:12351490
Predicting future cardiovascular disease: do we need the oral glucose tolerance test? Objective: Our objective was to compare the performance of oral glucose tolerance tests (OGTTs) and multivariate models incorporating commonly available clinical variables in their ability to predict future cardiovascular disease (CVD).
Research Design And Methods: We randomly selected 2,662 Mexican-Americans and 1,595 non-Hispanic whites, 25-64 years of age, who were free of both CVD and known diabetes at baseline, from several San Antonio census tracts. Medical history, cigarette smoking history, BMI, blood pressure, fasting and 2-h plasma glucose and serum insulin levels, triglyceride level, and fasting serum total, LDL, and HDL cholesterol levels were obtained at baseline. CVD developed in 88 Mexican-Americans and 71 non-Hispanic whites after 7-8 years of follow-up. Stepwise multiple logistic regression models were developed to predict incident CVD. The areas under receiver operating characteristic (ROC) curves were used to assess the predictive power of these models.
Results: The area under the 2-h glucose ROC curve was modestly but not significantly greater than under the fasting glucose curve, but both were relatively weak predictors of CVD. The areas under the ROC curves for the multivariate models incorporating readily available clinical variables other than 2-h glucose were substantially and significantly greater than under the glucose ROC curves. Addition of 2-h glucose to these models did not improve their predicting power.
Conclusions: Better identification of individuals at high risk for CVD can be achieved with simple predicting models than with OGTTs, and the addition of the latter adds little if anything to the predictive power of the model.
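The comparison of ROC curve areas described above can be hard to picture without an example, so the hedged Python sketch below shows the general mechanics of contrasting a multivariable clinical model with 2-h glucose alone. It does not use the study's data, variable set or stepwise selection procedure; the simulated cohort, coefficients and in-sample AUC evaluation are all assumptions made purely for illustration.

```python
# Hedged sketch: comparing the discrimination (ROC AUC) of a clinical-variable
# model versus 2-h glucose alone for incident CVD. Simulated data only; the
# study used stepwise logistic regression on its own cohort.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 4000
age = rng.uniform(25, 64, n)
sbp = rng.normal(125, 15, n)
ldl = rng.normal(3.4, 0.9, n)
smoker = rng.integers(0, 2, n).astype(float)
glucose_2h = rng.normal(6.5, 1.8, n)

# Simulated risk driven mainly by conventional risk factors, with a weaker
# independent contribution from 2-h glucose (an assumption, not a finding).
logit = (-9.0 + 0.06 * age + 0.02 * sbp + 0.35 * ldl
         + 0.60 * smoker + 0.10 * glucose_2h)
cvd = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

clinical = np.column_stack([age, sbp, ldl, smoker])
glucose_only = glucose_2h.reshape(-1, 1)

model_clinical = LogisticRegression(max_iter=1000).fit(clinical, cvd)
model_glucose = LogisticRegression(max_iter=1000).fit(glucose_only, cvd)

# In-sample AUCs for simplicity; a real analysis would validate out of sample.
auc_clinical = roc_auc_score(cvd, model_clinical.predict_proba(clinical)[:, 1])
auc_glucose = roc_auc_score(cvd, model_glucose.predict_proba(glucose_only)[:, 1])
print(f"AUC, clinical-variable model: {auc_clinical:.2f}")
print(f"AUC, 2-h glucose alone:       {auc_glucose:.2f}")
```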
abstract_id: PUBMED:29216249
Longer time to peak glucose during the oral glucose tolerance test increases cardiovascular risk score and diabetes prevalence. Introduction: The pattern of glucose levels during an oral glucose tolerance test (OGTT) may be useful for predicting diabetes or cardiovascular disease (CVD). Our aim was to determine whether the time to peak glucose during the OGTT is associated with CVD risk scores and diabetes.
Methods: Individuals with impaired fasting glucose (IFG) were enrolled in this observational study. Participants were grouped by the measured time to peak glucose (30, 60, 90 and 120 min) during the 75 g OGTT. The primary outcome was the 10-year CVD risk score (using the Framingham risk score calculator). Secondary outcomes were the effect of time to peak glucose on the prevalence of diabetes and on indicators of glucose homeostasis.
Results: A total of 125 patients with IFG underwent OGTTs. The Framingham 10-year risk score for the 90-min group was 1.7 times that of the 60-min group (6.98±6.56% vs. 4.05±4.60%, P = 0.023). Based on multivariate linear regression, a time to peak glucose of 90 min was associated with a higher Framingham risk score than the 60-min group (β coefficient: 2.043, 95% confidence interval: 0.067-6.008, P = 0.045). The percentages of patients with HbA1c ≥6.5%, isolated post-challenge hyperglycemia (IPH) and diabetes (combined IPH and HbA1c ≥6.5%) increased significantly with longer times to peak glucose. The prevalence of diabetes was higher in the 90-min group than in the 60-min group (31.5% vs. 5.7%, P = 0.001).
Conclusions: In subjects with IFG, those with a longer time to peak glucose had a higher Framingham 10-year risk score and were associated with a greater likelihood of IPH and diabetes.
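As a small illustration of how the exposure in this study is derived, the sketch below shows how a "time to peak glucose" group could be assigned from 75 g OGTT samples drawn at 0, 30, 60, 90 and 120 minutes. The glucose values are invented and the study's adjusted regression on Framingham scores is not reproduced; only the grouping step is demonstrated.

```python
# Minimal sketch: assigning the time-to-peak-glucose group from OGTT samples.
# Values are synthetic; only the grouping step of the study is illustrated.
import numpy as np

times = np.array([0, 30, 60, 90, 120])   # sampling times (min)
# rows = subjects, columns = plasma glucose at each time (mg/dL)
ogtt = np.array([
    [102, 168, 145, 120, 108],   # peaks at 30 min
    [ 99, 150, 181, 160, 131],   # peaks at 60 min
    [105, 140, 170, 188, 150],   # peaks at 90 min
    [101, 130, 150, 172, 190],   # peaks at 120 min
])

# Peak is taken over the post-load samples (the fasting value is ignored).
peak_time = times[ogtt[:, 1:].argmax(axis=1) + 1]
for subject, t in enumerate(peak_time):
    print(f"subject {subject}: time-to-peak group = {t} min")
```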
abstract_id: PUBMED:25506681
The shape of the plasma glucose curve during an oral glucose tolerance test as an indicator of Beta cell function and insulin sensitivity in end-pubertal obese girls. It is hypothesized that the shape of the glucose curve during an oral glucose tolerance test is an early indicator of the risk for developing type 2 diabetes mellitus. In this study, we aimed to examine the shape of plasma glucose response curves and study their relationship with insulin sensitivity, insulin secretion and components of the metabolic syndrome in end-pubertal obese girls. Eighty-one end-pubertal obese girls [median (range) age: 14.4 (11.2-18.0) years; BMI: 34.6 (25.4-50.8) kg/m²] who underwent a 2-h oral glucose tolerance test were classified according to the shape of the glucose curve.
abstract_id: PUBMED:28325022
Comparing glucose and insulin data from the two-hour oral glucose tolerance test in metabolic syndrome subjects and marathon runners. Glucose is the main energy source of the body's cells and is essential for normal metabolism. Two pancreatic hormones, insulin and glucagon, are involved in glucose homeostasis. Alteration in the plasma glucose and insulin concentrations could lead to distinct symptoms and diseases, ranging from mental function impairment to coma and even death. Type 2 diabetes, insulin resistance and metabolic syndrome are typical examples of abnormal glucose metabolism that increase the risk for cardiovascular disease and mortality. The oral glucose tolerance test (OGTT) is a medical test used to screen for prediabetes, type 2 diabetes and insulin resistance. In the 5-sample 2-hour OGTT, plasma glucose and insulin concentrations are measured after a fast and then after oral intake of glucose, at intervals of 30 minutes. In this work, a statistical analysis is carried out to find significant differences between the five stages of the OGTT for plasma glucose and insulin data. In addition, the behavior of the glucose and insulin data is compared between subjects with the metabolic syndrome and marathon runners. Results show that marathon runners have plasma glucose and insulin levels significantly lower (p < 0.05) than people with the metabolic syndrome in all the stages of the OGTT. Insulin secretion decreases in marathon runners due to a significant reduction in plasma glucose concentration, but insulin secretion does not decrease in metabolic syndrome subjects due to insulin resistance; consequently, plasma glucose concentration does not achieve normal levels.
Answer: The oral glucose tolerance test (OGTT) has been a subject of discussion regarding its necessity for predicting future cardiovascular disease (CVD). Some studies suggest that the OGTT can be useful in identifying individuals at risk for CVD and diabetes. For instance, the shape of the glucose curve during an OGTT has been associated with insulin sensitivity, insulin secretion, and components of the metabolic syndrome, which are risk factors for CVD and type 2 diabetes mellitus (PUBMED:27699707; PUBMED:28325022; PUBMED:25506681). Additionally, the time to peak glucose during the OGTT has been linked to higher cardiovascular risk scores and a greater prevalence of diabetes, indicating that the OGTT may provide valuable information beyond fasting glucose levels alone (PUBMED:29216249).
However, other studies argue that simple multivariate models incorporating commonly available clinical variables may be more effective in predicting CVD than the OGTT, and that adding the OGTT to these models does not significantly improve their predictive power (PUBMED:12351490). Furthermore, it has been suggested that fasting plasma glucose (FPG) and HbA1c levels, when used in combination, can be beneficial for diagnosing impaired glucose tolerance without the need for an OGTT (PUBMED:22686045).
In conclusion, while the OGTT can provide detailed information on glucose metabolism and has been associated with CVD risk factors, its necessity for predicting future CVD is debated. Some evidence suggests that simpler and less time-consuming methods could be equally or more effective for CVD risk assessment in certain populations. Therefore, the decision to use the OGTT for predicting future CVD may depend on the specific context and the availability of other clinical variables.
Instruction: Coronary CT Angiography Versus Standard Emergency Department Evaluation for Acute Chest Pain and Diabetic Patients: Is There Benefit With Early Coronary CT Angiography?
Abstracts:
abstract_id: PUBMED:23538167
Coronary CT angiography versus standard of care for assessment of chest pain in the emergency department. Use of coronary CT angiography (CTA) in the early evaluation of low-intermediate risk chest pain in the emergency department represents a common, appropriate application of CTA in the community. Three large randomized trials (CT-STAT, ACRIN-PA, and ROMICAT II) have compared a coronary CTA strategy with current standard of care evaluations in >3000 patients. These trials consistently show the safety of a negative coronary CT angiogram to identify patients for discharge from the emergency department with low rates of major adverse cardiovascular events, at significantly lower cost, and greater efficiency in terms of time to discharge. Together, these trials provide definitive evidence for the use of coronary CTA in the emergency department in patients with a low-to-intermediate pretest probability of coronary artery disease. Clinical practice guidelines that recommend the use of coronary CTA in the emergency department are warranted.
abstract_id: PUBMED:27006119
Coronary CT Angiography Versus Standard Emergency Department Evaluation for Acute Chest Pain and Diabetic Patients: Is There Benefit With Early Coronary CT Angiography? Results of the Randomized Comparative Effectiveness ROMICAT II Trial. Background: Cardiac computed tomography angiography (CCTA) reduces emergency department length of stay compared with standard evaluation in patients with low- and intermediate-risk acute chest pain. Whether diabetic patients have similar benefits is unknown.
Methods And Results: In this prespecified analysis of the Rule Out Myocardial Ischemia/Infarction by Computer Assisted Tomography (ROMICAT II) multicenter trial, we randomized 1000 patients (17% diabetic) with symptoms suggestive of acute coronary syndrome to CCTA or standard evaluation. The rate of acute coronary syndrome was 8% in both diabetic and nondiabetic patients (P=1.0). Length of stay was unaffected by the CCTA strategy for diabetic patients (23.9 versus 27.2 hours, P=0.86) but was reduced for nondiabetic patients compared with standard evaluation (8.4 versus 26.5 hours, P<0.0001; P interaction=0.004). CCTA resulted in 3-fold more direct emergency department discharge in both groups (each P≤0.0001, P interaction=0.27). No difference in hospital admissions was seen between the 2 strategies in diabetic and nondiabetic patients (P interaction=0.09). Both groups had more downstream testing and higher radiation doses with CCTA, but these were highest in diabetic patients (all P interaction≤0.04). Diabetic patients had fewer normal CCTAs than nondiabetic patients (32% versus 50%, P=0.003) and similar normalcy rates with standard evaluation (P=0.70). Notably, 66% of diabetic patients had no or mild stenosis by CCTA with short length of stay comparable to that of nondiabetic patients (P=0.34), whereas those with >50% stenosis had a high prevalence of acute coronary syndrome, invasive coronary angiography, and revascularization.
Conclusions: Knowledge of coronary anatomy with CCTA is beneficial for diabetic patients and can discriminate between lower risk patients with no or little coronary artery disease who can be discharged immediately and higher risk patients with moderate to severe disease who warrant further workup.
Clinical Trial Registration: URL: https://www.clinicaltrials.gov/. Unique identifier: NCT01084239.
abstract_id: PUBMED:27604479
Coronary CT Angiography in the Emergency Department: Current Status. Opinion Statement: Acute chest pain (ACP) represents a clinical as well as economic challenge, often resulting in time-consuming, expensive evaluations to avoid missed diagnosis of acute coronary syndromes (ACSs). Coronary CT angiography (CTA) is an attractive noninvasive technique for use in the emergency department (ED) due to its high accuracy and negative predictive value. Recent studies have demonstrated that coronary CTA can aid in safe, rapid, and cost-efficient triage of these patients. Additional applications of plaque characterization, fractional flow analysis, and CT perfusion imaging hold promise in providing incremental data in patients with suspected ACS. In this review, we examine the data for the use of coronary CTA in acute chest pain, novel applications of the technology, and best practice for its use in the ED.
abstract_id: PUBMED:23809428
Establishing a successful coronary CT angiography program in the emergency department: official writing of the Fellow and Resident Leaders of the Society of Cardiovascular Computed Tomography (FiRST). Coronary CT angiography is an effective, evidence-based strategy for evaluating acute chest pain in the emergency department for patients at low-to-intermediate risk of acute coronary syndrome. Recent multicenter trials have reported that coronary CT angiography is safe, reduces time to diagnosis, facilitates discharge, and may lower overall cost compared with routine care. Herein, we provide a 10-step approach for establishing a successful coronary CT angiography program in the emergency department. The importance of strategic planning and multidisciplinary collaboration is emphasized. Patient selection and preparation guidelines for coronary CT angiography are reviewed with straightforward protocols that can be adapted and modified to clinical sites, depending on available cardiac imaging capabilities. Technical parameters and patient-specific modifications are also highlighted to maximize the likelihood of diagnostic quality examinations. Practical suggestions for quality control, process monitoring, and standardized reporting are reviewed. Finally, the role of a "triple rule-out" protocol is featured in the context of acute chest pain evaluation in the emergency department.
abstract_id: PUBMED:25714275
Cardiac CT angiography in the emergency department. OBJECTIVE. Nearly 8 million patients present annually to emergency departments (EDs) in the United States with acute chest pain. Identifying those with a sufficiently low risk of acute coronary syndrome (ACS) remains challenging. Early imaging is important for risk stratification of these individuals. The objective of this article is to discuss the role of cardiac CT angiography (CTA) as a safe, efficient, and cost-effective tool in this setting and review state-of-the-art technology, protocols, advantages, and limitations from the perspective of our institution's 10-year experience. CONCLUSION. Early utilization of cardiac CTA in patients presenting to the ED with chest pain and a low to intermediate risk of ACS quickly identifies a group of particularly low-risk patients (< 1% risk of adverse events within 30 days) and allows safe and expedited discharge. By preventing unnecessary admissions and prolonged lengths of stay, a strategy based on early cardiac CTA has been shown to be efficient, although potential overutilization and other issues require long-term study.
abstract_id: PUBMED:25301041
Coronary CT angiography for acute chest pain in the emergency department. Acute chest pain in the emergency department (ED) is a common and costly public health challenge. The traditional strategy of evaluating acute chest pain by hospital or ED observation over a period of several hours, serial electrocardiography and cardiac biomarkers, and subsequent diagnostic testing such as physiologic stress testing is safe and effective. Yet this approach has been criticized for being time intensive and costly. This review evaluates the current medical evidence which has demonstrated the potential for coronary CT angiography (CTA) assessment of acute chest pain to safely reduce ED cost, time to discharge, and rate of hospital admission. These benefits must be weighed against the risk of ionizing radiation exposure and the influence of ED testing on rates of downstream coronary angiography and revascularization. Efforts at radiation minimization have quickly evolved, implementing technology such as prospective electrocardiographic gating and high pitch acquisition to significantly reduce radiation exposure over just a few years. CTA in the ED has demonstrated accuracy, safety, and the ability to reduce ED cost and crowding although its big-picture effect on total hospital and health care system cost extends far beyond the ED. The net effect of CTA is dependent also on the prevalence of coronary artery disease (CAD) in the population where CTA is used, which significantly influences rates of post-CTA invasive procedures such as angiography and coronary revascularization. These potential costs and benefits will warrant careful consideration and prospective monitoring as additional hospitals continue to implement this important technology into their diagnostic regimen.
abstract_id: PUBMED:36511943
Coronary Computed Tomography Angiography for Evaluation of Chest Pain in the Emergency Department. Coronary computed tomography angiography has emerged as an important diagnostic modality for evaluation of acute chest pain in the emergency department for patients at low to intermediate risk for acute coronary syndromes. Several clinical trials have shown excellent negative predictive value of coronary computed tomography angiography to detect obstructive coronary artery disease. Cardiac biomarkers such as troponins and creatine kinase MB, along with the history, electrocardiogram, age, risk factors, troponin (HEART) score and the Thrombolysis in Myocardial Infarction score, should be used in conjunction with coronary computed tomography angiography for safe and rapid discharge of patients from the emergency department. Coronary computed tomography angiography along with high-sensitivity troponin assays could be effective for rapid evaluation of acute chest pain in the emergency department, but high-sensitivity troponins are not always available. Emergency department physicians may be less comfortable making clinical decisions when the coronary stenosis is in the range of 50% to 70%. In these cases, further evaluation with functional testing, such as nuclear stress testing or stress echocardiography, is a common approach in many centers; however, newer methods such as fractional flow reserve computed tomography could be safely incorporated into coronary computed tomography angiography to help with clinical decision-making in these scenarios.
abstract_id: PUBMED:26310589
Safety and efficiency of outpatient versus emergency department-based coronary CT angiography for evaluation of patients with potential ischemic chest pain. Background: While coronary CT angiography (coronary CTA) may be comparable to standard care in diagnosing acute coronary syndrome (ACS) in emergency department (ED) chest pain patients, it has traditionally been obtained prior to ED discharge and a strategy of delayed outpatient coronary CTA following an ED visit has not been evaluated.
Objective: To investigate the safety of discharging stable ED patients and obtaining outpatient CCTA.
Methods: At two urban Canadian EDs, patients up to 65 years of age with chest pain but no findings indicating the presence of ACS were further evaluated depending upon the time of presentation: (1) ED-based coronary CTA during normal working hours, or (2) outpatient coronary CTA within 72 hours at other times. All data were collected prospectively. The primary outcome was the proportion of patients who had an outpatient coronary CTA ordered and had a predefined major adverse cardiac event (MACE) between ED discharge and outpatient CT; the secondary outcome was the ED length of stay in both groups.
Results: From July 1, 2012 to June 30, 2014, we enrolled 521 consecutive patients: 350 with outpatient CT and 171 with ED-based CT. Demographics and risk factors were similar in both cohorts. No outpatient CT patients had a MACE prior to coronary CTA (0.0%, 95% CI 0 to 0.9%). The median length of stay for ED-based evaluation was 6.6 hours (interquartile range 5.4 to 8.3 hours), while the outpatient group had a median length of stay of 7.0 hours (IQR 6.0 to 9.8 hours, n.s.).
Conclusions: In ED chest pain patients with a low risk of ACS, performing coronary CTA as an outpatient may be a safe strategy.
abstract_id: PUBMED:25767006
Management of acute chest pain: A major role for coronary CT angiography. Most patients presenting with acute chest pain (ACP) at the emergency unit do not have any marked electrocardiogram abnormalities or known history of heart disease. Identifying the few patients who have, or will actually develop, acute coronary syndrome in this group considered to be at low risk is a real clinical challenge for emergency department physicians. In these patients, the goal of complementary non-invasive morphological or functional imaging tests is to exclude heart disease. The diagnostic values of coronary CT angiography include a sensitivity of 96% and a negative likelihood ratio of 0.09, which are highly contributory to the diagnosis, and the integration of this imaging test into a decision tree algorithm appears to be the least expensive strategy with the best cost-effectiveness ratio. Coronary CT angiography is indicated in the presence of ACP associated with an inconclusive electrocardiogram, in the absence of any other obvious diagnosis, when the ultrasensitive troponin assay is negative or the dynamic changes are modest, slow and/or inconclusive. Ideally, coronary CT angiography should be performed within 3 to 48 hours after the initial consultation.
abstract_id: PUBMED:23139094
CT coronary angiography: new risks for low-risk chest pain. Widespread conservative management of low-risk chest pain has motivated the development of a rapid triage strategy based on CT coronary angiography (CTCA) in the Emergency Department (ED). Recently, three prominent trials using this technology in the ED setting have presented results in support of its routine use. However, these studies fail to show the incremental prognostic value of CTCA over clinical and biomarker-based risk-stratification strategies, demonstrate additional downstream costs and interventions, and result in multiple harms associated with radio-contrast and radiation exposure. Observing the widespread overdiagnosis of pulmonary embolism following availability of CT pulmonary angiogram as a practice pattern parallel, CTCA use for low-risk chest pain in the ED should be advanced only with caution.
Answer: The use of early coronary CT angiography (CCTA) in the evaluation of acute chest pain in the emergency department (ED) has been assessed for its benefits, particularly in diabetic patients, through the Randomized Comparative Effectiveness ROMICAT II Trial. The study found that while CCTA reduced the length of stay for nondiabetic patients compared with standard evaluation (8.4 versus 26.5 hours, P<0.0001), it did not significantly affect the length of stay for diabetic patients (23.9 versus 27.2 hours, P=0.86). However, CCTA did result in a threefold increase in direct ED discharges for both diabetic and nondiabetic patients (P≤0.0001). Diabetic patients had fewer normal CCTAs than nondiabetic patients (32% versus 50%, P=0.003), but those with no or mild stenosis by CCTA had a short length of stay comparable to that of nondiabetic patients. In contrast, diabetic patients with greater than 50% stenosis had a high prevalence of acute coronary syndrome, invasive coronary angiography, and revascularization. Therefore, CCTA can be beneficial for diabetic patients by distinguishing between lower risk patients who can be discharged immediately and higher risk patients who require further workup (PUBMED:27006119).
In general, CCTA is considered an effective strategy for evaluating acute chest pain in the ED for patients at low-to-intermediate risk of acute coronary syndrome. It has been shown to be safe, to reduce time to diagnosis, and to facilitate discharge, and it may lower overall cost compared with routine care (PUBMED:23809428). CCTA has a high negative predictive value for detecting obstructive coronary artery disease and can be used in conjunction with cardiac biomarkers and clinical scores for rapid and safe discharge of patients from the ED (PUBMED:36511943). However, the use of CCTA should be advanced with caution due to potential downstream costs, interventions, and risks associated with radio-contrast and radiation exposure (PUBMED:23139094).
Instruction: Does a nerve-sparing technique or potency affect continence after open radical retropubic prostatectomy?
Abstracts:
abstract_id: PUBMED:16753399
Nerve sparing open radical retropubic prostatectomy--does it have an impact on urinary continence? Purpose: We prospectively assessed the role of nerve sparing surgery on urinary continence after open radical retropubic prostatectomy.
Materials And Methods: We evaluated a consecutive series of 536 patients without prior radiotherapy who underwent open radical retropubic prostatectomy with attempted bilateral, unilateral or no nerve sparing, as defined by the surgeon, at a minimum followup of 1 year with documented assessment of urinary continence status. Because outlet obstruction may influence continence rates, its incidence and management were also evaluated.
Results: One year after surgery, 505 of 536 patients (94.2%) were continent, 27 (5%) had grade I stress incontinence and 4 (0.8%) had grade II stress incontinence. Incontinence was found in 1 of 75 (1.3%), 11 of 322 (3.4%) and 19 of 139 patients (13.7%) with attempted bilateral, attempted unilateral and without attempted nerve sparing, respectively. The proportional differences were highly significant, favoring a nerve sparing technique (p < 0.0001). On multiple logistic regression analysis, attempted nerve sparing was the only statistically significant factor influencing urinary continence after open radical retropubic prostatectomy (OR 4.77, 95% CI 2.18 to 10.44, p = 0.0001). Outlet obstruction at the anastomotic site developed in 33 of the 536 men (6.2%) at a median of 8 weeks (IQR 4 to 12) and was managed by dilation or an endoscopic procedure.
Conclusions: The incidence of incontinence after open radical retropubic prostatectomy is low and continence is highly associated with a nerve sparing technique. Therefore, nerve sparing should be attempted in all patients if the principles of oncological surgery are not compromised.
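The odds ratio quoted in these results (OR 4.77, 95% CI 2.18 to 10.44) comes from a multiple logistic regression. As a side note on how such an interval is read, the sketch below assumes a standard Wald construction on the log-odds scale (the usual default, though the authors' exact software output is not stated) and checks that the published interval is internally consistent, corresponding to a log-scale standard error of roughly 0.40. Only the numbers printed in the abstract are used; nothing else here is study data.

```python
import math

# Values reported in the abstract (PUBMED:16753399).
odds_ratio = 4.77
ci_low, ci_high = 2.18, 10.44

# Under a Wald construction the 95% CI is exp(log(OR) +/- 1.96 * SE),
# so the log-scale standard error can be recovered from the CI width.
se_log = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)

# Rebuild the interval from the point estimate and the recovered SE
# as a consistency check against the published figures.
rebuilt_low = math.exp(math.log(odds_ratio) - 1.96 * se_log)
rebuilt_high = math.exp(math.log(odds_ratio) + 1.96 * se_log)

print(f"log-scale SE ~ {se_log:.3f}")                              # ~0.400
print(f"rebuilt 95% CI: {rebuilt_low:.2f} to {rebuilt_high:.2f}")  # ~2.18 to 10.44
```

The rebuilt interval matches the published one, and the same arithmetic applies to the odds ratios quoted elsewhere in these abstracts.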
abstract_id: PUBMED:27011560
Robot-Assisted Radical Prostatectomy vs. Open Retropubic Radical Prostatectomy for Prostate Cancer: A Systematic Review and Meta-analysis. Open retropubic radical prostatectomy (ORP) remains the "gold standard" for surgical treatment of clinically localized prostate cancer (PCa). Robot-assisted radical prostatectomy (RARP) is a robotic surgery used worldwide. The aim of this study was to collect the data available in the literature on RARP and ORP and to evaluate the overall safety and efficacy of RARP vs. ORP for the treatment of clinically localized PCa. A literature search was performed using electronic databases between January 2009 and October 2013. Clinical data such as operation duration, transfusion rate, positive surgical margins (PSM), nerve sparing, 3- and 12-month urinary continence, and potency were pooled to carry out a meta-analysis. Six studies were enrolled for this meta-analysis. The operation duration of the RARP group was longer than that of the ORP group (weighted mean difference = 64.84). There was no statistically significant difference in the transfusion rate or PSM rate between RARP and ORP (transfusion rate, OR = 0.30; PSM rate, OR = 0.94). No significant difference was seen in 3- and 12-month urinary continence recovery (3 months, OR = 1.32; 12 months, OR = 1.30). Potency differed significantly between RARP and ORP at both 3 and 12 months (3 months, OR = 2.80; 12 months, OR = 1.70). RARP is a safe and feasible surgical technique for the treatment of clinically localized PCa owing to the advantages of fewer perioperative complications and quicker potency recovery.
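For context on what 'pooled' means in the abstract above: a common fixed-effect approach combines per-study log odds ratios weighted by the inverse of their variances, and continuous outcomes such as operation time are combined analogously as a weighted mean difference. The sketch below illustrates only that mechanism; the 2x2 counts are invented for the example and are not the data from the six studies in this meta-analysis.

```python
import math

# Hypothetical per-study 2x2 counts (events / non-events in each arm);
# these are NOT the counts from the studies pooled in PUBMED:27011560.
studies = [
    # (events_RARP, no_events_RARP, events_ORP, no_events_ORP)
    (4, 96, 10, 90),
    (6, 144, 12, 138),
    (3, 57, 7, 53),
]

weights, weighted_log_ors = [], []
for a, b, c, d in studies:
    log_or = math.log((a * d) / (b * c))   # per-study log odds ratio
    var = 1 / a + 1 / b + 1 / c + 1 / d    # Woolf variance of the log OR
    weights.append(1 / var)                # inverse-variance weight
    weighted_log_ors.append(log_or / var)  # weight * log OR

pooled_log_or = sum(weighted_log_ors) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

pooled_or = math.exp(pooled_log_or)
ci_low = math.exp(pooled_log_or - 1.96 * pooled_se)
ci_high = math.exp(pooled_log_or + 1.96 * pooled_se)
print(f"pooled OR = {pooled_or:.2f}, 95% CI {ci_low:.2f} to {ci_high:.2f}")
```

A random-effects model would additionally inflate each study's variance by an estimate of between-study heterogeneity, which is why published meta-analyses usually report a heterogeneity statistic alongside the pooled estimate.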
abstract_id: PUBMED:18808410
Does a nerve-sparing technique or potency affect continence after open radical retropubic prostatectomy? Objective: To characterize the effect of preserving the neurovascular bundle (NVB) and of potency on urinary continence after open radical retropubic prostatectomy (ORRP).
Patients And Methods: Between October 2000 and September 2005, 1110 consecutive continent men underwent ORRP performed by one surgeon. The University of California Los Angeles Prostate Cancer Index was self-administered at baseline and 3, 6, 12, and 24 months after ORRP. Men were considered continent if they responded that they had total urinary control or had occasional urinary leakage. Men were considered potent if they engaged in sexual intercourse with or without the use of phosphodiesterase inhibitors at least once in the month before or after ORRP. Of the 1110 men, 728 (66%) were potent and continent at baseline. Men undergoing adjuvant hormonal therapy, radiation therapy or chemotherapy were excluded. Potency status was evaluated in 610 men at 24 months after ORRP, and the number of NVBs preserved was recorded at the time of ORRP.
Results: Of men who were potent at baseline, 96% of those with bilateral and 99% of those with unilateral nerve sparing were continent at 24 months (P = 0.50). Of the men who were potent versus impotent at 24 months, 98% and 96%, respectively, were continent at 24 months (P = 0.25). Continence did not depend on whether men regained potency or whether they had a bilateral or a unilateral nerve-sparing procedure.
Conclusion: Our observation that only 60% of men undergoing bilateral nerve-sparing ORRP regain potency suggests that the NVBs are often inadvertently injured, despite efforts to preserve them. We feel that potency status is the best indicator of the true extent of NVB preservation. That men undergoing bilateral versus unilateral nerve-sparing procedures have similar continence rates, and that potent and impotent men at 24 months do as well, provides compelling evidence that nerve sparing is not associated with better continence. Based on these findings, NVBs should not be preserved in men with baseline erectile dysfunction with the expectation of improving continence.
abstract_id: PUBMED:34251109
Clinical and morphological assessment of the results of a standard robot-assisted nerve-sparing radical prostatectomy and with the use of Retzius-sparing technique Objective: To compare the perioperative, functional, clinical and morphological results of a standard robot-assisted nerve-sparing radical prostatectomy and with the use of the Retzius-sparing technique.
Materials And Methods: A prospective analysis was performed of two groups of patients (n=54) who underwent nerve-sparing robot-assisted radical prostatectomy (period from 2017 to 2018). The first group included 29 patients who underwent nerve-sparing robot-assisted radical prostatectomy with Retzius-sparing technique, the second - 25 patients operated on according to the standard method of bilateral nerve-sparing radical prostatectomy. All patients were comparable in baseline characteristics. In all cases, patients had histologically verified localized prostate cancer pT2a-2c.
Results: In cases with use Retzius-sparing technique there is no statistically significant difference in the operation time (243.60 min vs 236.64 min, in groups 1 and 2, p>0.05) and intraoperative blood loss (131.20 ml vs 122.57 ml , in groups 1 and 2, p>0.05). Regarding the dynamics of the urinary continence recovery, the Retzius-sparing technique demonstrates advantages in speed and frequency at all follow-up periods (54.13% vs 41.81%; 68.12% vs 59.21%; 94.15% vs 90 , 63%; 98.54% vs 97.12%; 98.62% vs 97.31%; 98.83% vs 97.82% - in one week after removal of the urethral catheter, 1, 3, 6, 9, and 12 months in the first and second group, respectively). The frequency of erectile function recovery after 12 months was 82.17% and 71.14% in the first and second groups, respectively.
Conclusions: Retzius-sparing robot-assisted prostatectomy superior to standard operation in the speed and timing of recovery of urine continence and erectile function.
abstract_id: PUBMED:32620655
Retzius-sparing Robotic-assisted Radical Prostatectomy Facilitates Early Continence Regardless of Neurovascular Bundle Sparing. Background/aim: Retzius-sparing robotic-assisted radical prostatectomy (RARP) has had better results in early continence rate and comparable oncological safety compared to the retropubic approach. However, the role the neurovascular bundle (NVB) sparing plays in the rate of early continence after catheter removal remains unclear. In this study, we sought to compare the early continence rate between Retzius-sparing RARP and the retropubic approach RARP to assess whether NVB sparing affects the continence rate in patients with prostate cancer.
Patients And Methods: This was a retrospective case series of 133 patients who underwent RARP from 2004 to 2017. Ninety-two patients underwent retropubic RARP and 41 patients underwent Retzius-sparing RARP. All procedures were performed by a single surgical team in a single institution. Baseline patient characteristics were recorded and analyzed. Continence results and oncological outcomes were compared between the two groups. The continence outcome of Retzius-sparing RARP with NVB sparing was also analyzed.
Results: No differences in age, prostate size, pathologic T stage, PSA, or NVB sparing were found between the two groups. The oncological results, including surgical margin and biochemical recurrence rate at one year, showed no difference between the two groups. With respect to immediate continence, the Retzius-sparing group showed a better continence result compared to the retropubic approach (75.6% vs. 26.1%, respectively, p<0.001) after catheter removal. However, there was no difference between the two groups after 6 months. Furthermore, within the Retzius-sparing group, no significant difference in immediate continence was found between patients with NVB sparing and those without (75% vs. 78%, respectively, p=1.00).
Conclusion: Retzius-sparing RARP may provide a better immediate continence result compared to retropubic RARP. In Retzius-sparing RARP, NVB sparing did not enhance immediate continence after the operation.
abstract_id: PUBMED:17074431
Nerve-sparing open radical retropubic prostatectomy. Introduction: In recent years, the surgical technique for open radical prostatectomy has evolved and increasing attention is paid to preserving anatomic structures and the impact on outcome and quality of life.
Methods: Technical aspects of nerve-sparing open radical retropubic prostatectomy (RRP) are described. Patient selection criteria and functional results are discussed, focusing on postoperative urinary continence.
Results: The video demonstrates the nerve-sparing open RRP, and important steps are elucidated with schematic drawings. The value of nerve sparing, not only for preserving erectile function but also for preserving urinary continence, is discussed, and results from our institution are presented. In our series, urinary incontinence was present in 1 of 71 patients (1%) with attempted bilateral nerve-sparing, 11 of 322 (3%) with attempted unilateral nerve-sparing, and 19 of 139 (14%) without attempted nerve-sparing surgery. In multiple logistic regression analysis, the only statistically significant factor influencing urinary continence after open RRP was attempted nerve sparing (odds ratio, 4.77; 95% confidence interval, 2.18-10.44; p=0.0001).
Conclusions: Nerve-sparing surgery has a significant impact on erectile function and urinary continence and should be performed in all patients provided radical tumour resection is not compromised. For successful nerve preservation we advocate a lateral approach to the prostate to improve visualisation and simplify separation of the neurovascular bundles from the dorsolateral prostatic capsule. Bunching, ligating, and incising Santorini's plexus over the prostate and not over the sphincter ensures a bloodless surgical field. Mucosa-to-mucosa adaptation of the reconstructed bladder neck and the urethra is another important factor to be observed.
abstract_id: PUBMED:8345607
Return of erections and urinary continence following nerve sparing radical retropubic prostatectomy. We evaluated recovery of erections and urinary continence following anatomical radical retropubic prostatectomy in a series of 784 consecutive patients with clinical stage A or B prostate cancer. Nerve sparing radical prostatectomy was performed in men deemed appropriate candidates. Recovery of erections sufficient for intercourse and urinary continence were analyzed controlling for patient age, pathological tumor stage and the performance of unilateral or bilateral nerve sparing surgery in men followed for a minimum of 18 months. Erections were regained in 149 of 236 preoperatively potent men (63%) treated with bilateral and 24 of 59 (41%) treated with unilateral nerve sparing surgery. Recovery of erections correlated with patient age and pathological tumor stage in patients treated with bilateral nerve sparing surgery. Continence was regained in 409 of 435 patients (94%) and did not correlate with patient age, tumor stage or nerve sparing surgery. Anatomical radical retropubic prostatectomy can be performed with favorable results in preserving potency and urinary continence. Better results are achieved in younger men with organ confined cancer.
abstract_id: PUBMED:16332409
Open retropubic nerve-sparing radical prostatectomy. Retropubic radical prostatectomy is the most commonly used therapeutic option for the treatment of clinically localized prostate cancer. An ongoing stage migration toward organ-confined cancers allows performance of a nerve-sparing procedure in a growing number of patients. Key elements for achieving convincing functional results are a sphincter-preserving ligation of the distal part of the Santorini plexus and the subtle preparation of the neurovascular bundle. This article gives a detailed description of the operative technique, which is demonstrated in the attached DVD. Furthermore, indication, oncologic outcome, and functional results addressing postoperative urinary continence and potency are discussed.
abstract_id: PUBMED:32739124
Radical retropubic prostatectomy: Preservation of urinary continence. Objective: To provide a historical review of the procedures and maneuvers described in the literature to preserve or improve urinary continence after retropubic prostatectomy.
Material And Method: Review of the original articles on the design of maneuvers for the preservation of urinary continence.
Results: Numerous maneuvers attempting to preserve or improve urinary continence after prostatectomy have been described. They can be grouped into preservation of the puboprostatic ligaments, bladder neck, striated sphincter or trigonal innervation; construction of a neourethra; suspension of the anastomosis or the dorsal venous complex; and intussusception of the bladder neck.
Conclusion: There is no ideal maneuver for preserving or improving urinary continence after retropubic prostatectomy, as no well-conducted study of any of these techniques confirms their efficacy.
abstract_id: PUBMED:14991116
Technique of nerve-sparing radical retropubic prostatectomy. Open radical retropubic prostatectomy is the most popular operative technique for the treatment of clinically localized prostate cancer. Key elements for achieving convincing functional results are a sphincter-preserving ligation of the distal part of Santorini's plexus and the subtle preparation of the neurovascular bundle. This article gives a detailed description of our operative technique. Furthermore, a strategy for patient and tumor selection for the indication of nerve-sparing radical prostatectomy is suggested.
Answer: The impact of nerve-sparing techniques on urinary continence after open radical retropubic prostatectomy (ORRP) has been explored in several studies, with varying conclusions.
One study found that nerve-sparing surgery had a significant positive impact on urinary continence after ORRP. The incidence of incontinence was lower in patients who underwent bilateral or unilateral nerve-sparing surgery compared to those with no nerve sparing. The study concluded that nerve sparing should be attempted in all patients if oncological principles are not compromised, as it was the only statistically significant factor influencing urinary continence (PUBMED:16753399).
However, another study reported that continence did not depend on whether men regained potency or whether they had a bilateral or unilateral nerve-sparing procedure. This study suggested that nerve-sparing is not associated with better continence, and that potency status is the best indicator of the true extent of neurovascular bundle (NVB) preservation. It recommended that NVBs should not be preserved in men with baseline erectile dysfunction with the expectation of improving continence (PUBMED:18808410).
Further research comparing Retzius-sparing robot-assisted radical prostatectomy with standard techniques found that Retzius-sparing surgery was superior in the speed and timing of recovery of urinary continence and erectile function (PUBMED:34251109). Another study on Retzius-sparing robotic-assisted radical prostatectomy (RARP) indicated that this technique facilitated early continence regardless of NVB sparing, suggesting that NVB sparing did not enhance immediate continence after the operation (PUBMED:32620655).
In summary, while some studies suggest that nerve-sparing techniques can positively affect urinary continence after ORRP (PUBMED:16753399; PUBMED:17074431), others indicate that the preservation of NVBs may not be associated with better continence outcomes (PUBMED:18808410; PUBMED:32620655). The discrepancy in findings may be due to differences in surgical techniques, patient selection, and definitions of continence used in the studies.
Instruction: Sputum eosinophilia and maximal airway narrowing in Dermatophagoides pteronyssinus allergic rhinitis patients: only rhinitis or rhinitis plus mild asthma?
Abstracts:
abstract_id: PUBMED:12426253
Sputum eosinophilia and maximal airway narrowing in Dermatophagoides pteronyssinus allergic rhinitis patients: only rhinitis or rhinitis plus mild asthma? Study Objective: To study the existence of bronchial disease among rhinitis patients. To evaluate the laboratory test or set of tests (ie, symptoms, exposure, and sensitization to the allergen, and the provocative dose of methacholine [Mth] causing a 20% fall in FEV1 [PD20] and the maximal response plateau [MRP] to Mth) that best identifies a case of mild asthma.
Design: Cross-sectional analysis in 52 Dermatophagoides pteronyssinus-monosensitized patients who were consulting a physician for perennial rhinitis.
Setting: Allergy Department, Hospital Doctor Negrín, Las Palmas, Grand Canary Island, Spain.
Interventions And Measurements: Patients filled out a standardized asthma symptom questionnaire, and underwent sputum induction and Mth challenge in which 40% falls in FEV1 were attained. Dose-response curves were expressed in terms of both PD20 values and the level of the MRP. D pteronyssinus allergen exposure was assessed in dust samples from patients' beds.
Results: No difference was observed between patients who responded positively to the questionnaire and those who did not. Mth-PD20 values were not detected in 13% of the patients reporting bronchial symptoms, and an MRP was not identified in 59% of the subjects who did not respond positively. A higher degree of allergen sensitization (ascertained from skin test results, and total and specific serum IgE levels) and a higher degree of sputum eosinophilia were detected in subjects in whom an MRP had not been identified. The presence of sputum eosinophilia provided the best differentiation between those patients who presented with an MRP and those who did not.
Conclusion: The individual perception of bronchial symptoms is highly variable among perennial allergic rhinitis patients. The lack of a maximal airway-narrowing plateau is related to the presence of sputum eosinophilia, which might be useful in the detection of patients susceptible to anti-inflammatory therapy. Prospective studies are needed to evaluate whether these patients are more likely to develop symptomatic asthma in the future and whether early anti-inflammatory treatment prevents its development.
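For orientation on the dose-response indices used in this abstract: PD20 is conventionally obtained by log-linear interpolation between the last two points of the methacholine dose-response curve, while the maximal response plateau (MRP) is identified when further dose increments no longer produce an appreciable additional fall in FEV1. The snippet below sketches that interpolation only; the doses and responses are invented for illustration and are not data from this study.

```python
import math

def pd20(d1, r1, d2, r2, target=20.0):
    """Log-linear interpolation between the last dose with a fall below the
    target (d1, r1) and the first dose at or above it (d2, r2)."""
    log_pd = (math.log10(d1)
              + (math.log10(d2) - math.log10(d1)) * (target - r1) / (r2 - r1))
    return 10 ** log_pd

# Illustrative cumulative doses (in micrograms) and % falls in FEV1.
doses = [50, 100, 200, 400, 800]
falls = [3.0, 6.0, 11.0, 17.0, 26.0]

# Find the pair of consecutive points bracketing a 20% fall.
for (d1, r1), (d2, r2) in zip(zip(doses, falls), zip(doses[1:], falls[1:])):
    if r1 < 20.0 <= r2:
        print(f"PD20 ~ {pd20(d1, r1, d2, r2):.0f} micrograms")
        break
else:
    # No dose produced a 20% fall, so PD20 is reported as "not detected",
    # as it was for some patients in the abstract above.
    print("PD20 not detected over the dose range tested")
```

A plateau, by contrast, is read off the upper end of the curve once consecutive doses give essentially the same response; the patients in whom no such plateau could be identified were, per the abstract, those with the most marked sputum eosinophilia.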
abstract_id: PUBMED:23814677
Rhinitis patients with sputum eosinophilia show decreased lung function in the absence of airway hyperresponsiveness. Purpose: Sputum eosinophilia is observed frequently in patients with rhinitis. Sputum eosinophilia in patients with non-asthmatic allergic rhinitis has been suggested to be related to nonspecific airway hyperresponsiveness (AHR). However, the clinical significance of sputum eosinophilia in patients with non-asthmatic rhinitis without AHR has not been determined. We conducted a retrospective study examining the influence of sputum eosinophilia in patients with non-asthmatic rhinitis without AHR on pulmonary function and expression of fibrosis-related mediators.
Methods: Eighty-nine patients with moderate-to-severe perennial rhinitis without AHR were included. All underwent lung function tests (forced expiratory volume in 1 second [FEV1] and forced vital capacity [FVC]), skin tests to inhalant allergens, methacholine bronchial challenge tests, and hypertonic saline-induced sputum to determine eosinophil counts. Sputum mRNA levels for transforming growth factor-β (TGF-β), matrix metalloproteinase-9 (MMP-9), and tissue inhibitor of metalloproteinase-1 (TIMP-1) were also examined. Patients were divided into two groups according to the presence of sputum eosinophilia (≥3%, eosinophilia-positive [EP] and <3%, eosinophilia-negative [EN] groups).
Results: FEV1 was significantly lower (P=0.04) and FEV1/FVC tended to be lower (P=0.1) in the EP group than in the EN group. In sputum analyses, the MMP-9 mRNA level (P=0.005) and the ratio of MMP-9 to TIMP-1 expression (P=0.01) were significantly higher in the EP group than in the EN group. There was no significant difference in TGF-β mRNA expression between the two groups.
Conclusions: Sputum eosinophilia in patients with moderate-to-severe perennial rhinitis without AHR influenced FEV1 and the expression pattern of fibrosis-related mediators.
abstract_id: PUBMED:33263506
Upper and lower airway inflammation in severe asthmatics: a guide for a precision biologic treatment. Background And Aims: Severe asthma may require the prescription of one of the biologic drugs currently available, using surrogate markers of airway inflammation (serum IgE levels and allergic sensitization for anti-IgE, or blood eosinophils for anti-IL5/IL5R). Our objective was to assess upper and lower airway inflammation in severe asthmatics divided according to the eligibility criteria for one of the target biologic treatments.
Methods: We selected 91 severe asthmatics, uncontrolled despite high-dose ICS-LABA, and followed for >6 months with optimization of asthma treatment. Patients underwent clinical, functional and biological assessment, including induced sputum and nasal cytology. They were then clustered according to the eligibility criteria for omalizumab or mepolizumab/benralizumab.
Results: Four clusters were selected: A (eligible for omalizumab, n = 23), AB (both omalizumab and mepolizumab, n = 26), B (mepolizumab, n = 22) and C (non-eligible for both omalizumab and mepolizumab, n = 20). There was no difference among clusters for asthma control (Asthma Control Test and Asthma Control Questionnaire 7), pre-bronchodilator forced expiratory volume in 1 s, serum IgE and fractional exhaled nitric oxide levels. Sputum eosinophils were numerically higher in clusters AB and B, in agreement with the higher levels of blood eosinophils. Allergic rhinitis was more frequent in clusters A and AB, while chronic rhinosinusitis with nasal polyps prevalence increased progressively from A to C. Eosinophils in nasal cytology were higher in clusters AB, B and C.
Conclusion: Eosinophilic upper and lower airway inflammation is present in the large majority of severe asthmatics, independently of the prescription criteria for the currently available biologics, and might suggest the use of anti-IL5/IL5R or anti-IL4/13 also in patients without blood eosinophilia.
abstract_id: PUBMED:27852361
Lower airway abnormalities in patients with allergic rhinitis. Objective: To investigate the characteristics of lower airway abnormalities in allergic rhinitis (AR) patients without asthma. Methods: Between June 2008 and December 2012, 377 consecutive AR patients and 264 healthy subjects were recruited. All subjects underwent meticulous history taking, nasal examination, allergen skin prick tests, routine blood tests, serum total immunoglobulin E assay, induced sputum cell counts and differentials, measurement of fractional exhaled nitric oxide (FeNO), and bronchial challenge testing. Results: The positive rates in AR patients were 12.2% (46/377) for the bronchial provocation test, 49.2% (185/377) for FeNO, 39.0% (147/377) for sputum eosinophilia, 15.6% (40/377) for peripheral blood eosinophilia, and 55.4% (209/377) for increased serum total IgE levels, all consistently and statistically higher than in healthy controls (P<0.01). The levels of FeNO [35.0 (21.8, 65.9) ppb], induced sputum eosinophil percentage [2.0 (0.0, 7.5)%], peripheral blood eosinophil percentage [2.9 (1.8, 4.5)%] and serum total IgE [178.4 (63.1, 384.0) kU/L] in AR patients were also higher (P<0.01). Compared with healthy controls, patients with AR demonstrated lower levels of FEV1/FVC%, MMEF pred%, MEF75 pred%, and MEF25 pred% (all P<0.05). Statistical analysis showed that FeNO, induced sputum eosinophil percentage and peripheral blood eosinophil percentage correlated significantly with each other (P<0.01), with r values of 0.247, 0.235 and 0.355, respectively. Conclusion: AR without asthma is characterized by lower airway inflammation, small airway impairment and bronchial hyperreactivity, features similar to those of asthma.
abstract_id: PUBMED:15461606
Effect of immunotherapy on asthma progression, BHR and sputum eosinophils in allergic rhinitis. Background: Bronchial hyperresponsiveness (BHR) and airway inflammation are frequently associated with allergic rhinitis, and may be important risk factors for the development of asthma. Specific immunotherapy (SIT) reduces symptoms in subjects with allergic rhinitis, but the mechanisms are not clear.
Aims Of The Study: To assess the effect of Parietaria-SIT on asthma progression, rhinitic symptoms, BHR, and eosinophilic inflammation.
Methods: Nonasthmatic subjects with seasonal rhinitis were randomly assigned to receive Parietaria pollen vaccine (n = 15) or matched placebo (n = 15). Data on symptom and medication scores, BHR to methacholine, and sputum eosinophilia were collected throughout the 3-year study.
Results: By the end of the study, symptom and medication scores in the placebo group had significantly increased by a median (interquartile range) of 121% (15-280) and 263% (0-4400), respectively (P < 0.01), whereas no significant change was observed in the SIT group. We found no significant changes in sputum eosinophils or BHR to methacholine in either group throughout the study. Nine of 29 participants developed asthma symptoms during the study; of these, only two (14%) were in the SIT-treated group (P = 0.056).
Conclusions: Parietaria-SIT reduces symptom and rescue medication scores, but no changes in BHR to methacholine or sputum eosinophilia were observed. Moreover, Parietaria-SIT appears to prevent the natural progression of allergic rhinitis to asthma, suggesting that SIT should be considered earlier in the management of subjects with allergic rhinitis.
abstract_id: PUBMED:16433856
Respiratory symptoms, bronchial hyper-responsiveness, and eosinophilic airway inflammation in patients with moderate-to-severe atopic dermatitis. Background: Patients with atopic dermatitis (AD) often have symptoms suggestive of asthma or rhinitis. The prevalence and signs of respiratory disease in AD patients have been studied to a limited extent.
Objectives: To assess the prevalence and clustering of respiratory symptoms, bronchial hyper-responsiveness (BHR), and eosinophilic airway inflammation in patients with moderate-to-severe AD.
Methods: Eighty-six consecutive patients with moderate-to-severe AD and 49 randomly selected control subjects without AD were studied by questionnaire, flow volume spirometry, histamine challenge to detect BHR, induced sputum test to detect eosinophilic airway inflammation, and skin prick tests (SPTs) and total serum immunoglobulin (Ig)E measurements to detect atopy.
Results: The patients with AD showed increased risk of physician-diagnosed asthma (36% vs. 2%, odds ratio (OR) 10.1, confidence interval (CI) 1.3-79.7, P=0.03), physician-diagnosed allergic rhinitis (AR) (45% vs. 6%, OR 4.5, CI 1.2-16.7, P=0.02), BHR (51% vs. 10%, OR 5.5, CI 1.5-20.1, P=0.01), and sputum eosinophilia (81% vs. 11%, OR 76.1, CI 9.3-623.5, P<0.0001) compared with the control subjects. In AD patients, elevated s-IgE and positive SPTs were associated with the occurrence of physician-diagnosed asthma and AR, BHR, and the presence of sputum eosinophilia.
Conclusions: BHR and eosinophilic airway inflammation are more common in patients with AD than in control subjects. The highest prevalences were seen in patients with AD who were SPT positive and had high IgE levels. Longitudinal studies are needed to assess the outcome of patients with signs of airway disease, in order to identify those who need early initiation of asthma treatment.
abstract_id: PUBMED:12911426
Relationship between cutaneous allergen response and airway allergen-induced eosinophilia. Background: Determinants of changes in airway caliber after allergen challenge include nonallergic airway responsiveness, immune response and dose of allergen given. However, determinants of the airway inflammatory response to allergens remain to be determined.
Aim: To assess the relationship between skin reactivity to airborne allergens and lower airway eosinophilic response to allergen exposure in asthma and allergic rhinitis.
Methods: Forty-two subjects with mild allergic asthma (mean age 24 years) and 14 nonasthmatic subjects with allergic rhinitis (mean age 25 years) had allergen skin prick tests and titration with the allergen chosen for subsequent challenge. On a second visit, 31 asthmatic subjects had a conventional challenge while 11 asthmatic subjects and all rhinitic subjects had a low-dose allergen challenge over four subsequent days. Induced sputum samples were obtained at 6 and 24 h after the conventional challenge and at days 2 and 4 of the low-dose challenge.
Results: In the asthmatic group, there was a weak correlation between wheal diameter induced by the concentration used for challenge and increase in eosinophils 6 h postconventional challenge (r = 0.372, P = 0.05), but no correlation was observed following the low-dose challenge. Rhinitic subjects showed a correlation between wheal diameter with the allergen dose used for bronchoprovocation and increase in eosinophils at day 2 of low dose (r = 0.608, P = 0.02).
Conclusion: This study suggests that immediate immune responsiveness to allergen, assessed by the magnitude of the skin response, is a significant determinant of allergen-induced airway eosinophilia and can help to predict the airway inflammatory response.
abstract_id: PUBMED:15317270
A randomized, controlled study of specific immunotherapy in monosensitized subjects with seasonal rhinitis: effect on bronchial hyperresponsiveness, sputum inflammatory markers and development of asthma symptoms. Allergic rhinitis is often associated with bronchial hyperresponsiveness (BHR) and airway inflammation, and it seems to be an important risk factor for the development of asthma. Specific immunotherapy (SIT) reduces symptoms and medication requirements in subjects with allergic rhinitis, but the mechanisms by which SIT promotes these beneficial effects are less clear. We have investigated the effects of Parietaria-SIT on rhinitis symptoms, BHR to inhaled methacholine, eosinophilic inflammation and cytokine production (interferon gamma and interleukin-4) in the sputum. The effect on asthma progression was also examined. Thirty non-asthmatic subjects with seasonal rhinitis and monosensitized to Parietaria judaica participated in a randomized, double-blind, placebo-controlled, parallel group study. Participants were randomly assigned to receive injections of a Parietaria pollen vaccine (n = 15) or matched placebo injections (n = 15) in a rapid updosing cluster regimen for 7 weeks, followed by monthly injections for 34 months. Throughout the 3-year study we collected data on symptoms and medication score, airway responsiveness to methacholine, eosinophilia and soluble cytokines in sputum, followed by a complete evaluation of the clinical course of atopy. Hay fever symptom and medication scores were well controlled by SIT. By the end of the study, in the placebo group, symptom and medication scores significantly increased by a median (interquartile range) of 121% (15-280%) and 263% (0-4400%) respectively (p < 0.01), whereas no significant difference was observed in the SIT group. We found no significant changes in the sputum parameters and methacholine PC15 values in both groups throughout the study. By the end of the investigation, a total of 9 out of 29 participants developed asthma symptoms; of these, seven (47%) belonged to the placebo group, whereas only 2 (14%) to the SIT-treated group (p = 0.056). In conclusion, Parietaria-SIT is effective in controlling hay fever symptoms and rescue medications, but no changes in the BHR to methacholine or sputum eosinophilia were observed. Moreover, Parietaria-SIT appears to prevent the natural progression of allergic rhinitis to asthma, suggesting that SIT should be considered earlier in the management of this condition.
abstract_id: PUBMED:31228618
Effect of Sublingual Immunotherapy on Airway Inflammation and Airway Wall Thickness in Allergic Asthma. Background: The efficacy of the standardized quality (SQ) house dust mite (HDM) sublingual immunotherapy (SLIT) has been demonstrated for respiratory allergic disease. However, the effects of SLIT on inflammation and structural changes of the airways are still unknown.
Objective: The aim of this study was to assess the effects of the 6 SQ-HDM SLIT on airway inflammation and airway geometry in allergic asthma and rhinitis.
Methods: One hundred two asthmatic patients with rhinitis sensitized to HDM were randomized to receive either SLIT plus pharmacotherapy or standard pharmacotherapy alone for 48 weeks. Fractional exhaled nitric oxide (FeNO), pulmonary function, quantitative computed tomography, and clinical symptoms were assessed at baseline and at the end of the study.
Results: Compared with pharmacotherapy, SLIT demonstrated significant reductions in FeNO (P < .01), airway wall area/body surface area (WA/BSA, P < .001), wall thickness (T/√BSA, P < .001) and percentage wall area (WA/Ao, P < .01), an increase in luminal area (Ai/BSA, P < .05), and improvements in airflow limitation (P < .001) and clinical symptom scores (P < .05). The change in forced expiratory volume in 1 second (FEV1) was correlated with changes in both FeNO and airway dimensions. Multiple regression analysis showed that the change in FeNO was independently associated with an increase in FEV1 in the SLIT group (r2 = 0.623, P = .037).
Conclusions: Adding 6 SQ-HDM SLIT to standard asthma therapy provides a significant improvement in symptoms and pulmonary function compared with pharmacotherapy. Improvement of airflow limitation with SLIT was associated with the decrease in eosinophilic airway inflammation.
abstract_id: PUBMED:21535990
Upper and lower airway pathology in young children with allergic- and non-allergic rhinitis. Allergic- and non-allergic rhinitis are very common diseases in childhood in industrialized countries. Although these conditions are widely trivialized by both parents and physicians, they have a major impact on quality of life for the affected children and impose a substantial drain on health care resources. Unfortunately, diagnostic specificity is hampered by nonspecific symptom history and a lack of reliable diagnostic tests, which may explain why the pathology behind such diagnoses is poorly understood. Improved understanding of the pathophysiology of allergic- and non-allergic rhinitis in young children may contribute to the discovery of new mechanisms involved in pathogenesis and help direct future research to develop correctly timed preventive measures as well as adequate monitoring and treatment of children with rhinitis. Asthma is a common comorbidity in subjects with allergic rhinitis, and epidemiological surveys have suggested a close connection between upper and lower airway diseases, expressed as the "united airways concept". Furthermore, an association between upper and lower airway diseases also seems to exist in non-atopic individuals. Nevertheless, the nature of this association is poorly understood and there is a paucity of data objectivizing this association in young children. The aim of this thesis was to describe pathology in the upper and lower airways in young children from the COPSAC birth cohort with investigator-diagnosed allergic- and non-allergic rhinitis. Nasal congestion is a key symptom in both allergic- and non-allergic rhinitis, and eosinophilic inflammation is a hallmark of the allergic diseases. In paper I, we studied nasal eosinophilia and nasal airway patency assessed by acoustic rhinometry in children with allergic rhinitis, non-allergic rhinitis and healthy controls. Allergic rhinitis was significantly associated with nasal eosinophilia and irreversible nasal airway obstruction, suggesting chronic inflammation and structural remodeling of the nasal mucosa in children already at age 6 years. Non-allergic rhinitis exhibited no change in nasal airway patency, but some nasal eosinophilia, albeit less than in children with allergic rhinitis. These findings suggest different pathology in allergic- and non-allergic rhinitis, which may have important clinical implications for early pharmacological treatment of rhinitis in young children. In paper II, we utilized the nasal airway patency end-points derived from paper I to examine whether upper and lower airway patency are associated. Upper airway patency was assessed by acoustic rhinometry before and after intranasal α-agonist, and lower airway patency by spirometry before and after inhaled β2-agonist. Upper and lower airway patencies were strongly associated and independent of body size, rhinitis and asthma. The association was consistent for both baseline values and for decongested nasal airway patency and post-β2 FEV1. Blood and nasal eosinophilia were also associated with nasal airway obstruction. This suggests generalized diminished airway dimensions as a novel susceptibility factor for concurrent symptoms of asthma and rhinitis in early childhood and supports the notion of a common pathophysiology in asthma and rhinitis. The clinical interpretation of these findings is that all children presenting with either rhinitis or asthma should be considered inflamed in the entire respiratory tract.
In paper III, we aimed to describe asthma and intermediary asthma end-points associated with allergic- and non-allergic rhinitis in preschool-aged children. At age 7 years, we evaluated prevalence of asthma, eczema, food sensitization, and filaggrin mutations; levels of total IgE, FeNO, and blood-eosinophils; lung function and bronchial responsiveness to cold dry air. We found that asthma was similarly associated with allergic- and non-allergic rhinitis suggesting a link between upper and lower airway diseases beyond an allergy associated inflammation. Only children with allergic rhinitis had increased bronchial responsiveness and elevated FeNO suggesting different endotypes of asthma symptoms in young children with allergic- and non-allergic rhinitis. We also found bronchial hyperresponsiveness and raised values of FeNO in children with allergic rhinitis without asthma suggesting sub-clinical bronchial inflammation and supporting the allergic disease process to involve both upper and lower airways. In conclusion, these observations objectively show marked differences in nasal pathology in young children with allergic- and non-allergic rhinitis and lend support to a close connection between upper and lower airway diseases partly from an allergy driven process, but equally from non-allergic mechanisms.
Answer: The presence of sputum eosinophilia in patients with allergic rhinitis to Dermatophagoides pteronyssinus (a common house dust mite) may indicate more than just rhinitis; it could also suggest the presence of mild asthma. In a study of patients with perennial allergic rhinitis, the lack of a maximal airway-narrowing plateau was related to the presence of sputum eosinophilia, which might be useful in detecting patients susceptible to anti-inflammatory therapy (PUBMED:12426253). This suggests that these patients may have underlying bronchial symptoms that are not fully captured by questionnaires or methacholine challenge tests.
Another study found that rhinitis patients with sputum eosinophilia showed decreased lung function in the absence of airway hyperresponsiveness (AHR), indicating that sputum eosinophilia in patients with moderate-to-severe perennial rhinitis without AHR influenced FEV1 and the expression pattern of fibrosis-related mediators (PUBMED:23814677). This further supports the idea that sputum eosinophilia could be indicative of lower airway involvement.
Moreover, a study on severe asthmatics divided according to eligibility criteria for biologic treatments found that eosinophilic upper and lower airway inflammation was present in the majority of severe asthmatics, independently from the prescription criteria for the currently available biologics (PUBMED:33263506). This indicates that eosinophilic inflammation is a common feature in both upper and lower airways in severe asthma, which could also be relevant for patients with allergic rhinitis and sputum eosinophilia.
In conclusion, the presence of sputum eosinophilia in patients with Dermatophagoides pteronyssinus allergic rhinitis may suggest the presence of mild asthma, even in the absence of overt bronchial hyperresponsiveness. This could have implications for the management and treatment of these patients, potentially indicating the need for therapies typically used in asthma management.
Instruction: Proliferative responses in the local lymph node assay associated with concomitant exposure to 1,4-phenylenediamine and methyldibromo glutaronitrile: evidence for synergy?
Abstracts:
abstract_id: PUBMED:18759875
Proliferative responses in the local lymph node assay associated with concomitant exposure to 1,4-phenylenediamine and methyldibromo glutaronitrile: evidence for synergy? Background: A key consideration when undertaking risk assessments should be the potential for synergy between contact allergens. Previously, this concept has only been investigated during elicitation in contact allergic individuals.
Objective: To determine whether there exists evidence for synergy between contact allergens during the induction phase of skin sensitization using the mouse local lymph node assay (LLNA) as a model system.
Method: Proliferative responses in draining lymph nodes were assessed with increasing concentrations of 1,4-phenylenediamine (PPD), methyldibromo glutaronitrile (MDBGN), and a combination of PPD and MDBGN.
Result: Data from each of two independent experiments show that lymph node cell proliferation associated with combined exposure to PPD and MDBGN was, in general, only modestly increased relative to that predicted from a simple summation of their individual responses.
Conclusions: Although the increase in response is very modest, it does imply a relationship between this combination of sensitizers that may not be simply additive in terms of their ability to stimulate proliferative responses in draining lymph nodes. The reproducibility of this observation should be confirmed in future studies with additional pairs of contact allergens to ascertain whether or not this represents evidence of synergy.
abstract_id: PUBMED:19967517
The local lymph node assay and skin sensitization testing. The mouse local lymph node assay (LLNA) is a method for the identification and characterization of skin sensitization hazards. In this context the method can be used both to identify contact allergens and to determine the relative skin sensitizing potency as a basis for derivation of effective risk assessments. The assay is based on measurement of proliferative responses by draining lymph node cells induced following topical exposure of mice to test chemicals. Such responses are known to be causally and quantitatively associated with the acquisition of skin sensitization and therefore provide a relevant marker for characterization of contact allergic potential. The LLNA has been the subject of exhaustive evaluation and validation exercises and has been assigned Organization for Economic Cooperation and Development (OECD) test guideline 429. Herein we describe the conduct and interpretation of the LLNA.
abstract_id: PUBMED:11164993
The local lymph node assay and potential application to the identification of drug allergens. The local lymph node assay (LLNA) is a method for the identification of skin sensitization hazard. The method is based upon measurement of proliferative responses induced in draining lymph nodes following topical exposure of mice to the test chemical. More recently the LLNA has also been used for the evaluation of relative skin sensitization potency in the context of risk assessment. Idiosyncratic drug reactions resulting from the stimulation of allergic or autoimmunogenic responses are poorly understood but represent an important clinical problem. In this article, the potential utility of the LLNA, either in a conventional or a modified configuration, to provide information of value in assessing the potential for systemic allergenicity is considered.
abstract_id: PUBMED:19338586
Skin sensitization potency and cross-reactivity of p-phenylenediamine and its derivatives evaluated by non-radioactive murine local lymph node assay and guinea-pig maximization test. Background: p-Phenylenediamine (PPD)-related chemicals have been used as antioxidants in rubber products, and many cases of contact dermatitis caused by these chemicals have been reported.
Objective: The aim of this study was to investigate relative sensitizing potency and cross-reactivity among PPD derivatives.
Methods: Five PPD derivatives, p-aminodiphenylamine (PADPA), N,N'-diphenyl-p-phenylenediamine (DPPD), N-isopropyl-N'-phenyl-p-phenylenediamine (IPPD), N-(1,3-dimethylbutyl)-N'-phenyl-p-phenylenediamine (DMBPPD), N-(1-methylheptyl)-N'-phenyl-p-phenylenediamine (MHPPD), and the core chemical PPD were evaluated for their sensitizing potency and cross-reactivity using the non-radioactive murine local lymph node assay (LLNA) and the guinea-pig maximization test (GPMT).
Results: PPD and all the derivatives were identified as primary sensitizers in both tests. The order of potency in the LLNA was as follows: IPPD and PADPA > PPD > DMBPPD and MHPPD > DPPD. In the GPMT, all six groups of animals sensitized with one of these chemicals cross-reacted to four other derivatives. Specifically, the five groups that have a common basic PADPA structure, that is PADPA, DPPD, IPPD, DMBPPD, and MHPPD, all reacted to each other at almost the same scores, while none of them reacted to PPD.
Conclusions: The cross-reactivity profile found in the study was to some extent different from that in previous human data, where distinction between cross-reaction and concomitant primary sensitization is not always clear.
abstract_id: PUBMED:24603516
The local lymph node assay in 2014. Toxicology endeavors to predict the potential of materials to cause adverse health (and environmental) effects and to assess the risk(s) associated with exposure. For skin sensitizers, the local lymph node assay was the first method to be fully and independently validated, as well as the first to offer an objective end point with a quantitative measure of sensitizing potency (in addition to hazard identification). Fifteen years later, it serves as the primary standard for the development of in vitro/in chemico/in silico alternatives.
abstract_id: PUBMED:22511117
The local lymph node assay (LLNA). The murine local lymph node assay (LLNA) is a widely accepted method for assessing the skin sensitization potential of chemicals. Compared with other in vivo methods in guinea pig, the LLNA offers important advantages with respect to animal welfare, including a requirement for reduced animal numbers as well as reduced pain and trauma. In addition to hazard identification, the LLNA is used for determining the relative skin sensitizing potency of contact allergens as a pivotal contribution to the risk assessment process. The LLNA is the only in vivo method that has been subjected to a formal validation process. The original LLNA protocol is based on measurement of the proliferative activity of draining lymph node cells (LNC), as determined by incorporation of radiolabeled thymidine. Several variants to the original LLNA have been developed to eliminate the use of radioactive materials. One such alternative is considered here: the LLNA:BrdU-ELISA method, which uses 5-bromo-2-deoxyuridine (BrdU) in place of radiolabeled thymidine to measure LNC proliferation in draining nodes.
abstract_id: PUBMED:12581276
The local lymph node assay: past, present and future. The local lymph node assay (LLNA) was developed originally as a method for the identification of chemicals that have the potential to cause skin sensitization and allergic contact dermatitis. The assay is based on an understanding that the acquisition of contact sensitization is associated with, and dependent upon, the stimulation by chemical allergens of lymphocyte proliferative responses in skin-draining lymph nodes. Those chemicals that provoke a defined level of lymph node cell (LNC) proliferation (a 3-fold or greater increase compared with concurrent vehicle controls) are classified as skin sensitizers. Following its original inception and development, the LLNA was the subject of both national and international interlaboratory collaborative trials, and of very detailed comparisons with other test methods and with human skin sensitization data. The assay has now been validated fully as a stand-alone test for the purposes of hazard identification. In recent years, there has been a growing interest also in the use of the LLNA to assess the potency of contact allergens and in risk assessment. There is reason to believe that the extent of skin sensitization achieved is associated with the vigour of LNC proliferation induced in draining nodes. Given this relationship, the relative potency of skin sensitizing chemicals is measured in the LLNA by derivation of an EC3 value, this being the concentration of chemical required to provoke a 3-fold increase in the proliferation of LNC compared with controls. Experience to date indicates that relative potency as determined using this approach correlates closely with what is known of the activity of skin sensitizing chemicals in humans. In this article, we review the development, evaluation and validation of the LLNA for the purposes of hazard identification, and the more recent application of the method for evaluation of potency in the context of risk assessment. In addition, we consider what new applications and modifications are currently being investigated.
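As a purely illustrative aside (not part of the abstract above), the EC3 derivation it describes amounts to locating the concentration at which the dose-response curve crosses a 3-fold stimulation index. A minimal sketch follows, with invented dose-response values and a simple linear interpolation between the two bracketing concentrations; the helper name and data are hypothetical:

```python
# Sketch: deriving an EC3-style value by linear interpolation between the two
# test concentrations whose stimulation indices (SI) bracket the 3-fold threshold.
def ec3(doses_percent, stimulation_indices, threshold=3.0):
    pairs = list(zip(doses_percent, stimulation_indices))
    for (d_lo, si_lo), (d_hi, si_hi) in zip(pairs, pairs[1:]):
        if si_lo < threshold <= si_hi:
            return d_lo + (threshold - si_lo) * (d_hi - d_lo) / (si_hi - si_lo)
    return None  # the response never reached the threshold

doses = [0.5, 1.0, 2.5, 5.0]  # % w/v, hypothetical
sis = [1.4, 2.1, 3.8, 6.2]    # hypothetical stimulation indices
print(f"EC3 = {ec3(doses, sis):.2f}%")  # concentration expected to give a 3-fold response
```

On these invented numbers the estimate falls at roughly 1.8%, between the two doses whose responses straddle the threshold, which is the sense in which lower EC3 values indicate more potent sensitizers.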
abstract_id: PUBMED:8384542
Evaluation of contact sensitivity of rubber chemicals using the murine local lymph node assay. The sensitizing abilities of 4 rubber additives, tetramethylthiuram disulfide (TMTD), 2-mercaptobenzothiazole (MBT), N-isopropyl-N'-phenyl-p-phenylenediamine (IPPD) and zinc diethyldithiocarbamate (ZDEC), were evaluated using the murine local lymph node assay. Exposure to IPPD induced a significant increase of lymph node cell proliferation in the draining lymph nodes even at a low concentration. Exposure to TMTD and MBT induced moderate proliferative responses, while ZDEC induced a weak proliferation even at the higher concentrations. The sensitizing potency of each chemical was described in terms of the concentration that increased lymph node cell proliferation by a factor of 2 over that in the vehicle-treated control group. The concentrations of IPPD, TMTD, MBT and ZDEC were 0.14%, 0.35%, 4.2% and > 10%, respectively.
abstract_id: PUBMED:19922754
Statistical evaluation of the Local Lymph Node Assay. In the Local Lymph Node Assay, measured endpoints for each animal, such as cell proliferation, cell counts and/or lymph node weight, should be evaluated separately. The primary criterion for a positive response is that the estimated stimulation index is larger than a specified relative threshold that is endpoint- and strain-specific. When the lower confidence limit for ratio-to-control comparisons is larger than a relevance threshold, a biologically relevant increase can be concluded according to the proof of hazard. Alternatively, when the upper confidence limit for ratio-to-control comparisons is smaller than a tolerable margin, harmlessness can be concluded according to a proof of safety.
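As a purely illustrative aside (not drawn from the abstract above), the decision rule it describes can be sketched in a few lines of Python. The per-animal counts, group sizes and relevance threshold below are invented, and the confidence interval is a simple log-scale t-approximation rather than whatever specific procedure the authors recommend:

```python
# Sketch: stimulation index (SI) as the treated/control ratio of group means,
# with an approximate CI computed on log-transformed per-animal counts; the
# lower confidence limit is then compared with a relevance threshold, mirroring
# the "proof of hazard" logic described in the abstract.
from math import exp, log, sqrt
from statistics import mean, variance
from scipy.stats import t

def ratio_with_ci(treated, control, alpha=0.05):
    lt, lc = [log(x) for x in treated], [log(x) for x in control]
    diff = mean(lt) - mean(lc)
    se = sqrt(variance(lt) / len(lt) + variance(lc) / len(lc))
    df = len(lt) + len(lc) - 2  # simple choice; Welch's df would also be reasonable
    half = t.ppf(1 - alpha / 2, df) * se
    return exp(diff), exp(diff - half), exp(diff + half)

control = [950, 1100, 1020, 880, 990]     # hypothetical dpm counts, 5 mice
treated = [3400, 2900, 3800, 3100, 3300]  # hypothetical dpm counts, 5 mice

si, lo, hi = ratio_with_ci(treated, control)
relevance_threshold = 3.0  # endpoint- and strain-specific in practice
print(f"SI = {si:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
print("biologically relevant increase" if lo > relevance_threshold else "relevance not demonstrated")
```

On these invented counts the point estimate exceeds 3 while the lower confidence limit does not, which is exactly the distinction the abstract draws between a positive stimulation index and a confidence-limit-based proof of hazard.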
abstract_id: PUBMED:1998208
The murine local lymph node assay: results of an inter-laboratory trial. The local lymph node assay is a novel predictive test for the identification of contact allergens. The collaborative study reported here was performed to evaluate the reliability of the method when performed in independent laboratories. Eight chemicals were examined in each of 4 participating laboratories and results compared with predictions of skin-sensitizing activity made from concurrent Magnusson and Kligman guinea-pig maximization tests performed in a single laboratory. The local lymph node assay has as its theoretical basis the fact that contact allergens induce T-lymphocyte proliferative responses. In practice, predictions of contact-sensitizing potential are made following measurement of proliferation in lymph nodes draining the site of exposure to chemical, and derivation of a stimulation index using control values as the comparator. Although in the present study there was some variation between laboratories with respect to the absolute stimulation indices recorded, it was found that with all chemicals each laboratory made the same predictions of sensitizing activity. Six chemicals (2,4-dinitrochlorobenzene, formalin, eugenol, isoeugenol, p-phenylenediamine and potassium dichromate) yielded positive responses, and two (methyl salicylate and benzocaine) were negative, in each laboratory. Furthermore, with 7 of the 8 chemicals tested there was no significant difference between laboratories in terms of the characteristics of the dose-response relationships recorded. With the exception of one chemical (benzocaine), predictions made with the local lymph node assay were in accord with those derived from guinea-pig maximization tests. These inter-laboratory comparisons demonstrate that the local lymph node assay is a robust and reliable method for the identification of at least moderate and strong contact allergens.
Answer: The study investigating the proliferative responses in the local lymph node assay (LLNA) associated with concomitant exposure to 1,4-phenylenediamine (PPD) and methyldibromo glutaronitrile (MDBGN) found that the lymph node cell proliferation associated with combined exposure was only modestly increased relative to that predicted from a simple summation of their individual responses. Although the increase in response was very modest, it does suggest a relationship between this combination of sensitizers that may not be simply additive in terms of their ability to stimulate proliferative responses in draining lymph nodes. However, the study concludes that the reproducibility of this observation should be confirmed in future studies with additional pairs of contact allergens to ascertain whether or not this represents evidence of synergy (PUBMED:18759875).
Instruction: Are pre-hospital deaths from accidental injury preventable?
Abstracts:
abstract_id: PUBMED:8173428
Are pre-hospital deaths from accidental injury preventable? Objective: To determine what proportion of pre-hospital deaths from accidental injury--deaths at the scene of the accident and those that occur before the person has reached hospital--are preventable.
Design: Retrospective study of all deaths from accidental injury that occurred between 1 January 1987 and 31 December 1990 and were reported to the coroner.
Setting: North Staffordshire.
Main Outcome Measures: Injury severity score, probability of survival (probit analysis), and airway obstruction.
Results: There were 152 pre-hospital deaths from accidental injury (110 males and 42 females). In the same period there were 257 deaths in hospital from accidental injury (136 males and 121 females). The average age at death was 41.9 years for those who died before reaching hospital, and their average injury severity score was 29.3. In contrast, those who died in hospital were older and equally likely to be males or females. Important neurological injury occurred in 113 pre-hospital deaths, and evidence of airway obstruction in 59. Eighty-six pre-hospital deaths were due to road traffic accidents, and 37 of these were occupants in cars. On the basis of the injury severity score and age, death was found to have been inevitable or highly likely in 92 cases. In the remaining 60 cases, death had not been inevitable, and airway obstruction was present in up to 51 patients with injuries that they might have survived.
Conclusion: Death was potentially preventable in at least 39% of those who died from accidental injury before they reached hospital. Training in first aid should be available more widely, and particularly to motorists as many pre-hospital deaths that could be prevented are due to road accidents.
abstract_id: PUBMED:28363752
Are prehospital deaths from trauma and accidental injury preventable? A direct historical comparison to assess what has changed in two decades. Background & Objectives: In 1994, Hussain and Redmond revealed that up to 39% of prehospital deaths from accidental injury might have been preventable had basic first aid care been given. Since then there have been significant advances in trauma systems and care. The exclusion of prehospital deaths from the analysis of trauma registries, given the high rate of those, is a major limitation in prehospital research on preventable death. We have repeated the 1994 study to identify any changes over the years and potential developments to improve patient outcomes.
Methods: We examined the full Coroner's inquest files for prehospital deaths from trauma and accidental injury over a three-year period in Cheshire. Injuries were scored using the Abbreviated-Injury-Scale (AIS-1990) and Injury Severity Score (ISS), and probability of survival estimated using Bull's probits to match the original protocol.
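As a purely illustrative aside (not taken from either study), the AIS/ISS scoring both papers rely on can be sketched briefly. The casualty below is invented, and the function assumes the standard rule of summing the squares of the three highest AIS grades from different body regions, with any AIS of 6 scored as the maximum ISS of 75:

```python
# Sketch: Injury Severity Score (ISS) from the highest AIS grade per body region.
def injury_severity_score(region_ais):
    """region_ais maps an ISS body region to the highest AIS grade (0-6) recorded there."""
    grades = [g for g in region_ais.values() if g > 0]
    if any(g == 6 for g in grades):
        return 75  # an unsurvivable injury sets the ISS to its maximum
    top_three = sorted(grades, reverse=True)[:3]
    return sum(g * g for g in top_three)

# Hypothetical casualty: severe head injury, serious chest injury, moderate limb injury
example = {"head_neck": 5, "chest": 4, "extremities": 2, "abdomen": 0}
print(injury_severity_score(example))  # 25 + 16 + 4 = 45
```

Scores of this kind, combined with age, feed the probability-of-survival estimates that both abstracts use to separate deaths that were inevitable from those that were potentially preventable.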
Results: One hundred and thirty-four deaths met our inclusion criteria; 79% were male and the average age at death was 53.6 years. Sixty-two were found dead (FD), fifty-eight died at scene (DAS) and fourteen were dead on arrival at hospital (DOA). The predominant mechanism of injury was fall (39%). The median ISS was 29, with 58 deaths (43%) having a probability of survival of >50%. Post-mortem evidence of head injury was present in 102 (76%) deaths. A bystander was on scene or present immediately after injury in 45% of cases and prior to the Emergency Medical Services (EMS) in 96%. In 93% of cases a bystander made the call for assistance; in those DAS or DOA, bystander intervention of any kind was reported in 43%.
Conclusions: The number of potentially preventable prehospital deaths remains high and unchanged. First aid intervention of any kind is infrequent. There is a potentially missed window of opportunity for bystander intervention prior to the arrival of the ambulance service, with simple first-aid manoeuvres to open the airway, preventing hypoxic brain injury and cardiac arrest.
abstract_id: PUBMED:21309208
Patterns of pre hospital fatal injuries in Mekelle, Ethiopia. Objective: To assess the pattern of pre hospital fatal injuries and determine the age, sex and types of injury for all victims brought dead to Mekelle hospital
Materials And Methods: A retrospective descriptive audit of all victims brought dead to Mekelle hospital due to trauma was carried out to assess the magnitude and pattern of pre hospital fatal injuries between September 1, 2004 to Jun. 30, 2006. These include all victims brought dead following road traffic accidents, suicidal, homicidal, drowning, burns and also deaths from other accidental causes. The source and the study group are (N = 120 deaths) involved in accidental injuries. Case notes were obtained from written police reports and medical records office and were analyzed for age, sex and type of injury.
Results: There were 120 deaths caused by fatal accidents before arrival to Mekelle hospital. There were 95 (79.2%) male and 25 (20.8%) female victims. Peak incidence of pre hospital fatal deaths were in the 10-29 year age group accounting for 54.9% of the cases. In the study the median age was 25 years and the range from 4 to 70 years. Road traffic accident was the leading causes of fatal injuries, accounting for 46.6 % of all fatal deaths followed by physical assaults (28.3%), suicidal hanging (10.8%), electrical burns (5.0%) and bullet injuries (4.1%).
Conclusion: Overall this study has shown that pre hospital fatal injuries commonly occurred in the younger age group. Majority were caused by road traffic accidents with males predominantly affected
abstract_id: PUBMED:37318061
Five-year review (2014-2019) of paediatric accidental deaths in Cook County, Illinois (USA). In the USA, intentional and accidental injuries are the most frequent causes of death in children. Many of these deaths could be avoided through preventive measures, and aetiological studies are needed to reduce fatalities. The leading causes of accidental death vary by age. We analysed all paediatric accidental deaths recorded by a busy urban Medical Examiner's Office in Chicago, Illinois (USA). We searched the electronic database for accidental deaths in children aged under 10 between 1 August 2014 and 31 July 2019. 131 deaths were identified with a preponderance of males and African Americans. This is consistent with ratios of other deaths recorded for this age group (during the same period and area). The leading causes of death were asphyxia due to an unsafe sleeping environment (in subjects <1-year-old), and road traffic accidents/drowning (in subjects >1-year-old). Behaviours, risk factors and environments most likely to contribute to fatal injuries are discussed. Our study highlights the role of forensic pathologists and medico-legal death investigators who identify the causes and circumstances surrounding these deaths. The research results may help from an epidemiological perspective to implement age-specific preventive strategies.
abstract_id: PUBMED:10770304
Injury-related deaths among women aged 10-50 years in Bangladesh, 1996-97. Background: Few studies have examined injury-related deaths among women in Bangladesh. We did a case-finding study to identify causes and the impact of intentional and unintentional injury-related deaths among women aged 10-50 years in Bangladesh.
Methods: Between 1996 and 1997, health care and other service providers at 4751 health facilities throughout Bangladesh were interviewed about their knowledge of deaths among women aged 10-50 years. In addition, at all public facilities providing inpatient service, medical records of women who died during the study period were reviewed. The reported circumstances surrounding each death were carefully reviewed to attribute the most likely cause of death.
Findings: 28,998 deaths among women aged 10-50 years were identified in our study, and, of these, 6610 (23%) were thought to be caused by intentional or unintentional injuries. About half (3317) of the injury deaths were attributable to suicide, 352 (5%) to homicide, 1100 (17%) to accidental injuries, and the intent was unknown for 1841 (28%) deaths. The unadjusted suicide rate was higher in the Khulna administrative division (27.0 per 100,000) than in the other four administrative divisions of Bangladesh (range 3.5-11.3 per 100,000). Poisoning (n=3971) was the commonest cause of injury-related death--60% of all injury deaths (6610) and 14% of all deaths (28,998). Other common causes of injury deaths in order of frequency were hanging or suffocation, road traffic accidents, burns, drowning, physical assault, firearm or sharp instrument injury, and snake or animal bite.
Interpretation: Intentional and unintentional injuries are a major cause of death among women aged 10-50 years in Bangladesh. Strategies to reduce injury-related deaths among women need to be devised.
abstract_id: PUBMED:10793781
Deaths from unintentional injuries in rural areas of the Islamic Republic of Iran. Deaths from accidental injury in the rural areas of 13 provinces in the Islamic Republic of Iran from 1993 to 1994 were investigated. The crude mortality rate was 4.33 per 1000 and the number of deaths from unintentional injuries was 5213 (10.7% of all deaths). There were more deaths among males than females (65.7 per 100,000 versus 26.1 per 100,000). After the age of 1 year, those aged over 65 years had the highest rate of deaths resulting from injuries (111.9 per 100,000). The leading causes of death were traffic accidents (55.0%), drowning (10.1%), falls (9.5%) and burns and scalding (9.5%). Since most injuries are preventable, their reduction should be considered a priority.
abstract_id: PUBMED:7125686
Two-year study of the causes of postperinatal deaths classified in terms of preventability. A detailed pathological and psychosocial study was made of all postperinatal (8 days-2 years) deaths in Sheffield during a 2-year period. The cause of death was classified from the point of view of possible prevention. Of the total of 65 deaths, 35 were unpreventable after the perinatal period, but 9 might have been preventable before birth. Of the 30 other deaths, 20 had evidence of possible treatable disease, and for the majority of these adverse social factors could be identified. Proved non-accidental injury occurred in 2 children and in 3 others there was a high degree of suspicion of 'gentle battering'. Only in 4 children was death unexplained and this apparently represents the local true unexplained cot death rate of 0.16/1000 births.
abstract_id: PUBMED:30451103
Paediatric deaths in a tertiary government hospital setting, Malawi. Background: Malawi successfully achieved Millennium Development Goal (MDG) four by decreasing the under-5 mortality rate by two-thirds in 2012. Despite this progress, child mortality is still high, and in 2013 the leading causes of death in under-5s were malaria, acute respiratory infections and HIV/AIDS. Aims: To determine the causes of inpatient child death, including microbiological aetiologies, in Malawi. Methods: A prospective, descriptive study was undertaken in Queen Elizabeth Central Hospital over 12 months in 2015/2016. Data were collected for every paediatric death, covering HIV and nutritional status, cause of death, and microbiology. Deaths of inborn neonates were excluded. Results: Of 13,827 admissions, there were 488 deaths, giving a mortality rate of 3.5%. One-third of deaths (168) occurred in the first 24 h of admission and 255 after 48 h. Sixty-eight per cent of those who died (332) were under 5 years of age. The five leading causes of death were sepsis (102), lower respiratory tract infection (67), acute gastroenteritis with severe dehydration (51), malaria (37) and meningitis (34). The leading non-communicable cause of death was solid tumour (12). Of the 362 children with a known HIV status, 134 (37.0%) were HIV-infected or HIV-exposed. Of the 429 children with a known nutritional status, 93 had evidence of severe acute malnutrition (SAM). Blood cultures were obtained from 252 children; 51 (20.2%) grew pathogenic bacteria, with Klebsiella pneumoniae, Escherichia coli and Staphylococcus aureus being the most common. Conclusion: Despite a significant reduction in paediatric inpatient mortality in Malawi, infectious diseases remain the predominant cause. Abbreviations: ART: anti-retroviral therapy; Child PIP: Child Healthcare Problem Identification Programme; CCF: congestive cardiac failure; CNS: central nervous system; CoNS: coagulase-negative staphylococci; CSF: cerebrospinal fluid; DNA pcr: deoxyribonucleic acid polymerase chain reaction; ETAT: emergency triage assessment and treatment; LMIC: low- and middle-income countries; MDG: Millennium Development Goals; MRI: magnetic resonance imaging; MRSA: methicillin-resistant Staphylococcus aureus; NAI: non-accidental injury; NTS: non-typhi salmonella; PJP: Pneumocystis jiroveci pneumonia; PSHD: presumed severe HIV disease; QECH: Queen Elizabeth Central Hospital; RHD: rheumatic heart disease; RTA: road traffic accident; TB: tuberculosis; TBM: tuberculous meningitis; WHO: World Health Organization; SAM: severe acute malnutrition.
abstract_id: PUBMED:36529463
Occupational accidental injury deaths in Tokyo and Chiba prefectures, Japan: A 10-year study (2011-2020) of forensic institute evaluations. Occupational accidental injury deaths (OAIDs) are a major social problem, and the analysis of individual cases is important for developing injury prevention measures. In this study, OAIDs with autopsies performed at forensic facilities in the metropolitan area of Japan (Tokyo and Chiba prefectures) from 2011 to 2020 were reviewed. The epidemiological characteristics of these OAIDs (n = 136), which accounted for 13.5% of OAIDs reported in the region during the study period, were compared with those of non-occupational accidental injury deaths (non-OAID) cases (n = 3926). Among OAID cases, 134 (98.5%) were men and 13 (9.6%) were foreign-born workers, which was significantly more than in non-OAID cases (p < 0.001, respectively). OAIDs were most frequent in construction (39.0%) followed by the manufacturing category (21.3%). The percentage of OAIDs in workers aged 65 and over showed an increasing trend. Most accidents occurred just after the start of work or just before the workday ended, as well as during the peak months of the year. The most common type of accident was fall/crash from a height (25.0%), and the most common injury site was the chest; none of these cases were confirmed to have been wearing a safety belt properly. Among foreign-born workers, the most common type of accident was caught in/between. As the working population is expected to change in the future, and an increase in the number of older adults and foreign workers is expected, it is necessary to take preventive measures such as improving the work environment based on ergonomics and providing safety education.
abstract_id: PUBMED:37016022
Unintentional injury deaths among children under five in Hunan Province, China, 2015-2020. Injury is the most common cause of preventable morbidity and death among children under five. This study aimed to describe the epidemiological characteristics of injury-related mortality rates in children under five and to provide evidence for future preventive strategies. Data were obtained from the Under Five Child Mortality Surveillance System in Hunan Province, China, 2015-2020. Injury-related mortality rates with 95% confidence intervals (CI) were calculated by year, residence, gender, age, and major injury subtype (drowning, suffocation, traffic injuries, falls, and poisoning). Crude odds ratios (ORs) were also calculated to examine the association of epidemiological characteristics with injury-related deaths. The Under Five Child Mortality Surveillance System registered 4,286,087 live births, and a total of 22,686 under-five deaths occurred, including 7586 injury-related deaths (33.44% of all under-five deaths). The injury-related under-five mortality rate was 1.77‰ (95% CI 1.73-1.81). Injury-related deaths were mainly attributed to drowning (2962 cases, 39.05%), suffocation (2300 cases, 30.32%), traffic injuries (1200 cases, 15.82%), falls (627 cases, 8.27%), and poisoning (156 cases, 2.06%). The mortality rates due to drowning, suffocation, traffic injuries, falls, and poisoning were 0.69‰ (95% CI 0.67,0.72), 0.54‰ (95% CI 0.51,0.56), 0.28‰ (95% CI 0.26,0.30), 0.15‰ (95% CI 0.13,0.16), and 0.04‰ (95% CI 0.03,0.04), respectively. From 2015 to 2020, the injury-related mortality rates were 1.78‰, 1.77‰, 1.60‰, 1.78‰, 1.80‰, and 1.98‰, respectively, and showed an upward trend (χ2 for trend = 7.08, P = 0.01). The injury-related mortality rates were lower in children aged 0-11 months than in those aged 12-59 months (0.52‰ vs. 1.25‰, OR = 0.41, 95% CI 0.39-0.44), lower in urban than in rural areas (1.57‰ vs. 1.88‰, OR = 0.84, 95% CI 0.80-0.88), and higher in males than females (2.05‰ vs. 1.45‰, OR = 1.42, 95% CI 1.35-1.49). The number of injury-related deaths decreased with children's age. Injury-related deaths happened more frequently in cold weather (around February). Almost half (49.79%) of injury-related deaths occurred at home. Most (69.01%) children did not receive any treatment between injury and death, and in most (60.98%) of the injury-related deaths treatment was not received because it was too late to reach the hospital. The injury-related mortality rate was relatively high, and we have described its epidemiological characteristics. Several mechanisms have been proposed to explain these phenomena. Our study is of great significance for under-five child injury intervention programs to reduce injury-related deaths.
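As a purely illustrative aside (not part of the abstract, and with invented counts), a crude odds ratio of the kind reported above can be computed from a 2x2 table of deaths and survivors, with a log-based (Woolf) confidence interval:

```python
# Sketch: crude odds ratio with a Woolf (log-scale) 95% CI from a 2x2 table.
from math import exp, log, sqrt

def crude_odds_ratio(a, b, c, d, z=1.96):
    """a/b = deaths/survivors in the exposed group, c/d = deaths/survivors in the unexposed group."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# Hypothetical counts (not the surveillance data): boys vs. girls under five
or_, lo, hi = crude_odds_ratio(a=450, b=219_000, c=310, d=209_000)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A confidence interval that excludes 1 is what supports statements such as the male-to-female OR of 1.42 (95% CI 1.35-1.49) quoted in the abstract.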
Answer: Yes, pre-hospital deaths from accidental injury are often considered preventable to some extent. A retrospective study in North Staffordshire found that death was potentially preventable in at least 39% of those who died from accidental injury before they reached hospital, suggesting that wider training in first aid, especially for motorists, could help prevent many of these deaths (PUBMED:8173428). A similar study conducted two decades later in Cheshire echoed these findings, with the number of potentially preventable prehospital deaths remaining high and unchanged, and first aid intervention by bystanders before the arrival of emergency services being infrequent (PUBMED:28363752).
In Mekelle, Ethiopia, a study showed that pre-hospital fatal injuries commonly occurred in the younger age group, with road traffic accidents being the leading cause, indicating a need for preventive measures in traffic safety and emergency response (PUBMED:21309208). In Cook County, Illinois (USA), a study of paediatric accidental deaths highlighted the role of forensic pathologists and medico-legal death investigators in identifying causes and circumstances surrounding these deaths, which can inform age-specific preventive strategies (PUBMED:37318061).
Research in Bangladesh revealed that intentional and unintentional injuries are a major cause of death among women aged 10-50 years, suggesting the need for strategies to reduce injury-related deaths (PUBMED:10770304). In the Islamic Republic of Iran, the reduction of preventable injuries was considered a priority given the high number of deaths from unintentional injuries in rural areas (PUBMED:10793781).
Overall, these studies suggest that a significant proportion of pre-hospital deaths from accidental injuries could be prevented through improved first aid training, better safety measures, and timely intervention by bystanders and emergency services.
Instruction: Co-occurrence of intoxication during sex and sexually transmissible infections among young African American women: does partner intoxication matter?
Abstracts:
abstract_id: PUBMED:18771645
Co-occurrence of intoxication during sex and sexually transmissible infections among young African American women: does partner intoxication matter? Background: The co-occurrence of a behaviour (being intoxicated on alcohol/drugs during sex) with a disease outcome [laboratory-confirmed sexually transmissible infection (STI) prevalence] among young African American women and their male sex partners was studied.
Methods: A cross-sectional study was conducted. Recruitment and data collection occurred in three clinics located in a metropolitan city of the Southern USA. A total of 715 African American adolescent females (15-21 years old) were enrolled (82% participation rate). The primary outcome measure was the analysis of self-collected vaginal swabs using nucleic acid amplification assays for Trichomonas vaginalis, Chlamydia trachomatis, and Neisseria gonorrhoeae.
Results: After controlling for age and self-efficacy to negotiate condom use, young women's alcohol/drug use while having sex was not significantly associated with STI prevalence [adjusted odds ratio (AOR) = 1.29, 95% confidence interval (CI) = 0.90-1.83]. However, using the same covariates, the association between male partners' alcohol/drug use and sexually transmitted disease prevalence was significant (AOR = 1.44, 95% CI = 1.03-2.02). Young women reporting that their sex partners had been drunk or high while having sex (at least once in the past 60 days) were approximately 1.4 times more likely to test positive for at least one of the three assessed STIs.
Conclusion: Young African American women reporting a male sex partner had been intoxicated during sex were significantly more likely to have an STI. The nature of this phenomenon could be a consequence of women's selection of risky partners and lack of condom use possibly stemming from their intoxication or their partners' intoxication.
abstract_id: PUBMED:9498318
Sex-related alcohol expectancies as moderators of the relationship between alcohol use and risky sex in adolescents. Objective: Alcohol is frequently identified as a potential contributor to HIV-related sexual risk taking. Drawing on alcohol expectancy explanations for postdrinking behavior, the present study tested the hypothesis that adolescents who drink alcohol on a given occasion will be more likely to engage in sexual risk-taking behavior to the extent that they believe that alcohol disinhibits sexual behavior or promotes sexual risk taking.
Method: The combined effects of sex-related alcohol expectancies and alcohol use in sexual situations were investigated using interview data from a representative sample of 907 (476 male) sexually experienced adolescents (13 to 19 years) who had ever consumed alcohol.
Results: Regression analyses on a composite measure of risk taking revealed that, for two of the three intercourse occasions examined, alcohol use was associated with greater risk taking primarily among respondents who expected alcohol to increase risky sexual behavior.
Conclusions: The results lend support to expectancy theories of alcohol's effects on sexual risk taking and raise the possibility that providing overly simplistic warnings that "alcohol leads to risky sex" may paradoxically increase the likelihood that individuals will fail to act prudently when intoxicated. Preventive interventions might beneficially focus on weakening, rather than strengthening, individuals' expectancies with regard to the impact of alcohol on sexual behavior, so that self-protective behavior will be more likely to occur even during intoxication.
abstract_id: PUBMED:22099449
Selling sex in unsafe spaces: sex work risk environments in Phnom Penh, Cambodia. Background: The risk environment framework provides a valuable but under-utilised heuristic for understanding environmental vulnerability to HIV and other sexually transmitted infections among female sex workers. Brothels have been shown to be safer than street-based sex work, with higher rates of consistent condom use and lower HIV prevalence. While entertainment venues are also assumed to be safer than street-based sex work, few studies have examined environmental influences on vulnerability to HIV in this context.
Methods: As part of the Young Women's Health Study, a prospective observational study of young women (15-29 years) engaged in sex work in Phnom Penh, we conducted in-depth interviews (n = 33) to explore vulnerability to HIV/STI and related harms. Interviews were conducted in Khmer by trained interviewers, transcribed and translated into English and analysed for thematic content.
Results: The intensification of anti-prostitution and anti-trafficking efforts in Cambodia has increased the number of women working in entertainment venues and on the street. Our results confirm that street-based sex work places women at risk of HIV/STI infection and identify significant environmental risks related to entertainment-based sex work, including limited access to condoms and alcohol-related intoxication. Our data also indicate that exposure to violence and interactions with the police are mediated by the settings in which sex is sold. In particular, transacting sex in environments such as guest houses where there is little or no oversight in the form of peer or managerial support or protection, may increase vulnerability to HIV/STI.
Conclusions: Entertainment venues may also provide a high risk environment for sex work. Our results indicate that strategies designed to address HIV prevention among brothel-based FSWs in Cambodia have not translated well to street and entertainment-based sex work venues in which increasing numbers of women are working. There is an urgent need for targeted interventions, supported by legal and policy reforms, designed to reduce the environmental risks of sex work in these settings. Future research should seek to investigate sex work venues as risk environments, explore the role of different business models in mediating these environments, and identify and quantify exposure to risk in different occupational settings.
abstract_id: PUBMED:17288171
The comparative analysis: the occurrence of acute respiratory system infections and chronic diseases among active smokers and non-smokers. Cigarette smoking is a factor in numerous health problems. Inhaling smoke accelerates the development of arteriosclerosis and ischemic heart disease and markedly increases the risk of myocardial infarction. Toxic substances contained in the smoke induce inflammatory processes in the bronchial tree, which ultimately lead to destruction of the lungs. One way of preventing circulatory complications and stopping the inflammatory process in the lungs is to give up smoking. Over a three-year period a group of more than 1000 people (smokers and non-smokers) was examined and the occurrence of acute respiratory system infections and chronic diseases was analyzed. The study used a questionnaire prepared by the author, specialist examinations and medical reports. The results show that more and more women smoke as many cigarettes, and for as many years, as men. Both men and women who completed grammar school or university smoke more often than those with only an elementary level of education. People who smoke suffer more often from acute respiratory tract infections and visit their general practitioner more often. There are no statistically significant sex differences in the occurrence of chronic pulmonary and cardiovascular diseases. The results reflect changing attitudes to smoking in Polish society. The increase in cigarette consumption among women with higher education is worrying and poses a serious challenge for all medical staff.
abstract_id: PUBMED:35543444
Sexually transmitted infections in Poland in 2013-2018 in comparison to other European countries based on infectious diseases surveillance in Poland and in Europe. Purpose: The aim of the study was to assess the epidemiological situation of newly sexually transmitted infections in Poland in 2013-2018 in comparison to other European countries based on infectious diseases surveillance.
Material And Methods: Analysis of the epidemiological situation was based on aggregated data from MZ-56 reports on infectious diseases, infections and poisoning sent from Sanitary Inspections to NIPH NIHNRI. Case-based data for gonorrhoea were analyzed in relation to transmission route and first place of medical diagnosis between 2017-2018.
Results: Between 2013-2018 in Poland 8,436 syphilis cases were diagnosed (mean diagnosis rate was 3.66 per 100,000), 2,395 gonorrhoeae cases, whereas number of Chlamydia trachomatis infections from 2014 to 2018 were 1,179 cases. In this time the decrease of 26.2% in newly recognized gonorrhoea cases were observed, whereas the diagnosis rate for chlamydia was stable grew up: from 0.42 per 100,000 in 2014 year to 0.80 in 2018 year. Most STI cases were recognized among men: male to female ratio for syphilis was 5:1, for gonorrhoea 11:1, whereas for chlamydia there is reverse tendency, there are more cases registered among women (0.8:1).
Conclusion: There are lower STI diagnosis rates in Poland compare to European countries and there are visible big disproportion between number of cases among men and women. Distribution of cases in all voivodeships in Poland and often huge disproportion in the number of new cases between these voivodeships indicate on underreporting problem in Poland.
abstract_id: PUBMED:11029835
Prions, infections and confusions in the "transmissible" spongiform encephalopathies. The other evidence-based science. III. Review. Several neurological disorders share a pathological hallmark called spongiosis; they include Creutzfeldt-Jakob disease and its new variant, Gerstmann-Straussler-Scheinker syndrome and fatal familial insomnia in humans, and scrapie and bovine spongiform encephalopathy, among others, in animals. The etiological agent has been considered transmissible, hereditary or both. Curiously, this agent has no nucleic acids, is impossible to filter, is resistant to inactivation by chemical means, has not been cultured and cannot be observed by electron microscopy. These facts have led some researchers to claim that these agents resemble computer viruses. However, after almost fifty years of research it is still not possible to explain why and how such elements produce these diseases. On the contrary, during these years it has become clear that these entities, called slow viral infections, transmissible amyloidosis, transmissible dementia, transmissible spongiform encephalopathies or prion diseases, appear in individuals with genetic predispositions exposed to several worldwide immunological stressors. The possibility that prions are the consequence rather than the cause of these diseases in animals and man appears more credible by the day, and supports the suggestion that systematic intoxication due to pesticides, together with ingestion of mycotoxins produced mainly by molds such as Aspergillus, Penicillium or Fusarium, is the true etiology of these neurodegenerative disorders.
abstract_id: PUBMED:28929931
Changes over time in the risk of hospitalization for physical diseases among homeless men and women in Stockholm: A comparison of two cohorts. Aims: To follow-up hospitalization for physical diseases among homeless men and women compared with a control group from the general population. The study also investigated the changes in the difference between the homeless men and women and the general population over time by comparing two cohorts of homeless people (2000-2002 and 1996).
Methods: A total of 3887 people (24% women) who were homeless during the period 2000-2002 were compared with 11,661 people from the general population with respect to hospitalization for physical diseases and injuries (2000-2010). Indirect comparisons were used to compare the relative risk (RR) of hospitalization between the cohort of people who were homeless in 2000-2002 with a cohort of those who were homeless in 1996.
Results: Homeless people have an RR of being hospitalized for physical diseases twice that of the general population. The largest differences were found in skin diseases, infections, injury/poisoning and diseases of the respiratory system. Indirect comparison between people who were homeless in 2000-2002 and 1996 showed an increasing difference between young (18-35 years) homeless men and men in the control group (RR 1.32). The difference had also increased between homeless men and men in the control group for hospitalization for heart disease (RR 1.35), chronic obstructive pulmonary disease (RR 2.60) and poisoning (RR 1.89). Among women, the difference had decreased between homeless women and women in the control group for skin disease (RR 0.20) and injury/poisoning (RR 0.60). There was no significant difference between the sexes in the two homeless cohorts.
Conclusions: There was no improvement in excess hospitalization among homeless people over time. The difference between young homeless men and young men in the general population increased between 1996 and 2000-2002.
abstract_id: PUBMED:11901239
Imaging of white matter lesions. Magnetic resonance imaging (MRI) is very sensitive for the detection of white matter lesions (WML), which occur even in normal ageing. Intrinsic WML should be separated from physiological changes in the ageing brain, such as periventricular caps and bands, and from dilated Virchow-Robin spaces. Genuine WML are best seen with T2-weighted sequences such as long TR dual-echo spin-echo or FLAIR (fluid-attenuated inversion recovery); the latter has the advantage of easily separating WML from CSF-like lesions. Abnormal T2 signal in MRI is not specific, and can accompany any change in tissue composition. In the work-up of WML in small vessel disease, magnetic resonance angiography can be used to rule out (concomitant) large vessel disease, and diffusion-weighted MRI to identify new ischaemic lesions (amidst pre-existing old WML). The differential diagnosis of WML includes hereditary leukodystrophies and acquired disorders. The leukodystrophies that can present in adult age include metachromatic leukodystrophy, globoid cell leukodystrophy, adrenomyeloneuropathy, mitochondrial disorders, vanishing white matter, and cerebrotendinous xanthomatosis. These metabolic disorders typically present with symmetrical abnormalities that can be very diffuse, often with involvement of brainstem and cerebellum. Only the mitochondrial disorders tend to be more asymmetric and frequently involve the grey matter preferentially. Among the acquired white matter disorders, hypoxic-ischaemic causes are by far the most prevalent and without further clinical clues there is no need to even consider the next most common disorder, i.e. multiple sclerosis (MS). Among the nonischaemic disorders, MS is far more common than vasculitis, infection, intoxication and trauma. While vasculitis can mimic small vessel disease, MS has distinctive features with preferential involvement of the subcortical U-fibres, the corpus callosum, temporal lobes and the brainstem/cerebellum. Spinal cord lesions are very common in MS, but do not occur in normal ageing nor in small vessel disease.
abstract_id: PUBMED:28347165
Synthetic cathinones in Southern Germany - characteristics of users, substance-patterns, co-ingestions, and complications. Objective: To define the characteristics of synthetic cathinone users admitted to hospital including clinical and laboratory parameters and the complications of use.
Design: Retrospective single-center study of patients treated for acute cathinone intoxication and complications of cathinone use between January 2010 and January 2016.
Setting: A specialized clinical toxicology unit at an academic tertiary care center in Southern Germany serving a population of about 4 million.
Patients And Methods: 81 consecutive patients with laboratory-confirmed use of cathinones who presented for acute intoxication or complications of cathinone use were retrospectively analyzed.
Results And Conclusions: The patients were predominantly male (64%, 52/81) with a median age of 34 years. Sixty were admitted for signs of acute intoxication while 21 suffered from complications of cathinone use. 70% of acutely intoxicated patients had an increased creatine phosphokinase. Only a minority of patients presented with a sympathomimetic toxidrome. Three patients had infectious complications, 10 prolonged psychosis, 6 rhabdomyolyses and/or kidney failure, and two patients died. Based on presentations, cathinone use has increased, with the first cases seen in 2010. Opiates/opioids are the main co-ingested drugs of abuse. The pattern of cathinone use shifted from methylone in 2010/2011 to 3,4-methylenedioxypyrovalerone (MDPV) and 3-methylmethcathinone (3-MMC) in 2014/2015. We conclude that in our setting "typical" cathinone users are males in their thirties. They are seldom drug-naïve and regularly co-ingest illicit drugs. Preventive measures have to be tailored to these difficult-to-reach patients. Present efforts to educate young clubbers in their late teens may fail to reach the pertinent demographic.
abstract_id: PUBMED:12875511
Child health implications of worldwide urbanization. The upsurge of urbanization, often poorly planned and uncontrolled, has caused major impacts on human societies for at least two centuries. Urban environments and urbanized lifestyles have strong influences on health and well-being, including on infant and childhood populations in developed and underdeveloped countries, as well as among societies in developmental and environmental transition around the world. Urbanization will inevitably have significant impacts on the health of future generations. Notably, the health consequences of urbanized lifestyles are not confined to residents of cities and large towns but rather are becoming manifest in rapidly changing, previously traditional societies in rural and remote areas because globalization is altering infant feeding practices and the dietary habits and lifestyle patterns of their children. In underdeveloped countries, overcrowding and environmental pollution are huge problems that are exacerbated by undernutrition and infection, particularly respiratory and diarrheal diseases. In developed societies, other problems like injuries; poisonings; violence; drug abuse; exposure to chemical, biological, industrial, and atmospheric pollutants, including pesticides; sexually transmissible diseases; and 'lifestyle' diseases, including obesity and cardiovascular disease risk, are of great current and potential importance.
Answer: The study on the co-occurrence of intoxication during sex and sexually transmissible infections (STIs) among young African American women found that while the women's own alcohol/drug use during sex was not significantly associated with STI prevalence, the intoxication of their male sex partners was significantly associated with STI prevalence (PUBMED:18771645). After controlling for age and self-efficacy to negotiate condom use, young women who reported that their male sex partners had been drunk or high during sex were approximately 1.4 times more likely to test positive for at least one of the three assessed STIs (Trichomonas vaginalis, Chlamydia trachomatis, and Neisseria gonorrhoeae). This suggests that partner intoxication does matter and is a significant factor in the prevalence of STIs among young African American women. The nature of this phenomenon could be due to women's selection of risky partners and lack of condom use possibly stemming from their intoxication or their partners' intoxication.
Instruction: Is pretreatment hemoglobin level a predictor of complete response to salvage chemotherapy for recurrent platinum-pretreated ovarian carcinoma?
Abstracts:
abstract_id: PUBMED:14584657
Is pretreatment hemoglobin level a predictor of complete response to salvage chemotherapy for recurrent platinum-pretreated ovarian carcinoma? Purpose Of Investigation: The aim of this retrospective study was to correlate some patient characteristics at relapse, including also baseline hemoglobin levels, with complete response rate and survival following second-line chemotherapy for recurrent platinum-pretreated ovarian carcinoma.
Methods: The investigation was conducted on 63 patients who received salvage chemotherapy with different agents for clinically detectable recurrent ovarian carcinoma following initial surgery and first-line platinum-based chemotherapy. Some patient characteristics at relapse (patient age, serum CA 125 level, baseline hemoglobin level, number of recurrence sites, ascites, platinum-free interval, and treatment-free interval) were related to complete response rate to salvage chemotherapy and survival after recurrence. Median baseline hemoglobin level was 11.6 g/dl (range, 7.5-15.0 g/dl).
Results: Second-line chemotherapy obtained a complete response in 17 (27.0%) patients and a partial response in 11 (17.5%), whereas stable disease and progressive disease were detected in 19 (30.1%) and 16 (25.4%) patients, respectively. By univariate analysis, complete response rate was related to baseline hemoglobin level (p = 0.0019), platinum-free interval (p = 0.0012) and treatment-free interval (p = 0.0048). Multiple logistic regression showed that platinum-free interval (p = 0.0107) and baseline hemoglobin level (p = 0.0312) were independent predictors of complete response. Patients with baseline hemoglobin levels >11.6 g/dl had a 5.338-fold higher chance of obtaining a complete response when compared to those with lower hemoglobin values. The platinum-free interval was the only independent prognostic variable for survival after recurrence (p = 0.0141), whereas baseline hemoglobin level was not related to survival at either univariate or multivariate analysis.
Conclusions: Baseline hemoglobin level is an independent predictor of complete response to salvage chemotherapy in patients with recurrent platinum-pretreated ovarian carcinoma. Attention must be paid to anemia correction in these patients, with the aim of improving both the chance of response to salvage treatment and the quality of life.
abstract_id: PUBMED:17100820
Role of salvage cytoreductive surgery in the treatment of patients with recurrent ovarian cancer after platinum-based chemotherapy. Objectives: The role of cytoreductive surgery, which is well established in the primary treatment for epithelial ovarian cancer, is controversial in recurrent disease. The aim of this study was to assess the clinical benefit of salvage surgical cytoreduction in patients with recurrent ovarian cancer after platinum-based chemotherapy.
Methods: We conducted a retrospective analysis of 46 patients with recurrent epithelial ovarian cancer treated at our department between 1988 and 2003. Twenty-three patients underwent salvage cytoreductive surgery (cytoreductive group), and the other 23 patients were treated without surgery (control group).
Results: Patients in the cytoreductive group had a median survival of 41.7 months after recurrence, which was significantly longer than in the control group (18.8 months; P < 0.01). The duration of stay at home and the period during which oral intake was preserved were significantly longer in the cytoreductive group. In the cytoreductive group, survival was influenced by the residual disease after surgery (residual tumor diameter, <2 cm vs >2 cm; median survival, 50 months vs 35.2 months; P < 0.05). However, the number of recurrent sites (solitary vs multiple) and the lengths of treatment-free intervals after primary treatment (<6 months vs >6 months) showed no significant influence on survival.
Conclusions: The application of cytoreductive surgery might improve the prognosis of patients with recurrent ovarian cancer if the tumor is resectable. The preserved prognosis of platinum-resistant disease with a short treatment-free interval demonstrated in this study suggests that the concept of maximum cytoreduction might be introduced in the treatment of recurrent disease in the future.
abstract_id: PUBMED:38256699
Maintenance Chemotherapy in Patients with Platinum-Sensitive Relapsed Epithelial Ovarian Cancer after Second-Line Chemotherapy. (1) Background: Our aim was to evaluate the efficacy and adverse effects of maintenance chemotherapy in platinum-sensitive recurrent epithelial ovarian cancer after second-line chemotherapy. (2) Methods: A total of 72 patients from a single institute who had been diagnosed with platinum-sensitive recurrent ovarian cancer and had experienced either complete or partial response after six cycles of second-line chemotherapy were divided into a standard group (n = 31) with six cycles or a maintenance group (n = 41) with more than six cycles. We then compared patient characteristics and survival outcomes between these two groups. (3) Results: In all patients, after primary management for the first recurrence, the maintenance group showed worse survival outcomes. Patients who had not undergone either surgery or radiotherapy were divided into complete response and partial response groups after six cycles of chemotherapy. In patients with partial response, maintenance chemotherapy led to a significant improvement in progression-free survival (median, 3.6 vs. 6.7 months, p = 0.007), but no significant change in overall survival. The median cycle number of maintenance chemotherapy was four. (4) Conclusions: Maintenance chemotherapy may still play an important role in patients with platinum-sensitive recurrent ovarian cancer, particularly in selected patient groups.
abstract_id: PUBMED:9826459
Prolonged disease-free survival by maintenance chemotherapy among patients with recurrent platinum-sensitive ovarian cancer. Objective: The aim of this study was to determine the potential benefit and complications of prolonged salvage and maintenance chemotherapy among patients with recurrent epithelial ovarian cancer who achieve response to salvage chemotherapy.
Methods: Patients with recurrent platinum-sensitive epithelial ovarian cancer who were treated between 1982 and 1996 and achieved complete response to platinum-based salvage chemotherapy were offered prolonged (1 year) monthly salvage followed by maintenance (every 8 weeks) chemotherapy. Patients who accepted such treatment (n = 16) were compared to those who refused and discontinued therapy (n = 11) with regard to overall survival from time of initial diagnosis and overall and disease-free survival from time of recurrence. Chemotherapy-related toxicity in the study group was recorded. Survival curves were constructed according to the Kaplan and Meier method and survival curves were compared using the log-rank test.
Results: Patients in the study and control groups were similar with regard to age, stage, histology, grade, performance status, primary cytoreductive surgery, type of primary and salvage chemotherapy, and method of assessment of tumor response. The study group had a significantly longer disease-free interval from date of recurrence than the control group (median: 35.0 versus 6.0 months, respectively, P = 0.001). The study group had longer overall survival from date of recurrence than the control group. However, the difference did not achieve statistical significance (median: 119 versus 90 months, respectively, P = 0.056). There was no significant difference between the study group and the control group as to survival from date of initial diagnosis (median: 157 versus 124 months, respectively, P = 0.28). Chemotherapy-related toxicity was minimal.
Conclusions: Prolonged salvage and maintenance chemotherapy is a safe method of treatment that may extend disease-free interval among patients with platinum-sensitive recurrent epithelial ovarian cancer who achieve response to salvage chemotherapy. These preliminary results need to be confirmed by a larger prospective randomized trial.
abstract_id: PUBMED:16466988
Ifosfamide/mesna as salvage therapy in platinum pretreated ovarian cancer patients--long-term results of a phase II study. Purpose: Salvage chemotherapy in advanced ovarian cancer is not yet standardized.
Patients: Twenty-one consecutive patients progressing on or relapsing after previous platinum-containing treatment were eligible for treatment with ifosfamide 5 g/m² infused over a 24-hour period every 3 weeks in a Phase II trial. After an initial bolus of 1 g/m² of mesna, mesna was administered at a dosage of 5 g/m² concomitantly with ifosfamide, followed by additional dosages of 200 mg 3 times at 4-hour intervals after termination of the ifosfamide infusion.
Results: The rate of objective responses was 19 percent, with a 95% CI of 5.45-41.91 percent. One patient achieved a pathologic complete remission (pCR) and 3 patients a clinical partial remission (PR). Median time-to-progression was 3 months. One patient was a long-term survivor. Main toxicities according to NCI-CTC included Grade 4 neurotoxicity in one patient, Grade 3 gastrointestinal toxicity in 5 patients, Grade 3 infection in one patient, and Grade 3 and 4 leucopenia in 6 and 2 patients, respectively.
Conclusions: Monotherapy with ifosfamide represents an active regimen for salvage chemotherapy in advanced ovarian cancer patients progressing on or relapsing after previous platinum pretreatment, even yielding a long-term survivor.
abstract_id: PUBMED:21211276
Gemcitabine based combination chemotherapy, a new salvage regimen for recurrent platinum resistant epithelial ovarian cancer Objective: To evaluate the efficacy and toxicities of gemcitabine combined with ifosfamide and anthracycline chemotherapy for recurrent platinum resistant ovarian epithelial cancer.
Methods: Gemcitabine 800 mg/m² (days 1, 8), ifosfamide 1.5 g/m² (days 1-3), and adriamycin 40 mg/m² or epirubicin 60 mg/m² (day 1) or mitoxantrone 10 mg/m² (days 1, 8) were used in recurrent platinum-resistant/refractory ovarian cancer patients; the cycle was repeated at intervals of 21 to 28 days.
Results: A total of 60 patients received 172 cycles of combined chemotherapy. There were no cases of complete response, while partial response was observed in 22 patients (37%, 22/60), stable disease in 23 (38%, 23/60) and progression in 15 (25%, 15/60), giving a clinical benefit rate of 75% (45/60). The median progression-free survival was 7 months, and the median overall survival was 20 months. The main side effect was hematologic toxicity, with a leukopenia rate of 82% (49/60), of which grade III-IV accounted for 31% (15/49). Digestive reactions were all grade I-II, accounting for 42% (25/60).
Conclusion: The regimen of gemcitabine combined with ifosfamide and anthracycline is feasible, tolerable and effective in patients with recurrent platinum resistant/refractory epithelial ovarian cancer.
abstract_id: PUBMED:24769036
Oxaliplatin salvage for recurrent ovarian cancer: a single institution's experience in patient populations with platinum resistant disease or a history of platinum hypersensitivity. Objective: Hypersensitivity reactions can preclude platinum re-challenge for patients receiving second-line and higher carboplatin/cisplatin salvage therapy. The objective is to report our patient experience with oxaliplatin in recurrent or progressive epithelial ovarian (EOC), primary peritoneal (PPC) or fallopian tube cancer (FTC), including those with prior hypersensitivity reaction.
Methods: A single-institution retrospective review from 1995 to 2012 of patients receiving oxaliplatin for treatment of recurrent or progressive EOC, PPC, or FTC was performed. Data collected included patient demographics, diagnosis date, prior chemotherapy regimens, platinum-free interval(s), prior hypersensitivity reactions, oxaliplatin toxicity, length of therapy, disease response, and last follow-up. Those who received ≥1 cycle were included. A response to therapy was determined after ≥2 cycles.
Results: Forty-four patients were identified. All had prior carboplatin and 38.6% had prior cisplatin therapy. Twenty-three had a prior platinum hypersensitivity reaction. Patients received a median of 2 prior platinum-containing regimens and 5 chemotherapy lines prior to oxaliplatin exposure. One patient experienced grade 3 pain. No grade 4 toxicities occurred. No treatment delays for pancytopenia were noted. Nausea and dysesthesias were controlled medically and were not dose-limiting. No nephropathy or neuropathy progressed on oxaliplatin or was dose-limiting. Disease response was observed in 43.2%. Of the responders, 36.8% had a prior platinum hypersensitivity reaction. A median of 5 cycles of an oxaliplatin-containing regimen was given. Median follow-up was 15.5 months.
Conclusions: In our experience, oxaliplatin is well tolerated and should be considered for platinum re-challenge after hypersensitivity, even in patients with platinum-resistant disease, given a reasonable chance of response.
abstract_id: PUBMED:9514799
Response to salvage treatment in recurrent ovarian cancer treated initially with paclitaxel and platinum-based combination regimens. Objective: The aim of this study was to evaluate the response to salvage treatment in recurrent ovarian cancer treated initially with paclitaxel-based chemotherapy.
Methods: A retrospective review of patients with recurrent ovarian cancer treated with surgical debulking and paclitaxel-based chemotherapy was performed. All cases received second-line treatment with a response evaluated by clinical or surgical means. Data analysis was conducted using the SAS statistical package.
Results: Fifty cases of advanced stage disease were available for review. Patients received paclitaxel and cisplatin or carboplatin with a 72.0% response rate. The median time to recurrence after primary treatment was 6 months. Second-line treatment included cisplatin or carboplatin (50%), Taxol (10%), or lutetium (22%), an intraperitoneal radiolabeled monoclonal antibody targeted to TAG-72. A 52.0% clinical response to salvage treatment was detected. With a median follow-up of 7 months, 68.0% of patients had experienced recurrence or progression of their disease. The median time to second recurrence was 5 months. Cases sensitive to initial paclitaxel-containing chemotherapy responded to any of the salvage treatments more frequently than chemotherapy-resistant tumors (88.5% versus 11.5%, P < 0.05).
Conclusions: Recurrent ovarian cancer patients initially treated with paclitaxel-based chemotherapy frequently responded to salvage treatment. However, the duration of response was brief, and hospitalization for treatment-related side-effects was common. Tumor response to initial paclitaxel/platinum treatment was predictive of future response to second-line agents. Current salvage therapies appear to provide little benefit in cases of tumors resistant to primary chemotherapy.
abstract_id: PUBMED:9644684
Salvage therapy for ovarian cancer. Patients with epithelial ovarian cancer must receive optimal surgical care and state-of-the-art chemotherapy in the primary treatment setting. The salvage treatment of women with recurrent or persistent ovarian cancer remains a difficult task. A very small percentage of patients with platinum-sensitive, small-volume disease appear to achieve prolonged disease-free survival. The treatment of patients with larger-volume disease (> 0.5 cm) or platinum-resistant disease remains largely palliative. The plethora of available new agents has provided the physician with multiple options for salvage chemotherapy. Although cure in the salvage setting is not often achieved currently, palliative treatment allows many patients to live painfree, productive lives. Candidates for salvage therapy may be grouped into one of several categories, which reflect different prognoses for response. These categories include refractory disease (defined as progressive tumor during primary treatment), persistent disease (a partial response to primary therapy followed by elevated tumor markers or clinically evident persistent disease), and recurrent disease (initial complete response to primary therapy with subsequent relapse). Categorizing patients into these categories provides a systematic method for organizing the administration of salvage chemotherapy.
abstract_id: PUBMED:10907957
Ifosfamide plus oral etoposide salvage chemotherapy for platinum-resistant paclitaxel-pretreated ovarian cancer. Background: The prognosis of platinum resistant ovarian cancer is very poor and the treatment of choice has not been clearly defined.
Patients And Methods: We conducted a phase II study with the combination of ifosfamide i.v. at 2.25 g/m² (days 1, 2) and etoposide per os at 100 mg daily (days 1-10) every four weeks. To be eligible for the study, patients had to be platinum-resistant and paclitaxel-pretreated.
Results: Forty-one patients entered the study. The median interval from the previous chemotherapy was 3.9 months. The median number of previous chemotherapeutic regimens was 2. Severe toxicities included neutropenia (41% of patients), leukopenia (29%) and thrombocytopenia (13%). Thirty-five patients were assessable for response. Nine patients responded (22% of the eligible, 26% of the assessable); four of them demonstrated complete response to chemotherapy (10% and 12%, respectively), while three patients demonstrated stabilization of their progressive disease. After a median follow-up of 18 months, time to progression was 3 months (range 0.9-14.4), duration of response was 9 months (2.5-11) and median survival was 13 months (2.5-37.4+).
Conclusions: The combination of ifosfamide with oral etoposide appears to have significant but manageable toxicity and encouraging efficacy in platinum resistant ovarian cancer.
Answer: Yes, pretreatment hemoglobin level is a predictor of complete response to salvage chemotherapy for recurrent platinum-pretreated ovarian carcinoma. A retrospective study found that baseline hemoglobin level was an independent predictor of complete response to salvage chemotherapy in these patients. Specifically, patients with baseline hemoglobin levels greater than 11.6 g/dl had a 5.338-fold higher chance of obtaining a complete response compared to those with lower hemoglobin values (PUBMED:14584657).
Instruction: Do parents play different roles in drinking behaviours of male and female adolescents?
Abstracts:
abstract_id: PUBMED:25877273
Do parents play different roles in drinking behaviours of male and female adolescents? A longitudinal follow-up study. Objective: Gender differences in the associations between adolescent drinking behaviour, and perceived parental drinking behaviours and attitudes towards underage drinking, were investigated.
Methods: Data were drawn from two cohorts in the Child and Adolescent Behaviours in Long-term Evolution project. We used data from 2009 and 2006, when cohorts 1 and 2, respectively, were in grade 9. No cohort effect was found, so the two cohorts were pooled; 3972 students (1999 boys and 1973 girls) participated in the study. The major variables included adolescent drinking behaviours over the last month, and perceived parental drinking behaviours and parental attitudes towards underage drinking. The effects of the combination of parental drinking behaviours and attitudes on the drinking behaviours of male and female adolescents were analysed by logistic regression.
Results: The drinking behaviour of boys was correlated with the drinking behaviours and attitudes of their fathers but not with those of their mothers. Among boys, having a non-drinking father who was against underage drinking (OR=0.27, 95% CI 0.16 to 0.46), a non-drinking father who was favourable towards underage drinking (OR=0.61, 95% CI 0.39 to 0.94), or a drinking father who was against underage drinking (OR=0.44, 95% CI 0.23 to 0.85) significantly decreased the likelihood of alcohol consumption, whereas maternal behaviour and attitude were not significant influences. Among girls, having a non-drinking father who was against underage drinking (OR=0.52, 95% CI 0.30 to 0.91) or a non-drinking father who was favourable towards underage drinking (OR=0.51, 95% CI 0.32 to 0.83) significantly decreased the likelihood of alcohol consumption, as did having a non-drinking mother who was against underage drinking (OR=0.23, 95% CI 0.09 to 0.60).
Conclusions: The influences of fathers and mothers on the drinking behaviour of their adolescent children differed by offspring gender.
abstract_id: PUBMED:32934504
Drinking with parents: Different measures, different associations with underage heavy drinking? Aims: Is drinking with parents (DWP) likely to curb or to encourage adolescent heavy drinking? The few studies addressing this issue have arrived at contradictory conclusions, which may reflect that different measures of DWP have been used. We pursued this possibility, taking potential confounding related to parental alcohol-specific rule-setting and parenting style into account.
Method: Data stem from the Norwegian 2015 ESPAD survey of 15-16 year olds. Drinking with parents at the last drinking event and the frequency of DWP in the past year were assessed among those who had consumed alcohol (n = 1374). Severe drunkenness and binge drinking in the past month were the outcomes. Parental covariates were accounted for in Poisson regression models.
Results: One in five (21%) had been drinking with their parents the last time they consumed alcohol, and this DWP measure was strongly and inversely related to both drunkenness and binge drinking. Adolescents who reported no DWP episodes in the past year (61%) and those who reported 1-2 such episodes (30%) barely differed with respect to the two outcomes. More frequent DWP (9%) was significantly associated with an increased risk of heavy episodic drinking, but the statistical impact on severe drunkenness was no longer significant when adjusting for parental covariates.
Conclusions: Different measures of DWP were related differently to adolescent heavy drinking, indicating that studies based on DWP at the last drinking event are biased in favour of the view that adolescents may "learn" sensible drinking by consuming alcohol with their parents.
abstract_id: PUBMED:27411789
Lost in translation: a focus group study of parents' and adolescents' interpretations of underage drinking and parental supply. Background: Reductions in underage drinking will only come about from changes in the social and cultural environment. Despite decades of messages discouraging parental supply, parents perceive social norms supportive of allowing children to consume alcohol in 'safe' environments.
Methods: Twelve focus groups conducted in a regional community in NSW, Australia; four with parents of teenagers (n = 27; 70 % female) and eight with adolescents (n = 47; 55 % female). Participants were recruited using local media. Groups explored knowledge and attitudes and around alcohol consumption by, and parental supply of alcohol to, underage teenagers; and discussed materials from previous campaigns targeting adolescents and parents.
Results: Parents and adolescents perceived teen drinking to be a common behaviour within the community, but applied moral judgements to these behaviours. Younger adolescents expressed more negative views of teen drinkers and parents who supply alcohol than older adolescents. Adolescents and parents perceived those who 'provide alcohol' (other families) as bad parents, and those who 'teach responsible drinking' (themselves) as good people. Both groups expressed a preference for high-fear, victim-blaming messages that targeted 'those people' whose behaviours are problematic.
Conclusions: In developing and testing interventions to address underage drinking, it is essential to ensure the target audience perceive themselves to be the target audience. If we do not have a shared understanding of underage 'drinking' and parental 'provision', such messages will continue to be perceived by parents who are trying to do the 'right' thing as targeting a different behaviour and tacitly supporting their decision to provide their children with alcohol.
abstract_id: PUBMED:33202941
Factors Associated with Smoking and Drinking among Early Adolescents in Vanuatu: A Cross-Sectional Study of Adolescents and Their Parents. This cross-sectional study determined whether various factors, such as parental behavior, attitude, and knowledge and sibling and peer behaviors, were associated with smoking and drinking among early adolescents in the Republic of Vanuatu. For this purpose, logistic regression analysis was used to determine the relative importance of the factors as well as the influences of the parents/guardians, siblings, and peers. The participants consisted of 157 seventh- and eighth-grade adolescents (mean age = 13.3 years; 52.2% girls) and their parents/guardians, from three public schools in Vanuatu. According to the results, the proportions of smokers and drinkers among the adolescents were 12.7% each, while the majority of the parents/guardians disapproved of underage smoking and drinking. In addition, peer influences (i.e., regularly smoking and/or drinking and offering tobacco and/or alcohol) were significantly associated with ever smoking and drinking, whereas parental and sibling influences did not have a significant impact on ever smoking and drinking. In sum, being given tobacco or alcohol by peers had the strongest association with ever smoking and drinking among the adolescents in this study. Thus, future school-based intervention programs should focus on enhancing early adolescents' life skills, including the ability to resist offers of tobacco and/or alcohol from their peers.
abstract_id: PUBMED:32438735
Why are Spanish Adolescents Binge Drinkers? Focus Group with Adolescents and Parents. Binge drinking in adolescents is a worldwide public healthcare problem. The aim of this study was to explore perceptions about the determinants of binge drinking in Spanish adolescents from the perspective of adolescents and parents. A qualitative study was conducted during the 2014/2015 school year, based on the I-Change Model for health behaviour acquisition, using fourteen semi-structured focus groups with adolescents (n = 94) and four with parents (n = 19). Students had a low level of knowledge and risk perception and limited self-efficacy. Girls reported more parental control, and that society perceives them more negatively when they get drunk. Adolescents suggested focusing preventive actions on improving self-efficacy and self-esteem. Parents were permissive about alcohol drinking but rejected binge drinking. They offered alcohol to their children, mainly during celebrations. A permissive family environment, lack of control by parents, adolescents' low risk perception, low self-esteem and self-efficacy, as well as the increase of binge drinking in girls as part of the reduction of the gender gap, emerge as risk factors for binge drinking. Future health programmes aimed at reducing binge drinking should focus on enhancing motivational factors, self-esteem, and self-efficacy in adolescents; supervision and parental control; as well as pre-motivational factors by increasing knowledge and risk awareness, considering gender differences.
abstract_id: PUBMED:37788132
The role of psychosocial factors and biological sex on rural Thai adolescents' drinking intention and behaviours: A structural equation model. Aims: To examine the contributions of psychosocial factors (attitude towards drinking, perceived drinking norms [PDNs], perceived behavioural control [PBC]), and biological sex on drinking intention and behaviours among rural Thai adolescents.
Design: A cross-sectional study design.
Methods: In 2022, stratified by sex and grade, we randomly selected 474 rural Thai adolescents (Mage = 14.5 years; SD = 0.92; 50.6% male) from eight public district schools in Chiang Mai Province, Thailand, to complete a self-administered questionnaire. Structural equation modelling with the weighted least square mean and variance adjusted was used for data analysis.
Results: All adolescents' psychosocial factors contributed significantly to the prediction of drinking intention, which subsequently influenced their drinking onset, current drinking and binge drinking pattern in the past 30 days. PDNs emerged as the strongest psychosocial predictor of drinking intention, followed by PBC. Rural adolescents' drinking intention significantly mediated the relationship between all psychosocial factors and drinking behaviours either fully or partially. The path coefficient between drinking attitude and drinking intention was significantly different between males and females.
Conclusion: In contrast to previous studies that focus on adolescents' drinking attitudes, rural Thai adolescents' PDNs play a significant role in their drinking intention and, subsequently, their drinking onset and patterns. This nuanced understanding supports a paradigm shift towards targeting adolescents' perceived drinking norms as a means to delay their drinking onset and problematic drinking behaviours.
Impact: Higher levels of perceived drinking norms significantly increased drinking intention among adolescents. Minimizing adolescents' perceptions of favourable drinking norms and promoting their capacity to resist drinking, especially under peer pressure, are recommended, as part of nursing roles, as essential components of health education campaigns and future efforts to prevent underage drinking.
Patient Or Public Contribution: In this study, there was no public or patient involvement.
abstract_id: PUBMED:26381442
Does promoting parents' negative attitudes to underage drinking reduce adolescents' drinking? The mediating process and moderators of the effects of the Örebro Prevention Programme. Background And Aims: The Örebro Prevention Programme (ÖPP) was found previously to be effective in reducing drunkenness among adolescents [Cohen's d = 0.35, number needed to treat (NNT) = 7.7]. The current study tested the mediating role of parents' restrictive attitudes to underage drinking in explaining the effectiveness of the ÖPP, and the potential moderating role of gender, immigration status, peers' and parents' drinking and parent-adolescent relationship quality.
Design: A quasi-experimental matched-control group study with assessments at baseline, and at 18- and 30-month follow-ups.
Participants: Of the 895 target youths at ages 12-13 years, 811 youths and 651 parents at baseline, 653 youths and 524 parents at 18-month and 705 youths and 506 parents at 30-month follow-up participated in the study.
Measurements: Youths reported on their past month drunkenness, their parents' and peers' alcohol use and the quality of their relationship with parents. Parents reported on their attitudes to underage drinking.
Findings: The mediation analyses, using latent growth curve modeling, showed that changes in parents' restrictive attitudes to underage drinking explained the impact of the ÖPP on changes in youth drunkenness, which was reduced, and onset of monthly drunkenness, which was delayed, relative to controls. The mediation effect explained 57% and 45% of the effects on drunkenness and onset of monthly drunkenness, respectively. The programme effects on both parents' attitudes and youth drunkenness were similar across gender, immigrant status, parents' and peers' alcohol use and parent-youth relationship quality.
Conclusions: Increasing parents' restrictive attitudes to youth drinking appears to be an effective and robust strategy for reducing heavy underage drinking regardless of the adolescents' gender, cultural origin, peers' and parents' drinking and relationship quality with parents.
abstract_id: PUBMED:36554938
Preventive Health Behaviours among Adolescents and Their Parents during the COVID-19 Outbreak in the Light of the Health Beliefs Model. This article analysed the relationship between the preventive health behaviours of parents and teenagers during the COVID-19 outbreak, taking the Health Beliefs Model (HBM) as a point of reference. We assumed that parents' behaviours may be a cue to action for adolescents, looking at their preventive health behaviours regarding vaccination against COVID-19, as well as vaccination intention (among unvaccinated people); wearing protective masks where it is compulsory and where it is not obligatory; and maintaining physical distance and disinfecting hands in public places. The collected data were statistically analysed using the Statistica version 13.3 software package for advanced statistical data analysis. Descriptive statistics and correlation for non-parametric data (Spearman's correlation) were used. Research on a sample of 201 parents and their children revealed that young people engage in preventive behaviour less frequently than parents, but that the likelihood of such behaviour increases if they have a parent's cue to action. When formulating recommendations, we considered the gender of the surveyed parents, as the questionnaire was mainly completed by women, which may be an indicator of the unequal involvement in addressing the topic of the pandemic and preventive health behaviours, including attitudes towards vaccines.
abstract_id: PUBMED:37120817
Gender Differences in Clinical Characteristics and Lifestyle Behaviours of Overweight and Obese Adolescents. Background: Energy intake and energy expenditure are different in boys and girls, especially during the adolescent period, a critical period for the development of obesity. However, gender-specific lifestyle behaviours that may influence the development of obesity among adolescent have not received sufficient attention.
Aim: To determine gender differences between male and female overweight/obese adolescents concerning their clinical parameters and their dietary, sedentary and physical activity lifestyle behaviours.
Methods: From a total of 1036 secondary school students aged 10-17 years, BMI percentile for age and gender was used to identify overweight and obese individuals. These adolescents were then questioned on dietary, sedentary and physical activity lifestyle behaviours via a structured self-administered questionnaire.
Results: A total of 92 overweight/obese adolescents were identified. Female adolescents outnumbered male adolescents by 1.5 times. The male overweight/obese adolescents were significantly younger than their female counterparts (11.9 ± 1.0 years vs 13.2 ± 2.0 years, p=0.0001). Female overweight/obese adolescents were significantly heavier (67.1 ± 12.5 kg vs 59.6 ± 8.6 kg, p=0.003), with higher BMI (25.7 ± 3.7 kg/m² vs 24.0 ± 2.3 kg/m², p=0.012), and wider hip circumference (102.9 ± 9.0 cm vs 95.7 ± 6.7 cm, p=0.002). Regarding lifestyle behaviours, female overweight/obese adolescents consumed more fast foods compared to their male counterparts (p=0.012). In contrast, significantly more male overweight/obese adolescents were driven to and from school compared to female adolescents (p=0.028).
Conclusion: Gender differences exist between overweight/obese female and male adolescents. The females were older, heavier and consumed fast foods more frequently, while their male counterparts were younger and tended to engage in less physical exertion. These factors should be considered when planning adolescents' weight loss and prevention interventions.
abstract_id: PUBMED:31797418
Parents' perspective on their responsibilities with regard to adolescents' use of alcohol. Background: Parents have responsibilities when their adolescent offspring use alcohol, but little is known about their views on the topic. This knowledge would help healthcare organisations to develop practices to help parents meet their responsibilities.
Aim: The aim of this study was to describe parents' perceptions and experiences of their responsibilities and what support they needed to tackle underage drinking.
Design And Methods: This Finnish study was carried out in 2017. It used a qualitative descriptive method that involved carrying out semi-structured interviews with 20 parents of adolescents aged 14-15 years. The data were analysed using inductive content analysis.
Results: Parents described that it was their responsibility to prevent problems that could have a negative impact on their adolescents' well-being. When it came to their responsibilities with regard to adolescents using alcohol, parents saw these as guiding and protecting their child, being aware of how parents used alcohol themselves, the way they communicated with adolescents and the rules they set about experimenting with alcohol. Parents said that receiving collaborative support from peers, community and professionals working in health and social care services would be useful. This could help them to prevent adolescents using alcohol, tackle problems that occurred when adolescents drank and enable parents to play a responsible role in their local community.
Conclusion: Parents recognised that they had responsibilities to prevent alcohol having a negative impact on adolescents' well-being. They wanted to ensure that adolescents made sensible and responsible decisions when it came to using alcohol. Parents described that the most important form of support was peer support from other parents. The findings of this study can be used to inform the development of preventive healthcare services and support parents to meet their responsibilities with regard to tackling underage drinking.
Answer: Yes, parents do play different roles in the drinking behaviors of male and female adolescents. Studies have shown that the influence of fathers and mothers on the drinking behavior of their adolescent children differs by offspring gender. For instance, the drinking behavior of boys was found to be correlated with the drinking behaviors and attitudes of their fathers but not with those of their mothers. Among boys, having a non-drinking father who was against underage drinking significantly decreased the likelihood of alcohol consumption, whereas maternal behavior and attitude were not significant influences. On the other hand, among girls, having a non-drinking father who was against underage drinking or a non-drinking father who was favorable towards underage drinking significantly decreased the likelihood of alcohol consumption, as did having a non-drinking mother who was against underage drinking (PUBMED:25877273).
Additionally, different measures of drinking with parents (DWP) were related differently to adolescent heavy drinking, indicating that studies based on DWP at the last drinking event might be biased in favor of the view that adolescents may "learn" sensible drinking by consuming alcohol with their parents (PUBMED:32934504). Parents and adolescents have different perceptions of teen drinking and parental supply of alcohol, with both groups expressing a preference for high-fear, victim-blaming messages that target 'those people' whose behaviors are problematic (PUBMED:27411789).
Furthermore, peer influences were significantly associated with ever smoking and drinking among adolescents in Vanuatu, whereas parental and sibling influences did not have a significant impact on ever smoking and drinking (PUBMED:33202941). In Spain, a permissive family environment, lack of control by parents, adolescents' low-risk perception, low self-esteem and self-efficacy, as well as the increase of binge drinking in girls as part of the reduction of the gender gap, emerge as risk factors for binge drinking (PUBMED:32438735).
In rural Thai adolescents, psychosocial factors such as attitude towards drinking, perceived drinking norms, and perceived behavioral control, along with biological sex, contributed significantly to the prediction of drinking intention, which subsequently influenced their drinking onset, current drinking, and binge drinking pattern (PUBMED:37788132). The Örebro Prevention Programme (ÖPP) demonstrated that increasing parents' restrictive attitudes to youth drinking is an effective strategy for reducing heavy underage drinking regardless of the adolescents' gender, cultural origin, peers' and parents' drinking, and relationship quality with parents; the mediation analyses showed that changes in parents' restrictive attitudes to underage drinking explained the impact of the ÖPP on changes in youth drunkenness (PUBMED:26381442).
During the COVID-19 outbreak, research indicated that adolescents engage in preventive behavior less frequently than parents, but the likelihood of such behavior increases if they have a parent's cue to action. This suggests that parents' behaviors may influence adolescents' health behaviors, including those related to substance use (PUBMED:36554938).
Gender differences in clinical characteristics and lifestyle behaviors of overweight and obese adolescents were also observed, with female adolescents consuming more fast foods compared to their male counterparts, and male adolescents being driven to and from school more frequently than female adolescents, indicating less physical exertion. These gender-specific lifestyle behaviors may influence the development of obesity and potentially other health-related behaviors, including drinking (PUBMED:37120817).
Finally, parents recognized their responsibilities to prevent alcohol from negatively impacting adolescents' well-being. They saw their responsibilities as guiding and protecting their child, being aware of their own alcohol use, communication with adolescents, and setting rules about experimenting with alcohol. Parents expressed a desire for collaborative support from peers, community, and professionals to prevent and address problems related to adolescent drinking (PUBMED:31797418). This underscores the complex role parents play in shaping their children's attitudes and behaviors towards alcohol, which can differ based on the gender of the adolescent. |
Instruction: Does pet ownership in infancy lead to asthma or allergy at school age?
Abstracts:
abstract_id: PUBMED:32911877
Age Differences in Pet Sensitization by Pet Ownership. Objectives: The association between pet sensitization and pet ownership remains unclear. Therefore, we aimed to elucidate the association between pet sensitization and pet ownership by age.
Methods: We retrospectively reviewed 2,883 patients who visited our allergy clinic for nasal symptoms from January 2003 to December 2014, of whom 1,957 patients with data on skin-prick tests and questionnaire responses were included and divided into adults (age >19 years) and children (age ≤19 years). The association between pet sensitization and pet ownership was evaluated in both groups.
Results: Among children, dog and cat sensitization showed no associations with dog and cat ownership, respectively. However, among adults, dog sensitization was significantly associated with dog ownership (odds ratio [OR], 3.283; P<0.001), and cat sensitization with cat ownership (OR, 13.732; P<0.001). After adjustment for age, sex, familial history of allergy, sinusitis, diabetes mellitus, other pet ownership, and non-pet sensitization, significant associations remained between dog sensitization and dog ownership (adjusted OR [aOR], 3.881; P<0.001), and between cat sensitization and cat ownership (aOR, 10.804; P<0.001) among adults. Dog ownership did not show any association with allergic rhinitis, asthma, or atopic dermatitis, whereas atopic dermatitis had a significant association with cat ownership in adults (aOR, 4.840; P<0.001).
Conclusion: Pet ownership in adulthood increased the risk of pet sensitization. However, pet ownership was not associated with the prevalence of atopic disorders, regardless of age, except for atopic dermatitis and cat ownership in adults.
abstract_id: PUBMED:37779525
Saliva contact during infancy and allergy development in school-age children. Background: Parent-child saliva contact during infancy might stimulate the child's immune system for effective allergy prevention. However, few studies have investigated its relation to allergy development in school-age children.
Objective: We sought to investigate the relationship between parent-child saliva contact during infancy and allergy development at school age.
Methods: We performed a large multicenter cross-sectional study involving Japanese school children and their parents. The self-administered questionnaires including questions from the International Study of Asthma and Allergies in Childhood were distributed to 3570 elementary and junior high school children in 2 local cities. Data were analyzed for the relationship between saliva contact during infancy (age <12 months) and the risk of allergy development, specifically eczema, allergic rhinitis, and asthma.
Results: The valid response rate was 94.7%. The mean and median ages of the children were 10.8 ± 2.7 and 11 (interquartile range, 9-13) years, respectively. Saliva contact via sharing eating utensils during infancy was significantly associated with a lower risk of eczema (odds ratio, 0.53; 95% CI, 0.34-0.83) at school age. Saliva contact via parental sucking of pacifiers was significantly associated with a lower risk of eczema (odds ratio, 0.24; 95% CI, 0.10-0.60) and allergic rhinitis (odds ratio, 0.33; 95% CI, 0.15-0.73), and had a borderline association with the risk of asthma in school-age children.
Conclusions: Saliva contact during infancy may reduce the risk of developing eczema and allergic rhinitis in school-age children.
abstract_id: PUBMED:22952649
Does pet ownership in infancy lead to asthma or allergy at school age? Pooled analysis of individual participant data from 11 European birth cohorts. Objective: To examine the associations between pet keeping in early childhood and asthma and allergies in children aged 6-10 years.
Design: Pooled analysis of individual participant data of 11 prospective European birth cohorts that recruited a total of over 22,000 children in the 1990s. Exposure Definition: Ownership of only cats, dogs, birds, rodents, or cats/dogs combined during the first 2 years of life. Outcome Definition: Current asthma (primary outcome), allergic asthma, allergic rhinitis and allergic sensitization during 6-10 years of age.
Data Synthesis: Three-step approach: (i) Common definition of outcome and exposure variables across cohorts; (ii) calculation of adjusted effect estimates for each cohort; (iii) pooling of effect estimates by using random effects meta-analysis models.
Results: We found no association between furry and feathered pet keeping early in life and asthma in school age. For example, the odds ratio for asthma comparing cat ownership with "no pets" (10 studies, 11,489 participants) was 1.00 (95% confidence interval 0.78 to 1.28) (I² = 9%; p = 0.36). The odds ratio for asthma comparing dog ownership with "no pets" (9 studies, 11,433 participants) was 0.77 (0.58 to 1.03) (I² = 0%, p = 0.89). Owning both cat(s) and dog(s) compared to "no pets" resulted in an odds ratio of 1.04 (0.59 to 1.84) (I² = 33%, p = 0.18). Similarly, for allergic asthma and for allergic rhinitis we did not find associations regarding any type of pet ownership early in life. However, we found some evidence for an association between ownership of furry pets during the first 2 years of life and reduced likelihood of becoming sensitized to aero-allergens.
Conclusions: Pet ownership in early life did not appear to either increase or reduce the risk of asthma or allergic rhinitis symptoms in children aged 6-10. Advice from health care practitioners to avoid or to specifically acquire pets for primary prevention of asthma or allergic rhinitis in children should not be given.
abstract_id: PUBMED:11940064
Early, current and past pet ownership: associations with sensitization, bronchial responsiveness and allergic symptoms in school children. Background: Studies have suggested that early contact with pets may prevent the development of allergy and asthma.
Objective: To study the association between early, current and past pet ownership and sensitization, bronchial responsiveness and allergic symptoms in school children.
Methods: A population of almost 3000 primary school children was investigated using protocols of the International Study on Asthma and Allergies in Childhood (ISAAC). Allergic symptoms were measured using the parent-completed ISAAC questionnaire. Sensitization to common allergens was measured using skin prick tests (SPTs) and/or serum immunoglobulin E (IgE) determinations. Bronchial responsiveness was tested using a hypertonic saline challenge. Pet ownership was investigated by questionnaire. Current, past and early exposure to pets was documented separately for cats, dogs, rodents and birds. The data on current, past and early pet exposure were then related to allergic symptoms, sensitization and bronchial responsiveness.
Results: Among children currently exposed to pets, there was significantly less sensitization to cat (odds ratio (OR) = 0.69) and dog (OR = 0.63) allergens, indoor allergens in general (OR = 0.64), and outdoor allergens (OR = 0.60) compared to children who never had pets in the home. There was also less hayfever (OR = 0.66) and rhinitis (OR = 0.76). In contrast, wheeze, asthma and bronchial responsiveness were not associated with current pet ownership. Odds ratios associated with past pet ownership were generally above unity, and significant for asthma in the adjusted analysis (OR = 1.85), suggesting selective avoidance in families with sensitized and/or symptomatic children. Pet ownership in the first two years of life only showed an inverse association with sensitization to pollen: OR = 0.71 for having had furry or feathery pets in general in the first two years of life, and OR = 0.73 for having had cats and/or dogs in the first two years of life, compared to not having had pets in the first two years of life.
Conclusion: These results suggest that the inverse association between current pet ownership and sensitization and hayfever symptoms was partly due to the removal of pets in families with sensitized and/or symptomatic children. Pet ownership in the first two years of life only seemed to offer some protection against sensitization to pollen.
abstract_id: PUBMED:17157656
Influence of dog ownership and high endotoxin on wheezing and atopy during infancy. Background: Increased exposure to microbial products early in life may protect from development of atopic disorders in childhood. Few studies have examined the relationship of endotoxin exposure and pet ownership on atopy and wheezing during infancy.
Objective: Evaluate relationships among high endotoxin exposure, pet ownership, atopy, and wheezing in high-risk infants.
Methods: Infants (n = 532; mean age, 12.5 ± 0.8 months) with at least 1 parent with confirmed atopy were recruited. A complete medical history and skin prick testing to foods and aeroallergens were performed at age 1 year. House dust samples were analyzed for endotoxin.
Results: Prevalences of wheezing were not independently associated with dog or cat ownership or endotoxin levels. Percutaneous reactivity to at least 1 allergen was observed in 28.6% of infants. Univariate analyses showed significant associations of any wheezing, recurrent wheezing, and recurrent wheezing with an event, with daycare attendance, number of siblings, respiratory infections, maternal smoking, and history of parental asthma. Logistic regression adjusting for the latter variables showed that recurrent wheezing (odds ratio, 0.4; 95% CI, 0.1-0.9) as well as 2 other wheeze outcomes were significantly reduced in homes with high endotoxin exposure in the presence of 2 or more dogs.
Conclusion: Pet ownership or endotoxin did not independently modify aeroallergen sensitization or wheezing during infancy. However, high endotoxin exposure in the presence of multiple dogs was associated with reduced wheezing in infants.
Clinical Implications: A home environment with many dogs and high levels of endotoxin may be conducive to reduced wheezing in infancy.
abstract_id: PUBMED:12776448
Pet allergy: how important for Turkey where there is a low pet ownership rate. Exposure and sensitization to allergens derived from cats/dogs have been shown to represent an important risk factor for allergic respiratory diseases. So far, there has not been any study exploring cat/dog sensitization and related factors in our geographic location. The aim of this study was to determine sensitization to cats/dogs in a group of patients with rhinitis and/or asthma and to evaluate the relationship between current and childhood exposure and sensitivity to pets. Three hundred twelve consecutive subjects with asthma and/or rhinitis were included in the study and were asked to complete a questionnaire concerning past and current pet ownership and the presence of pet-related respiratory symptoms. After skin-prick tests were performed, subjects were allocated into three groups: group 1 (n = 103), subjects with nonatopic asthma; group 2 (n = 54), allergic rhinitis and/or asthma patients with pet allergy; group 3 (n = 155), allergic rhinitis and/or asthma patients without pet allergy. Pet hypersensitivity was detected in 54 of 209 atopic subjects (25.8%). There was no difference in the rates of past pet ownership among subjects with (29.6%) and without (23.8%) pet allergy. However, the rate of current pet ownership was higher in atopic patients with pet allergy (16.6%) than in nonatopic subjects (2.9%; p = 0.02). The prevalence of sensitization to pets in current owners (42.8%) was higher than the prevalence in patients who never had a pet (22.6%; p = 0.002; odds ratio, 2.67) and in those who owned a pet in childhood (28.2%; p = 0.038; odds ratio, 1.9). Thirteen subjects (13/54; 24%) described respiratory symptoms when exposed to cats and/or dogs. The rate of past pet ownership was similar in symptomatic and asymptomatic subjects with pet allergy (30.7% versus 29.2%; p > 0.05). The rate of current pet ownership was higher in symptomatic subjects than in asymptomatic subjects with pet sensitivity (38.4% versus 9.5%; p < 0.0001). Our data indicate that pet allergens have the potential to become an important source of indoor allergens in our population. Our findings also suggest that current pet ownership, but not childhood pet keeping, seems to be a risk factor for the development of sensitization to pets.
abstract_id: PUBMED:33914361
Molecular sensitization patterns in animal allergy: Relationship with clinical relevance and pet ownership. Background: In vitro diagnosis using single molecules is increasingly complementing conventional extract-based diagnosis. We explored in routine patients with animal allergy to what extent molecules can explain polysensitization and identify primary sensitizers and how individual IgE patterns correlate with previous pet ownership and clinical relevance.
Methods: Serum samples from 294 children and adults with suspect allergic rhino-conjunctivitis or asthma and a positive skin prick test to cat, dog and/or horse were tested by ImmunoCAP for IgE antibodies against eleven different allergens from cat (Fel d 1,2,4,7), dog (Can f 1,2,3,4,5,6) and horse (Equ c 1).
Results: Patients monosensitized to cat (40.8%) or dog (6.1%) showed simple IgE patterns dominated by Fel d 1 (93%) and Can f 5 (67%), respectively. Double-sensitization to cat+dog (25.9%), cat+horse (5.4%) and polysensitization (20.7%) was associated with an increasing prevalence of the cross-reactive lipocalins Fel d 4/Can f 6/Equ c 1 and Fel d 7/Can f 1. While these lipocalins were not reliable markers for genuine sensitization per se, comparison of sIgE levels may give a clue on the primary sensitizer. Sensitizations to dog appeared to result from cross-reactivity with cat in 48%, with half of these sensitizations lacking clinical relevance. Individual sensitization patterns strongly mirrored current or previous pet ownership with the exception of Fel d 1 which regularly caused sensitization also in non-owners.
Conclusions: Allergen components can reasonably illuminate the molecular basis of animal (poly)sensitization in the majority of patients and are helpful in distinguishing between primary sensitization and sometimes less relevant cross-reactivity.
abstract_id: PUBMED:15208601
Airborne cat allergen reduction in classrooms that use special school clothing or ban pet ownership. Background: Allergens from furred animals are brought to school mainly via clothing of pet owners. Asthmatic children allergic to cat have more symptoms when attending a class with many cat owners, and some schools allocate specific resources to allergen avoidance measures.
Objective: The aim of the current study was to evaluate the effect of school clothing or pet owner-free classes compared with control classes on airborne cat allergen levels and to investigate attitudes and allergic symptoms among the children.
Methods: Allergen measurements were performed prospectively in 2 classes with school clothing, 1 class of children who were not pet owners, and 3 control classes during a 6-week period in 2 consecutive years. Portable pumps and petri dishes were used for collection of airborne cat allergen, and a roller was used for sampling on children's clothes. Cat allergen (Fel d 1) was analyzed with enzyme-linked immunoassay and immunostaining. Both years, questionnaires were administered to the children.
Results: We found 4-fold to 6-fold lower airborne cat allergen levels in intervention classes compared with control classes. Levels of cat allergen were 3-fold higher on clothing of cat owners than of children without cats in control classes. Pet ownership ban seemed less accepted than school clothing as an intervention measure.
Conclusion: For the first time, it has been shown that levels of airborne cat allergen can be reduced by allergen avoidance measures at school by using school clothing or pet ownership ban, and that both measures are equally efficient. The clinical effect of these interventions remains to be evaluated.
abstract_id: PUBMED:25077415
Pet ownership is associated with increased risk of non-atopic asthma and reduced risk of atopy in childhood: findings from a UK birth cohort. Background: Studies have shown an inverse association of pet ownership with allergy but inconclusive findings for asthma.
Objective: To investigate whether pet ownership during pregnancy and childhood was associated with asthma and atopy at the age of 7 in a UK population-based birth cohort.
Methods: Data from the Avon Longitudinal Study of Parents and Children (ALSPAC) were used to investigate associations of pet ownership at six time points from pregnancy to the age of 7 with asthma, atopy (grass, house dust mite, and cat skin prick test) and atopic vs. non-atopic asthma at the age of 7 using logistic regression models adjusted for child's sex, maternal history of asthma/atopy, maternal smoking during pregnancy, and family adversity.
Results: A total of 3768 children had complete data on pet ownership, asthma, and atopy. Compared with non-ownership, continuous ownership of any pet (before and after the age of 3) was associated with 52% lower odds of atopic asthma [odds ratio (OR) 0.48, 95% CI 0.34-0.68]. Pet ownership tended to be associated with increased risk of non-atopic asthma, particularly rabbits (OR 1.61, 1.04-2.51) and rodents (OR 1.86, 1.15-3.01), comparing continuous vs. non-ownership. Pet ownership was consistently associated with lower odds of sensitization to grass, house dust mite, and cat allergens, but rodent ownership was associated with higher odds of sensitization to rodent allergen. Differential effects of pet ownership on atopic vs. non-atopic asthma were evident for all pet types.
Conclusions And Clinical Relevance: Pet ownership during pregnancy and childhood in this birth cohort was consistently associated with a reduced risk of aeroallergen sensitization and atopic asthma at the age of 7, but tended to be associated (particularly for rabbits and rodents) with an increased risk of non-atopic asthma. The opposing effects on atopy vs. non-atopic asthma might be considered by parents when they are deciding whether to acquire a pet.
abstract_id: PUBMED:15813806
Early childhood environment related to microbial exposure and the occurrence of atopic disease at school age. Background: There is a growing body of evidence that the early childhood environment with respect to day care attendance, older siblings, pet ownership, and early life airway infections may protect from developing atopic disease. Few studies have distinguished between atopic sensitization and symptoms, and none have evaluated independent contributions for all of these different environmental conditions.
Objective: Examine independent effects on atopic sensitization and symptoms of day care attendance, older siblings, pet ownership, and early infancy's airway disease.
Methods: A cross-sectional survey among 8-13-year-old school children with complete data for 1555 children.
Results: After adjustment for confounders, atopic sensitization occurred less frequently in children that had attended a day care centre (OR: 0.73, 95% CI: 0.55-0.98) or had a cat or dog before 2 years of age (OR: 0.78, 95% CI: 0.61-0.99). Having older siblings yielded a nonsignificant trend towards protection (OR: 0.88, 95% CI: 0.70-1.11). For symptoms, there was no relation with having older sibs, day care attendance and pet ownership, although there was a trend towards protection for the combination of atopy and symptoms. In contrast, children with doctors' treated airway disease before age 2, more frequently reported recent symptoms of wheeze, asthma, rhinitis, or dermatitis (all P < 0.05).
Conclusion: Early life environmental exposure to day care, or pets may protect against atopic sensitization. Protection against symptoms only occurred if atopic sensitization was present as well.
Answer: The relationship between pet ownership in infancy and the development of asthma or allergies at school age has been the subject of various studies, and the findings are somewhat mixed.
One study found no association between pet ownership in early childhood and asthma or allergies in children aged 6-10 years. This pooled analysis of data from 11 European birth cohorts involving over 22,000 children concluded that pet ownership early in life did not appear to either increase or reduce the risk of asthma or allergic rhinitis symptoms (PUBMED:22952649).
Another study that focused on the association between early, current, and past pet ownership and sensitization, bronchial responsiveness, and allergic symptoms in school children found that current pet ownership was associated with less sensitization to various allergens and fewer hay fever and rhinitis symptoms. However, wheeze, asthma, and bronchial responsiveness were not associated with current pet ownership. The study also suggested that the inverse association between current pet ownership and sensitization and hay fever symptoms might be partly due to the removal of pets in families with sensitized and/or symptomatic children (PUBMED:11940064).
A study that evaluated the influence of dog ownership and high endotoxin on wheezing and atopy during infancy found that pet ownership or endotoxin did not independently modify aeroallergen sensitization or wheezing during infancy. However, high endotoxin exposure in the presence of multiple dogs was associated with reduced wheezing in infants (PUBMED:17157656).
Lastly, a study from the UK birth cohort suggested that pet ownership during pregnancy and childhood was consistently associated with a reduced risk of aeroallergen sensitization and atopic asthma at the age of 7, but tended to be associated with an increased risk of non-atopic asthma, particularly for rabbits and rodents (PUBMED:25077415).
In summary, the evidence does not conclusively show that pet ownership in infancy leads to asthma or allergies at school age. While some studies suggest a protective effect against sensitization and atopic asthma, others indicate no significant association or a potential increase in non-atopic asthma. It appears that the relationship between pet ownership and the development of asthma and allergies is complex and may be influenced by various factors, including the type of pet, the presence of endotoxin, and individual and family characteristics. |
Instruction: Does delay in breast irradiation following conservative breast surgery in node-negative breast cancer patients have an impact on risk of recurrence?
Abstracts:
abstract_id: PUBMED:9531372
Does delay in breast irradiation following conservative breast surgery in node-negative breast cancer patients have an impact on risk of recurrence? Purpose: This retrospective review was conducted to determine if delay in the start of radiotherapy after definitive breast surgery had any detrimental effect on local recurrence or disease-free survival in node-negative breast cancer patients.
Methods And Materials: A total of 568 patients with T1-T2, N0 breast cancer were treated with breast-conserving surgery and breast irradiation, without adjuvant systemic therapy between January 1, 1985 and December 31, 1992, at the London Regional Cancer Centre. Adjuvant breast irradiation consisted either of 50 Gy in 25 fractions or 40 Gy in 15 or 16 fractions, followed by a boost of 10 Gy or 12.5 Gy to the lumpectomy site. The time intervals from definitive breast surgery to breast irradiation used for analysis were 0-8 weeks (201 patients), > 8-12 weeks (235 patients), > 12-16 weeks (91 patients), and > 16 weeks (41 patients). The time intervals of 0-12 weeks (436 patients) and > 12 weeks (132 patients) were also analyzed. Kaplan-Meier estimates of time to local recurrence and disease-free survival rates were calculated. The association between surgery-radiotherapy interval, age (< or = 40, > 40 years), tumor size (< or = 2, > 2 cm), Scarff-Bloom-Richardson (SBR) grade, resection margins, lymphatic vessel invasion, extensive intraductal component, and local recurrence and disease-free survival were investigated using Cox regression techniques.
Results: Median follow-up was 63.5 months. Patients in all 4 time intervals were similar in terms of age and pathologic features. There was no statistically significant difference between the 4 groups in local recurrence or disease-free survival with surgery-radiotherapy interval (p = 0.189 and p = 0.413, respectively). The 5-year freedom from local relapse was 95.4%. The crude local recurrence rate was 6.9%: 7.8% for the 436 patients treated within 12 weeks of surgery (median follow-up 67 months) and 3.8% for the 132 patients treated > 12 weeks from surgery (median follow-up 52 months). In a stepwise multivariable Cox regression model for disease-free survival, allowing for entry of known risk factors, tumour size (p < 0.001), grade (p < 0.001), and age (p = 0.048) entered the model, but the surgery-radiotherapy interval did not enter the model.
Conclusion: This retrospective study suggests that delay in start of breast irradiation beyond 12 and up to 16 weeks does not increase the risk of recurrence in node-negative breast cancer patients. The certainty of these results are limited by the retrospective nature of this analysis and the lack of information concerning the late local failure rate.
abstract_id: PUBMED:16246494
Eleven-year follow-up results in the delay of breast irradiation after conservative breast surgery in node-negative breast cancer patients. Purpose: This retrospective review was conducted to determine if delay in the start of radiotherapy after conservative breast surgery had any detrimental effect on local recurrence or disease-free survival in node-negative breast cancer patients.
Methods And Materials: A total of 568 patients with T1 and T2, N0 breast cancer were treated with breast-conserving surgery and breast irradiation, without adjuvant systemic therapy, between January 1, 1985 and December 31, 1992 at the London Regional Cancer Centre. The time intervals from definitive breast surgery to breast irradiation used for analysis were 0 to 8 weeks (201 patients), greater than 8 to 12 weeks (235 patients), greater than 12 to 16 weeks (91 patients), and greater than 16 weeks (41 patients). Kaplan-Meier estimates of time to local-recurrence and disease-free survival rates were calculated.
Results: Median follow-up was 11.2 years. Patients in all 4 time intervals were similar in terms of age and pathologic features. No statistically significant difference was seen between the 4 groups in local recurrence or disease-free survival with surgery-radiotherapy interval (p = 0.521 and p = 0.222, respectively). The overall local-recurrence rate at 5 and 10 years was 4.6% and 11.3%, respectively. The overall disease-free survival at 5 and 10 years was 79.6% and 67.0%, respectively.
Conclusion: This retrospective study suggests that delay in the start of breast irradiation of up to 16 weeks from definitive surgery does not increase the risk of recurrence in node-negative breast cancer patients. The certainty of these results is limited by the retrospective nature of this analysis.
abstract_id: PUBMED:10594505
Prognosis after breast recurrence following conservative surgery and radiotherapy in patients with node-negative breast cancer. Background: Breast conservation surgery with radiotherapy is a safe and effective alternative to mastectomy for early-stage breast cancer. This retrospective study examined the outcome of patients with isolated local recurrence following conservative surgery and radiotherapy in node-negative breast cancer.
Methods: Between November 1979 and December 1994, 503 women with node-negative breast cancer were treated by conservation surgery and radiotherapy without adjuvant systemic therapy.
Results: After a median follow-up of 73 months, the 5-year rate of freedom from local recurrence was 94 per cent. Thirty-five patients developed an isolated local recurrence within the breast as a first event. Thirty-three patients were treated with salvage mastectomy and two patients were treated with systemic therapy alone. The 5-year rate of freedom from second relapse was 46 per cent and the overall 5-year survival rate was 59 per cent for patients who had salvage mastectomy. Patients who developed breast recurrence as a first event had a 3.25-fold greater risk of developing distant metastasis (P < 0.001) than those who did not have breast recurrence as a first event.
Conclusion: Salvage mastectomy after local recurrence was an appropriate treatment if there was no evidence of distant metastasis. Breast recurrence after conservative surgery and radiotherapy in node-negative breast cancer predicted an increased risk of distant relapse.
abstract_id: PUBMED:8931610
Randomized clinical trial of breast irradiation following lumpectomy and axillary dissection for node-negative breast cancer: an update. Ontario Clinical Oncology Group. Background: Breast-conservation surgery is now commonly used to treat breast cancer. Postoperative breast irradiation reduces cancer recurrence in the breast. There is still controversy concerning the necessity of irradiation of the breast in all patients.
Purpose: We present an update of results from a randomized clinical trial designed to examine the efficacy of breast irradiation following conservation surgery in the treatment of women with axillary lymph node-negative breast cancer. The patients were enrolled from April 1984 through February 1989. Initial results were published in 1992 after a median follow-up time of 43 months. It was reported that recurrence of cancer in the breast occurred in 5.5% of the patients who received breast irradiation compared with 25.7% of those who did not. No difference in survival was detected between the two treatment groups. Now that the median patient follow-up has reached 7.6 years, the trial end points have been re-examined and an attempt has again been made to identify a group of patients at low risk for recurrence of cancer in the breast.
Methods: Eight hundred thirty-seven patients with node-negative breast cancer were randomly assigned to receive either radiation therapy (n = 416) or no radiation therapy (n = 421) following lumpectomy and axillary lymph node dissection. The cumulative local recurrence rate as a first event, distant recurrence (i.e., occurrence of metastasis) rate, and overall mortality rate for the treatment groups were described by the Kaplan-Meier method and compared with the use of the logrank test. The Cox proportional hazards model was used to adjust the observed treatment effect for the influence of various prognostic factors (patient age, tumor size, estrogen receptor level, and tumor histology) at study entry on the outcomes of local breast recurrence, distant recurrence, and overall mortality. All P values resulted from the use of two-tailed statistical tests.
Results: One hundred forty eight (35%) of the nonirradiated patients and 47 (11%) of the irradiated patients developed recurrent cancer in the breast (relative risk for patients in the former versus the latter group = 4.0; 95% confidence interval = 2.83-5.65; P < .0001). Ninety-nine (24%) of the patients in the former group have died compared with 87 (21%) in the latter group. Age (< 50 years), tumor size (> 2 cm), and tumor nuclear grade (poor) continued to be important predictors for local breast relapse. On the basis of these factors, we were unable to identify a subgroup of patients with a very low risk for local breast cancer recurrence. Tumor nuclear grade, as previously reported, and tumor size were important predictors for mortality.
Conclusions: Breast irradiation was shown to reduce cancer recurrence in the breast, but there was no statistically significant reduction in mortality. A subgroup of patients with a very low risk for local breast recurrence who might not require radiation therapy was not identified.
abstract_id: PUBMED:28063695
Ipsilateral axillary recurrence after breast conservative surgery: The protective effect of whole breast radiotherapy. Background And Purpose: Whole breast radiotherapy (WBRT) is one of the possible reasons for the low rate of axillary recurrence after breast-conserving surgery (BCS).
Patients And Methods: We retrospectively collected data from 4,129 consecutive patients with breast cancer ⩽2cm and negative sentinel lymph node who underwent BCS between 1997 and 2007. We compared the risk of axillary lymph node recurrence between patients treated by WBRT (n=2939) and patients who received partial breast irradiation (PBI; n=1,190) performed by a single dose of electron intraoperative radiotherapy.
Results: Median tumour diameter was 1.1cm in both WBRT and PBI. Women who received WBRT were significantly younger and expressed significantly more multifocality, extensive in situ component, negative oestrogen receptor status and HER2 over-expression than women who received PBI. After a median follow-up of 8.3years, 37 and 28 axillary recurrences were observed in the WBRT and PBI arm, respectively, corresponding to a 10-year cumulative incidence of 1.3% and 4.0% (P<0.001). Multivariate analysis resulted in a hazard ratio of 0.30 (95% CI 0.17-0.51) in favour of WBRT.
Conclusions: In this large series of women with T1 breast cancer and negative sentinel lymph node treated by BCS, WBRT lowered the risk of axillary recurrence by two thirds as compared to PBI.
abstract_id: PUBMED:30387028
Regional Recurrence Risk Following a Negative Sentinel Node Procedure Does Not Approximate the False-Negative Rate of the Sentinel Node Procedure in Breast Cancer Patients Not Receiving Radiotherapy or Systemic Treatment. Background: Although the false-negative rate of the sentinel lymph node biopsy (SLNB) in breast cancer patients is 5-7%, reported regional recurrence (RR) rates after negative SLNB are much lower. Adjuvant treatment modalities probably contribute to this discrepancy. This study assessed the 5-year RR risk after a negative SLNB in the subset of patients who underwent breast amputation without radiotherapy or any adjuvant treatment.
Methods: All patients operated for primary unilateral invasive breast cancer between 2005 and 2008 were identified in the Netherlands Cancer Registry. Patients with a negative SLNB who underwent breast amputation and who were not treated with axillary lymph node dissection, radiotherapy, or any adjuvant systemic treatment were selected. The cumulative 5-year RR rate was estimated by Kaplan-Meier analysis.
Results: A total of 13,452 patients were surgically treated for primary breast cancer and had a negative SLNB, and 2012 patients fulfilled the selection criteria. Thirty-eight RRs occurred during follow-up. Multifocal disease was associated with a higher risk of developing RR (P = 0.04). The median time to RR was 27 months and was significantly shorter in patients with estrogen receptor-negative (ER-) breast cancer (9.5 months; P = 0.003). The 5-year RR rate was 2.4% in the study population compared with 1.1% in the remainder of 11,440 SLNB-negative patients (P = 0.0002).
Conclusions: Excluding the effect of radiotherapy and systemic treatment resulted in a twofold 5-year RR risk in breast cancer patients with a tumor-free SLNB. This 5-year RR rate was still much lower than the reported false-negative rate of the SLNB procedure.
abstract_id: PUBMED:36305299
Patterns of regional recurrence according to molecular subtype in patients with pN2 breast cancer treated with limited field regional irradiation. Objective: There is little evidence regarding the radiotherapy modification based on molecular subtypes in breast cancer. This study aimed to identify the risk and patterns of regional recurrence according to molecular subtype in patients with pN2 breast cancer.
Methods: We identified 454 patients who underwent radical surgery for breast cancer with 4-9 axillary lymph node metastases. All patients underwent axillary lymph node dissection, adjuvant chemotherapy and limited-field regional nodal irradiation. The rates and patterns of regional recurrence were compared between the following three subgroups: luminal type (estrogen receptor- and/or progesterone receptor-positive), HER2-type (estrogen receptor- and progesterone receptor-negative and HER2-positive) and triple-negative type (estrogen receptor-, progesterone receptor- and HER2-negative).
Results: Regional recurrence occurred in 18/454 patients (4%). The risk of regional recurrence was higher in the triple-negative (hazard ratio 7.641) and HER2-type (hazard ratio 4.032) subtypes than in the luminal subtype. The predominant pattern of regional recurrence was inside the radiotherapy field in triple-negative breast cancer and outside the radiotherapy field in HER2-type and luminal-type cancers.
Conclusions: In patients with pN2 breast cancer, the risk of regional recurrence was higher in the triple-negative and HER2-type than in the luminal type. In-field recurrence was predominant in triple-negative cancer, while out-field recurrence was frequent in luminal and HER2-type breast cancers.
abstract_id: PUBMED:26870109
Breast cancer recurrence after sentinel lymph node biopsy. Objective: To look into the pattern of breast cancer recurrence following mastectomy, breast conservative surgery and radiotherapy or chemotherapy after SLNB at our institution.
Methods: Between January 2005 and December 2014, all patients diagnosed with breast cancer with clinically negative axilla, underwent SLNB. We reviewed their medical records to identify pattern of cancer recurrence.
Results: The median follow-up was 35.5 months. Eighty five patients (70.8%) had a negative sentinel lymph node (SLN) and subsequently had no further axillary treatment, one of them (1.2%) developed axillary recurrence 25 months postoperatively. Twenty five patients (20.8%) had a positive SLN (macrometastases) and subsequently had immediate axillary lymph node dissection (ALND). Ten patients (8.3%) had a positive SLN (micrometastases). In the positive SLN patients (macrometastases and micrometastases), there were two ipsilateral breast recurrences (5.7%), seen three and four years postoperatively. Also in this group, there was one (2.9%) distant metastasis to bone three years postoperatively.
Conclusion: In this series, the clinical axillary false negative rate for SLNB was 1.2% which is in accordance with the published literature. This supports the use of SLNB as the sole axillary staging procedure in breast cancer patients with negative SLNB. Axillary lymph node dissection can be safely omitted in patients with micrometastases in their sentinel lymph node(s).
abstract_id: PUBMED:32499914
Utility of regional nodal irradiation in Japanese patients with breast cancer with 1-3 positive nodes after breast-conserving surgery and axillary lymph-node dissection. The utility of regional nodal irradiation (RNI) is being considered in cases of 1-3 axillary node metastases after breast-conserving surgery (BCS) with axillary lymph-node dissection (ALND). Therefore, we examined the necessity of RNI by examining the sites of recurrences in cases at our institution. We retrospectively analyzed 5,164 cases of primary breast cancer between January 2000 and December 2014 at the Aichi Cancer Centre, identifying local and distant recurrences in 152 patients with primary breast cancer treated with BCS and ALND and who had 1-3 positive axillary nodes. All patients received whole-breast irradiation (WBI) and adjuvant systemic therapy with either chemotherapy or anti-endocrine therapy with or without anti-human epidermal growth factor receptor 2 therapy. The present study excluded patients with ipsilateral breast tumor recurrence, contralateral breast cancer, neoadjuvant chemotherapy, T4 tumors or N2-3 nodes and distant metastasis. From the database of our institution, we identified 152 cases that met the defined criteria. The median follow-up period was 71 months (1-176). Isolated locoregional recurrences were found in three patients (2.0%) and were recurrent only in the breast. Only one patient had local lymph node recurrence with distant recurrence. The 10-year rates of isolated regional disease-free survival (DFS), DFS, and overall survival were 95.41, 89.50 and 96.75%, respectively, which was better compared with previous studies. We conclude that the addition of RNI to WBI is not necessary for Japanese patients who have 1-3 positive axillary nodes and ALND.
abstract_id: PUBMED:31291997
The impact of age on the risk of ipsilateral breast tumor recurrence after breast-conserving therapy in breast cancer patients with a > 5 mm margin treated without boost irradiation. Background: The boost irradiation to the tumor bed following whole-breast irradiation (WBI) reduced the risk of ipsilateral breast tumor recurrence (IBTR). However, in Japan, almost all patients with a margin ≤5 mm receive boost irradiation to the tumor bed, but the decision to perform boost irradiation for those with a margin > 5 mm is dependent on the institution. Thus, institutional guidelines on utilizing boost irradiation for patients aged ≤40 or ≤ 50 years vary. We investigated the IBTR rate to assess the appropriate age for boost irradiation to the tumor bed with a margin > 5 mm.
Methods: From January 1993 to December 2010, 419 patients with early-stage breast cancer and negative margins (> 5 mm) after breast-conserving surgery received WBI without boost irradiation. The Gray test was used to compare the cumulative incidence of IBTR among patients aged ≤40, 41-50, and ≥ 51 years. Hazard ratios were estimated using the Fine and Gray models. Furthermore, as a subgroup analysis, we investigated whether IBTR depended on the use of systemic therapy, such as anthracycline or taxane regimens.
Results: The median follow-up time was 9.3 years. In multivariate analysis, only age predicted IBTR (p = 0.047). The 10-year IBTR rate was 15.7% in patients aged ≤40, 3.8% in those aged 41-50, and 2.0% in patients aged ≥51 years. The difference between patients aged ≤40 and 41-50 years was statistically significant (p = 0.045), whereas the difference between patients aged 41-50 and ≥ 51 years was not significant (p = 0.21).
Conclusions: In our institutional surgical setting, when boost irradiation is performed only for patients with a margin ≤5 mm, the IBTR rate after WBI without boost irradiation was significantly higher in patients aged ≤40 years, suggesting that boost irradiation should be used for patients in this age group.
Answer: The impact of delay in breast irradiation following conservative breast surgery in node-negative breast cancer patients on the risk of recurrence has been investigated in several studies. According to the retrospective reviews conducted, a delay in the start of radiotherapy up to 16 weeks from definitive surgery does not appear to increase the risk of recurrence in node-negative breast cancer patients. In a study with a median follow-up of 63.5 months, no statistically significant difference was found in local recurrence or disease-free survival with surgery-radiotherapy interval, and the 5-year freedom from local relapse was 95.4% (PUBMED:9531372). Another study with a median follow-up of 11.2 years supported these findings, showing no statistically significant difference in local recurrence or disease-free survival between different time intervals from surgery to radiotherapy (PUBMED:16246494).
However, it is important to note that these results are limited by the retrospective nature of the analyses. Additionally, other factors such as tumor size, grade, and age were found to be significant in multivariable models for disease-free survival, but the surgery-radiotherapy interval did not enter the model (PUBMED:9531372). This suggests that while the timing of radiotherapy post-surgery is not a significant factor for recurrence, other clinical and pathological features play a more critical role in the prognosis of node-negative breast cancer patients.
In conclusion, based on the evidence from the studies referenced, a delay in breast irradiation following conservative breast surgery in node-negative breast cancer patients does not have a significant impact on the risk of recurrence, although the certainty of these results is limited by the nature of the studies (PUBMED:9531372; PUBMED:16246494). |
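As an aside on methodology: the studies summarized above assess the surgery-to-radiotherapy interval with Kaplan-Meier estimates and Cox regression. The following is a minimal, illustrative sketch of that kind of analysis using the Python lifelines library; the file name, column names (interval_weeks, dfs_months, relapse, age, tumor_cm, grade) and the 12-week grouping are hypothetical stand-ins, not the authors' actual data or code.

```python
# Illustrative sketch only: testing a surgery-to-radiotherapy interval as a
# predictor of disease-free survival, in the spirit of the Kaplan-Meier and
# Cox analyses described in the abstracts above. All names are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.read_csv("cohort.csv")  # one row per patient (hypothetical file)

# Kaplan-Meier curves by interval group (<=12 vs >12 weeks from surgery to RT)
km = KaplanMeierFitter()
for late, grp in df.groupby(df["interval_weeks"] > 12):
    label = ">12 weeks" if late else "<=12 weeks"
    km.fit(grp["dfs_months"], event_observed=grp["relapse"], label=label)
    print(label, km.median_survival_time_)

# Cox proportional-hazards model: does the interval remain in the model once
# known risk factors (age, tumour size, grade) are accounted for?
cph = CoxPHFitter()
cph.fit(df[["dfs_months", "relapse", "interval_weeks", "age", "tumor_cm", "grade"]],
        duration_col="dfs_months", event_col="relapse")
cph.print_summary()
```

In the cohorts quoted above, the interval term did not enter the final model once tumour size, grade, and age were included, which is what underpins the conclusion that a delay of up to 16 weeks was not associated with increased recurrence.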
Instruction: Mallampati class changes during pregnancy, labour, and after delivery: can these be predicted?
Abstracts:
abstract_id: PUBMED:20007793
Mallampati class changes during pregnancy, labour, and after delivery: can these be predicted? Background: An increase in Mallampati class is associated with difficult laryngoscopy in obstetrics. The goal of our study was to determine the changes in Mallampati class before, during, and after labour, and to identify predictive factors of the changes.
Methods: Mallampati class was evaluated at four time intervals in 87 pregnant patients: during the 8th month of pregnancy (T(1)), placement of epidural catheter (T(2)), 20 min after delivery (T(3)), and 48 h after delivery (T(4)). Factors such as gestational weight gain, duration of first and second stages of labour, and i.v. fluids administered during labour were evaluated for their predictive value. Mallampati classes 3 and 4 were compared for each time interval. Logistic regression was used to test the association between each factor and Mallampati class evolution.
Results: Mallampati class did not change for 37% of patients. The proportion of patients falling into Mallampati classes 3 and 4 at the various times of assessment were: T(1), 10.3%; T(2), 36.8%; T(3), 51.7%; and T(4), 20.7%. The differences in percentages were all significant (P<0.01). None of the evaluated factors was predictive.
Conclusions: The incidence of Mallampati classes 3 and 4 increases during labour compared with the pre-labour period, and these changes are not fully reversed by 48 h after delivery. This work confirms the absolute necessity of examining the airway before anaesthetic management in obstetric patients.
abstract_id: PUBMED:23710682
Effect of epidural analgesia on change in Mallampati class during labour. Mallampati class has been shown to increase during labour. Epidural analgesia might influence this change. The aim of our study was to compare the change in Mallampati class during labour in parturients who did and did not receive epidural analgesia and study the association of these changes with pre-defined clinical characteristics. We performed a prospective observational study of 190 parturients. Using standard methodology, photographs of the upper airway were taken with a digital camera during early labour and within 90 min of delivery. Two to three consultant anaesthetists, blinded to the origin of the photographs, evaluated the images obtained and assigned a Mallampati class to each. Overall, Mallampati class increased in 61 (32.1%), decreased in 18 (9.5%) and did not change in 111 (58.4%) parturients (p<0.001). The proportions of parturients in the epidural and non-epidural groups who demonstrated an increase, decrease and no change in Mallampati class were similar. Of the relationships between change in Mallampati class and the other factors studied, only the total dose of epidural levobupivacaine during labour demonstrated a weak positive correlation 0.17 (p=0.039) with Mallampati class. This study confirms that labour is associated with an increase in the Mallampati class in approximately one third of parturients. Our findings indicate that having an epidural does not influence the likelihood of a change in Mallampati class during labour.
abstract_id: PUBMED:8111943
Changing Mallampati score during labour. We present the case of a changing Mallampati score during the course of labour in a healthy primigravida. On admission to hospital, the airway was assessed as Mallampati class I-II. At 5 cm cervical dilation, the woman began to bear down strenuously and continued this despite being advised of the inherent hazard. At 8 cm dilation, Caesarean delivery was contemplated because of fetal heart rate decelerations. Repeat airway evaluation revealed marked oedema of the lower pharynx giving rise to a Mallampati score of III-IV. Improvement of the fetal heart rate tracing permitted vaginal delivery under local infiltration. Postpartum, the Mallampati score was still III-IV. However, 12 hr later it had returned to the admission classification of I-II. We recommend that, in addition to the usual airway evaluation on admission, the assessment be repeated in the obstetric patient before induction of general anaesthesia.
abstract_id: PUBMED:38187969
Assessment of airway parameters in obstetric patients and comparing them at different phases in the perinatal period: A prospective observational study. Background And Aims: Airway changes occur in different stages of pregnancy. We aimed to evaluate the changes in the upper airway in obstetric patients during pregnancy, labour and after delivery using multiple airway indices and identify the predictive factors of these changes.
Methods: This observational study was conducted on 90 parturients aged >20 years with a monofoetal pregnancy. The patients' weight was recorded, and airway assessment, including Mallampati grading (MPG), thyromental distance (TMD), sternomental distance (SMD), neck circumference (NC) and Wilson's risk score, was performed in the second trimester of pregnancy (T0), between 32 and 34 weeks of gestation (T1), at the time of admission for safe confinement between 38 and 40 weeks of gestation (T2), 2 h after delivery of the baby (T3), and 24 h after delivery (T4). Unpaired t-tests and analysis of variance were applied.
Results: Changes in mean (standard deviation [SD]) weight, recorded from T0 to T2, were from 56.96 (10.77) to 65.322 (11.49) kg (P = 0.001). A rise of one or two grades in MPG was detected as the pregnancy progressed, and a decrease of one grade was noted after delivery. A significant decrease in mean (SD) TMD was noted from 6.88 (0.65) to 6.36 (0.62) cm from T0 to T2 (P = 0.001). SMD also decreased in a similar manner as TMD. NC increased from T0 to T3 and then decreased at T4 (P = 0.004).
Conclusion: Following the second trimester of pregnancy, MPG increased by either one or two grades, with a decrease in TMD and SMD and an increase in NC.
abstract_id: PUBMED:32609359
Functional changes in decidual mesenchymal stem/stromal cells are associated with spontaneous onset of labour. Ageing and parturition share common pathways, but their relationship remains poorly understood. Decidual cells undergo ageing as parturition approaches term, and these age-related changes may trigger labour. Mesenchymal stem/stromal cells (MSCs) are the predominant stem cell type in the decidua. Stem cell exhaustion is a hallmark of ageing, and thus ageing of decidual MSCs (DMSCs) may contribute to the functional changes in decidual tissue required for term spontaneous labour. Here, we determine whether DMSCs from patients undergoing spontaneous onset of labour (SOL-DMSCs) show evidence of ageing-related functional changes compared with those from patients not in labour (NIL-DMSCs), undergoing Caesarean section. Placentae were collected from term (37-40 weeks of gestation), SOL (n = 18) and NIL (n = 17) healthy patients. DMSCs were isolated from the decidua basalis that remained attached to the placenta after delivery. DMSCs displayed stem cell-like properties and were of maternal origin. Important cell properties and lipid profiles were assessed and compared between SOL- and NIL-DMSCs. SOL-DMSCs showed reduced proliferation and increased lipid peroxidation, migration, necrosis, mitochondrial apoptosis, IL-6 production and p38 MAPK levels compared with NIL-DMSCs (P < 0.05). SOL- and NIL-DMSCs also showed significant differences in lipid profiles in various phospholipids (phosphatidylethanolamine, phosphatidylglycerol, phosphatidylinositol, phosphatidylserine), sphingolipids (ceramide, sphingomyelin), triglycerides and acyl carnitine (P < 0.05). Overall, SOL-DMSCs had altered lipid profiles compared with NIL-DMSCs. In conclusion, SOL-DMSCs showed evidence of ageing-related reduced functionality, accumulation of cellular damage and changes in lipid profiles compared with NIL-DMSCs. These changes may be associated with term spontaneous labour.
abstract_id: PUBMED:28404439
Haemodynamic changes during labour: continuous minimally invasive monitoring in 20 healthy parturients. Background: There are few studies on maternal haemodynamic changes during labour. None have used continuous cardiac output monitoring during all labour stages. In this observational study, we monitored haemodynamic variables continuously during the entire course of labour in healthy parturients.
Methods: Continuous haemodynamic monitoring with the LiDCOplus technique was performed in 20 healthy parturients during spontaneous labour, vaginal delivery and for 15minutes postpartum. Cardiac output, stroke volume, heart rate, systemic vascular resistance, and systolic arterial pressure were measured longitudinally at baseline (periods between/without contractions) and during contractions in early and late stage 1, stage 2, during delivery, and postpartum, and were analysed with marginal linear models.
Results: Twenty parturients were included. In early stage 1, baseline cardiac output was 6.3L/min (95% CI 5.7 to 6.9). Baseline values were similar across both labour stages and postpartum for all haemodynamic variables. During stage 2 contractions, cardiac output decreased by 32%, stroke volume decreased by 44%, heart rate increased by 52%, systemic vascular resistance increased by 88%, and systolic arterial pressure increased by 36% compared to baseline. During stage 1 contractions, haemodynamic changes were less profound and less uniform than during stage 2.
Conclusion: Progression of labour had no major effect on haemodynamic baseline values. Haemodynamic stress during contractions was substantial in both labour stages, yet most pronounced during the second stage of labour. The absence of an increase in stroke volume and cardiac output postpartum questions the common belief in an immediate rise in cardiac output after delivery due to autotransfusion from the contracted uterus.
abstract_id: PUBMED:34791944
Upper lip bite test compared to modified Mallampati test in predicting difficult airway in obstetrics: A prospective observational study. Difficult airway and intubation can have dangerous sequelae for patients if not managed promptly. This issue is even more challenging among obstetric patients. Several studies have aimed to determine which test predicts a difficult airway or difficult intubation with higher accuracy. This study aims to compare the upper lip bite test with the modified Mallampati test in predicting difficult airway among obstetric patients. During this prospective observational study, 184 adult pregnant women, with ASA physical status of II, were enrolled. Intubations with a Cormack-Lehane grade III or IV view were defined as a difficult airway and difficult intubation in this study. Upper lip bite test, modified Mallampati test, thyromental distance and sternomental distance were noted for all patients. Modified Mallampati test, upper lip bite test and sternomental distance had the highest specificity. Based on regression analysis, body mass index and Cormack-Lehane grade had a significant association. Modified Mallampati test was the most accurate test for predicting difficult airway. The best cut-off points of thyromental distance and sternomental distance in our study were 5 cm and 15 cm, respectively, by receiver operating characteristic curve analysis. Based on the results of the present study, it can be concluded that in the obstetric population, the modified Mallampati test is practically the best test for predicting difficult airway. However, combining this test with the upper lip bite test, thyromental distance and sternomental distance might result in better diagnostic accuracy.
abstract_id: PUBMED:33951785
Modified Mallampati Class Rating Before and After Cesarean Delivery: A Prospective Observational Study. Background: The modified Mallampati classification (MMC) provides an estimate of the tongue size relative to the oral cavity size, and is a usual screening tool for predicting difficult laryngoscopy. Previous studies have indicated an increase of MMC during the progression of pregnancy, but there is no comprehensive study in pregnant women undergoing cesarean delivery. The primary aim of this study was to evaluate the MMC before and after cesarean delivery.
Methods: This is a prospective observational study of 104 women who underwent cesarean section. MMC, thyromental distance, neck circumference, and upper lip bite test were evaluated at 4 different time points: during the pre-anesthetic visit (T0) and at 1 (T1), 6 (T2), and 24 (T3) hours after delivery. Factors evaluated for their predictive validity included gestational weight gain, operation time, amount of intravascular fluids, oxytocin dosage, and blood loss. The correlation between each factor and the MMC classification was tested by logistic regression.
Results: Of the 104 participants, 59.6% experienced Mallampati class changes. The proportions of patients classified as Mallampati III and IV at different time points were: T0 = 48.1% (MMC III only), T1 = 75.0%, T2 = 80.8%, and T3 = 84.6%, respectively. Gestational weight gain, duration of surgery, anesthetic method, blood loss, oxytocin dosage, and amount of intravenous fluid were not correlated with the MMC change.
Conclusion: The number of patients with initial Mallampati III was high. In addition, a significant increase in MMC occurred after cesarean delivery. The data confirm the particular risk status of women undergoing cesarean delivery with regard to airway anatomy.
abstract_id: PUBMED:37059644
Symptoms of onset of labour and early labour: A scoping review. Background: Early labour care often insufficiently addresses the individual needs of pregnant women leading to great dissatisfaction. In-depth knowledge about symptoms of onset of labour and early labour is necessary to develop women-centred interventions.
Question Or Aim: To provide an overview on the current evidence about pregnant women's symptoms of onset of labour and early labour.
Methods: We conducted a scoping review in the five databases PubMed, Web of Science, CINAHL Complete, PsycINFO and MIDIRS in May 2021 and August 2022 using a sensitive search strategy. A total of 2861 titles and abstracts and 290 full texts were screened independently by two researchers using Covidence. For this article, data were extracted from 91 articles and summarised descriptively and narratively.
Findings: The most frequently mentioned symptoms were 'Contractions, labour pain' (n = 78, 85.7 %), 'Details about the contractions' (n = 51 articles, 56.0 %), 'Positive and negative emotions' (n = 50, 54.9 %) and 'Fear and worries' (n = 48 articles, 52.7 %). Details about the contractions ranged from a slight pulling to unbearable pain and the emotional condition varied from joy to great fear, showing an extraordinary diversity of symptoms highlighting the very individual character of early labour.
Discussion: A comprehensive picture of varying and contradicting symptoms of onset of labour and early labour was drawn. Different experiences indicate different needs. This knowledge builds a good basis to develop women-centred approaches to improve early labour care.
Conclusion: Further research is necessary to design individualised early labour interventions and evaluate their effectiveness.
abstract_id: PUBMED:31138388
Anthropometric Measures and Prediction of Maternal Sleep-Disordered Breathing. Study Objectives: Pregnant women are at risk for sleep-disordered breathing (SDB); however, screening methods in this dynamic population are not well studied. The aim of this study was to examine whether anthropometric measures can accurately predict SDB in pregnant women.
Methods: Pregnant women with snoring and overweight/obesity were recruited in the first trimester. Anthropometric measures were performed according to the International Standards for Anthropometric Assessment, including a seated neutral and extended neck Mallampati class. Home sleep apnea monitoring was performed using a level III device after completion of anthropometric assessment. SDB was defined as an apnea-hypopnea index ≥ 5 events/h of sleep. Pearson and Spearman tests examined correlations between various measures. Generalized linear models, sensitivity, specificity, and area under the curve as well as odds ratios were performed to test the model.
Results: A total of 129 participants were recruited, and 23 had SDB. Average gestational age was 10.6 ± 1.9 weeks. Due to concerns over multicollinearity, the final model included extended Mallampati class and upright neck circumference. Neck circumference was significantly higher in participants with Mallampati classes 2/3 and grade 4 compared to participants with Mallampati class 1 (P = .0005). Increasing neck circumference was associated with higher odds of SDB (P = .0022). In Mallampati class 1, odds ratio for SDB was 2.89 (1.19, 7.03) per unit increase in neck circumference.
Conclusions: Modeling neck circumference while allowing for differences by Mallampati class showed a nearly threefold increase in the risk of SDB with increasing neck circumference in women with Mallampati class 1. Other potential sites of airway obstruction need to be investigated in future research.
Answer: The Mallampati class, which is used to predict the difficulty of laryngoscopy and intubation, has been observed to change during pregnancy, labour, and after delivery. However, predicting these changes has proven to be challenging.
Studies have shown that the incidence of Mallampati classes 3 and 4 increases during labour compared with the pre-labour period, and these changes are not fully reversed by 48 hours after delivery (PUBMED:20007793). Another study confirmed that labour is associated with an increase in the Mallampati class in approximately one-third of parturients, but found that having an epidural does not influence the likelihood of a change in Mallampati class during labour (PUBMED:23710682). Additionally, a case report highlighted the dynamic nature of the Mallampati score during labour, with a patient's score increasing from I-II to III-IV during labour and then returning to the admission classification 12 hours postpartum (PUBMED:8111943).
Further research has shown that following the second trimester of pregnancy, the Mallampati grading (MPG) increased by either one or two grades, with a decrease in thyromental distance (TMD) and sternomental distance (SMD) and an increase in neck circumference (NC) (PUBMED:38187969). However, none of the factors evaluated in these studies, such as gestational weight gain, duration of labour or surgery, anesthetic method, intravenous fluids, blood loss, or oxytocin dosage, was predictive of the change in Mallampati class (PUBMED:20007793; PUBMED:33951785).
In conclusion, while it is clear that Mallampati class can change during pregnancy, labour, and after delivery, predicting these changes based on the factors evaluated in the studies is not currently possible. The changes seem to be individual and not consistently associated with the factors that were studied. Therefore, the absolute necessity of examining the airway before anaesthetic management in obstetric patients is emphasized (PUBMED:20007793). |
Instruction: Does comparison of performance lead to better care?
Abstracts:
abstract_id: PUBMED:21969891
Incentives for better performance in health care. Incentives for better performance in health care have several modes and methods. They are designed to motivate and encourage people to perform well and improve their outcomes. They may include monetary or non-monetary incentives and may be applied to consumers, individual providers or institutions. One such model is the Pay-for-Performance system. In this system, beneficiaries are compared with one another based on a set of performance indicators and those that achieve a high level of performance are rewarded financially. This system is meant to recognise and primarily to reward high performers. Its goal is to encourage beneficiaries to strive for better performance. This system has been applied in several countries and for several recipients and settings. Early indications show that this system has had mixed effects on performance.
abstract_id: PUBMED:34729763
Social comparison in the classroom: Priming approach/avoidance changes the impact of social comparison on self-evaluation and performance. Background: Social comparisons between pupils are especially relevant at school. Such comparisons influence self-perception and performance. When pupils evaluate themselves more negatively and perform worse after an upward comparison (with a better off pupil) than a downward comparison (with a worse-off pupil), this is a contrast effect. On the other hand, when they evaluate themselves more positively and are better after an upward than downward comparison, this is an assimilation effect.
Aims: We examine assimilation and contrast effects of comparison in the classroom on pupils' self-evaluation and performance. Previous work by Fayant, Muller, Nurra, Alexopoulos, and Palluel-Germain (2011) led us to hypothesize that approach vs. avoidance moderates the impact of upward vs. downward comparison: approach should lead to an assimilation effect on self-evaluation and performance, while avoidance should lead to a contrast effect on self-evaluation and performance.
Methods: To test this hypothesis, we primed pupils with either approach or avoidance before reading upward or downward comparison information about another pupil. We then measured self-evaluation (Experiment 1) and performance (Experiments 1 and 2).
Results: Results confirmed our predictions and revealed the predicted interaction on self-evaluation (Experiment 1) and performance (Experiment 2): approach leads to an assimilation effect (in both experiments) whereas avoidance leads to a contrast effect (in Experiment 2).
Conclusions: These experiments replicate previous studies on self-evaluation and also extend previous work on performance and in a classroom setting. Priming approach before upward comparison seems especially beneficial to pupils.
abstract_id: PUBMED:30263077
Exploring Performance Calibration in Relation to Better or Worse Than Average Effect in Physical Education. The aim of this study was to explore students' calibration of sport performance in relation to better or worse than average effect in physical education settings. Participants were 147 fifth and sixth grade students (71 boys, 76 girls) who were tested in a soccer passing accuracy test after they had provided estimations for their own and their peers' performance in this test. Based on students' actual and estimated performance, calibration indexes of accuracy and bias were calculated. Moreover, students were classified in better, worse, or equal than average groups based on estimated scores of their own and their peers' average performance. Results showed that students overestimated their own performance while most of them believed that their own performance was worse than their peers' average performance. No significant differences in calibration accuracy of soccer passing were found between better, worse, or equal than average groups of students. These results were discussed with reference to previous calibration research evidence and theoretical and practical implications for self-regulated learning and performance calibration in physical education.
abstract_id: PUBMED:7091068
Childhood lead poisoning and inadequate child care. Sixteen caretakers of children hospitalized for their first episode of lead poisoning and 16 caretakers of children with normal lead levels were interviewed in their homes to determine if caretakers of children with lead poisoning provided more inadequate child care than the comparison group of caretakers. Children were matched according to age, race, and sex. Correlations were found between children's lead levels and caretakers' scores on the measures of inadequate child care. Differences were evident in the overall physical and cognitive emotional care provided to these children. No differences were found in the caretakers' ages, number of years of education and family monthly income, number of occupants in the household, and family mobility. Implications of the intertwined roles of inadequate child care, subclinical lead poisoning, and later developmental sequelae are discussed.
abstract_id: PUBMED:12940343
Does quality of care lead to better financial performance?: the case of the nursing home industry. The study describes the relationship between quality of care and financial performance (operating profit margin) as it pertains to the nursing home industry. We found that nursing homes that produce better outcomes and process of care were able to achieve lower patient care costs and report better financial performance.
abstract_id: PUBMED:29298402
Public Reporting of Primary Care Clinic Quality: Accounting for Sociodemographic Factors in Risk Adjustment and Performance Comparison. Performance measurement and public reporting are increasingly being used to compare clinic performance. Intended consequences include quality improvement, value-based payment, and consumer choice. Unintended consequences include reducing access for riskier patients and inappropriately labeling some clinics as poor performers, resulting in tampering with stable care processes. Two analytic steps are used to maximize intended and minimize unintended consequences. First, risk adjustment is used to reduce the impact of factors outside providers' control. Second, performance categorization is used to compare clinic performance using risk-adjusted measures. This paper examines the effects of methodological choices, such as risk adjusting for sociodemographic factors in risk adjustment and accounting for patients clustering by clinics in performance categorization, on clinic performance comparison for diabetes care, vascular care, asthma, and colorectal cancer screening. The population includes all patients with commercial and public insurance served by clinics in Minnesota. Although risk adjusting for sociodemographic factors has a significant effect on quality, it does not explain much of the variation in quality. In contrast, taking into account the nesting of patients within clinics in performance categorization has a substantial effect on performance comparison.
abstract_id: PUBMED:25467361
Better documentation improves patient care. This article is the sixth in a series of seven describing the journey within NHS Lanarkshire in partnership with the University of the West of Scotland to support nursing and midwifery leadership roles through Scotland's Leading Better Care programme. Preceding articles have provided an overview of the programme and discussed a range of staff development work programmes. This article describes work carried out on clinical documentation to promote delivery of the three quality ambitions of safe, effective and person-centred care.
abstract_id: PUBMED:31552847
ABC, VED and lead time analysis in the surgical store of a public sector tertiary care hospital in Delhi. Background: An efficient inventory control system would help optimize the use of resources and eventually help improve patient care.
Objectives: The study aimed to classify the surgical consumables using the always, better, control (ABC) and vital, essential, desirable (VED) techniques, as well as to calculate the lead time of specific category A and vital surgical consumables.
Methods: This was a descriptive, record-based study conducted from January to March 2016 in the surgical stores of the All India Institute of Medical Sciences, New Delhi. The study comprised all the surgical consumables that were procured during the financial year 2014-2015. The stores ledger containing details of the consumption of the items, supply orders, and procurement files of the items were studied for performing the ABC analysis and calculating the lead time. A list of surgical consumables was distributed to the doctors, nursing staff, technical staff, and hospital stores personnel to categorize the items into VED categories, after the basis for the classification had been explained to them.
Results: ABC analysis revealed that 35 items (14%), 52 items (21%), and 171 items (69%) were categorized into the A (70% annual consumption value [ACV]), B (20% ACV), and C (10% ACV) categories, respectively. In the current study, vital items comprised the majority (73%) of the items, and the essential (E) category comprised 26% of all items. The average internal, external, and total lead time was 17 days (range 3-30 days), 25 days (range 5-38 days) and 44 days (range 18-98 days), respectively.
Conclusions: Hospitals stores need to implement inventory management techniques to reduce the number of stock-outs and internal lead time.
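As a concrete illustration of the ABC technique described in this abstract (items are ranked by annual consumption value, and the top ~70%, next ~20%, and last ~10% of cumulative value form categories A, B and C), the following minimal sketch shows one possible implementation; the item names and consumption values are hypothetical and are not taken from the study.

```python
# Illustrative ABC categorization by cumulative annual consumption value (ACV).
# Item names and values are hypothetical, not from the study.

def abc_categorize(items):
    """items: dict mapping item name -> annual consumption value.
    Returns dict mapping item name -> 'A', 'B' or 'C' using ~70/20/10% ACV cut-offs."""
    total = sum(items.values())
    ranked = sorted(items.items(), key=lambda kv: kv[1], reverse=True)
    categories, cumulative = {}, 0.0
    for name, acv in ranked:
        cumulative += acv
        share = cumulative / total
        if share <= 0.70:
            categories[name] = "A"
        elif share <= 0.90:        # next ~20% of cumulative ACV
            categories[name] = "B"
        else:                      # remaining ~10% of cumulative ACV
            categories[name] = "C"
    return categories

if __name__ == "__main__":
    demo = {"suture_kit": 52000, "gauze": 31000, "gloves": 12000,
            "syringes": 3500, "drapes": 1200, "tape": 300}
    for item, cat in abc_categorize(demo).items():
        print(f"{item}: category {cat}")
```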
abstract_id: PUBMED:36339366
Atrial Fibrillation Better Care Pathway Adherent Care Improves Outcomes in Chinese Patients With Atrial Fibrillation. Background: Atrial fibrillation (AF) is a complex disease associated with comorbidities and adverse outcomes. The Atrial fibrillation Better Care (ABC) pathway has been proposed to streamline the integrated and holistic approach to AF care.
Objectives: This study sought to evaluate patients' characteristics, incidence of adverse events, and impact on outcomes with ABC pathway-adherent management.
Methods: The study included consecutive AF patients enrolled in the nationwide ChioTEAF registry (44 centers, 20 Chinese provinces, from October 2014 to December 2018), with data available to evaluate the ABC criteria and with 1-year follow-up data.
Results: A total of 3,520 patients (mean age 73.1 ± 10.4 years, 43% female) were included, of which 1,448 (41.1%) were managed as ABC pathway adherent. The latter were younger and had comparable CHA2DS2-VASc and lower HAS-BLED (mean 71.7 ± 10.3 years of age vs 74.1 ± 10.4 years of age; P < 0.01; 3.54 ± 1.60 vs 3.44 ± 1.70; P = 0.10; and 1.95 ± 1.10 vs 2.12 ± 1.20; P < 0.01, respectively) scores compared with ABC-nonadherent patients. At 1-year follow-up, patients managed adherent to the ABC pathway had a lower incidence of the primary composite outcome of all-cause death or any thromboembolic event (1.5% vs 3.6%; P < 0.01) as compared with ABC-nonadherent patients. On multivariate analysis, ABC pathway-adherent care was independently associated with a lower risk of the composite endpoint (OR: 0.51; 95% CI: 0.31-0.84).
Conclusions: Adherence to the ABC pathway for integrated care in a contemporary nationwide cohort of Chinese AF patients was suboptimal. Clinical management adherent to the ABC pathway was associated with better outcomes.
abstract_id: PUBMED:22728581
The better model to predict and improve pediatric health care quality: performance or importance-performance? Background: The perpetual search for ways to improve pediatric health care quality has resulted in a multitude of assessments and strategies; however, there is little research evidence as to their conditions for maximum effectiveness. A major reason for the lack of evaluation research and successful quality improvement initiatives is the methodological challenge of measuring quality from the parent perspective.
Purpose: Comparison of performance-only and importance-performance models was done to determine the better predictor of pediatric health care quality and more successful method for improving the quality of care provided to children.
Approach: Fourteen pediatric health care centers serving approximately 250,000 patients in 70,000 households in three West Central Florida counties were studied. A cross-sectional design was used to determine the importance and performance of 50 pediatric health care attributes and four global assessments of pediatric health care quality. Exploratory factor analysis revealed five dimensions of care (physician care, access, customer service, timeliness of services, and health care facility). Hierarchical multiple regression compared the performance-only and the importance-performance models. In-depth interviews, participant observations, and a direct cognitive structural analysis identified 50 health care attributes included in a mailed survey to parents (n = 1,030). The tailored design method guided survey development and data collection.
Findings: The importance-performance multiplicative additive model was a better predictor of pediatric health care quality.
Practice Implications: Attribute importance moderates performance and quality, making the importance-performance model superior for measuring and providing a deeper understanding of pediatric health care quality and a better method for improving the quality of care provided to children. Regardless of attribute performance, if the level of attribute importance is not taken into consideration, health care organizations may spend valuable resources targeting the wrong areas for improvement. Consequently, this finding aids in health care quality research and policy decisions on organizational improvement strategies.
Answer: The comparison of performance can lead to better care in various contexts, but the effects are nuanced and depend on the methods and settings in which performance comparison is implemented.
In health care settings, incentives such as the Pay-for-Performance system are designed to motivate and encourage better performance by financially rewarding high performers. This system aims to encourage beneficiaries to strive for better performance and has been applied in several countries with mixed effects on performance (PUBMED:21969891).
In educational settings, social comparison in the classroom can impact pupils' self-evaluation and performance. Priming pupils with an approach before reading upward comparison information about another pupil can lead to an assimilation effect, which is beneficial to pupils' self-evaluation and performance. In contrast, priming with avoidance can lead to a contrast effect (PUBMED:34729763).
In physical education, students' calibration of sport performance in relation to better or worse than average effect showed that students overestimated their own performance and believed their performance was worse than their peers' average performance. However, no significant differences in calibration accuracy were found between different groups of students (PUBMED:30263077).
In the nursing home industry, better outcomes and processes of care were associated with lower patient care costs and better financial performance, suggesting that quality of care can lead to better financial performance (PUBMED:12940343).
Public reporting of primary care clinic quality, which involves performance comparison, can have unintended consequences such as reducing access for riskier patients and inappropriately labeling some clinics as poor performers. Methodological choices in risk adjustment and performance categorization can significantly affect clinic performance comparison (PUBMED:29298402).
In the context of inventory management in a hospital's surgical store, implementing inventory management techniques can help optimize resources and improve patient care (PUBMED:31552847).
For patients with atrial fibrillation, adherence to the Atrial fibrillation Better Care (ABC) pathway for integrated care was associated with better outcomes, indicating that performance comparison and adherence to care pathways can improve patient outcomes (PUBMED:36339366).
Finally, in pediatric health care, the importance-performance model was found to be a better predictor of pediatric health care quality and a more successful method for improving the quality of care provided to children, compared to a performance-only model (PUBMED:22728581).
In summary, comparison of performance can lead to better care when appropriately applied and when factors such as incentives, priming, calibration, quality of care, risk adjustment, and the importance of performance are taken into account. However, it is important to consider the specific context and potential unintended consequences of performance comparison initiatives. |
Instruction: Individual unmet needs for care: are they sensitive as outcome criterion for the effectiveness of mental health services interventions?
Abstracts:
abstract_id: PUBMED:18777143
Individual unmet needs for care: are they sensitive as outcome criterion for the effectiveness of mental health services interventions? Background: Mental health interventions should demonstrate an effect on patients' functioning as well as their needs, in particular on unmet needs, whose assessment depends on the perspective of either the patient or the clinician. However, individual met and unmet needs appear to change over time, qualitatively and quantitatively, raising questions about their sensitivity to change and about the association between level of needs and treatment.
Methods: Data on baseline and follow-up need assessment in community mental health services in four European countries in the context of a cluster randomised trial on a novel mental health service intervention were used, which involved 102 clinicians with key worker roles and 320 patients with schizophrenia or related psychotic disorders. Need assessment was performed with the Camberwell assessment of needs short appraisal schedule (CANSAS) among patients as well as clinicians. Focus is the sensitivity to change in unmet needs over time as well as the concordance between patient and clinician ratings and their relationship with treatment condition.
Results: At follow-up, 294 patients (92%) had a full need assessment, while clinician-rated needs were available for 302 patients (94%). Generally, the total number of met needs remained quite stable, but unmet needs decreased significantly over time, according to patients as well as to clinicians. Sensitivity to change of unmet needs is quite high: about two thirds of all unmet needs made a transition to no need or a met need, and more than half of all unmet needs at follow-up were new. Agreement between patient and clinician on unmet needs at baseline as well as at follow-up was rather low, without any indication of a specific treatment effect.
Conclusions: Individual unmet needs appear to be quite sensitive to change over time but as yet less suitable as outcome criterion of treatment or specific interventions.
abstract_id: PUBMED:38379675
Determinants of unmet needs for mental health services amongst adolescents in Shiraz, Iran: a cross-sectional study. Background: Mental disorders are increasingly prevalent among adolescents without appropriate response. There are a variety of reasons for unmet mental health needs, including attitudinal and structural barriers. Accordingly, we investigated perceived mental health needs, using mental health services, and their barriers in adolescents.
Method: This cross-sectional study was conducted in 2022 in Shiraz, Iran. Demographic characteristics, the Adolescent Unmet Needs Checklist, and the Young Schema Questionnaire were administered to 348 adolescents aged 13-19 years. Adolescents were classified as having no needs, fully met needs, partially met needs, or wholly unmet needs. Logistic regression analysis was used to determine factors associated with perceived unmet need and refer participants to healthcare centers.
Results: 193 (55.5%) adolescents reported a perceived need for mental healthcare, of whom 21.6% reported fully unmet and 21.6% partially unmet needs. Notably, only 12.4% of participants with a perceived need reported that their needs were met. "Reluctance to seek mental healthcare" and "asked but not receiving help" were common barriers to using the services.
Conclusion: The present study reveals unmet mental healthcare needs as a significant public health concern among the adolescents. To address this significant concern, reorientation of primary care, removing economic barriers from mental healthcare services, and improving health literacy in the community are recommended.
abstract_id: PUBMED:36214725
Unmet needs for food, medicine, and mental health services among vulnerable older adults during the COVID-19 pandemic. Objective: To examine sociodemographic factors associated with having unmet needs in medications, mental health, and food security among older adults during the COVID-19 pandemic.
Data Sources And Study Setting: Primary data and secondary data from the electronic health records (EHR) in an age-friendly academic health system in 2020 were used.
Study Design: Observational study examining factors associated with having unmet needs in medications, food, and mental health.
Data Collecting/extraction Methods: Data from a computer-assisted telephone interview and EHR on community-dwelling older patients were analyzed.
Principal Findings: Among 3400 eligible patients, 1921 (53.3%) (average age 76, SD 11) responded, with 857 (45%) of respondents having at least one unmet need. Unmet needs for medications were present in 595 (31.0%), for food in 196 (10.2%), and for mental health services in 292 (15.2%). Racial minorities had significantly higher probabilities of having unmet needs for medicine and food, and of being referred for services related to medications, food, and mental health. Patients living in more resource-limited neighborhoods had a higher probability of being referred for mental health services.
Conclusions: Age-friendly health systems (AFHS) and their recognition should include assessing and addressing social risk factors among older adults. Proactive efforts to address unmet needs should be integral to AFHS.
abstract_id: PUBMED:34942217
Association of functional disability with mental health services use and perceived unmet needs for mental health care among adults with serious mental illness. Background: Approximately 13.1 million U.S. adults experienced serious mental illness (SMI) in 2019. Persons with disability (PWD) have higher risks of SMI. Ensuring adequate access to mental health (MH) services for PWD is imperative to ameliorate this burden.
Methods: Using the 2015-2019 National Survey on Drug Use and Health, we obtained study variables for U.S. adults with SMI in the past year and used multivariable logistic regression models to examine the association of disability with MH services and perceived unmet needs.
Results: The sample comprised 12,532 respondents, representing 11,143,650 U.S. adults with SMI. Overall, PWD had higher proportions of using prescription medications (64.7% vs. 46.2%), outpatient treatment (48.4% vs. 36.5%) and inpatient treatment (8.6% vs. 4.7%) compared to persons without disability; however, the prevalence of perceived unmet MH service needs was also higher (46.3% vs. 39.4%) among PWD. Multivariable logistic regression models showed presence of any disability, cognitive and ≥2 limitations were significantly associated with MH services use (all p<0.01). However, PWD were significantly more likely to report perceived unmet MH service needs (p<0.01 for any disability as well as cognitive, complex activity, and ≥2 limitations).
Limitations: Due to data limitations, disability status and SMI may be misclassified for some respondents, and the results may not be generalized to all individuals with SMI.
Conclusion: While PWD were more likely to use MH services, they also had higher odds of unmet MH needs. These results call for more effective and tailored mental health services for PWD.
abstract_id: PUBMED:30357724
Mental Health Service Use and Perceived Unmet Needs for Mental Health Care in Asian Americans. Using data from the Asian American Quality of Life (AAQoL, n = 2609) survey, logistic regression models of mental health service use and perceived unmet needs were estimated with background variables, ethnicity, and mental health status. More than 44% of the participants were categorized as having mental distress (Kessler 6 [K6] ≥ 6) and 6.1% as having serious mental illness (SMI, K6 ≥ 13). About 23% had used services (mental health specialist, general doctor, and/or religious leader) for their emotional concerns during the past year, and about 7% reported that there was a time that they needed mental health care but could not get it. In the multivariate analyses, the presence of mental distress and SMI increased the odds of using any service and having perceived unmet needs. Those who had used services exhibited higher odds of reporting unmet needs, raising concerns about the quality of services and user satisfaction.
abstract_id: PUBMED:16740858
Mental health care services for children with special health care needs and their family members: prevalence and correlates of unmet needs. Objectives: To estimate the prevalence and correlates of unmet needs for mental health care services for children with special health care needs and their families.
Methods: We use the National Survey of Children With Special Health Care Needs to estimate the prevalence of unmet mental health care needs among children with special health care needs (1-17 years old) and their families. Using logistic-regression models, we also assess the independent impact of child and family factors on unmet needs.
Results: Substantial numbers of children with special health care needs and members of their families have unmet needs for mental health care services. Children with special health care needs who were poor, uninsured, and were without a usual source of care were statistically significantly more likely to report that their mental health care needs were unmet. More severely affected children and those with emotional, developmental, or behavioral conditions were also statistically significantly more likely to report that their mental health care needs went unmet. Families of severely affected children or of children with emotional, developmental, or behavioral conditions were also statistically significantly more likely to report that their mental health care needs went unmet.
Conclusions: Our results indicate that children with special health care needs and their families are at risk for not receiving needed mental health care services. Furthermore, we find that children in families of lower socioeconomic status are disproportionately reporting higher rates of unmet needs. These data suggest that broader policies to identify and connect families with needed services are warranted but that child- and family-centered approaches alone will not meet the needs of these children and their families. Other interventions such as anti-poverty and insurance expansion efforts may be needed as well.
abstract_id: PUBMED:32993667
Unmet mental health needs in the general population: perspectives of Belgian health and social care professionals. Background: An unmet mental health need exists when someone has a mental health problem but doesn't receive formal care, or when the care received is insufficient or inadequate. Epidemiological research has identified both structural and attitudinal barriers to care which lead to unmet mental health needs, but reviewed literature has shown gaps in qualitative research on unmet mental health needs. This study aimed to explore unmet mental health needs in the general population from the perspective of professionals working with vulnerable groups.
Methods: Four focus group discussions and two interviews with 34 participants were conducted from October 2019 to January 2020. Participants' professional backgrounds encompassed social work, mental health care and primary care in one rural and one urban primary care zone in Antwerp, Belgium. A topic guide was used to prompt discussions about which groups have high unmet mental health needs and why. Transcripts were coded using thematic analysis.
Results: Five themes emerged, which are subdivided in several subthemes: (1) socio-demographic determinants and disorder characteristics associated with unmet mental health needs; (2) demand-side barriers; (3) supply-side barriers; (4) consequences of unmet mental health needs; and (5) suggested improvements for meeting unmet mental health needs.
Conclusions: Findings of epidemiological research were largely corroborated. Some additional groups with high unmet needs were identified. Professionals argued that they are often confronted with cases which are too complex for regular psychiatric care and highlighted the problem of care avoidance. Important system-level factors include waiting times of subsidized services and cost of non-subsidized services. Feelings of burden and powerlessness are common among professionals who are often confronted with unmet needs. Professionals discussed future directions for an equitable mental health care provision, which should be accessible and targeted at those in the greatest need. Further research is needed to include the patients' perspective of unmet mental health needs.
abstract_id: PUBMED:35787360
Children's unmet need for mental health care within and outside metropolitan areas. Background: Rural communities experience a lack of pediatric mental health providers. It is unclear if this leads to greater unmet needs for specialty mental health services among rural children.
Methods: Data from the 2016-2019 National Survey of Children's Health were used to identify children aged 6-17 years with a mental health condition. Caregiver-reported need and receipt of specialty mental health care for their child (met need, unmet need, or no need) was compared according to residence in a Metropolitan Statistical Area (MSA).
Results: The analysis included 13,021 children (14% living outside MSAs). Unmet need for mental health services was reported for 9% of children, with no difference by rural-urban residence (p = 0.940). Multivariable analysis confirmed this finding and identified urban children as less likely to have no need for mental health services, compared to rural children (relative risk ratio of no need vs. met need: 0.79; 95% confidence interval: 0.65, 0.95; p = 0.015).
Conclusion: Children with mental health conditions living in rural areas (outside MSAs) did not have higher rates of unmet needs for specialty mental health services, but they had lower rates of any caregiver-reported needs for such services. Further work is needed to examine caregivers' demand for pediatric specialty mental health services.
abstract_id: PUBMED:36419047
Middle-aged and older people with urgent, unaware, and unmet mental health care needs: Practitioners' viewpoints from outside the formal mental health care system. Background: Mental health challenges are highly significant among older individuals. However, the non-utilization of mental health services increases with age. Although universal health coverage (UHC) was reported to reduce unmet health care needs, it might not be sufficient to reduce unmet mental health care needs from a clinical perspective. Despite the existence of UHC in Japan, this study aimed to explore the factors related to the non-utilization of formal mental health care systems among middle-aged and older people with urgent, unaware, and unmet mental health care needs.
Methods: Purposeful sampling was used as the sampling method in this study by combining snowball sampling and a specific criterion. The interviewees were nine practitioners from four sectors outside the mental health care system, including long-term care, the public and private sector, as well as general hospitals in one area of Tokyo, where we had conducted community-based participatory research for five years. The interviews were conducted by an interdisciplinary team, which comprised a psychiatrist, a public health nurse from a non-profit organization, and a Buddhist priest as well as a social researcher, to cover the broader unmet health care needs, such as physical, psychosocial, and spiritual needs. The interviewees were first asked about their basic characteristics, and then whether they had encountered cases of middle-aged or older individuals with urgent, unaware, and unmet mental health care needs. If the answer was yes, we asked the interviewees to describe the details. The interviews pertinent to this study were conducted between October 2021 and November 2021. In this study, we adopted a qualitative descriptive approach. First, we created a summary of each case. Next, we explored the factors related to the non-utilization of formal mental health care systems by conducting a thematic analysis to identify the themes in the data collected.
Results: The over-arching category involving "the factors related to an individual person" included two categories, as follows: 1) "Individual intrinsic factors," which comprised two sub-categories, including "difficulty in seeking help" and "delusional disorders," and 2) "family factors," which comprised "discord between family members," "denial of service engagement," "multiple cases in one family," and "families' difficulty in seeking help." The over-arching category "the factors related to the systems" included four categories, as follows: 1) "Physical health system-related factors," which comprised "the indifference of physical healthcare providers regarding mental health" and "the discontinuation of physical health conditions," 2) "mental health system-related factors," which comprised "irresponsive mental health care systems" and "uncomfortable experiences in previous visits to clinics," and 3) "social service system-related factors," which comprised "the lack of time to provide care," "social service not allowed without diagnosis," and "no appropriate service in the community," as well as 4) " the lack of integration between the systems." Apart from the aforementioned factors, "the community people-related factor" and "factors related to inter-regional movements" also emerged in this study.
Conclusions: The results of this study suggest a specific intervention target, and they provide further directions for research and policy implementation. The suggested solutions to the issues pertinent to this study are as follows: for "the individual intrinsic factors," the recognition of the ways in which older people may inadequately understand their health or be unaware of available services, and the building of a therapeutic alliance. Regarding the "family factors," the solutions include the provision of particularly intensive care for families with family discord, families with multiple cases, and families who find it difficult to seek help, as well as making intensive efforts to ensure early involvement after contact with health care services. Regarding "the factors related to the systems," the solutions include the implementation of mental health education campaigns aimed at enhancing mental health knowledge among non-mental health professionals, as well as formulating and implementing reforms ensuring that such professionals are increasingly responsible, especially with regard to emergency inpatient care. The solutions also include listening without ageism in clinical practice, the expansion of social services in terms of human resources and flexibility of use, which would increase the breadth of the types of care available, as well as facilitating integration between the associated health care systems. Further suggestions include encouraging community residents to join social security systems, as well as the provision of particularly intensive care for people who have just moved in.
abstract_id: PUBMED:32239364
A Profile of Care Coordination, Missed School Days, and Unmet Needs Among Oregon Children with Special Health Care Needs with Behavioral and Mental Health Conditions. To inform Oregon's Title V needs assessment activities, we aimed to (1) characterize the state's subpopulation of children with special health care needs (CSHCN) with behavioral and mental health conditions (B/MHC) and (2) determine associations of care coordination with missed school days and unmet needs for this subpopulation. We analyzed 2009-2010 National Survey of Children with Special Health Care Needs data on 736 Oregon CSHCN < 18 years, including 418 CSHCN with B/MHC. Among Oregon CSHCN with B/MHC: 48.9% missed ≥ 4 school days, 25% had ≥ 1 unmet health services need, and 14.8% had ≥ 1 unmet family support services need. Care coordination was associated with lower adjusted odds of ≥ 1 unmet health services need but was not significantly associated with missed school days or unmet family support services need. The approach to identify Oregon CSHCN with B/MHC may be adopted by other states endeavoring to improve health for this vulnerable subpopulation.
Answer: Individual unmet needs for care are indeed sensitive to change over time and can reflect the effectiveness of mental health services interventions to some extent. However, they may not be entirely suitable as the sole outcome criterion for treatment or specific interventions. According to a study that used data from community mental health services in four European countries, the total number of met needs remained quite stable, but unmet needs decreased significantly over time, according to both patients and clinicians. This suggests that unmet needs are sensitive to change. However, the agreement between patient and clinician ratings on unmet needs was rather low, and there was no indication of a specific treatment effect, indicating that while individual unmet needs are sensitive to change, they may not be the most reliable outcome criterion for assessing the effectiveness of mental health services interventions (PUBMED:18777143).
Moreover, other studies have highlighted the complexity of unmet needs in various populations, suggesting that multiple factors contribute to whether needs are met or unmet. For instance, among adolescents in Shiraz, Iran, perceived unmet mental health needs were a significant public health concern, with barriers such as reluctance to seek mental healthcare and not receiving help when asked (PUBMED:38379675). Similarly, among vulnerable older adults during the COVID-19 pandemic, unmet needs for mental health services were associated with sociodemographic factors, indicating that broader social determinants play a role in the effectiveness of interventions (PUBMED:36214725).
Additionally, functional disability has been associated with both higher use of mental health services and higher prevalence of perceived unmet mental health service needs among adults with serious mental illness (PUBMED:34942217). This suggests that while services may be used more frequently by certain populations, this does not necessarily translate into met needs, again highlighting the complexity of using unmet needs as an outcome measure.
In conclusion, while individual unmet needs for care are sensitive to change and can provide valuable information about the impact of mental health services interventions, they should not be used in isolation as an outcome criterion due to their variability and the influence of multiple factors beyond the interventions themselves. |
Instruction: Are we degenerate tetraploids?
Abstracts:
abstract_id: PUBMED:18996928
Degenerate tetraploidy was established before bdelloid rotifer families diverged. Rotifers of Class Bdelloidea are abundant freshwater invertebrates known for their remarkable ability to survive desiccation and their lack of males and meiosis. Sequencing and annotation of approximately 50-kb regions containing the four hsp82 heat shock genes of the bdelloid Philodina roseola, each located on a separate chromosome, have suggested that its genome is that of a degenerate tetraploid. In order to determine whether a similar structure exists in a bdelloid distantly related to P. roseola and if degenerate tetraploidy was established before the two species separated, we sequenced regions containing the hsp82 genes of a bdelloid belonging to a different family, Adineta vaga, and the histone gene clusters of P. roseola and A. vaga. Our findings are entirely consistent with degenerate tetraploidy and show that it was established before the two bdelloid families diverged and therefore probably before the bdelloid radiation.
abstract_id: PUBMED:21506483
Use of degenerate primers in rapid generation of microsatellite markers in Panicum maximum. Guineagrass (Panicum maximum Jacq.) is an important forage grass of tropical and semi-tropical regions; it is largely apomictic and predominantly exists in tetraploid form. For molecular breeding work, it is a prerequisite to develop and design molecular markers for characterization of genotypes, development of linkage maps and marker-assisted selection. Hence, developing molecular markers is an important research issue in crops where such information is scanty. Among the many molecular markers, microsatellites or simple sequence repeat (SSR) markers are the preferred markers in plant breeding. Degenerate primers bearing simple sequence repeats as anchor motifs can be utilized for rapid development of SSR markers; however, selection of suitable degenerate primers is a prerequisite for such a procedure so that an SSR-enriched genomic library can be made rapidly. In the present study, seven degenerate primers, namely KKVRVRV(AG)10, KKVRVRV(GGT)5, KKVRVRV(CT)10, KKVRVRV(AAT)6, KKVRVRV(GTG)6, KKVRVRV(GACA)5, and KKVRVRV(CAA)6, were used in amplification of Panicum maximum genomic DNA. Primers with the repeat motifs (GGT)5 and (AAT)6 did not amplify, whereas (AG)10, (GACA)5 and (CAA)6 were highly informative, generating many DNA fragments ranging from 250 to 1600 bp, as revealed by restriction digestion of recombinant plasmids. The primer with the (CT)10 anchor repeat amplified fragments of high molecular weight, whereas the (GTG)6 primer generated only six bands at low concentration, indicating the lesser suitability of these primers for SSR marker development in P. maximum.
abstract_id: PUBMED:18362354
Evidence for degenerate tetraploidy in bdelloid rotifers. Rotifers of class Bdelloidea have evolved for millions of years apparently without sexual reproduction. We have sequenced 45- to 70-kb regions surrounding the four copies of the hsp82 gene of the bdelloid rotifer Philodina roseola, each of which is on a separate chromosome. The four regions comprise two colinear gene-rich pairs with gene content, order, and orientation conserved within each pair. Only a minority of genes are common to both pairs, also in the same orientation and order, but separated by gene-rich segments present in only one or the other pair. The pattern is consistent with degenerate tetraploidy with numerous segmental deletions, some in one pair of colinear chromosomes and some in the other. Divergence in 1,000-bp windows varies along an alignment of a colinear pair, from zero to as much as 20% in a pattern consistent with gene conversion associated with recombinational repair of DNA double-strand breaks. Although pairs of colinear chromosomes are a characteristic of sexually reproducing diploids and polyploids, a quite different explanation for their presence in bdelloids is suggested by the recent finding that bdelloid rotifers can recover and resume reproduction after suffering hundreds of radiation-induced DNA double-strand breaks per oocyte nucleus. Because bdelloid primary oocytes are in G(1) and therefore lack sister chromatids, we propose that bdelloid colinear chromosome pairs are maintained as templates for the repair of DNA double-strand breaks caused by the frequent desiccation and rehydration characteristic of bdelloid habitats.
abstract_id: PUBMED:15796963
Effects of degenerate oligonucleotide-primed polymerase chain reaction amplification and labeling methods on the sensitivity and specificity of metaphase- and array-based comparative genomic hybridization. Degenerate oligonucleotide-primed polymerase chain reaction (DOP-PCR) is often applied to small amounts of DNA from microdissected tissues in the analyses of chromosomal copy number with comparative genomic hybridization (CGH). The sensitivity and specificity in CGH analyses largely depend on the unbiased amplification and labeling of probe DNA, and the sensitivity and specificity should be high enough to detect one-copy changes in aneuploid cancer cells when accurate assessment of chromosomal instability is needed. The present study was designed to assess the effects of DOP-PCR and labeling method on the sensitivity of metaphase- and array-based CGHs in the detection of one-copy changes in near-tetraploid Kato-III cells. By focusing on several chromosomes whose absolute copy numbers were determined by FISH, we first compared the green-to-red ratio profiles of metaphase- and array-based CGH to the absolute copy numbers using the DNA diluted with varying proportions of lymphocyte DNA, with and without prior DOP-PCR amplification, and found that the amplification process scarcely affected the sensitivity but gave slightly lower specificity. Second, we compared random priming (RP) labeling with nick translation (NT) labeling and found that the RP labeling gave fewer false-positive gains and fewer false-negative losses in the detection of one-copy changes. In array CGH, locus-by-locus concordance between the DNAs with and without DOP-PCR amplification was high (nearly 100%) in the gain of three copies or more and the loss of two copies or more. This suggests that we could pinpoint the candidate genes within large-shift losses-gains that are detected with array CGH in microdissected tissues.
abstract_id: PUBMED:19077184
Are we degenerate tetraploids? More genomes, new facts. Background: Within the bilaterians, the appearance and evolution of vertebrates is accompanied by enormous changes in anatomical, morphological and developmental features. This evolution of increased complexity has been associated with two genome duplications (the 2R hypothesis) at the origin of vertebrates. However, in spite of extensive debate, the validity of the 2R hypothesis remains controversial. The paucity of sequence data in the early years of the genomic era was an intrinsic obstacle to tracking the genome evolutionary history of chordates.
Hypothesis: In this article I review the 2R hypothesis by taking into account the recent availability of genomic sequence data for an expanding range of animals. I argue here that genetic architecture of lower metazoans and representatives of major vertebrate and invertebrate lineages provides no support for the hypothesis relating the origin of vertebrates with widespread gene or genome duplications.
Conclusion: It appears that much of the genomic complexity of modern vertebrates is very ancient, likely predating the origin of chordates or even the Bilaterian-Nonbilaterian divergence. The origin and evolution of vertebrates is partly accompanied by an increase in gene number. However, we can neither take this subtle increase in gene number as the only causative factor for the evolution of phenotypic complexity in modern vertebrates nor take it as a reflection of polyploidization events early in their history.
abstract_id: PUBMED:32151294
Hyper-polyploid embryos survive after implantation in mice. Polyploids generated by natural whole genome duplication have served as a dynamic force in vertebrate evolution. As evidence of this, polyploid organisms are widespread in nature; however, there have been no reports of polyploid organisms in mammals. In mice, polyploid embryos under normal culture conditions typically develop to the blastocyst stage. Nevertheless, most tetraploid embryos degenerate after implantation, indicating that whole genome duplication produces harmful effects on normal development in mice. Most previous research on polyploidy has mainly focused on tetraploid embryos. Analysis of various ploidy outcomes is important to comprehend the effects of polyploidization on embryo development. The purpose of the present study was to determine the extent of the polyploidization effect on implantation and development in post-implantation embryos. This paper describes for the first time an octaploid embryo implanted in mice despite hyper-polyploidization, and indicates that these mammalian embryos have the ability to implant, and even develop, despite the harmfulness of extreme whole genome duplication.
abstract_id: PUBMED:9192896
Molecular evidence for an ancient duplication of the entire yeast genome. Gene duplication is an important source of evolutionary novelty. Most duplications are of just a single gene, but Ohno proposed that whole-genome duplication (polyploidy) is an important evolutionary mechanism. Many duplicate genes have been found in Saccharomyces cerevisiae, and these often seem to be phenotypically redundant. Here we show that the arrangement of duplicated genes in the S. cerevisiae genome is consistent with Ohno's hypothesis. We propose a model in which this species is a degenerate tetraploid resulting from a whole-genome duplication that occurred after the divergence of Saccharomyces from Kluyveromyces. Only a small fraction of the genes were subsequently retained in duplicate (most were deleted), and gene order was rearranged by many reciprocal translocations between chromosomes. Protein pairs derived from this duplication event make up 13% of all yeast proteins, and include pairs of transcription factors, protein kinases, myosins, cyclins and pheromones. Tetraploidy may have facilitated the evolution of anaerobic fermentation in Saccharomyces.
abstract_id: PUBMED:19621130
Ultrastructural changes of the egg apparatus associated with fertilisation of natural tetraploid Trifolium pratense L. (Fabaceae). The aim of this study is to describe the ultrastructural changes of the egg apparatus associated with fertilisation of the natural tetraploid Trifolium pratense. The pollen tube enters one of the synergids through the filiform apparatus from the micropyle. Before the entry of the pollen tube into the embryo sac, one of the synergids begins to degenerate, as indicated by increased electron density and a loss of volume. This cell serves as the site of entry for the pollen tube. Following fertilisation, the vacuolar organisation in the zygote changes; in addition to the large micropylar vacuole, there are several small vacuoles of varying size. Ribosomal concentration increases significantly after fertilisation. In T. pratense, ultrastructural changes between the egg cell and zygote stages are noticeable. Several marked changes occur in the egg cell because of fertilisation. The zygote cell contains ribosomes and has many mitochondria, plastids, lipids and vacuoles. After fertilisation, most of the food reserves are located in the integument in the form of starch. The zygote shows ultrastructural changes when compared to the egg cell and appears to be metabolically active.
abstract_id: PUBMED:21629737
Self-Renewal Signalling in Presenescent Tetraploid IMR90 Cells. Endopolyploidy and genomic instability are shared features of both stress-induced cellular senescence and malignant growth. Here, we examined these facets in the widely used normal human fibroblast model of senescence, IMR90. At the presenescence stage, a small (2-7%) proportion of cells overcome the 4n-G1 checkpoint, simultaneously inducing self-renewal (NANOG-positivity), the DNA damage response (DDR; γ-H2AX-positive foci), and senescence (p16INK4a- and p21CIP1-positivity) signalling; some cells reach octoploid DNA content and divide. All of these markers initially appear and partially colocalise in the perinucleolar compartment. Further, with development of senescence and accumulation of p16INK4a and p21CIP1, NANOG is downregulated in most cells. The cells increasingly arrest in the 4n-G1 fraction, completely halt divisions and ultimately degenerate. A positive link between DDR, self-renewal, and senescence signalling is initiated in the cells overcoming the tetraploidy barrier, indicating that the cellular and molecular context of induced tetraploidy during this period of presenescence is favourable for carcinogenesis.
abstract_id: PUBMED:27812096
Experimental Evolution Reveals Interplay between Sch9 and Polyploid Stability in Yeast. Polyploidization has crucial impacts on the evolution of different eukaryotic lineages including fungi, plants and animals. Recent genome data suggest that, for many polyploidization events, all duplicated chromosomes are maintained and genome reorganizations occur much later during evolution. However, newly-formed polyploid genomes are intrinsically unstable and often quickly degenerate into aneuploidy or diploidy. The transition between these two states remains enigmatic. In this study, laboratory evolution experiments were conducted to investigate this phenomenon. We show that robust tetraploidy is achieved in evolved yeast cells by increasing the abundance of Sch9, a protein kinase activated by TORC1 (Target of Rapamycin Complex 1) and other signaling pathways. Overexpressing SCH9, but not TOR1, allows newly-formed tetraploids to exhibit evolved phenotypes, and knocking out SCH9 diminishes the evolved phenotypes. Furthermore, when cells were challenged with conditions causing ancestral cells to evolve aneuploidy, tetraploidy was maintained in the evolved lines. Our results reveal a determinant role for Sch9 during the early stage of polyploid evolution.
Answer: Based on the provided abstracts, there is no direct evidence to suggest that humans are degenerate tetraploids. The abstract from PUBMED:19077184 discusses the 2R hypothesis, which posits that two rounds of genome duplication occurred at the origin of vertebrates, but the review concludes that the genetic architecture of lower metazoans and representatives of major vertebrate and invertebrate lineages provides no support for the hypothesis relating the origin of vertebrates with widespread gene or genome duplications. It suggests that the genomic complexity of modern vertebrates is very ancient and likely predates the origin of chordates or even the Bilaterian-Nonbilaterian divergence. Therefore, the review argues against the idea that the evolution of vertebrates, including humans, is the result of early polyploidization events.
In contrast, other abstracts discuss instances of degenerate tetraploidy in different organisms. For example, PUBMED:18996928 and PUBMED:18362354 describe degenerate tetraploidy in bdelloid rotifers, suggesting that their genomes are the result of ancient tetraploid events. Similarly, PUBMED:9192896 provides molecular evidence for an ancient duplication of the entire yeast genome, proposing that Saccharomyces cerevisiae is a degenerate tetraploid resulting from a whole-genome duplication.
However, none of these abstracts provide evidence that humans are degenerate tetraploids. Instead, they discuss polyploidy in other species and the evolutionary implications of such events. The abstract from PUBMED:32151294 does mention polyploid embryos in mice, indicating that mammalian embryos can implant and even develop despite whole genome duplication, but this does not directly relate to the genomic history of humans.
In conclusion, based on the information from the abstracts provided, there is no support for the claim that humans are degenerate tetraploids. |
Instruction: Abnormal blood pressure circadian rhythm in acute ischaemic stroke: are lacunar strokes really different?
Abstracts:
abstract_id: PUBMED:19689751
Abnormal blood pressure circadian rhythm in acute ischaemic stroke: are lacunar strokes really different? Background: A pathologically reduced or abolished circadian blood pressure variation has been described in acute stroke. However, studies on alterations of circadian blood pressure patterns after stroke and stroke subtypes are scarce. The objective of this study was to evaluate the changes in circadian blood pressure patterns in patients with acute ischaemic stroke and their relation to the stroke subtype.
Aims: We studied 98 consecutive patients who were admitted within 24 h after ischaemic stroke onset. All patients had a detailed clinical examination, laboratory studies and a CT scan study of the brain on admission. To study the circadian rhythm of blood pressure, a continuous blood pressure monitor (Spacelab 90217) was used. Patients were classified according to the percentage fall in the mean systolic blood pressure or diastolic blood pressure at night compared with during the day as: dippers (fall ≥10% but <20%); extreme dippers (≥20%); nondippers (<10%); and reverse dippers (<0%, that is, an increase in the mean nocturnal blood pressure compared with the mean daytime blood pressure). Data were separated and analysed in two groups: lacunar and nonlacunar infarctions. Statistical testing was conducted using SPSS 12.0. Methods: We studied 60 males and 38 females, mean age: 70.5+/-11 years. The patient population consisted of 62 (63.2%) lacunar strokes and 36 (36.8%) nonlacunar strokes. Hypertension was the most common risk factor (67 patients, 68.3%). Other risk factors included hypercholesterolaemia (44 patients, 44.8%), diabetes mellitus (38 patients, 38.7%), smoking (24 patients, 24.8%) and atrial fibrillation (19 patients, 19.3%). The patients with lacunar strokes were predominantly men (P=0.037) and had a lower frequency of atrial fibrillation (P=0.016) as compared with nonlacunar stroke patients. In the acute phase, the mean systolic blood pressure was 136+/-20 mmHg and the mean diastolic blood pressure was 78.7+/-11.8 mmHg. Comparing stroke subtypes, there were no differences in 24-h systolic blood pressure and 24-h diastolic blood pressure between patients with lacunar and nonlacunar infarction. However, patients with lacunar infarction showed a mean decline in day-night systolic blood pressure and diastolic blood pressure of approximately 4 mmHg [systolic blood pressure: 3.9 (SD 10) mmHg, P=0.003; diastolic blood pressure 3.7 (SD 7) mmHg, P=0.0001] compared with nonlacunar strokes. Nonlacunar strokes showed a lack of 24-h nocturnal systolic blood pressure and diastolic blood pressure fall. The normal diurnal variation in systolic blood pressure was abolished in 87 (88.9%) patients, and the variation in diastolic blood pressure was abolished in 76 (77.5%) patients. On comparing lacunar and nonlacunar strokes, we found that the normal diurnal variation in systolic blood pressure was abolished in 53 (85.4%) lacunar strokes and in 34 (94.4%) nonlacunar strokes (P=nonsignificant). In terms of diurnal variation in diastolic blood pressure, it was abolished in 43 (69.3%) lacunar strokes and in 33 (91.6%) nonlacunar strokes (P=0.026).
Conclusions: Our results show clear differences in the circadian blood pressure rhythm of acute ischaemic stroke between lacunar and nonlacunar infarctions, as assessed by 24-h blood pressure monitoring. The magnitude of the nocturnal systolic and diastolic blood pressure dip was significantly higher in lacunar strokes. In addition, patients with lacunar strokes presented a higher percentage of dipping patterns in the diastolic blood pressure circadian rhythm. Therefore, one should consider the ischaemic stroke subtype when deciding on the management of blood pressure in acute stroke.
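The dipping categories used in this abstract (and in PUBMED:11641298 below) follow directly from the percentage nocturnal fall in mean blood pressure. A minimal sketch, assuming mean daytime and night-time systolic values are already available and using the thresholds quoted above; the function name and example values are illustrative and not taken from the study.

```python
# Classify the circadian blood pressure pattern from mean daytime and night-time values.
# Thresholds follow the abstract: extreme dipper >=20% fall, dipper 10-20%,
# nondipper 0-10%, reverse dipper <0% (i.e. a nocturnal rise).

def dipping_status(mean_day_bp: float, mean_night_bp: float) -> str:
    nocturnal_fall = (mean_day_bp - mean_night_bp) / mean_day_bp * 100.0
    if nocturnal_fall >= 20:
        return "extreme dipper"
    if nocturnal_fall >= 10:
        return "dipper"
    if nocturnal_fall >= 0:
        return "nondipper"
    return "reverse dipper"

if __name__ == "__main__":
    # Hypothetical systolic values (mmHg): 140 by day, 128 at night -> ~8.6% fall.
    print(dipping_status(140, 128))  # prints "nondipper"
```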
abstract_id: PUBMED:38192040
Hemodynamic changes in progressive cerebral infarction: An observational study based on blood pressure monitoring. Progressive cerebral infarction (PCI) is a common complication in patients with ischemic stroke that leads to poor prognosis. Blood pressure (BP) can indicate post-stroke hemodynamic changes which play a key role in the development of PCI. The authors aim to investigate the association between BP-derived hemodynamic parameters and PCI. Clinical data and BP recordings were collected from 80 patients with cerebral infarction, including 40 patients with PCI and 40 patients with non-progressive cerebral infarction (NPCI). Hemodynamic parameters were calculated from the BP recordings of the first 7 days after admission, including systolic and diastolic BP, mean arterial pressure, and pulse pressure (PP), with the mean values of each group calculated and compared between daytime and nighttime, and between different days. Hemodynamic parameters and circadian BP rhythm patterns were compared between PCI and NPCI groups using t-test or non-parametric equivalent for continuous variables, Chi-squared test or Fisher's exact test for categorical variables, Cox proportional hazards regression analysis and binary logistic regression analysis for potential risk factors. In PCI and NPCI groups, significant decrease of daytime systolic BP appeared on the second and sixth days, respectively. Systolic BP and fibrinogen at admission, daytime systolic BP of the first day, nighttime systolic BP of the third day, PP, and the ratio of abnormal BP circadian rhythms were all higher in the PCI group. PCI and NPCI groups were significantly different in BP circadian rhythm pattern. PCI is associated with higher systolic BP, PP and more abnormal circadian rhythms of BP.
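The derived parameters named in this abstract are simple functions of the systolic and diastolic readings. As a brief sketch using the standard definitions of pulse pressure and the common one-third estimate of mean arterial pressure (the sample readings below are invented, not from the study):

```python
# Derive pulse pressure (PP) and mean arterial pressure (MAP) from SBP/DBP readings.
# Standard definitions: PP = SBP - DBP, MAP ~= DBP + PP/3.

def pulse_pressure(sbp: float, dbp: float) -> float:
    return sbp - dbp

def mean_arterial_pressure(sbp: float, dbp: float) -> float:
    return dbp + (sbp - dbp) / 3.0

if __name__ == "__main__":
    readings = [(152, 88), (138, 76), (161, 94)]   # hypothetical (SBP, DBP) in mmHg
    for sbp, dbp in readings:
        print(f"SBP {sbp}/DBP {dbp}: PP = {pulse_pressure(sbp, dbp):.0f} mmHg, "
              f"MAP = {mean_arterial_pressure(sbp, dbp):.1f} mmHg")
```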
abstract_id: PUBMED:19571828
Circadian blood pressure and heart rate characteristics in haemorrhagic vs ischaemic stroke in Chinese people. To compare the circadian variation of blood pressure (BP) between patients with intra-cerebral haemorrhage (ICH) and those with cerebral infarction (CI), around-the-clock BP measurements were obtained from 89 hypertensive patients with ICH, from 63 patients with CI and from 16 normotensive volunteers. The single and population-mean cosinor yielded individual and group estimates of the MESOR (Midline Estimating Statistic Of Rhythm, a rhythm-adjusted mean value), circadian double amplitude and acrophase (measures of extent and timing of predictable daily change). Comparison shows that without any difference in BP MESOR, the circadian amplitude of systolic (S) BP was larger in ICH than in CI patients (P<0.001), and both groups differed from the healthy volunteers in BP MESOR and pulse pressure (P<0.001) and in the circadian amplitude of SBP (P<0.005). The smaller population circadian amplitude of diastolic (D) BP of the ICH group (P=0.042) is likely related to a larger scatter of individual circadian acrophases in this group as compared with that in the other two groups, an inference supported by a smaller day-night ratio of DBP for ICH vs CI patients (P=0.007). Heart rate (HR) variability, gauged by the standard deviation (SD), was decreased in both patient groups as compared with that in healthy controls, more so among ICH than CI patients (P=0.025). Thus, patients with ICH had a higher incidence of abnormal circadian characteristics of BP than patients with CI, the major differences relating to a larger circadian amplitude of SBP, a smaller HR-SD, and a larger incidence of odd circadian acrophases of DBP.
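The single cosinor method mentioned here fits a 24-h cosine to the blood pressure series and reports the MESOR, (double) amplitude and acrophase. A minimal sketch of such a fit by ordinary least squares, run on simulated hourly readings rather than the study's data; the parameter values and variable names are illustrative only.

```python
import numpy as np

# Minimal single-cosinor fit: BP(t) = MESOR + A*cos(2*pi*t/24 + phi) + noise.
# Simulated hourly systolic BP readings; not data from the study.
rng = np.random.default_rng(0)
t = np.arange(0, 48, 1.0)                      # 48 hourly time points (h)
true_mesor, true_amp, true_phi = 130.0, 12.0, -2.0
bp = true_mesor + true_amp * np.cos(2 * np.pi * t / 24 + true_phi) + rng.normal(0, 3, t.size)

# Linearize as mesor + beta*cos(wt) + gamma*sin(wt) and solve by least squares.
X = np.column_stack([np.ones_like(t),
                     np.cos(2 * np.pi * t / 24),
                     np.sin(2 * np.pi * t / 24)])
mesor, beta, gamma = np.linalg.lstsq(X, bp, rcond=None)[0]
amplitude = np.hypot(beta, gamma)              # half the predictable peak-to-trough swing
acrophase = np.arctan2(-gamma, beta)           # radians; timing of the fitted peak

print(f"MESOR ~ {mesor:.1f} mmHg, double amplitude ~ {2 * amplitude:.1f} mmHg, "
      f"acrophase ~ {acrophase:.2f} rad")
```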
abstract_id: PUBMED:25492947
Dynamic analysis of blood pressure changes in progressive cerebral infarction. Background: Progressive cerebral infarction is one of the leading causes of high disability and lethality for stroke patients. However, the association between progression of BP changes and cerebral infarction is not currently well understood.
Methods: We analyzed the dynamic changes in the BP of patients with acute ischemic stroke and explored the correlation between BP change and cerebral infarction progression.
Results: 30.9% (30/97) of the patients investigated developed progressive cerebral infarction 17-141 h after admission. The percentage of patients with a long history of hypertension was significantly higher in the progressive group than in the non-progressive group. The mean systolic BP of the patients 16 h to 5 d after admission was also much higher in the progressive group. A greater abnormality of circadian blood pressure was also observed among patients in the progressive group.
Conclusions: Hypertension history of more than 5 years is an important risk factor for progressive cerebral infarction. Both the elevation of systolic blood pressure 16 h to 5 d after admission and abnormal circadian blood pressure are associated with the disease progression.
abstract_id: PUBMED:11641298
Stroke prognosis and abnormal nocturnal blood pressure falls in older hypertensives. It remains uncertain whether abnormal dipping patterns of nocturnal blood pressure influence the prognosis for stroke. We studied stroke events in 575 older Japanese patients with sustained hypertension determined by ambulatory blood pressure monitoring (without medication). They were subclassified by their nocturnal systolic blood pressure fall (97 extreme-dippers, with ≥20% nocturnal systolic blood pressure fall; 230 dippers, with ≥10% but <20% fall; 185 nondippers, with ≥0% but <10% fall; and 63 reverse-dippers, with <0% fall) and were followed prospectively for an average duration of 41 months. Baseline brain magnetic resonance imaging (MRI) disclosed that the percentages with multiple silent cerebral infarct were 53% in extreme-dippers, 29% in dippers, 41% in nondippers, and 49% in reverse-dippers. There was a J-shaped relationship between dipping status and stroke incidence (extreme-dippers, 12%; dippers, 6.1%; nondippers, 7.6%; and reverse-dippers, 22%), and this remained significant in a Cox regression analysis after controlling for age, gender, body mass index, 24-hour systolic blood pressure, and antihypertensive medication. Intracranial hemorrhage was more common in reverse-dippers (29% of strokes) than in other subgroups (7.7% of strokes, P=0.04). In the extreme-dipper group, 27% of strokes were ischemic strokes that occurred during sleep (versus 8.6% of strokes in the other 3 subgroups, P=0.11). In conclusion, in older Japanese hypertensive patients, extreme dipping of nocturnal blood pressure may be related to silent and clinical cerebral ischemia through hypoperfusion during sleep or an exaggerated morning rise of blood pressure, whereas reverse dipping may pose a risk for intracranial hemorrhage.
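The dipping categories above are defined by fixed thresholds on the percentage nocturnal systolic fall; a small sketch of that classification rule (the day/night values in the example calls are invented):

```python
# Classify nocturnal dipping status from mean day and night systolic BP (mm Hg),
# using the thresholds quoted in the abstract above.
def classify_dipping(day_sbp: float, night_sbp: float) -> str:
    fall_pct = (day_sbp - night_sbp) / day_sbp * 100.0
    if fall_pct >= 20:
        return "extreme-dipper"
    if fall_pct >= 10:
        return "dipper"
    if fall_pct >= 0:
        return "nondipper"
    return "reverse-dipper"

print(classify_dipping(150.0, 115.0))  # ~23% fall -> extreme-dipper
print(classify_dipping(150.0, 141.0))  # 6% fall   -> nondipper
print(classify_dipping(150.0, 155.0))  # rise      -> reverse-dipper
```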
abstract_id: PUBMED:19892304
Ambulatory blood pressure monitoring in stroke survivors: do we really control our patients? Background: We aimed to evaluate prospectively the long-term changes in blood pressure (BP) in stroke survivors using ambulatory BP monitoring (ABPM) and to compare them with conventional clinic measurements.
Methods: We studied 101 patients who were admitted within 24 h after stroke onset. A continuous BP monitor (Spacelab 90207) was used to study the circadian rhythm of BP. After six and twelve months of follow-up a new ABPM was undertaken. Data were analyzed using SPSS 12.0.
Results: We studied 62 males and 39 females, mean age 70.9 ± 10.7 years. We included 88 ischemic strokes and 13 hemorrhagic strokes. In the acute phase, mean 24 h BPs were 136 ± 19/78.6 ± 11.4 mm Hg. The normal diurnal variation in BP was abolished in 88 (87.1%) patients. After six months, 74 patients were assessed. Mean office readings were 137.5 ± 23.8/76.4 ± 11.4 mm Hg, and high systolic BPs and diastolic BPs were found in 37% and 11% of the subjects, respectively. ABPM revealed a mean BP of 118.5 ± 20.1/70.3 ± 8.6 mm Hg (p<0.0001). In 57 (76.9%), the normal BP pattern remained abolished (p<0.001). After one year, 63 patients were assessed. Mean office readings were 130.8 ± 26.3/77.6 ± 9.3 mm Hg, and high systolic BPs and diastolic BPs were found in 23.8% and 10% of the subjects, respectively. Mean 24 h BPs were 117 ± 12.5/69.7 ± 7.2 mm Hg (p<0.001). The normal diurnal variation in BP was now abolished in 47 (74.6%) patients (p<0.001).
Conclusion: Survivors of stroke, both hypertensive and non-hypertensive patients, present chronic disruption of the circadian rhythm of BP. Conventional clinical recordings are an unreliable method of assessing BP control in these patients, and ABPM should be performed routinely in this population.
abstract_id: PUBMED:26350775
Tight control of blood pressure after ischemic stroke is associated with nocturnal hypotension episodes Aim: To evaluate whether tighter blood pressure (BP) control in patients with recent ischemic stroke is associated with the presence of nocturnal hypotension (NHP) episodes.
Patients And Methods: We included one hundred consecutive patients who had been discharged for ischemic stroke in the previous six months. To evaluate the adequacy of BP control in these patients, office BP and 24-h ambulatory BP monitoring values were used.
Results: We studied 63 males and 37 females; mean age was 69 ± 11 years. Sixty-eight lacunar and 32 non-lacunar strokes were included. Episodes of NHP were observed in 59 patients. Clinical hypertension was present in 34 patients. An abnormal pattern of the circadian rhythm of BP was present in 72 subjects. Only 18 patients had BP within normal limits. Episodes of NHP were more frequent in subjects with good BP control than in patients with poor BP control: 88.8% and 52.4%, respectively (p = 0.007). The presence of NHP episodes was also inversely related to the number of altered BP parameters (p = 0.001).
Conclusions: Tight control of BP after ischemic stroke is associated with a high frequency of NHP episodes. It is likely that aggressively lowering BP levels to within the normal range after an ischemic stroke may not be beneficial, particularly in elderly patients.
abstract_id: PUBMED:16946144
Blood pressure changes during the initial week after different subtypes of ischemic stroke. Background And Purpose: The purpose of this study was to clarify the differences in the acute blood pressure course among different ischemic stroke subtypes.
Methods: We divided 588 consecutive patients with acute brain infarction into four clinical subgroups to study the blood pressure levels during the initial 6 hospital days.
Results: During the 6 days, the systolic blood pressure of lacunar and atherothrombotic patients was higher (P=0.0001), and the diastolic blood pressure of lacunar patients was higher (P=0.0371), than that of patients with the other subtypes. Preexisting hypertension was associated with elevated acute systolic blood pressure in all patients and in each subtype, and with elevated acute diastolic blood pressure in all patients, cardioembolic patients, and patients with stroke of other etiology. After adjustment for preexisting hypertension, diabetes mellitus with a hemoglobin A1c >7.0% was associated with elevated systolic blood pressure in all patients and in lacunar and cardioembolic patients, and with elevated diastolic blood pressure in all patients.
Conclusions: The blood pressure course of patients sustaining acute stroke varied widely according to stroke subtype. Poorly controlled diabetes mellitus, as well as preexisting hypertension, appeared to influence blood pressure during the initial week after stroke.
abstract_id: PUBMED:15037874
Factors influencing acute blood pressure values in stroke subtypes. The aim of this prospective observational study was to determine the association of acute blood pressure values with independent factors (demographic and clinical characteristics, early complications) in stroke subgroups of different aetiology. We evaluated data from 346 first-ever acute (<24 h) stroke patients treated in our stroke unit. Casual and 24-h blood pressure (BP) values were measured. Stroke risk factors and stroke severity on admission were documented. Strokes were divided into subgroups of different aetiopathogenic mechanism. Patients were imaged with a CT scan on admission and 5 days later to determine the presence of brain oedema and haemorrhagic transformation. The relationship of different factors to 24-h BP values (24-h BP) was evaluated separately in each stroke subgroup. In large artery atherosclerotic stroke (n=59), history of hypertension and stroke severity each correlated with higher 24-h BP. In cardioembolic stroke (n=87), history of hypertension, stroke severity, haemorrhagic transformation and brain oedema were associated with higher 24-h BP, while heart failure was associated with lower 24-h BP. History of hypertension and coronary artery disease were related to higher and lower 24-h BP, respectively, in lacunar stroke (n=75). In patients with infarct of undetermined cause (n=57), 24-h BP was mainly influenced by stroke severity and history of hypertension. An independent association between higher 24-h BP and history of hypertension and cerebral oedema was documented in intracerebral haemorrhage (n=68). In conclusion, different factors influence acute BP values in stroke subtypes of different aetiology. If the clinical significance of these observations is verified, a differentiated approach to acute BP management based on stroke aetiology may be considered.
abstract_id: PUBMED:25657183
Effects of blood pressure-lowering treatment in different subtypes of acute ischemic stroke. Background And Purpose: The Scandinavian Candesartan Acute Stroke Trial (SCAST) found no benefits of blood pressure-lowering treatment with candesartan in acute stroke. We have investigated whether the effect of treatment is different in different subtypes of ischemic stroke.
Methods: SCAST was a randomized- and placebo-controlled trial of candesartan in 2029 patients presenting within 30 hours of ischemic or hemorrhagic stroke and systolic blood pressure ≥140 mm Hg. Ischemic stroke subtype was categorized by the Oxfordshire Community Stroke Project classification. There were 2 primary effect variables: the composite vascular end point of vascular death, myocardial infarction, or stroke during the first 6 months and functional outcome at 6 months.
Results: A total of 1733 patients with ischemic stroke were included: total anterior circulation infarcts in 129, partial anterior in 850, posterior in 236, and lacunar in 510 patients. For functional outcome there was a significant trend toward a better effect of candesartan in patients with larger infarcts (total anterior circulation or partial anterior circulation) than in patients with smaller infarcts (lacunar infarction; P=0.02). For the composite vascular end point, there were no differences in treatment effect.
Conclusions: The results suggest that the effect of blood pressure-lowering treatment with candesartan may differ according to different types of acute ischemic stroke, but this needs to be confirmed in future trials.
Clinical Trial Registration Url: http://www.clinicaltrials.gov. Unique identifier: NCT00120003.
Answer: The study of the blood pressure circadian rhythm in patients with acute ischemic stroke has revealed differences between lacunar and nonlacunar infarctions. In a study involving 98 consecutive patients admitted within 24 hours after ischemic stroke onset, patients with lacunar infarction showed a mean day-night decline in systolic and diastolic blood pressure of approximately 4 mmHg, which was significantly larger than in nonlacunar strokes. Furthermore, the normal diurnal variation in systolic blood pressure was abolished in 85.4% of lacunar strokes and 94.4% of nonlacunar strokes, a difference that was not statistically significant. However, the diurnal variation in diastolic blood pressure was abolished in a significantly higher percentage of nonlacunar strokes (91.6%) than of lacunar strokes (69.3%) (PUBMED:19689751).
These findings suggest that there are clear differences in the blood pressure circadian rhythm between lacunar and nonlacunar infarctions, with lacunar strokes presenting a higher percentage of dipping patterns in the diastolic blood pressure circadian rhythm. This indicates that the subtype of ischemic stroke should be considered when managing blood pressure in acute stroke patients. The study concludes that the magnitude of nocturnal systolic and diastolic blood pressure dip was significantly higher in lacunar strokes, which could have implications for treatment and management strategies (PUBMED:19689751). |
Instruction: Is the current treatment of primary and secondary amyloidosis effective?
Abstracts:
abstract_id: PUBMED:12924037
Is the current treatment of primary and secondary amyloidosis effective? Background: A retrospective study of the results of treatment of patients (pts) with primary (AL) and secondary (AA) amyloidosis is presented. 31 pts with systemic forms of amyloidosis have been treated and followed up in our department since 1993.
Methods And Results: 6 men and 11 women were in the AL group, with a mean age of 59 years. Multiple myeloma was diagnosed in 9 pts, and monoclonal gammopathy of undetermined significance (MGUS) was found in 8 pts. The kidneys were affected in all pts, the heart in 59% of pts, the liver, joints and skin in 26% of pts, and polyneuropathy was detected in only 1 pt. Progression of renal insufficiency with a decrease of the glomerular filtration rate (GFR) was detected in the AL group at the end of the follow-up period compared with the initial level (p < 0.05) despite intensive treatment. The difference did not reach statistical significance in the other investigated parameters. Median survival was 13 months from diagnosis. Partial remission of amyloidosis was achieved in 9 pts, stable disease in 5 pts, and in 3 pts the disease progressed. 4 men and 10 women were in the AA group, with a mean age of 58 years. The underlying disease was rheumatoid arthritis in 7 pts, ankylosing spondylitis in 2 pts, juvenile chronic arthritis in 1 pt, Crohn's disease in 2 pts, eosinophilic fasciitis in 1 pt and chronic abscesses in NK cell deficiency in 1 pt. The kidneys were affected in all pts, the bowels and heart in 36% of pts. GFR (p < 0.05) and plasma creatinine (p < 0.01) decreased significantly at the end of the follow-up period compared with the initial levels. Median survival was 30 months. Partial remission was achieved in 2 pts, stable disease in 3 pts, and progression was detected in 9 pts despite the use of various treatment regimens.
Conclusions: Both forms of systemic amyloidosis represent severe disease with limited response to treatment. The use of new drugs is promising and could lead to better response to treatment.
abstract_id: PUBMED:19827724
Autologous stem cell transplantation in the treatment of primary systemic amyloidosis Primary systemic amyloidosis is a plasma cell dyscrasia resulting in multiorgan failure and death. The optimal treatment method is unknown. The current status of therapy for primary amyloidosis is reviewed on the basis of an analysis of the published literature. Treatment options for primary amyloidosis are similar to those for multiple myeloma. Treatment of primary amyloidosis should be based on an analysis of risk factors. In selected patients, high-dose therapy and autologous stem cell transplantation have been associated with a higher response rate than standard chemotherapy. However, autologous transplantation for primary amyloidosis remains controversial because of high treatment-related mortality. Novel non-transplant methods for primary amyloidosis therapy are highly effective. Early and detailed diagnosis and treatment based on risk factors can influence survival in patients with systemic primary amyloidosis.
abstract_id: PUBMED:20009442
Effective and well tolerated treatment with melphalan and dexamethasone for primary systemic AL amyloidosis with cardiac involvement. A 60-year-old woman was admitted with acute heart failure and was diagnosed as having primary systemic AL amyloidosis with cardiac involvement by endomyocardial biopsy. Electrophoresis revealed an IgG-lambda monoclonal component and amyloidosis was evident in the gastric and rectal mucosa. Her cardiac function at diagnosis was poor, including an ejection fraction of 59% and IVS of 19 mm, and serum cardiac troponin T (cTnT) was elevated (0.12 ng/ml). She was treated with melphalan-dexamethasone (Mel-Dex) therapy once a month. After more than a year, cardiac function and performance status were maintained, with decreasing levels of cTnT, indicating that Mel-Dex represents a feasible and effective therapeutic option for patients with AL amyloidosis with cardiac dysfunction.
abstract_id: PUBMED:29359203
Rare Undiagnosed Primary Amyloidosis Unmasked During Surgical Treatment of Primary Hyperparathyroidism: A Case Report. Primary amyloidosis (PA) is a protein deposition disorder that presents with localized or multisystemic disease. The incidence is low in the general population, ranging from three to eight cases per million, with nonspecific presenting symptoms typically occurring later in life. Due to late presentation, substantial and irreversible damage has usually already occurred by the time of diagnosis. However, if an inadvertent diagnosis occurs before irreversible damage has taken place, as it did in the following case, some patients may benefit from disease-arresting treatment. A 70-year-old female with a history of obstructive sleep apnea, hypertension, and arthritis presented with worsening dysphagia and biochemically confirmed primary hyperparathyroidism (PHPT). Further workup demonstrated multinodular goiter with compressive symptoms and substernal extension, osteopenia, and discrepant parathyroid localization on imaging. Intraoperatively, markedly difficult dissection and obliteration of tissue planes were encountered. Extensive, diffuse amyloid deposition in both the normal and pathologic parathyroid glands and in thyroid tissue on surgical pathology led to subsequent fibril typing by mass spectrometry and to the diagnosis of primary amyloid light-chain (AL) amyloidosis (PA; λ light chains). Subsequent workup for the underlying cause of the amyloid deposition revealed an immunoglobulin A monoclonal gammopathy of unknown significance. The surgical treatment of PHPT and the compressive thyroid nodule unmasked an undiagnosed PA, allowing for early workup and monitoring of the progression of amyloidosis. The temporal comorbidity of PHPT and PA raises an interesting and, as yet, unanswered question regarding the pathophysiologic association between the two conditions.
abstract_id: PUBMED:25946210
Efficacy of different modes of fractional CO2 laser in the treatment of primary cutaneous amyloidosis: A randomized clinical trial. Background: Primary cutaneous amyloidosis (PCA) comprises three main forms: macular, lichen, and nodular amyloidosis. The currently available treatments are quite disappointing.
Objectives: To assess and compare the clinical and histological changes induced by different modes of fractional CO2 laser in the treatment of PCA.
Patients And Methods: Twenty-five patients with PCA (16 macular and 9 lichen amyloidosis) were treated with fractional CO2 laser using superficial ablation (area A) and deep rejuvenation (area B). Each patient received 4 sessions at 4-week intervals. Skin biopsies were obtained from all patients at baseline and one month after the last session. Patients were assessed clinically and histologically (Congo red staining, polarized light). Patients were followed up for 3 months after treatment.
Results: Both modes yielded significant reduction of pigmentation, thickness, itching, and amyloid deposits (P-value < 0.001). However, the percentage of reduction of pigmentation was significantly higher in area A (P-value = 0.003). Pain was significantly higher in area B. The significant reduction in dermal amyloid deposits denotes their trans-epidermal elimination induced by fractional photothermolysis.
Conclusion: Both superficial and deep modes of fractional CO2 laser showed comparable efficacy in the treatment of PCA. The superficial mode, being better tolerated by patients, is recommended as a valid therapeutic option.
abstract_id: PUBMED:22405606
External beam radiation therapy is safe and effective in treating primary pulmonary amyloidosis. The aim of this prospective study was to explore the safety and effectiveness of external beam radiation therapy (EBRT) in three patients with biopsy-proven primary pulmonary amyloidosis, including two tracheobronchial amyloidosis patients and one primary parenchymal amyloidosis patient. All three patients were treated to 24 Gy in 12 fractions utilizing CT simulation and 3-D planning. All three patients had significant improvement in clinical symptoms, radiological imaging and pulmonary function tests. The improvement in clinical symptoms was evident within 2 days. Toxicities related to EBRT were not observed during follow-up ranging from 42 to 54 months. EBRT to 24 Gy was safe and effective in the three patients with primary pulmonary amyloidosis, and resulted in rapid relief of pulmonary symptoms.
abstract_id: PUBMED:34493149
Efficacy of 1064-nm Nd:YAG picosecond laser in lichen amyloidosis treatment: clinical and dermoscopic evaluation. Lichen amyloidosis (LA) is a type of primary localized cutaneous amyloidosis characterized by multiple localized, hyperpigmented, grouped papules, in which the deposition of amyloid material from altered keratinocytes is usually resistant to current treatments. We present two LA patients with unsatisfactory results from topical treatments. After the first treatment with the 1064-nm Nd:YAG picosecond (ps-Nd:YAG) laser there was an improvement, which persisted up to the 3-month follow-up after five sessions at 4-week intervals, with a decrease in the number, thickness, and darkness of lesions on clinical and dermoscopic evaluation. Thus, the ps-Nd:YAG laser could be efficacious for LA treatment.
abstract_id: PUBMED:15572585
The combination of thalidomide and intermediate-dose dexamethasone is an effective but toxic treatment for patients with primary amyloidosis (AL). Based on the efficacy of thalidomide in multiple myeloma and on its synergy with dexamethasone on myeloma plasma cells, we evaluated the combination of thalidomide (100 mg/d, with 100-mg increments every 2 weeks, up to 400 mg) and dexamethasone (20 mg on days 1-4) every 21 days in 31 patients with primary amyloidosis (AL) whose disease was refractory to or had relapsed after first-line therapy. Eleven (35%) patients tolerated the 400 mg/d thalidomide dose. Overall, 15 (48%) patients achieved hematologic response, with 6 (19%) complete remissions and 8 (26%) organ responses. Median time to response was 3.6 months (range, 2.5-8.0 months). Treatment-related toxicity was frequent (65%), and symptomatic bradycardia was a common (26%) adverse reaction. The combination of thalidomide and dexamethasone is rapidly effective and may represent a valuable second-line treatment for AL.
abstract_id: PUBMED:27873215
Current and Future Treatment Approaches in Transthyretin Familial Amyloid Polyneuropathy. Opinion Statement: Treatment of transthyretin familial amyloid polyneuropathy (TTR FAP) must be tailored to disease stage. Patients with early stage disease (i.e., without major impairment in walking ability), especially younger patients, should be referred as soon as possible for liver transplantation (LT) in the absence of major comorbid conditions. LT remains the most effective treatment option to date and should be offered to these patients as early as possible. Bridging therapy with an oral TTR stabilizer (tafamidis or diflunisal, according to local access to these treatments) should be started as soon as the diagnosis of TTR FAP is established. Early stage patients who do not wish to or have contraindications to LT should be treated with an oral TTR stabilizer or get access to the newly developed therapeutic options (IONIS TTR-Rx, patisiran, doxycycline/TUDCA). Late stage patients (presenting with significant walking impairment) are usually older and notoriously difficult to treat. They should be offered an oral TTR stabilizer but are not candidates for LT due to a significant rate of perioperative complications and increased risk of progressive neurological and especially cardiac disease despite LT. Access to the different therapies in development should also be considered depending on respective inclusion and exclusion criteria. The abovementioned treatment options were mostly validated in Val30Met mutation patients, but should also be offered to non-Val30Met patients, although mortality rates after LT are higher in these patients. Treatment decisions should be made on an individual basis. Screening for heart, eye, and renal involvement is mandatory for every patient at disease diagnosis and regularly thereafter, even in transplanted patients. Symptomatic treatment should be offered as needed, as well as genetic counseling to at-risk family members. Asymptomatic mutation carriers should benefit from regular screening for early symptoms of disease. Current therapeutic management of TTR FAP will hopefully be changed in the near future with data from the ongoing phase 2/3 studies testing the TTR gene silencing agents. In the longer term, it is likely that combined therapeutic approaches will be necessary to reverse the disease process.
abstract_id: PUBMED:20079210
The effects and safety of bortezomib combined with dexamethasone in the treatment of primary systemic amyloidosis Objective: To evaluate the effects and safety of the regimen of bortezomib combined with dexamethasone (VD) in the treatment of primary systemic (AL) amyloidosis.
Methods: Five newly diagnosed AL amyloidosis patients confirmed by renal biopsy, with a median of 3 organs involved (3 to 5 organs), were treated with the VD regimen for 3 (1-4) cycles.
Results: Among the 3 evaluable patients, 1 was in stable condition and 2 had a hematologic response (partial remission and complete remission) and organ function improvement. Hematologic responses were rapid (median 1.5 cycles) and the median time to organ response was 2 cycles. Three patients survived, with follow-up periods of 5, 4 and 4 months, respectively. The other 2 died 2 and 14 months after diagnosis. The side effects were asthenia, diarrhea, constipation, edema aggravation and fever, all of which were grade I. No treatment-associated death was found.
Conclusion: The VD regimen might be an efficient, rapidly effective and safe regimen for the treatment of AL amyloidosis.
Answer: The current treatment of primary and secondary amyloidosis shows limited effectiveness, and responses vary among patients. For primary amyloidosis (AL), treatments similar to those for multiple myeloma, such as high-dose therapy and autologous stem cell transplantation, have been associated with higher response rates but also come with high treatment-related mortality, making them controversial (PUBMED:19827724). Melphalan and dexamethasone (Mel-Dex) therapy has been reported as an effective and well-tolerated option for AL amyloidosis with cardiac involvement (PUBMED:20009442). Thalidomide combined with dexamethasone has been shown to be rapidly effective as a second-line treatment for AL, despite frequent treatment-related toxicity (PUBMED:15572585). Bortezomib combined with dexamethasone (VD regimen) has also been suggested as an efficient, rapid, and safe regimen for AL amyloidosis (PUBMED:20079210).
For secondary amyloidosis (AA), the treatment response is also limited, with a median survival of 30 months reported in one study. Despite various treatment regimens, progression was detected in a significant number of patients (PUBMED:12924037).
In terms of localized forms of amyloidosis, such as primary cutaneous amyloidosis (PCA) and primary pulmonary amyloidosis, other treatment modalities have been explored. Fractional CO2 laser treatment has shown significant clinical and histological improvement in PCA (PUBMED:25946210), and external beam radiation therapy (EBRT) has been effective and safe in treating primary pulmonary amyloidosis (PUBMED:22405606). Additionally, the 1064-nm Nd:YAG picosecond laser has been efficacious in the treatment of lichen amyloidosis (PUBMED:34493149).
Overall, while there are treatments available that can provide some benefit, both forms of systemic amyloidosis are severe diseases with a limited response to treatment, and the use of new drugs is considered promising for better outcomes (PUBMED:12924037). Early diagnosis and treatment based on risk factors can influence survival in patients with systemic primary amyloidosis (PUBMED:19827724). |
Instruction: Tarsal Bone Dysplasia in Clubfoot as Measured by Ultrasonography: Can It be Used as a Prognostic Indicator in Congenital Idiopathic Clubfoot?
Abstracts:
abstract_id: PUBMED:26090970
Tarsal Bone Dysplasia in Clubfoot as Measured by Ultrasonography: Can It be Used as a Prognostic Indicator in Congenital Idiopathic Clubfoot? A Prospective Observational Study. Background: Congenital talipes equinovarus (CTEV)/clubfoot is the most common congenital orthopedic condition. The success rate of Ponseti casting in the hands of the legend himself is not 100%. The prediction of difficult-to-correct feet and of recurrences still remains a mystery to be solved. We all know that tarsal bones are dysplastic in clubfoot and, considering this, we hypothesize that the amount of tarsal dysplasia can predict management duration and outcome. In the literature we were not able to find studies that satisfactorily quantify the amount of tarsal dysplasia. Hence, it was considered worthwhile to quantify the amount of dysplasia in the tarsal bones and to correlate these parameters with the duration and outcome of treatment by the conventional method.
Methods: A total of 25 infants with unilateral idiopathic clubfoot that had not received any previous treatment were included in the study. An initial ultrasonography was done before the start of treatment in 3 standard planes to measure the maximum length of 3 tarsal bones (talus, calcaneus, and navicular). The Ponseti method of treatment was used; Pirani scoring was done at each OPD (outpatient department) visit. The number of casts required for complete correction and the need for any surgical intervention were taken as the outcome parameters.
Results: We found a significant correlation between the number of casts required and the dysplasia of the talus (α error = 0.05). We also found a significant negative correlation between the relative dysplasia of the talus and the number of casts required (r = -0.629, sig. = 0.001; r = -0.552, sig. = 0.004).
Conclusions: Tarsal bone dysplasia as quantified by ultrasonography can be used as a prognostic indicator in congenital idiopathic clubfoot. Although promising, the method needs further studies and may be more useful after long-term follow-up, where recurrences, if any, can be documented.
Level Of Evidence: Level II.
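As a generic illustration of the Pearson correlation behind r values such as those reported above; the relative talar length ratios and cast counts below are invented, not the study's data.

```python
# Pearson correlation between a (hypothetical) relative talar length ratio
# (affected side / normal side) and the number of casts required.
import numpy as np

relative_talar_length = np.array([0.62, 0.70, 0.75, 0.80, 0.86, 0.90])  # hypothetical
casts_required = np.array([9, 8, 7, 6, 5, 4])                           # hypothetical

r = np.corrcoef(relative_talar_length, casts_required)[0, 1]
print(f"Pearson r = {r:.3f}")  # negative for this made-up data set
```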
abstract_id: PUBMED:8006172
Clubfeet and tarsal coalition. Tarsal coalition was noted in 18 cases of rigid equinovarus deformity. Sixteen cases were encountered at surgery and two at morbid dissection. There were 14 patients in the series; six had associated pathologic conditions that might have caused their clubfeet to be deemed "teratologic," whereas eight did not and were considered to have congenital clubfeet. Four patients in the series had bilateral coalitions. Preoperative radiographs demonstrated the coalition in only one case. A presurgical magnetic resonance image (MRI) clearly showed the coalition in another case. Nonoperative treatment was unsuccessful. Two patients with tibial dysplasia had ankle disarticulations. The remaining 16 feet required extensive soft-tissue releases, internal fixation, and coalition excision. The vast majority of cases showed cartilaginous subtalar coalition at the medial facet. The patients were followed for an average of 6 years, and two recurrences were noted. The remaining feet were painless and plantigrade, but were rather stiff. This anomaly may be more common than previously described. It is usually not suspected preoperatively and may likewise be difficult to recognize at surgery. A preoperative MRI scan may also be helpful.
abstract_id: PUBMED:1427530
The morbid anatomy of congenital deficiency of the tibia and its relevance to treatment. Two specimens obtained from a 5-month-old boy with bilateral type 1a congenital tibial deficiency were dissected to describe their morbid anatomy. Examination revealed a rigid equinovarus deformity, absence of a tibial remnant, an abnormal saddle-shaped talus, and several tarsal coalitions. Observation of the arterial pattern during surgery supported the previously reported finding that persistence of an immature arterial structure is inherent in this condition. Knowledge of potential structural anomalies is essential during the planning of an amputation or of a knee or an ankle reconstruction. Anatomic abnormalities may affect the design of soft-tissue flaps in an amputation and plantigrade positioning and foot biomechanics in reconstructive procedures.
abstract_id: PUBMED:15211647
Kantaputra mesomelic dysplasia: a second reported family. We present the clinical and radiographic findings in a mother and son with a dominantly inherited mesomelic skeletal dysplasia almost identical to that described in a large Thai family by Kantaputra et al., in which ankle, carpal and tarsal synostoses were noted. The proband in the family is a 48-year-old woman with mesomelic limb shortening, most pronounced in the upper limbs. Her parents were of normal stature and build. Her 15-year-old son has similar mesomelic limb shortening, and in addition talipes equinovarus. Radiological examination showed severe shortening of the radius and ulna with bowing of the radius and dislocation of the radial head. Multiple carpal and tarsal synostoses were present and in addition, the talus and calcaneum were fused. In the original Thai family, linkage to chromosome 2q24-q32, which contains the HOXD cluster has been reported, and it is postulated that the phenotype may result from a disturbance of regulation of the HOXD cluster. Although linkage analysis was not possible in our family, molecular analysis was undertaken and HOXD11 was sequenced, however, no mutations were detected. This is only the second reported family affected with Kantaputra mesomelic dysplasia (MIM 156232), a distinct mesomelic skeletal dysplasia.
abstract_id: PUBMED:1764254
Congenital disorders of the extremities. N/A
abstract_id: PUBMED:244427
Congenital orthopedic diseases N/A
abstract_id: PUBMED:3757380
Research for genetic and environmental factors in orthopedic diseases. This is a review article of the past 40 years of research in Britain on the etiology of developmental disorders of the skeleton, covering both rare unifactorial diseases (chondroosteodystrophies) and common localized disorders (e.g., clubfoot and congenital dislocation of the hip) of multifactorial inheritance.
abstract_id: PUBMED:18938507
On the technique of reinsertion in congenital clubfoot. N/A
abstract_id: PUBMED:23326983
The incidence of common orthopaedic problems in newborn at Siriraj Hospital. Background: Data on congenital orthopaedic anomalies in the Thai population are limited, and previous studies were based only on hospital chart records.
Objective: To determine the incidence of common congenital orthopedic problems by physical examination in newborns at Siriraj Hospital.
Material And Method: A prospective study was conducted by physical examination of 3,396 newborns from June 2009 to September 2009. All orthopaedic abnormalities of the newborns were recorded along with maternal age, the obstetric history of the mother, complications during pregnancy, complications during labour, mode of delivery and presentation. The sex of the newborn, birth weight, body length and APGAR score were also recorded.
Results: The incidence of calcaneovalgus was 60:1,000 live births, followed by metatarsus adductus in 7.6:1,000, polydactyly or syndactyly in 2.6:1,000, talipes equinovarus in 2.4:1,000, brachial plexus injury in 1.5:1,000, developmental dysplasia of the hip in 0.6:1,000, osteogenesis imperfecta in 0.6:1,000, skeletal dysplasia in 0.6:1,000, congenital vertical talus in 0.3:1,000 and clavicle fracture at birth in 0.3:1,000.
Conclusion: In the present study, calcaneovalgus was the most common orthopaedic problem, followed by metatarsus adductus and polydactyly or syndactyly.
abstract_id: PUBMED:19407302
Neonatal foot deformities and their relationship to developmental dysplasia of the hip: an 11-year prospective, longitudinal observational study. In a prospective study over 11 years we assessed the relationship between neonatal deformities of the foot and the presence of ultrasonographic developmental dysplasia of the hip (DDH). Between 1 January 1996 and 31 December 2006, 614 infants with deformities of the foot were referred for clinical and ultrasonographic evaluation. There were 436 cases of postural talipes equinovarus deformity (TEV), 60 of fixed congenital talipes equinovarus (CTEV), 93 of congenital talipes calcaneovalgus (CTCV) and 25 of metatarsus adductus. The overall risk of ultrasonographic dysplasia or instability was 1:27 in postural TEV, 1:8.6 in CTEV, 1:5.2 in CTCV and 1:25 in metatarsus adductus. The risk of type-IV instability of the hip or irreducible dislocation was 1:436 (0.2%) in postural TEV, 1:15.4 (6.5%) in CTCV and 1:25 (4%) in metatarsus adductus. There were no cases of hip instability (type IV) or of irreducible dislocation in the CTEV group. Routine screening for DDH in cases of postural TEV and CTEV is no longer advocated. The former is poorly defined, leading to the over-diagnosis of a possibly spurious condition. Ultrasonographic imaging and surveillance of hips in infants with CTCV and possibly those with metatarsus adductus should continue.
Answer: Yes, tarsal bone dysplasia in clubfoot as measured by ultrasonography can be used as a prognostic indicator in congenital idiopathic clubfoot. A study included 25 infants with unilateral idiopathic clubfoot and used ultrasonography to measure the maximum length of three tarsal bones (talus, calcaneus, and navicular) before starting treatment with the Ponseti method. The number of casts required for complete correction and the need for any surgical intervention were taken as outcome parameters. The study found a significant correlation between the number of casts required and the dysplasia of the talus. There was also a significant negative correlation between relative dysplasia of the talus and the number of casts required, suggesting that greater tarsal dysplasia was associated with a higher number of casts needed for correction. These findings indicate that quantifying tarsal bone dysplasia using ultrasonography can serve as a prognostic tool for the management duration and outcome of congenital idiopathic clubfoot treatment (PUBMED:26090970). |
Instruction: Is a one-week course of triple anti-Helicobacter pylori therapy sufficient to control active duodenal ulcer?
Abstracts:
abstract_id: PUBMED:11421880
Is a one-week course of triple anti-Helicobacter pylori therapy sufficient to control active duodenal ulcer? Background: Triple therapy currently forms the cornerstone of the treatment of patients with Helicobacter pylori-positive duodenal ulcer.
Aim: To establish whether prolonged antisecretory therapy is necessary in patients with active duodenal ulcer.
Methods: A total of 77 patients with H. pylori-positive duodenal ulcer were included in a prospective, controlled, double-blind study. All patients received a 7-day treatment with omeprazole 20 mg b.d., clarithromycin 500 mg b.d. and amoxicillin 1000 mg b.d. Patients in the omeprazole group underwent an additional 14-day therapy with omeprazole 20 mg; patients in placebo group received placebo. Endoscopy was performed upon inclusion in the study and after 3 and 8 weeks.
Results: Seventy-four patients were eligible for a per protocol analysis after 3 weeks, and 65 after 8 weeks. After 3 weeks, the healing rate was 89% in the omeprazole group and 81% in the placebo group (P=0.51). After 8 weeks, the ulcer healed in 97% of the patients in the total group (95% CI: 92.7-100%). H. pylori was eradicated in 88% of patients in the omeprazole group and in 91% in the placebo group (P=1.0). No statistically significant differences between the groups were found in ulcer-related symptoms or in ulcer healing.
Conclusion: In patients with H. pylori-positive duodenal ulcer, a 7-day triple therapy alone is sufficient to control the disease.
abstract_id: PUBMED:10227015
Antral biopsy is sufficient to confirm Helicobacter pylori eradication with the "new" one-week long triple treatments Background: To evaluate whether antral biopsies are enough for confirming Helicobacter pylori eradication with the "new" one week triple therapies with omeprazole.
Patients And Methods: 229 duodenal ulcer patients were treated with omeprazole for 7 days plus two antibiotics. Eradication was confirmed with histology (two biopsies from both gastric antrum and body) and 13C-urea breath test one month after the end of therapy.
Results: H. pylori eradication was achieved in 76.9% of the patients (95% CI: 71-82%). Antral histology was highly reliable for detecting eradication failure: in all but one case in which H. pylori was observed in the gastric body, the microorganism was also observed in the antrum. Infection prevalences at both locations were not homogeneous (McNemar: 6.4; p < 0.05). Concordance between antral biopsies and the breath test for H. pylori diagnosis after therapy was excellent (kappa: 0.91; SE: 0.07), and both prevalences were homogeneous (McNemar: 1.3; p > 0.05).
Conclusions: Antral biopsies are sufficient to confirm H. pylori eradication with the "new" one-week triple therapies.
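A sketch of how a 95% confidence interval for an eradication proportion such as the 76.9% above can be computed; the Wilson score interval is shown as one common choice, and the success count of 176 is inferred from 76.9% of 229 rather than taken from the paper.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    # Wilson score interval for a binomial proportion
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

lo, hi = wilson_ci(176, 229)  # 176/229 is approximately 76.9%
print(f"{176/229:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```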
abstract_id: PUBMED:9670578
Helicobacter pylori: 6-day triple therapy in duodenal ulcer Objectives: To determine the effectiveness of short-term eradication treatment of Helicobacter pylori in duodenal ulcer.
Design: Open, controlled, randomised intervention study with parallel groups.
Setting: Three Health Centres in the city of Valencia.
Patients: Patients with a diagnosis of duodenal ulcer and Helicobacter pylori infection who attended their Primary Care physician.
Intervention: The study group (48 patients) was treated for six days with triple therapy: amoxycillin, clarithromycin and omeprazole. The control group (40 patients) was treated with omeprazole for six weeks.
Measurements And Main Results: The observation period lasted a year, after which an ELISA test was conducted. Eradication was successful in 65% of those treated with the triple therapy, but in only 30% of those treated with monotherapy. The consumption of ulcer medication during the year of observation was almost three times greater in the group treated with monotherapy than in the triple-therapy group.
Conclusions: Eradicative triple therapy was shown to be more effective and efficient than monotherapy, and it is feasible to use it in Primary Care. However, a six-day course of eradicative triple therapy is not advisable: a longer treatment period should be employed with this recommended therapy.
abstract_id: PUBMED:10962387
Acid suppression therapy is not required after one-week anti-Helicobacter pylori triple therapy for duodenal ulcer healing Aim: To compare healing of Helicobacter pylori-related uncomplicated duodenal ulcer after one-week eradication triple therapy alone and after triple therapy followed by a further 3 weeks of antisecretory treatment with ranitidine.
Methods: Three hundred and forty-three patients with symptomatic H. pylori-positive duodenal ulcer were included in this randomized double-blind placebo-controlled study. H. pylori infection was established by rapid urease test and histopathology of antral biopsies. All patients were treated for one week with ranitidine 300 mg b.i.d., amoxicillin 1 g b.i.d. and clarithromycin 500 mg b.i.d., and then randomly treated for the following 3 weeks with either ranitidine 300 mg once daily (triple therapy + ranitidine, n = 180) or placebo (triple therapy alone, n = 163). Ulcer healing was assessed by endoscopy 4 weeks after inclusion. H. pylori eradication was established by 13C-urea breath testing 5 weeks after the end of triple therapy.
Results: In the intention-to-treat analysis, the duodenal ulcer healed at 4 weeks in 86% of patients treated with triple therapy + ranitidine and in 83% of patients treated with triple therapy alone (equivalence: 90% CI [-3.8%; 9.2%]). The H. pylori eradication rates were 67% and 69%, respectively. The ulcer healed in 88% of patients in whom H. pylori eradication was achieved and in 77% of patients in whom eradication failed.
Conclusion: These results demonstrate that one-week triple therapy alone is highly effective in healing uncomplicated H. pylori-associated duodenal ulcer without additional antisecretory treatment.
abstract_id: PUBMED:8821611
One week treatment for Helicobacter pylori infection: a randomised study of quadruple therapy versus triple therapy. This study evaluated one week of quadruple therapy as treatment for Helicobacter pylori infection. Sixty duodenal ulcer patients were randomised to receive either standard triple therapy (tripotassium dicitrato bismuthate 120 mg qds+tetracycline 500 mg qds+metronidazole 400 mg qds), quadruple therapy A (triple therapy+omeprazole 20 mg od) or quadruple therapy B (triple therapy+omeprazole 40 mg od), for 7 days. H. pylori eradication rates were 65%, 60% and 60%, respectively, with no significant differences between the groups. These results suggest that quadruple therapy provides no benefits over one week of triple therapy for treatment of H. pylori infection.
abstract_id: PUBMED:10975783
Cure of Helicobacter pylori-positive active duodenal ulcer patients: a double-blind, multicentre, 12-month study comparing a two-week dual vs a one-week triple therapy. GISU (Interdisciplinary Group for Ulcer Study). Aims: To compare a two-week dual therapy to a one-week triple therapy for the healing of duodenal ulcer and the eradication of the Helicobacter pylori infection.
Patients And Methods: A total of 165 patients with active duodenal ulcer were enrolled in the study. At entry, endoscopy, clinical examination and laboratory tests were performed. Histology and the rapid urease test were used to diagnose Helicobacter pylori infection. Patients received either lansoprazole 30 mg plus amoxycillin 1 g bid for two weeks (two-week, dual therapy) or lansoprazole 30 mg plus amoxycillin 1 g plus tinidazole 500 mg bid for one week plus lansoprazole qd for an additional week (one-week, triple therapy). Two and twelve months after cessation of therapy, endoscopy and clinical assessments were repeated.
Results: Duodenal ulcer healing and Helicobacter pylori eradication were both significantly greater (p<0.0001) in the triple therapy group (healing: 98.6%; Helicobacter pylori cure rate: 72.6%) than in the dual therapy group (healing: 77.3%; Helicobacter pylori cure rate: 33.3%). Ulcers healed more frequently in Helicobacter pylori-cured than in Helicobacter pylori-not cured patients (94.9% vs. 77.2%; p<0.0022). After one year, Helicobacter pylori eradication was re-confirmed in 46/58 patients previously treated with the triple therapy and in 10/40 patients treated with the dual therapy [p<0.0001]. Only three duodenal ulcer relapses were observed throughout follow-up: all were in Helicobacter pylori-not cured patients.
Conclusions: Triple therapy was more effective than dual therapy both in curing Helicobacter pylori infection and in healing active duodenal ulcers. The speed of ulcer healing obtained after only 7 days of antibiotics and 14 days of proton pump inhibitors confirmed that longer periods of anti-ulcer therapy were not necessary. Helicobacter pylori-not cured patients had slower-healing ulcers, which were more apt to relapse when left untreated.
abstract_id: PUBMED:9745160
New one-week, low-dose triple therapy for the treatment of duodenal ulcer with Helicobacter pylori infection. Background: Antimicrobial therapy is the recommended treatment for duodenal ulcer associated with Helicobacter pylori infection. The eradication rate of bismuth-based triple therapy with bismuth subcitrate, metronidazole and amoxicillin is limited by low compliance, drug resistance and side-effects. Two-week proton pump inhibitor (PPI)-based triple therapy has a higher eradication rate but is costly. This study was designed to compare the efficacy, patient compliance and cost of short-term PPI-based triple therapy with those of bismuth-based triple therapy.
Methods: Ninety patients with active duodenal ulcer disease and H pylori infection, proven with the 13C-urea breath test and CLO test (Campylobacter-like organism test) were treated randomly in three therapeutic groups: Group A, DeNol 120 mg, amoxicillin 500 mg and metronidazole 250 mg four times a day orally for 14 days; Group B, omeprazole 20 mg plus clarithromycin 500 mg twice a day and amoxicillin 500 mg four times a day for 14 days; Group C, omeprazole 20 mg, clarithromycin 250 mg and metronidazole 500 mg twice a day for seven days. Nizatidine 150 mg twice a day was given continuously following the end of anti-H pylori therapy for each group. Two months later, endoscopy, the CLO test and 13C-urea breath test were repeated to assess the eradication rate of H pylori and the ulcer-healing rate. Drug tolerance was evaluated by patients themselves by daily recording of any side-effects.
Results: Eighty-four patients completed the entire course of therapy and evaluation for H pylori infection. The H pylori eradication rates in Groups A, B and C were 75% (21/28), 93% (26/28) and 89% (25/28), respectively (p = 0.466). The ulcer healing rate was 86% (24/28) in Group A and 89% (25/28) in Groups B and C (p = 0.764). A total of 74 patients (88%) were free from symptoms at the end of the triple therapy. Symptom relief was faster in patients with PPI-based triple therapy (Groups B and C) (days 3 and 4) than for patients with bismuth-based triple therapy (day 5). The cost of Group C therapy was lower than that for Groups A and B. There were no major side-effects in any of the patients.
Conclusions: One-week triple therapy with omeprazole, clarithromycin and metronidazole is highly effective for the eradication of H pylori. A therapeutic regimen of one week's duration with lower cost, good compliance and mild side-effects may offer a good choice for the treatment of duodenal ulcer associated with H pylori infection in clinical practice.
abstract_id: PUBMED:14534667
The impact of Helicobacter pylori resistance on the efficacy of a short course pantoprazole based triple therapy. Background: Many of the currently used Helicobacter pylori eradication regimens fail to cure the infection due to either antimicrobial resistance or poor patient compliance. Those patients will remain at risk of developing potentially severe complications of peptic ulcer disease.
Aim: We studied the impact of antimicrobial resistance on the efficacy of a short-course pantoprazole-based triple therapy in a single-center pilot study.
Methods: Forty previously untreated adult patients (age range 20 to 75 years, 14 males) infected with Helicobacter pylori and with inactive or healing duodenal ulcer disease were assigned in this open cohort study to 1 week of twice-daily treatment with pantoprazole 40 mg plus clarithromycin 250 mg and metronidazole 400 mg. Helicobacter pylori status was assessed at entry and 50 ± 3 days after the end of treatment by rapid urease test, culture and histology of gastric biopsies. The criterion for eradication was a negative result in the tests. Susceptibility of Helicobacter pylori to clarithromycin and metronidazole was determined before treatment with the disk diffusion test.
Results: One-week treatment and follow-up were complete in all patients. Eradication of Helicobacter pylori was achieved in 35/40 patients (87.5%) and was higher in patients with nitroimidazole-susceptible strains [susceptible: 20/20 (100%), resistant: 10/15 (67%)]. There were six (15%) reports of mild adverse events.
Conclusions: A short course of pantoprazole-based triple therapy is well tolerated and effective in eradicating Helicobacter pylori. Baseline metronidazole resistance may be a significant limiting factor in treatment success.
abstract_id: PUBMED:9042978
One-week low-dose triple therapy for Helicobacter pylori is sufficient for relief from symptoms and healing of duodenal ulcers. Aim: To test the hypothesis that 1-week low-dose triple therapy for H. pylori is sufficient for relief from dyspeptic symptoms and healing of duodenal ulcers.
Methods: Fifty-nine out-patients with duodenal ulcers and positive rapid urease test participated in this randomized, double-blind, two-centre study. All patients were treated for 1 week with omeprazole 20 mg b.d., clarithromycin 250 mg b.d. and metronidazole 400 mg b.d. In a double-blind fashion, patients were then randomly treated for another 3 weeks with either omeprazole 20 mg once daily or an identical-looking placebo. Patients were investigated endoscopically before treatment for H. pylori, after 2 weeks and after 4 weeks. H. pylori infection was assessed by a 13C-urea breath test at the time of enrollment and 4 weeks after cessation of any study medication.
Results: Fifty-two patients were included in the 'all patients treated' analysis of efficacy. The overall H. pylori cure rate was 96% (95% CI = 87-100%), with no difference between the treatment groups. After 2 weeks duodenal ulcer healing was confirmed in 91% (95% CI = 80-100%) of patients treated with omeprazole and in 76% (95% CI = 60-91%) in the placebo group (P = 0.14). After 4 weeks all ulcers had healed. Relief from dyspeptic symptoms and adverse events (13.8 and 16.7%) did not differ between the treatment groups.
Conclusions: One-week low-dose triple therapy consisting of omeprazole, clarithromycin and metronidazole is a highly effective and well-tolerated approach to the cure of H. pylori infection in patients with a duodenal ulcer. Our data suggest that continuation of antisecretory drug therapy beyond anti-H. pylori therapy is actually excessive regarding relief from dyspeptic symptoms and healing of duodenal ulcers.
abstract_id: PUBMED:9527978
new one-week triple therapies with metronidazole for the eradication of Helicobacter pylori: clarithromycin or amoxycillin as the second antibiotic Background: To compare the efficacy of two "new" one-week triple therapies (with omeprazole, metronidazole and clarithromycin or amoxycillin) for the eradication of Helicobacter pylori and healing duodenal ulcer.
Methods: Randomised therapeutic trial. Eighty-eight consecutive duodenal ulcer patients with H. pylori infection were studied. At endoscopy, biopsies from both gastric antrum and body were obtained for histologic study (H&E). Two different therapies were administered for one week: omeprazole (O) (20 mg b.i.d.) and metronidazole (M) (500 mg b.i.d.) associated with clarithromycin (C) (500 mg b.i.d.) (group OMC, n = 44) or amoxycillin (A) (1 g b.i.d.) (group OMA, n = 44). Endoscopy with biopsies was repeated one month after completing therapy, and a 13C-urea breath test was also performed. Compliance was evaluated by tablet count. Analysis of data: multiple logistic regression, intention-to-treat. Eradication was defined as the absence of H. pylori by all diagnostic methods.
Results: Mean age (standard deviation) was 45 (14) years, and 75% were males. The distribution of variables was similar in both therapeutic groups. Forty-two patients in each group completed the protocol. Eradication was achieved in 90.5% (95% CI = 78-96%) in group OMC and in 57% (42-71%) in group OMA (p < 0.001). In the multivariate analysis, the type of therapy was the only variable which influenced H. pylori eradication (OR = 7.1; CI = 2.2-24; p = 0.001). Ulcer healing was demonstrated in 88% (75-95%) of patients in group OMC and in 71% (56-83%) in group OMA (p = 0.1). Ulcer healing was higher when eradication was achieved (90%; 80-95%) than in H. pylori-positive patients (50%; 31-69%) (p < 0.001). Eradication of H. pylori was the only variable which influenced ulcer healing (OR = 9.3; CI = 2.8-31; p < 0.001).
Conclusion: The "new" triple therapy with omeprazole, metronidazole and clarithromycin (administered on a twice-a-day basis and only for one week) had an excellent efficacy for the eradication of H. pylori, significantly higher than that obtained with amoxycillin instead of clarithromycin. Both therapies achieved a high ulcer healing rate when H. pylori was eradicated, even with omeprazole administered for only one week.
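As a generic illustration of how an unadjusted odds ratio with a 95% confidence interval can be derived from a 2x2 table; the abstract's ORs come from multivariate logistic regression, and the counts below are invented.

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    # 2x2 table: a = exposed with outcome, b = exposed without,
    #            c = unexposed with outcome, d = unexposed without.
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # Woolf's method
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lo, hi)

# Hypothetical counts: ulcer healed / not healed by eradication status
or_, (lo, hi) = odds_ratio_ci(a=54, b=6, c=12, d=12)
print(f"OR = {or_:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```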
Answer: Yes, a one-week course of triple anti-Helicobacter pylori therapy is sufficient to control active duodenal ulcer. Studies have shown that a 7-day treatment with a combination of omeprazole, clarithromycin, and amoxicillin is effective in healing duodenal ulcers and eradicating H. pylori infection. For instance, a study with 77 patients found that after 3 weeks, the healing rate was 89% in the omeprazole group and 81% in the placebo group, and after 8 weeks, the ulcer healed in 97% of the patients in the total group. H. pylori was eradicated in 88% of patients in the omeprazole group and in 91% in the placebo group, with no statistically significant differences between the groups in ulcer-related symptoms or in ulcer healing (PUBMED:11421880). Another study with 343 patients demonstrated that one-week triple therapy alone is highly effective in healing non-complicated H. pylori-associated duodenal ulcer without additional antisecretory treatment (PUBMED:10962387). Furthermore, a study comparing a two-week dual therapy to a one-week triple therapy for the healing of duodenal ulcer and the eradication of the H. pylori infection showed that triple therapy was more effective than dual both in curing H. pylori infection and healing active duodenal ulcers (PUBMED:10975783). Additionally, a study with 59 out-patients found that one-week low-dose triple therapy consisting of omeprazole, clarithromycin, and metronidazole is a highly effective and well-tolerated approach to the cure of H. pylori infection in patients with a duodenal ulcer, suggesting that continuation of antisecretory drug therapy beyond anti-H. pylori therapy is actually excessive regarding relief from dyspeptic symptoms and healing of duodenal ulcers (PUBMED:9042978). Therefore, the evidence supports that a one-week course of triple therapy is sufficient to control active duodenal ulcer. |
Instruction: Is dengue and malaria co-infection more severe than single infections?
Abstracts:
abstract_id: PUBMED:32962765
Prevalence of and risk factors for severe malaria caused by Plasmodium and dengue virus co-infection: a systematic review and meta-analysis. Background: Co-infection with both Plasmodium and dengue virus (DENV) infectious species could have serious and fatal outcomes if left undiagnosed and without timely treatment. The present study aimed to determine the pooled prevalence estimate of severe malaria among patients with co-infection, the risk of severe diseases due to co-infection, and to describe the complications of severe malaria and severe dengue among patients with co-infection.
Methods: Relevant studies published between 12 September 1970 and 22 May 2020 were identified and retrieved through a search of the ISI Web of Science, Scopus, and MEDLINE databases. The pooled prevalence and 95% confidence interval (CI) of severe malaria among patients with Plasmodium and DENV co-infection was estimated with a random-effects model to take into account the between-study heterogeneity of the included studies. The risks of severe malaria and severe diseases due to co-infection were estimated with the pooled odds ratio (OR) and 95% CI with a random-effects model.
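As a rough, illustrative sketch only (not the authors' code), the random-effects pooling described above could be implemented along the following lines, with study-level estimates and variances supplied on a suitable scale (for example logit-transformed prevalences or log odds ratios). The function name and the example inputs are hypothetical.

```python
import numpy as np

def dersimonian_laird(estimates, variances):
    """Pool study-level estimates with a DerSimonian-Laird random-effects model.
    Returns the pooled estimate, its 95% CI, and the I^2 heterogeneity statistic."""
    est = np.asarray(estimates, dtype=float)
    var = np.asarray(variances, dtype=float)
    w = 1.0 / var                                   # fixed-effect weights
    pooled_fe = np.sum(w * est) / np.sum(w)
    q = np.sum(w * (est - pooled_fe) ** 2)          # Cochran's Q
    df = len(est) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                   # between-study variance
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    w_re = 1.0 / (var + tau2)                       # random-effects weights
    pooled = np.sum(w_re * est) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Example with made-up log odds ratios and variances from five studies
print(dersimonian_laird([1.2, 0.8, 1.5, 0.4, 1.0], [0.10, 0.15, 0.20, 0.08, 0.12]))
```

The I^2 value returned by such a pooling step corresponds to the heterogeneity statistics quoted in the results (for example I2 = 92.3% for the pooled prevalence).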
Results: Of the 5653 articles screened, 13 studies were included in the systematic review and meta-analysis. The results demonstrated that the pooled prevalence estimate of severe malaria among patients with co-infection was 32% (95% CI: 18-47%, I2 = 92.3%). Patients with co-infection had a higher risk of severe diseases than those with DENV mono-infection (odds ratio [OR] = 3.94, 95% CI: 1.96-7.95, I2 = 72%). Patients with co-infection had a higher risk of severe dengue than those with DENV mono-infection (OR = 1.98, 95% CI: 1.08-3.63, I2 = 69%). The most severe complications found in severe dengue were bleeding (39.6%), jaundice (19.8%), and shock/hypotension (17.9%), while the most severe complications found in severe malaria were severe bleeding/bleeding (47.9%), jaundice (32.2%), and impaired consciousness (7.43%).
Conclusions: The present study found that there was a high prevalence of severe malaria among patients with Plasmodium and DENV co-infection. Physicians in endemic areas where these two diseases overlap should recognize that patients with this co-infection can develop either severe malaria or severe dengue with bleeding complications, but a greater risk of developing severe dengue than severe malaria was noted in patients with this co-infection.
Trial Registration: The protocol of this study was registered at PROSPERO: CRD42020196792 .
abstract_id: PUBMED:31423536
Severe dengue in travellers: pathogenesis, risk and clinical management. Rationale For Review: Dengue is a frequent cause of febrile illness among travellers and has overtaken malaria as the leading cause of febrile illness for those traveling to Southeast Asia. The purpose is to review the risk of dengue and severe dengue in travellers with a particular focus on the pathogenesis and clinical management of severe dengue.
Risk, Pathogenesis And Clinical Management: The risk of travel-acquired dengue depends on the destination, season and duration of travel, and activities during travel. Seroconversion rates reported in travellers therefore vary between <1% and >20%. The most common life-threatening clinical response to dengue infection is the dengue vascular permeability syndrome, which is epidemiologically linked to secondary infection but can also occur in primary infection. Tertiary and quaternary infections are usually associated with mild or no disease. Antibody-dependent enhancement, viral factors, age, host factors and the clinical experience of the managing physician modulate the risk of progressing to severe dengue. The relative risk of severe dengue in secondary versus primary infection ranges from 2 to 7. The absolute risk of severe dengue in children in highly endemic areas is ~0.1% per year for primary infections and 0.4% for secondary infections. About 2-4% of secondary infections lead to severe dengue. Severe dengue and death are both relatively rare in general travellers but occur more frequently in those visiting friends and relatives. Clinical management of severe dengue depends on judicious use of fluid rehydration.
Conclusions: Although dengue is a frequent cause of travel illness, severe dengue and deaths are rare. Nevertheless, dengue infections can interrupt travel and lead to evacuation and major out-of-pocket costs. Dengue is more frequent than many other travel-related vaccine preventable diseases, such as hepatitis A, hepatitis B, rabies, Japanese encephalitis and yellow fever, indicating a need for a dengue vaccine for travellers.
abstract_id: PUBMED:22549018
Is dengue and malaria co-infection more severe than single infections? A retrospective matched-pair study in French Guiana. Background: Dengue and malaria are two major arthropod-borne infections in tropical areas, but dual infections were first described only in 2005. Reports of these concomitant infections are scarce, and there has been no evidence that the clinical and biological picture is more severe than in single infections.
Methods: To compare co-infections to dengue alone and malaria alone, a retrospective matched-pair study was conducted between 2004 and 2010 among patients admitted in the emergency department of Cayenne hospital, French Guiana.
Results: 104 dengue and malaria co-infection cases were identified during the study period and 208 individuals were matched in two comparison groups: dengue alone and malaria alone. In bivariate analysis, the clinical picture of co-infection was more severe than that of the single infections, in particular when using the WHO severe malaria criteria. In multivariate analysis, independent factors associated with co-infection versus dengue were male gender, CRP level > 50 mg/L, thrombocytopaenia < 50 × 10^9/L, and low haematocrit < 36%; independent factors significantly associated with co-infection versus malaria were red cell transfusion, low haematocrit < 36%, thrombocytopaenia < 50 × 10^9/L and low Plasmodium parasite load < 0.001%.
Conclusions: In the present study, dengue and malaria co-infection clinical picture seems to be more severe than single infections in French Guiana, with a greater risk of deep thrombocytopaenia and anaemia.
abstract_id: PUBMED:19000545
Acute renal failure in a patient with severe malaria and dengue shock syndrome. Malaria is an infectious disease caused by Plasmodium, which lives and breeds in human blood cells, and is transmitted through the bites of Anopheles mosquitoes. The renal impairment often caused by malaria is acute renal failure (ARF) due to acute tubular necrosis (ATN). Dengue virus is transmitted from human to human through Aedes aegypti mosquito bites. Dengue hemorrhagic fever (DHF), the most severe stage of infection, is characterized by bleeding and shock tendencies (dengue shock syndrome, DSS). ARF is a less common complication in patients with DHF, with an incidence of less than 10%. Mixed infections with two infectious agents may cause overlapping symptoms and have been reported in Africa and India. We report here a patient with ARF due to a mixed infection of severe malaria and DSS. The patient presented with fever and had a history of repeated malaria infection. Physical examination revealed stable vital signs and hepatosplenomegaly. Laboratory data showed hemoconcentration, thrombocytopenia and increased serum aminotransferase. Chest X-ray showed pleural effusion. A malarial antigen test and thick smear examination showed the trophozoite stage of P. falciparum. On Day 3, blood pressure dropped to 80/60 mmHg, the pulse was 120 beats/minute and weak, and body temperature was 36.8°C, with icterus. Other tests revealed an increase in serum urea nitrogen and creatinine levels, and serology showed anti-dengue IgG antibody (+) and anti-dengue IgM antibody (-). Based on these findings, we diagnosed the patient as having both malaria and DSS. We treated the patient with the parenteral anti-malarial agent artemisinin. Supportive treatment and treatment of complications were also performed simultaneously for DSS. The patient experienced an episode of oliguria but responded well to a diuretic. The patient was discharged after clinical and laboratory examinations showed positive progress.
abstract_id: PUBMED:28049485
The dangers of accepting a single diagnosis: case report of concurrent Plasmodium knowlesi malaria and dengue infection. Background: Dengue and malaria are two common mosquito-borne infections, which may lead to mortality if not managed properly. Concurrent infections of dengue and malaria are rare due to the different habitats of their vectors and the activities of the different carrier mosquitoes. The first case was reported in 2005. Since then, several concurrent infections have been reported between the dengue virus (DENV) and the malaria protozoans Plasmodium falciparum and Plasmodium vivax. Symptoms of each infection may be masked by a simultaneous second infection, resulting in late treatment and severe complications. Plasmodium knowlesi is also a common cause of malaria in Malaysia, with one of the highest rates of mortality. This report is one of the earliest in the literature of concomitant infection between DENV and P. knowlesi in which a delay in diagnosis placed a patient in a life-threatening situation.
Case Presentation: A 59-year-old man staying near the Belum-Temengor rainforest at the Malaysia-Thailand border was admitted with a 6-day history of fever and respiratory distress. His non-structural protein 1 antigen and anti-DENV immunoglobulin M tests were positive. He was treated for severe dengue with compensated shock. Treating the dengue had so distracted the clinicians that a blood film for the malaria parasite was not done. Despite aggressive supportive treatment in the intensive care unit (ICU), the patient had unresolved acidosis as well as multi-organ failure involving the respiratory, renal, liver, and haematological systems. It was only when the patient developed shivering in the ICU that a blood film was done on the second day, revealing the presence of P. knowlesi with a parasite count of 520,000/μL. The patient was subsequently treated with artesunate-doxycycline and made a good recovery after nine days in the ICU.
Conclusions: This case contributes to the body of literature on co-infection between DENV and P. knowlesi and highlights the clinical consequences, which can be severe. Awareness should be raised among health-care workers on the possibility of dengue-malaria co-infection in this region. Further research is required to determine the real incidence and risk of co-infection in order to improve the management of acute febrile illness.
abstract_id: PUBMED:28219910
Severe Plasmodium knowlesi with dengue coinfection. We report a case of severe Plasmodium knowlesi and dengue coinfection in a previously healthy 59-year-old Malay man who presented with worsening shortness of breath, high-grade fever with chills and rigors, dry cough, myalgia, arthralgia, chest discomfort and poor appetite of 1 week's duration. There was a history of mosquito fogging around his neighbourhood in his hometown. Further history revealed that he had gone to a forest in Jeli (northern part of Kelantan) 3 weeks prior to the event. Initially he was treated for severe dengue with plasma leakage complicated by type 1 respiratory failure, as evidenced by a positive serum NS1 antigen and thrombocytopenia. Blood for malarial parasite (BFMP) was sent for testing, as there was suspicion of malaria due to persistent thrombocytopenia despite recovery from the dengue infection and the presence of a risk factor. The test revealed a high malaria parasite count. Confirmatory PCR identified the parasite as Plasmodium knowlesi. Intravenous artesunate was administered to the patient immediately after the BFMP result was obtained. Severe malaria was complicated by acute kidney injury and septicaemic shock. Fortunately, the patient made a full recovery and was discharged from the ward after 2 weeks of hospitalisation.
abstract_id: PUBMED:29899735
Elimination of Falciparum Malaria and Emergence of Severe Dengue: An Independent or Interdependent Phenomenon? The global malaria burden, including falciparum malaria, has been reduced by 50% since 2000, though less so in Sub-Saharan Africa. Regional malaria elimination campaigns beginning in the 1940s, up-scaled in the 1950s, succeeded in the 1970s in eliminating malaria from Europe, North America, the Caribbean (except Haiti), and parts of Asia and South- and Central America. Dengue has grown dramatically throughout the pantropical regions since the 1950s, first in Southeast Asia in the form of large-scale epidemics including severe dengue, though mostly sparing Sub-Saharan Africa. Globally, the WHO estimates 50 million dengue infections every year, while others estimate almost 400 million infections, including 100 million clinical cases. Curiously, despite wide geographic overlap between malaria- and dengue-endemic areas, published reports of co-infections have been scarce until recently. Superimposed acute dengue infection might be expected to result in more severe combined disease because both pathogens can induce shock and hemorrhage. However, a recent review found no reports of more severe morbidity or higher mortality associated with co-infections. Cases of severe dual infections have almost exclusively been reported from South America, and predominantly in persons infected by Plasmodium vivax. We hypothesize that malaria infection may partially protect against dengue (in particular, falciparum malaria against severe dengue) and that this inter-species cross-protection may explain the near absence of severe dengue from the Sub-Saharan region and parts of South Asia until recently. We speculate that malaria infection elicits cross-reactive antibodies or other immune responses that confer cross-protection, or at least partial cross-protection, against symptomatic and severe dengue. Plasmodia have been shown to give rise to polyclonal B-cell activation and to heterophilic antibodies, while some anti-dengue IgM tests have a high degree of cross-reactivity with sera from malaria patients. In the following, the historical evolution of falciparum malaria and dengue is briefly reviewed, and we explore early evidence of subclinical dengue in high-transmission malaria areas as well as conflicting reports on the severity of co-morbidity. We also discuss examples of other interspecies interactions.
abstract_id: PUBMED:32687019
Association of Dengue Virus and Leptospira Co-Infections with Malaria Severity. Plasmodium infections are co-endemic with infections caused by other agents of acute febrile illnesses, such as dengue virus (DENV), chikungunya virus, Leptospira spp., and Orientia tsutsugamushi. However, co-infections may influence disease severity, treatment outcomes, and development of drug resistance. When we analyzed cases of acute febrile illness at the All India Institute of Medical Sciences, New Delhi, India, from July 2017 through September 2018, we found that most patients with malaria harbored co-infections (Plasmodium mixed species and other pathogens). DENV was the most common malaria co-infection (44% of total infections). DENV serotype 4 was associated with mild malaria, and Leptospira was associated with severe malaria. We also found the presence of P. knowlesi in our study population. Therefore, in areas with a large number of severe malaria cases, diagnostic screening for all 4 DENV serotypes, Leptospira, and all Plasmodium species should be performed.
abstract_id: PUBMED:30069218
Severe Cerebral Falciparum Malaria with Dengue Coinfection: A Case Report. A rare, severe case of coinfection with cerebral falciparum malaria and dengue is reported in this paper. The patient was a 50-year-old male in the north of China who had recently returned from Uganda and became unconscious after 4 days of fever before falciparum malaria and dengue infection were diagnosed via blood smear microscopy, RT-PCR and serology. Although antimalarial and symptomatic treatment was carried out, the patient ultimately died of septic shock due to multiple organ dysfunction. This case report shows that such a coinfection can cause severe acute consequences. Timely diagnosis and careful treatment are crucial for the prognosis of these patients.
abstract_id: PUBMED:36602218
Unusual co-infection of severe malaria by Plasmodium vivax and dengue virus in Mexico. Malaria and dengue fever are among the most common mosquito-borne diseases worldwide; however, reports of coinfection are rare. We present a case of severe malaria and dengue coinfection in a 16-year-old female patient presenting with fever, thrombocytopenia, pleural effusion, myopericarditis, and acute respiratory distress syndrome. Dengue infection was confirmed by the presence of immunoglobulin M antibodies and nonstructural protein 1, while malaria was confirmed by the presence of Plasmodium vivax in thick and thin blood smears. This is the first report of a dengue/malaria coinfection in Mexico.
Answer: The available literature suggests that dengue and malaria co-infection can indeed be more severe than single infections. A systematic review and meta-analysis found a high prevalence of severe malaria among patients with Plasmodium and dengue virus (DENV) co-infection, with patients having a higher risk of severe diseases than those with DENV mono-infection (PUBMED:32962765). A retrospective matched-pair study in French Guiana indicated that the clinical picture of co-infection was more severe than that of single infections, particularly when using the severe malaria WHO criteria (PUBMED:22549018). Case reports have also highlighted the severity of co-infections, with one case of severe Plasmodium knowlesi and dengue coinfection resulting in multi-organ failure (PUBMED:28049485), and another case of severe cerebral falciparum malaria with dengue coinfection leading to septic shock and death (PUBMED:30069218). Additionally, a study in India found that dengue virus serotype 4 was associated with mild malaria, while Leptospira co-infection was associated with severe malaria (PUBMED:32687019). Furthermore, a case report from Mexico described a severe malaria and dengue coinfection presenting with multiple complications (PUBMED:36602218).
These findings collectively suggest that co-infection with dengue and malaria can lead to more severe clinical outcomes than infection with either pathogen alone. The increased severity may be due to the overlapping symptoms, the potential for each infection to exacerbate the other, and the challenges in diagnosing and managing co-infections. Health-care workers in endemic regions should be aware of the possibility of co-infections and consider them in their differential diagnoses to ensure timely and appropriate treatment. |
Instruction: Can the site of brain lesion predict improved motor function after low-TENS treatment on the post-stroke paretic arm?
Abstracts:
abstract_id: PUBMED:11594644
Can the site of brain lesion predict improved motor function after low-TENS treatment on the post-stroke paretic arm? Objectives: Previous reports suggest that afferent stimulation improves arm motor function in patients suffering from stroke. The aim of this pilot study was to test the hypothesis that the brain lesion location determines the response to low-frequency (1.7 Hz) transcutaneous electric nerve stimulation (Low-TENS) therapy.
Design: Magnetic resonance imaging (MRI) was performed on 14 patients who had previously received Low-TENS on the paretic arm after stroke.
Methods: MR images were classified with two different methods. First, lesions in the cortical and the subcortical areas were registered. Secondly, any change in a described periventricular white matter (PVWM) area was recorded. Interactions between the lesion site, as detected by MRI, and response to Low-TENS treatment were analysed.
Results: Analysis of arm motor function after Low-TENS treatment in relation to lesions in different brain areas showed that the absence of lesions in the PVWM area increased the likelihood of improved motor capacity after afferent stimulation.
Conclusions: The site of lesion may play a role in prognosis/outcome after Low-TENS treatment but this hypothesis should be further tested in a larger prospective study.
abstract_id: PUBMED:26381168
Individual prediction of chronic motor outcome in the acute post-stroke stage: Behavioral parameters versus functional imaging. Several neurobiological factors have been found to correlate with functional recovery after brain lesions. However, predicting an individual's potential for recovery remains difficult. Here we used multivariate support vector machine (SVM) classification to explore the prognostic value of functional magnetic resonance imaging (fMRI) for predicting individual motor outcome at 4-6 months post-stroke. To this end, 21 first-ever stroke patients with hand motor deficits participated in an fMRI hand motor task in the first few days post-stroke. Motor impairment was quantified by assessing grip force and the Action Research Arm Test. Linear SVM classifiers were trained to predict good versus poor motor outcome in unseen new patients. We found that fMRI activity acquired in the first week post-stroke correctly predicted the outcome for 86% of all patients. In contrast, the concurrent assessment of motor function provided 76% accuracy with low sensitivity (<60%). Furthermore, the outcome of patients with initially moderate impairment and high outcome variability could not be predicted based on motor tests. In contrast, fMRI provided 87.5% prediction accuracy in these patients. Classifications were driven by activity in ipsilesional motor areas and the contralesional cerebellum. The prediction accuracy of subacute fMRI data (two weeks post-stroke), age, time post-stroke, lesion volume, and lesion location was at the 50% chance level. In conclusion, multivariate decoding of fMRI data with SVM early after stroke enables a robust prediction of motor recovery. The potential for recovery is influenced by the initial dysfunction of the active motor system, particularly in those patients whose outcome cannot be predicted by behavioral tests.
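For illustration only, a linear SVM classification of the kind described could be set up as in the sketch below with scikit-learn. The feature matrix standing in for fMRI-derived activity values and the outcome labels are random placeholders, and the pipeline is an assumption rather than the authors' actual analysis.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Placeholder data: one row of fMRI-derived features per patient and a
# binary label for good (1) versus poor (0) motor outcome at 4-6 months.
rng = np.random.default_rng(1)
X = rng.normal(size=(21, 50))
y = rng.integers(0, 2, 21)

# Standardize features, then train a linear SVM; estimate accuracy with
# leave-one-out cross-validation so each patient is "unseen" once.
clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=10000))
accuracy = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"Leave-one-out accuracy: {accuracy:.2%}")
```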
abstract_id: PUBMED:21478079
Motor unit number reductions in paretic muscles of stroke survivors. The objective of this study is to assess whether there is evidence of spinal motoneuron loss in paretic muscles of stroke survivors, using an index measurement called motor unit number index (MUNIX). MUNIX, a recently developed novel neurophysiological technique, provides an index proportional to the number of motor units in a muscle, but not necessarily an accurate absolute count. The MUNIX technique was applied to the first dorsal interosseous (FDI) muscle bilaterally in nine stroke subjects. The area and power of the maximum M-wave and the interference pattern electromyogram (EMG) at different contraction levels were used to calculate the MUNIX. A motor unit size index (MUSizeIndex) was also calculated using maximum M-wave recording and the MUNIX values. We observed a significant decrease in both maximum M-wave amplitude and MUNIX values in the paretic FDI muscles, as compared with the contralateral muscles. Across all subjects, the maximum M-wave amplitude was 6.4 ± 2.3 mV for the paretic muscles and 9.7 ± 2.0 mV for the contralateral muscles (p < 0.001). These measurements, in combination with voluntary EMG recordings, resulted in the MUNIX value of 109 ± 53 for the paretic muscles, much lower than the MUNIX value of 153 ± 38 for the contralateral muscles ( p < 0.01). No significant difference was found in MUSizeIndex values between the paretic and contralateral muscles. However, the range of MUSizeIndex values was slightly wider for paretic muscles (48.8-93.3 μV) than the contralateral muscles (51.7-84.4 μV). The findings from the index measurements provide further evidence of spinal motoneuron loss after a hemispheric brain lesion.
abstract_id: PUBMED:12021939
Evidence-based physiotherapeutic concepts for improving arm and hand function in stroke patients: a review. In recent years, our understanding of motor learning, neuroplasticity and functional recovery after the occurrence of brain lesions has grown significantly. New findings in basic neuroscience have provided stimuli for research in motor rehabilitation. Repeated motor practice and motor activity in a real-world environment have been identified in several prospective studies as favorable for motor recovery in stroke patients. EMG-initiated electrical muscle stimulation, but not electrical muscle stimulation alone, improves motor function of the centrally paretic arm and hand. Although a considerable number of physiotherapeutic "schools" have been established, conclusive proof of their benefit and a physiological model of their effect on neuronal structures and processes are still missing. Nevertheless, evidence-based strategies for motor rehabilitation are becoming increasingly available, particularly for patients suffering from central paresis.
abstract_id: PUBMED:21555982
Transfer of motor skill learning from the healthy hand to the paretic hand in stroke patients: a randomized controlled trial. Background: Bilateral transfer of a motor skill is a phenomenon based on the observation that the performance of a skill with one hand can "teach" the same skill to the other hand.
Aim: In this study the ability of bilateral transfer to facilitate the motor skill of the paretic hand in patients that suffered a stroke was tested.
Design: In a randomized controlled trial subjects were randomly assigned to either the test group or the control group.
Setting: The experiment was performed in a general hospital rehabilitation facility for inpatients and outpatients.
Population: We studied 20 outpatients, who had their first stroke episode characterized by a brain lesion to a single hemisphere, at the end of their rehabilitation treatment. The criteria used for the selection were based on a physical examination, the time elapsed from the stroke and cognitive requirements.
Methods: The experiment consisted in training the healthy hand of each patient from the test group to execute the nine hole peg test 10 times a day, for three consecutive days, and then test the paretic hand with the same test and with bimanual tasks. The control group was not trained but went through the same analysis.
Results: The homogeneity of the two groups has been proven. In the test group we found that the execution speed of the nine hole peg test with the paretic hand, after training the healthy hand, was on average 22.6% faster than the value recorded at baseline. The training had a positive effect on the execution of bimanual tasks. Meanwhile, no significant difference was found in the control group.
Conclusion: This is the first evidence that bilateral transfer of motor skills is present in patients that suffered a stroke, and that it improves the ability of the affected hand.
Clinical Rehabilitation Impact: This observation could open the way to the development of a new approach for the rehabilitation of stroke patients.
abstract_id: PUBMED:33299400
Baseline Motor Impairment Predicts Transcranial Direct Current Stimulation Combined with Physical Therapy-Induced Improvement in Individuals with Chronic Stroke. Transcranial direct current stimulation (tDCS) can enhance the effect of conventional therapies in post-stroke neurorehabilitation. The ability to predict an individual's potential for tDCS-induced recovery may permit rehabilitation providers to make rational decisions about who will be a good candidate for tDCS therapy. We investigated the clinical and biological characteristics which might predict tDCS plus physical therapy effects on upper limb motor recovery in chronic stroke patients. A cohort of 80 chronic stroke individuals underwent ten to fifteen sessions of tDCS plus physical therapy. The sensorimotor function of the upper limb was assessed by means of the upper extremity section of the Fugl-Meyer scale (UE-FM), before and after treatment. A backward stepwise regression was used to assess the effect of age, sex, time since stroke, brain lesion side, and basal level of motor function on UE-FM improvement after treatment. Following the intervention, UE-FM significantly improved (p < 0.05), and the magnitude of the change was clinically important (mean 6.2 points, 95% CI: 5.2-7.4). The baseline level of UE-FM was the only significant predictor (R2 = 0.90, F(1, 76) = 682.80, p < 0.001) of tDCS response. These findings may help to guide clinical decisions according to the profile of each patient. Future studies should investigate whether stroke severity affects the effectiveness of tDCS combined with physical therapy.
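The backward stepwise regression mentioned above could, in outline, look like the following sketch. The simulated predictors, the 0.05 retention threshold and the helper function are all illustrative assumptions rather than the study's actual procedure or data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated, illustrative dataset: candidate predictors of UE-FM change.
rng = np.random.default_rng(2)
data = pd.DataFrame({
    "age": rng.normal(60, 10, 80),
    "months_since_stroke": rng.normal(24, 12, 80),
    "baseline_uefm": rng.normal(30, 12, 80),
})
data["uefm_change"] = 0.2 * data["baseline_uefm"] + rng.normal(0, 2, 80)

def backward_stepwise(df, outcome, predictors, alpha=0.05):
    """Drop the least significant predictor until all remaining p-values < alpha."""
    kept = list(predictors)
    while kept:
        fit = sm.OLS(df[outcome], sm.add_constant(df[kept])).fit()
        pvals = fit.pvalues.drop("const")
        if pvals.max() < alpha:
            return fit
        kept.remove(pvals.idxmax())
    return None

final = backward_stepwise(data, "uefm_change", ["age", "months_since_stroke", "baseline_uefm"])
print(final.summary())
```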
abstract_id: PUBMED:12582832
Hemispheric specialization in the co-ordination of arm and trunk movements during pointing in patients with unilateral brain damage. During pointing movements involving trunk displacement, healthy subjects perform stereotypically, selecting a strategy in which the movement is initiated with either the hand or trunk, and where the trunk continues after the end of the hand movement. In a previous study, such temporal co-ordination was not found in patients with left-hemispheric brain lesions reaching with either their dominant paretic or with their non-dominant non-paretic arm. This co-ordination deficit may be associated in part with the presence of a lesion in the dominant left hemisphere. If so, then no deficit should be observed in patients with stroke-related damage in their non-dominant right hemisphere moving with their ipsilesional arm. To verify this, 21 right-hand dominant adults (7 who had had a stroke in the right hemisphere, 7 who had had a stroke in the left hemisphere and 7 healthy subjects) pointed to two targets located on a table in front of them in the ipsilateral and contralateral workspace. Pointing was done under three movement conditions: while not moving the trunk, while bending the trunk forward and while bending the trunk backwards. The experiment was repeated with the non-paretic arm of patients with stroke and for the right and left arms of healthy subjects. Kinematic data were recorded (Optotrak). Results showed that, compared to healthy subjects, arm-trunk timing was disrupted in patients with stroke for some conditions. As in patients with lesions in the dominant hemisphere, arm-trunk timing in those with lesions in the non-dominant hemisphere was equally more variable than movements in healthy subjects. However, patients with dominant hemisphere lesions used significantly less trunk displacement than those with non-dominant hemisphere lesions to accomplish the task. The deficit in trunk displacement was not due to problems of trunk control or sitting balance since, in control experiments, all subjects were able to move the trunk the required distance, with and without the added weight of the limb. Results support the hypothesis that the temporal co-ordination of trunk and arm recruitment during pointing movements is mediated bilaterally by each hemisphere. However, the difference in the range of trunk displacement between patients with left and right brain lesions suggests that the left (dominant) hemisphere plays a greater role than the right in the control of movements involving complex co-ordination between the arm and trunk.
abstract_id: PUBMED:32576025
Electroacupuncture promotes motor function and functional connectivity in rats with ischemic stroke: an animal resting-state functional magnetic resonance imaging study. Background: To evaluate whether electroacupuncture (EA) treatment at LI11 and ST36 could reduce motor impairments and enhance brain functional recovery in a rat model of ischemic stroke.
Methods: A rat model of middle cerebral artery occlusion (MCAO) was established. EA at LI11 and ST36 was started at 24 h (MCAO + EA group) after ischemic stroke modeling. Untreated model (MCAO) and sham-operated (Sham) groups were included as controls. The neurological deficits of all groups were assessed using modified neurologic severity scores (mNSS) at 24 h and 14 days after MCAO. To further investigate the effect of EA on infarct volume and brain function, functional magnetic resonance imaging was used to estimate the size of the brain lesions and neural activities of each group at 14 days after ischemic stroke.
Results: EA treatment of MCAO rats led to a significant reduction in infarct volumes accompanied by functional recovery, reflected in improved mNSS outcomes and motor functional performance. Furthermore, functional connectivity between the left motor cortex and the left posterior cerebellar lobe, right motor cortex, left striatum and bilateral sensory cortex was decreased in the MCAO group but increased after EA treatment.
Conclusion: EA at LI11 and ST36 could enhance the functional connectivity between the left motor cortex and the motor function-related brain regions, including the motor cortex, sensory cortex and striatum, in rats. EA exhibits potential as a treatment for ischemic stroke.
abstract_id: PUBMED:23652723
Laterality affects spontaneous recovery of contralateral hand motor function following motor cortex injury in rhesus monkeys. The purpose of this study was to test whether brain laterality influences spontaneous recovery of hand motor function after controlled brain injuries to arm areas of M1 and lateral premotor cortex (LPMC) of the hemisphere contralateral to the preferred hand in rhesus monkeys. We hypothesized that monkeys with stronger hand preference would exhibit poorer recovery of skilled hand use after such brain injury. Degree of handedness was assessed using a standard dexterity board task in which subjects could use either hand to retrieve small food pellets. Fine hand/digit motor function was assessed using a modified dexterity board before and after the M1 and LPMC lesions in ten monkeys. We found a strong negative relationship between the degree of handedness and the recovery of manipulation skill, demonstrating that higher hand preference was associated with poorer recovery of hand fine motor function. We also observed that monkeys with larger lesions within M1 and LPMC had greater initial impairment of manipulation and poorer recovery of reaching skill. We conclude that monkeys with a stronger hand preference are likely to show poorer recovery of contralesional hand fine motor skill after isolated brain lesions affecting the lateral frontal motor areas. These data may be extended to suggest that humans who exhibit weak hand dominance, and perhaps individuals who use both hands for fine motor tasks, may have a more favorable potential for recovery after a unilateral stroke or brain injury affecting the lateral cortical motor areas than individuals with a high degree of hand dominance.
abstract_id: PUBMED:23366830
Electrocorticographic decoding of ipsilateral reach in the setting of contralateral arm weakness from a cortical lesion. Brain machine interfaces have the potential for restoring motor function not only in patients with amputations or lesions of efferent pathways in the spinal cord and peripheral nerves, but also in patients with acquired brain lesions such as strokes and tumors. In these patients the most efficient components of cortical motor systems are not available for BMI control. Here we had the opportunity to investigate the possibility of utilizing subdural electrocorticographic (ECoG) signals to control natural reaching movements under these circumstances. In a subject with a left arm monoparesis following resection of a recurrent glioma, we found that ECoG signals recorded in the remaining cortex were sufficient for decoding kinematics of natural reach movements of the nonparetic arm, ipsilateral to the ECoG recordings. The relationship between the subject's ECoG signals and the reach trajectory in three dimensions, two of which were highly correlated, was captured with a computationally simple linear model (mean Pearson's r in the depth dimension = 0.68, in height = 0.73, and in the lateral dimension = 0.24). These results were attained with only a small subset of 7 temporal/spectral neural signal features. The small subset of neural features necessary to attain high decoding results shows promise for a restorative BMI controlled solely by ipsilateral ECoG signals.
Answer: The site of a brain lesion may indeed play a role in predicting improved motor function after low-frequency transcutaneous electric nerve stimulation (Low-TENS) treatment on the post-stroke paretic arm. A pilot study indicated that the absence of lesions in the periventricular white matter (PVWM) area increased the possibility for improved motor capacity after afferent stimulation with Low-TENS. However, the authors of this study suggest that this hypothesis should be further tested in a larger prospective study to confirm these findings (PUBMED:11594644). |
Instruction: Angiographic findings of in-stent intimal hyperplasia after stent-assisted coil embolization: are they permanent findings?
Abstracts:
abstract_id: PUBMED:26315002
Angiographic findings of in-stent intimal hyperplasia after stent-assisted coil embolization: are they permanent findings? Objective: Stent-assisted coil embolization for the treatment of intracranial aneurysms has been used widely. This study aimed to investigate the effect of stent implantation in the nonatherosclerotic parent artery with cerebral aneurysms. The authors evaluated luminal changes and the related factors following stent-assisted coil embolization.
Methods: This study included 97 patients harboring a total of 99 unruptured aneurysms of the distal internal carotid artery (ICA) who underwent single-stent implantation and more than 1 session of conventional angiography during follow-up (midterm follow-up only, n = 70; midterm and long-term follow-up, n = 29) between January 2009 and April 2014. The luminal narrowing point was measured using a local thickness map (ImageJ plug-in).
Results: Stent-assisted coil embolization caused dynamic luminal narrowing of approximately 82% of the parent artery diameter on average after 8 months, which was reversed to 91% after 25 months. In addition, luminal narrowing greater than 40% was noticed in 2 (7%) of the 29 patients who experienced spontaneous reversion without additional management during follow-up. Most luminal narrowing changes seen were diffuse.
Conclusions: Luminal narrowing after aneurysm stent-assisted coil embolization is a dynamic process and appears to be a spontaneously reversible event. Routine management of luminal narrowing may not cause adverse events that require additional treatment.
abstract_id: PUBMED:37654432
Embolization of unruptured wide-necked aneurysms at the MCA bifurcation using the Neuroform atlas stent-assisted coiling: a two-center retrospective study. Background: The management of middle cerebral artery (MCA) aneurysms remains a controversial topic, and MCA aneurysms have traditionally been treated primarily by surgical clipping. The Neuroform Atlas Stent™ (NAS, available from Stryker Neurovascular, Fremont, California) represents the latest generation of intracranial stents with improved stent delivery system capabilities.
Objective: This study aims to investigate the safety, feasibility and efficacy exhibited by NAS in treating unruptured aneurysms at the MCA bifurcation.
Methods: This was a two-center retrospective study involving 42 patients with unruptured wide-necked aneurysms (WNAs) of the MCA treated with the NAS from October 2020 to July 2022.
Results: The stent was used to treat 42 cases of unruptured WNA at the MCA bifurcation. Endovascular treatment techniques had a 100% success rate. Immediate postoperative angiography found complete aneurysm occlusion in 34 patients (80.9%) (mRRC 1), neck remnant in 7 patients (16.7%) (mRRC 2), and residual aneurysm in 1 patient (2.4%) (mRRC 3). The thromboembolic complication rate was 2.4% (1/42). The follow-up period was 8.7 months on average (3-16 months). The last angiographic follow-up results revealed complete aneurysm occlusion in 39 patients (92.9%) (mRRC 1), neck remnant in 3 (7.1%) patients (mRRC 2), no aneurysm recanalization or recurrence, and no cases of stent intimal hyperplasia. During the latest clinical follow-up, all patients had an mRS score of 0.
Conclusion: Our study demonstrates that the NAS can be applied to treat unruptured WNAs at the MCA bifurcation with favorable safety, feasibility, and efficacy.
abstract_id: PUBMED:33387206
Histopathologic findings of vascular damage after mechanical thrombectomy using stent retriever in canine models. Although mechanical thrombectomy is a powerful predictor of stroke outcome, it induces vessel wall injury in the acute phase. This study aimed to analyze the degree and the condition of recovery of wall injury after the acute phase via angiography and histopathological analysis of autopsied canine models. Digital subtraction angiography (DSA) and embolization with autologous thrombus were performed in six canines. The model of arterial occlusion was effective in all target vessels. Mechanical thrombectomy was performed in completely occluded vessels using stent retriever. Follow-up angiographic and histopathologic evaluations were performed 1 month later. Complete recanalization using stent retriever was achieved in four cases. Slight residual vessel narrowing after recanalization and moderate narrowing was observed in one case each. Histopathological analysis showed that inflammation, hemorrhage, and device-induced medial injury were not observed in any of the cases. Severe intimal proliferation (grade 4), marked diffuse thrombosis (grade 4), and weak vascular endothelial cell loss (grade 1) were observed in one case and weak endovascular proliferation was observed in one case. Although successful complete recanalization was achieved with a single mechanical thrombectomy attempt and no change was observed in the follow-up DSA, special attention should be paid to postoperative follow-up, as device-induced intimal proliferation, diffuse thrombosis, and endothelial cell loss may remain after 1 month.
abstract_id: PUBMED:26658279
Single-center experience in the endovascular treatment of wide-necked intracranial aneurysms with a bridging intra-/extra-aneurysm implant (pCONus). Purpose: To retrospectively evaluate the safety and efficacy of the endovascular treatment of wide-necked intracranial aneurysms assisted by a novel intra-/extra-aneurysm stent-like implant (pCONus).
Methods: Initial and follow-up angiographic and clinical results are presented of 25 patients with 25 unruptured and ruptured wide-necked intracranial aneurysms treated by reconstruction of the aneurysm neck using the pCONus implant followed by coil occlusion of the fundus.
Results: Successful intra-/extra-aneurysm deployment of the pCONus with coil occlusion of the fundus was achieved in all but one case. Procedure-related ischemic complications were observed in three cases with permanent deterioration in one. Acceptable aneurysm occlusion was achieved in all cases. Follow-up angiography revealed sufficient occlusion in 81.0% of the aneurysms. Intimal hyperplasia in the stented segment of the parent artery or device migration has not been observed to date.
Conclusions: The pCONus device offers a promising treatment option for complex wide-necked bifurcation intracranial aneurysms. Acute or delayed dislocations of coils into the parent artery are successfully avoided.
abstract_id: PUBMED:12623576
Palmaz-Schatz stent embolization: long-term clinical and angiographic follow-up. A long-term clinical and angiographic follow-up of a case of peripheral coronary stent embolization is reported. No clinical sequelae occurred in the immediate and long-term (five-year) follow-up. The five-year follow-up angiographic images provided visual documentation of the absence of stent-associated stenosis. This case highlights the concept that fibro-intimal hyperplasia may not occur when plaque and balloon trauma are absent at the site of stent embolization.
abstract_id: PUBMED:26600281
Long-term Follow-up of In-stent Stenosis After Pipeline Flow Diversion Treatment of Intracranial Aneurysms. Background: There is scant information on in-stent stenosis after flow diversion treatment of intracranial aneurysms with the Pipeline Embolization Device (PED).
Objective: To assess the incidence, severity, nature, and clinical consequences of in-stent stenosis on angiographic follow-up after treatment with the PED.
Methods: A retrospective study of patients who underwent aneurysm treatment with the PED was conducted. In-stent stenosis was assessed on subsequent follow-up angiography. Intimal hyperplasia was defined as a uniform growth process beyond the limits of the metallic mesh at <25%. In-stent stenosis represented an area of parent vessel narrowing, most often focal, graded as mild (25%-50%), moderate (50%-75%), or severe (>75%).
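The grading thresholds described above can be summarised as a small helper, sketched below. The handling of values falling exactly on a boundary is an assumption made here for illustration, since the abstract does not specify it.

```python
def grade_in_stent_narrowing(percent_narrowing: float) -> str:
    """Map a percent parent-vessel narrowing to the study's grading scheme."""
    if percent_narrowing < 25:
        return "intimal hyperplasia"
    if percent_narrowing < 50:
        return "mild in-stent stenosis"
    if percent_narrowing < 75:
        return "moderate in-stent stenosis"
    return "severe in-stent stenosis"

# Example values only
print([grade_in_stent_narrowing(x) for x in (10, 30, 60, 80)])
```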
Results: Between June 2011 and April 2015, 80 patients were treated with the PED. Angiographic follow-up was available for 51 patients (representing 76% of available or 64% of all patients). Mean follow-up was 12.5 months. In-stent stenosis was detected in 5 patients (9.8%) at a median of 6 months. Stenosis was mild in 4 of 5 (80%) and moderate in 1 of 5 (20%) patients. There were no cases of severe stenosis. No stenosis caused flow limitation, clinical symptoms, or required re-treatment. Additional follow-up angiography was available in 2 of 5 stenosis patients showing marked improvement. Sixteen patients (31%) had intimal hyperplasia, and 28 patients (55%) had no stenosis. Asymptomatic stent occlusion occurred in 2 patients (4%) related to medication noncompliance.
Conclusion: Treatment with the PED was associated with a 9.8% rate of in-stent stenosis, detected on first angiographic follow-up, at a median of 6 months. None were symptomatic or required re-treatment, and they showed significant improvement on follow-up.
Abbreviation: FD, flow diverter.
abstract_id: PUBMED:19898840
Treatment of elastase-induced intracranial aneurysms in New Zealand white rabbits by use of a novel neurovascular embolization stent device. Introduction: This study aims to test a novel balloon expandable stent covered with a polytetrafluoroethylene membrane (neurovascular embolization cover (NEC), NFocus Neuromedical, Palo Alto, California) regarding angiographic and histologic aneurysm occlusion. Radiopacity, stent placement, navigation, flexibility, and intimal proliferation were also evaluated.
Methods: Eight aneurysms were induced in New Zealand white rabbits. Digital subtraction angiography (DSA) was performed directly after stent placement and after 5 and 10 min. Four and 8 weeks after stent placement, an intra-arterial DSA control was performed. The animals were then sacrificed and the aneurysms histologically evaluated.
Results: The radiopaque markers were clearly visible. Although all the stents were easily navigated into the subclavian artery, the limited flexibility of the stent resulted in straightening of the vessel in four cases. As a result, exact stent placement was achieved and acutely confirmed in only two cases. However, at sacrifice, angiographic and histologic occlusion was noted at follow-up in five aneurysms.
Conclusion: In tortuous anatomy, the relative stiffness of the stent makes exact stent placement challenging. This may have been exacerbated by the movement of the vessels due to proximity to the heart in this model. Future studies should evaluate whether existing residual flow into an aneurysm lumen might lead to embolization without any additional treatment. Anticoagulation remains a very important part of aneurysm treatment with stents. The trend toward aneurysm occlusion by excluding it from the blood circulation seems a promising method in future endovascular therapy. The NEC device shows good potential.
abstract_id: PUBMED:15163968
Use of covered stent grafts in the treatment of embolism-threatening stenoses and arterial occlusions. The present work was aimed at demonstrating the possibilities and prospects of using covered stent grafts in the treatment of patients with embolism-threatening parietal thrombi, embolism-threatening stenoses, and occlusions of peripheral arteries. Using stent grafts covered with non-woven materials, including PTFE films ("Hemobahn endograft" and "JOSTENT Peripheral Stent Graft"), in the arteries of the iliac and femoropopliteal segments makes it possible not only to avoid acute and delayed occlusions but also to use a PTFE-covered stent graft as a means of isolating the intima from the blood flow and thus "inhibiting" intimal hyperplasia in the stented arterial segment. The article presents clinical follow-up of patients with embolism-threatening atherosclerotic stenoses and occlusions of the iliac arteries who underwent successful implantation of covered stents into the affected segments of the arterial bed. These findings demonstrate the high efficacy of endografting in clinical situations in which surgical treatment is associated with increased risk.
abstract_id: PUBMED:18847343
Application of covered stent grafts for intracranial vertebral artery dissecting aneurysms. Object: Utilization of covered stent grafts in treating neurovascular disorders has been reported, but their efficacy and safety in vertebral artery (VA) dissecting aneurysms need further investigation.
Methods: Six cases are presented involving VA dissecting aneurysms that were treated by positioning a covered stent graft. Two aneurysms were located distal to the posterior inferior cerebellar artery, and 4 were located proximal to the posterior inferior cerebellar artery. Aspirin as well as ticlopidine or clopidogrel were administered after the procedure to prevent stent-related thrombosis. All patients were followed up both angiographically and clinically.
Results: Five of the 6 patients underwent successful placement of a covered stent graft. The covered stent could not reach the level of the aneurysm in 1 patient with serious vasospasm, who died secondary to severe subarachnoid hemorrhage that occurred 3 days later. Patient follow-up ranged from 6 to 14 months (mean 10.4 months) and demonstrated complete stabilization of the obliterated aneurysms with no obvious intimal hyperplasia. No procedure-related complications such as stenosis or embolization occurred in the 5 patients with successful stent graft placement.
Conclusions: Although long-term follow-up studies with a greater number of patients are required for further validation of this technique, this preliminary assessment shows that covered stent graft placement is an efficient, safe, and microinvasive technique, and is a promising tool in treating intracranial VA dissecting aneurysms.
abstract_id: PUBMED:30415222
The occurrence of neointimal hyperplasia after flow-diverter implantation is associated with cardiovascular risk factors and the stent design. Background: Neo-intimal hyperplasia (NIH) is frequently observed after flow-diverter stent (FDS) implantation. Although mostly asymptomatic, this vascular response can sometimes lead to delayed ischemic strokes. This study intended to evaluate the factors potentially influencing the rates of NIH following FDS treatment.
Material And Methods: All aneurysm treatments performed with a Pipeline embolization device (PED) or a SILK stent from May 2011 to May 2015 were collected in a prospectively maintained database. Patient demographics, clinical, and angiographic outcomes including both digital subtraction angiography and C-arm cone-beam CT were registered. Two blind reviewers rated the presence of NIH on a binary scale (present/absent).
Results: Of 148 patients, 63 datasets were available for analysis. Inter-reader agreement was excellent (Kappa = 0.88). NIH was positively correlated with smoking, dyslipidemia, and high blood pressure, but not with aneurysm characteristics. At early follow-up (<12 months), NIH was more frequently associated with the SILK stent (68%) than with the PED (38%) (P<0.02). At long-term follow-up, the NIH rate in the total population dropped from 55% to 26%, and there was no longer a significant difference between the two stents. The complete occlusion rate at early follow-up was higher in the SILK group (76% vs 65%), but the difference was not statistically significant (P=0.4).
Conclusion: NIH is a dual-vessel reaction after FDS implantation. When planning treatment in locations where severe NIH would carry a risk of ischemic complications, the stent design should be considered. However, minimal NIH might also be needed, as it is involved in aneurysm healing. Before treatment, patients should be offered best medical management of their cardiovascular risk factors to prevent an excessive NIH reaction.
Answer: The angiographic findings of in-stent intimal hyperplasia after stent-assisted coil embolization are not necessarily permanent. According to a study that included patients with unruptured aneurysms of the distal internal carotid artery who underwent single-stent implantation, luminal narrowing caused by stent-assisted coil embolization was observed to be a dynamic process. On average, there was an 82% narrowing of the parent artery diameter after 8 months, which spontaneously reversed to 91% after 25 months. Additionally, in a subset of patients who experienced luminal narrowing greater than 40%, spontaneous reversion without additional management was noticed during follow-up. These findings suggest that luminal narrowing after aneurysm stent-assisted coil embolization appears to be a spontaneously reversible event (PUBMED:26315002).
Moreover, in another study involving the use of the Neuroform Atlas Stent™ for unruptured wide-necked aneurysms at the MCA bifurcation, no cases of stent intimal hyperplasia were observed during the latest angiographic follow-up (PUBMED:37654432). Similarly, a study on the Pipeline Embolization Device (PED) found that in-stent stenosis was detected in a small percentage of patients (9.8%) at a median of 6 months, with none being symptomatic or requiring re-treatment, and showing significant improvement on subsequent follow-up (PUBMED:26600281).
These studies collectively indicate that in-stent intimal hyperplasia after stent-assisted coil embolization is not a permanent finding and may improve or resolve over time without the need for additional treatment. |
Instruction: Could thrombocyte parameters be an inflammatory marker in the brucellosis?
Abstracts:
abstract_id: PUBMED:23348158
Could thrombocyte parameters be an inflammatory marker in the brucellosis? Aim: To investigate links between the platelet parameters mean platelet volume (MPV), platelet count (PC), platelet distribution width (PDW) and platelet mass (PM) and brucella tube agglutination titers (BSTAT) in patients with brucellosis. Initially, PC, MPV, PM and PDW values were compared between the periods before and after treatment. The correlation between inflammation markers (erythrocyte sedimentation rate [ESR], white blood cell count [WBC], and C-reactive protein [CRP]) and platelet parameters was subsequently investigated.
Methods: This self-controlled study included 40 patients who had a positive BSTAT at a titer of at least 1/160. Platelet parameters and inflammation values (CRP, ESR) at the time of a positive BSTAT at a titer of at least 1/160 (pre-treatment) were compared with the same parameters measured when the BSTAT became negative or when the titers had decreased 4-fold (post-treatment).
Results: Mean platelet volume values (7.90 ± 1.96) were significantly elevated in the post-treatment period when compared with pre-treatment values (7.58 ± 1.96) (p = 0.023). Post-treatment CRP, ESR and PC were significantly reduced when compared with pre-treatment values (p = 0.000, p = 0.000 and p = 0.025, respectively). In the pre-treatment period, a direct correlation between ESR and PC values (r = 0.036, p = 0.025) and an inverse correlation between ESR and MPV (r = -0.337, p = 0.038) were found. A dependent predictive factor for BSTAT was not found in the multivariate logistic regression analysis.
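To make the pre/post comparison and the correlation analysis reported above concrete, a minimal sketch is given below. The numbers are invented placeholders, not the study data, and the choice of a paired t-test and Pearson correlation is an assumption about the methods rather than a description of them.

```python
import numpy as np
from scipy import stats

# Invented paired measurements for the same patients before and after treatment
mpv_pre = np.array([7.1, 7.6, 8.0, 7.3, 7.9, 7.5])
mpv_post = np.array([7.5, 7.9, 8.4, 7.6, 8.2, 7.8])
esr_pre = np.array([42, 35, 50, 28, 61, 45])
pc_pre = np.array([310, 280, 365, 255, 400, 330])

# Paired comparison of a platelet parameter before vs after treatment
t_stat, p_paired = stats.ttest_rel(mpv_pre, mpv_post)

# Correlation between an inflammation marker and a platelet parameter
r, p_corr = stats.pearsonr(esr_pre, pc_pre)
print(f"paired t-test p = {p_paired:.3f}; Pearson r = {r:.2f} (p = {p_corr:.3f})")
```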
Conclusion: We suggest that PC and MPV may be inflammatory markers in brucellosis.
abstract_id: PUBMED:30766564
The predictive role of haematological parameters in the diagnosis of osteoarticular brucellosis. Background: Brucellosis is a zoonosis that affects several systems, most notably through osteoarticular involvement.
Objectives: This study aims to compare the neutrophil/lymphocyte ratio (NLR), platelet/lymphocyte ratio (PLR), monocyte/lymphocyte ratio (MLR), C-reactive protein (CRP), erythrocyte sedimentation rate (ESR), mean platelet volume (MPV) and red blood cell distribution width (RDW) between patients with osteoarticular involvement and those with non-localised brucellosis, and to evaluate their predictive value for the diagnosis of osteoarticular brucellosis.
Methods: We enrolled 140 patients with brucellosis, 70 with the osteoarticular involvement and 70 without any localised involvement. We collected patients' data retrospectively and compared haematological parameters between both groups. In patients with osteoarticular brucellosis, a correlation of the NLR with the ESR and CRP and correlation of the MLR with the ESR and CRP were assessed. Furthermore, the predictive performance of the ESR, CRP, NLR and MLR on the osteoarticular involvement was evaluated.
Results: The NLR, MLR, ESR, CRP, neutrophil and monocyte levels were higher in the patient group than the control group.
Conclusion: The NLR, MLR, ESR and CRP are useful parameters for estimating the clinical course of patients with brucellosis, and the NLR and MLR may serve as alternative inflammatory markers for osteoarticular involvement.
abstract_id: PUBMED:35404144
Can Endocan Be a Novel Marker in the Diagnosis of Brucellosis? Introduction: Brucellosis remains an important public health problem in many developing countries. This study examines the serum levels of endocan, a novel immune-inflammatory marker, in this potentially difficult to diagnose disease, and their predictive diagnostic value. Methods: Fifty patients under follow-up with diagnoses of brucellosis between May 1, 2020, and December 1, 2020, and 50 healthy individuals constituting the control group were included in the study. Cases were classified as acute, subacute, or chronic, depending on the duration of their symptoms. Patients' plasma specimens were collected before the initiation of brucellosis treatment. Results: Serum endocan levels were higher among the patients with brucellosis than in the healthy control group (p < 0.001). Endocan levels also differed significantly among the patients with acute, subacute, and chronic brucellosis (p < 0.001 for all). Comparison of C-reactive protein (CRP) and the erythrocyte sedimentation rate (ESR) in the patients with acute, subacute, and chronic brucellosis revealed a significant difference only in terms of CRP levels between the acute and chronic patients (p = 0.018). No significant association was observed between serum endocan levels and growth in blood culture or serum agglutination test results in the patients with brucellosis (p > 0.05). However, a significant correlation was found between patients' CRP and ESR values and endocan levels (r = 0.572, and r = 0.415, respectively, p < 0.001). At a cutoff value of 597.35 pg/mL, serum endocan levels exhibited 90% sensitivity and 85% specificity in differentiating patients diagnosed with brucellosis from healthy individuals (area under the curve = 0.927, p < 0.001, 95% confidence interval = 0.877-0.977). Conclusion: Endocan may be a useful guide in differentiating patients with brucellosis from healthy individuals, and in distinguishing between acute, subacute, and chronic brucellosis. Ethics committee approval No: B.30.2.ATA.0.01.00/203.
abstract_id: PUBMED:29435171
Brucella induces unfolded protein response and inflammatory response via GntR in alveolar macrophages. Brucella is an intracellular bacterium that causes the zoonosis brucellosis worldwide. Alveolar macrophages (AM) constitute the main cell target of inhaled Brucella. Brucella thwarts immune surveillance and evokes endoplasmic reticulum (ER) stress to replicate in macrophages via virulence factors. The GntR family of regulators has been regarded as an important virulence factor controlling the virulence and intracellular survival of Brucella. However, the detailed underlying mechanism of the host-pathogen interaction is poorly understood. In this study the BSS2_II0438 mutant (ΔGntR) was constructed. The type IV secretion system (T4SS) virulence factor genes (VirB2, VirB6, and VirB8) were down-regulated in ΔGntR. ΔGntR could infect and proliferate to high titers in GAMs without a significant difference compared with the parental strain. ΔGntR infection increased the expression of the ER stress marker genes GRP78, ATF6, and PERK in the early stages of its intracellular cycle but decreased the expression of these genes in the late stages. ΔGntR greatly increased the number of Brucella CFUs under the inactive ER stress state in GAMs. Meanwhile, ΔGntR infection increased the levels of IFN-γ, IL-1β, and TNF-α, indicating that ΔGntR could induce the secretion of inflammatory cytokines but not the anti-inflammatory cytokine IL-10. Taken together, our results clarified the role of GntR in B. suis S2 virulence expression and elucidated that GntR is potentially involved in the signaling pathway of the Brucella-induced UPR and inflammatory response in GAMs.
abstract_id: PUBMED:38359069
Investigation of new inflammatory biomarkers in patients with brucella. Background: Delayed diagnosis and inadequate treatment of infectious and inflammatory diseases, such as Brucella, lead to high rates of mortality and morbidity. The aim of our study was to investigate the association between serum levels of apelin, presepsin, and irisin with inflammation, laboratory parameters, and blood culture in patients with brucella.
Patients And Methods: This prospective case-control study involves 30 patients with brucellosis and 30 healthy, matched control subjects. Thirty patients who were diagnosed with brucellosis were aged ≥ 18 years. Blood samples were taken from the patients on the first day they were diagnosed with brucellosis. The values of irisin, presepsin, and apelin were studied. In addition, blood samples were also taken from 30 healthy individuals for the control group. Irisin, presepsin, and apelin values that were measured in the patients on the first day were compared with those values measured in the control group.
Results: The sex and age of the subjects were matched between the groups. Irisin levels were significantly higher in patients with brucellosis than in the control group (p<0.045). There was no significant difference between the two groups in terms of apelin and presepsin levels (p values 0.087 and 0.162, respectively). There was a positive correlation between irisin levels and elevated ALT levels, as well as positive blood cultures.
Conclusions: It appears that the measurement of irisin levels may be beneficial in patients with brucellosis. Irisin can be used as a diagnostic marker for brucella infection and may greatly help clinicians predict disease severity and treatment response.
abstract_id: PUBMED:36422675
Rapid vertical flow technique for the highly sensitive detection of Brucella antibodies with Prussian blue nanoparticle labeling and nanozyme-catalyzed signal amplification. Brucellosis is a chronic infectious disease caused by Brucella, which is characterized by inflammation of reproductive organs and fetal membranes, abortion, infertility, and local inflammatory lesions of various tissues. Due to the widespread prevalence and spread of brucellosis, it has not only caused huge losses to animal husbandry, but also brought serious impacts on human health and safety. Rapid and accurate diagnosis is thus of great significance for the effective control of brucellosis. We therefore developed a rapid vertical flow technique (RVFT) using Prussian blue nanoparticles (PBNPs) as a marker material for the detection of brucellosis antibodies. Lipopolysaccharide (LPS) was purified and used to detect brucellosis antibodies to improve the sensitivity of this technique. To enhance the sensitivity of serum antibody detection, a single multifunctional compound buffer was created using whole blood as a biological sample while retaining the advantages of typical lateral flow immunoassays. After signal amplification, standard Brucella-positive serum (containing Brucella antibody at 4000 IU mL⁻¹) could be detected in this system even at a dilution factor of 1 × 10⁻². The detection limit was 40 IU mL⁻¹, ten times lower than that before signal amplification. This RVFT displayed good specificity and no cross-reactivity. This RVFT effectively avoided the false-negative phenomenon of lateral flow immunoassays, was easy to operate, had a short reaction time and good repeatability, and could elicit results visible to the naked eye within 2-3 min without any equipment. Since this method is very important for controlling the prevalence of brucellosis, it holds great promise for application in primary medical units and veterinary brucellosis detection.
abstract_id: PUBMED:6484445
Lactate levels in Brucella arthritis. In this study the synovial fluid cell types and the synovial fluid lactate levels of patients with Brucella, septic, rheumatoid, gouty and osteoarthritic mono-arthritis are presented. It is shown that lactate levels coupled with the clinical picture and the cell type of the synovial fluid appear to be an early additional diagnostic marker for the differentiation between septic, inflammatory and brucella-induced mono-arthritis.
abstract_id: PUBMED:28164606
Increased Plasma Levels of Heme Oxygenase-1 in Human Brucellosis. Background: Brucellosis is associated with inflammation and the oxidative stress response. Heme oxygenase-1 (HO-1) is a cytoprotective stress-responsive enzyme that has anti-inflammatory and anti-oxidant effects. Nevertheless, the role of HO-1 in human brucellosis has not yet been studied. The aim of this study was to examine the plasma levels of HO-1 in patients with brucellosis and to evaluate the ability of plasma HO-1 levels as an auxiliary diagnosis, a severity predictor, and a monitor for brucellosis treatments.
Methods: A total of 75 patients with brucellosis were divided into the acute, subacute, chronic active, and chronic stable groups. An additional 20 volunteers were included as the healthy control group. The plasma HO-1 levels and other laboratory parameters were measured in all groups. Furthermore, the plasma levels of HO-1 in the acute group were compared before and after treatment.
Results: The plasma HO-1 levels were considerably increased in the acute (4.97 ± 3.55), subacute (4.98 ± 3.23), and chronic active groups (4.43 ± 3.00) with brucellosis compared to the healthy control group (1.03 ± 0.63) (p < 0.01). In the acute group, the plasma HO-1 levels in the post-treatment group (2.33 ± 2.39) were significantly reduced compared to the pre-treatment group (4.97 ± 3.55) (p < 0.01). On the other hand, the plasma HO-1 levels were higher in the chronic active group (4.43 ± 3.00) than the chronic stable group (2.74 ± 2.23) (p < 0.05). However, the plasma HO-1 levels in the chronic stable group (2.74 ± 2.23) remained higher than the levels in the healthy control group (1.03 ± 0.63) (p < 0.05). The HO-1 levels were positively correlated with the C-reactive protein (CRP) levels in patients with brucellosis (r = 0.707, p < 0.01).
Conclusions: The plasma HO-1 levels can reflect patients' brucellosis status and may be used as a supplementary plasma marker for diagnosing brucellosis and monitoring its treatment.
abstract_id: PUBMED:37640734
Co-delivery of streptomycin and hydroxychloroquine by labeled solid lipid nanoparticles to treat brucellosis: an animal study. Can brucellosis-related biochemical and immunological parameters be used as diagnostic and treatment indicators? The goal of this project was to look at biochemical parameters, trace elements, and inflammatory factors in the acute and chronic stages of brucellosis after treatment with streptomycin and hydroxychloroquine-loaded solid lipid nanoparticles (STR-HCQ-SLN). The double emulsion method was used for the synthesis of nanoparticles. Serum levels of trace elements, vitamin D, CRP, and biochemical parameters were measured in rats with brucellosis. The therapeutic effect of STR-HCQ-SLN was compared with that of free drugs. In both healthy and infected rats, serum concentrations of copper, zinc, iron, magnesium, potassium, and biochemical parameters of the liver were significantly different. By altering the serum levels of the aforementioned factors, treatment with STR-HCQ-SLN had a positive therapeutic effect on chronic brucellosis. Vitamin D levels declined (46.4%) and CRP levels rose (from 7.5 mg to less than 1 mg) throughout the acute and chronic stages of brucellosis. This study showed that comparing the biochemical parameters and the levels of trace elements in the serum of healthy and diseased animals in the acute and chronic stages of brucellosis may complement other routine methods of diagnosis.
abstract_id: PUBMED:22479178
In situ microscopy analysis reveals local innate immune response developed around Brucella infected cells in resistant and susceptible mice. Brucella are facultative intracellular bacteria that chronically infect humans and animals causing brucellosis. Brucella are able to invade and replicate in a broad range of cell lines in vitro, however the cells supporting bacterial growth in vivo are largely unknown. In order to identify these, we used a Brucella melitensis strain stably expressing mCherry fluorescent protein to determine the phenotype of infected cells in spleen and liver, two major sites of B. melitensis growth in mice. In both tissues, the majority of primary infected cells expressed the F4/80 myeloid marker. The peak of infection correlated with granuloma development. These structures were mainly composed of CD11b⁺ F4/80⁺ MHC-II⁺ cells expressing iNOS/NOS2 enzyme. A fraction of these cells also expressed CD11c marker and appeared similar to inflammatory dendritic cells (DCs). Analysis of genetically deficient mice revealed that differentiation of iNOS⁺ inflammatory DC, granuloma formation and control of bacterial growth were deeply affected by the absence of MyD88, IL-12p35 and IFN-γ molecules. During chronic phase of infection in susceptible mice, we identified a particular subset of DC expressing both CD11c and CD205, serving as a reservoir for the bacteria. Taken together, our results describe the cellular nature of immune effectors involved during Brucella infection and reveal a previously unappreciated role for DC subsets, both as effectors and reservoir cells, in the pathogenesis of brucellosis.
Answer: Yes, thrombocyte parameters could be considered as inflammatory markers in brucellosis. A study indicated that mean platelet volume (MPV) and platelet count (PC) may serve as inflammatory markers in this disease. The research found that MPV values were significantly elevated post-treatment compared to pre-treatment, and there was a reduction in CRP, ESR, and PC post-treatment. Additionally, a direct correlation between ESR and PC values and an inverse correlation between ESR and MPV were observed in the pre-treatment period. However, a dependent predictive factor for brucella tube agglutination titers (BSTAT) was not identified in multivariate logistic regression analysis (PUBMED:23348158). |
Instruction: Are there common triggers of preterm deliveries?
Abstracts:
abstract_id: PUBMED:11426894
Are there common triggers of preterm deliveries? Objective: To assess the effect(s) of transient events which are perceived as stressful on the inception of preterm delivery.
Design: A case-control study, with immature infants as cases and borderline term babies as controls.
Setting: A teaching maternity hospital in Athens.
Population: All infants born at less than 37 weeks of gestation, during a twelve-month period.
Methods: Information was collected about maternal socio-demographic and lifestyle characteristics, clinical variables and stressful events occurring within two weeks prior to delivery.
Main Outcome Measures: Factors affecting the risk of preterm delivery.
Results: Extreme prematurity (<33 weeks) is more common among younger (<25 years of age) and older (>29 years of age) women and is positively associated with parity, body mass index and smoking, whereas it is inversely associated with educational level, regular physical exercise and serious nausea/vomiting. After controlling for these factors, however, only coitus during the last weeks of pregnancy had a significant triggering effect on prematurity (P = 0.004, odds ratio 3.21, 95% CI 1.45 to 7.09 for very immature babies, and P = 0.04, OR = 2.20, 95% CI 1.03 to 4.70 for immature babies). On the contrary, several events perceived as stressful, such as illness of relatives or friends, husband's departure, loss of employment, were unrelated to the onset of premature labour.
Conclusions: Coitus during the last few weeks of pregnancy appears to increase the risk of preterm delivery, while a possible detrimental effect of physical exertion seems more limited. Stressful events should not receive undue attention as possible causes of preterm delivery.
abstract_id: PUBMED:32783334
Maternal characteristics and neonatal outcomes of emergency repeat caesarean deliveries due to early-term spontaneous labour onset. Background: The optimal timing of elective repeat caesarean delivery has yet to be determined. One of the reasons to schedule an elective repeat caesarean delivery before 39 weeks gestation is to avoid emergency caesarean delivery due to spontaneous onset of labour.
Aims: By ascertaining maternal characteristics and neonatal outcomes associated with early-term onset of spontaneous labour, we aim to determine the optimal timing for each individual repeat caesarean delivery.
Materials And Methods: We performed a retrospective analysis of women with repeat caesarean deliveries planned at 38 weeks gestation between 2005 and 2019 at a tertiary referral hospital in Japan. A multivariate logistic regression analysis was adopted to identify independent contributing factors for early-term spontaneous labour onset. We also compared the rate of neonatal adverse events between women who underwent emergency repeat caesarean deliveries due to the onset of early-term labour and the ones who underwent elective repeat caesarean deliveries at 38 weeks.
Results: We included 1152 women. History of vaginal deliveries (adjusted odds ratio (AOR), 2.12; 95% confidence interval (95% CI), 1.21-3.74), history of preterm deliveries (AOR, 2.28; 95% CI, 1.38-3.77), and inadequate maternal weight gain during pregnancy (AOR, 1.78; 95% CI, 1.15-2.75) significantly increased the risk of early-term spontaneous labour onset. In terms of occurrence rate of neonatal complications, we found no significant difference between the groups.
Conclusion: These maternal factors are significant predictors for early-term labour onset of repeat caesarean deliveries. The onset of early-term labour did not increase the likelihood of neonatal complications.
abstract_id: PUBMED:36277462
Trends and risk factors in tribal vs nontribal preterm deliveries in Gujarat, India. Background: Although risk factors of preterm deliveries across the world have been extensively studied, the trends and risk factors of preterm deliveries for the population of rural India, and specifically tribal women, remain unexplored.
Objective: The aim of this study was to assess and compare the preterm delivery rates among women from a rural area in Gujarat, India, based on socioeconomic and clinical factors. The second aim of the study was to assess and identify predictors or risk factors for preterm deliveries.
Study Design: This was a retrospective medical record review study investigating deliveries that took place at the Kasturba Maternity Hospital in Jhagadia, Gujarat, from January 2012 to June 2019 (N=32,557). We performed odds ratio and adjusted odds ratio analyses of preterm delivery risk factors. Lastly, we also considered the neonatal outcomes of preterm deliveries, both overall and comparing tribal and nontribal mothers.
Results: For the study period, the tribal preterm delivery rate was 19.7% and the nontribal preterm delivery rate was 13.9%; the rate remained consistent for both groups over the 7-year study period. Adjusted odds ratios indicated that tribal status (adjusted odds ratio, 1.16; 95% confidence interval, 1.08-1.24), maternal illiteracy (adjusted odds ratio, 1.29; 95% confidence interval, 1.18-1.42), paternal illiteracy (adjusted odds ratio, 1.27; 95% confidence interval, 1.15-1.41), hemoglobin <10 g/dL (adjusted odds ratio, 1.41; 95% confidence interval, 1.32-1.51), and a lack of antenatal care (adjusted odds ratio, 2.15; 95% confidence interval, 1.94-2.37) are significantly associated with higher odds of preterm delivery. The overall stillbirth rate among tribal women was 3.06% and 1.73% among nontribal women; among preterm deliveries, tribal women have a higher proportion of stillbirth outcomes (11.77%) than nontribal women (8.86%).
Conclusion: Consistent with existing literature, risk factors for preterm deliveries in rural India include clinical factors such as a lack of antenatal care and low hemoglobin. In addition, sociodemographic factors, such as tribal status, are independently associated with higher odds of delivering preterm. The higher rates of preterm deliveries among tribal women need to be studied further to detail the underlying reasons of how it can influence a woman's delivery outcome.
abstract_id: PUBMED:27680098
Is maternal trait anxiety a risk factor for late preterm and early term deliveries? Background: Anxiety is associated with preterm deliveries in general (before week 37 of pregnancy), but is that also true for late preterm (weeks 34/0-36/6) and early term deliveries (weeks 37/0-38/6)? We aim to examine this association separately for spontaneous and provider-initiated deliveries.
Methods: Participants were pregnant women from the Norwegian Mother and Child Cohort Study (MoBa), which has been following 95 200 pregnant women since 1999. After excluding pregnancies with serious health complications, 81 244 participants remained. National ultrasound records were used to delineate late preterm, early term, and full-term deliveries, which then were subdivided into spontaneous and provider-initiated deliveries. We measured trait anxiety based on two ratings of the anxiety items on the Symptom Checklist-8 (Acta Psychiatr Scand 87:364-7, 1993). Trait anxiety was categorized by splitting the score at the mean and at ±2 standard deviations.
Results: Trait anxiety was substantially associated with late preterm and early term deliveries after adjusting for confounders. In the whole sample, women with the highest anxiety scores (+2 standard deviations) were more likely (odds ratio (OR) = 1.7; 95% confidence interval (CI) 1.3-2.0) to deliver late preterm than women with the lowest anxiety scores. Their odds of delivering early term were also high (OR = 1.4; CI 1.3-1.6). Women with spontaneous deliveries and the highest anxiety scores had higher odds (OR = 1.4; CI 1.1-1.8) of delivering late preterm and early term (OR = 1.3; CI = 1.3-1.5). The corresponding odds for women with provider-initiated deliveries were OR = 1.7 (CI = 1.2-2.4) for late preterm and OR = 1.3 for early term (CI = 1.01-1.6). Irrespective of delivery onset, women with provider-initiated deliveries had higher levels of anxiety than women delivering spontaneously. However, women with high anxiety were equally likely to have provider-initiated or spontaneous deliveries.
Conclusions: This study is the first to show substantial associations between high levels of trait anxiety and late preterm delivery. Increased attention should be given to the mechanism underlying this association, including factors preceding the pregnancy. In addition, acute treatment should be offered to women displaying high levels of anxiety throughout pregnancy to avoid suffering for the mother and the child.
abstract_id: PUBMED:25190352
Trends in preterm birth in singleton deliveries in a Hong Kong population. Objective: To examine trends in preterm birth and its relationship with perinatal mortality in Hong Kong.
Methods: In a retrospective cohort study, data were reviewed from singletons delivered between 1995 and 2011 at a university teaching hospital. Trends in preterm birth (between 24 and 36 weeks of pregnancy), perinatal mortality, and subtypes of preterm birth (spontaneous, iatrogenic, and following preterm premature rupture of membranes [PPROM]) were examined via linear regression.
Results: There were 103 364 singleton deliveries, of which 6722 (6.5%) occurred preterm, including 1835 (1.8%) early preterm births (24-33 weeks) and 4887 (4.7%) late preterm births (34-36 weeks). Frequency of preterm birth remained fairly consistent over the study period, but that of spontaneous preterm birth decreased by 25% (β=-0.83; P<0.001), from 4.5% to 3.8%. Frequency of preterm birth following PPROM increased by 135% (β=0.82; P<0.001), from 0.7% to 1.7%. The perinatal mortality rate decreased from 56.7 to 37.0 deaths per 1000 deliveries before 37 weeks (β=-0.16; P=0.54). Early preterm birth contributed to 16.0% of all deaths.
Conclusion: Although the overall rate of preterm birth in Hong Kong has remained constant, the frequencies of its subtypes have changed. Overall perinatal mortality is gradually decreasing, but early preterm birth remains a major contributor.
abstract_id: PUBMED:22359730
The Use of Total Cervical Occlusion along with McDonald Cerclage in Patients with Recurrent Miscarriage or Preterm Deliveries. Objectives: To study the fetal outcome with the use of McDonald cerclage and total cervical occlusion in women with recurrent mid-trimester miscarriages or preterm deliveries, as well as complications of total cervical occlusion in the women.
Methods: Prospective descriptive observational study on patients with two or more mid-trimester miscarriages, deliveries before 36 weeks, or patients who have experienced failure of transvaginal cerclage.
Results: Twenty-six women were studied. Of these, 92% delivered at term. Two women delivered at 33 and 35 weeks, respectively. There was one neonatal death. Take home baby rate was 96.2%. There was no serious maternal morbidity among the patients.
Conclusion: The addition of external cervical os occlusion to McDonald cerclage could improve fetal outcome in women with recurrent mid-trimester miscarriages and preterm deliveries.
abstract_id: PUBMED:7170921
Preterm deliveries in twin pregnancies in Aberdeen. The incidence of preterm delivery was 28.2% of all twin deliveries. Preterm delivery was common in monozygotic twins due to a high incidence of spontaneous rupture of the membranes. In addition, preterm delivery was associated more often with boy rather than girl infants.
abstract_id: PUBMED:31138940
Clinical and sociodemographic correlates of preterm deliveries in two tertiary hospitals in southern Nigeria. Background: To determine the prevalence of preterm delivery and identify the associated risk factors.
Design: This was a five-month prospective case-control study of two cohorts of women who had preterm and term deliveries.
Setting: Central Hospital (CH), Warri, and Delta State University Teaching Hospital (DELSUTH), Oghara, both in southern Nigeria.
Participants: 522 women, consisting of 174 who presented in preterm labour or with preterm prelabour rupture of membranes (cases) and 348 parturients with term deliveries (controls).
Interventions: The study was conducted from May 1st 2015 to September 30th 2015. Socio - demographic characteristics, past gynaecological/obstetric factors, maternal/obstetric factors, and fetal outcomes were compared, and associations between these variables and gestational age at delivery were determined.
Main Outcome Measures: Prevalence of preterm delivery, associated clinical and socio-demographic correlates, and fetal salvage rates.
Results: The incidence of preterm birth was 16%. Maternal age (p < 0.002), parity (p < 0.000), booking status (p < 0.000), and socio - economic class (p < 0.000) were significantly associated with preterm births. Others were multiple pregnancy (p < 0.000), pre - eclampsia/eclampsia (p < 0.000), anaemia (p < 0.000), malaria (p < 0.000), UTI (p < 0.012), premature rupture of membrane (p < 0.000) and antepartum haemorrhage (p < 0.000). Fetal salvage rate was zero for extreme preterm neonates and 100% at late preterm.
Conclusion: Preterm birth was common, with well-defined correlates and predictors. The fetal salvage rates were significantly different across the categories of preterm neonates.
Funding: The study was self-funded by the authors.
abstract_id: PUBMED:28283205
Chorioamnionitis caused by Serratia marcescens in a healthy pregnant woman with preterm premature rupture of membranes: A rare case report and review of the literature. The incidence of chorioamnionitis varies widely. The highest incidence is reported in preterm deliveries. Among preterm deliveries, chorioamnionitis usually occurs after preterm premature rupture of membranes (PPROM). To date, only five cases of chorioamnionitis due to Serratia marcescens have been reported. Here we present a case of a pregnant woman with chorioamnionitis due to Serratia marcescens who delivered a premature neonate at 28 weeks and four days of gestation. We also conducted a review of the literature in order to identify and characterize the clinical presentation and outcomes of this rare infection. A 36-year-old female (gravida 9, para 6) was admitted with cervical effacement of 16 mm and intact membranes at a gestational age of 25 weeks and five days. One week following her admission, PPROM was noticed. Treatment with the standard antibiotic regimen for PPROM was initiated. Thirteen days after the diagnosis of PPROM (28 weeks and four days) she developed chills, abdominal pain, subfebrile fever, tachycardia, leukocytosis and fetal tachycardia, and a clinical diagnosis of chorioamnionitis was made. An urgent CS was performed. On the first post-operative day the patient developed a surgical site infection. Cultures obtained from the purulent discharge of the wound, as well as cultures from the placenta and uterine cavity that were obtained during surgery, grew Serratia marcescens. The patient was treated with Meropenem for six days, with a good clinical response. We present a rare case of nosocomially acquired Serratia marcescens chorioamnionitis in a patient with PPROM. This case emphasizes the need for good infection control measures. Our favorable outcome, together with the scarce reports in the literature, adds insight into this type of rare infection.
abstract_id: PUBMED:30461011
Maternal serum vitamin D levels and preterm delivery among low-risk parturients in Lagos, Nigeria. Objective: To determine the association between low maternal vitamin D levels and preterm delivery among parturients in Lagos, Nigeria.
Methods: The present study was an analytical cross-sectional study of women with preterm deliveries (defined as <37 weeks) and women with term deliveries (defined as ≥37 weeks) at the labor unit of Lagos University Teaching Hospital between December 1, 2015, and October 31, 2016. Relevant information was obtained via a proforma, and maternal venous samples were collected immediately after delivery. Serum 25-hydroxy vitamin D was determined by a vitamin D enzyme-linked immunoassay kit.
Results: The study enrolled 103 women in each group. The overall prevalence of vitamin D deficiency was 14.1% (29/206). 24 (23.3%) women with preterm delivery had low serum vitamin D (<30 ng/mL) as compared with only 5 (4.9%) women with term delivery (P<0.001). Compared with normal serum vitamin D levels, low maternal vitamin D had an approximately nine-fold higher likelihood of preterm delivery (adjusted odds ratio 9.41, 95% confidence interval 2.42-36.54; P<0.001).
Conclusion: The prevalence of serum vitamin D deficiency was higher among women with preterm delivery than among those with term delivery. The potential role of prenatal vitamin D supplementation in the prevention of preterm delivery should be further investigated.
Answer: Yes, there are several common triggers and risk factors associated with preterm deliveries. These include:
1. Maternal age, with extreme prematurity being more common among younger (<25 years) and older (>29 years) women (PUBMED:11426894).
2. High parity, body mass index, and smoking are positively associated with preterm delivery, while higher educational level, regular physical exercise, and serious nausea/vomiting are inversely associated (PUBMED:11426894).
3. Coitus during the last few weeks of pregnancy has been found to significantly increase the risk of preterm delivery (PUBMED:11426894).
4. History of vaginal deliveries, history of preterm deliveries, and inadequate maternal weight gain during pregnancy are significant predictors for early-term labor onset in repeat caesarean deliveries (PUBMED:32783334).
5. Socioeconomic and clinical factors such as tribal status, maternal and paternal illiteracy, low hemoglobin levels, and lack of antenatal care are significantly associated with higher odds of preterm delivery (PUBMED:36277462).
6. High levels of maternal trait anxiety have been substantially associated with late preterm and early term deliveries (PUBMED:27680098).
7. A history of recurrent mid-trimester miscarriage or preterm delivery identifies women at increased risk of delivering preterm again; in such patients, the addition of total cervical occlusion to McDonald cerclage has been reported to improve fetal outcomes (PUBMED:22359730).
8. Clinical and sociodemographic factors such as maternal age, parity, booking status, socioeconomic class, multiple pregnancy, pre-eclampsia/eclampsia, anaemia, malaria, UTI, premature rupture of membrane, and antepartum haemorrhage are associated with preterm births (PUBMED:31138940).
9. Chorioamnionitis, including rare cases caused by organisms such as Serratia marcescens, can trigger preterm delivery and most often arises after preterm premature rupture of membranes (PPROM) (PUBMED:28283205).
10. Low maternal vitamin D levels have been associated with an increased likelihood of preterm delivery (PUBMED:30461011).
These triggers and risk factors can vary based on individual circumstances, geographic location, and population demographics. It is important to note that while some triggers can be managed or mitigated, others may be less controllable. |
Instruction: Patterns of relapse of N2 nonsmall-cell lung carcinoma patients treated with preoperative chemotherapy: should prophylactic cranial irradiation be reconsidered?
Abstracts:
abstract_id: PUBMED:11413530
Patterns of relapse of N2 nonsmall-cell lung carcinoma patients treated with preoperative chemotherapy: should prophylactic cranial irradiation be reconsidered? Background: Although it induces a relevant reduction in the risk of both visceral metastases and locoregional recurrences, the combination of chemotherapy and surgery only marginally improves the survival of patients with Stage IIIA(N2) (International Union Against Cancer staging and classification system) nonsmall-cell lung carcinoma (NSCLC). The purpose of the current study was to analyze the patterns of relapse in these patients.
Methods: In this study, the authors compared the patterns of relapse in 81 patients with clinically detectable N2 NSCLC who had been treated with preoperative chemotherapy with the relapse patterns of 186 comparable patients who had been treated with primary surgery. Clinically detectable N2 (cN2) denotes mediastinal lymph node enlargement on computed tomography scan, which then is confirmed by mediastinoscopy.
Results: Overall, 20% of patients developed a locoregional recurrence. Chemotherapy decreased the risk of visceral metastasis, as 28% of the patients preoperatively treated and 38% of those not treated with preoperative chemotherapy presented a visceral metastasis (P < 0.05). Preoperative chemotherapy and adenocarcinoma subtypes were associated with a higher rate of brain metastasis (P < 0.05). Thirty-two percent of the patients treated preoperatively and 18% of those not treated with preoperative chemotherapy presented a brain metastasis (P < 0.05), which was isolated in 22% and 11% of the patients, respectively (P < 0.05).
Conclusion: The current study found that preoperative chemotherapy for cN2 decreases the risk of visceral metastasis but is associated with a high rate of isolated brain metastases. Prophylactic cranial irradiation may need to be reinvestigated in clinical trials, especially in patients who present with an adenocarcinoma.
abstract_id: PUBMED:27396646
Prophylactic cranial irradiation for patients with lung cancer. The incidence of brain metastases in patients with lung cancer has increased as a result of improved local and systemic control and better diagnosis from advances in brain imaging. Because brain metastases are responsible for life-threatening symptoms and serious impairment of quality of life, resulting in shortened survival, prophylactic cranial irradiation has been proposed in both small-cell lung cancer (SCLC) and non-small-cell lung cancer (NSCLC) to try to reduce the incidence of brain metastasis and to improve survival and, eventually, quality of life. Findings from randomised controlled trials and a meta-analysis have shown that prophylactic cranial irradiation not only reduces the incidence of brain metastases in patients with SCLC and with non-metastatic NSCLC, but also improves overall survival in patients with SCLC who respond to first-line treatment. Although prophylactic cranial irradiation is potentially associated with neurocognitive decline, this risk needs to be balanced against the potential benefit in terms of brain metastases incidence and survival. Several strategies to reduce neurotoxicity are being investigated.
abstract_id: PUBMED:11576722
The role of prophylactic cranial irradiation in the treatment of lung cancer. Patients with lung cancer face concurrent risks of local, regional and distant failure. The brain is one of the major sites of distant relapse, and the prevention of cerebral metastasis has therefore attracted growing interest. A recent meta-analysis has confirmed the benefit of prophylactic cranial irradiation in patients with limited disease small-cell lung cancer in complete remission following induction therapy. In non-small-cell lung cancer, aggressive multimodality therapy regimens including surgery have achieved locoregional control rates of 50% and higher. In these patient groups the relatively high incidence of brain relapses as a site of first failure causes substantial morbidity and worsens the prognosis. Given the proven efficacy of prophylactic cranial irradiation (PCI) to prevent metastases to the brain, the introduction of PCI into the treatment of non-small cell lung cancer in the curative setting seems promising.
abstract_id: PUBMED:17562236
Prophylactic cranial irradiation for patients with lung cancer. The central nervous system is a common site of metastasis in patients with small cell lung cancer (SCLC) and non-small-cell lung cancer. Despite advances in combined modality therapy, intracranial relapse remains common and a major cause of morbidity for patients with lung cancer. Prophylactic cranial irradiation (PCI) has proven to be effective in reducing the incidence of brain metastases in patients with lung cancer. Based upon results of a meta-analysis demonstrating a small improvement in overall survival, PCI is now routinely offered to patients with limited-stage SCLC after a complete or near-complete response to initial treatment. However, many questions remain unanswered regarding the optimal dose, fractionation, and toxicity of PCI in patients with limited-stage SCLC. Additionally, the role of PCI in patients with extensive-stage SCLC and locally advanced non-small-cell lung cancer is unclear. Several important collaborative group trials are under way in an attempt to further define the role of PCI in patients with lung cancer.
abstract_id: PUBMED:23306141
Present role of prophylactic cranial irradiation Prophylactic cranial irradiation (PCI) plays a role in the management of lung cancer patients, especially small cell lung cancer (SCLC) patients. As multimodality treatments are now able to ensure better local control and a lower rate of extracranial metastases, brain relapse has become a major concern in lung cancer. As survival is poor after development of brain metastases (BM) in spite of specific treatment, PCI has been introduced in the 1970's. PCI has been evaluated in randomized trials in both SCLC and non-small cell lung cancer (NSCLC) to reduce the incidence of BM and possibly increase survival. PCI reduces significantly the BM rate in both limited disease (LD) and extensive disease (ED) SCLC and in non-metastatic NSCLC. Considering SCLC, PCI significantly improves overall survival in LD (from 15 to 20% at 3 years) and ED (from 13 to 27% at 1 year) in patients who respond to first-line treatment; it should thus be part of the standard treatment in all responders in ED and in good responders in LD. No dose-effect relationship for PCI was demonstrated in LD SCLC patients so that the recommended dose is 25 Gy in 10 fractions. In NSCLC, even if the risk of brain dissemination is lower than in SCLC, it has become a challenging issue. Studies have identified subgroups at higher risk of brain failure. There are more local treatment possibilities for BM related to NSCLC, but most BM will eventually recur so that PCI should be reconsidered. Few randomized trials have been performed. Most of them could demonstrate a decreased incidence of BM in patients with PCI, but they were not able to show an effect on survival as they were underpowered. New trials are needed. Among long-term survivors, neuro-cognitive toxicity may be observed. Several approaches are being evaluated to reduce this possible toxicity. PCI has no place for other solid tumours at risk such as HER2+ breast cancer patients.
abstract_id: PUBMED:15988684
Prophylactic cranial irradiation with combined modality therapy for patients with locally advanced non-small cell lung cancer. Central nervous system (CNS) metastasis is a significant problem for many patients with non-small cell lung cancer (NSCLC). The earlier data reported a high incidence of CNS metastasis in patients with locally advanced NSCLC who were treated with radiotherapy alone. However, poor control of both thoracic and extracranial systemic disease dominated the results of the early trials. The risk for CNS metastasis as the first site of failure remains a significant concern for patients who have completed modern combined modality therapy. With improvements in the treatment of thoracic and systemic disease, there is renewed interest in prophylactic cranial irradiation (PCI). The results from the Radiation Therapy Oncology Group (RTOG) trial of PCI to prevent CNS relapse in patients with locally advanced NSCLC are anticipated.
abstract_id: PUBMED:10561344
Prophylactic cranial irradiation in locally advanced non-small-cell lung cancer after multimodality treatment: long-term follow-up and investigations of late neuropsychologic effects. Purpose: Relapse pattern and late toxicities in long-term survivors were analyzed after the introduction of prophylactic cranial irradiation (PCI) into a phase II trial on trimodality treatment of locally advanced (LAD) non-small-cell lung cancer (NSCLC).
Patients And Methods: Seventy-five patients with stage IIIA(N2)/IIIB NSCLC were treated with induction chemotherapy, preoperative radiochemotherapy, and surgery. PCI was routinely offered during the second period of study accrual. Patients were given a total radiation dose of 30 Gy (2 Gy per daily fraction) over a 3-week period starting 1 day after the last chemotherapy cycle.
Results: Introduction of PCI reduced the rate of brain metastases as first site of relapse from 30% to 8% at 4 years (P =.005) and that of overall brain relapse from 54% to 13% (P <.0001). The effect of PCI was also observed in the good-prognosis subgroup of 47 patients who had a partial response or complete response to induction chemotherapy, with a reduction of brain relapse as first failure from 23% to 0% at 4 years (P =.01). Neuropsychologic testing revealed impairments in attention and visual memory in long-term survivors who received PCI as well as in those who did not receive PCI. T2-weighted magnetic resonance imaging revealed white matter abnormalities of higher grades in patients who received PCI than in those who did not.
Conclusion: PCI at a moderate dose reduced brain metastases in LAD-NSCLC to a clinically significant extent, comparable to that in limited-disease small-cell lung cancer. Late toxicity to normal brain was acceptable. This study supports the use of PCI within intense protocols for LAD-NSCLC, particularly in patients with favorable prognostic factors.
abstract_id: PUBMED:15913909
Prophylactic cranial irradiation for preventing brain metastases in patients undergoing radical treatment for non-small-cell lung cancer: a Cochrane Review. Purpose: To investigate whether prophylactic cranial irradiation (PCI) has a role in the management of patients with non-small-cell lung cancer (NSCLC) treated with curative intent.
Methods And Materials: A search strategy was designed to identify randomized controlled trials (RCTs) comparing PCI with no PCI in NSCLC patients treated with curative intent. The electronic databases MEDLINE, EMBASE, LILACS, and Cancerlit were searched, along with relevant journals, books, and review articles to identify potentially eligible trials. Four RCTs were identified and reviewed. A total of 951 patients were randomized in these RCTs, of whom 833 were evaluable and reported. Forty-two patients with small-cell lung cancer were excluded, leaving 791 patients in total. Because of the small patient numbers and trial heterogeneity, no meta-analysis was attempted.
Results: Prophylactic cranial irradiation did significantly reduce the incidence of brain metastases in three trials. No trial reported a survival advantage with PCI over observation. Toxicity data were poorly collected and no quality of life assessments were carried out in any trial.
Conclusion: Prophylactic cranial irradiation may reduce the incidence of brain metastases, but there is no evidence of a survival benefit. It was not possible to evaluate whether any radiotherapy regimen is superior, and the effect of PCI on quality of life is not known. There is insufficient evidence to support the use of PCI in clinical practice. Where possible, patients should be offered entry into a clinical trial.
abstract_id: PUBMED:28411884
Risk factors for relapse of resectable pathologic N2 non-small cell lung cancer and prediction model for time-to-progression. Background: Pathologic N2 non-small-cell lung cancer (NSCLC) has been shown in the literature to carry poor survival. In this study, we retrospectively reviewed patients with pathologic N2 NSCLC who underwent anatomic resection (i.e. lobectomy) for relapse risk factor analysis. The aim of this study is to identify the clinicopathologic factors related to relapse among resectable N2 NSCLC patients and to help clinicians develop individualized follow-up programs and treatment plans.
Method: From January 2005 to July 2012, 90 patients diagnosed with pathologic N2 NSCLC were enrolled in this study. We retrospectively reviewed medical records, imaging studies, and pathology reports to collect the patients' clinicopathologic factors.
Result: We identified that visceral pleural invasion (p = 0.001) and skip metastases along the mediastinal lymph node stations (p = 0.01) were significantly associated with distant and disseminated metastases. Patients who had 2 or more risk factors for relapse demonstrated poorer disease-free survival than those who had fewer than 2 risk factors (p = 0.02). The number of involved metastatic areas significantly influenced time-to-progression, and the duration of time-to-progression was correlated with the square of the number of involved metastatic areas (Pearson correlation coefficient = -0.29; p = 0.036).
Conclusion: Relapse risk factors in resectable pathologic N2 NSCLC patients after anatomic resection were visceral pleural invasion, skip mediastinal lymph node involvement, and the receipt of neoadjuvant therapy. The duration of time-to-progression was correlated with the square of the number of involved metastatic areas.
abstract_id: PUBMED:1651304
Prophylactic cranial irradiation for lung cancer patients at high risk for development of cerebral metastasis: results of a prospective randomized trial conducted by the Radiation Therapy Oncology Group. Beginning in February 1984, 187 evaluable patients with adenocarcinoma or large cell carcinoma of the lung clinically confined to the chest were randomized to receive either conventionally fractionated thoracic irradiation alone or thoracic irradiation with concurrent, prophylactic cranial irradiation. The study population included 161 patients treated for medically or surgically inoperable primary cancers, and 26 patients undergoing adjuvant postoperative mediastinal irradiation following attempted curative resection of primary cancers found to have metastasized to hilar or mediastinal lymph nodes. Elective brain irradiation was not effective in preventing the clinical appearance of brain metastases, although the time to develop brain metastases appears to have been delayed. Eighteen of 94 patients (19%) randomized to chest irradiation alone have developed brain metastases as opposed to 8/93 patients (9%) randomized to receive prophylactic cranial irradiation (p = .10). No survival difference was observed between the treatment arms. Among the 26 patients undergoing prior resection of all gross intrathoracic disease, brain metastases were observed in 3/12 patients (25%) receiving adjuvant chest irradiation alone, compared to none of 14 receiving prophylactic cranial irradiation (p = .06). In the absence of fully reliable therapy for the primary disease, and without effective systemic therapy preventing dissemination to other, extrathoracic sites, prophylactic cranial irradiation for inoperable non-small cell lung cancer cannot be justified in routine clinical practice. Further investigation in the adjuvant, postoperative setting may be warranted.
Answer: The relapse-pattern study of clinically detectable N2 non-small cell lung carcinoma (NSCLC) (PUBMED:11413530) found that preoperative chemotherapy decreases the risk of visceral metastasis but is associated with a high rate of isolated brain metastases. Specifically, 32% of patients treated preoperatively developed brain metastases compared with 18% of those not treated with preoperative chemotherapy. This suggests that prophylactic cranial irradiation (PCI) may need to be reinvestigated in clinical trials, especially for patients with adenocarcinoma subtypes.
The role of PCI in lung cancer has been discussed in several studies. A meta-analysis confirmed the benefit of PCI in patients with limited-disease small-cell lung cancer in complete remission following induction therapy, and in NSCLC, where aggressive multimodality regimens including surgery have achieved locoregional control rates of 50% and higher, the relatively high incidence of brain relapse as a site of first failure makes the introduction of PCI in the curative setting appear promising (PUBMED:11576722). Nevertheless, the optimal dose, fractionation and toxicity of PCI, as well as its role in locally advanced NSCLC, remain unclear (PUBMED:17562236).
Furthermore, PCI has been shown to significantly reduce the incidence of brain metastases in both limited disease (LD) and extensive disease (ED) SCLC and in non-metastatic NSCLC (PUBMED:23306141). In SCLC, PCI significantly improves overall survival in LD and ED in patients who respond to first-line treatment, indicating that it should be part of the standard treatment in all responders in ED and in good responders in LD.
In the context of locally advanced NSCLC, the introduction of PCI reduced the rate of brain metastases as the first site of relapse and overall brain relapse (PUBMED:10561344). However, a Cochrane Review indicated that while PCI may reduce the incidence of brain metastases, there is no evidence of a survival benefit, and the effect of PCI on quality of life is not known (PUBMED:15913909).
Given these findings, the consideration of PCI for patients with N2 NSCLC treated with preoperative chemotherapy seems warranted, particularly in light of the high rate of isolated brain metastases observed in this group. Clinical trials and further research are needed to determine the optimal use of PCI in this setting, balancing the potential benefits in reducing brain metastases against the risks of neurocognitive decline and other toxicities. |
Instruction: Minimally displaced clavicle fracture after high-energy injury: are they likely to displace?
Abstracts:
abstract_id: PUBMED:24869608
Minimally displaced clavicle fracture after high-energy injury: are they likely to displace? Background: Nondisplaced or minimally displaced clavicle fractures are often considered to be benign injuries. These fractures in the trauma patient population, however, may deserve closer follow-up than their low-energy counterparts. We sought to determine the initial assessment performed on these patients and the rate of subsequent fracture displacement in patients sustaining high-energy trauma when a supine chest radiograph on initial trauma survey revealed a well-aligned clavicle fracture.
Methods: We retrospectively reviewed the cases of trauma alert patients who sustained a midshaft clavicle fracture (AO/OTA type 15-B) with less than 100% displacement treated at a single level 1 trauma centre between 2005 and 2010. We compared fracture displacement on initial supine chest radiographs and follow-up radiographs. Orthopedic consultation and the type of imaging studies obtained were also recorded.
Results: Ninety-five patients with clavicle fractures met the inclusion criteria. On follow-up, 57 (60.0%) had displacement of 100% or more of the shaft width. Most patients (63.2%) in our study had an orthopedic consultation during their hospital admission, and 27.4% had clavicle radiographs taken on the day of admission.
Conclusion: Clavicle fractures in patients with a high-energy mechanism of injury are prone to fracture displacement, even when initial supine chest radiographs show nondisplacement. We recommend clavicle films as part of the initial evaluation for all patients with clavicle fractures and early follow-up within the first 2 weeks of injury.
abstract_id: PUBMED:26403878
Open reduction and plating for displaced mid third clavicle fractures - A prospective study. Introduction: Displaced middle third clavicle fractures were treated conservatively with a figure-of-'8' harness in the past. The current trend of managing displaced clavicle fractures with internal fixation provides rigid immobilization and pain relief while avoiding non-union, shortening and deformity. This study prospectively evaluates the functional outcome of 25 patients with clavicle fractures treated surgically.
Materials And Methods: 25 patients with displaced mid third clavicle fractures were included in the study. Open reduction and internal fixation with a clavicular locking plate placed superiorly was performed. Patients were followed up at 3, 6, 8, and 12 weeks. Functional outcome was assessed using DASH scores and the Simple Shoulder Test (SST). Statistical analysis was done using one-way ANOVA.
Results: Of the 26 clavicles operated on (one patient had bilateral fractures), 6 were comminuted (23%) and the rest were 2-part displaced fractures. Interfragmentary screws were used in 3 cases with a butterfly fragment. All fractures united (mean = 6.8 weeks). The DASH scores decreased to negligible levels by 8 weeks in all but 4 comminuted cases, in which it took longer than 8 weeks to reach negligible levels. The SST showed significant improvement in all cases by 8 weeks after surgery. All patients were satisfied with the outcome. 84% of patients returned to work by 6 weeks.
Conclusion: Primary plating of displaced mid third clavicle fractures with a superiorly placed locking plate avoids the complications of non-operative management and leads to early return to pre-injury activities.
abstract_id: PUBMED:26054648
Use of a real-size 3D-printed model as a preoperative and intraoperative tool for minimally invasive plating of comminuted midshaft clavicle fractures. Background: Open reduction and plate fixation is the standard operative treatment for displaced midshaft clavicle fracture. However, sometimes it is difficult to achieve anatomic reduction by open reduction technique in cases with comminution.
Methods: We describe a novel technique using a real-size three-dimensionally (3D) printed clavicle model as a preoperative and intraoperative tool for minimally invasive plating of displaced comminuted midshaft clavicle fractures. A computed tomography (CT) scan is taken of both clavicles in patients with a unilateral displaced comminuted midshaft clavicle fracture, and both clavicles are 3D printed as real-size models. Using a mirror-imaging technique, the uninjured clavicle is 3D printed as a mirrored, opposite-side model to produce a suitable replica of the fractured clavicle as it was before injury.
Results: The 3D-printed fractured clavicle model allows the surgeon to observe and manipulate accurate anatomical replicas of the fractured bone to assist in fracture reduction prior to surgery. The 3D-printed uninjured clavicle model can be utilized as a template to select the anatomically precontoured locking plate which best fits the model. The plate can be inserted through a small incision and fixed with locking screws without exposing the fracture site. Seven comminuted clavicle fractures treated with this technique achieved good bone union.
Conclusions: This technique can be used for a unilateral displaced comminuted midshaft clavicle fracture when it is difficult to achieve anatomic reduction by open reduction technique. Level of evidence V.
abstract_id: PUBMED:30332872
Minimally invasive retrograde insertion of elastic intramedullary nails for displaced clavicle fractures in children Objective: To study the application and effect of retrograde titanium elastic nail fixation, combined with closed reduction, for the treatment of displaced clavicle fractures in children.
Methods: From January 2014 to November 2016, 26 children with displaced fractures of the clavicle (14 boys and 12 girls; average age 9.2 years, range 7 to 14 years) were treated by closed reduction and retrogradely inserted titanium elastic nails. Time from injury to operation was 2 to 7 days, with an average of 2.8 days. A visual analogue scale (VAS) was used to evaluate pain in all patients before and 2 days after the operation. Neer scores of shoulder function of the affected and healthy sides were compared 2 months after the operation.
Results: All 26 children were followed up for 6 to 12 months. All cases healed well without infection, nail breakage or nail back-out complications. All children achieved anatomical reduction, good bony union, and good recovery of shoulder joint motion. The time to nail removal was 14 to 32 weeks (mean 16.25±2.62). The pain VAS score was significantly reduced 2 days after the operation (P<0.05). At 2 months after the operation, the Neer score of the shoulder joint was 98.46±1.07 on the affected side and 98.58±1.10 on the healthy side, respectively, with no significant difference between the two sides (P>0.05).
Conclusions: Titanium elastic intramedullary nail fixation for the treatment of displaced clavicular fractures in children has the advantages of minimal invasiveness, good cosmetic outcome, rapid fracture healing, good postoperative functional recovery, a simple nailing technique and few complications.
abstract_id: PUBMED:21610439
Progressive displacement of clavicular fractures in the early postinjury period. Background: Historically, minimally to moderately displaced clavicular fractures have been managed nonoperatively. However, there is no clear evidence on whether clavicular fractures can progressively displace following injury and whether such displacement might influence decisions for surgery.
Methods: We retrospectively reviewed data on 56 patients who received operative treatment for clavicular fractures at our institution from February 2002 to February 2007 and identified those patients who were initially managed nonoperatively based on radiographic evaluation (<2 cm displacement) and then subsequently went on to meet operative indications (≥2 cm displacement) as a result of progressive displacement. Standardized radiographic measurements for horizontal shortening (medial-lateral displacement) and vertical translation (cephalad-caudad displacement) were developed and used.
Results: Fifteen patients with clavicle fractures initially displaced less than 2 cm and treated nonoperatively underwent later surgery because of progressive displacement (14 diaphyseal and 1 lateral). Radiographs performed during the injury workup and at a mean of 14.8 days postinjury demonstrated that progressive deformity had taken place. Ten of 15 patients (67%) displayed progressive horizontal shortening. Average change in horizontal shortening between that of the injury radiographs and the repeat radiographs in this group was 14.3 mm (5.9-29 mm). Thirteen of 15 patients (87%) displayed progressive vertical translation. Eight of 15 patients (53%) displayed both progressive horizontal shortening and vertical translation.
Conclusion: We have demonstrated that a significant proportion of clavicle fractures (27% of our operative cases over a 5-year period) are minimally displaced at presentation, but are unstable and demonstrate progressive deformity during the first few weeks after injury. Because of this experience, we recommend close monitoring of nonoperatively managed clavicular fractures in the early postinjury period. A prudent policy is to obtain serial radiographic evaluation for 3 weeks, even for initially minimally displaced clavicle fractures.
abstract_id: PUBMED:37658855
Precontoured dynamic compression plate using patient-specific 3D-printed models in minimally invasive surgical technique for midshaft clavicle fractures. Introduction: This study introduced a novel approach for the treatment of midshaft clavicle fractures, utilizing patient-specific 3D-printed models for accurate preoperative contouring of dynamic compression plates (DCPs) and an alternative minimally invasive plate osteosynthesis (MIPO) technique with precontoured DCPs through small vertical separated incisions.
Patient And Methods: Mirror-image 3D clavicular models were reproduced from 40 patients with acute displaced midshaft clavicle fractures who underwent MIPO using precontoured DCPs inserted through small, vertical separated incisions. Exclusion criteria included open fractures, pathological fractures, ipsilateral limb injury, skeletally immature patients, and previous clavicle fractures or surgery. Postoperative evaluation was conducted using clinical and radiographic review. The Constant-Murley and American Shoulder and Elbow Surgeons Shoulder Scores were used for clinical evaluations, and the Patient and Observer Scar Assessment Scale was used to assess surgical scars.
Results: The average time to union of all fractures was 12.88 weeks (range, 8-15) without loss of reduction. The patient-specific precontoured DCPs fitted well in all cases, with fracture consolidation and at least three cortical sides connecting the fracture fragments. No hardware prominence or skin complications occurred, and clinical evaluation showed no difference compared with the contralateral sides. The average Constant-Murley and American Shoulder and Elbow Surgeons Shoulder Scores were 96.33 ± 3.66 and 93.26 ± 5.15, respectively. Two patients requested implant removal, and scar quality was satisfactory.
Conclusions: Our study demonstrated that the use of a patient-specific precontoured DCP, in combination with 3D printing technology, provides accurate preoperative planning, effective fracture reduction, and improved postoperative outcomes in displaced midshaft clavicle fractures. MIPO with a patient-specific precontoured DCP through separated vertical incisions along Langer's lines appears to be a promising option with regard to appearance, avoiding associated complications, and obviating the need for reoperation. These results suggest that this technique has merit and is a viable option for the treatment of midshaft clavicle fractures.
abstract_id: PUBMED:29122118
Safe intramedullary fixation of displaced midshaft clavicle fractures with 2.5mm Kirschner wires - technique description and a two-part versus multifragmentary fracture fixation outcome comparison. Introduction: The aim of this study was to present a modified Murray and Schwarz 2.5-mm Kirschner wire (K-wire) intramedullary (IM) technique for fixation of displaced midshaft clavicle fractures (DMCF), and to compare the differences in treatment outcome of two-part (Robinson 2B.1) and multifragmentary (Robinson 2B.2) DMCF.
Methods: A retrospective analysis of 91 patients who underwent IM fixation with a 2.5-mm K-wire for DMCF and had a 1-year post-operative follow-up between 2000 and 2012 was performed. The patients were allocated into two groups: Robinson 2B.1 (n = 64) and Robinson 2B.2 (n = 27). Assessed outcomes were non-union, reoperation rate, wire migration and infection.
Results: There was no statistically significant difference in the rate of non-union (2B.1,2B.2; 3.13%, 7.41%; p = 0.365), reoperation (2B.1, 2B.2; 3.13%, 7.41%; p = 0.365), K-wire migration (2B.1, 2B.2; 0.00%, 0.00%; p = 1.00) and clavicle shortening at 12-months (2B.1, 2B.2; 3.13%, 7.41%; p = 0.365).
Conclusion: Intramedullary clavicle fixation with a 2.5-mm K-wire is a safe surgical technique. 2B.1 injuries treated with 2.5-mm IM K-wire fixation have relatively better outcomes than displaced 2B.2 fractures for both non-union and reoperation rates. There were no occurrences of implant migration with either 2B.1 or 2B.2 injuries, and a non-significant difference in implant irritation was documented with IM K-wire fixation. The non-union rate with IM K-wire fixation of 2B.1 injuries is consistent with the published results of other IM devices, and thus this technique should be added to the surgeon's armamentarium when considering surgical treatment of such injuries.
abstract_id: PUBMED:29707047
Flexible intramedullary nailing versus nonoperative treatment for paediatric displaced midshaft clavicle fractures. Purpose: The treatment of displaced midshaft clavicle fractures in children remains controversial. The purpose of our study was to compare the outcome of displaced midshaft clavicle fractures in children who were managed operatively by flexible intramedullary nailing (FIN) with nonoperative treatment.
Methods: A prospective review of 31 children (mean age 10.5 years) with displaced midshaft clavicle fractures treated either by FIN or nonoperatively and with at least a six-month follow-up was undertaken. In all, 24 children underwent FIN and seven underwent nonoperative treatment. The patient outcomes included the Constant-Murley score, Customer Satisfaction Questionnaire (CSQ-8), numeric pain rating scale, time to union and time to return to activity. Surgical complications were recorded.
Results: The two groups were comparable with regards to age, gender and mechanism of injury. At six months of follow-up, the Constant-Murley (97.8 versus 94.7, p < 0.001) and CSQ-8 (29.1 versus 19.1, p < 0.001) scores were higher in the FIN group. Time to union and return to activity were significantly shorter in the FIN group (7.3 and 9.2 weeks versus 10.4 and 16.6 weeks respectively, p < 0.01). The only surgical complication was a FIN exchange for skin irritation due to nail prominence.
Conclusion: FIN is a minimally invasive procedure for children with displaced midshaft clavicle fractures associated with shorter time to union, quicker return to activity and higher Constant-Murley and CSQ-8 scores when compared with nonoperative treatment. However, the difference in Constant-Murley scores was not clinically significant. Furthermore, the advantages of FIN are at the expense of an increased complication rate of 12.5% (upper 95% confidence interval 33.3%).
Level Of Evidence: Therapeutic, II.
abstract_id: PUBMED:21485562
Minimally invasive treatment for fresh acromioclavicular dislocation and the distal clavicle fracture Objective: To explore the minimally invasive treatment for fresh acromioclavicular dislocation and the distal clavicle fracture.
Methods: Thirty human shoulder skeletons were measured and compared, and normal data on healthy people were obtained under ultrasound guidance. The entry point was thus located at the intersection of the long axis of the clavicle and the line from the coracoid tip to the prominence behind the conoid ligament tubercle. From January 2001 to January 2010, 127 patients with fresh acromioclavicular dislocation or distal clavicle fracture were treated with minimally invasive internal fixation after locating the entry point on the body surface. Among the patients, 97 were male and 30 were female, ranging in age from 19 to 56 years, with an average of 43 years. According to the Rockwood classification, of the 93 patients with fresh acromioclavicular dislocation, 67 were type III, 11 were type IV and 15 were type V. All 34 patients with distal clavicle fractures had an associated rupture of the coracoclavicular ligament. The duration from injury to operation ranged from 1 to 8 days. Therapeutic effects were evaluated using the University of California at Los Angeles (UCLA) shoulder scoring system.
Results: After the minimally invasive treatment, all patients had complete reduction in the early postoperative period. One hundred and thirteen patients were followed up for 13 to 15 months (average, 14 months). Nine patients had slight screw loosening within 30 days, but the reductions and function remained acceptable. Seven patients developed frozen shoulder and recovered within 6 months. The average UCLA shoulder score was 32.0 +/- 4.7; 87 patients had an excellent result, 20 good and 6 fair.
Conclusion: This minimally invasive treatment has advantages such as minimal trauma and low cost, and is worthy of clinical application.
abstract_id: PUBMED:27521136
Comparing fracture healing disorders and long-term functional outcome of polytrauma patients and patients with an isolated displaced midshaft clavicle fracture. Background: Although clavicle fractures are a common injury in polytrauma patients, the functional outcome of displaced midshaft clavicle fractures (DMCFs) in this population is unknown. Our hypothesis was that there would be no differences in fracture healing disorders or functional outcome in polytrauma patients with a DMCF compared with patients with an isolated DMCF, regardless of the treatment modality.
Methods: A retrospective cohort study of patients (treated at our level I trauma center) with a DMCF was performed and a follow-up questionnaire was administered. Polytrauma patients, defined as an Injury Severity Score ≥16, and those with an isolated clavicle fracture were compared. Fracture healing disorders (nonunion and delayed union) and delayed fixation rates were determined. Functional outcome was assessed by the Quick Disability of the Arm, Shoulder, and Hand questionnaire.
Results: A total of 152 patients were analyzed, 71 polytrauma patients and 81 patients with an isolated DMCF. Questionnaire response of 121 patients (80%) was available (mean, 53 months; standard deviation, 22 months). No differences were found between polytrauma patients and those with an isolated DMCF with regard to nonunion (7% vs. 5%, respectively), delayed union (4% vs. 4%), and delayed fixation rate (13% vs. 13%). Polytrauma patients had an overall worse functional outcome, regardless of initial nonoperative treatment or delayed operative fixation.
Conclusion: Polytrauma patients had a similar nonunion and delayed fixation rate but had an overall worse functional outcome compared with patients with an isolated DMCF. For polytrauma patients, a wait and see approach can be advocated without the risk of decreased upper extremity function after delayed fixation.
Answer: Yes, minimally displaced clavicle fractures after high-energy injury are likely to displace. A study that retrospectively reviewed trauma alert patients with midshaft clavicle fractures (AO/OTA type 15-B) with less than 100% displacement found that on follow-up, 60% of the patients had displacement of 100% or more of the shaft width. This suggests that clavicle fractures in patients with a high-energy mechanism of injury are prone to fracture displacement, even when initial supine chest radiographs show nondisplacement (PUBMED:24869608). |
Instruction: Is the ventilatory threshold coincident with maximal fat oxidation during submaximal exercise in women?
Abstracts:
abstract_id: PUBMED:11125763
Is the ventilatory threshold coincident with maximal fat oxidation during submaximal exercise in women? Background: The purpose of this study was to detect the fraction of peak oxygen consumption (VO2peak) that elicits maximal rates of fat oxidation during submaximal treadmill exercise. It was hypothesized that this point would appear at a work rate just below the ventilatory threshold.
Methods:
Experimental Design: subjects completed a protocol requiring them to exercise for 15 min on a treadmill at six different workloads, 25, 40, 55, 65, 75, and 85% VO2peak, over two separate visits.
Participants: nine healthy, moderately-trained eumenorrheic females (age = 28.8+/-5.99 yrs, VO2peak = 47.20 +/-2.57 ml x kg(-1) x min(-1)) volunteered for the study.
Measures: a one-way ANOVA with repeated measures was used to test for differences across exercise intensities in the metabolic variables (i.e. substrate oxidation, blood lactate concentration ([La-]), RER, and the contribution of fat to total energy expenditure). Following significant F ratios, post-hoc tests were used to detect differences between the means for various exercise intensities.
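A minimal sketch of the one-way repeated-measures analysis described above, written in Python with statsmodels (an assumption; the abstract does not state which software was used). The subject data are simulated, with fat oxidation peaking near 75% VO2peak purely for illustration.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(42)
intensities = [25, 40, 55, 65, 75, 85]  # %VO2peak workloads used in the protocol
rows = []
for subject in range(1, 10):  # nine participants, as in the study
    for pct in intensities:
        # simulated fat oxidation (kcal/min), peaking near 75% VO2peak
        fat_ox = 4.75 - 0.001 * (pct - 75) ** 2 + rng.normal(0, 0.3)
        rows.append({"subject": subject, "intensity": pct, "fat_ox": fat_ox})
df = pd.DataFrame(rows)

# one-way repeated-measures ANOVA across the six exercise intensities
result = AnovaRM(df, depvar="fat_ox", subject="subject", within=["intensity"]).fit()
print(result)
```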
Results: Exercise at 75% VO2peak elicited the greatest rate of fat oxidation (4.75+/-0.49 kcal x min(-1)), and this intensity was coincident with the ventilatory threshold (76+/-7.41% VO2peak). Moreover, a significant difference (t(8) = -3.98, p<0.01) was noted between the mean ventilatory threshold and lactate threshold.
Conclusions: The finding that a relatively heavy work rate elicits the highest rate of fat oxidation in an active, female population has application in exercise prescription and refutes the belief that low-intensity exercise is preferred for fat metabolism.
abstract_id: PUBMED:29541108
Exercise training at the maximal fat oxidation intensity improved health-related physical fitness in overweight middle-aged women. Background/objective: The purpose of this study was to test the hypothesis that exercise training at the maximal fat oxidation (FATmax) intensity would improve the health-related physical fitness in overweight middle-aged women.
Methods: Thirty women (45-59 years old and BMI 28.2 ± 1.8 kg/m2) were randomly allocated into the Exercise and Control groups. Body composition, FATmax, predicted maximal oxygen uptake, heart function during submaximal exercise, stroke volume, left ventricular ejection fraction, trunk muscle strength, and body flexibility were measured before and after the experimental period.
Results: Following the 10 weeks of supervised exercise training, the Exercise group achieved significant improvements in body composition, cardiovascular function, skeletal muscle strength, and body flexibility, whereas there were no changes in these variables in the Control group. There was also no significant change in daily energy intake for any participant before and after the interventions.
Conclusion: The 10-week FATmax intensity training is an effective treatment to improve health-related physical fitness in overweight middle-aged women.
abstract_id: PUBMED:26301195
Maximal Fat Oxidation Rate during Exercise in Korean Women with Type 2 Diabetes Mellitus. Background: The purpose of this study was to determine the appropriate exercise intensity associated with maximum fat oxidation, improvement of body composition, and metabolic status in Korean women with type 2 diabetes mellitus (T2DM).
Methods: The study included a T2DM group (12 women) and a control group (12 women). The groups were matched in age and body mass index. The subjects performed a graded exercise test on a cycle ergometer to measure their maximal fat oxidation (Fatmax). We also measured their body composition, metabolic profiles, and mitochondrial DNA (mtDNA).
Results: The exercise intensity for Fatmax was significantly lower in the T2DM group (34.19% of maximal oxygen uptake [VO2 max]) than in the control group (51.80% VO2 max). Additionally, the rate of fat oxidation during exercise (P<0.05) and mtDNA (P<0.05) were significantly lower in the T2DM group than in the control group. The VO2 max level (P<0.001) and the insulin level (P<0.05) were positively correlated with the rate of fat oxidation.
Conclusion: The results of this study suggest that the lower exercise intensity that elicits Fatmax should be recommended for improving fat oxidation and enhancing fitness levels in Korean women with T2DM. Our data could be useful when considering an exercise regimen to improve health and fitness.
abstract_id: PUBMED:34868426
Comparison of the Ramp and Step Incremental Exercise Test Protocols in Assessing the Maximal Fat Oxidation Rate in Youth Cyclists. The incremental exercise test is the most common method of assessing the maximal fat oxidation (MFO) rate. The main aim of the study was to determine whether the progressive linear RAMP test can be used to assess the maximal fat oxidation rate along with the intensities that trigger its maximal (FATmax) and its minimal (FATmin) values. Our study comprised 57 young road cyclists who were tested in random order. Each of them underwent two incremental exercise tests on an electro-magnetically braked cycle ergometer - STEP (50 W increments every 3 min) and RAMP (~0.278 W·s-1) - at a 7-day interval. A stoichiometric equation was used to calculate the fat oxidation rate, while the metabolic thresholds were defined by analyzing ventilation gases. Student's t-test, Bland-Altman plots and Pearson's linear correlations were used for the statistical analysis. No statistically significant differences in MFO occurred between the tests (p = 0.12), and its rate amounted to 0.57 ± 0.15 g·min-1 and 0.53 ± 0.17 g·min-1 in the STEP and RAMP, respectively. No statistically significant differences in the absolute and relative (to maximal) values of oxygen uptake and heart rate were discerned at the FATmax and FATmin intensities. The RAMP test displayed very strong oxygen uptake correlations between the aerobic threshold and FATmax (r = 0.93, R2 = 0.87, p < 0.001) as well as the anaerobic threshold and FATmin (r = 0.88, R2 = 0.78, p < 0.001). Our results corroborate our hypothesis that the incremental RAMP test, like the STEP test, is a reliable tool for assessing MFO and the FATmax and FATmin intensities.
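The abstract does not specify which stoichiometric equations were used; a common choice for indirect calorimetry is the Frayn (1983) non-protein equations, sketched below as an assumption. The gas-exchange values in the example are hypothetical.

```python
def substrate_oxidation(vo2_l_min: float, vco2_l_min: float) -> dict:
    """Estimate whole-body substrate oxidation from gas exchange.

    Assumes the Frayn (1983) non-protein stoichiometric equations;
    VO2 and VCO2 are given in L/min, oxidation rates are in g/min.
    """
    fat_g_min = 1.67 * vo2_l_min - 1.67 * vco2_l_min
    cho_g_min = 4.55 * vco2_l_min - 3.21 * vo2_l_min
    rer = vco2_l_min / vo2_l_min
    return {"fat_g_min": fat_g_min, "cho_g_min": cho_g_min, "rer": rer}

# hypothetical gas-exchange values for a cyclist at a moderate workload
print(substrate_oxidation(vo2_l_min=2.80, vco2_l_min=2.45))
```

With these example values the estimated fat oxidation rate is roughly 0.58 g·min-1, in the same range as the MFO values reported above.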
abstract_id: PUBMED:36952099
Short-Term Effect of Bariatric Surgery on Cardiorespiratory Response at Submaximal, Ventilatory Threshold, and Maximal Exercise in Women with Severe Obesity. Purpose: People with obesity have varying degrees of cardiovascular, pulmonary, and musculoskeletal dysfunction that affect aerobic exercise testing variables. Shortly after bariatric surgery, these dysfunctions could affect both peak oxygen consumption (VO2peak), the gold standard for assessing cardiorespiratory fitness (CRF), and aerobic capacity evaluated with the ventilatory threshold (VT1). The purpose of this study was to evaluate the short-term effect of bariatric surgery, i.e. before the resumption of physical activity, on submaximal, VT1 and maximal cardiorespiratory responses in middle-aged women with severe obesity.
Materials And Methods: Thirteen middle-aged women with severe obesity (age: 36.7 ± 2.3 years; weight: 110.5 ± 3.6 kg, BMI: 41.8 ± 1.1 kg/m2) awaiting bariatric surgery participated in the study. Four weeks before and 6 to 8 weeks after surgery, body composition was determined by bioelectrical impedance. The participants performed an incremental cycling test to VO2peak.
Results: After bariatric surgery, all body composition parameters were reduced, and absolute VO2peak and peak workload declined, with a lower VT1. Relative VO2 at peak and at VT1 (ml/min/kg or ml/min/kg of FFM) remained unchanged. Ventilation during exercise was lower after bariatric surgery, with no change in cardiac response.
Conclusion: Our results showed that weight loss alone in the short term after bariatric surgery decreased CRF, as seen by decreases in absolute VO2peak and peak workload with a lower VT1, whereas relative VO2 (ml/min/kg or ml/min/kg of FFM) during exercise remained unchanged in women with obesity. Rapid FFM loss affects cardiorespiratory responses at submaximal and maximal exercise.
abstract_id: PUBMED:31427862
Exercise Training at Maximal Fat Oxidation Intensity for Overweight or Obese Older Women: A Randomized Study. The purpose was to study the therapeutic effects of 12 weeks of supervised exercise training at maximal fat oxidation intensity (FATmax) on body composition, lipid profile, cardiovascular function, and physical fitness in overweight or obese older women. Thirty women (64.2 ± 5.1 years old; BMI 27.1 ± 2.3 kg/m2; body fat 41.3 ± 4.6%) were randomly allocated into the Exercise or Control groups. Participants in the Exercise group were trained at their individualized FATmax intensity (aerobic training), three days/week for one hour/day for 12 weeks. The Exercise group had significantly decreased body mass, BMI, fat mass, visceral trunk fat, and diastolic blood pressure. Furthermore, there were significant increases in high-density lipoprotein-cholesterol, predicted VO2max, left ventricular ejection fraction, and sit-and-reach performance. There were no changes in the measured variables of the Control group. These outcomes indicate that FATmax is an effective exercise intensity to improve body composition and functional capacity for older women with overweight or obesity.
abstract_id: PUBMED:29344008
Understanding the factors that effect maximal fat oxidation. Lipids as a fuel source for energy supply during submaximal exercise originate from subcutaneous adipose tissue-derived fatty acids (FAs), intramuscular triacylglycerides (IMTG), cholesterol and dietary fat. These sources of fat contribute to fatty acid oxidation (FAox) in various ways. Maximal regulation and utilization of FAs occur primarily at exercise intensities between 45 and 65% VO2max; this is known as maximal fat oxidation (MFO) and is measured in g/min. Fatty acid oxidation occurs at submaximal exercise intensities, but is also complementary to carbohydrate oxidation (CHOox). Due to limitations in FA transport across the cell and mitochondrial membranes, FAox is limited at higher exercise intensities. The point at which FAox reaches its maximum and begins to decline is referred to as the crossover point. Exercise intensities that exceed the crossover point (~65% VO2max) utilize CHO as the predominant fuel source for energy supply. Training status, exercise intensity, exercise duration, sex differences, and nutrition have all been shown to affect the cellular expression responsible for the FAox rate. Each stimulus affects the process of FAox differently, resulting in specific adaptations that influence endurance exercise performance. Endurance training, specifically of long duration (>2 h), facilitates adaptations that alter both the origin of FAs and the FAox rate. Additionally, the influence of sex and nutrition on FAox is discussed. Finally, the role of FAox in the improvement of performance during endurance training is discussed.
abstract_id: PUBMED:36755562
Sequencing patterns of ventilatory indices in less trained adults. Submaximal ventilatory indices, i.e., point of optimal ventilatory efficiency (POE) and anaerobic threshold (AT), are valuable indicators to assess the metabolic and ventilatory response during cardiopulmonary exercise testing (CPET). The order in which the ventilatory indices occur (ventilatory indices sequencing pattern, VISP), may yield additional information for the interpretation of CPET results and for exercise intensity prescription. Therefore, we determined whether different VISP groups concerning POE and AT exist. Additionally, we analysed fat metabolism via the exercise intensity eliciting the highest fat oxidation rate (Fatmax) as a possible explanation for differences between VISP groups. 761 less trained adults (41-68 years) completed an incremental exercise test on a cycle ergometer until volitional exhaustion. The ventilatory indices were determined using automatic and visual detection methods, and Fatmax was determined using indirect calorimetry. Our study identified two VISP groups with a lower work rate at POE compared to AT in VISPPOE < AT but not in group VISPPOE = AT. Therefore, training prescription based on POE rather than AT would result in different exercise intensity recommendations in 66% of the study participants and consequently in unintended physiological adaptions. VISPPOE < AT participants were not different to VISPPOE = AT participants concerning VO2peak and Fatmax. However, participants exhibiting a difference in work rate (VISPPOE < AT) were characterized by a higher aerobic capacity at submaximal work rate compared to VISPPOE = AT. Thus, analysing VISP may help to gain new insights into the complex ventilatory and metabolic response to exercise. But a methodological framework still must be established.
abstract_id: PUBMED:30649365
Increasing Iron Status through Dietary Supplementation in Iron-Depleted, Sedentary Women Increases Endurance Performance at Both Near-Maximal and Submaximal Exercise Intensities. Background: Iron deficiency persists as the most common micronutrient deficiency globally, despite having known detrimental effects on physical performance. Although iron supplementation and aerobic exercise have been examined individually and are known to improve physical performance, the impact of simultaneous iron supplementation and aerobic training remains unclear.
Objective: The aim of this study was to examine the individual and combined effects of iron supplementation and aerobic training on improving maximal and submaximal physical performance in iron-depleted, nonanemic (IDNA) women. We hypothesized that women receiving iron would improve their endurance performance but not their estimated maximal oxygen consumption (eVO2max).
Methods: Seventy-three sedentary, previously untrained IDNA (serum ferritin <25 µg/L and hemoglobin >110 g/L) women aged 18-26 y with a body mass index (kg/m2) of 17-25 participated in a double-blind, 8-wk, randomized controlled trial with a 2 × 2 factorial design including iron supplementation (42 mg elemental Fe/d) or placebo and aerobic exercise training (5 d/wk for 25 min at 75-85% of age-predicted maximum heart rate) or no training. Linear models were used to examine relations between training, supplement, and changes in the primary outcomes of observed maximal oxygen consumption (VO2peak) and eVO2max and ventilatory threshold (absolute oxygen consumption and percentage of maximum). Re-evaluation of a published meta-analysis was used to compare effects of iron supplementation on maximal oxygen consumption (VO2max) and VO2peak.
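A minimal sketch of the kind of 2 x 2 factorial analysis implied above (a training-by-supplement interaction tested with a linear model), using ordinary least squares in Python's statsmodels. The variable names, sample, and effect sizes are simulated stand-ins, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 72  # hypothetical sample size, close to the 73 women enrolled
iron = rng.integers(0, 2, n)   # 1 = iron supplement, 0 = placebo
train = rng.integers(0, 2, n)  # 1 = aerobic training, 0 = no training
# simulated change in VO2peak (mL/kg/min) with an illustrative interaction term
delta_vo2peak = (0.5 + 1.2 * iron + 1.5 * train - 1.0 * iron * train
                 + rng.normal(0, 1.5, n))
df = pd.DataFrame({"iron": iron, "train": train, "delta_vo2peak": delta_vo2peak})

# the iron:train coefficient in the summary is the training-by-supplement interaction
model = smf.ols("delta_vo2peak ~ iron * train", data=df).fit()
print(model.summary())
```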
Results: There were significant training-by-supplement interactions for VO2peak, volume of oxygen consumption at the ventilatory threshold, and the percentage of eVO2max where the threshold occurred, with the iron-untrained group performing better than the placebo-untrained group. There was no beneficial effect of iron supplementation for VO2max (mean difference: 0.53; 95% CI: -0.75, 1.81; P = 0.42), but a significant benefit was observed for VO2peak (mean difference: 1.87; 95% CI: 0.15, 3.60; P = 0.03).
Conclusions: Iron supplementation increases endurance performance at submaximal and maximal (VO2peak) exercise intensities in IDNA women. However, increasing iron status does not increase eVO2max. This trial was registered at clinicaltrials.gov as NCT03002090.
abstract_id: PUBMED:27072372
Positive effect of exercise training at maximal fat oxidation intensity on body composition and lipid metabolism in overweight middle-aged women. The purpose of this study was to test the hypothesis that 10 weeks of supervised exercise training at the maximal fat oxidation (FATmax) intensity would improve important variables of body composition and lipid metabolism in overweight middle-aged women. A longitudinal study design was employed to evaluate the effects of FATmax exercise training. Thirty women (45-59 years old; BMI 28.2 ± 1.8 kg m(-2); body fat 38.9 ± 4.1%) were randomly allocated into the Exercise and Control groups, n = 15 in each group. Body composition, FATmax, predicted VO2 max, lipid profile, plasma lipoprotein lipase activity and serum leptin concentration were measured before and after the experimental period. The Exercise group was trained at the individualized FATmax intensity, 5 days per week and 1 h per day for 10 weeks. No diet control was introduced for any participant during the experimental period. The Exercise group showed significant decreases in body mass, BMI, body fat % and abdominal fat mass, as well as in the concentrations of triglycerides, serum leptin and blood glucose. The activity of lipoprotein lipase was increased in trained participants. There were no changes in these variables in the Control group. In addition, there was no significant change in daily energy intake for any participant before and after the experimental period. In conclusion, the 10-week FATmax exercise training achieved improvements in body composition and lipid metabolism in overweight middle-aged women. This result suggests that FATmax is an effective exercise training intensity for obesity treatment.
Answer: The ventilatory threshold is not consistently coincident with maximal fat oxidation during submaximal exercise in women. One study found that exercise at 75% VO2peak elicited the greatest rate of fat oxidation, and this intensity was coincident with the ventilatory threshold in a group of healthy, moderately-trained eumenorrheic females (PUBMED:11125763). However, this finding does not necessarily generalize to all populations of women. For example, in Korean women with type 2 diabetes mellitus, the exercise intensity for maximal fat oxidation was significantly lower than in the control group, suggesting that the ventilatory threshold and maximal fat oxidation may not coincide in this population (PUBMED:26301195). Additionally, the ventilatory threshold and maximal fat oxidation rate may not be aligned in other specific groups, such as overweight middle-aged women (PUBMED:29541108) or older women (PUBMED:31427862), where exercise training at the maximal fat oxidation intensity was shown to improve health-related physical fitness and body composition.
Moreover, the relationship between ventilatory threshold and maximal fat oxidation can be influenced by various factors, including training status, exercise intensity, sex differences, and nutrition (PUBMED:29344008). Therefore, while there may be instances where the ventilatory threshold and maximal fat oxidation coincide, it is not a universal rule and can vary among individuals and specific populations. |
Instruction: Does age matter?
Abstracts:
abstract_id: PUBMED:34118094
A history of previous childbirths is linked to women's white matter brain age in midlife and older age. Maternal brain adaptations occur in response to pregnancy, but little is known about how parity impacts white matter and white matter ageing trajectories later in life. Utilising global and regional brain age prediction based on multi-shell diffusion-weighted imaging data, we investigated the association between previous childbirths and white matter brain age in 8,895 women in the UK Biobank cohort (age range = 54-81 years). The results showed that number of previous childbirths was negatively associated with white matter brain age, potentially indicating a protective effect of parity on white matter later in life. Both global white matter and grey matter brain age estimates showed unique contributions to the association with previous childbirths, suggesting partly independent processes. Corpus callosum contributed uniquely to the global white matter association with previous childbirths, and showed a stronger relationship relative to several other tracts. While our findings demonstrate a link between reproductive history and brain white matter characteristics later in life, longitudinal studies are required to establish causality and determine how parity may influence women's white matter trajectories across the lifespan.
abstract_id: PUBMED:27919183
Accelerated Gray and White Matter Deterioration With Age in Schizophrenia. Objective: Although brain changes in schizophrenia have been proposed to mirror those found with advancing age, the trajectory of gray matter and white matter changes during the disease course remains unclear. The authors sought to measure whether these changes in individuals with schizophrenia remain stable, are accelerated, or are diminished with age.
Method: Gray matter volume and fractional anisotropy were mapped in 326 individuals diagnosed with schizophrenia or schizoaffective disorder and in 197 healthy comparison subjects aged 20-65 years. Polynomial regression was used to model the influence of age on gray matter volume and fractional anisotropy at a whole-brain and voxel level. Between-group differences in gray matter volume and fractional anisotropy were regionally localized across the lifespan using permutation testing and cluster-based inference.
Results: Significant loss of gray matter volume was evident in schizophrenia, progressively worsening with age to a maximal loss of 8% in the seventh decade of life. The inferred rate of gray matter volume loss was significantly accelerated in schizophrenia up to middle age and plateaued thereafter. In contrast, significant reductions in fractional anisotropy emerged in schizophrenia only after age 35, and the rate of fractional anisotropy deterioration with age was constant and best modeled with a straight line. The slope of this line was 60% steeper in schizophrenia relative to comparison subjects, indicating a significantly faster rate of white matter deterioration with age. The rates of reduction of gray matter volume and fractional anisotropy were significantly faster in males than in females, but an interaction between sex and diagnosis was not evident.
Conclusions: The findings suggest that schizophrenia is characterized by an initial, rapid rate of gray matter loss that slows in middle life, followed by the emergence of a deficit in white matter that progressively worsens with age at a constant rate.
abstract_id: PUBMED:34048307
Effects of Age on White Matter Microstructure in Children With Neurofibromatosis Type 1. Children with neurofibromatosis type 1 (NF1) often report cognitive challenges, though the etiology of such remains an area of active investigation. With the advent of treatments that may affect white matter microstructure, understanding the effects of age on white matter aberrancies in NF1 becomes crucial in determining the timing of such therapeutic interventions. A cross-sectional study was performed with diffusion tensor imaging from 18 NF1 children and 26 age-matched controls. Fractional anisotropy was determined by region of interest analyses for both groups over the corpus callosum, cingulate, and bilateral frontal and temporal white matter regions. Two-way analyses of variance were done with both ages combined and age-stratified into early childhood, middle childhood, and adolescence. Significant differences in fractional anisotropy between NF1 and controls were seen in the corpus callosum and frontal white matter regions when ages were combined. When stratified by age, we found that this difference was largely driven by the early childhood (1-5.9 years) and middle childhood (6-11.9 years) age groups, whereas no significant differences were appreciable in the adolescence age group (12-18 years). This study demonstrates age-related effects on white matter microstructure disorganization in NF1, suggesting that the appropriate timing of therapeutic intervention may be in early childhood.
abstract_id: PUBMED:31271249
Paternal age contribution to brain white matter aberrations in autism spectrum disorder. Aim: Although advanced parental age holds an increased risk for autism spectrum disorder (ASD), its role as a potential risk factor for an atypical white matter development underlying the pathophysiology of ASD has not yet been investigated. The current study was aimed to detect white matter disparities in ASD, and further investigate the relationship of paternal and maternal age at birth with such disparities.
Methods: Thirty-nine adult males with high-functioning ASD and 37 typically developing (TD) males were analyzed in the study. The FMRIB Software Library and tract-based spatial statistics were utilized to process and analyze the diffusion tensor imaging data.
Results: Subjects with ASD exhibited significantly higher mean diffusivity (MD) and radial diffusivity (RD) in white matter fibers, including the association (inferior fronto-occipital fasciculus, right inferior longitudinal fasciculus, superior longitudinal fasciculi, uncinate fasciculus, and cingulum), commissural (forceps minor), and projection tracts (anterior thalamic radiation and right corticospinal tract) compared to TD subjects (adjusted P < 0.05). No differences were seen in either fractional anisotropy or axial diffusivity. Linear regression analyses assessing the relationship between parental ages and the white matter aberrations revealed a positive correlation between paternal age (PA), but not maternal age, and both MD and RD in the affected fibers (adjusted P < 0.05). Multiple regression showed that only PA was a predictor of both MD and RD.
Conclusion: Our findings suggest that PA contributes to the white matter disparities seen in individuals with ASD compared to TD subjects.
abstract_id: PUBMED:24361462
Differential vulnerability of gray matter and white matter to intrauterine growth restriction in preterm infants at 12 months corrected age. Intrauterine growth restriction (IUGR) is associated with a high risk of abnormal neurodevelopment. Underlying neuroanatomical substrates are partially documented. We hypothesized that at 12 months preterm infants would evidence specific white-matter microstructure alterations and gray-matter differences induced by severe IUGR. Twenty preterm infants with IUGR (26-34 weeks of gestation) were compared with 20 term-born infants and 20 appropriate for gestational age preterm infants of similar gestational age. Preterm groups showed no evidence of brain abnormalities. At 12 months, infants were scanned sleeping naturally. Gray-matter volumes were studied with voxel-based morphometry. White-matter microstructure was examined using tract-based spatial statistics. The relationship between diffusivity indices in white matter, gray matter volumes, and perinatal data was also investigated. Gray-matter decrements attributable to IUGR comprised amygdala, basal ganglia, thalamus and insula bilaterally, left occipital and parietal lobes, and right perirolandic area. Gray-matter volumes positively correlated with birth weight exclusively. Preterm infants had reduced FA in the corpus callosum, and increased FA in the anterior corona radiata. Additionally, IUGR infants had increased FA in the forceps minor, internal and external capsules, uncinate and fronto-occipital white matter tracts. Increased axial diffusivity was observed in several white matter tracts. Fractional anisotropy positively correlated with birth weight and gestational age at birth. These data suggest that IUGR differentially affects gray and white matter development preferentially affecting gray matter. At 12 months IUGR is associated with a specific set of structural gray-matter decrements. White matter follows an unusual developmental pattern, and is apparently affected by IUGR and prematurity combined.
abstract_id: PUBMED:36358443
Interpretation for Individual Brain Age Prediction Based on Gray Matter Volume. The relationship between age and the central nervous system (CNS) in humans is a classic issue that has attracted extensive attention. At the individual level in particular, it is important to clarify the mechanisms linking the CNS and age. The primary goal of existing methods is to use MR images to derive high-accuracy predictions of age or degenerative disease. However, the mechanisms linking the images and age have rarely been investigated. In this paper, we address the correlation between gray matter volume (GMV) and age, both in terms of gray matter regions themselves and their interaction network, using interpretable machine learning models for individuals. Our goal is not only to predict age accurately but, more importantly, to explore the relationship between GMV and age. In addition to targeting each individual, we also investigate how the dynamic properties of gray matter and its interaction network vary with individual age. The results show that the mean absolute error (MAE) of age prediction is 7.95 years. More notably, specific gray matter locations and their interactions play different roles in aging, and these roles change dynamically with age. The proposed method is a data-driven approach, which provides a new way to study aging mechanisms and even to diagnose degenerative brain diseases.
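As a rough illustration of the age-prediction step described above (not the authors' actual pipeline), one can regress age on regional gray matter volumes and report the mean absolute error; the regional GMV features below are randomly generated stand-ins.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_subjects, n_regions = 500, 90  # hypothetical number of subjects and GMV regions
age = rng.uniform(20, 80, n_subjects)
# simulated GMV: each region shrinks with age at its own rate, plus noise
decline = rng.uniform(0.5, 2.0, n_regions)
gmv = 700.0 - np.outer(age, decline) + rng.normal(0, 30, (n_subjects, n_regions))

X_train, X_test, y_train, y_test = train_test_split(gmv, age, test_size=0.2, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
mae = mean_absolute_error(y_test, model.predict(X_test))
print(f"age-prediction MAE: {mae:.2f} years")

# feature importances give one simple, region-level form of interpretability
top_regions = np.argsort(model.feature_importances_)[::-1][:5]
print("most informative regions (indices):", top_regions)
```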
abstract_id: PUBMED:30323144
Prevalence of white matter hyperintensities increases with age. White matter hyperintensities (WMHs) that arise with age and/or atherosclerosis constitute a heterogeneous disorder in the white matter of the brain. However, the relationship between age-related risk factors and the prevalence of WMHs is still obscure. More clinical data are needed to confirm the relationship between age and the prevalence of WMHs. We collected 836 patients treated at Renmin Hospital, Hubei University of Medicine, China, from January 2015 to February 2016 for a retrospective case-control analysis. According to T2-weighted magnetic resonance imaging results, all patients were divided into a WMHs group (n = 333) and a non-WMHs group (n = 503). The WMHs group contained 159 males and 174 females. The prevalence of WMHs increased with age and was associated with age-related risk factors, such as cardiovascular diseases, smoking, drinking, diabetes, hypertension and a history of cerebral infarction. There was no significant difference in sex, education level, hyperlipidemia or hyperhomocysteinemia among the different age ranges. These findings confirm that age is an independent risk factor for the prevalence and severity of WMHs, and age-related risk factors increase the occurrence of WMHs.
abstract_id: PUBMED:26446690
Age exacerbates HIV-associated white matter abnormalities. Both HIV disease and advanced age have been associated with alterations to cerebral white matter, as measured with white matter hyperintensities (WMH) on fluid-attenuated inversion recovery (FLAIR) magnetic resonance imaging (MRI), and more recently with diffusion tensor imaging (DTI). This study investigates the combined effects of age and HIV serostatus on WMH and DTI measures, as well as the relationships between these white matter measures, in 88 HIV seropositive (HIV+) and 49 seronegative (HIV-) individuals aged 23-79 years. A whole-brain volumetric measure of WMH was quantified from FLAIR images using a semi-automated process, while fractional anisotropy (FA) was calculated for 15 regions of a whole-brain white matter skeleton generated using tract-based spatial statistics (TBSS). An age by HIV interaction was found indicating a significant association between WMH and older age in HIV+ participants only. Similarly, significant age by HIV interactions were found indicating stronger associations between older age and decreased FA in the posterior limbs of the internal capsules, cerebral peduncles, and anterior corona radiata in HIV+ vs. HIV- participants. The interactive effects of HIV and age were stronger with respect to whole-brain WMH than for any of the FA measures. Among HIV+ participants, greater WMH and lower anterior corona radiata FA were associated with active hepatitis C virus infection, a history of AIDS, and higher current CD4 cell count. Results indicate that age exacerbates HIV-associated abnormalities of whole-brain WMH and fronto-subcortical white matter integrity.
abstract_id: PUBMED:24983715
Age-related effects in the neocortical organization of chimpanzees: gray and white matter volume, cortical thickness, and gyrification. Among primates, humans exhibit the most profound degree of age-related brain volumetric decline in particular regions, such as the hippocampus and the frontal lobe. Recent studies have shown that our closest living relatives, the chimpanzees, experience little to no volumetric decline in gray and white matter over the adult lifespan. However, these previous studies were limited by a small sample of chimpanzees of the most advanced ages. In the present study, we sought to further test for potential age-related decline in cortical organization in chimpanzees by expanding the sample size of aged chimpanzees. We used the BrainVisa software to measure total brain volume, gray and white matter volumes, gray matter thickness, and gyrification index in a cross-sectional sample of 219 captive chimpanzees (8-53 years old), with 38 subjects being 40 or more years of age. Mean depth and cortical fold opening of 11 major sulci of the chimpanzee brains were also measured. We found that chimpanzees showed increased gyrification with age and a cubic relationship between age and white matter volume. For the association between age and sulcus depth and width, the results were mostly non-significant with the exception of one negative correlation between age and the fronto-orbital sulcus. In short, results showed that chimpanzees exhibit few age-related changes in global cortical organization, sulcus folding and sulcus width. These findings support previous studies and the theory that the age-related changes in the human brain are due to an extended lifespan.
abstract_id: PUBMED:38424358
White matter brain age as a biomarker of cerebrovascular burden in the ageing brain. As the brain ages, it almost invariably accumulates vascular pathology, which differentially affects the cerebral white matter. A rich body of research has investigated the link between vascular risk factors and the brain. A less studied question is which of the various modifiable vascular risk factors is the most debilitating for white matter health. A white matter-specific brain age was developed to evaluate overall white matter health from diffusion-weighted imaging, using a three-dimensional convolutional neural network deep learning model in both cross-sectional UK Biobank participants (n = 37,327) and a longitudinal subset (n = 1409). The white matter brain age gap (WMBAG) was the difference between the white matter age and the chronological age. Participants with one, two, and three or more vascular risk factors, compared with those without any, showed an elevated WMBAG of 0.54, 1.23, and 1.94 years, respectively. Diabetes was most strongly associated with an increased WMBAG (1.39 years, p < 0.001) among all risk factors, followed by hypertension (0.87 years, p < 0.001) and smoking (0.69 years, p < 0.001). Baseline WMBAG was significantly associated with processing speed and executive and global cognition. Significant associations of diabetes and hypertension with poor processing speed and executive function were found to be mediated through the WMBAG. White matter-specific brain age can be successfully targeted for the examination of the most relevant risk factors and cognition, and for tracking an individual's cerebrovascular ageing process. It also provides a clinical basis for the better management of specific risk factors.
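A minimal sketch of the brain age gap arithmetic and the risk-factor comparison reported above; the deep learning model itself is out of scope here, so the predicted white matter ages and risk-factor counts are simulated for illustration.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 1000
chronological_age = rng.uniform(45, 80, n)
risk_factors = rng.integers(0, 4, n)  # simulated count of vascular risk factors (0-3)
# simulated model output: white matter looks "older" with each additional risk factor
predicted_wm_age = chronological_age + 0.6 * risk_factors + rng.normal(0, 3, n)

# white matter brain age gap (WMBAG) = predicted white matter age - chronological age
wmbag = predicted_wm_age - chronological_age
df = pd.DataFrame({"wmbag": wmbag, "risk_factors": risk_factors})

# mean WMBAG elevation by risk-factor count, relative to the zero-risk-factor group
means = df.groupby("risk_factors")["wmbag"].mean()
print(means - means.loc[0])
```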
Answer: Yes, age matters in the context of brain health and white matter integrity. Research has shown that age is associated with changes in both gray and white matter in the brain. For instance, in schizophrenia, there is an accelerated loss of gray matter volume that worsens with age, and a constant rate of white matter deterioration that is significantly faster compared to healthy individuals (PUBMED:27919183). In children with neurofibromatosis type 1 (NF1), age-related effects on white matter microstructure suggest that early childhood may be the most appropriate timing for therapeutic intervention (PUBMED:34048307).
Furthermore, paternal age at birth has been linked to white matter disparities in individuals with autism spectrum disorder (ASD), with older paternal age contributing to higher mean diffusivity and radial diffusivity in white matter fibers (PUBMED:31271249). In preterm infants with intrauterine growth restriction (IUGR), age-related changes preferentially affect gray matter, but white matter also shows an unusual developmental pattern affected by both IUGR and prematurity (PUBMED:24361462).
The prevalence of white matter hyperintensities (WMHs), which are associated with age and atherosclerosis, increases with age and is linked to age-related risk factors such as cardiovascular diseases and hypertension (PUBMED:30323144). Age also exacerbates HIV-associated white matter abnormalities, with older HIV-positive individuals showing a significant association between WMH and older age (PUBMED:26446690).
In chimpanzees, few age-related changes in global cortical organization are observed, suggesting that the profound age-related brain volumetric decline seen in humans may be due to an extended lifespan (PUBMED:24983715). Lastly, white matter brain age has been proposed as a biomarker of cerebrovascular burden in the ageing brain, with diabetes, hypertension, and smoking being significant risk factors for increased white matter brain age (PUBMED:38424358).
In summary, age plays a crucial role in the structural integrity and functional capacity of the brain, influencing both gray and white matter across the lifespan. |
Instruction: Can Composite Nutritional Supplement Based on the Current Guidelines Prevent Vitamin and Mineral Deficiency After Weight Loss Surgery?
Abstracts:
abstract_id: PUBMED:26319661
Can Composite Nutritional Supplement Based on the Current Guidelines Prevent Vitamin and Mineral Deficiency After Weight Loss Surgery? Background: Nutritional deficiencies occur after weight loss surgery. Despite knowledge of nutritional risk, there is little uniformity of postoperative vitamin and mineral supplementation. The objective of this study was to evaluate a composite supplement based on the clinical practice guidelines proposed in 2008 regarding vitamin and mineral supplementation after Roux-en-Y gastric bypass. The composite included iron (Fe) and calcium as well.
Methods: A retrospective chart review of 309 patients undergoing laparoscopic Roux-en-Y gastric bypass (LRYGB) was performed to evaluate the development of deficiencies in iron and vitamins A, B1, B12, and D. Patients were instructed to take a custom vitamin and mineral supplement that was based on society-approved guidelines. The clinical practice guidelines were modified to include 1600 international units (IU) of vitamin D3 instead of the recommended 800 IU.
Results: The compliant patients' deficiency rates were significantly lower than those of the noncompliant patients for iron (p = 0.001), vitamin A (p = 0.01), vitamin B12 (p ≈ 0.02), and vitamin D (p < 0.0001). Women's menstrual status did not significantly influence the development of iron deficiency.
Conclusions: Use of a composite based on guidelines proposed by the AACE, TOS, and the ASMBS appears to be effective for preventing iron and vitamins A, B1, B12, and D deficiencies in the LRYGB patients during the first postoperative year. Separation of calcium and Fe does not need to be mandatory. Even with simplification, compliance is far from universal.
abstract_id: PUBMED:32030614
Comparison of Bariatric Branded Chewable Multivitamin/Multimineral Formulations to the 2016 American Society for Metabolic and Bariatric Surgery Integrated Health Nutritional Guidelines. Postoperative vitamin and mineral supplementation is an integral component of the management of the weight loss surgery patient. Supplements differ in type, amount, and salt form. No recent publication has compared bariatric branded commercially available products with current practice guidelines. Registered dietitians belonging to the New England Bariatric Dietitians LinkedIn group were surveyed to identify their recommendation practices. These results were then used to comprehensively compare and discuss the most widely recommended bariatric branded chewable supplements against the 2016 American Society for Metabolic and Bariatric Surgery Integrated Health Nutritional Guidelines.
abstract_id: PUBMED:28392254
American Society for Metabolic and Bariatric Surgery Integrated Health Nutritional Guidelines for the Surgical Weight Loss Patient 2016 Update: Micronutrients. Background: Optimizing postoperative patient outcomes and nutritional status begins preoperatively. Patients should be educated before and after weight loss surgery (WLS) on the expected nutrient deficiencies associated with alterations in physiology. Although surgery can exacerbate preexisting nutrient deficiencies, preoperative screening for vitamin deficiencies has not been the norm in the majority of WLS practices. Screening is important because it is common for patients who present for WLS to have at least 1 vitamin or mineral deficiency preoperatively.
Objectives: The focus of this paper is to update the 2008 American Society for Metabolic and Bariatric Surgery Nutrition in Bariatric Surgery Guidelines with key micronutrient research in laparoscopic adjustable gastric banding, Roux-en-Y gastric bypass, laparoscopic sleeve gastrectomy, biliopancreatic diversion, and biliopancreatic diversion/duodenal switch.
Methods: Four questions regarding recommendations for preoperative and postoperative screening of nutrient deficiencies, preventative supplementation, and repletion of nutrient deficiencies in pre-WLS patients have been applied to specific micronutrients (vitamins B1 and B12; folate; iron; vitamins A, E, and K; calcium; vitamin D; copper; and zinc).
Results: Out of the 554 articles identified as meeting preliminary search criteria, 402 were reviewed in detail. There are 92 recommendations in this update, 79 new recommendations and an additional 13 that have not changed since 2008. Each recommendation has a corresponding graded level of evidence, from grade A through D.
Conclusions: Data continue to suggest that the prevalence of micronutrient deficiencies is increasing, while monitoring of patients at follow-up is decreasing. This document should be viewed as a guideline for a reasonable approach to patient nutritional care based on the most recent research, scientific evidence, resources, and information available. It is the responsibility of the registered dietitian nutritionist and WLS program to determine individual variations as they relate to patient nutritional care.
abstract_id: PUBMED:16041224
Nutritional concerns related to Roux-en-Y gastric bypass: what every clinician needs to know. Weight loss surgery, particularly the Roux-en-Y gastric bypass (REYGB), has become a popular treatment strategy for obesity. Often the only measure of success is the amount of weight lost following surgery. Unfortunately the nutritional adequacy of the postoperative diet has frequently been overlooked, and in the months to years that follow, nutritional deficiencies have become apparent, including protein-calorie malnutrition and various vitamin and mineral deficiencies contributing to medical illnesses and limiting optimal health. Therefore, patients require close monitoring following REYGB, with special regard to the rapidity of weight loss and vigilant screening for signs and symptoms of subclinical and clinical nutritional deficiencies. Several specific nutrients require close surveillance postoperatively to prevent life-threatening complications related to deficient states. This article addresses nutritional concerns associated with REYGB with fastidious focus on recognition and treatment of the nutritional deficiencies and promotion of nutritional health following REYGB. Recommendations regarding nutritional intake following REYGB are based on available scientific data, albeit limited. In cases where data do not exist, expert or consensus opinion is provided and recommendations for future research are given. Ultimately, clinical application of this information will contribute to the prevention of nutrition-related illness associated with REYGB.
abstract_id: PUBMED:26947791
An optimized multivitamin supplement lowers the number of vitamin and mineral deficiencies three years after Roux-en-Y gastric bypass: a cohort study. Background: Vitamin and mineral deficiencies are common after Roux-en-Y gastric bypass (RYGB) surgery. In particular, inadequate serum concentrations of ferritin and vitamin B12 have been found in 11% and 23% (respectively) of patients using a standard multivitamin supplement (sMVS) 1 year after RYGB.
Objective: To evaluate the effectiveness and safety of Weight Loss Surgery (WLS) Forte® (a pharmaceutical-grade, optimized multivitamin supplement) compared with an sMVS and a control group (nonuser) 3 years after RYGB.
Setting: General hospital specialized in bariatric surgery.
Methods: A follow-up cohort study of a triple-blind randomized, controlled clinical trial.
Results: At baseline 148 patients were enrolled (74 [50%] in the sMVS group and 74 [50%] in the WLS Forte group). After a mean follow-up of 36 months, 11 (7%) patients were lost to follow-up, of whom 2 were secondary to death. At the end of the study, 11 (17%) patients in the WLS Forte and 17 (24%) in the sMVS group stopped using a supplement. In addition, 64 (47%) patients were using WLS Forte and 45 (33%) patients a sMVS. Patient characteristics and follow-up length were comparable between the groups. Significantly more patients were diagnosed with anemia (16% versus 3% [P = .021]), a ferritin deficiency (14% versus 3% [P = .043]), and a zinc deficiency (8% versus 0% [P = .033]) in the sMVS group compared with WLS Forte. Five patients developed a vitamin B12 deficiency while using WLS Forte, versus 15 of sMVS users (P = .001). No adverse events occurred that were related to supplement use.
Conclusion: At 3 years postoperative of RYGB, an optimized multivitamin supplement (WLS Forte) was more effective in reducing anemia and ferritin, vitamin B12, and zinc deficiencies compared with a standard supplement and control.
abstract_id: PUBMED:18626380
Nutritional deficiency of post-bariatric surgery body contouring patients: what every plastic surgeon should know. Background: Bariatric surgery, particularly the Roux-en-Y gastric bypass, is currently the most effective method of sustainable weight loss for the morbidly obese patient population. Unfortunately, the nutritional adequacy of the postoperative diet has frequently been overlooked, and in the months to years that follow, many nutritional deficiencies have become apparent. Furthermore, once weight loss has reached a plateau, many patients become candidates for body contouring surgery and other aesthetic operations.
Methods: The aim of this review was to highlight the nutritional deficiencies of post-bariatric surgery patients as related to planned body contouring surgery. This review was prepared by an extensive search of the bariatric surgery literature.
Results: The current data indicate that many post-bariatric surgery patients have protein-calorie malnutrition as well as various vitamins and mineral deficiencies that may limit optimal health and healing.
Conclusions: It is essential that those plastic surgeons who treat post-bariatric surgery patients are aware of these nutritional deficiencies, which can be minimized by adhering to eating guidelines and supplemental prescriptions. Although there are many studies documenting relationships between malnutrition and poor wound healing, the optimal nutrient intake in the post-bariatric surgery state to promote wound healing is unknown. It is, however, clear that proteins, vitamin A, vitamin C, arginine, glutamine, zinc, and selenium have significant beneficial effects on wound healing and optimizing the immune system.
abstract_id: PUBMED:17679300
Nutritional implications of bariatric surgery on the gastrointestinal tract. Anatomical changes in the gastrointestinal tract after bariatric surgery lead to modification of dietary patterns, which must be adapted to the new physiological conditions, both in terms of the volume of intake and the characteristics of the macro- and micronutrients to be administered. The restrictive diet after bariatric surgery (basically gastric bypass and restrictive procedures) progresses in several steps. The first phase after surgery consists of the administration of clear liquids for 2-3 days, followed by a completely liquid, low-fat, high-protein (>50-60 g/day) diet for 2-4 weeks, normally by means of formula diets. A soft or ground diet including very soft protein-rich foods, such as egg, low-calorie cheese, and lean meats such as chicken, beef, pork, or fish (red meats are not as well tolerated), is recommended 2-4 weeks after hospital discharge. A normal diet may be started within 8 weeks of surgery or even later. It is important to incorporate high-protein foods with each meal, such as egg whites, lean meats, cheese or milk. All these indications should be given under the supervision of an expert nutrition professional who can advise patients at all times and adapt the diet to special situations (nausea/vomiting, constipation, diarrhea, dumping syndrome, dehydration, food intolerances, overfeeding, etc.). The most frequent vitamin and mineral deficiencies in the different types of surgery are reviewed, with a special focus on iron, vitamin B12, calcium, and vitamin D metabolism. It should not be forgotten that the aim of obesity surgery is to make the patient lose weight, and the post-surgery diet is therefore designed to achieve that goal, without forgetting the essential role that nutritional education plays in the learning of new dietary habits that contribute to maintaining that weight loss over time.
abstract_id: PUBMED:21979398
Vitamin D and calcium status and appropriate recommendations in bariatric surgery patients. Bariatric surgery is becoming an increasingly common procedure performed to achieve long-term weight loss in morbidly obese patients. Bariatric surgery may cause long-term morbidity because of vitamin and mineral deficiencies. This review synthesizes the research on vitamin D and calcium status in obese patients before and after bariatric surgery. The literature shows that morbidly obese patients are likely deficient in vitamin D prior to surgery because of poor sunlight exposure, less bioavailability of the vitamin when sequestered in fat cells, and inhibited hepatic vitamin activation. Gastric bypass surgery may further exacerbate vitamin D and calcium deficiencies secondary to poor compliance, loss to follow-up, reduced food intake, and malabsorption. It is imperative that research be conducted to determine adequate supplementation regimens for vitamin D and calcium in bariatric surgery patients.
abstract_id: PUBMED:10731253
Reversal of Hair Loss following Vertical Gastroplasty when Treated with Zinc Sulphate. Background: Nutritional complications following surgery for morbid obesity include both vitamin and mineral deficiencies. Severe cases of zinc deficiency can lead to alopecia, diarrhea, emotional disorders, weight loss, intercurrent infection, bullous-pustular dermatitis and hypogonadism in males. Hair loss may occur after vertical gastroplasty (VG). Methods: Diffuse hair loss occurred in 47 of 130 patients who underwent VG. All patients had been routinely advised to take a multivitamin supplement, but 47 developed hair loss despite taking the supplement. These patients were then prescribed zinc sulphate 200 mg three times a day. There was no alteration in the vitamin supplementation. Results: Arrest of hair loss and regrowth occurred in all patients. However, five patients reported recurrence of hair loss after stopping zinc; this loss was reversed within 6 months of recommencing zinc 600 mg daily. Ten control patients had no hair loss after gastrointestinal surgery. Conclusion: Significant hair loss occurred in about one-third of patients after VG and was reversed by zinc supplementation.
abstract_id: PUBMED:17036577
Clinical management after bariatric surgery: value of a multidisciplinary approach. Comprehensive and collaborative longitudinal care is essential for optimal outcomes after bariatric surgery. This approach is important to manage the many potential surgical and medical comorbidities in patients who undergo bariatric surgery. Medical management programs require prompt and often frequent adjustment as the nutritional program changes and as weight loss occurs. Familiarity with the recommended nutritional program, monitoring and treatment of potential vitamin and mineral deficiencies, effects of weight loss on medical comorbid conditions, and common postoperative surgical issues should allow clinicians to provide excellent care. Patients must understand the importance of regularly scheduled medical follow-up to minimize potentially serious medical and surgical complications. Because the long-term success of bariatric surgery relies on patients' ability to make sustained lifestyle changes in nutrition and physical activity, we highlight the role of these 2 modalities in their overall care. Our guidelines are based on clinical studies, when available, combined with our extensive clinical experience. We present our multidisciplinary approach to postoperative care that is provided after bariatric surgery and that builds on our presurgical evaluation.
Answer: Yes, a composite nutritional supplement based on current guidelines can prevent vitamin and mineral deficiency after weight loss surgery. A study evaluating a custom vitamin and mineral supplement based on society-approved guidelines, which included modifications such as increasing vitamin D3 intake to 1600 IU instead of the recommended 800 IU, found that compliant patients had significantly lower deficiency rates for iron, vitamins A, B12, and D compared to noncompliant patients. This suggests that adherence to a composite supplement based on guidelines proposed by the AACE, TOS, and ASMBS is effective in preventing deficiencies in these nutrients in laparoscopic Roux-en-Y gastric bypass (LRYGB) patients during the first postoperative year (PUBMED:26319661).
Furthermore, the American Society for Metabolic and Bariatric Surgery (ASMBS) Integrated Health Nutritional Guidelines for the Surgical Weight Loss Patient 2016 Update provides recommendations for preoperative and postoperative screening of nutrient deficiencies, preventative supplementation, and repletion of nutrient deficiencies. This update includes 92 recommendations with corresponding graded levels of evidence, indicating a comprehensive approach to patient nutritional care based on recent research and scientific evidence (PUBMED:28392254).
Additionally, an optimized multivitamin supplement (WLS Forte®) was found to be more effective in reducing anemia and deficiencies in ferritin, vitamin B12, and zinc compared to a standard multivitamin supplement and control group three years after Roux-en-Y gastric bypass surgery (PUBMED:26947791).
In summary, using a composite nutritional supplement that adheres to current guidelines, such as those provided by the ASMBS, and possibly optimizing the supplement based on individual patient needs, can effectively prevent vitamin and mineral deficiencies after weight loss surgery. |
Instruction: Variation in lymph node assessment after colon cancer resection: patient, surgeon, pathologist, or hospital?
Abstracts:
abstract_id: PUBMED:21174232
Variation in lymph node assessment after colon cancer resection: patient, surgeon, pathologist, or hospital? Background: Evaluation of ≥ 12 lymph nodes after colon cancer resection has been adopted as a hospital quality measure, but compliance varies considerably. We sought to quantify relative proportions of the variation in lymph node assessment after colon cancer resection occurring at the patient, surgeon, pathologist, and hospital levels.
Methods: The 1998-2005 Surveillance, Epidemiology, and End Results-Medicare database was used to identify 27,101 patients aged 65 years and older with Medicare parts A and B coverage undergoing colon cancer resection. Multilevel logistic regression was used to model lymph node evaluation as a binary variable (≥ 12 versus <12) while explicitly accounting for clustering of outcomes.
Results: Patients were treated by 4,180 distinct surgeons and 2,656 distinct pathologists at 1,113 distinct hospitals. The overall rate of 12-lymph node (12-LN) evaluation was 48%, with a median of 11 nodes examined per patient, and 33% demonstrated lymph node metastasis on pathological examination. Demographic and tumor-related characteristics such as age, gender, tumor grade, and location each demonstrated significant effects on rate of 12-LN assessment (all P < 0.05). The majority of the variation in 12-LN assessment was related to non-modifiable patient-specific factors (79%). After accounting for all explanatory variables in the full model, 8.2% of the residual provider-level variation was attributable to the surgeon, 19% to the pathologist, and 73% to the hospital.
Conclusion: Compliance with the 12-LN standard is poor. Variation between hospitals is larger than that between pathologists or surgeons. However, patient-to-patient variation is the largest determinant of 12-LN evaluation.
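The study above fits a multilevel logistic model for the binary 12-LN outcome with surgeons, pathologists, and hospitals as clustering levels, then reports the share of residual provider-level variation at each level. A minimal sketch of that partitioning step, assuming the three random-intercept variances have already been estimated by such a model (the numeric values are placeholders, not the study's estimates); on the latent logistic scale the patient-level residual variance is fixed at pi^2/3.

import math

# Hypothetical random-intercept variances from a fitted multilevel
# logistic model (placeholders, not the study's estimates).
var_surgeon, var_pathologist, var_hospital = 0.05, 0.12, 0.45
var_patient = math.pi ** 2 / 3   # fixed residual variance on the logit scale

total = var_surgeon + var_pathologist + var_hospital + var_patient
provider = var_surgeon + var_pathologist + var_hospital

print("patient-level share of total variation:", round(var_patient / total, 3))
print("surgeon / pathologist / hospital shares of provider-level variation:",
      [round(v / provider, 3) for v in (var_surgeon, var_pathologist, var_hospital)])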
abstract_id: PUBMED:28088321
Surgeon-, pathologist-, and hospital-level variation in suboptimal lymph node examination after colectomy: Compartmentalizing quality improvement strategies. Background: The goals of this study were to characterize the variation in suboptimal lymph node examination for patients with colon cancer across individual surgeons, pathologists, and hospitals and to examine if this variation affects 5-year, disease-specific survival.
Methods: A retrospective cohort study was conducted by merging the New York State Cancer Registry with the Statewide Planning & Research Cooperative System, Medicaid, and Medicare claims to identify resections for stages I-III colon cancer from 2004-2011. Multilevel logistic regression models characterized variation in suboptimal lymph node examination (<12 lymph nodes). Multilevel competing-risks Cox models were used for survival analyses.
Results: The overall rate of suboptimal lymph node examination was 32% in 12,332 patients treated by 1,503 surgeons and 814 pathologists at 187 hospitals. Patient-level predictors of suboptimal lymph node examination were older age, male sex, nonscheduled admission, lesser stage, and left colectomy procedure. Hospital-level predictors of suboptimal lymph node examination were a nonacademic status, a rural setting, and a low annual number of resections for colon cancer. The percent of the total clustering variance attributed to surgeons, pathologists, and hospitals was 8%, 23%, and 70%, respectively. Higher pathologist- and hospital-specific rates of suboptimal lymph node examination were associated with worse 5-year disease-specific survival.
Conclusion: There was a large variation in suboptimal lymph node examination between surgeons, pathologists, and hospitals. Collaborative efforts that promote optimal examination of lymph nodes may improve prognosis for colon cancer patients. Given that 93% of the variation was attributable to pathologists and hospitals, endeavors in quality improvement should focus on these 2 settings.
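The survival analysis in this study uses multilevel competing-risks Cox models, in which death from other causes competes with colon-cancer-specific death. Reproducing the multilevel part is beyond a short sketch, but the competing-risks idea can be illustrated with an Aalen-Johansen cumulative incidence estimate; the lifelines call and the tiny dataset below are assumptions for illustration, with event codes 0 = censored, 1 = cancer-specific death, 2 = death from other causes.

import pandas as pd
from lifelines import AalenJohansenFitter

# Hypothetical follow-up times (months) and event codes:
# 0 = censored, 1 = colon-cancer death, 2 = death from other causes.
df = pd.DataFrame({
    "time":  [12, 30, 45, 60, 18, 52, 57, 24],
    "event": [1,  0,  2,  0,  1,  2,  0,  1],
})

ajf = AalenJohansenFitter()
ajf.fit(df["time"], df["event"], event_of_interest=1)
print(ajf.cumulative_density_)   # cumulative incidence of cancer-specific death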
abstract_id: PUBMED:38205923
Variation in Lymph Node Assessment for Colon Cancer at the Tumor, Surgeon, and Hospital Level. Background: We hypothesized that tumor- and hospital-level factors, compared with surgeon characteristics, are associated with the majority of variation in the 12 or more lymph nodes (LNs) examined quality standard for resected colon cancer.
Study Design: A dataset containing an anonymized surgeon identifier was obtained from the National Cancer Database for stage I to III colon cancers from 2010 to 2017. Multilevel logistic regression models were built to assign a proportion of variance in achievement of the 12 LNs standard among the following: (1) tumor factors (demographic and pathologic characteristics), (2) surgeon factors (volume, approach, and margin status), and (3) facility factors (volume and facility type).
Results: There were 283,192 unique patient records with 15,358 unique surgeons across 1,258 facilities in our cohort. Achievement of the 12 LNs standard was high (90.3%). Achievement of the 12 LNs standard by surgeon volume was 88.1% and 90.7% in the lowest and highest quartiles, and 86.8% and 91.6% at the facility level for high and low annual volume quartiles, respectively. In multivariate analysis, the following tumor factors were associated with meeting the 12 LNs standard: age, sex, primary tumor site, tumor grade, T stage, and comorbidities (all p < 0.001). Tumor factors were responsible for 71% of the variation in 12 LNs yield, whereas surgeon and facility characteristics contributed 17% and 12%, respectively.
Conclusions: Twenty-nine percent of the variation in the 12 LNs standard is linked to modifiable factors. The majority of variation in this quality metric is associated with non-modifiable tumor-level factors.
abstract_id: PUBMED:22808760
Who is responsible for inadequate lymph node retrieval after colorectal surgery: surgeon or pathologist? Background: Many factors have been described influencing survival of patients with colorectal cancer. The most important prognostic factor is lymph node involvement. The National Comprehensive Cancer Network indicates that at least 12 lymph nodes (LN12) must be retrieved for proper staging and treatment planning. The surgeon and the pathologist influence the number of retrieved lymph nodes.
Methods: We retrospectively reviewed all patients with diagnosis and subsequent surgery for colorectal cancer from January 2004 to January 2010 at Gulhane Military Medical Academy in Ankara, Turkey. We investigated the relationship between LN 12 and the independent variables of tumour size, lymph node involvement, metastasis, age, gender, surgeon, pathologist, surgical specimen length, tumour stage, and localization. Statistical analysis utilized the Shapiro-Wilk test, interquartile range, Mann-Whitney test, chi-square and chi-square likelihood ratio tests, and Kruskal-Wallis nonparametric variance analysis. In order to identify influencing factors for retrieval of lymph nodes, multiple linear regression was performed. In order to identify the direction and extent of effects of these influencing factors, logistic regression was performed. OR (Odds Ratio) and 95% CI (Confidence Interval) of the OR were calculated.
Results: There were 223 study patients, 134 with colon cancer and 89 with rectal cancer. There was no statistical significance in terms of age, gender, cancer type and postoperative tumour size, number of metastatic lymph nodes > 4, or LN12 (p > 0.05). Statistical significance was found between surgeons and LN12, the number of operations and LN12 (p < 0.001), and pathologists and LN12 (p = 0.049).
Conclusions: Harvesting an adequate number of lymph nodes is crucial for patients with colorectal cancer in terms of staging and planning further treatment modalities such as adjuvant chemotherapy. Multidisciplinary collaboration between surgeons and pathologists is vital for optimal patient outcomes.
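Several of these studies, including the one above, report odds ratios with 95% confidence intervals from logistic regression for the adequate-lymph-node outcome. A minimal sketch of that computation is shown below; the file name and column names (ln12, age, specimen_length, surgeon_volume) are hypothetical stand-ins, not variables taken from the paper.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per resection; 'ln12' is 1 when >=12 nodes were retrieved.
df = pd.read_csv("resections.csv")   # placeholder file name

model = smf.logit("ln12 ~ age + specimen_length + surgeon_volume", data=df).fit()

odds_ratios = np.exp(model.params)
conf_int = np.exp(model.conf_int())      # 95% CI on the odds-ratio scale
print(pd.concat([odds_ratios, conf_int], axis=1))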
abstract_id: PUBMED:20026828
Colon cancer and low lymph node count: who is to blame? Objective: To identify the factors that contribute to the disparity in the number of lymph nodes examined for curative colon cancer resections.
Design: Our prospectively accrued cancer registry was analyzed for all colon cancer resections performed in a consecutive 52-month period (January 1, 2003, through April 30, 2007).
Setting: The study was performed at an 851-bed community hospital. Seventeen surgeons performed colon resections, with the number of resections varying from 1 to 154. Ten pathologists and 3 pathology assistants evaluated the specimens.
Patients: A total of 430 patients met the inclusion criteria and underwent surgical resection. Only patients with colon cancer were included in the study; patients with rectal cancers, in situ disease only, T4 tumors, and stage IV disease at the time of diagnosis were excluded to ensure a uniform group of patients, all undergoing resection with curative intent.
Main Outcome Measures: Age of the patient; the surgeon, pathologist, and pathology technician; stage of disease; and year of surgery were analyzed.
Results: No statistical difference was found in the number of lymph nodes retrieved based on the surgeon (P = .21), pathologist (P = .11), or pathology technician (P = .26). Age of the patient, primary site of the tumor, stage, and year of surgery were all significantly associated with number of lymph nodes retrieved (P <.001).
Conclusions: The origin of a low lymph node count appears multifactorial. Inadequate lymph node retrieval for colon cancer resections cannot uniformly be attributed to 1 factor, such as the surgeon.
abstract_id: PUBMED:23047124
Hospital characteristics associated with maintenance or improvement of guideline-recommended lymph node evaluation for colon cancer. Background: Over the past 20 years, surgical practice organizations have recommended the identification of ≥12 lymph nodes from surgically treated colon cancer patients as an indicator of quality performance for adequate staging; however, studies suggest that significant variation exists among hospitals in their level of adherence to this recommendation. We examined hospital-level factors that were associated with institutional improvement or maintenance of adequate lymph node evaluation after the introduction of surgical quality guidelines.
Research Design: Using the 1996-2007 SEER-Medicare data, we evaluated hospital characteristics associated with short-term (1999-2001), medium-term (2002-2004), and long-term (2005-2007) guideline-recommended (≥12) lymph node evaluation compared with initial evaluation levels (1996-1998) using χ² tests and multivariate logistic regression analysis, adjusting for patient case-mix.
Results: We identified 228 hospitals that performed ≥6 colon cancer surgeries during each study period from 1996-2007. In the initial study period (1996-1998), 26.3% (n=60) of hospitals were performing guideline-recommended evaluation, which increased to 28.1% in 1999-2001, 44.7% in 2002-2004, and 70.6% in 2005-2007. In multivariate analyses, a hospital's prior guideline performance [odds ratio (OR) (95% confidence interval (CI)): 4.02 (1.92, 8.42)], teaching status [OR (95% CI): 2.33 (1.03, 5.28)], and American College of Surgeon's Oncology Group membership [OR (95% CI): 3.39 (1.39, 8.31)] were significantly associated with short-term guideline-recommended lymph node evaluation. Prior hospital performance [OR (95% CI): 2.41 (1.17, 4.94)], urban location [OR (95% CI): 2.66 (1.12, 6.31)], and American College of Surgeon's Oncology Group membership [OR (95% CI): 6.05 (2.32, 15.77)] were associated with medium-term performance; however, these factors were not associated with long-term performance.
Conclusions: Over the 12-year period, there were marked improvements in hospital performance for guideline-recommended lymph node evaluation. Understanding patterns in improvement over time contributes to debates over optimal designs of quality-improvement programs.
abstract_id: PUBMED:15232693
Sentinel lymph node biopsy in colorectal carcinoma. Lymph node status, an important prognostic factor in colon and rectal cancer, is affected by the selection and number of lymph nodes examined and by the quality of histopathological assessment. This multitude of influences is accompanied by an elevated risk of quality alterations. Sentinel lymph node biopsy (SLNB) is currently under investigation for its value in improving determination of the nodal status. Worldwide, data on 800 to 1000 patients from about 20 relatively small studies are available, focusing more on colon than on rectal cancer patients. SLNB may be of clinical value for the subgroup of patients who are initially node-negative after H&E staining but reveal small micrometastases or isolated tumor cells in the SLN after intensified histopathological workup. If further studies confirm that these patients benefit from adjuvant therapy, the method may have an important effect on the therapy and prognosis of colon cancer patients as well. Another potential application could be the determination of the nodal status after endoscopic excision of early cancer, to avoid bowel resection and lymphadenectomy.
abstract_id: PUBMED:20839064
Individual surgeon, pathologist, and other factors affecting lymph node harvest in stage II colon carcinoma. is a minimum of 12 examined lymph nodes sufficient? Background: Insufficient lymph node harvest in presumed stage II colon carcinomas can result in understaging and worsened cancer outcomes. The purpose of this study was to evaluate factors affecting the number of lymph node examined, their corresponding impact on cancer outcomes, and the optimal number of examined nodes with reference to the standard of 12.
Materials And Methods: We evaluated all patients undergoing surgery alone for stage II colon cancer included in our colorectal cancer database since 1976.
Results: A total of 901 patients were included. Mean follow-up exceeded 8 years. The individual pathologist had no statistically significant association with the number of lymph nodes examined. Harvest of at least 12 nodes was related to surgery after 1991 (85% vs 69%, P < 0.001), right vs left colon carcinomas (85% vs 72%, P < 0.001), individual surgeon (P = 0.018), and length of specimen at different cutoffs of at least 30, 25, and 20 cm (P < 0.001). Increasing age was associated with fewer examined lymph nodes (Spearman correlation = -0.22, P < 0.001). Fewer than 12 nodes and T4N0 staging independently affected overall survival (P = 0.003 and P = 0.022, respectively), disease-free survival (P = 0.010 and P = 0.09, respectively), disease-specific mortality (P = 0.009 and P < 0.001, respectively), and overall recurrence (P = 0.13 and P = 0.023, respectively). A minimal number of more than 12 examined nodes had no significant effect on cancer outcomes.
Conclusions: A number of factors influenced lymph node harvest in stage II colon cancer. However, lymph node assessment of at least 12 nodes was the only modifiable factor optimizing cancer outcomes.
abstract_id: PUBMED:24080680
Degree of specialisation of the surgeon influences lymph node yield after right-sided hemicolectomy. Aim: To investigate the degree to which specialisation or case-load of the surgeon is associated with the number of lymph nodes isolated from pathology specimens after right-sided hemicolectomy.
Method: Data from 6 hospitals with well-defined catchment areas included in the Uppsala/Örebro Regional Oncology Centre Colon Cancer Register 1997-2006 were used to assess 821 patients undergoing right-sided hemicolectomy for stages I-III colon cancer. Factors influencing the lymph node yield were evaluated.
Results: A surgeon with colorectal accreditation and a university pathology department were both associated with a significantly higher proportion of patients having 12 or more lymph nodes isolated from surgical specimens after right-sided hemicolectomy in both unadjusted and multivariate analyses. Emergency surgery did not affect the lymph node yield.
Conclusion: The degree of specialisation of the surgeon influences the number of lymph nodes isolated from specimens obtained during routine right-sided colon cancer surgery.
abstract_id: PUBMED:26113047
Lymph Node Yield in Colon Cancer: Individuals Can Make the Difference. Aim: To investigate the influence of individual surgeons and pathologists on examining an adequate (i.e. ≥10) number of lymph nodes in colon cancer resection specimens.
Patients And Methods: The number of lymph nodes was evaluated in surgically treated patients for colon cancer at our hospital from 2008 through 2010, excluding patients who had received neo-adjuvant treatment. The patient group consisted of 156 patients with a median age of 73 (interquartile range (IQR) 63-82 years) and a median of 12 lymph nodes per patient (IQR 8-15). In 106 patients (67.9%), 10 or more nodes were histopathologically examined.
Results: At univariate analysis, the examination of ≥10 nodes was influenced by tumour size (p = 0.05), tumour location (p = 0.015), type of resection (p = 0.034), individual surgeon (p = 0.023), and pathologist (p = 0.005). Neither individual surgeons nor pathologists significantly influenced the chance of finding an N+ status. Age (p = 0.044), type of resection (p = 0.007), individual surgeon (p = 0.012) and pathologist (p = 0.004) were independent prognostic factors in a multivariate model for finding ≥10 nodes.
Conclusion: Though cancer staging was not affected in this study, individual efforts by surgeons and pathologists play a critical role in achieving optimal lymph node yield through conventional methods.
Answer: The variation in lymph node assessment after colon cancer resection is influenced by multiple factors, including patient characteristics, surgeon practices, pathologist involvement, and hospital attributes. The majority of the variation in lymph node assessment is related to non-modifiable patient-specific factors, accounting for 79% of the variation (PUBMED:21174232). However, there is also significant variation attributed to the hospital (73%), pathologist (19%), and surgeon (8.2%) after accounting for all explanatory variables (PUBMED:21174232).
Further studies have shown that hospital-level predictors of suboptimal lymph node examination include nonacademic status, rural setting, and a low annual number of resections for colon cancer. The percent of total clustering variance attributed to surgeons, pathologists, and hospitals was 8%, 23%, and 70%, respectively (PUBMED:28088321). Additionally, tumor factors were responsible for 71% of the variation in lymph node yield, whereas surgeon and facility characteristics contributed 17% and 12%, respectively (PUBMED:38205923).
The individual surgeon and pathologist also play a role in lymph node retrieval, with statistical significance found between surgeons and the number of lymph nodes retrieved (PUBMED:22808760). However, another study found no statistical difference in the number of lymph nodes retrieved based on the surgeon, pathologist, or pathology technician, suggesting that the origin of a low lymph node count is multifactorial (PUBMED:20026828).
Hospital characteristics such as prior guideline performance, teaching status, and American College of Surgeon's Oncology Group membership were associated with guideline-recommended lymph node evaluation (PUBMED:23047124). The degree of specialization of the surgeon also influences the number of lymph nodes isolated from specimens (PUBMED:24080680). Individual efforts by surgeons and pathologists are critical in achieving optimal lymph node yield (PUBMED:26113047).
In summary, while patient factors are the largest determinant of lymph node evaluation variation, there is also considerable variation attributable to the surgeon, pathologist, and hospital, with collaborative efforts and quality improvement strategies needed to optimize lymph node examination in colon cancer resection. |
Instruction: Associations between depression and diabetes in the community: do symptom dimensions matter?
Abstracts:
abstract_id: PUBMED:25127227
Associations between depression and diabetes in the community: do symptom dimensions matter? Results from the Gutenberg Health Study. Objectives: While a bidirectional relationship between diabetes and depression has been established, little is known about whether the associations are due to the somatic-affective or the cognitive-affective dimension of depression.
Research Design And Methods: In a population-based, representative survey of 15,010 participants, we therefore studied the associations of the two dimensions of depression with diabetes and health care utilization among depressed and diabetic participants. Depression was assessed by the Patient Health Questionnaire PHQ-9.
Results: We found a linear and consistent association between the intensity of depression and the presence of diabetes, increasing from 6.9% in no or minimal depression to 7.6% in mild, 9% in moderate and 10.5% in severe depression. There was a strong positive association between somatic-affective symptoms and diabetes, but not between cognitive-affective symptoms and diabetes. Depression and diabetes were both independently related to somatic health care utilisation.
Conclusions: Diabetes and depression are associated, and the association is primarily driven by the somatic-affective component of depression. The main limitation of our study pertains to the cross-sectional data acquisition. Further longitudinal work on the relationship of obesity and diabetes should differentiate the somatic and the cognitive symptoms of depression.
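The Gutenberg study groups PHQ-9 totals into no/minimal, mild, moderate, and severe depression without listing its cut-offs. The sketch below uses the conventional PHQ-9 bands (0-4 minimal, 5-9 mild, 10-14 moderate, 15-27 collapsed here into severe); whether the study collapsed the two highest bands in exactly this way is an assumption.

def phq9_category(score: int) -> str:
    """Map a PHQ-9 total score (0-27) to a severity band.

    Conventional cut-offs; 'moderately severe' and 'severe' are collapsed
    here to mirror the four groups used in the abstract.
    """
    if not 0 <= score <= 27:
        raise ValueError("PHQ-9 total must be between 0 and 27")
    if score <= 4:
        return "no or minimal"
    if score <= 9:
        return "mild"
    if score <= 14:
        return "moderate"
    return "severe"

print(phq9_category(11))   # -> "moderate"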
abstract_id: PUBMED:33468519
Specific Dimensions of Depression Have Different Associations With Cognitive Decline in Older Adults With Type 2 Diabetes. Objective: Depression is highly frequent in older adults with type 2 diabetes and is associated with cognitive impairment, yet little is known about how various depression dimensions differentially affect cognition. We investigated longitudinal associations of specific depression dimensions with cognitive decline.
Research Design And Methods: Participants (N = 1,002) were from the Israel Diabetes and Cognitive Decline study, were ≥65 years of age, had type 2 diabetes, and were not experiencing dementia at baseline. Participants underwent a comprehensive neuropsychological battery at baseline and every 18 months thereafter, including domains of episodic memory, attention/working memory, semantic categorization/language, and executive function, and Z-scores of each domain were averaged and further normalized to calculate global cognition. Depression items from the 15-item Geriatric Depression Scale were measured at each visit and subcategorized into five dimensions: dysphoric mood, withdrawal-apathy-vigor (entitled apathy), anxiety, hopelessness, and memory complaint. Random coefficients models examined the association of depression dimensions with baseline and longitudinal cognitive functioning, adjusting for sociodemographics and baseline characteristics, including cardiovascular risk factors, physical activity, and use of diabetes medications.
Results: In the fully adjusted model at baseline, all dimensions of depression, except for anxiety, were associated with some aspect of cognition (P values from 0.01 to <0.001). Longitudinally, greater apathy scores were associated with faster decline in executive function (P = 0.004), a result that withstood adjustment for multiple comparisons. Associations of other depression dimensions with cognitive decline were not significant (P > 0.01).
Conclusions: Apathy was associated with a faster cognitive decline in executive function. These findings highlight the heterogeneity of depression as a clinical construct rather than as a single entity and point to apathy as a specific risk factor for cognitive decline among older adults with type 2 diabetes.
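The random coefficients models described above relate each depression dimension to both the level and the slope of cognitive functioning while allowing subject-specific intercepts and slopes over time. A minimal linear mixed-model sketch with statsmodels is given below; the file name and column names (exec_fn, years, apathy, subject_id and the covariates) are hypothetical stand-ins for the study's variables, not its actual specification.

import pandas as pd
import statsmodels.formula.api as smf

# One row per participant per visit; column names are hypothetical.
df = pd.read_csv("idcd_visits.csv")   # placeholder file name

# Random intercept and random slope for time within each participant;
# the apathy-by-time interaction captures dimension-specific decline.
model = smf.mixedlm(
    "exec_fn ~ years * apathy + age + sex + education",
    data=df,
    groups=df["subject_id"],
    re_formula="~years",
)
print(model.fit().summary())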
abstract_id: PUBMED:33808913
Association of Preterm Birth with Depression and Particulate Matter: Machine Learning Analysis Using National Health Insurance Data. This study uses machine learning and population data to analyze major determinants of preterm birth including depression and particulate matter. Retrospective cohort data came from Korea National Health Insurance Service claims data for 405,586 women who were aged 25-40 years and gave births for the first time after a singleton pregnancy during 2015-2017. The dependent variable was preterm birth during 2015-2017 and 90 independent variables were included (demographic/socioeconomic information, particulate matter, disease information, medication history, obstetric information). Random forest variable importance was used to identify major determinants of preterm birth including depression and particulate matter. Based on random forest variable importance, the top 40 determinants of preterm birth during 2015-2017 included socioeconomic status, age, proton pump inhibitor, benzodiazepine, tricyclic antidepressant, sleeping pills, progesterone, gastroesophageal reflux disease (GERD) for the years 2002-2014, particulate matter for the months January-December 2014, region, myoma uteri, diabetes for the years 2013-2014 and depression for the years 2011-2014. In conclusion, preterm birth has strong associations with depression and particulate matter. What is really needed for effective prenatal care is strong intervention for particulate matters together with active counseling and medication for common depressive symptoms (neglected by pregnant women).
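The preterm-birth study ranks roughly 90 candidate predictors by random forest variable importance. A minimal scikit-learn sketch of that ranking step is shown below; the file name and feature layout are hypothetical, and the impurity-based importances used here are only one of several possible importance definitions.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# One row per first-time mother; 'preterm' is the binary outcome and the
# remaining columns are candidate predictors (layout is hypothetical).
df = pd.read_csv("births.csv")            # placeholder file name
X = df.drop(columns=["preterm"])
y = df["preterm"]

forest = RandomForestClassifier(n_estimators=500, random_state=0)
forest.fit(X, y)

importances = (
    pd.Series(forest.feature_importances_, index=X.columns)
      .sort_values(ascending=False)
)
print(importances.head(40))   # top-40 determinants, as reported in the abstract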
abstract_id: PUBMED:26476474
Associations between anxiety and depression symptoms and cognitive testing and neuroimaging in type 2 diabetes. Aims: Anxiety, depression, accelerated cognitive decline, and increased risk of dementia are observed in individuals with type 2 diabetes. Anxiety and depression may contribute to lower performance on cognitive tests and differences in neuroimaging observed in individuals with type 2 diabetes.
Methods: These relationships were assessed in 655 European Americans with type 2 diabetes from 504 Diabetes Heart Study families. Participants completed cognitive testing, brain magnetic resonance imaging, the Brief Symptom Inventory Anxiety subscale, and the Center for Epidemiologic Studies Depression-10.
Results: In analyses adjusted for age, sex, educational attainment, and use of psychotropic medications, individuals with comorbid anxiety and depression symptoms had lower performance on all cognitive testing measures assessed (p≤0.005). Those with both anxiety and depression also had increased white matter lesion volume (p=0.015), decreased gray matter cerebral blood flow (p=4.43×10(-6)), decreased gray matter volume (p=0.002), increased white and gray matter mean diffusivity (p≤0.001), and decreased white matter fractional anisotropy (p=7.79×10(-4)). These associations were somewhat attenuated upon further adjustment for health status related covariates.
Conclusions: Comorbid anxiety and depression symptoms were associated with cognitive performance and brain structure in a European American cohort with type 2 diabetes.
abstract_id: PUBMED:29851173
The course of apathy in late-life depression treated with electroconvulsive therapy; a prospective cohort study. Objectives: Apathy, a lack of motivation, is frequently seen in older individuals, with and without depression, with substantial impact on quality of life. This prospective cohort study of patients with severe late-life depression treated with electroconvulsive therapy (ECT) aims to study the course of apathy and the predictive value of vascular burden and in particular white matter hyperintensities on apathy course.
Methods: Information on apathy (defined by a score of >13 on the Apathy Scale), depression severity, vascular burden, and other putative confounders was collected at 2 psychiatric hospitals from patients with late-life depression (aged 55 to 87 years, N = 73). MRI data on white matter hyperintensities were available in 52 patients. Possible risk factors for apathy post-ECT were determined using regression analyses.
Results: After treatment with ECT, 52.0% (26/50) of the depression remitters still suffered from clinically relevant apathy symptoms. In the entire cohort, more patients remained apathetic (58.9%) than depressed (31.5%). Presence of apathy post-ECT was not associated with higher age, use of benzodiazepines, or severity of apathy and depression at baseline. Less response in depressive symptomatology after ECT predicted post-treatment apathy. The presence of vascular disease, diabetes mellitus and smoking, and white matter hyperintensities in the brain was not associated with post-treatment apathy.
Conclusions: Apathy may perpetuate in individual patients, despite remission of depressive symptoms. In this cohort of patients with late-life depression, post-ECT apathy is not associated with white matter hyperintensities.
abstract_id: PUBMED:31008648
Depression and multimorbidity: Considering temporal characteristics of the associations between depression and multiple chronic diseases. Objectives: Depression frequently co-occurs with multiple chronic diseases in complex, costly, and dangerous patterns of multimorbidity. The field of health psychology may benefit from evaluating the temporal characteristics of depression's associations with common diseases, and from determining whether depression is a central connector in multimorbid disease clusters. The present review addresses these issues by focusing on 4 of the most prevalent diseases: hypertension, ischemic heart disease, arthritis, and diabetes.
Method: Study 1 assessed how prior chronic disease diagnoses were associated with current depression in a large, cross-sectional, population-based study. It assessed depression's centrality using network analysis accounting for disease prevalence. Study 2 presents a systematic scoping review evaluating the extent to which depression was prospectively associated with the onset of the 4 prevalent chronic diseases.
Results: In Study 1 depression had the fourth highest betweenness centrality ranking of 26 network nodes and centrally connected many existing diseases and unhealthy behaviors. In Study 2 depression was associated with subsequent incidence of ischemic heart disease and diabetes across multiple meta-analyses. Insufficient information was available about depression's prospective associations with incident hypertension and arthritis.
Conclusions: Depression is central in patterns of multimorbidity and is associated with incident disease for several of the most common chronic diseases, justifying the focus on screening and treatment of depression in those at risk for developing chronic disease. Future research should investigate the mediating and moderating roles of health behaviors in the association between depression and the staggered emergence over time of clusters of multimorbid chronic diseases.
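Study 1 above ranks depression by betweenness centrality within a network of co-occurring diseases and health behaviors. The networkx sketch below illustrates the centrality computation on a tiny hypothetical comorbidity graph; the edges are invented for illustration and do not reproduce the study's network.

import networkx as nx

# Tiny hypothetical comorbidity network; an edge means the two conditions
# co-occur more often than expected (edges invented for illustration).
G = nx.Graph()
G.add_edges_from([
    ("depression", "diabetes"),
    ("depression", "ischemic heart disease"),
    ("depression", "smoking"),
    ("diabetes", "hypertension"),
    ("hypertension", "ischemic heart disease"),
    ("arthritis", "depression"),
])

centrality = nx.betweenness_centrality(G, normalized=True)
for node, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{node}: {score:.3f}")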
abstract_id: PUBMED:35782452
Associations Among Depression, Hemoglobin A1c Level, and Prognosis in Patients With Coronary Artery Disease: A Prospective Study. Background: Depression is ubiquitous in patients with coronary artery disease (CAD). The relationship between depression and hemoglobin A1c (HbA1c) is controversial. The combined effect of high HbA1c and depression on prognosis is unclear, especially in non-diabetic CAD patients. We sought to explore these associations.
Methods: 558 CAD patients were included in this prospective study. Patients were grouped by HbA1c levels and the status of clinical depression. The average follow-up period was about 2.2 years, and Cox proportional hazards models were used to compare the differences of prognosis in all the groups.
Results: Clinical depression had no association with HbA1c in all CAD patients (P for Pearson correlation = 0.74). Among all four groups, compared to group 1 (patients without clinical depression and low HbA1c), group 3 (without clinical depression and high HbA1c) had a higher risk of MACE (adjusted hazard ratio [aHR], 1.97; 95% confidence interval [CI], 1.2-3.25) and composite events (aHR, 1.67; 95% CI, 1.09-2.053). Group 4 (patients with clinical depression and high HbA1c) had higher HRs for MACE (aHR, 2.9; 95% CI, 1.32-6.38) and composite events (aHR, 2.12; 95% CI, 1.06-4.25). In CAD patients without diabetes, patients with clinical depression and high HbA1c had a higher risk of MACE (HR, 2.71; 95% CI, 1.02-7.19), non-cardiac readmission (HR, 3.48; 95% CI, 1.26-9.57) and composite events (HR, 2.44; 95% CI, 1.08-5.53) than those with no clinical depression and low HbA1c. In patients with comorbidities of depression and diabetes, patients with depression and high HbA1c were more likely to experience non-cardiac readmissions (HR, 4.49; 95% CI, 1.31-15.38) than patients with no depression and low HbA1c. In all the above analyses, p-values for the interaction between clinical depression and HbA1c were not statistically significant.
Conclusions: The presence of both depression and high HbA1c leads to a worse prognosis in CAD patients than either risk factor alone, regardless of whether these CAD patients also have comorbid diabetes. For patients with CAD and depression, lower HbA1c may be required.
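The adjusted hazard ratios above come from Cox proportional hazards models comparing the depression-by-HbA1c groups against group 1 while adjusting for covariates. A minimal lifelines sketch of such a model is shown below; the file name, the group-indicator coding, and the covariate names are hypothetical, not the study's exact variables.

import pandas as pd
from lifelines import CoxPHFitter

# One row per CAD patient; 'months' is follow-up time, 'mace' the event flag,
# and group2-group4 are indicators relative to group 1 (no depression, low HbA1c).
df = pd.read_csv("cad_cohort.csv")        # placeholder file name
covariates = ["group2", "group3", "group4", "age", "sex", "diabetes"]

cph = CoxPHFitter()
cph.fit(df[["months", "mace"] + covariates],
        duration_col="months", event_col="mace")
cph.print_summary()   # the exp(coef) column gives the adjusted hazard ratios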
abstract_id: PUBMED:35008732
Pathomechanisms of Vascular Depression in Older Adults. Depression in older individuals is a common complex mood disorder with high comorbidity of both psychiatric and physical diseases, associated with high disability, cognitive decline, and increased mortality. The factors predicting the risk of late-life depression (LLD) are incompletely understood. The reciprocal relationship of depressive disorder and age- and disease-related processes has generated pathogenic hypotheses and provided various treatment options. The heterogeneity of depression complicates research into the underlying pathogenic cascade, and factors involved in LLD considerably differ from those involved in early life depression. Evidence suggests that a variety of vascular mechanisms, in particular cerebral small vessel disease, generalized microvascular, and endothelial dysfunction, as well as metabolic risk factors, including diabetes, and inflammation that may induce subcortical white and gray matter lesions by compromising fronto-limbic and other important neuronal networks, may contribute to the development of LLD. The "vascular depression" hypothesis postulates that cerebrovascular disease or vascular risk factors can predispose, precipitate, and perpetuate geriatric depression syndromes, based on their comorbidity with cerebrovascular lesions and the frequent development of depression after stroke. Vascular burden is associated with cognitive deficits and a specific form of LLD, vascular depression, which is marked by decreased white matter integrity, executive dysfunction, functional disability, and poorer response to antidepressive therapy than major depressive disorder without vascular risk factors. Other pathogenic factors of LLD, such as neurodegeneration or neuroimmune regulatory dysmechanisms, are briefly discussed. Treatment planning should consider a modest response of LLD to antidepressants, while vascular and metabolic factors may provide promising targets for its successful prevention and treatment. However, their effectiveness needs further investigation, and intervention studies are needed to assess which interventions are appropriate and effective in clinical practice.
abstract_id: PUBMED:36542288
Hormonal factors moderate the associations between vascular risk factors and white matter hyperintensities. To examine the moderation effects of hormonal factors on the associations between vascular risk factors and white matter hyperintensities in men and women, separately. White matter hyperintensities were automatically segmented and quantified in the UK Biobank dataset (N = 18,294). Generalised linear models were applied to examine (1) the main effects of vascular and hormonal factors on white matter hyperintensities, and (2) the moderation effects of hormonal factors on the relationship between vascular risk factors and white matter hyperintensities volumes. In men with testosterone levels one standard deviation higher than the mean value, smoking was associated with 27.8% higher white matter hyperintensities volumes in the whole brain. In women with a shorter post-menopause duration (one standard deviation below the mean), diabetes and higher pulse wave velocity were associated with 28.8% and 2.0% more deep white matter hyperintensities, respectively. These findings highlighted the importance of considering hormonal risk factors in the prevention and management of white matter hyperintensities.
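Moderation in this study is tested by adding hormone-by-risk-factor interaction terms to generalised linear models of white matter hyperintensity volume. A minimal statsmodels sketch of one such interaction (smoking by standardized testosterone in men) is shown below; the file name, column names, and the log-transformed outcome are assumptions, not the paper's exact specification.

import pandas as pd
import statsmodels.formula.api as smf

# UK-Biobank-style rows for men only; column names are hypothetical.
men = pd.read_csv("ukb_men.csv")          # placeholder file name

# 'smoking * testosterone_z' expands to both main effects plus their
# interaction term, which carries the moderation test.
model = smf.ols("log_wmh_volume ~ smoking * testosterone_z + age", data=men).fit()
print(model.summary())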
abstract_id: PUBMED:37803500
Neuroticism is associated with increase in depression symptoms over time in older adults with type 2 diabetes. Objectives: The likelihood of depression symptoms in those with type 2 diabetes (T2D) is high. Psychological risk factors enhancing comorbidity of depression symptoms in T2D are yet to be determined. The present study examines the cross-sectional and longitudinal relationship between personality traits and distinct depression dimensions in older adults with T2D.
Methods: Participants were older adults (age ≥65 years) with T2D from the Israel Diabetes and Cognitive Decline (IDCD) study (N = 356), with complete data on depression [Geriatric Depression Scale (GDS), 15-item version] and its dimensions, namely dysphoric mood, apathy, hopelessness, memory complaints and anxiety, and on personality [Big Five Inventory (BFI)]. Logistic and mixed linear regression models examined cross-sectional and longitudinal associations while adjusting for socio-demographics, cognition, cardiovascular and diabetes-related factors.
Results: Cross-sectionally, high neuroticism was associated with high scores in total GDS and in all depression dimensions except memory complaints. Higher extroversion was associated with lower total GDS and with lower scores on all depression dimensions except anxiety. High levels of neuroticism were associated with an increase in the total number of depression symptoms over time.
Conclusions: In older adults with T2D, neuroticism and extroversion are associated with most depression dimensions suggesting that these traits relate to a global depression symptomatology rather than to any specific dimension or phenomenology. High neuroticism was associated with increase in depression symptoms over time, highlighting its role in the development of depression symptoms in older adults with T2D.
Answer: Yes, symptom dimensions do matter in the associations between depression and diabetes in the community. Research has shown that there is a linear and consistent association between the intensity of depression and the presence of diabetes, with the association primarily driven by the somatic-affective component of depression rather than the cognitive-affective symptoms. This suggests that the physical manifestations of depression, such as changes in sleep, appetite, and energy levels, are more closely linked to diabetes than the cognitive aspects like feelings of guilt or difficulty concentrating (PUBMED:25127227).
Moreover, specific dimensions of depression have been found to have different associations with cognitive decline in older adults with type 2 diabetes. For instance, apathy, which is characterized by a lack of interest or motivation, was associated with a faster cognitive decline in executive function. This highlights the heterogeneity of depression as a clinical construct and points to apathy as a specific risk factor for cognitive decline among older adults with type 2 diabetes (PUBMED:33468519).
In addition, the presence of both depression and high hemoglobin A1c (HbA1c) levels leads to a worse prognosis in patients with coronary artery disease than either risk factor alone, regardless of whether they have comorbid diabetes. This suggests that for patients with coronary artery disease and depression, lower HbA1c levels may be required (PUBMED:35782452).
Furthermore, neuroticism, a personality trait characterized by emotional instability, has been associated with an increase in depression symptoms over time in older adults with type 2 diabetes. This indicates that psychological factors such as personality traits can influence the comorbidity of depression symptoms in individuals with type 2 diabetes (PUBMED:37803500).
Overall, these findings underscore the importance of considering the different dimensions of depression in relation to diabetes and other health outcomes, as they may have distinct implications for prognosis, treatment, and management of these conditions. |
Instruction: Is lymphatic endoglin expression a risk marker for breast cancer metastasis?
Abstracts:
abstract_id: PUBMED:23531181
Is lymphatic endoglin expression a risk marker for breast cancer metastasis? Results of a pilot study. Background: Studies have identified endoglin as a biological marker that is overexpressed on the microvessels of certain solid cancers (breast cancer, colorectal cancer, and head and neck squamous cell cancer). There is, at present, no immunohistochemical marker that can discriminate between lymph node-negative and lymph node-positive breast cancer tissue.
Methods: The expression of endoglin was quantified by immunohistochemistry and assessment of microvessel density in 53 surgical specimens. These comprised breast tumor tissue that had not spread to the regional lymph nodes (lymph node-negative breast tumor tissue: 20 specimens), breast tumor tissue that had spread to regional lymph nodes (lymph node-positive breast tumor tissue: 21 specimens), and normal breast tissue as a control (12 specimens).
Results: A significant difference was observed between the expression of endoglin on microvessels of lymph node-negative and lymph node-positive breast cancer tissue (p<0.05). This difference was shown to be due to endoglin expression on lymphatic vessels (p<0.02) rather than on blood vessels (p>0.05).
Conclusions: These findings are the first to suggest that endoglin expression on breast tumor lymphatic vessels may have diagnostic potential as a discriminator between lymph node-negative and lymph node-positive breast cancer. Further studies would be required to confirm this.
abstract_id: PUBMED:33945869
Nectin-4 promotes lymphangiogenesis and lymphatic metastasis in breast cancer by regulating the CXCR4-LYVE-1 axis. Tumor-induced lymphangiogenesis promotes tumor progression by generating new lymphatic vessels that help in tumor dissemination to regional lymph nodes and distant sites. Recently, the role of Nectin-4 in cancer metastasis and angiogenesis has been studied, but its role in lymphangiogenesis is unknown. Here, we systematically delineated the role of Nectin-4 in lymphangiogenesis and its regulation in invasive ductal carcinoma (IDC). Nectin-4 expression positively correlated with risk factors associated with breast cancer occurrence (alcohol, smoking, lifestyle habits, etc.), CXCR4 expression, and LYVE-1 lymphatic vessel density (LVD). LVD was significantly higher in the axillary lymph node (ALN) than in the primary tumor. Depleting Nectin-4, VEGF-C, or both attenuated expression of the important lymphangiogenic marker LYVE-1, tube formation, and migration of ALN-derived primary cells. Nectin-4 stimulated the expression of CXCR4 and CXCL12 under hypoxic conditions in ALN-derived primary cells. Further, Nectin-4 augmented expression of lymphatic metastatic markers (e.g. eNOS, TGF-β, CD-105) and MMPs. Induced expression of Nectin-4 along with other representative metastatic markers was noted in lymph and blood circulating tumor cells (LCTCs and BCTCs) of local and distant metastatic samples. Thus, Nectin-4 displayed a predominant role in promoting tumor-induced lymphangiogenesis and lymphatic metastasis by modulating the CXCR4/CXCL12-LYVE-1 axis.
abstract_id: PUBMED:37719885
CD105 expression in cancer-associated fibroblasts: a biomarker for bone metastasis in early invasive ductal breast cancer patients. Introduction: Bone metastasis is one of the main causes of decreased survival in patients with advanced breast cancer. Therefore, it is essential to find prognostic markers for the occurrence of this type of metastasis during the early stage of the disease. Currently, cancer-associated fibroblasts, which represent 80% of the fibroblasts present in the tumor microenvironment, are an interesting target for studying new biomarkers and developing alternative therapies. This study evaluated the prognostic significance of CD105 expression in cancer-associated fibroblasts in early breast cancer patients. Methods: Immunohistochemistry was used to assess CD105 expression in invasive ductal breast carcinomas (n = 342), analyzing its association with clinical and pathological characteristics. Results: High CD105 expression in cancer-associated fibroblasts was associated with an increased risk of metastatic occurrence (p = 0.0003), particularly bone metastasis (p = 0.0005). Furthermore, high CD105 expression was associated with shorter metastasis-free survival, bone metastasis-free survival, and overall survival (p = 0.0002, 0.0006, and 0.0002, respectively). CD105 expression also constituted an independent prognostic factor for metastasis-free survival, bone metastasis-free survival, and overall survival (p = 0.0003, 0.0006, and 0.0001, respectively). Discussion: High CD105 expression in cancer-associated fibroblasts is an independent prognostic marker for bone metastasis in early breast cancer patients. Therefore, the evaluation of CD105(+) cancer-associated fibroblasts could be crucial to stratify breast cancer patients based on their individual risk profile for the development of bone metastasis, enhancing treatment strategies and outcomes.
abstract_id: PUBMED:14978873
The immunohistochemical expression of CD105 is a marker for high metastatic risk and worse prognosis in breast cancers. The quantification of angiogenesis in human solid tumors has been shown to be an indicator of prognosis, and tumor microvasculature is a candidate target for antiangiogenic therapy. CD105 (endoglin) is significantly expressed in activated endothelial cells in culture and in tumor microvessels. Quantification of CD105 immunocytochemical expression, which may be of significant clinical relevance, has not yet been accurately evaluated. In the present report, CD105 expression on frozen sections was investigated using immunohistochemical assays in a series of 929 patients and correlated with long-term (median = 11.3 years) follow-up. CD105 immunostaining was observed on endothelial cells, mostly in small vessels. The number of vessels and the immunostained surface were evaluated in so-called "hot spots" within the tumor stroma. Both the number of vessels and the immunostained surface were correlated with the patients' outcome (overall survival, disease-free survival, metastases) in the whole group of patients and also specifically in the node-negative subgroup. Univariate (Kaplan-Meier) analysis showed that the number of CD105-positive microvessels (cut-off n = 15) was significantly correlated with poor overall survival among all patients (p = 0.001). This correlation was less significant in the group of node-negative patients (p = 0.035). Marked CD105 expression was also correlated with high metastasis risk among all patients (p = 0.006) and among node-negative patients as well (p = 0.001). In multivariate analysis (Cox model), CD105 immunodetection was identified as an independent prognostic indicator. Our results suggest that CD105 immunohistochemical expression has practical clinical relevance for identifying node-negative patients with poor prognosis. Moreover, CD105 immunodetection may also be considered a potential tool for selecting patients who could benefit from specific antiangiogenic therapy using anti-CD105 conjugates.
abstract_id: PUBMED:11769465
Angiogenesis and metastasis marker of human tumors. Tumor growth and metastasis are dependent on angiogenesis. Therefore, certain angiogenesis markers may be useful as metastasis markers and/or targets for antiangiogenic therapy. We and others have been studying endoglin (EDG; CD105) for such purposes. EDG is a proliferation-associated antigen of endothelial cells and is essential for angiogenesis. In addition, EDG is a component of the transforming growth factor (TGF)-beta receptor complex. Expression of EDG is up-regulated in tumor-associated angiogenic vasculature compared with normal tissue vasculature. Microvessel density detected by EDG expression in breast cancer tissues showed a statistically significant correlation with overall and disease-free survival. In addition, elevated serum EDG was associated with metastasis in patients with colorectal, breast, and other solid tumors. On the other hand, we have been targeting EDG on tumor vasculature to suppress tumor growth and metastasis by systemic (i.v.) administration of anti-EDG monoclonal antibodies (mAbs) and immunoconjugates (IMCs). To this end, we have been using three animal models, i.e., a severe combined immunodeficient (SCID) mouse model of MCF-7 human breast cancer, a human skin/SCID mouse chimera model bearing MCF-7 tumor, and a syngeneic metastasis model of colon-26 adenocarcinoma cells in BALB/c mice. In addition, antiangiogenic activities of anti-EDG mAbs and IMCs were evaluated in mice using the dorsal air sac assay. The IMCs were prepared by coupling deglycosylated ricin A-chain or 125I to individual anti-EDG mAbs. These anti-EDG IMCs and mAbs showed substantial antitumor efficacy and antimetastatic activity without severe toxicity. Recently, we generated a recombinant human/mouse chimeric anti-EDG mAb to facilitate clinical application of the mAb.
abstract_id: PUBMED:25803686
CD105 expression on CD34-negative spindle-shaped stromal cells of primary tumor is an unfavorable prognostic marker in early breast cancer patients. Several studies have confirmed that the breast tumor microenvironment drives cancer progression and metastatic development. The aim of our research was to investigate the prognostic significance of the breast tumor microenvironment in untreated early breast cancer patients. Therefore, we analyzed the association of the expression of α-SMA, FSP, CD105 and CD146 in CD34-negative spindle-shaped stromal cells, not associated with the vasculature, in primary breast tumors with classical prognostic marker levels, metastatic recurrence, local relapse, disease-free survival, metastasis-free survival and the overall survival of patients. In the same way, we evaluated the association of the amount of intra-tumor stroma, fibroblasts, collagen deposition, lymphocytic infiltration and myxoid changes in these samples with the clinical-pathological data previously described. This study is the first to demonstrate the high CD105 expression in this stromal cell type as a possible independent marker of unfavorable prognosis in early breast cancer patients. Our study suggests that this new finding can be a useful prognostic marker in the clinical-pathological routine.
abstract_id: PUBMED:9761112
Role of transforming growth factor beta3 in lymphatic metastasis in breast cancer. Transforming growth factor-betas (TGFbetas) play a prominent role in tumour growth and metastasis by enhancing angiogenesis and suppressing immune surveillance. Despite the increased interest in the effect of TGFbetas on tumour progression, little is known about the importance of TGFbeta3 and its receptor CD105 in breast cancer. In the present study, we measured the plasma levels of TGFbeta3, CD105-TGFbeta3 complexes and TGFbeta1 in 80 patients with untreated early-stage breast cancer using an enhanced chemiluminescence ELISA method. Of the 80 patients, 14 were histologically confirmed as having axillary lymph node metastases, while the remainder had no evidence of lymph node involvement. The results showed that levels of both TGFbeta3 and CD105-TGFbeta3 complex were significantly elevated in patients with positive lymph nodes compared to those without node metastasis. Furthermore, the levels of both TGFbeta3 and CD105-TGFbeta3 complex correlated with lymph node status. The only patient who died of the disease had very high plasma levels of TGFbeta3 and CD105-TGFbeta3 complex and positive lymph nodes; this patient developed lung metastases within 2 years of diagnosis. No significant correlation was seen between either TGFbeta3 or CD105-TGFbeta3 complex levels and tumour stage, size or histological grade. Plasma TGFbeta1 levels were not correlated with node metastasis, tumour stage, grade or size. Our data suggest that plasma levels of TGFbeta3 and CD105-TGFbeta3 complex may be of prognostic value in the early detection of metastasis of breast cancer.
abstract_id: PUBMED:15067342
Prognostic significance of angiogenesis evaluated by CD105 expression compared to CD31 in 905 breast carcinomas: correlation with long-term patient outcome. Our purpose was to determine the respective prognostic significance of CD105 and CD31 immunoexpression in node negative patients with breast carcinoma, since angiogenesis induces blood borne metastases and death in carcinomas. CD105 (endoglin) has been reported as expressed by activated endothelial cells and consequently should better reflect neoangiogenesis in malignant tumors. Comparison of CD31 and CD105 immunocytochemical expression was undertaken in a series of 905 breast carcinomas. Results were compared to patients' long-term (median = 11.3 years) outcome. Univariate (Kaplan-Meier) analysis showed that the number of CD105+ microvessels (cut-off 15 vessels) correlated significantly with poor overall survival (p=0.001). This correlation was less significant in node negative patients (p=0.035). The number of CD31+ microvessels (cut-off 25 vessels) similarly correlated with poor survival (p=0.032) but not in the subgroup of node negative patients. Marked CD105 expression also correlated with a high risk for metastasis in all patients (p=0.0002) and in the subset of node negative patients (p=0.001). Similarly metastasis risk in node negative patients correlated with marked CD31 immunocytochemical expression (p=0.02). Multivariate analysis (Cox model) identified CD105, but not CD31 immunoexpression, as an independent prognostic indicator. Our results suggest that: i) in breast carcinomas, immunoselection of microvessels containing activated CD105 labelled endothelial cells is endowed with a stronger prognostic significance, as compared to CD31 vessels labelling; ii) the CD105 immunoexpression may be considered as a potential tool for selecting node negative patients with a poorer outcome and higher metastasis risk; iii) in these patients specific antiangiogenic therapy targeted by anti-CD105 conjugates can be further developed.
abstract_id: PUBMED:28940371
ITIH5 induces a shift in TGF-β superfamily signaling involving Endoglin and reduces risk for breast cancer metastasis and tumor death. ITIH5 has been proposed as a novel tumor suppressor in various tumor entities including breast cancer. Recently, ITIH5 was furthermore identified as a metastasis suppressor gene in pancreatic carcinoma. In this study, we aimed to specify the impact of ITIH5 on metastasis in breast cancer. Therefore, DNA methylation of ITIH5 promoter regions was assessed in breast cancer metastases using the TCGA portal and methylation-specific PCR (MSP). We reveal that the ITIH5 upstream promoter region is particularly responsible for ITIH5 gene inactivation, predicting shorter survival of patients. Notably, methylation of this upstream ITIH5 promoter region was associated with disease progression and was, for example, abundantly found in distant metastases. In vitro, stably ITIH5-overexpressing MDA-MB-231 breast cancer clones were used to analyze cell invasion and to identify novel ITIH5-downstream targets. Indeed, ITIH5 re-expression suppresses invasive growth of MDA-MB-231 breast cancer cells while modulating expression of genes involved in metastasis, including Endoglin (ENG), an accessory TGF-β receptor, which was furthermore co-expressed with ITIH5 in primary breast tumors. By performing in vitro stimulation of TGF-β signaling using TGF-β1 and BMP-2, we show that ITIH5 triggered a TGF-β superfamily signaling switch contributing to downregulation of targets like Id1, known to promote metastasis. Moreover, ITIH5 predicts longer overall survival (OS) only in those breast tumors that feature high ENG expression or inversely regulated ID1, suggesting a clinical and functional impact of an ITIH5-ENG axis for breast cancer progression. Hence, we provide evidence that ITIH5 may represent a novel modulator of TGF-β superfamily signaling involved in suppressing breast cancer metastasis.
abstract_id: PUBMED:10754488
Plasma levels of soluble CD105 correlate with metastasis in patients with breast cancer. CD105 (endoglin), a receptor for transforming growth factor (TGF) beta1 and beta3 in vascular endothelial cells, is highly up-regulated in blood vessels of tissues where neovascularisation occurs. It modulates endothelial-mesenchymal signalling and is essential for angiogenesis. Indeed, CD105 knock-out mice die from malvascularisation by day 11.5 p.c. In the present study, CD105, TGFbeta1 and CD105/TGFbeta1 complexes were quantified in plasma samples from 77 healthy individuals and 92 patients with early stage breast cancer prior to any treatment. When compared with normal controls, both CD105 and CD105/TGFbeta1 complex levels were significantly elevated in breast cancer patients, whereas TGFbeta1 levels were lower in cancer patients. The most important finding to emerge was that CD105 levels were significantly increased in patients who developed distant metastasis compared with disease-free patients. While there was no significant difference in CD105 levels between controls and disease-free patients, levels were significantly higher in patients with metastatic disease. Thus, patients who had died following local relapse or distant metastases possessed the highest levels of CD105. Neither CD105/TGFbeta1 complex nor TGFbeta1 levels correlated with tumour progression. Our data indicate that CD105 might be a valuable novel angiogenic marker for identifying breast cancer patients who are at high risk of developing metastasis.
Answer: Yes, lymphatic endoglin expression appears to be a risk marker for breast cancer metastasis. A pilot study found a significant difference in the expression of endoglin on microvessels between lymph node-negative and lymph node-positive breast cancer tissue, with the difference attributed to endoglin expression on lymphatic vessels rather than on blood vessels. This suggests that endoglin expression on breast tumor lymphatic vessels may have diagnostic potential as a discriminator between lymph node-negative and lymph node-positive breast cancer (PUBMED:23531181).
Additionally, CD105, which is another name for endoglin, has been associated with an increased risk of metastatic occurrence, particularly bone metastasis, in early breast cancer patients. High CD105 expression in cancer-associated fibroblasts was linked to shorter metastasis-free survival, bone metastasis-free survival, and overall survival, and was identified as an independent prognostic factor for these outcomes (PUBMED:37719885).
Furthermore, the immunohistochemical expression of CD105 has been identified as a marker for high metastatic risk and worse prognosis in breast cancers. Studies have shown that the number of CD105 positive microvessels was significantly correlated with poor overall survival and high metastasis risk among all patients, including node-negative patients. CD105 immunodetection was also identified as an independent prognostic indicator (PUBMED:14978873).
Other studies have confirmed the prognostic significance of CD105 expression in breast carcinomas, suggesting that it is a stronger prognostic indicator compared to CD31, another endothelial marker. CD105 expression was associated with a higher risk for metastasis and poorer outcome in node-negative patients (PUBMED:15067342).
In summary, the expression of lymphatic endoglin (CD105) is indeed a risk marker for breast cancer metastasis, with multiple studies supporting its role as a prognostic indicator for metastatic risk and patient outcomes.
Instruction: Is single umbilical artery an independent risk factor for perinatal mortality?
Abstracts:
abstract_id: PUBMED:20024571
Is single umbilical artery an independent risk factor for perinatal mortality? Objective: To evaluate perinatal outcome of fetuses with isolated single umbilical artery (SUA), and specifically to examine whether an isolated SUA is an independent risk factor for perinatal mortality.
Methods: A population-based study was conducted, comparing pregnancies of women with and without SUA. Deliveries occurred between the years 1988-2006, in a tertiary medical center. Multiple gestations, chromosomal abnormalities and malformations were excluded from the analysis. Stratified analysis was performed using multiple logistic regression models to evaluate the association between SUA and perinatal mortality, while controlling for confounders.
Results: Out of 194,809 deliveries, 243 (0.1%) were of fetuses with isolated SUA. Fetuses with SUA were smaller (2,844 ± 733 vs. 3,197 ± 530 g, P < 0.001), and were delivered at an earlier gestational age (38.3 ± 3.0 vs. 39.3 ± 2.1 weeks, P < 0.001), when compared with fetuses with normal umbilical vessels. Mothers of fetuses with isolated SUA tended to have a history of infertility treatments (4.5 vs. 1.7%; P = 0.001) when compared with the comparison group. Fetuses with SUA had more complications, including fetal growth restriction (FGR 9.5 vs. 1.9%, P < 0.001), polyhydramnios (11.5 vs. 3.7%; P < 0.001) and oligohydramnios (6.6 vs. 2.2%; P < 0.001). Deliveries of SUA fetuses had higher rates of placental abruption (3.3 vs. 0.7%; P < 0.001), placenta previa (1.2 vs. 0.4%; P = 0.03) and cord prolapse (2.9 vs. 0.4%; P < 0.001). Higher rates of cesarean deliveries were noted in this group (23.9 vs. 12.2%; P < 0.001). SUA newborns had higher rates of low Apgar scores (<7) at 1 min (11.8 vs. 3.7%; P < 0.001) and 5 min (3.5 vs. 0.4%; P < 0.001). Higher rates of perinatal mortality were noted in the SUA group, as compared to fetuses with normal umbilical vessels (6.6 vs. 0.9%, OR 7.78; 95% CI 4.7-13.0; P < 0.001). Using a multiple logistic regression model, controlling for possible confounders, such as FGR, oligohydramnios, polyhydramnios, prolapse of cord, maternal hypertension and diabetes mellitus, isolated SUA remained an independent risk factor for perinatal mortality (adjusted OR = 3.91, 95% CI 2.06-7.43; P < 0.001).
Conclusion: Isolated SUA in our population was noted as an independent risk factor for perinatal mortality.
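Editorial note: the step from the crude OR of 7.78 to the adjusted OR of 3.91 in the abstract above (PUBMED:20024571) is the standard multivariable logistic-regression adjustment. The sketch below only illustrates the general pattern with statsmodels on a synthetic dataset; the variable names and data are illustrative assumptions, not the study's records.

```python
# Hedged sketch: crude vs confounder-adjusted odds ratio for an exposure
# (here labelled "sua") and a binary outcome. All data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 50_000
df = pd.DataFrame({"sua": rng.binomial(1, 0.02, n)})
# Make the confounders more frequent when SUA is present, so the crude
# estimate absorbs part of their effect.
df["fgr"] = rng.binomial(1, 0.02 + 0.08 * df["sua"])
df["oligohydramnios"] = rng.binomial(1, 0.02 + 0.05 * df["sua"])

log_odds = -4 + 1.2 * df["sua"] + 1.0 * df["fgr"] + 0.9 * df["oligohydramnios"]
df["mortality"] = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

crude = smf.logit("mortality ~ sua", data=df).fit(disp=False)
adjusted = smf.logit("mortality ~ sua + fgr + oligohydramnios", data=df).fit(disp=False)

print("crude OR:   ", round(float(np.exp(crude.params["sua"])), 2))
print("adjusted OR:", round(float(np.exp(adjusted.params["sua"])), 2))
```

In this synthetic example the crude OR exceeds the adjusted OR for the same reason it does in the study: part of the apparent effect is carried by confounders that travel with the exposure.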
abstract_id: PUBMED:27048509
Isolated single umbilical artery is an independent risk factor for perinatal mortality and adverse outcomes in term neonates. Objective: To determine whether an isolated single umbilical artery (iSUA) is an independent risk factor for perinatal mortality in term neonates with normal estimated fetal weight (EFW) prior to delivery.
Method: A population-based study was conducted, including all deliveries occurring between 1993 and 2013, in a tertiary medical center. Pregnancies with and without iSUA were compared. Multiple gestations, chromosomal, and structural abnormalities were excluded from the cohort. Only pregnancies delivered at term with normal EFW evaluated prior to delivery were included. Stratified analysis was performed using multiple logistic regression models to evaluate the risk of adverse outcomes and perinatal mortality for iSUA fetuses.
Results: During the study period, 233,123 deliveries occurred at "Soroka" University Medical Center, out of which 786 (0.3 %) were diagnosed with iSUA. Different pregnancy complications were more common with iSUA fetuses including: placental abruption (OR = 3.4), true knot of cord (OR = 3.5) and cord prolapse (OR = 2.8). Induction of labor and cesarean delivery were also more common in these pregnancies (OR = 1.5 and OR = 1.9, respectively). iSUA neonates had lower Apgar scores at 1 and 5 min (OR = 1.8, OR = 1.9, respectively) compared to the control group and perinatal mortality rates were higher both antenatally (IUFD, OR = 8.1) and postnatally (PPD, OR = 6.1).
Conclusion: iSUA appears to be an independent predictor of adverse perinatal outcomes in term neonates.
abstract_id: PUBMED:28975407
Isolated single umbilical artery poses neonates at increased risk of long-term respiratory morbidity. Purpose: To investigate whether children born with isolated single umbilical artery (iSUA) at term are at an increased risk for long-term pediatric hospitalizations due to respiratory morbidity.
Methods: Design: a population-based cohort study compared the incidence of long-term pediatric hospitalizations due to respiratory morbidity in children born with and without iSUA at term.
Setting: Soroka University Medical Center.
Participants: all singleton pregnancies of women who delivered between 1991 and 2013.
Main Outcome Measure(s): hospitalization due to respiratory morbidity.
Analyses: Kaplan-Meier survival curves were used to estimate cumulative incidence of respiratory morbidity. A Cox hazards model analysis was used to establish an independent association between iSUA and pediatric respiratory morbidity of the offspring while controlling for clinically relevant confounders.
Results: The study included 232,281 deliveries. 0.3% were of newborns with iSUA (n = 766). Newborns with iSUA had a significantly higher rate of long-term respiratory morbidity compared to newborns without iSUA (7.6 vs 5.5%, p = 0.01). Using a Kaplan-Meier survival curve, newborns with iSUA had a significantly higher cumulative incidence of respiratory hospitalizations (log rank = 0.006). In the Cox model, while controlling for the maternal age, gestational age, and birthweight, iSUA at term was found to be an independent risk factor for long-term respiratory morbidity (adjusted HR = 1.39, 95% CI 1.08-1.81; p = 0.012).
Conclusion: Newborns with iSUA are at an increased risk for long-term respiratory morbidity.
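Editorial note: the survival analysis described in the abstract above (Kaplan-Meier curves plus a Cox model adjusted for maternal age, gestational age, and birthweight) can be sketched with the lifelines package. The code below is a generic, hedged illustration on synthetic follow-up data, not a reproduction of the study.

```python
# Hedged sketch of the Kaplan-Meier / Cox workflow for time to first
# respiratory hospitalization. All data are synthetic.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "isua": rng.binomial(1, 0.05, n),
    "maternal_age": rng.normal(29, 5, n),
    "gestational_age": rng.normal(39, 1.2, n),
    "birthweight": rng.normal(3200, 450, n),
})
# Exponential event times with a higher hazard when iSUA is present (illustrative only).
hazard = 0.01 * np.exp(0.35 * df["isua"])
df["time"] = rng.exponential(1 / hazard)
df["event"] = (df["time"] < 18).astype(int)   # administrative censoring at 18 years
df["time"] = df["time"].clip(upper=18)

kmf = KaplanMeierFitter()
kmf.fit(df["time"], event_observed=df["event"], label="all newborns")
# kmf.plot_survival_function() would draw the cumulative-incidence-style curve.

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(cph.hazard_ratios_)   # adjusted HR for "isua", analogous to the reported 1.39
```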
abstract_id: PUBMED:26149153
Risk Factors and Perinatal Outcomes of Velamentous Umbilical Cord Insertion. Objective: To explore the risk factors of velamentous umbilical cord insertion (VCI) and the impact of VCI on perinatal outcomes.
Methods: The clinical data of 588 VCI patients who were treated in Beijing Gynecology and Obstetrics Hospital from January 2006 to January 2011 were retrospectively analyzed. In addition, 61,143 non-VCI subjects were enrolled as the control group. The possible risk factors of VCI and the impact of VCI on perinatal outcomes were analyzed. In addition, the causes of perinatal deaths were analyzed.
Results: Gemellary (twin) pregnancy, multiple pregnancy, in vitro fertilization (IVF), placenta praevia, and placenta succenturiata/placenta bipartite were found to be risk factors for VCI. The incidences of low birth weight, intrauterine growth restriction, asphyxia of newborns, deaths of fetuses or neonates, and single umbilical artery in the VCI group were significantly higher than those in the control group (all P<0.05). Among the 678 perinatal infants with VCI, 7 (1.0%) died; the causes of death included angiorrhexis of placenta praevia (n=1), preterm birth and low birth weight (n=3), torsion of cord (n=1), prolapse of cord (n=1), and placental abruption (n=1).
Conclusions: The risk factors of VCI should be carefully monitored. A diagnosis of VCI, if any, should be correctly made by using modern ultrasound techniques before delivery, so as to lower perinatal mortality.
abstract_id: PUBMED:31603530
Isolated single umbilical artery and the risk of adverse perinatal outcome and third stage of labor complications: A population-based study. Introduction: Isolated single umbilical artery (iSUA) refers to single umbilical artery cords with no other fetal malformations. The association of iSUA to adverse outcome of pregnancy has not been consistently reported, and whether iSUA carries increased risk of third stage of labor complications has not been studied. We aimed to investigate the risk of adverse perinatal outcome, third stage of labor complications, and associated placental and cord characteristics in pregnancies with iSUA. A further aim was to assess the risk of recurrence of iSUA and anomalous cord or placenta characteristics in Norway.
Material And Methods: This was a population-based study of all singleton pregnancies with gestational age >16 weeks at birth using data from the Medical Birth Registry of Norway from 1999 to 2014 (n = 918 933). Odds ratios (OR) with 95% confidence intervals were calculated for adverse perinatal outcome (preterm birth, perinatal and intrauterine death, low Apgar score, transferral to neonatal intensive care ward, placental and cord characteristics [placental weight, cord length and knots, anomalous cord insertion, placental abruption and previa]), and third stage of labor complications (postpartum hemorrhage and the need for manual placental removal or curettage) in pregnancies with iSUA, and recurrence of iSUA using generalized estimating equations and logistic regression.
Results: Pregnancies with iSUA carried increased risk of adverse perinatal outcome (OR 5.06, 95% confidence interval [CI] 4.26-6.02) and perinatal and intrauterine death (OR 5.62, 95% CI 4.69-6.73), and a 73% and 55% increased risk of preterm birth and small-for-gestational-age neonate, respectively. The presence of iSUA also carried increased risk of a small placenta, placenta previa and abruption, anomalous cord insertion, long cord, cord knot and third stage of labor complications. Women with iSUA, long cord or anomalous cord insertion in one pregnancy carried increased risk of iSUA in the subsequent pregnancy.
Conclusions: The presence of iSUA was associated with a more than five times increased risk of intrauterine and perinatal death and with placental and cord complications. The high associated risk of adverse outcome justifies follow-up with assessment of fetal wellbeing in the third trimester, intrapartum surveillance and preparedness for third stage of labor complications.
abstract_id: PUBMED:7871485
Acardia: predictive risk factors for the co-twin's survival. This study aimed to find factors in acardiac pregnancies that could be used to predict survival rates of the pump fetus. Five cases of acardia at Monash Medical Centre were found, and all case reports available in the literature from 1960 to 1991 (184 cases) were collected and analyzed. Acardia is more common in nulliparous women and in monoamniotic monochorionic pregnancy. The acardiac fetus usually has a two-vessel umbilical cord and is most likely to develop structures supplied by the lower aortic branches. Delivery is more often preterm (mean gestation = 31.1 weeks) than normal twins. The overall perinatal mortality for the pump fetus is 35% in twins and 45% in triplets. Factors associated with a statistically significant increase in perinatal mortality for the pump fetus include delivery before 32 weeks gestation, acardiacus anceps form of acardia, and the presence of arms, ears, larynx, trachea, pancreas, kidney, or small intestine in the acardiac fetus. Active intervention in these pregnancies is reasonable.
abstract_id: PUBMED:1475216
Infants with single umbilical artery studied in a national registry. 2: Survival and malformations in infants with single umbilical artery. Registry data on all infants born in Sweden between 1983 and 1986 are reviewed to describe perinatal mortality and malformation rate of infants with single umbilical artery (SUA). Since SUA is much more common in infants with chromosomal anomalies and in twins, this analysis is confined to the 1694 singletons who were not recorded as having chromosomal anomalies. There was a significantly increased risk of perinatal mortality among these infants, largely due to an increased frequency of congenital malformations, low birthweight and preterm delivery. Nevertheless, at least 36% of the perinatal mortality occurred in the absence of malformations or low birthweight. We found the overall risk of severe malformation in SUA infants to be increased 4.3 times. Some malformations were especially common, such as anorectal atresia and oesophageal atresia.
abstract_id: PUBMED:7997408
Infants with single umbilical artery studied in a national registry. 3: A case control study of risk factors. This case control study reports associations between single umbilical artery (SUA) in newborns and some maternal biological characteristics. The study is based on chromosomally normal singleton infants born in Sweden between 1983 and 1990. Information on the maternal characteristics studied was obtained prospectively. There were 2920 cases identified and 5840 controls were selected. An association was found with: previous perinatal death, retained placenta, placenta praevia, maternal diabetes, epilepsy and hydramnios. Increased odds ratios were seen also for spontaneous abortion and abruptio placentae but did not reach statistical significance. No association was found with previous induced abortion, involuntary childlessness, or the use of contraceptives after the last menstrual period.
abstract_id: PUBMED:23775879
Relationship of isolated single umbilical artery to fetal growth, aneuploidy and perinatal mortality: systematic review and meta-analysis. Objective: To review the available literature on outcome of pregnancy when an isolated single umbilical artery (iSUA) is diagnosed at the time of the mid-trimester anomaly scan.
Methods: We searched MEDLINE (1948-2012), EMBASE (1980-2012) and the Cochrane Library (until 2012) for relevant citations reporting on outcome of pregnancy with iSUA seen on ultrasound. Data were extracted by two reviewers. Where appropriate, we pooled odds ratios (ORs) for the dichotomous outcome measures: small for gestational age (SGA), perinatal mortality and aneuploidy. For birth weight we determined the mean difference with 95% CI.
Results: We identified three cohort studies and four case-control studies reporting on 928 pregnancies with iSUA. There was significant heterogeneity between cohort and case-control studies. Compared to fetuses with a three-vessel cord, fetuses with an iSUA were more likely to be SGA (OR 1.6 (95% CI, 0.97-2.6); n = 489) or suffer perinatal mortality (OR 2.0 (95% CI, 0.9-4.2); n = 686), although for neither of the outcomes was statistical significance reached. The difference in mean birth weight was 51 g (95% CI, -154.7 to 52.6 g; n = 407), but again this difference was not statistically significant. We found no evidence that fetuses with iSUA have an increased risk for aneuploidy.
Conclusion: In view of the non-significant association between iSUA and fetal growth and perinatal mortality, and in view of the heterogeneity in studies on aneuploidy, we feel that large-scale, prospective cohort studies are needed to reach definitive conclusions on the appropriate work-up in iSUA pregnancies. At present, targeted growth assessment after diagnosis of iSUA should not be routine practice.
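Editorial note: the pooled odds ratios quoted in the meta-analysis above come from standard inverse-variance pooling on the log-OR scale. As a rough illustration of the mechanics only (not a re-analysis of the included studies), the sketch below pools a few hypothetical study-level ORs with a fixed-effect model.

```python
# Hedged sketch of fixed-effect (inverse-variance) pooling of odds ratios.
# The study-level ORs and confidence intervals below are hypothetical.
import numpy as np

# (OR, lower 95% CI, upper 95% CI) for three made-up studies
studies = [(1.4, 0.8, 2.5), (2.2, 1.0, 4.8), (1.7, 0.7, 4.1)]

log_or = np.array([np.log(or_) for or_, _, _ in studies])
# Standard error recovered from the CI width on the log scale.
se = np.array([(np.log(hi) - np.log(lo)) / (2 * 1.96) for _, lo, hi in studies])

weights = 1 / se**2
pooled_log_or = np.sum(weights * log_or) / np.sum(weights)
pooled_se = np.sqrt(1 / np.sum(weights))

pooled_or = np.exp(pooled_log_or)
ci_low = np.exp(pooled_log_or - 1.96 * pooled_se)
ci_high = np.exp(pooled_log_or + 1.96 * pooled_se)
print(f"pooled OR = {pooled_or:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```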
abstract_id: PUBMED:30941337
Perinatal Outcomes of Small for Gestational Age Neonates Born With an Isolated Single Umbilical Artery. Objective: To investigate pregnancy outcomes of small for gestational age (SGA) neonates born with isolated single umbilical artery (iSUA) compared to SGA neonates without iSUA. Study Design: This was a population-based retrospective cohort analysis. The study group was defined as a singleton SGA neonate born with iSUA, while an SGA neonate without iSUA comprised the comparison group. We evaluated adverse perinatal outcomes in all SGA neonates born at the Soroka University Medical Center between the years 1998-2013. Multiple gestations, fetuses with known congenital malformations or chromosomal abnormalities and patients with lack of prenatal care were excluded from the study. Multivariate logistic regression models were constructed to identify independent factors associated with adverse perinatal outcomes. Results: Of 12,915 SGA deliveries, 1.2% (162) were complicated with iSUA. Women in the study group were older, with a significantly lower gestational age at delivery compared with the comparison group. Rates of women who conceived after infertility treatments were higher in the study group. Additionally, significantly higher rates of preterm deliveries, placental abruption, cord prolapse, non-reassuring fetal heart rate and cesarean delivery were noted in the study group. These neonates had a significantly lower birth weight (1988.0 ± 697 vs. 2388.3 ± 481 g, p < 0.001) and higher rates of low APGAR scores at the first and fifth minutes after birth compared with controls. Perinatal mortality was also found to be significantly higher among SGA neonates complicated with iSUA. Preterm delivery as well as perinatal mortality were found independently associated with iSUA among SGA neonates (aOR 4.01, 95% CI 2.88-5.59, and aOR 2.24, 95% CI 1.25-4.01, respectively). Conclusion: SGA pregnancies complicated with iSUA are at higher risk for adverse pregnancy and perinatal outcomes as compared to SGA pregnancies without iSUA.
Answer: Yes, single umbilical artery (SUA) has been identified as an independent risk factor for perinatal mortality. Studies have shown that fetuses with isolated SUA are at a higher risk for perinatal mortality compared to fetuses with normal umbilical vessels. Even after controlling for possible confounders such as fetal growth restriction, oligohydramnios, polyhydramnios, prolapse of cord, maternal hypertension, and diabetes mellitus, isolated SUA remained an independent risk factor for perinatal mortality (PUBMED:20024571). Another study confirmed that isolated single umbilical artery (iSUA) is an independent predictor of adverse perinatal outcomes in term neonates, including higher rates of perinatal mortality both antenatally and postnatally (PUBMED:27048509). Furthermore, a population-based study found that pregnancies with iSUA carried an increased risk of adverse perinatal outcomes, including perinatal and intrauterine death (PUBMED:31603530). These findings suggest that the presence of SUA should be considered a significant risk factor for perinatal mortality, and pregnancies with SUA may require closer monitoring and management to mitigate associated risks.
Instruction: Vertical reduction mammaplasty utilizing the superomedial pedicle: is it really for everyone?
Abstracts:
abstract_id: PUBMED:22859543
Vertical reduction mammaplasty utilizing the superomedial pedicle: is it really for everyone? Background: Classically, the vertical-style reduction mammaplasty utilizing a superomedial pedicle has been limited to smaller reductions secondary to concerns for poor wound healing and nipple necrosis.
Objectives: The authors reviewed a large cohort of patients who underwent a vertical-style superomedial pedicle reduction mammaplasty in an attempt to demonstrate its safety and efficacy in treating symptomatic macromastia.
Methods: A retrospective review was performed of 290 patients (558 breasts) who underwent a vertical-style superomedial pedicle reduction mammaplasty. All procedures were conducted by one of 4 plastic surgeons over 6 years (JDR, MAA, DLV, DRA).
Results: The average resection weight was 551.7 g (range, 176-1827 g), with 4.6% of resections greater than 1000 g. A majority of patients (55.2%) concomitantly underwent liposuction of the breast. The total complication rate was 22.7%, with superficial dehiscence (8.8%) and hypertrophic scarring (8.8%) comprising the majority. Nipple sensory changes occurred in 1.6% of breasts, with no episodes of nipple necrosis. The revision rate was 2.2%. Patients with complications had significantly higher resection volumes and nipple-to-fold distances (P = .014 and .010, respectively).
Conclusions: The vertical-style superomedial pedicle reduction mammaplasty is safe and effective for a wide range of symptomatic macromastia. The nipple-areola complex can be safely transposed, even in patients with larger degrees of macromastia, with no episodes of nipple necrosis. The adjunctive use of liposuction should be considered safe. Last, revision rates were low, correlating with a high level of patient satisfaction.
abstract_id: PUBMED:20574476
Superomedial pedicle reduction with short scar. Reduction mammaplasty combining a superomedial pedicle with a circumareolar/vertical pattern skin excision avoids an inferior pedicle that can interfere with vertical scar technique, yet it is flexible enough to allow for a short transverse skin excision. This technique is suitable for small to moderate-size reductions.
abstract_id: PUBMED:32538571
Application of a liposuction-assisted superomedial pedicle with a vertical incision in reduction mammaplasty. Objective: To explore the effectiveness of a liposuction-assisted superomedial pedicle with a vertical incision in reduction mammaplasty.
Methods: Between March 2014 and March 2019, 65 patients (127 sides) with breast hypertrophy underwent breast reduction using a liposuction-assisted superomedial pedicle with a vertical incision. The patients were 21 to 58 years old, with an average of 42.2 years. Body mass index ranged from 18.8 to 26.5 kg/m2, with an average of 21.3 kg/m2. Among them, 62 cases were bilateral operations and 3 cases were unilateral operations. The degree of mastoptosis was rated as degree Ⅱ in 73 sides and degree Ⅲ in 54 sides according to the Regnault criteria.
Results: The mean resection weight per breast was 432 g (range, 228-932 g); the nipple was transposed upward by 4.5-9.5 cm (mean, 6.5 cm); the unilateral liposuction volume was 50-380 mL (mean, 148 mL). Postoperatively, a unilateral intramammary hematoma occurred in 2 sides (1.58%), slight dehiscence of the vertical incision in 4 sides (3.15%), and nipple-areola epidermal necrosis in 1 side (0.79%). All patients were followed up for 6 months to 5 years (mean, 18 months). During follow-up, there was no evident recurrent ptosis of the breast and no enlargement of the areola, and no patient underwent scar excision. At last follow-up, the surgeons rated breast shape and symmetry as very satisfactory in 52 cases, satisfactory in 10, and unsatisfactory in 3, and nipple position and areola diameter as very satisfactory in 51 cases, satisfactory in 11, and unsatisfactory in 3. The incision scar was conspicuous in 25 cases and inconspicuous in 40 cases. On patient self-assessment, breast shape was very satisfactory in 48 cases, satisfactory in 12, and unsatisfactory in 5; the incision scar was very satisfactory in 40 cases, satisfactory in 17, and unsatisfactory in 8. Overall patient evaluation was very satisfactory in 52 cases, satisfactory in 7, and unsatisfactory in 6.
Conclusion: Liposuction-assisted superomedial pedicle reduction mammaplasty with a vertical incision is a safe and reliable surgical method with satisfactory results.
abstract_id: PUBMED:30579911
"Reduction mammaplasty with superomedial pedicle technique: A literature review and retrospective analysis of 938 consecutive breast reductions". Background: The superomedial pedicle reduction mammaplasty has been noted in the literature to provide superior aesthetic results and longevity as well as shorter operative times. However, the inferior pedicle continues to be the most commonly utilized technique in the United States. There is a lack of large-volume outcome studies examining how the superomedial pedicle technique compares against more established reduction methods.
Methods: A retrospective review of 938 reduction mammaplasties was performed at a single institution over a 10-year period. A literature review of superomedial and inferior pedicle complication rates was performed. Study variables were compared against overall mean complication rates for the two techniques. Logistic regression, paired Student t-tests, and chi-square analyses were used to calculate adjusted odds ratios and to compare continuous and categorical variables.
Results: Mean reduction weight was 730 g per breast, ranging from 100 to 4700 g. Overall complication rate was 16%, of which 10% were minor complications related to delayed wound healing. No cases of skin flap necrosis occurred. Increased complications were highly correlated with a BMI > 30, breast reduction weights > 831 g, and sternal notch to nipple distances > 35.5 cm.
Conclusions: The superomedial pedicle reduction mammaplasty technique is safe and reliable with a complication rate lower than the inferior pedicle technique. Based on our findings we propose that residents should be exposed to this method of reduction mammaplasty as part of a compilation of techniques learned in residency and that practicing surgeons would benefit from becoming familiar with its applications.
abstract_id: PUBMED:20179473
Vertical reduction mammaplasty combined with a superomedial pedicle in gigantomastia. Vertical reduction mammaplasty using a superomedial pedicle is a well-accepted technique giving good results in mild to moderate breast hypertrophy. We describe modifications of the vertical reduction technique to achieve safe reductions even for very large breasts and minimize unsightly scarring, skin necrosis and poor shape. Over the past 4 years, 162 patients have undergone bilateral breast reduction using the vertical mammaplasty technique with a superomedial dermoglandular pedicle. We present a retrospective study of 23 cases of gigantomastia (reductions over 1100 g) who underwent bilateral reduction mammaplasty, using our technique. The mean age was 49 years, BMIs ranged from 28 to 52 kg/m2. The mean suprasternal notch-to-nipple distance was 40.5 cm on the right and 41.4 cm on the left. The average resection weight per breast was 1303 g on the right, and 1245 g on the left side. The suprasternal notch-to-nipple distance was reduced by between 13.2 and 36.0 cm (mean, 16.1 cm). Mean follow-up was 14 months. We observed a superficial infection in 2 patients, a deep hematoma in one patient, partial necrosis of the nipple-areola complex in 1, and 2 patients needed correction surgery due to dog-ear formation. By using the described modifications, the nipple and areola were safely transposed on a superomedial dermoglandular pedicle producing good breast shapes, while scarring and complications in vertical reduction mammaplasty for oversized breasts were effectively minimized.
abstract_id: PUBMED:29218474
Superomedial Pedicle Vertical Scar Breast Reduction: Objective and Subjective Assessment of Breast Symmetry and Aesthetics. Background: The superomedial vertical scar breast reduction (SVBR) described by Hall-Findlay is gaining popularity among surgeons worldwide. The aim of this study was to evaluate its long-term aesthetic outcome, the extent of quality of life improvement and the factors that influence patient satisfaction and reviewers' evaluation of aesthetic/surgical outcome.
Methods: In this historical prospective study, we included women who underwent SVBR at least one year prior to enrollment and responded to a quality of life questionnaire. Their breasts were photographed, measured and evaluated by the plastic surgery staff.
Results: A total of 40 patients responded to the questionnaire, and the breasts of 31 of them were measured and photographed. All 31 patients had good breast symmetry according to objective breast measurements. There was a clear correlation between the patients' and the reviewers' scores of breast symmetry, scar appearance and breast shape (r = 0.4-0.65, r = 0.432-0.495 and r = 0.335-0.403, respectively). The factor that most influenced reviewers' and patients' satisfaction with the overall aesthetic outcome was the breast-to-body proportion.
Conclusions: The proportions between the breast size and the patient's body habitus are pivotal to patient satisfaction and should be taken into consideration when planning a reduction mammaplasty. The SVBR technique for breast reduction provided good cosmetic outcome and symmetry over a long-term follow-up.
abstract_id: PUBMED:33304571
A successful breastfeeding after vertical scar reduction mammaplasty with superior pedicle: A case report. Introduction: Most of patients undergo reduction mammaplasty for aesthetic or therapeutic reasons without consider the effect on breastfeeding function. Vertical scar mammaplasty with superior pedicle is expected to be a breast reduction procedure that can keep maintain the function of breastfeeding. This is the first recorded report of breastfeeding after vertical scar reduction mammaplasty with superior pedicle in Indonesia.
Presentation Of Case: A 23-year-old woman presented to the outpatient clinic with enlargement of both breasts for 3 years. Physical examination showed bilateral breast enlargement. No tenderness, nodules, or axillary lymph node enlargement was found. The patient was managed with vertical scar mammaplasty with a superior pedicle. On follow-up, the patient developed skin excess and scarring on the bilateral submammary folds as complications. We performed excision and resection procedures to eliminate the skin excess and scars without further complications. The patient was married and gave birth to her first and second children two and five years after mammaplasty, respectively. The patient was able to provide exclusive breastfeeding for both of her children.
Discussion: Vertical scar mammaplasty with a superior pedicle is a surgical technique that combines a superior pedicle for the areola with a central-inferior quadrant resection for breast reduction. It removes only the tissue and glands located in the lower quadrant while preserving the surrounding tissue and glands. The technique also maintains the integrity of the nipple-areola complex (NAC), which is also important in the lactation process.
Conclusion: Vertical scar mammaplasty with a superior pedicle can be one of the preferable breast reduction techniques because it can maintain breastfeeding function, thereby increasing patient satisfaction.
abstract_id: PUBMED:36862951
The Underused Superomedial Pedicle Reduction Mammaplasty: Safe and Effective Outcomes. Background: The superomedial pedicle for reduction mammaplasty remains less commonly performed than the inferior pedicle. This study seeks to delineate the complication profiles and outcomes for reduction mammaplasty using a superomedial pedicle technique in a large series.
Methods: A retrospective review was conducted of all consecutively performed reduction mammaplasty cases at a single institution by two plastic surgeons over a 2-year period. All consecutive superomedial pedicle reduction mammaplasty cases for benign symptomatic macromastia were included.
Results: A total of 462 breasts were analyzed. Mean age was 38.3 ± 13.38 years, mean body mass index was 28.5 ± 4.95, and mean reduction weight was 644.4 ± 299.16 g. Regarding surgical technique, a superomedial pedicle was used in all cases; Wise-pattern incision was used in 81.4%, and short-scar incision was used in 18.6%. The mean sternal notch-to-nipple measurement was 31.2 ± 4.54 cm. There was a 19.7% rate of any complication, the majority of which were minor in nature, including any wound healing complications treated with local wound care (7.5%) and scarring with intervention in the office (8.6%). There was no statistically significant difference in breast reduction complications and outcomes using the superomedial pedicle, regardless of sternal notch-to-nipple distance. Body mass index (P = 0.029) and breast reduction specimen operative weight (P = 0.004) were the only significant risk factors for a surgical complication, and with each additional gram of reduction weight, the odds of a surgical complication increased by a factor of 1.001. Mean follow-up time was 40.5 ± 7.1 months.
Conclusion: The superomedial pedicle is an excellent option for reduction mammaplasty, offering a favorable complication profile and good long-term outcomes.
Clinical Question/level Of Evidence: Therapeutic, IV.
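Editorial note: the per-gram odds ratio of 1.001 reported in the abstract above is easiest to read on a clinically meaningful scale: the odds multiply by 1.001 for each extra gram, so over a difference of several hundred grams the factor compounds. The short calculation below illustrates this; it assumes the reported coefficient applies linearly on the log-odds scale, which is the usual logistic-regression interpretation.

```python
# Compounding a per-gram odds ratio over larger weight differences.
per_gram_or = 1.001

for extra_grams in (100, 300, 500):
    print(f"+{extra_grams} g -> odds multiplied by {per_gram_or ** extra_grams:.2f}")
# For example, +300 g corresponds to roughly a 1.35-fold increase in the odds
# of a surgical complication under this model.
```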
abstract_id: PUBMED:38509317
Superomedial Pedicle Technique and Management of Circulation Problems in Gigantomastia: Treatment of Gigantomastia. Breast reduction surgeries encompass a wide range of methods that are continuously evolving to discover more reliable and satisfactory techniques. This presentation aims to address the research gap by sharing outcomes and experiences using the superomedial pedicle in gigantomastia, as well as the implemented protocol for managing nipple-areola complex (NAC) ischemia. The Wise pattern and superomedial pedicle reduction mammaplasty method were utilized in treating 19 patients (38 breasts). The average age of the patients was 41.47 years, with a body mass index (BMI) of 33.27 kg/m2. The mean sternal notch to nipple (SN-N) length for the entire population was found to be 40.97 cm. On both sides, this length was statistically similar, at 41.11 cm on the right side and 40.84 cm on the left side. The average weight of resected tissue from all patients was calculated to be 1793.42 g, with slightly higher weight on the right side at 1800 g compared to the left side's weight of 1786.84 g. Postoperative NAC ischemia occurred in three patients: one bilateral case and two unilateral cases. In the groups with and without NAC ischemia, the mean values were, respectively: age, 45.33 vs 40.75 years; BMI, 35.01 vs 32.95 kg/m2; SN-N distance, 40 vs 41.09 cm; and excision specimen weight, 1650 vs 1810.29 g. The p-values for these comparisons were all greater than 0.05. These results indicate that age, BMI, SN-N distance, and excision material weight did not have an impact on NAC vascularity issues. All NACs were successfully saved through a protocol involving hyperbaric oxygen therapy (HOT) and vacuum-assisted therapy (VAT). The study suggests that utilizing a superomedial flap is a viable option for treating gigantomastia and highlights the effectiveness of the outlined protocol in managing postoperative complications. While acknowledging the need for comparative studies, the study proposes incorporating HOT and VAT into protocols aimed at saving NACs.
abstract_id: PUBMED:26528085
The Superomedial Pedicle Wise-Pattern Breast Reduction: Reproducible, Reliable, and Resilient. The combination of the superomedial pedicle with the traditional Wise-pattern skin resection has gained increasing popularity for its versatility and ability to achieve significant reduction of breast parenchyma and skin envelope with improved contour and lasting results. Here the authors review the pertinent anatomy and discuss their surgical technique and its benefits. In addition, a retrospective chart review of 80 patients (151 breasts) who underwent superomedial pedicle Wise-pattern breast reduction between 2010 and 2014 was performed. Mean specimen weights and complication rates, including seroma, delayed wound healing, nipple loss, infection, and reoperation were calculated for the cohort.
Answer: Vertical reduction mammaplasty utilizing the superomedial pedicle is a technique that has been traditionally limited to smaller reductions due to concerns about poor wound healing and nipple necrosis. However, several studies have demonstrated its safety and efficacy for a wide range of symptomatic macromastia, suggesting that it can be suitable for many patients, though not necessarily everyone.
A retrospective review of 290 patients who underwent this procedure showed that it is safe and effective even for larger reductions, with no episodes of nipple necrosis and a low revision rate, indicating high patient satisfaction (PUBMED:22859543). Another study with 65 patients also reported satisfactory results with the use of liposuction-assisted superomedial pedicle with a vertical incision, further supporting its reliability (PUBMED:32538571).
Moreover, a literature review and retrospective analysis of 938 consecutive breast reductions using the superomedial pedicle technique found it to be safe and reliable with a lower complication rate than the inferior pedicle technique, suggesting that it should be a part of the techniques learned in residency (PUBMED:30579911). Even in cases of gigantomastia, modifications of the vertical reduction technique with a superomedial pedicle have been successfully applied, minimizing scarring and complications (PUBMED:20179473).
However, certain factors such as higher body mass index (BMI), greater breast reduction weights, and longer sternal notch-to-nipple distances have been associated with increased complications (PUBMED:30579911). Additionally, while the superomedial pedicle technique has been shown to provide good cosmetic outcomes and symmetry over a long-term follow-up, the proportions between the breast size and the patient's body habitus are crucial for patient satisfaction (PUBMED:29218474).
In summary, while the superomedial pedicle technique for vertical reduction mammaplasty has been shown to be safe and effective for a wide range of patients, individual factors such as BMI, reduction weight, and breast-to-body proportions must be considered. It may not be the best choice for every single patient, but it is a viable option for many who suffer from symptomatic macromastia.
Instruction: Does assessment of pretest probability of disease improve the utility of echocardiography in suspected endocarditis in children?
Abstracts:
abstract_id: PUBMED:12640373
Does assessment of pretest probability of disease improve the utility of echocardiography in suspected endocarditis in children? Objective: To compare the yield rate (YR) of echocardiography when evaluating children with suspected infectious endocarditis (IE) in both the actual clinical setting and in the hypothetical setting where strict clinical criteria are applied. Study design: Medical records of 101 children undergoing echocardiography for suspected IE were reviewed. Echocardiograms with positive findings were identified and the actual diagnostic YR was calculated. With the use of clinical criteria proposed by von Reyn (VR), the probability of IE was retrospectively classified as (1) rejected, (2) possible, or (3) probable. Theoretic YR of echocardiography was calculated for each classification.
Results: The actual YR of echocardiography was 12% (12/101). The YR of echocardiography by VR class was 0% in rejected, 20% in possible, and 80% in probable cases (chi(2) = 55.1, P <.0001). Echocardiography did not change the probability of IE in any patient classified as rejected, but allowed reassignment of disease probability in a significant proportion of patients with possible or probable IE.
Conclusions: The YR of echocardiography was significant when the clinical probability of IE was intermediate to high, but low, with marginal clinical utility, when the clinical probability was low. Strict pretest assessment of disease probability may lead to more effective utilization of echocardiography in this population.
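A note on the arithmetic behind this abstract (not part of the original study): the yield rate is simply the proportion of positive echocardiograms within each von Reyn class, and the reported chi-square statistic tests whether that proportion differs across classes. The Python sketch below illustrates the calculation with hypothetical class sizes chosen only so the per-class yield rates match the reported 0%, 20%, and 80%; the counts are not the study's data.

from scipy.stats import chi2_contingency

# Hypothetical counts per von Reyn class: (positive echo, negative echo).
# Placeholders for illustration only, NOT the study's data.
counts = {
    "rejected": (0, 40),
    "possible": (8, 32),
    "probable": (8, 2),
}

for vr_class, (pos, neg) in counts.items():
    print(f"{vr_class:9s} yield rate = {pos / (pos + neg):.0%}")

# Chi-square test of independence across the three classes
table = [list(v) for v in counts.values()]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.4g}")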
abstract_id: PUBMED:8641002
Diagnostic value of echocardiography in suspected endocarditis. An evaluation based on the pretest probability of disease. Background: We hypothesized that for the diagnosis of endocarditis, (1) transthoracic echocardiography (TTE) would be most valuable in patients with an intermediate clinical probability of the disease and (2) transesophageal echocardiography (TEE) would be most useful in patients with an intermediate probability when TTE either does not yield an adequate study or indicates an intermediate probability of endocarditis. We also sought to investigate the influence of echocardiographic results on antibiotic usage and its duration.
Methods And Results: TTE and TEE were performed in 105 consecutive patients with suspected endocarditis. Patients were classified as having either low, intermediate, or high probability of endocarditis on the basis of clinical criteria and separately on the basis of both TTE and TEE findings. TTE and TEE classified the majority (82% and 85%, respectively) of the 67 patients with a low clinical probability of endocarditis as having a low likelihood of the disease. Of the 14 patients with intermediate clinical probability, 12 had technically adequate TTE studies; 10 of these (83%) were classified as either high or low probability. All patients with intermediate clinical probability were classified as high or low probability by TEE. The majority of the 24 patients with high clinical probability were placed in the low-likelihood category by echocardiography (15 by TTE and 12 by TEE). There was concordance between TTE and TEE in 83% of all cases. TEE was useful for the diagnosis of endocarditis in patients with prosthetic valves and in those in whom TTE indicated an intermediate probability; these constituted < 20% of patients in our study. The course of antibiotic therapy was influenced only by the clinical profile and not by the echocardiographic results.
Conclusions: Echocardiography should not be used to make a diagnosis of endocarditis in those with a low clinical probability of the disease. In those with an intermediate or high clinical probability, TTE should be the diagnostic procedure of choice. TEE for the diagnosis of endocarditis should be reserved only for patients who have prosthetic valves and in whom TTE is either technically inadequate or indicates an intermediate probability of endocarditis.
abstract_id: PUBMED:12915922
Can structured clinical assessment using modified Duke's criteria improve appropriate use of echocardiography in patients with suspected infective endocarditis? Background: Although echocardiography has been incorporated into the diagnostic algorithm of patients with suspected infective endocarditis, systematic usage in clinical practice remains ill defined.
Objective: To test whether the rigid application of a predefined standardized clinical assessment using the Duke criteria by the research team would provide improved diagnostic accuracy of endocarditis when compared with usual clinical care provided by the attending team.
Methods: Between April 1, 2000 and March 31, 2001, 101 consecutive inpatients with suspected endocarditis were examined prospectively and independently by both teams. The clinical likelihood of endocarditis was graded as low, moderate or high. All patients underwent transthoracic echocardiography and appropriate transesophageal echocardiography if deemed necessary. All diagnostic and therapeutic outcomes were evaluated prospectively.
Results: Of 101 consecutive inpatients (age 50+/-16 years; 62 males) enrolled, 22% subsequently were found to have endocarditis. The pre-echocardiographic likelihood categories as graded by the clinical and research teams were low in nine and 37 patients, respectively, moderate in 83 and 40 patients, respectively, and high in nine and 24 patients, respectively, with only a marginal agreement in classification (kappa=0.33). Of the 37 patients in the low likelihood group and 40 in the intermediate group, no endocarditis was diagnosed. In 22 of 24 patients classified in the high likelihood group, there was echocardiographic evidence of vegetations suggestive of endocarditis. Discriminating factors that increased the likelihood of endocarditis were a prior history of valvular disease, the presence of an indwelling catheter, positive blood cultures, and the presence of a new murmur and a vascular event. General internists, rheumatologists and intensive care physicians were more likely to order echocardiography in patients with low clinical probability of endocarditis, of which pneumonia was the most common alternative diagnosis.
Conclusion: Although prediction of clinical likelihood varies between observers, endocarditis is generally found only in those individuals with a moderate to high pre-echocardiographic clinical likelihood. Strict adherence to indications for transthoracic echocardiography and transesophageal echocardiography may help to facilitate more accurate diagnosis within the moderate likelihood category. Patients with low likelihood do not derive additional diagnostic benefit with echocardiography although other factors such as physician reassurance may continue to drive diagnostic demand.
abstract_id: PUBMED:31673735
The Utility of Echocardiography in Pediatric Patients with Structurally Normal Hearts and Suspected Endocarditis. The objective of this study was to evaluate the utility of transthoracic echocardiography (TTE) in children with structurally normal hearts suspected of having infective endocarditis (IE). We hypothesized that the diagnostic yield of TTE is minimal in low-risk patients with normal hearts. We performed a retrospective chart review of TTEs performed for concern for endocarditis at a pediatric tertiary care referral center in Portland, Oregon. Three hundred patients met inclusion criteria (< 21 years old, completed TTE for IE from 2005 to 2015, no history of congenital heart disease or endocarditis). We recorded findings that met the modified Duke criteria (MDC) including fever, positive blood culture, and vascular/immunologic findings; presence of a central line; whether or not patients were diagnosed with IE clinically; and if any changes to antibiotic regimens were made based on TTE. Ten patients (3%) had echocardiograms consistent with IE. When compared to the clinical diagnosis of IE, the positive predictive value (PPV) of one positive blood culture without other major/minor MDC was 0. Similarly, the PPV of two positive blood cultures without other major/minor criteria was 0.071. Patients should be evaluated using the MDC to assess the clinical probability of IE prior to performing a TTE. Patients with a low probability for IE should not undergo TTE as it has a low diagnostic yield and patients are unlikely to be diagnosed with disease.
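For readers unfamiliar with the metric, the positive predictive value reported above is simply TP / (TP + FP), using the clinical diagnosis of IE as the reference standard. A minimal sketch follows, with made-up counts chosen only to land near the reported 0.071; the numbers are not taken from the study.

def ppv(true_positives: int, false_positives: int) -> float:
    """Positive predictive value: probability of disease given a positive test."""
    return true_positives / (true_positives + false_positives)

# Hypothetical example: 1 of 14 patients with two positive blood cultures and no
# other major/minor Duke criteria was clinically diagnosed with IE.
print(round(ppv(true_positives=1, false_positives=13), 3))   # 0.071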
abstract_id: PUBMED:10492311
Echocardiography in patients with suspected endocarditis: a cost-effectiveness analysis. Purpose: We sought to determine the appropriate use of echocardiography for patients with suspected endocarditis.
Patients And Methods: We constructed a decision tree and Markov model using published data to simulate the outcomes and costs of care for patients with suspected endocarditis.
Results: Transesophageal imaging was optimal for patients who had a prior probability of endocarditis that is observed commonly in clinical practice (4% to 60%). In our base-case analysis (a 45-year-old man with a prior probability of endocarditis of 20%), use of transesophageal imaging improved quality-adjusted life expectancy (QALYs) by 9 days and reduced costs by $18 per person compared with the use of transthoracic echocardiography. Sequential test strategies that reserved the use of transesophageal echocardiography for patients who had an inadequate transthoracic study provided similar QALYs compared with the use of transesophageal echocardiography alone, but cost $230 to $250 more. For patients with prior probabilities of endocarditis greater than 60%, the optimal strategy is to treat for endocarditis without reliance on echocardiography for diagnosis. Patients with a prior probability of less than 2% should receive treatment for bacteremia without imaging. Transthoracic imaging was optimal for only a narrow range of prior probabilities (2% or 3%) of endocarditis.
Conclusion: The appropriate use of echocardiography depends on the prior probability of endocarditis. For patients whose prior probability of endocarditis is 4% to 60%, initial use of transesophageal echocardiography provides the greatest quality-adjusted survival at a cost that is within the range for commonly accepted health interventions.
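A brief aside on how such decision analyses are usually summarized (this is an illustration, not the paper's model): each strategy yields an expected cost and expected QALYs, and strategies can be ranked by net monetary benefit at a chosen willingness-to-pay threshold. The sketch below uses invented cost/QALY pairs purely to show the mechanics.

# Invented (cost, QALY) pairs for three diagnostic strategies; NOT model outputs
# from the cited paper.
strategies = {
    "TEE first": (4800.0, 14.520),
    "TTE first, TEE if inadequate": (5040.0, 14.519),
    "treat without imaging": (5600.0, 14.500),
}

WTP = 50_000  # willingness to pay per QALY, a commonly used benchmark

def net_monetary_benefit(cost: float, qalys: float, wtp: float = WTP) -> float:
    """NMB = QALYs * WTP - cost; higher is better."""
    return qalys * wtp - cost

for name, (cost, qalys) in strategies.items():
    print(f"{name:30s} NMB = {net_monetary_benefit(cost, qalys):,.0f}")

best = max(strategies, key=lambda s: net_monetary_benefit(*strategies[s]))
print("preferred strategy:", best)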
abstract_id: PUBMED:15165946
Assessment of atrial pathologies in children using transesophageal echocardiography Objective: Transesophageal echocardiography (TEE) is indicated for suspected atrial septal pathology and for monitoring of interventional procedures such as an atrial septal defect (ASD) closure during cardiac catheterization. Transesophageal echocardiography also helps to demonstrate postoperative complications and residual defects of complex congenital cardiac anomalies.
Methods: Transesophageal echocardiography was performed in 112 pediatric patients with known or suspected atrial pathology at our institution between 1999 and 2002, using standard techniques. The mean age was 8.7+/-4.2 years.
Results: In 45 of 112 children the suspected atrial defects were confirmed with the TEE. Patent foramen ovale was correctly predicted in 13.4% of patients by TEE, but only in 8.7% of patients by echocardiography. Multiple ASD's were correctly defined in 4.1%, and high venosus defects were documented in 6.1% of children by the TEE. We used TEE in 13% of patients for detecting atrial vegetations in patients with possible endocarditis, and evaluation of the postoperative care of atrial surgery such as Fontan or Senning operations and total correction of abnormal pulmonary venous return. Successful transcatheter closure of 7 ASD's was accomplished under TEE guidance.
Conclusion: Transesophageal echocardiography allows a much more detailed evaluation of atrial morphology than transthoracic echocardiography even in infants. Transesophageal echocardiography is also indicated during interventional procedures and postoperative evaluation of the atrial pathology.
abstract_id: PUBMED:26850679
An Approach to Improve the Negative Predictive Value and Clinical Utility of Transthoracic Echocardiography in Suspected Native Valve Infective Endocarditis. Background: In patients with suspected native valve infective endocarditis, current guidelines recommend initial transthoracic echocardiography (TTE) followed by transesophageal echocardiography (TEE) if clinical suspicion remains. The guidelines do not account for the quality of initial TTE or other findings that may alter the study's diagnostic characteristics. This may lead to unnecessary TEE when initial TTE was sufficient to rule out vegetation.
Methods: The objective of this study was to determine if the use of a strict definition of negative results on TTE would improve the performance characteristics of TTE sufficiently to exclude vegetation. A retrospective analysis of patients at a single institution with suspected native valve endocarditis who underwent TTE followed by TEE within 7 days between January 1, 2007, and February 28, 2014, was performed. Negative results on TTE for vegetation were defined by either the standard approach (no evidence of vegetation seen on TTE) or by applying a set of strict negative criteria incorporating other findings on TTE. Using TEE as the gold standard for the presence of vegetation, the diagnostic performance of the two transthoracic approaches was compared.
Results: In total, 790 pairs of TTE and TEE were identified. With the standard approach, 661 of the transthoracic studies had negative findings (no vegetation seen), compared with 104 studies with negative findings using the strict negative approach (meeting all strict negative criteria). The sensitivity and negative predictive value of TTE for detecting vegetation were substantially improved using the strict negative approach (sensitivity, 98% [95% CI, 95%-99%] vs 43% [95% CI, 36%-51%]; negative predictive value, 97% [95% CI, 92%-99%] vs 87% [95% CI, 84%-89%]).
Conclusions: The ability of TTE to exclude vegetation in patients is excellent when strict criteria for negative results are applied. In patients at low to intermediate risk with strict negative results on TTE, follow-up TEE may be unnecessary.
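As with the PPV example above, the sensitivity and negative predictive value reported here follow directly from a 2x2 table with TEE as the reference standard. The sketch below uses placeholder counts (not the study's 790 pairs) simply to show how the two quantities are computed.

def sensitivity(tp: int, fn: int) -> float:
    """Proportion of reference-positive (TEE-confirmed) cases also called positive on TTE."""
    return tp / (tp + fn)

def npv(tn: int, fn: int) -> float:
    """Proportion of TTE-negative studies with no vegetation on TEE."""
    return tn / (tn + fn)

# Placeholder counts for a "strict negative criteria" reading of TTE.
tp, fn, tn = 98, 2, 70
print(f"sensitivity = {sensitivity(tp, fn):.0%}")   # 98%
print(f"NPV = {npv(tn, fn):.0%}")                   # 97%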
abstract_id: PUBMED:12848698
The prospective role of transesophageal echocardiography in the diagnosis and management of patients with suspected infective endocarditis. Study Objectives: Transesophageal echocardiography (TEE) has a high sensitivity for the diagnosis of infective endocarditis (IE), but the prospective role of TEE when added to a careful clinical examination has not been well-studied.
Design: We compared the results of TEE to a clinical evaluation by an infectious disease specialist in 43 consecutive patients in whom TEE was ordered to rule out IE. Prior to TEE, the patients were classified on clinical grounds as to their likelihood of IE using a modification of the von Reyn criteria. Changes in management occurring as a result of TEE also were evaluated.
Measurements And Results: TEE was positive in 11 patients, negative in 29, and indeterminate in 3. TEE was positive in 6 (46%) of 13 high probability patients, 2 (67%) of 3 medium probability patients, and 3 (11%) of 27 low probability patients. A change in management based on TEE occurred in 4 (31%) patients with high probability, in no patients with medium probability, and in 1 (4%) patient with low probability.
Conclusions: TEE confirms IE in patients with high probability of IE and often leads to a management change. However, TEE is unlikely to establish the diagnosis or change management in patients with low probability.
abstract_id: PUBMED:3772619
Echocardiography, endocarditis, and clinical information bias. Although clinical information provided to the interpreter of imaging tests may improve disease detection, it may also bias the interpreter towards certain diagnoses, increasing the chance of false positives. To determine the possibility of this bias, the authors studied patients who were referred for echocardiography with a clinical suspicion of endocarditis. Hospital charts from a two-year period were reviewed to determine clinical data available to the echocardiographer, echocardiogram results, and the final diagnosis. Four clinical features, when present at the time of echocardiography, were associated with increased numbers of false-positive results. Test specificity was 97% (34/35) for patients without any of these features, but dropped to 80% (16/20) when two or more features were present. The authors conclude that clinical information may bias echocardiogram interpretations such that both test specificity and the posttest probability of disease may be overestimated when tests are used in clinical practice.
abstract_id: PUBMED:21449828
Failure of clinical features of low probability endocarditis. The early echo remains essential. Objective: To investigate if earlier reported retrospectively derived criteria for predicting absence of infective endocarditis (IE) on transthoracic echocardiography could be prospectively confirmed or improved with transoesophageal echocardiography (TOE).
Design: Prospective analysis of the relationship between predefined clinical IE features and findings on TOE in 708 IE suspected patients.
Results: The previously reported criteria were rejected as 1/10 of our confirmed IE patients fulfilled criteria for predicting absence of IE. However, our study generated another model of low probability of IE: This disease was absent in 99.4% of patients with negative blood cultures and absence of vascular phenomena and predisposing cardiac conditions. Such patients accounted for 25% of our population of patients suspected of IE.
Conclusions: The utility of earlier reported clinical criteria for predicting absence of IE proved insufficient. Instead the study generated new simpler criteria of low probability of IE. However, these included negative blood cultures, but echocardiography must not be postponed while awaiting the results of blood cultures. Therefore the proposed new criteria only apply to patients with documented negative blood cultures when the suspicion of IE arises, in our study only 10% of the population. Accordingly, the study documented the essential role of early echocardiography in suspected IE.
Answer: Yes, the assessment of pretest probability of disease does improve the utility of echocardiography in suspected endocarditis in children. Studies have shown that the yield rate (YR) of echocardiography is significant when the clinical probability of infectious endocarditis (IE) is intermediate-to-high, and low when the clinical probability is low (PUBMED:12640373). This suggests that strict pretest assessment of disease probability can lead to more effective utilization of echocardiography in pediatric populations.
Furthermore, the application of strict clinical criteria, such as the modified Duke's criteria, can improve diagnostic accuracy and appropriate use of echocardiography. When these criteria are applied, endocarditis is generally found only in individuals with a moderate to high pre-echocardiographic clinical likelihood, indicating that patients with low likelihood do not derive additional diagnostic benefit from echocardiography (PUBMED:12915922).
Additionally, a study found that in patients with structurally normal hearts and suspected IE, the diagnostic yield of transthoracic echocardiography (TTE) is minimal in low-risk patients, reinforcing the importance of evaluating patients using criteria to assess clinical probability prior to performing TTE (PUBMED:31673735).
In summary, the assessment of pretest probability of disease can enhance the utility of echocardiography by ensuring that it is used more judiciously and effectively, particularly in pediatric patients with suspected endocarditis. This approach can prevent unnecessary testing in low-probability cases while ensuring that those with a higher likelihood of IE receive the appropriate diagnostic evaluation (PUBMED:12640373; PUBMED:12915922; PUBMED:31673735). |
Instruction: Are sleep disorders associated with increased mortality in asthma patients?
Abstracts:
abstract_id: PUBMED:27855675
Are sleep disorders associated with increased mortality in asthma patients? Background: South Korea has experienced problems with poor symptom management in asthma patients and a remarkable increase in sleep disorders. However, few studies have investigated these issues. We examined the relationship between sleep disorders and mortality in asthma patients to suggest effective alternatives from a novel perspective.
Methods: We used data from the National Health Insurance Service (NHIS) National Sample Cohort 2004-2013, which included medical claims filed for 186,491 patients who were newly diagnosed with asthma during the study period. We performed survival analyses using a Cox proportional hazards model with time-dependent covariates to examine the relationship between sleep disorders and mortality in asthma patients.
Results: There were 5179 (2.78%) patients who died during the study period. Sleep disorders in patients previously diagnosed with asthma were associated with a higher risk of mortality (hazard ratio [HR]: 1.451, 95% confidence interval [CI]: 1.253-1.681). In addition, significant interaction was found between sleep disorders and Charlson comorbidity index.
Conclusions: Our findings suggest that an increased prevalence of sleep disorders in asthma patients increases the risk of mortality. Considering the worsening status of asthma management and the rapid growth of sleep disorders in South Korea, clinicians and health policymakers should work to develop interventions to address these issues.
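The survival analysis described above treats the sleep-disorder diagnosis as a time-dependent covariate: a patient contributes unexposed person-time before the diagnosis and exposed person-time afterwards. A minimal sketch of that data layout and model fit, assuming the Python lifelines package and invented column names and records (not the NHIS schema or data), is shown below.

import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Counting-process (start/stop) format: one row per interval of follow-up.
# 'sleep_disorder' switches from 0 to 1 at the (hypothetical) diagnosis date.
long_df = pd.DataFrame({
    "id":             [1, 1, 2, 3, 3, 4],
    "start":          [0, 20, 0, 0, 10, 0],
    "stop":           [20, 40, 70, 10, 80, 90],
    "sleep_disorder": [0, 1, 0, 0, 1, 0],
    "event":          [0, 1, 1, 0, 0, 0],   # death at the end of the interval
})

ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="id", event_col="event",
        start_col="start", stop_col="stop")
ctv.print_summary()   # the exp(coef) column is the hazard ratio for sleep_disorder

The toy records exist only to show the format; the study itself fitted this kind of model to 186,491 patients and additionally adjusted for covariates such as the Charlson comorbidity index.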
abstract_id: PUBMED:29659630
Body Mass Index of 92,027 patients acutely admitted to general hospitals in Denmark: Associated clinical characteristics and 30-day mortality. Background: Data are sparse on the range of BMI among patients acutely admitted to general hospitals. We investigated BMI values and associated patient characteristics, reasons for hospital admission, and mortality in Denmark.
Methods: We identified all persons with an acute inpatient admission 2011-2014 in Central Denmark Region and assessed BMI measurements recorded in the Clinical Information System. We used cross-sectional and cohort analyses to examine the BMI distribution and its association with demographic characteristics, comorbidities, medication use, tobacco smoking, reasons for admission, and 30-day mortality.
Results: Among 92,027 acutely admitted patients (median age 62 years, 49% female) with a BMI measurement, 4% had a BMI (kg/m2) <18.5, 42% a BMI between 18.5 and 25, 34% a BMI between 25 and 30, and 20% a BMI ≥30. Compared with normal-weight patients, 30-day mortality was high among patients with BMI <18.5 (7.5% vs. 2.8%; age- and smoking-adjusted odds ratio (aOR) 2.4, 95% confidence interval (CI): 2.0-2.9), whereas mortality was lower in patients with overweight (aOR 0.7; 95% CI: 0.6-0.8) and obesity class I (aOR 0.8; 95% CI: 0.6-0.9). Compared with the total population, patients with BMI <18.5 were older (68 years median); more were female (73%); more had comorbidities (Charlson Comorbidity Index score >0 in 42% vs. 33% overall); more were current smokers (45% vs. 27% overall); and acute admissions due to respiratory diseases or femoral fractures were frequent. In contrast, patients with BMI ≥30 were relatively young (59 years median) and fewer smoked (24%); type 2 diabetes, sleep disorders, cholelithiasis, and heart failure were frequent diagnoses. Prevalence of therapies for metabolic syndrome, pain, and psychiatric disorders increased with higher BMI, while patients with BMI <18.5 frequently used asthma medications, glucocorticoids, and antibiotics.
Conclusion: In patients acutely admitted to general hospitals, reasons for hospital admission and associated clinical characteristics differ substantially according to BMI range. BMI <18.5 is a clinical predictor of high short-term mortality.
abstract_id: PUBMED:36372874
The effect and relative importance of sleep disorders for all-cause mortality in middle-aged and older asthmatics. Background: Previous studies observed that sleep disorders potentially increased the risk of asthma and asthmatic exacerbation. We aimed to examine whether excessive daytime sleepiness (EDS), probable insomnia, objective short sleep duration (OSSD), and obstructive sleep apnea (OSA) affect all-cause mortality (ACM) in individuals with or without asthma.
Methods: We extracted relevant data from the Sleep Heart Health Study established in 1995-1998 with an 11.4-year follow-up. Multivariate Cox regression analysis with a proportional hazards model was used to estimate the associations between ACM and four sleep disorders among asthmatic patients and individuals without asthma. Dose-response analysis and machine learning (random survival forest and CoxBoost) further evaluated the impact of sleep disorders on ACM in asthmatic patients.
Results: A total of 4538 individuals with 990 deaths were included in our study, including 357 asthmatic patients with 64 deaths. Three multivariate Cox regression analyses suggested that OSSD (adjusted HR = 2.67, 95% CI: 1.23-5.77), but not probable insomnia, EDS, or OSA, significantly increased the risk of ACM in asthmatic patients. Three dose-response analyses also indicated that the extension of objective sleep duration was associated with a reduction in ACM in asthmatic patients compared with those with very short objective sleep duration. Severe EDS potentially augmented the risk of ACM compared with asthmatics without EDS (adjusted HR = 3.08, 95% CI: 1.11-8.56). Machine learning demonstrated that, of the four sleep disorders, OSSD had the largest relative importance for ACM in asthmatics, followed by EDS, OSA, and probable insomnia.
Conclusions: This study observed that OSSD and severe EDS were positively associated with an increase in ACM in asthmatic patients. Periodic screening and effective intervention of sleep disorders are necessary for the management of asthma.
abstract_id: PUBMED:24705657
Cerebral palsy patients discovered dead during sleep: experience from a comprehensive tertiary pediatric center. Objectives: It is not uncommon for children with cerebral palsy (CP) to be discovered dead during sleep (DDDS); however, the factors associated with this pattern of mortality remain unknown. The current study aims to describe the mortality associated with children with CP from a single, tertiary care center who were DDDS.
Methods: A retrospective (case-only) design to examine proportionate mortality and patient characteristics and co-morbidities that may be related to children DDDS between 1993 and 2011.
Results: There were 177 patients with CP whose deaths were reported to our institution during the study period, of which 19 were DDDS at home. The period proportionate mortality (PPM) was 114.5 per 1000. The average age at time of death was 17 years and 6 months (minimum, 6 years; maximum, 25 years). All but one of the DDDS patients had gastrointestinal feeding tubes, seizure disorders, and respiratory disorders, and were non-ambulatory. Very importantly, our DDDS patients manifested clusters of respiratory disorders, namely recurrent aspiration pneumonia (10/19), asthma pneumonitis (4/19), food/vomitus inhalation (6/19), reactive airway disease (16/19), respiratory failure (14/19), chronic bronchitis (7/19), chronic obstructive lung disease (9/19), and nocturnal respiratory insufficiency (16/19).
Conclusions: Respiratory disorders, severe motor disability, seizures, and intellectual status are possible co-morbidities that may be associated with DDDS. There is a need for further study in order to understand what type of monitoring and care (if any) may help prevent DDDS related to these co-morbidities and sleep disorders/abnormalities.
abstract_id: PUBMED:35040430
Sleep problems and associations with cardiovascular disease and all-cause mortality in asthma-chronic obstructive pulmonary disease overlap: analysis of the National Health and Nutrition Examination Survey (2007-2012). Study Objectives: The impact of sleep problems (ie, sleep duration and presence of sleep disorders) on cardiovascular morbidity and all-cause mortality in adults with asthma-chronic obstructive pulmonary disease overlap (ACO) is unknown.
Methods: Using the National Health and Nutrition Examination Survey database (2007-2012 cycles) and National Death Index data, we identified 398 persons with ACO. Data on self-reported physician-diagnosed sleep disorders and cardiovascular disease were collected. Sleep duration in hours was categorized as short (≤ 5 hours), normal (6-8 hours), and long (≥ 9 hours). Associations between sleep duration and presence of sleep disorders and cardiovascular disease and all-cause mortality were analyzed in regression models adjusted for age, sex, race, smoking status, and body mass index.
Results: Presence of sleep disorders was more commonly reported in the ACO group (24.7%) compared to all other groups. The ACO group had a higher proportion of short sleepers (27.6%) compared to controls (11.7%) and chronic obstructive pulmonary disease (19.2%) and a higher proportion of long sleepers (6.9%) compared to chronic obstructive pulmonary disease (5.5%). Presence of sleep disorders was associated with increased risk for cardiovascular disease (odds ratio = 2.48; 95% confidence interval, 1.65-3.73) and death (hazard ratio = 1.44; 95% confidence interval, 1.03-2.02); risk did not vary between groups. A stronger association existed between sleep duration and increased risk for cardiovascular and all-cause mortality in ACO compared to chronic obstructive pulmonary disease and controls.
Conclusions: These results suggest that persons with ACO may represent a high-risk group that should be targeted for more aggressive intervention for sleep problems, a modifiable risk factor.
Citation: Baniak LM, Scott PW, Chasens ER, et al. Sleep problems and associations with cardiovascular disease and all-cause mortality in asthma-chronic obstructive pulmonary disease overlap: analysis of the National Health and Nutrition Examination Survey (2007-2012). J Clin Sleep Med. 2022;18(6):1491-1501.
abstract_id: PUBMED:17911092
Sleep quality predicts quality of life and mortality risk in haemodialysis patients: results from the Dialysis Outcomes and Practice Patterns Study (DOPPS). Background: Poor sleep quality (SQ) affects many haemodialysis (HD) patients and could potentially predict their morbidity, mortality, quality of life (QOL) and patterns of medication use.
Methods: Data on SQ were collected from 11,351 patients in 308 dialysis units in seven countries in the Dialysis Outcomes and Practice Patterns Study (DOPPS) between 1996 and 2001 through a patient self-reported SQ scale, ranging from 0 (worst) to 10 (best). A score of <6 reflected poor SQ. Sleep disturbance was also assessed by self-reported daytime sleepiness, feeling drained and nocturnal awakening. Logistic and multiple linear regression were used to assess predictors of SQ and associations with QOL. Cox regression examined associations with mortality. Analyses accounted for case-mix, facility clustering and country.
Results: Nearly half (49%) of patients experienced poor SQ. Mean SQ scores varied by country, ranging from 4.9 in Germany to 6.5 in Japan. Patients with poor SQ were more likely to be prescribed antihistamines, antidepressants, anti-inflammatories, narcotics, gastrointestinal (GI) medications, anti-asthmatics or hypnotics. Physical exercise at least once a week (vs < once a week) was associated with lower odds of poor SQ (AOR = 0.55-0.85, P < 0.05). Poorer SQ was associated with significantly lower mental and physical component summary (MCS/PCS) scores (MCS scores 1.9-13.2 points lower and PCS scores 1.5-7.7 points lower when SQ scores were <10 vs 10). The RR of mortality was 16% higher for HD patients with poor SQ.
Conclusions: Poor SQ is common among HD patients in DOPPS countries and is independently associated with several QOL indices, medication use patterns and mortality. Assessment and management of SQ should be an important component of care.
abstract_id: PUBMED:37865213
Joint associations of asthma and sleep duration with cardiovascular disease and all-cause mortality: a prospective cohort study. Purpose: This study aimed to investigate the joint association of asthma and sleep duration with cardiovascular disease (CVD) and mortality risk.
Methods: This prospective cohort study included 366,387 participants from the UK Biobank. The participants were divided into three groups based on their sleep duration (short: <7 h/d; referent: 7-8 h/d for ages 65+ and 7-9 h/d for ages 39-64; long: >8 h/d for ages 65+ and >9 h/d for ages 39-64). Cox proportional hazards models were used to examine the joint association of asthma and sleep duration with CVD and all-cause mortality.
Results: Participants with asthma and short sleep duration showed increased risk of CVD (hazard ratio [HR] 1.42; 95% confidence interval [CI] 1.34-1.51) and all-cause mortality (HR, 1.26; 95% CI, 1.17-1.36), compared with participants with no asthma in the referent sleep duration group. We documented significant additive interactions between asthma and short sleep duration in relation to CVD (relative excess risk due to interaction [RERI], 0.13; 95% CI, 0.04-0.23) and all-cause mortality (RERI, 0.12; 95% CI, 0.01-0.23).
Conclusions: Asthma and short sleep duration may have additive interactions on CVD and all-cause mortality risk, highlighting the importance of controlling asthma in combination with improving sleep duration.
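The additive-interaction measure quoted above, the relative excess risk due to interaction, is computed as RERI = HR_joint - HR_asthma_only - HR_short_sleep_only + 1. A small sketch follows; the joint HR of 1.42 is taken from the abstract, while the two single-exposure hazard ratios are hypothetical values inserted only to make the arithmetic land on the reported RERI of 0.13.

def reri(hr_joint: float, hr_a_only: float, hr_b_only: float) -> float:
    """Relative excess risk due to interaction; > 0 suggests additive interaction."""
    return hr_joint - hr_a_only - hr_b_only + 1

# hr_joint from the abstract; the two single-exposure HRs below are hypothetical.
print(round(reri(hr_joint=1.42, hr_a_only=1.10, hr_b_only=1.19), 2))   # 0.13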
abstract_id: PUBMED:37251328
Herbal medicine for the treatment of obesity-associated asthma: a comprehensive review. Obesity is fast growing into a global pandemic and is associated with numerous comorbidities such as cardiovascular disease, hypertension, diabetes, gastroesophageal reflux disease, sleep disorders, nephropathy, neuropathy, and asthma. Studies have reported that obese asthmatic subjects are at increased risk of asthma and encounter more severe symptoms, owing to a number of pathophysiologies. Understanding the relationship between obesity and asthma is vital; however, a clear and precise pathogenesis underlying the association remains elusive. Numerous obesity-asthma etiologies have been reported, viz., increased circulating pro-inflammatory adipokines such as leptin and resistin, decreased anti-inflammatory adipokines such as adiponectin, depletion of the ROS-controlling Nrf2/HO-1 axis, nucleotide-binding domain, leucine-rich-containing family, pyrin domain-containing-3 (NLRP3)-associated macrophage polarization, hypertrophy of WAT, activation of the Notch signaling pathway, and a dysregulated melanocortin pathway; however, only a very limited number of reports interrelate these pathophysiologies. Because of the underlying complex pathophysiologies exaggerated by the obese state, obese asthmatics respond poorly to anti-asthmatic drugs. The poor response to anti-asthmatic drugs may be due to an approach that targets asthma alone and ignores the anti-obesity target. Aiming only at conventional anti-asthmatic targets in obese asthmatics may therefore prove futile unless treatment is also directed towards ameliorating obesity pathogenesis, as part of a holistic approach to obesity-associated asthma. Herbal medicines for obesity and obesity-associated comorbidities are fast becoming safer and more effective alternatives to conventional drugs owing to their multitargeted approach and fewer adverse effects. Although herbal medicines are widely used for obesity-associated comorbidities, only a limited number have been scientifically validated and reported against obesity-associated asthma; notable among them are quercetin, curcumin, geraniol, resveratrol, β-caryophyllene, celastrol, and tomatidine, to name a few. In view of this, there is a dire need for a comprehensive review summarizing the role of bioactive phytoconstituents from different sources, including plants, marine organisms, and essential oils, in terms of their therapeutic mechanisms. This review therefore aims to critically discuss the therapeutic role of herbal medicine, in the form of bioactive phytoconstituents, against obesity-associated asthma as reported in the scientific literature to date.
abstract_id: PUBMED:30810886
Relationship between neuroticism and sleep quality among asthma patients: the mediation effect of mindfulness. Objective: The aims of this study were to investigate the prevalence of sleep disturbance; to validate the associations between neuroticism, mindfulness, and sleep quality; and to further examine whether mindfulness mediates the relationship between neuroticism and sleep quality among asthma patients.
Methods: This study was conducted with 193 asthma patients from outpatient clinics. They completed questionnaires including the neuroticism subscale of the Big Five Inventory (BFI), the Pittsburgh Sleep Quality Index (PSQI), and the Mindful Attention Awareness Scale (MAAS). Structural equation model was used to analyze the relationships among neuroticism, mindfulness, and sleep quality, with mindfulness as a mediator.
Results: The mean global PSQI score was 7.57 (SD = 3.25), and 69.9% of asthma patients reported poor sleep quality (cutoff score > 5). Structural equation model analysis showed that neuroticism was significantly associated with global PSQI scores (β = 0.198, P = 0.006), and mindfulness (β = - 0.408, P < 0.001), respectively; mindfulness was associated with global PSQI scores (β = - 0.250, P = 0.006). Furthermore, mindfulness mediated the relationship between neuroticism and global PSQI scores, in which the mediation effect was 0.102 (- 0.408 × - 0.250), and the bootstrapped 95% CI did not include zero (0.032, 0.208, P = 0.021).
Conclusions: Sleep disturbance is a serious health concern among asthma patients. This study illuminated the latent mediating mechanism of mindfulness on neuroticism and sleep quality, and implied that intervention and prevention programs on mindfulness might be beneficial in improving sleep quality in asthma patients.
abstract_id: PUBMED:30202023
Real-world retrospective cohort study ARCTIC shows burden of comorbidities in Swedish COPD versus non-COPD patients. This study aimed to generate real-world evidence to assess the burden of comorbidities in COPD patients, to effectively manage these patients and optimize the associated healthcare resource allocation. ARCTIC is a large, real-world, retrospective cohort study conducted in Swedish COPD patients using electronic medical record data collected between 2000 and 2014. These patients were studied for prevalence of various comorbidities and for association of these comorbidities with exacerbations, mortality, and healthcare costs compared with an age-, sex-, and comorbidities-matched non-COPD reference population. A total of 17,479 patients with COPD were compared with 84,514 non-COPD reference population. A significantly higher prevalence of various comorbidities was observed in COPD patients 2 years post-diagnosis vs. reference population, with the highest percentage increase observed for cardiovascular diseases (81.8% vs. 30.7%). Among the selected comorbidities, lung cancer was relatively more prevalent in COPD patients vs. reference population (relative risk, RR = 5.97, p < 0.0001). Ischemic heart disease, hypertension, depression, anxiety, sleep disorders, osteoporosis, osteoarthritis, and asthma caused increased mortality rates in COPD patients. Comorbidities that were observed to be significantly associated with increased number of severe exacerbations in COPD patients included heart failure, ischemic heart disease, depression/anxiety, sleep disorders, osteoporosis, lung cancer, and stroke. The cumulative healthcare costs associated with comorbidities over 2 years after the index date were observed to be significantly higher in COPD patients (€27,692) vs. reference population (€5141) (p < 0.0001). The data support the need for patient-centered treatment strategies and targeted healthcare resource allocation to reduce the humanistic and economic burden associated with COPD comorbidities.
Answer: Yes, sleep disorders are associated with increased mortality in asthma patients. A study using data from the National Health Insurance Service National Sample Cohort in South Korea found that sleep disorders in patients previously diagnosed with asthma were associated with a higher risk of mortality (hazard ratio [HR]: 1.451) (PUBMED:27855675). Another study from the Sleep Heart Health Study with an 11.4-year follow-up indicated that objective short sleep duration (OSSD) significantly increased the risk of all-cause mortality (ACM) in asthmatic patients (adjusted HR = 2.67) and that severe excessive daytime sleepiness (EDS) potentially augmented the risk of ACM compared with asthmatics without EDS (adjusted HR = 3.08) (PUBMED:36372874). Furthermore, a study analyzing the National Health and Nutrition Examination Survey data found that the presence of sleep disorders was associated with increased risk for cardiovascular disease and death in adults with asthma-chronic obstructive pulmonary disease overlap (ACO), suggesting that persons with ACO may represent a high-risk group that should be targeted for more aggressive intervention for sleep problems (PUBMED:35040430). Additionally, a prospective cohort study including participants from the UK Biobank showed that participants with asthma and short sleep duration had increased risk of CVD and all-cause mortality compared with participants with no asthma in the referent sleep duration group (PUBMED:37865213). These findings collectively suggest that sleep disorders and inadequate sleep duration are significant risk factors for increased mortality among asthma patients. |
Instruction: Patients with refractory back pain treated in the emergency department: is immediate interlaminar epidural steroid injection superior to hospital admission and standard medical pain management?
Abstracts:
abstract_id: PUBMED:25794216
Patients with refractory back pain treated in the emergency department: is immediate interlaminar epidural steroid injection superior to hospital admission and standard medical pain management? Background: Hospital admissions for back pain are prolonged, costly, and common. Epidural steroid injections are frequently performed in an outpatient setting with an excellent safety and efficacy profile.
Objectives: The purpose was to review data from patients with severe pain that did not respond to aggressive medical treatment in the emergency department (ED) and determine the effectiveness of an interlaminar epidural steroid injection (ESI) in this patient population.
Study Design: Retrospective matched cohort design.
Setting: Single urban emergency department at a tertiary referral center.
Methods: A retrospective cohort comparison pairing 2 groups that both failed aggressive pain control in the ED was performed. The epidural injection group (1ESI) received an interlaminar ESI while in the ED. The standard therapy group (2ST) was admitted for medical pain management. Groups were matched for pain intensity, age, and symptom duration.
Results: Thirty-five patients in 1ESI (NRS 8.8, 5 - 10, 0.35), and 28 patients in 2ST (NRS 8.9, 4 - 10, 1.7). Pain score after ESI 0.33 (0 - 2, 0.6); all were discharged. Pain score on day 1 of hospital admission for 2ST was 8.7 (7 - 10, 1.5). Total ED time was 8 hours for 1ESI and 13 hours for 2ST (P < 0.002). 1ESI patients received less narcotics while in the ED (P < 0.002) and were discharged home with less narcotics than 2ST (< 0.002). Average inpatient length of stay (LOS) for 2ST was 5 (1.5 - 15, 3.3) days. Cost of care was over 6 times greater for those patients admitted for pain management (P < 0.001).
Limitations: Retrospective design, non-randomized sample, and a small patient population.
Conclusion: An ED patient cohort with severe refractory pain was treated with an interlaminar ESI after failing maximal medical pain management while in the ED. Complete pain relief was achieved safely and rapidly. The need for inpatient admission was eliminated after injection. Costs were lower in the group that received an epidural injection. Narcotic requirements upon discharge were decreased as well.
abstract_id: PUBMED:28204730
The Effectiveness and Risks of Fluoroscopically Guided Lumbar Interlaminar Epidural Steroid Injections: A Systematic Review with Comprehensive Analysis of the Published Data. Objective: To determine the effectiveness and risks of fluoroscopically guided lumbar interlaminar epidural steroid injections.
Design: Systematic review of the literature with comprehensive analysis of the published data.
Interventions: Three reviewers with formal training in evidence-based medicine searched the literature on fluoroscopically guided lumbar interlaminar epidural steroid injections. A larger team consisting of five reviewers independently assessed the methodology of studies found and appraised the quality of the evidence presented.
Outcome Measures: The primary outcome assessed was pain relief. Other outcomes such as functional improvement, reduction in surgery rate, decreased use of opioids/medications, and complications were noted, if reported. The evidence on each outcome was appraised in accordance with the Grades of Recommendation, Assessment, Development and Evaluation (GRADE) system of evaluating evidence.
Results: The search yielded 71 primary publications addressing fluoroscopically guided lumbar interlaminar epidural steroid injections. There were no explanatory studies and all pragmatic studies identified were of low quality, yielding evidence comparable to observational studies.
Conclusions: The body of evidence regarding effectiveness of fluoroscopically guided interlaminar epidural steroid injection is of low quality according to GRADE. Studies suggest a lack of effectiveness of fluoroscopically guided lumbar interlaminar epidural steroid injections in treating primarily axial pain regardless of etiology. Most studies on radicular pain due to lumbar disc herniation and stenosis do, however, report statistically significant short-term improvement in pain.
abstract_id: PUBMED:34183200
Ultrasound-Guided Caudal Epidural Steroid Injection for Back Pain: A Case Report of Successful Emergency Department Management of Radicular Low Back Pain Symptoms. Background: Radicular low back pain is difficult to treat and commonly encountered in the Emergency Department (ED). Pain associated with acute radiculopathy results in limited ability to work, function, and enjoy life, and is associated with increased risk of chronic opioid therapy. In this case report, we describe the first ED-delivered ultrasound-guided caudal epidural steroid injection (ESI) used to treat medication-refractory lumbar radiculopathy, which resulted in immediate and sustained resolution of pain.
Case Report: A 56-year old man with a past medical history of chronic lumbar radiculopathy presented to the ED with acute low back and right lower-extremity pain. Based on history and physical examination, a right L5 radiculopathy was suspected. His pain was poorly controlled despite multimodal analgesia, at which point he was offered admission or an ultrasound-guided caudal ESI. The procedure was performed using dexamethasone, preservative-free normal saline, and preservative-free 1% lidocaine solution, after which the patient reported 100% resolution of his pain and requested discharge from the ED. Why Should an Emergency Physician Be Aware of This? The safety and efficacy of ultrasound-guided caudal ESIs have been established, but there is a paucity of literature exploring their application in the ED. We present a case of a refractory lumbar radiculopathy successfully treated with an ultrasound-guided caudal ESI. ED-performed epidurals can be one additional tool in the emergency physician arsenal to treat acute or chronic lumbar radiculopathy.
abstract_id: PUBMED:36425921
Comparative evaluation of midline versus parasagittal interlaminar epidural steroid injection for management of symptomatic lumbar intervertebral disc herniation. Background And Aims: Epidural steroid injections (ESIs) with or without local anaesthetics have been used for the past several years for the treatment of back pain, especially for radicular symptoms. The aim of this prospective study was to compare the efficacy of midline with parasagittal approach for interlaminar ESI in the management of symptomatic lumbar intervertebral disc herniation.
Methods: Sixty patients (aged 20-60 years) with pain pattern consistent with lumbar radiculopathy caused by lumbar intervertebral disc herniation and who did not respond to conservative treatment were included in the study. They were randomly divided in two groups of 30 each: group I (MILESI, n = 30) consisting of midline interlaminar ESI, and group II (PSILESI, n = 30) consisting of parasagittal interlaminar ESI. They were administered a combination of 80 mg of methylprednisolone acetate (40 mg/ml) and 6 ml of 0.25% bupivacaine (total volume of 8 ml). Pain, patient satisfaction, and the Oswestry Disability Index (ODI) were assessed at different time intervals before and after the procedure for up to six months.
Results: The improvement in pain score after ESI was statistically significant in both the groups at all intervals of time, with no significant difference between the two groups. The mean pain score was <3 from two weeks onwards after the injection. The pain score decreased by more than five points and it was around two points at the end of the six-month study period. Around 50% of patients in both groups had excellent satisfaction.
Conclusion: Both techniques were effective in providing good analgesia. Pain relief and improvement in disability were clinically better with the parasagittal interlaminar approach.
abstract_id: PUBMED:30429743
A comparative study between interlaminar nerve root targeted epidural versus infraneural transforaminal epidural steroids for treatment of intervertebral disc herniation. Background: Low back pain (LBP) is one of the most common musculoskeletal abnormalities. Epidural corticosteroid injections (ESIs) have been used long time ago for treatment of lumbar radiculopathy or discogenic back pain in case of failed medical and conservative management. Different techniques for ESIs include the interlaminar, the caudal, and the transforaminal approaches.
Purpose: The aim of our study is to compare between the efficacy of infraneural transforaminal ESI and lumbar paramedian nerve root targeted interlaminar steroid injection in reduction of unilateral radicular pain secondary to disc prolapse.
Patients And Methods: This prospective double-blind randomized study was performed on 40 patients randomized into two equal groups, each of 20: the infraneural transforaminal ESI (IN group) and the interlaminar parasagittal ESI (IL group). Patients with backache without leg radiation, or with focal motor neurological deficit, previous spine surgery, S1 radiculopathy, lumbar ESI in the past month, systemic steroid used recently within 4 weeks before the procedure, allergy to any medication or addiction to opioids, and pregnancy were excluded from the study. The duration and efficacy of pain relief (defined as ≥40% reduction of pain perception) by 0-10 visual analog scale (VAS) is the primary outcome. Functional assessment using Modified Oswestry Disability Questionnaire (MODQ) and possible side effects and complications are the secondary outcomes.
Results: The VAS and MODQ scores were significantly lower in both groups in comparison with the basal values. There was also a lower VAS in the infraneural group than the parasagittal (IL) group up to 6 months after injection.
Conclusion: The infraneural (IN) epidural steroid injection is more favorable than the parasagittal (IL) interlaminar epidural steroid injection, owing to its greater long-term improvement in physical function, with no serious side effects.
abstract_id: PUBMED:20859317
Flushing following interlaminar lumbar epidural steroid injection with dexamethasone. Background: Epidural steroid injections are commonly used in managing radicular pain. Most complications related to epidural injections are minor and self-limited. Flushing is considered as one such minor side effect. Flushing has been studied using various steroid preparations including methylprednisone, triamcinolone, and betamethasone but its frequency has never been studied using dexamethasone.
Objective: This study evaluates the frequency of flushing associated with fluoroscopy-guided lumbar epidural steroid injections using dexamethasone.
Study Design: Retrospective cohort design study. Patients presenting with low back pain were evaluated and offered a fluoroscopically guided lumbar epidural steroid injection using dexamethasone via an interlaminar approach as part of a conservative care treatment plan.
Setting: University-based Pain Management Center.
Intervention: All injections were performed consecutively over a 2-month period by one staff member using 16 mg (4 mg/mL) of dexamethasone. A staff physician specifically asked each participant about the presence of flushing following the procedure prior to discharge on the day of injection and again on follow-up within 48 hours after the injections. The answers were documented as "YES" or "NO."
Results: A total of 150 participants received fluoroscopically guided interlaminar epidural steroid injection. All participants received 16 mg (4 mg/mL) of dexamethasone with 2 mL of 0.2% ropiviciane. Overall incidence of flushing was 42 out of 150 cases (28%). Of the 42 participants who experienced flushing, 12 (28%) experienced the symptom prior to discharge following the procedure. Twenty-seven of the 42 (64%) were female (P < 0.05). All the participants who experienced flushing noted resolution by 48 hours. No other major side effects or complications were noted.
Limitations: Follow-up data were solely based on subjective reports by patients via telecommunication. Follow-up time was limited to only 48 hours, which overlooks the possibility that more participants might have noted flushing after the 48 hour limit.
Conclusions: Flushing is commonly reported following epidural steroid injections. With an incidence of 28%, injections using 16 mg of dexamethasone by the interlaminar epidural route appear to be associated with a higher rate of flushing than previously reported with other steroid preparations. Additionally, female participants are more likely to experience flushing, though the reactions seem to be self-limiting, with resolution by 48 hours.
abstract_id: PUBMED:27630917
Efficacy of Epidural Steroid Injection in Management of Lumbar Prolapsed Intervertebral Disc: A Comparison of Caudal, Transforaminal and Interlaminar Routes. Introduction: Epidural steroid is an important modality in the conservative management of prolapsed lumbar disc and is being used for over 50 years. However, controversy still persists regarding their effectiveness in reducing the pain and improving the function with literature both supporting and opposing them are available.
Aim: To study the efficacy of epidural steroid injection in the management of pain due to prolapsed lumbar intervertebral disc and to compare the effectiveness between caudal, transforaminal and interlaminar routes of injection.
Materials And Methods: A total of 152 patients with back pain with or without radiculopathy with a lumbar disc prolapse confirmed on MRI, were included in the study and their pre injection Japanese Orthopaedic Association (JOA) Score was calculated. By simple randomization method (picking a card), patients were enrolled into one of the three groups and then injected methyl prednisone in the epidural space by one of the techniques of injection i.e. caudal, transforaminal and interlaminar. Twelve patients didn't turn up for the treatment and hence were excluded from the study. Remaining 140 patients were treated and were included for the analysis of the results. Eighty two patients received injection by caudal route, 40 by transforaminal route and 18 by interlaminar route. Post injection JOA Score was calculated at six month and one year and effectiveness of the medication was calculated for each route. The data was compared by LSD and ANOVA method to prove the significance. Average follow-up was one year.
Results: At one year after injecting the steroid, all three routes were found to be effective in improving the JOA Score (Caudal route in 74.3%, transforaminal in 90% and interlaminar in 77.7%). Transforaminal route was significantly more effective than caudal (p=0.00) and interlaminar route (p=0.03) at both 6 months and one year after injection. No significant difference was seen between the caudal and interlaminar route (p=0.36).
Conclusion: The management of low back pain and radicular pain due to a prolapsed lumbar intervertebral disc by injecting methyl prednisone in epidural space is satisfactory in the current study. All three injection techniques are effective with the best result obtained by transforaminal route.
abstract_id: PUBMED:16158340
Transforaminal epidural steroid injection and its complications. Interlaminar epidural steroid injections have been used in pain management for many years. However, incomplete clinical recovery in some patients, together with growing anatomical knowledge and experience, has led to the investigation of different techniques. The transforaminal approach has produced rather favourable results, but it also carries a risk of severe complications, especially when used in the cervical area. Investigators have therefore sought ways to reduce these complications and to identify safer techniques. In this review, transforaminal epidural steroid injection techniques and their complications are examined.
abstract_id: PUBMED:21392252
Incidence and characteristics of complications from epidural steroid injections. Objective: Epidural steroid injections are frequently used in the management of spinal pain, but reports on the incidence of complications from this procedure vary. This study seeks to determine the incidence of complications resulting from this procedure, and to compare the rate of complications in transforaminal vs interlaminar injections.
Design: The design of the study was a retrospective chart review of epidural steroid injections in our academic physiatry practice over a 7-year period. A query of our electronic medical record identified all injection patients who contacted their physician or had a clinic visit or emergency department visit within 10 days of the procedure. Charts were individually reviewed for both major complications and minor complaints.
Results: A total of 4,265 injections were performed on 1,857 patients over 7 years: 161 cervical interlaminar injections, 123 lumbar interlaminar injections, 17 caudal injections, and 3,964 lumbar transforaminal injections. No major complications were identified. There were 103 minor complications, for an overall complication rate of 2.4% per injection. The most common complications were increased pain (1.1%), pain at injection site (0.33%), persistent numbness (0.14%), and "other" (0.80%). Complications were less common in transforaminal injections (2.1%) than in interlaminar injections (6.0%). One patient experienced a self-limited headache resulting from dural puncture during an interlaminar injection.
Conclusions: Fluoroscopically guided epidural steroid injections are a safe and well-tolerated intervention for cervical or lumbar pain and radiculopathy. Minor complications are uncommon, and most involve increases in pain. Transforaminal injections may result in fewer minor complications than interlaminar injections.
abstract_id: PUBMED:22159989
Incidence of inadvertent intra-articular lumbar facet joint injection during fluoroscopically guided interlaminar epidural steroid injection. Objective: To determine the incidence of inadvertent lumbar facet joint injection during an interlaminar epidural steroid injection (ESI).
Materials And Methods: A total of 686 interlaminar lumbar ESIs were performed from January 1, 2009 to December 31, 2009. Archived images from these cases were retrospectively reviewed on the PACS. Positive cases of inadvertent lumbar facet joint injection were identified by the characteristic sigmoid-shaped contrast pattern projecting over the posterior elements on the lateral view and/or ovoid contrast projecting over the facet joints on the anteroposterior (AP) view.
Results: Eight positive events were identified (1.2%). There was no statistically significant gender or lumbar level predilection. In 3/8 of the positive cases (37.5%), the inadvertent facet joint injection was recognized by the operator. The needle was repositioned as a result, and contrast within the posterior epidural space was documented by the end of the procedure. In 5/8 of the positive cases (62.5%), the patients reported an immediate decrease in the presenting pain.
Conclusions: The incidence of inadvertent lumbar facet joint injection during an interlaminar epidural steroid injection is low. Recognizing the imaging features of this event permits the operator to redirect the needle tip into the epidural space and/or identify the facet joint(s) as a source of the patient's presenting pain.
Answer: Based on the provided abstracts, immediate interlaminar epidural steroid injection (ESI) appears to be superior to hospital admission and standard medical pain management for patients with refractory back pain treated in the emergency department (ED). A retrospective matched cohort study compared two groups of patients who failed aggressive pain control in the ED: one group received an interlaminar ESI while in the ED (1ESI), and the other group was admitted for medical pain management (2ST). The study found that patients in the 1ESI group experienced complete pain relief, were discharged without the need for inpatient admission, and had lower costs of care than the 2ST group. Additionally, the 1ESI patients required fewer narcotics while in the ED and upon discharge (PUBMED:25794216).
However, it is important to note that the body of evidence regarding the effectiveness of fluoroscopically guided interlaminar epidural steroid injections is of low quality according to the GRADE system. While most studies on radicular pain due to lumbar disc herniation and stenosis report statistically significant short-term improvement in pain, the effectiveness in treating primarily axial pain regardless of etiology is questioned (PUBMED:28204730).
Other studies have also shown the efficacy of different approaches for ESI, such as ultrasound-guided caudal ESI (PUBMED:34183200), midline versus parasagittal interlaminar ESI (PUBMED:36425921), and interlaminar nerve root targeted epidural versus infraneural transforaminal epidural steroids (PUBMED:30429743). These studies suggest that ESIs can be effective in managing back pain and radicular symptoms, with some approaches potentially offering better analgesia or longer-term improvement in physical function.
In conclusion, immediate interlaminar ESI in the ED may be a superior option for patients with severe refractory back pain compared to hospital admission and standard medical pain management, leading to rapid pain relief and reduced healthcare costs. However, the overall quality of evidence for the effectiveness of this intervention is low, and further high-quality studies are needed to confirm these findings. |
Instruction: Lead screening among high-risk urban children. Are the 1991 Centers for Disease Control and Prevention guidelines feasible?
Abstracts:
abstract_id: PUBMED:15514235
High-intensity targeted screening for elevated blood lead levels among children in 2 inner-city Chicago communities. Objectives: We assessed the prevalence of elevated blood lead levels (≥10 micrograms of lead per deciliter of blood), risk factors, and previous blood lead testing among children in 2 high-risk Chicago, Ill, communities.
Methods: Through high-intensity targeted screening, blood lead levels were tested and risks were assessed among a representative sample of children aged 1 to 5 years who were at risk for lead exposure.
Results: Of the 539 children who were tested, 27% had elevated blood lead levels, and 61% had never been tested previously. Elevated blood lead levels were associated with chipped exterior house paint.
Conclusions: Most of the children living in these communities, where the prevalence of elevated blood lead levels among children was 12 times the national prevalence, were not tested for lead poisoning. Our findings highlight the need for targeted community outreach that includes blood lead testing in accordance with the American Academy of Pediatrics' recommendations.
abstract_id: PUBMED:19026427
Screening for lead poisoning: a geospatial approach to determine testing of children in at-risk neighborhoods. Objective: To develop a spatial strategy to assess neighborhood risk for lead exposure and neighborhood-level blood lead testing of young children living in the city of Atlanta, Georgia.
Study Design: This ecologic study used existing blood lead results of children aged ≤36 months tested and living in one of Atlanta's 236 neighborhoods in 2005. Geographic information systems used Census, land parcel, and neighborhood spatial data to create a neighborhood priority testing index on the basis of proxies for poverty (Special Supplemental Nutrition Program for Women, Infants and Children [WIC] enrollment) and lead in house paint (year housing built).
Results: In 2005, only 11.9% of Atlanta's 18,627 children aged ≤36 months living in the city had blood lead tests, despite a high prevalence of risk factors: 75,286 (89.6%) residential properties were built before 1978, and 44% of children were enrolled in WIC. Linear regression analysis indicated testing was significantly associated with WIC status (P < .001) but not with old housing.
Conclusions: This neighborhood spatial approach provided smaller geographic areas to assign risk and assess testing in a city that has a high prevalence of risk factors for lead exposure. Testing may be improved by collaboration between pediatricians and public health practitioners.
abstract_id: PUBMED:19661858
Recommendations for blood lead screening of Medicaid-eligible children aged 1-5 years: an updated approach to targeting a group at high risk. Lead is a potent, pervasive neurotoxicant, and elevated blood lead levels (EBLLs) can result in decreased IQ, academic failure, and behavioral problems in children. Eliminating EBLLs among children is one of the 2010 U.S. national health objectives. Data from the National Health and Nutrition Examination Survey (NHANES) indicate substantial decreases both in the percentage of persons in the United States with EBLLs and in mean BLLs among all age and ethnic groups, including children aged 1-5 years. Historically, children in low-income families served by public assistance programs have been considered to be at greater risk for EBLLs than other children. However, evidence indicates that children in low-income families are experiencing decreases in BLLs, suggesting that the EBLL disparity between Medicaid-eligible children and non-Medicaid-eligible children is diminishing. In response to these findings, the CDC Advisory Committee on Childhood Lead Poisoning Prevention is updating recommendations for blood lead screening among children eligible for Medicaid by providing recommendations for improving BLL screening and information for health-care providers, state officials, and others interested in lead-related services for Medicaid-eligible children. Because state and local officials are more familiar than federal agencies with local risk for EBLLs, CDC recommends that these officials have the flexibility to develop blood lead screening strategies that reflect local risk for EBLLs. Rather than provide universal screening to all Medicaid children, which was previously recommended, state and local officials should target screening toward specific groups of children in their area at higher risk for EBLLs. This report presents the updated CDC recommendations and provides strategies to 1) improve screening rates of children at risk for EBLLs, 2) develop surveillance strategies that are not solely dependent on BLL testing, and 3) assist states with evaluation of screening plans.
abstract_id: PUBMED:11099622
Elevated blood lead levels and blood lead screening among US children aged one to five years: 1988-1994. Objectives: To estimate the proportion of children 1 to 5 years of age who received blood lead testing during 1988-1994 and to assess whether predictors of testing coincided with predictors of elevated blood lead levels.
Design: Cross-sectional analysis of data from the Third National Health and Nutrition Examination Survey. Participants: US children 1 to 5 years of age. Outcome Measures: Prevalence of blood lead testing and elevated blood lead levels among children 1 to 5 years of age, and odds ratios for factors predicting blood lead testing and elevated blood lead levels.
Results: Overall, 6.3% had elevated blood lead levels and 10.2% had undergone previous blood lead tests. Being of minority race/ethnicity, living in an older home, residing in the Northeast or Midwest regions of the United States, being on Medicaid, having a head of household with <12 years of education, and having a history of anemia were significant factors in both models. Additional independent risk factors for an elevated blood lead level included being sampled in phase 1 of the survey, being 1 to 2 years of age, not having a regular doctor, and being sampled during the summer months. Additional independent correlates of a previous blood lead test included having moved less than twice in one's lifetime, having a female head of household, and having parents whose home language was English. Of an estimated 564 000 children 1 to 5 years of age who had elevated blood lead levels and no previous screening test in 1993, 62% were receiving Medicaid, 40% lived in homes built before 1946, and 34% were black, non-Hispanic.
Conclusions: Lead screening was more frequent among children with risk factors for lead exposure. However, among children with elevated blood lead levels, only one third had been tested previously. In 1993 an estimated 564 000 children 1 to 5 years of age had elevated blood lead levels but were never screened. Physicians should screen Medicaid-eligible children and should follow state or local health department recommendations about identifying and screening other at-risk children. In areas where no health department guidelines exist, physicians should screen all children or screen based on known risk factors.
abstract_id: PUBMED:7643848
Blood lead levels among children in a managed-care organization - California, October 1992-March 1993. Despite substantial progress in reducing exposures to lead among children, as recently as 1991, 9% of children in the United States had blood lead levels (BLLs) of ≥10 micrograms/dL (1), levels that can adversely affect intelligence and behavior. In 1991, CDC recommended screening all children for lead exposure except those residing in communities in which large numbers or percentages previously had been screened and determined not to have lead poisoning (2). Subsequently, the California Department of Health Services (CDHS) issued a directive to all California health-care providers participating in the Child Health and Disability Prevention Program to routinely screen children for lead poisoning in accordance with the 1991 CDC guidelines (3). This report presents findings of BLL testing during 1992-1993 from a managed-care organization that provides primary-care services to Medicaid beneficiaries in several locations in California (i.e., Los Angeles County, Orange County, San Bernardino County, Riverside County, Sacramento, and Placerville).
abstract_id: PUBMED:9099784
Statewide assessment of lead poisoning and exposure risk among children receiving Medicaid services in Alaska. Objective: Lead poisoning is a well-recognized public health concern for children living in the United States. In 1992, Health Care Financing Administration (HCFA) regulations required lead poisoning risk assessment and blood lead testing for all Medicaid-enrolled children ages 6 months to 6 years. This study estimated the prevalence of blood lead levels (BLLs) ≥10 microg/dL (≥0.48 micromol/L) and the performance of risk assessment questions among children receiving Medicaid services in Alaska.
Design: Measurement of venous BLLs in a statewide sample of children and risk assessment using a questionnaire modified from HCFA sample questions.
Setting: Eight urban areas and 25 rural villages throughout Alaska.
Patients: Nine hundred sixty-seven children enrolled in Medicaid, representing a 6% sample of 6-month- to 6-year-old Alaska children enrolled in Medicaid.
Outcome Measure(s): Determination of BLL and responses to verbal-risk assessment questions.
Results: BLLs ranged from <1 microg/dL (<0.048 micromol/L) to 21 microg/dL (1.01 micromol/L) (median, 2.0 microg/dL or 0.096 micromol/L). The geometric mean BLLs for rural and urban children were 2.2 microg/dL (0.106 micromol/L) and 1.5 microg/dL (0.072 micromol/L), respectively. Six (0.6%) children had a BLL ≥10 microg/dL; only one child had a BLL ≥10 microg/dL (11 microg/dL or 0.53 micromol/L) on retesting. Children whose parents responded positively to at least one risk factor question were more likely to have a BLL ≥10 microg/dL (prevalence ratio = 3.1; 95% confidence interval = 0.4 to 26.6); the predictive value of a positive response was <1%.
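For clarity, the "predictive value of a positive response" reported above is the positive predictive value of the risk-assessment questionnaire, which in this context can be written as:
\[ \mathrm{PPV} = P\!\left(\mathrm{BLL} \geq 10\ \mu\text{g/dL} \mid \text{at least one positive risk response}\right) = \frac{\text{positive responders with BLL} \geq 10\ \mu\text{g/dL}}{\text{all positive responders}} \]
so a value below 1% means that fewer than 1 in 100 children flagged by the questionnaire actually had a BLL ≥10 microg/dL.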
Conclusions: In this population, the prevalence of lead exposure was very low (0.6%); only one child tested (0.1%) maintained a BLL ≥10 microg/dL on confirmatory testing; no children were identified who needed individual medical or environmental management for lead exposure. Universal lead screening for Medicaid-enrolled children is not an effective use of public health resources in Alaska. Our findings provide an example of the importance of considering local and regional differences when formulating screening recommendations and regulations, and of continually reevaluating the usefulness of federal regulations.
abstract_id: PUBMED:17975528
Interpreting and managing blood lead levels <10 microg/dL in children and reducing childhood exposures to lead: recommendations of CDC's Advisory Committee on Childhood Lead Poisoning Prevention. Lead is a common environmental contaminant, and exposure to lead is a preventable risk that exists in all areas of the United States. Lead is associated with negative outcomes in children, including impaired cognitive, motor, behavioral, and physical abilities. In 1991, CDC defined the blood lead level (BLL) that should prompt public health actions as 10 microg/dL. Concurrently, CDC also recognized that a BLL of 10 microg/dL did not define a threshold for the harmful effects of lead. Research conducted since 1991 has strengthened the evidence that children's physical and mental development can be affected at BLLs ≤10 microg/dL. This report summarizes the findings of a review of clinical interpretation and management of BLLs ≤10 microg/dL conducted by CDC's Advisory Committee on Childhood Lead Poisoning Prevention. This report provides information to help clinicians understand BLLs ≤10 microg/dL, identifies gaps in knowledge concerning lead levels in this range, and outlines strategies to reduce childhood exposures to lead. In addition, this report summarizes scientific data relevant to counseling, blood lead screening, and lead exposure risk assessment. To aid in the interpretation of BLLs, clinicians should understand the laboratory error range for blood lead values and, if possible, select a laboratory that achieves routine performance within ±2 microg/dL. Clinicians should obtain an environmental history on all children they examine, provide families with lead prevention counseling, and follow blood lead screening recommendations established for their areas. As local and patient circumstances permit, clinicians should consider early referral to developmental programs for children at high risk for exposure to lead and consider more frequent rescreening of children with BLLs approaching 10 microg/dL, depending on the potential for exposure to lead, child age, and season of testing. In addition, clinicians should direct parents to agencies and sources of information that will help them establish a lead-safe environment for their children. For these preventive strategies to succeed, partnerships between health-care providers, families, and local public health and housing programs should be strengthened.
abstract_id: PUBMED:9566943
Preventing childhood lead poisoning: the challenge of change. Because of their rapid growth, immature biologic systems, and their developmental characteristics, children are uniquely vulnerable to exposure to environmental hazards. One of these is lead. Revised lead screening guidelines, published by the Centers for Disease Control and Prevention in Fall 1997, no longer advocate universal screening in some places. These guidelines will (1) require new policies from local public health agencies, (2) require new approaches for clinicians and managed care organizations, especially those with Medicaid-recipient enrollees, to conduct screening of children who may be at risk for exposure to lead, (3) offer new challenges for environmental follow-up to children identified with elevated lead levels, and (4) provide opportunities for collaboration between managed care and public health agencies.
abstract_id: PUBMED:19254973
Trends in blood lead levels and blood lead testing among US children aged 1 to 5 years, 1988-2004. Objectives: To evaluate trends in children's blood lead levels and the extent of blood lead testing of children at risk for lead poisoning from national surveys conducted during a 16-year period in the United States.
Methods: Data for children aged 1 to 5 years from the National Health and Nutrition Examination Survey III Phase I, 1988-1991, and Phase II, 1991-1994 were compared to data from the survey period 1999-2004.
Results: The prevalence of elevated blood lead levels (≥10 microg/dL) among children decreased from 8.6% in 1988-1991 to 1.4% in 1999-2004, an 84% decline. Between 1988-1991 and 1999-2004, children's geometric mean blood lead levels declined in non-Hispanic black (5.2 to 2.8 microg/dL), Mexican American (3.9 to 1.9 microg/dL), and non-Hispanic white children (3.1 to 1.7 microg/dL). However, levels continue to be highest among non-Hispanic black children relative to Mexican American and non-Hispanic white children. Blood lead levels were distributed as follows: 14.0% were <1.0 microg/dL, 55.0% were 1.0 to <2.5 microg/dL, 23.6% were 2.5 to <5 microg/dL, 4.5% were 5 to <7.5 microg/dL, 1.5% were 7.5 to <10 microg/dL, and 1.4% were ≥10 microg/dL. Multivariable analysis indicated that residence in older housing, poverty, age, and being non-Hispanic black are still major risk factors for higher lead levels. Blood lead testing of Medicaid-enrolled children increased to 41.9% from 19.2% in 1988-1991. Only 43.0% of children with elevated blood lead levels had previously been tested.
Conclusions: Children's blood lead levels continue to decline in the United States, even in historically high-risk groups for lead poisoning. To maintain progress made and eliminate remaining disparities, efforts must continue to test children at high risk for lead poisoning, and identify and control sources of lead. Coordinated prevention strategies at national, state, and local levels will help achieve the goal of elimination of elevated blood lead levels.
abstract_id: PUBMED:9082173
Targeted screening for childhood lead exposure in a low prevalence area - Salt Lake County, Utah, 1995-1996. During 1991-1994, an estimated 930,000 U.S. children aged <6 years had blood lead levels (BLLs) ≥10 micrograms/dL, and the risk for an elevated BLL was greatest among children who were non-Hispanic black or Mexican American, from low-income families, living in large metropolitan areas, or living in housing built before 1946. Because risk for lead exposure is associated with several different factors, it can vary greatly across relatively small areas. To establish the local prevalence and distribution of childhood lead exposure and develop local blood lead screening recommendations, the Salt Lake City-County Health Department (SLCCHD) offered free blood lead screening to all children aged 12-36 months enrolled at the seven Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) clinics in Salt Lake County, Utah (1995 population: 812,000), during January-October 1995. This report presents findings of the screenings at WIC clinics, describes the design and promotion of local targeted screening recommendations, and describes the resulting increases in appropriate BLL screenings among children.
Answer: The 1991 Centers for Disease Control and Prevention (CDC) guidelines recommended screening all children for lead exposure, except those in communities where previous screenings showed low risk for lead poisoning (PUBMED:7643848). However, the feasibility of these guidelines in high-risk urban settings has been a subject of discussion and research.
Studies have shown that in high-risk urban areas, the prevalence of elevated blood lead levels (EBLLs) among children can be significantly higher than the national average. For instance, in two inner-city Chicago communities, 27% of the children tested had EBLLs, and 61% had never been tested previously (PUBMED:15514235). This indicates a gap in lead screening and the need for targeted community outreach and testing in accordance with the American Academy of Pediatrics' recommendations.
Moreover, in Atlanta, Georgia, despite a high prevalence of risk factors for lead exposure, only 11.9% of children aged ≤36 months living in the city had blood lead tests in 2005 (PUBMED:19026427). This suggests that the implementation of the CDC guidelines was not fully achieved, and testing may be improved by collaboration between pediatricians and public health practitioners.
The CDC's Advisory Committee on Childhood Lead Poisoning Prevention has since updated recommendations for blood lead screening, suggesting that universal screening may not be necessary for all Medicaid children. Instead, state and local officials should target screening toward specific groups of children in their area at higher risk for EBLLs (PUBMED:19661858).
In summary, while the 1991 CDC guidelines aimed to screen all children for lead exposure, the implementation in high-risk urban areas has faced challenges. The prevalence of EBLLs in these areas is high, and many children at risk have not been tested. Updated recommendations suggest a more targeted approach to screening, focusing on high-risk groups rather than universal screening. This approach may be more feasible and effective in managing lead exposure among urban children at high risk. |
Instruction: Narcotic use in the hospital: reasonably safe?
Abstracts:
abstract_id: PUBMED:30067126
Perioperative Narcotic Use and Carpal Tunnel Release: Trends, Risk Factors, and Complications. Background: The goals of the study were to: (1) evaluate trends in preoperative and prolonged postoperative narcotic use in carpal tunnel release (CTR); (2) characterize risks for prolonged narcotic use; and (3) evaluate narcotic use as an independent risk factor for complications following CTR. Methods: A query of a large insurance database from 2007-2016 was conducted. Patients undergoing open or endoscopic CTR were included. Revision surgeries were excluded, as were patients undergoing median nerve repair at the forearm or upper extremity fasciotomies, and those with distal radius fractures. Preoperative use was defined as narcotic use between 1 and 4 months prior to CTR. A narcotic prescription between 1 and 4 months after surgery was considered prolonged postoperative use. Demographics, comorbidities, and other risk factors for prolonged postoperative use were assessed using regression analysis. Subgroup analysis was performed according to the number of preoperative narcotic prescriptions. Narcotic use as a risk factor for complications, including chronic regional pain syndrome (CRPS) and revision CTR, was assessed. Results: In total, 66,077 patients were included. A decrease in prescribing of perioperative narcotics was noted. Risk factors for prolonged narcotic use included preoperative narcotic use, drug and substance use, lumbago, and depression. Preoperative narcotics were associated with increased emergency room visits, readmissions, CRPS, and infection. Prolonged postoperative narcotic use was linked to CRPS and revision surgery. Conclusions: Preoperative narcotic use is strongly associated with prolonged postoperative use. Both preoperative and prolonged postoperative narcotic use correlated with an increased risk of complications. Preoperative narcotic use is associated with a higher risk of postoperative CRPS.
abstract_id: PUBMED:36583832
Prospective analysis of home narcotic consumption and management of excess narcotic prescription following adolescent idiopathic scoliosis surgery. Purpose: The aim of this study was to identify factors associated with the outpatient narcotic intake of patients following posterior spinal fusion (PSF) for adolescent idiopathic scoliosis (AIS) and to introduce a safe and effective method of disposing of unused narcotics.
Methods: Following Institutional Review Board approval, a retrospective review of prospectively collected data from patients undergoing PSF for AIS was performed. Pain scores, narcotic use, patient demographic data, pre-, intra-, and postoperative parameters, and discharge data were gathered via chart review. Patients were divided into two groups according to home narcotic use, high use (top 25%) and low use (bottom 75%), and multivariate statistical analysis was conducted. Narcotic surplus was collected during postoperative clinic visits and disposed of using biodegradable bags.
Results: Statistical analysis of the 27 patients included in the study showed that higher home narcotic use was associated with a longer hospitalization, averaging 3.4 days compared with 2.8 days in the lower-use group (p = 0.03). The higher-use group also received more inpatient morphine milligram equivalents than the lower-use group. Home narcotic use did not differ significantly by patient age, height, weight, BMI, levels fused, intraoperative blood loss, or length of surgery. A total of 502 narcotic doses were disposed of in the clinic.
Conclusion: Our study suggests that few patient- or surgical-level factors predispose patients to increased home narcotic use following spinal fusion for adolescent idiopathic scoliosis.
Level Of Evidence: Level I, prospective study.
abstract_id: PUBMED:31321248
Narcotic Use and Resiliency Scores Do Not Predict Changes in Sleep Quality 6 Months After Arthroscopic Rotator Cuff Repair. Background: Patients with rotator cuff disease commonly complain of difficulty sleeping. Arthroscopic repair has been associated with improved sleep quality in many patients with rotator cuff tears; however, some individuals continue to suffer from sleep disturbance postoperatively.
Purpose: To determine whether changes in sleep quality following rotator cuff repair are predicted by a patient's narcotic use or ability to cope with stress (resilience).
Study Design: Case series; Level of evidence, 4.
Methods: A total of 48 patients undergoing arthroscopic rotator cuff repair were prospectively enrolled and completed the Connor-Davidson Resilience Scale (CD-RISC) preoperatively. The Pittsburgh Sleep Quality Index (PSQI) was administered preoperatively and at multiple intervals postoperatively for 6 months. Narcotic utilization was determined via a legal prescriber database. Pre- and postoperative sleep scores were compared using paired t tests and the McNemar test. Linear regression was used to determine whether narcotic use or CD-RISC score predicted changes in sleep quality.
Results: A greater number of patients experienced good sleep at 6 months postoperatively (P < .01). Mean ± SD nocturnal pain frequency improved from 2.5 ± 1.0 at baseline to 0.9 ± 1.1 at 6 months. CD-RISC score positively predicted changes in PSQI score (R2 = 0.09, P = .028) and nocturnal pain frequency (R2 = 0.08, P = .041) at 2 weeks. Narcotic use did not significantly predict changes in PSQI score or nocturnal pain frequency (P > .05).
Conclusion: Most patients with rotator cuff disease will experience improvement in sleep quality following arthroscopic repair. Patients demonstrated notable improvements in nocturnal pain frequency as soon as 6 weeks following surgery. CD-RISC resiliency scores significantly and positively predicted changes in sleep quality and nocturnal pain frequency at 2 weeks. Narcotic use was not associated with changes in sleep quality.
abstract_id: PUBMED:2846900
Implications of methadone maintenance for theories of narcotic addiction. Clinical success in rehabilitation of heroin addicts with maintenance treatment requires stability of the blood level in a pharmacologically effective range (optimally, 150 to 600 ng/mL), a phenomenon that emphasizes the central importance of narcotic receptor occupation. It is postulated that the high rate of relapse of addicts after detoxification from heroin use is due to persistent derangement of the endogenous ligand-narcotic receptor system and that methadone in an adequate daily dose compensates for this defect. Some patients with long histories of heroin use and subsequent rehabilitation on a maintenance program do well when the treatment is terminated. The majority, unfortunately, experience a return of symptoms after maintenance is stopped. The treatment, therefore, is corrective but not curative for severely addicted persons. A major challenge for future research is to identify the specific defect in receptor function and to repair it. Meanwhile, methadone maintenance provides a safe and effective way to normalize the function of otherwise intractable narcotic addicts.
abstract_id: PUBMED:29435275
Safe use of chemicals by professional users and health care specialists. The awareness of Greek professional users and health care specialists regarding the safe use of chemicals was investigated, to the best of our knowledge, for the first time since the introduction of Regulations (EC) 1907/2006 (REACH) and 1272/2008 (CLP) on chemicals. A total of 200 professional users and 150 health care specialists from various regions of Greece completed a closed-ended, anonymous and validated questionnaire. The findings showed that over 85% of the responders were not aware of classification, labelling and packaging (CLP) requirements, and 67.8% were unaware of any changes made in the labeling of the products they were using. The majority (>75%) of individuals were cognizant that they were using hazardous products; however, the perception of hazard varied significantly between the two groups (P=0.012) and depended significantly on educational level (P=0.022) and profession (P=0.014). One third of the professional users read the label as the main source of information for the product, while for health care specialists the proportion increased to 65%, with a strong correlation with educational level (P=0.017). In both groups, 7% of professional users and health care specialists declared that hazard communication through product labeling is not well understood. The use of personal protective equipment (PPE) is almost universal among health care specialists, with women being more sensitive (P=0.041), while 25% of the professional users do not use any PPE. Almost 60% of the health care specialists are required to provide instructions regarding the safe use of chemicals or the action to be undertaken in case of accident. In the latter situation, the National Poisoning Centre is the reference point for information. Limited use of the safety data sheets was observed both for professional users (18%) and health care specialists (23%). In conclusion, awareness-raising campaigns are needed, in collaboration with trade unions and health care professional associations, in order to alert professionals regarding the safe use of chemicals and to protect human health and the environment.
abstract_id: PUBMED:24704676
Does preoperative narcotic use adversely affect outcomes and complications after spinal deformity surgery? A comparison of nonnarcotic- with narcotic-using groups. Background Context: The role of preoperative (preop) narcotic use and its influence on outcomes after spinal deformity surgery are unknown. It is important to determine which patient factors and comorbidities can affect the success of spinal deformity surgery, a challenging surgery with high rates of complications at baseline.
Purpose: To evaluate if preop narcotic use persists after spinal deformity surgery and whether the outcomes are adversely affected by preop narcotic use.
Study Design/setting: Retrospective evaluation of prospectively collected data.
Patient Sample: Two hundred fifty-three adult patients (230 females/23 males) undergoing primary spinal deformity surgery were enrolled from 2000 to 2009.
Outcome Measures: Preoperative and postoperative (postop) narcotic use and changes in Oswestry Disability Index (ODI), Scoliosis Research Society (SRS) pain, and SRS total scores.
Methods: Preoperative, 2-year postop, and latest follow-up pain medication use were collected along with ODI, SRS pain, and SRS scores. Preoperative insurance status, surgical and hospitalization demographics, and complications were collected. All patients had a minimum 2-year follow-up (average 47.4 months).
Results: One hundred sixty-eight nonnarcotic (NoNarc) patients were taking no pain meds or only nonsteroidal anti-inflammatories preoperatively. Eighty-five patients were taking mild/moderate/heavy narcotics before surgery. The average age was 48.2 years for the NoNarc group versus 53.6 years for the Narc group (p<.005). There were significantly more patients with degenerative than adult scoliosis in the Narc group (47 vs. 28, p<.001; mild 19 vs. 24, p<.02; moderate 6 vs. 14, p<.0003; heavy 3 vs. 10, p<.0002). Insurance status (private/Medicare/Medicaid) was similar between the groups (p=.39). At latest follow-up, 137/156 (88%) prior NoNarc patients were still not taking narcotics whereas 48/79 (61%) prior narcotic patients were now off narcotics (p<.001). Significant postop improvements were seen in Narc versus NoNarc groups with regard to ODI (26 to 15 vs. 44 to 30.3, p<.001), SRS pain (3.36 to 3.9 vs. 2.3 to 3.38, p<.001), and overall SRS outcome (3.36 to 4 vs. 2.78 to 3.68, p<.001) scores. A comparison of change in outcome scores between the two groups showed a higher improvement in SRS pain scores for the Narc versus NoNarc group (p<.001).
Conclusions: In adults with degenerative scoliosis taking narcotics a significant decrease in pain medication use was noted after surgery. All outcome scores significantly improved postop in both groups. However, the Narc group had significantly greater improvements in SRS pain scores versus the NoNarc group.
abstract_id: PUBMED:1044286
Trends in reported illegal narcotic use in Canada: 1956-1973. Information on reported narcotic users aids in the development of drug control policy as well as programmes of prevention, treatment, and rehabilitation. In Canada, such information may be obtained from a narcotic users index which classifies known narcotic drug users into three categories: "illicit", "licit", and "professional". This paper presents trend data on known narcotic users in Canada from 1956 to 1973 by category, location, initially reported drug, sex and age. Between 1956 and 1973, the number of known "licit" and "professional" narcotic drug users steadily decreased, while the number of "illicit" narcotic drug users increased by 283 per cent, with the greatest increase taking place after 1969. Heroin was the most frequent initially reported drug (representing between 80 per cent and 89 per cent of known "illicit" narcotic drug users). Cocaine, as an initially reported "narcotic", had the largest proportional increase from 1956 to 1973. There were generally more reported male users than female in all age groups, a trend that increased over the time span considered. There were recent dramatic increases in the numbers and rates of reported users in the 20-24 year-old group, which has become the dominant pattern among new cases in recent years. Although the index on which this paper is based does not provide figures on total narcotic use in Canada, it is a valuable resource for epidemiologic research. This narcotic user index may be used to make minimum estimates of the extent and geographic and social distribution of narcotic-related problems in Canada.
abstract_id: PUBMED:465293
Use of narcotic antagonists in anaesthesia. (1) The introduction of morphine antagonists into anaesthesiology has aroused great scientific interest, increased our knowledge of analgesia, and opened new frontiers for therapeutics. (2) The advantages and disadvantages of these compounds are described and assessed. (3) The best choices for treatment of pain are the most potent and long-acting drugs of this group with a wide safety margin. As an antidote to narcotic agonists, naloxone seems to be the most suitable drug in the majority of cases, although in certain conditions drugs having less rapid onset of action and of longer duration may be more desirable. (4) In all cases of treatment after anaesthesia when antagonists to narcotic analgesics are used, drug administration must depend on careful control, which necessitates specific and prolonged patient care by specifically trained and competent staff.
abstract_id: PUBMED:23210701
Pain management in occupational health: a guide for non-narcotic pain relief. Narcotic pain management is currently a topic of concern in the United States; the latest concerns are both legal and ethical. Narcotics are frequently prescribed medications that, when improperly used or supervised, can cause death. Legal concerns include prescribing narcotics without performing detailed health-related evaluations, not recognizing those seeking drugs for personal recreational use, and clients diverting drugs to others for financial gain. Injured workers need to have pain controlled and be mentally safe to perform their job duties. This article identifies types of pain, comorbidities, and alternative methods of pain management beyond narcotic therapy, as well as discusses guidelines used to initiate narcotic therapy when needed.
abstract_id: PUBMED:34810085
Perioperative Opioid Analgesics and Hallux Valgus Correction Surgery: Trends, Risk Factors for Prolonged Use and Complications. In the setting of an opioid epidemic, this study aims to provide evidence on opioid use trends, risk factors for prolonged use, and complications from perioperative opioid consumption in hallux valgus surgery. A national database was queried for patients who underwent hallux valgus correction. Regression analysis identified: (1) risk factors for prolonged postoperative narcotic use; and (2) associations between preoperative/prolonged postoperative narcotic use and postoperative complications. A linear regression analysis was used to determine trends. A total of 20,749 patients were included, of whom 3464 were prescribed narcotics preoperatively and 4339 were identified as prolonged postoperative narcotic prescription users. Preoperative prescriptions were identified as risk factors for prolonged use. Perioperative narcotic use was observed to be a risk factor for poor outcomes. About 21% of patients were identified as prolonged postoperative narcotic prescription users. Patients undergoing hallux valgus corrective surgery should be counseled regarding their increased risk of complications when using narcotics.
Answer: The safety of narcotic use in the hospital setting is a complex issue, and the abstracts provided offer insights into various aspects of narcotic use, including trends, risk factors, and complications associated with their use in different surgical contexts.
One study found that preoperative narcotic use is strongly associated with prolonged postoperative use and correlated with an increased risk of complications, including chronic regional pain syndrome (CRPS) and revision surgery after carpal tunnel release (CTR) (PUBMED:30067126). Another study on adolescent idiopathic scoliosis surgery did not find significant patient- or surgical-level factors predisposing patients to increased home narcotic usage, suggesting that increased narcotic use may not be easily predictable based on these factors (PUBMED:36583832).
In the context of arthroscopic rotator cuff repair, narcotic use did not predict changes in sleep quality, which is a common concern for patients with rotator cuff disease (PUBMED:31321248). However, a study on spinal deformity surgery found that preoperative narcotic use was associated with a significant decrease in pain medication use postoperatively and greater improvements in pain scores compared to non-narcotic-using groups (PUBMED:24704676).
The abstract discussing methadone maintenance emphasizes that while methadone can normalize the function of narcotic addicts, it is not curative and the majority of patients experience a return of symptoms after maintenance is stopped (PUBMED:2846900). Another abstract highlights the need for awareness campaigns to alert health care professionals regarding the safe use of hazardous chemicals in order to protect human health and the environment, underscoring the broader importance of safe handling practices in clinical settings (PUBMED:29435275).
Lastly, a study on hallux valgus correction surgery indicated that perioperative narcotic use was a risk factor for poor outcomes and that patients should be counseled about the increased risk of complications when using narcotics (PUBMED:34810085).
In summary, while narcotics can be an important part of pain management in the hospital setting, their use must be carefully managed due to the risk of prolonged use and associated complications. Narcotic use in hospitals is reasonably safe when managed properly, but vigilance is required to minimize risks and ensure patient safety.
Instruction: DRD2 and DRD4 in relation to regular alcohol and cannabis use among adolescents: does parenting modify the impact of genetic vulnerability?
Abstracts:
abstract_id: PUBMED:21106310
DRD2 and DRD4 in relation to regular alcohol and cannabis use among adolescents: does parenting modify the impact of genetic vulnerability? The TRAILS study. Aims: The aims of the present study were to determine the direct effect of DRD2 and DRD4, as well as their interaction with parenting (i.e. rejection, overprotection and emotional warmth), on the development of regular alcohol and cannabis use in 1192 Dutch adolescents from the general population.
Methods: Information was obtained by self-report questionnaires. Perceived rejection, overprotection and emotional warmth were assessed at age 10-12. Regular alcohol and cannabis use were determined at age 15-18 and defined as the consumption of alcohol on 10 or more occasions in the past four weeks, and the use of cannabis on 4 or more occasions in the past four weeks. Models were adjusted for age, sex, parental alcohol or cannabis use, and externalizing behavior.
Results: Carrying the A1 allele of the DRD2 TaqIA polymorphism, or the DRD4 7-repeat allele, was not directly related to regular alcohol or cannabis use. In addition, adolescent carriers of these genetic risk markers were not more susceptible to the influence of less optimal parenting. Main effects for parenting indicated that overprotection increased the risk of regular alcohol use, whereas the risk of cannabis use was enhanced by parental rejection and buffered by emotional warmth.
Conclusions: Our findings do not support an association between DRD2/DRD4 and regular alcohol and cannabis use in adolescents. Given the substance-specific influences of rejection, overprotection and emotional warmth, these parenting factors might be promising candidates for prevention work.
abstract_id: PUBMED:33556390
Molecular genetics of substance use disorders: An umbrella review. Background: Substance use disorders (SUD) are a category of psychiatric disorders with a large epidemiological and societal impact around the world. In the last decades, a large number of genetic studies have been published for SUDs.
Methods: With the objective of having an overview and summarizing the evidence published up to date, we carried out an umbrella review of all the meta-analyses of genetic studies for the following substances: alcohol, tobacco, cannabis, cocaine, opioids, heroin and methamphetamines. Meta-analyses for candidate gene studies and genome-wide association studies (GWAS) were included.
Results: Alcohol and tobacco were the substances with the largest number of meta-analyses, and cannabis, opioids and cocaine the least studied. The following genes were associated with two or more SUDs: OPRM1, DRD2, DRD4, BDNF and SLC6A4. The only genes that had an OR higher than two were SLC6A4 for all addictions, ADH1B for alcohol dependence, and BDNF for methamphetamine dependence. GWAS confirmed the possible role of the CHRNA5 gene in nicotine dependence and identified novel candidate genes in other SUDs, such as FOXP2, PEX and AUTS2, which need further functional analyses.
Conclusions: This umbrella review summarizes the evidence from 16 years of research on the genetics of SUDs and provides a broad and detailed overview of results from more than 150 meta-analyses for SUDs. The results will help guide future genetic studies geared toward understanding, preventing and treating SUDs.
abstract_id: PUBMED:26755638
Genetic Modification of the Relationship between Parental Rejection and Adolescent Alcohol Use. Aims: Parenting practices are associated with adolescents' alcohol consumption; however, not all youth respond similarly to challenging family situations and harsh environments. This study examines the relationship between perceived parental rejection and adolescent alcohol use, and specifically evaluates whether youth who possess greater genetic sensitivity to their environment are more susceptible to negative parental relationships.
Methods: Analyzing data from the National Longitudinal Study of Adolescent Health, we estimated a series of regression models predicting alcohol use during adolescence. A multiplicative interaction term between parental rejection and a genetic index was constructed to evaluate this potential gene-environment interaction.
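As an illustrative sketch only (the variable coding below is assumed, not taken from the study), the moderation test described here is typically specified as a logistic regression with a product term:
\[ \operatorname{logit} P(\text{alcohol use}_i = 1) = \beta_0 + \beta_1 \text{Rejection}_i + \beta_2 \text{GeneticIndex}_i + \beta_3 (\text{Rejection}_i \times \text{GeneticIndex}_i) + \boldsymbol{\gamma}^\top \mathbf{X}_i \]
where X_i denotes covariates and a statistically significant beta_3 indicates that the association between parental rejection and alcohol use varies with the genetic sensitivity index.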
Results: Results from logistic regression analyses show a statistically significant gene-environment interaction predicting alcohol use. The relationship between parental rejection and alcohol use was moderated by the genetic index, indicating that adolescents possessing more 'risk alleles' for five candidate genes were affected more by stressful parental relationships.
Conclusions: Feelings of parental rejection appear to influence the alcohol use decisions of youth, but they do not do so equally for all. Higher scores on the constructed genetic sensitivity measure are related to increased susceptibility to negative parental relationships.
abstract_id: PUBMED:27444553
A Test-Replicate Approach to Candidate Gene Research on Addiction and Externalizing Disorders: A Collaboration Across Five Longitudinal Studies. This study presents results from a collaboration across five longitudinal studies seeking to test and replicate models of gene-environment interplay in the development of substance use and externalizing disorders (SUDs, EXT). We describe an overview of our conceptual models, plan for gene-environment interplay analyses, and present main effects results evaluating six candidate genes potentially relevant to SUDs and EXT (MAOA, 5-HTTLPR, COMT, DRD2, DAT1, and DRD4). All samples included rich longitudinal and phenotypic measurements from childhood/adolescence (ages 5-13) through early adulthood (ages 25-33); sample sizes ranged from 3487 in the test sample, to ~600-1000 in the replication samples. Phenotypes included lifetime symptom counts of SUDs (nicotine, alcohol and cannabis), adult antisocial behavior, and an aggregate externalizing disorder composite. Covariates included the first 10 ancestral principal components computed using all autosomal markers in subjects across the data sets, and age at the most recent assessment. Sex, ancestry, and exposure effects were thoroughly evaluated. After correcting for multiple testing, only one significant main effect was found in the test sample, but it was not replicated. Implications for subsequent gene-environment interplay analyses are discussed.
abstract_id: PUBMED:15845322
Genetic influences on quantity of alcohol consumed by adolescents and young adults. Objective: To examine genetic and environmental influences on drinking in a nationally representative study of genetically informative adolescents followed into young adulthood.
Method: The average quantity of alcohol used per drinking episode during the past year was analyzed in 4432 youth assessed during adolescence (mean age of 16) and then 1 and 6 years later. The variance of quantity of alcohol consumed was decomposed into three components: additive genetic (a2), shared environmental (c2), non-shared environmental (e2). Four candidate genes were tested for association.
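For readers unfamiliar with the notation, this is the standard biometric (ACE) variance decomposition; in standardized form:
\[ \operatorname{Var}(P) = a^2 + c^2 + e^2 = 1 \]
so the Wave 1 estimates reported below (a2 = 0.52, e2 = 0.48) are consistent with roughly half of the variation in quantity consumed being attributable to additive genetic influences, with a shared-environmental component near zero at that wave.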
Results: Wave 1: a2 = 0.52, e2 = 0.48; Wave 2: a2 = 0.28, e2 = 0.72; Wave 3: a2 = 0.30, e2 = 0.70. Genetic correlations were 0.85 between Waves 1 and 2, and 0.34 between Waves 1 and 3. The DAT1 440 allele was associated at Wave 1 (p=0.007). DRD2 TaqI A1/A2 was associated at Wave 3 (p=0.007). DRD4 and 5HTT were not associated. The DAT1 and DRD2 polymorphisms accounted for 3.1% and 2.0% of the variation, respectively.
Conclusion: Genetic influence on drinking behavior was common in adolescents longitudinally assessed 1 year apart, but was less correlated between these adolescents and their assessment as young adults at a subsequent time point. Polymorphisms in genes of the dopaminergic system appear to influence variation in drinking behavior.
abstract_id: PUBMED:23294086
Differential susceptibility to prevention: GABAergic, dopaminergic, and multilocus effects. Background: Randomized prevention trials provide a unique opportunity to test hypotheses about the interaction of genetic predispositions with contextual processes to create variations in phenotypes over time.
Methods: Using two longitudinal, randomized prevention trials, molecular genetic and alcohol use outcome data were gathered from more than 900 youths to determine whether prevention program participation would, across 2 years, moderate genetic risk for increased alcohol use conferred by the dopaminergic and GABAergic systems.
Results: We found that (a) variance in dopaminergic (DRD2, DRD4, ANKK1) and GABAergic (GABRG1, GABRA2) genes forecast increases in alcohol use across 2 years, and (b) youths at genetic risk who were assigned to the control condition displayed greater increases in alcohol use across 2 years than did youths at genetic risk who were assigned to the prevention condition or youths without genetic risk who were assigned to either condition.
Conclusions: This study is unique in combining data from two large prevention trials to test hypotheses regarding genetic main effects and gene × prevention interactions. Focusing on gene systems purported to confer risk for alcohol use and abuse, the study demonstrated that participation in efficacious prevention programs can moderate genetic risk. The results also support the differential susceptibility hypothesis that some youths, for genetic reasons, are more susceptible than others to both positive and negative contextual influences.
abstract_id: PUBMED:22481050
Genetic influences on craving for alcohol. Introduction: Craving is being considered for inclusion in the Diagnostic and Statistical Manual (DSM) DSM-5. However, little is known of its genetic underpinnings - specifically, whether genetic influences on craving are distinct from those influencing DSM-IV alcohol dependence.
Method: Analyses were conducted in a sample of unrelated adults ascertained for alcohol dependence (N=3976). Factor analysis was performed to examine how alcohol craving loaded with the existing DSM-IV alcohol dependence criteria. For genetic analyses, we first examined whether genes in the dopamine pathway, including dopamine receptor genes (DRD1, DRD2, DRD3, DRD4) and the dopamine transporter gene (SLC6A3), which have been implicated in neurobiological studies of craving, as well as alpha-synuclein (SNCA), which has been previously found to be associated with craving, were associated with alcohol craving in this sample. Second, in an effort to identify novel genetic variants associated with craving, we conducted a genomewide association study (GWAS). For variants that were implicated in the primary analysis of craving, we conducted additional comparisons - to determine if these variants were uniquely associated with alcohol craving as compared with alcohol dependence. We contrasted our results to those obtained for DSM-IV alcohol dependence, and also compared alcohol dependent individuals without craving to non-dependent individuals who also did not crave alcohol.
Results: Twenty-one percent of the full sample reported craving alcohol. Of those reporting craving, 97.3% met criteria for DSM-IV alcohol dependence with 48% endorsing all 7 dependence criteria. Factor analysis found a high factor loading (0.89) for alcohol craving. When examining genes in the dopamine pathway, single nucleotide polymorphisms (SNPs) in DRD3 and SNCA were associated with craving (p<0.05). There was evidence for association of these SNPs with DSM-IV alcohol dependence (p<0.05) but less evidence for dependence without craving (p>0.05), suggesting that the association was due in part to craving. In the GWAS, the greatest evidence of association with craving was for a SNP in the integrin alpha D (ITGAD) gene on chromosome 7 (rs2454908; p=1.8×10⁻⁶). The corresponding p-value for this SNP with DSM-IV alcohol dependence was similar (p=4.0×10⁻⁵) but was far less with dependence without craving (p=0.02), again suggesting the association was due to alcohol craving. Adjusting for dependence severity (number of endorsed criteria) attenuated p-values but did not eliminate association.
Conclusions: Craving is frequently reported by those who report multiple other alcohol dependence symptoms. We found that genes providing evidence of association with craving were also associated with alcohol dependence; however, these same SNPs were not associated with alcohol dependence in the absence of alcohol craving. These results suggest that there may be unique genetic factors affecting craving among those with alcohol dependence.
abstract_id: PUBMED:29362512
Dopaminergic Genetic Variation Influences Aripiprazole Effects on Alcohol Self-Administration and the Neural Response to Alcohol Cues in a Randomized Trial. Dopamine (DA) signaling regulates many aspects of Alcohol Use Disorder (AUD). However, clinical studies of dopaminergic medications, including the DA partial agonist aripiprazole (APZ), have been inconsistent, suggesting the possibility of a pharmacogenetic interaction. This study examined whether variation in DA-related genes moderated APZ effects on reward-related AUD phenotypes. The interacting effects of APZ and a variable number tandem repeat (VNTR) polymorphism in DAT1/SLC6A3 (the gene encoding the DA transporter (DAT)) were tested. In addition, interactions between APZ and a genetic composite comprising the DAT1 VNTR and functional polymorphisms in catechol-O-methyltransferase (COMT), DRD2, and DRD4 were evaluated. Ninety-four non-treatment-seeking individuals with AUD were genotyped for these polymorphisms, randomized to APZ (titrated to 15 mg) or placebo for 8 days, and underwent an fMRI alcohol cue-reactivity task (day 7; n=81) and a bar lab paradigm (day 8). Primary outcomes were alcohol cue-elicited ventral striatal (VS) activation and the number of drinks consumed in the bar lab. DAT1 genotype significantly moderated medication effects, such that APZ, relative to placebo, reduced VS activation and bar-lab drinking only among carriers of the DAT1 9-repeat allele, previously associated with lower DAT expression and greater reward-related brain activation. The genetic composite further moderated medication effects, such that APZ reduced the primary outcomes more among individuals who carried a larger number of DAT1, COMT, DRD2, and DRD4 alleles associated with higher DA tone. Taken together, these data suggest that APZ may be a promising AUD treatment for individuals with a genetic predisposition to higher synaptic DA tone.
abstract_id: PUBMED:31502081
Dopamine and Working Memory: Genetic Variation, Stress and Implications for Mental Health. At the molecular level, the neurotransmitter dopamine (DA) is a key regulatory component of executive function in the prefrontal cortex (PFC), and dysfunction in dopaminergic (DAergic) circuitry has been shown to result in impaired working memory (WM). Research has identified multiple common genetic variants suggested to alter DA system function and, behaviourally, WM task performance. In addition, environmental stressors affect DAergic tone, and this may be one mechanism by which stressors confer vulnerability to the development of neuropsychiatric conditions. This chapter evaluates key DAergic gene variants suggested to influence synaptic DA levels (COMT, DAT1, DBH, MAOA) and DA receptor function (ANKK1, DRD2, DRD4) in terms of their effects on visuospatial WM. The role of stressors and their interaction with genetic background is discussed, along with some of the implications for precision psychiatry. This and future work in this area aim to disentangle the neural mechanisms underlying susceptibility to stress and their impact on, and relationship with, cognitive processes known to influence mental health vulnerability.
abstract_id: PUBMED:17158525
Study of dopamine receptor gene polymorphisms in bipolar patients with comorbid alcohol abuse. Alcoholism is present in approximately 40-60% of bipolar patients. This comorbidity between bipolar disorder and alcoholism is high and may result from the existence of common genetic factors for the two disorders. In both disorders, dysregulation of dopaminergic neurotransmission has been implicated. Association analyses have revealed several candidate genes acting in the dopaminergic pathway, and polymorphisms in those genes might be associated with both disorders.
Aim: The aim of this study was to analyse a possible relationship between polymorphisms in dopaminergic pathway genes (one SNP for each of the dopamine receptor genes DRD1-DRD4) and alcohol abuse comorbidity in bipolar patients.
Methods: We analysed 317 patients with bipolar disorder. In this group, 42 patients were diagnosed with alcohol abuse. The diagnosis was made for each patient by at least two psychiatrists, using structured clinical interviews for DSM-IV Axis I disorders (SCID). The control group consisted of 350 subjects. We performed RFLP analysis of polymorphisms in four genes: DRD1, DRD2, DRD3, and DRD4.
Results: We found no association of any of the analysed dopamine gene polymorphisms in the group of bipolar patients with comorbid alcohol abuse as compared with the control group. In the male subgroup of bipolar patients with comorbid alcohol abuse, we likewise observed no significant differences between patients and control subjects.
Conclusion: Our findings suggest that the analysed dopamine gene polymorphisms may not be involved in the shared genetic vulnerability to bipolar disorder and alcohol abuse.
Answer: The study conducted within the TRAILS framework aimed to determine the direct effect of DRD2 and DRD4, as well as their interaction with parenting (i.e., rejection, overprotection, and emotional warmth), on the development of regular alcohol and cannabis use in adolescents. The results indicated that carrying the A1 allele of the DRD2 TaqIA polymorphism or the 7 repeat DRD4 was not directly related to regular alcohol or cannabis use. Furthermore, adolescent carriers of these genetic risk markers were not more susceptible to the influence of less optimal parenting. Main effects for parenting showed that overprotection increased the risk of regular alcohol use, while the risk of cannabis use was enhanced by parental rejection and buffered by emotional warmth. Therefore, the findings do not support an association between DRD2/DRD4 and regular alcohol and cannabis use in adolescents, suggesting that parenting factors might be more significant candidates for prevention work (PUBMED:21106310).
In a broader context, an umbrella review of genetic studies for substance use disorders (SUDs) found that the genes OPRM1, DRD2, DRD4, BDNF, and SLC6A4 were associated with two or more SUDs. However, the review also highlighted that the genetic underpinnings of SUDs are complex and that further functional analyses of candidate genes are needed (PUBMED:33556390).
Another study using data from the National Longitudinal Study of Adolescent Health found a gene-environment interaction predicting alcohol use, where the relationship between parental rejection and alcohol use was moderated by a genetic index. Adolescents with more 'risk alleles' for five candidate genes were more affected by stressful parental relationships (PUBMED:26755638).
In summary, while there is evidence of genetic factors playing a role in substance use, the impact of DRD2 and DRD4 on regular alcohol and cannabis use among adolescents does not appear to be significantly modified by parenting styles. Instead, parenting factors themselves, such as rejection and overprotection, may have a more direct influence on adolescent substance use behaviors. |
Instruction: Do smaller adults wait longer for liver transplantation?
Abstracts:
abstract_id: PUBMED:19681972
Do smaller adults wait longer for liver transplantation? A comparison of the UK and the USA data. Background: The number of patients on the UK and the USA liver transplant lists is increasing. As size matching is an important factor in UK organ allocation, we studied the effect of recipient size on liver transplantation in the UK and the USA.
Methods: The UK Transplant and United Network for Organ Sharing databases were used to assess differences in access to transplantation between smaller adult patients and their larger counterparts over three time periods. Subsequently, the proportions of split-liver, non-heart-beating donor (NHBD), and living-donor transplants were analyzed.
Results: There were 1576 UK and 29,150 USA patients in our analysis. Smaller UK patients have been significantly disadvantaged in access to transplantation, particularly in the early years and in adult-only transplant units. This contrasts with the USA, where smaller patients have never been disadvantaged and transplantation rates are steadily increasing. Split-liver transplants are being carried out in increasing numbers in the UK but not the USA.
Conclusions: Small adults are still less likely to be transplanted at six months in adult-only units in the UK. The lack of size-matched organs for smaller adults and the overall decrease in rates of transplantation in the UK may be remedied by careful consideration of allocation policy and increased use of innovative techniques.
abstract_id: PUBMED:24507051
Are hepatocellular carcinoma patients more likely to receive liver resection in regions with longer transplant wait times? In areas with longer liver transplantation (LT) wait times, liver resection (LR) offers an appropriate alternative in selected patients with hepatocellular carcinoma (HCC). We identified adults with HCC undergoing LT or LR from the United States Nationwide Inpatient Sample from 1998-2008. United Network for Organ Sharing regions were assigned lower rank indicating shorter wait time for patients with Model for End-Stage Liver Disease (MELD) scores of 19-24 or ≥ 25. We used multivariate adjusted analysis to assess the odds of LR versus LT comparing patients by region. Of 4,516 patients, 40% received LT and 60% received LR. When ranked by wait times for MELD 19-24, the 3rd, 8th, and 11th ranked regions had decreased odds of LR versus LT (region 3: odds ratio [OR] 0.3, 95% confidence interval [CI] 0.2-0.6; region 8: OR 0.5, 95% CI 0.3-0.9; region 5: OR 0.3, 95% CI 0.2-0.6), whereas the 10th ranked region had increased odds (region 1: OR 1.9, 95% CI 1.1-3.4) compared with the region with the shortest wait time, region 10. When ranked by wait times for MELD ≥25, all regions except the 10th ranked region (region 5) had increased odds compared with the region with the shortest wait time, region 3 (OR 1.6-5.6; P < .001). Regional variations of LT versus LR are not completely explained by transplant wait times.
abstract_id: PUBMED:32812771
HCC Liver Transplantation Wait List Dropout Rates Before and After the Mandated 6-Month Wait Time. Background: Studies have shown significant improvement in hepatocellular carcinoma (HCC) recurrence rates after liver transplantation since the United Network for Organ Sharing (UNOS) implemented a 6-month wait period before accrual of exception Model for End-Stage Liver Disease (MELD) points, enacted on October 8, 2015. However, few have examined the impact on HCC dropout rates for patients awaiting liver transplant. Our objective was to evaluate HCC dropout rates before and after the mandatory 6-month wait policy was enacted.
Methods: We conducted a retrospective cohort study of adult patients added to the liver transplant wait list between January 1, 2012, and March 8, 2019 (n = 767). Information was obtained through electronic medical records and publicly available national data reports from the Organ Procurement and Transplantation Network (OPTN).
Results: In response to the 2015 UNOS-mandated 6-month wait time, dropout rates in the HCC patient population at our center increased from 12% pre-mandate to 20.8% post-mandate. This increase was similarly reflected in the national dropout rate, which also increased from 26.3% pre-mandate to 29.0% post-mandate.
Discussion: From these changes, it is evident that the UNOS mandate achieved its goal of increasing equity in liver allocation, but HCC patients are nonetheless dropping off the wait list at an increased rate and are therefore disadvantaged.
abstract_id: PUBMED:27374003
Patients With Hepatocellular Carcinoma Have Highest Rates of Wait-listing for Liver Transplantation Among Patients With End-Stage Liver Disease. Background & Aims: Despite recent attention to differences in access to livers for transplantation, research has focused on patients already on the wait list. We analyzed data from a large administrative database that represents the entire US population, and state Medicaid data, to identify factors associated with differences in access to wait lists for liver transplantation.
Methods: We performed a retrospective cohort study of transplant-eligible patients with end-stage liver disease using the HealthCore Integrated Research Database (2006-2014; n = 16,824) and Medicaid data from 5 states (2002-2009; California, Florida, New York, Ohio, and Pennsylvania; n = 67,706). Transplant-eligible patients had decompensated cirrhosis, hepatocellular carcinoma (HCC), and/or liver synthetic dysfunction, based on validated International Classification of Diseases, Ninth Revision-based algorithms and data from laboratory studies. Placement on the wait list was determined through linkage with the Organ Procurement and Transplantation Network database.
Results: In an unadjusted analysis of the HealthCore database, we found that 29% of patients with HCC were placed on the 2-year wait list (95% confidence interval [CI], 25.4%-33.0%) compared with 11.9% of patients with stage 4 cirrhosis (ascites) (95% CI, 11.0%-12.9%) and 12.6% of patients with stage 5 cirrhosis (ascites and variceal bleeding) (95% CI, 9.4%-15.2%). Among patients with each stage of cirrhosis, those with HCC were significantly more likely to be placed on the wait list; adjusted subhazard ratios ranged from 1.7 (for patients with stage 5 cirrhosis and HCC vs those without HCC) to 5.8 (for patients with stage 1 cirrhosis with HCC vs those without HCC). Medicaid beneficiaries with HCC were also more likely to be placed on the transplant wait list, compared with patients with decompensated cirrhosis, with a subhazard ratio of 2.34 (95% CI, 2.20-2.49). Local organ supply and wait list level demand were not associated with placement on the wait list.
Conclusions: In an analysis of US healthcare databases, we found patients with HCC to be more likely to be placed on liver transplant wait lists than patients with decompensated cirrhosis. Previously reported reductions in access to transplant care for wait-listed patients with decompensated cirrhosis underestimate the magnitude of this difference.
abstract_id: PUBMED:27754570
Primary biliary cirrhosis has high wait-list mortality among patients listed for liver transplantation. Patients with primary sclerosing cholangitis (PSC) have frequent episodes of cholangitis with potential for high mortality while waiting for liver transplantation. However, data on wait-list mortality specific to liver disease etiology are limited. Using the United Network for Organ Sharing (UNOS) database (2002-2013), of 81 592 listed patients, 11 284 (13.8%) died while waiting for transplant. Primary biliary cirrhosis (PBC) patients (N = 3491) compared with PSC patients (N = 4905) differed in age (56 vs. 47 years), female gender (88% vs. 33%), black race (6% vs. 13%), and BMI (25 vs. 27), P < 0.0001 for all. A total of 993 (11.8%) of the PBC and PSC patients died while on the transplant wait list. Using competing risk analysis controlling for baseline recipient factors and accounting for receipt of liver transplantation (LT), patients with PBC had higher overall and 3-month wait-list mortality than patients with PSC (21.6% vs. 12.7% and 5.0% vs. 2.9%, respectively; Gray's test P < 0.001; subhazard ratio 1.25, 95% CI 1.07-1.47). Repeat analysis including all etiologies showed higher wait-list mortality for PBC compared with most etiologies, except for patients listed for a diagnosis of alcoholic liver disease (ALD) + hepatitis C virus (HCV). Patients with PBC have high mortality while waiting for liver transplantation. These novel findings suggest that patients with PBC listed for LT may be considered for Model for End-Stage Liver Disease (MELD) exception points.
abstract_id: PUBMED:28809734
Wait Time for Curative Intent Radio Frequency Ablation is Associated with Increased Mortality in Patients with Early Stage Hepatocellular Carcinoma. Introduction: Radiofrequency ablation (RFA) is a recommended curative intent treatment option for patients with early stage hepatocellular carcinoma (HCC). We investigated if wait times for RFA were associated with residual tumor, tumor recurrence, need for liver transplantation, or death.
Material And Methods: We conducted a retrospective study of patients diagnosed with HCC between January 2010 and December 2013 presenting to University Health Network (UHN) in Toronto, Canada. All patients receiving curative intent RFA for HCC were included. Multivariable Cox regression was used to determine if wait times were associated with clinical outcomes.
Results: A total of 219 patients were included in the study; 72.6% were male and the median age was 62.7 years (IQR 55.6-71). Median tumor size at diagnosis was 21.5 mm (IQR 17-26); median MELD was 8.7 (IQR 7.2-11.4) and 57.1% were Barcelona stage 0. The cause of liver disease was viral hepatitis in 73.5% (hepatitis B and C). The median time from HCC diagnosis to RFA treatment was 96 days (IQR 75-139). In multivariate analysis, wait time was not associated with requiring liver transplant or with tumor recurrence; however, each incremental 30-day wait time was associated with an increased risk of residual tumor (HR = 1.09; 95% CI 1.01-1.19; p = 0.033) as well as death (HR = 1.23; 95% CI 1.11-1.36; p ≤ 0.001).
Conclusion: Incremental 30-day wait times are associated with a 9% increased risk of residual tumor and a 23% increased risk of death. We have identified system gaps where quality improvement measures can be implemented to reduce wait times and allocate resources for future RFA treatment, which may improve both quality and efficiency of HCC care.
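Editorial note (illustration, not part of the abstract above): because the Cox model reports per-30-day hazard ratios, the estimates compound multiplicatively under the proportional hazards assumption. The Python sketch below simply exponentiates the reported point estimates over longer hypothetical delays; the 60- and 90-day figures are illustrative arithmetic, not results from the study.

```python
# Illustrative arithmetic only: compounding the reported per-30-day hazard ratios
# (1.09 for residual tumor, 1.23 for death) over longer hypothetical delays.
hr_residual_per_30d = 1.09
hr_death_per_30d = 1.23

for n_periods in (1, 2, 3):  # 30-, 60-, 90-day delays
    days = 30 * n_periods
    print(f"{days:>3}-day delay: residual tumor HR ~ {hr_residual_per_30d ** n_periods:.2f}, "
          f"death HR ~ {hr_death_per_30d ** n_periods:.2f}")
```

For a 90-day delay this works out to roughly 1.30 for residual tumor and 1.86 for death, which is one way to read the study's point estimates, assuming the per-30-day effect is constant.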
abstract_id: PUBMED:28425410
Liver Transplantation for Hepatocellular Carcinoma: Impact of Wait Time at a Single Center. Introduction And Aim: Liver transplantation (LT) provides durable survival for hepatocellular carcinoma (HCC). However, there is continuing debate concerning the impact of wait time and acceptable tumor burden on outcomes after LT. We sought to review outcomes of LT for HCC at a single, large U.S. center, examining the influence of wait time on post-LT outcomes.
Material And Methods: We reviewed LT for HCC at Mayo Clinic in Florida from 1/1/2003 until 6/30/2014. Follow-up was updated through 8/1/2015.
Results: From 2003-2014, 978 patients were referred for management of HCC. 376 patients were transplanted for presumed HCC within Milan criteria, and the results of these 376 cases were analyzed. The median diagnosis to LT time was 183 days (8 - 4,337), and median transplant list wait time was 62 days (0 - 1815). There was no statistical difference in recurrence-free or overall survival for those with wait time of less than or greater than 180 days from diagnosis of HCC to LT. The most important predictor of long term survival after LT was HCC recurrence (HR: 18.61, p < 0.001). Recurrences of HCC as well as survival were predicted by factors related to tumor biology, including histopathological grade, vascular invasion, and pre-LT serum alpha-fetoprotein levels. Disease recurrence occurred in 13%. The overall 5-year patient survival was 65.8%, while the probability of 5-year recurrence-free survival was 62.2%.
Conclusions: In this large, single-center experience with long-term data, factors of tumor biology, but not a longer wait time, were associated with recurrence-free and overall survival.
abstract_id: PUBMED:28711630
Analysis of Liver Offers to Pediatric Candidates on the Transplant Wait List. Background & Aims: Approximately 10% of children on the liver transplant wait-list in the United States die every year. We examined deceased donor liver offer acceptance patterns and their contribution to pediatric wait-list mortality.
Methods: We performed a retrospective cohort study of children on the US liver transplant wait-list from 2007 through 2014 using national transplant registry databases. We determined the frequency, patterns of acceptance, and donor and recipient characteristics associated with deceased donor liver organ offers for children who died or were delisted compared with those who underwent transplantation. Children who died or were delisted were classified by the number of donor liver offers (0 vs 1 or more), limiting analyses to offers of livers that were ultimately transplanted into pediatric recipients. The primary outcome was death or delisting on the wait-list.
Results: Among 3852 pediatric liver transplant candidates, children who died or were delisted received a median 1 pediatric liver offer (inter-quartile range, 0-2) and waited a median 33 days before removal from the wait-list. Of 11,328 donor livers offered to children, 2533 (12%) were transplanted into children; 1179 of these (47%) were immediately accepted and 1354 (53%) were initially refused and eventually accepted for another child. Of 27,831 adults, 1667 (6.0%; median, 55 years) received livers from donors younger than 18 years (median, 15 years), most (97%) allocated locally or regionally. Of children who died or were delisted, 173 (55%) received an offer of 1 or more liver that was subsequently transplanted into another pediatric recipient, and 143 (45%) died or were delisted with no offers.
Conclusions: Among pediatric liver transplant candidates in the US, children who died or were delisted received a median 1 pediatric liver offer and waited a median of 33 days. Of livers transplanted into children, 47% were immediately accepted and 53% were initially refused and eventually accepted for another child. Of children who died or were delisted, 55% received an offer of 1 or more liver that was subsequently transplanted into another pediatric recipient, and 45% died or were delisted with no offers. Pediatric prioritization in the allocation and development of improved risk stratification systems is required to reduce wait-list mortality among children.
abstract_id: PUBMED:33030554
Association of State Medicaid Expansion With Racial/Ethnic Disparities in Liver Transplant Wait-listing in the United States. Importance: Millions of Americans gained insurance through the state expansion of Medicaid, but several states with large populations of racial/ethnic minorities did not expand their programs.
Objective: To investigate the implications of Medicaid expansion for liver transplant (LT) wait-listing trends for racial/ethnic minorities.
Design, Setting, And Participants: A cohort study was performed of adults wait-listed for LT using the United Network of Organ Sharing database between January 1, 2010, and December 31, 2017. Poisson regression and a controlled, interrupted time series analysis were used to model trends in wait-listing rates by race/ethnicity. The setting was LT centers in the United States.
Main Outcomes And Measures: (1) Wait-listing rates by race/ethnicity in states that expanded Medicaid (expansion states) compared with those that did not (nonexpansion states) and (2) actual vs predicted rates of LT wait-listing by race/ethnicity after Medicaid expansion.
Results: There were 75 748 patients (median age, 57.0 [interquartile range, 50.0-62.0] years; 48 566 [64.1%] male) wait-listed for LT during the study period. The cohort was 8.9% Black and 16.4% Hispanic. Black patients and Hispanic patients were statistically significantly more likely to be wait-listed in expansion states than in nonexpansion states (incidence rate ratio [IRR], 1.54 [95% CI, 1.44-1.64] for Black patients and 1.21 [95% CI, 1.15-1.28] for Hispanic patients). After Medicaid expansion, there was a decrease in the wait-listing rate of Black patients in expansion states (annual percentage change [APC], -4.4%; 95% CI, -8.2% to -0.6%) but not in nonexpansion states (APC, 0.5%; 95% CI, -4.0% to 5.2%). This decrease was not seen when Black patients with hepatitis C virus (HCV) were excluded from the analysis (APC, 3.1%; 95% CI, -2.4% to 8.9%), suggesting that they may be responsible for this expansion state trend. Hispanic Medicaid patients without HCV were statistically significantly more likely to be wait-listed in the post-Medicaid expansion era than would have been predicted without Medicaid expansion (APC, 13.2%; 95% CI, 4.0%-23.2%).
Conclusions And Relevance: This cohort study found that LT wait-listing rates have decreased for Black patients with HCV in states that expanded Medicaid. Conversely, wait-listing rates have increased for Hispanic patients without HCV. Black patients and Hispanic patients may have benefited differently from Medicaid expansion.
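Editorial note (illustration only): the annual percentage change (APC) figures in this abstract come from modeling wait-listing counts over time. A generic way to obtain an APC is a Poisson regression of yearly counts on calendar year with a population offset, as sketched below on synthetic data; the variable names and numbers are hypothetical and are not taken from the UNOS analysis.

```python
# Minimal sketch (synthetic data, hypothetical variable names): Poisson regression of
# yearly wait-listing counts with a log population offset; the APC is derived from the
# coefficient on calendar year.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
years = np.arange(2010, 2018)
pop = np.full(len(years), 1_000_000)                 # population at risk (hypothetical)
counts = rng.poisson(200 * np.exp(-0.04 * (years - 2010)))  # roughly a 4% annual decline
df = pd.DataFrame({"year": years - 2010, "count": counts, "pop": pop})

model = smf.glm("count ~ year", data=df, family=sm.families.Poisson(),
                offset=np.log(df["pop"])).fit()
apc = (np.exp(model.params["year"]) - 1) * 100       # annual percentage change
print(f"Estimated APC = {apc:.1f}% per year")
```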
abstract_id: PUBMED:28353492
Wait Time of Less Than 6 and Greater Than 18 Months Predicts Hepatocellular Carcinoma Recurrence After Liver Transplantation: Proposing a Wait Time "Sweet Spot". Background: It has been postulated that short wait time before liver transplant (LT) for hepatocellular carcinoma (HCC) results in the inclusion of tumors with aggressive biology, but prolonged wait time could result in a shift to more aggressive tumor behavior. We therefore test the hypothesis that a wait time "sweet spot" exists with a lower risk for HCC recurrence compared with the other 2 extremes.
Methods: This multicenter study included 911 patients from 3 LT centers with short, medium, and long wait times (median of 4, 7, and 13 months, respectively) who received Model for End Stage Liver Disease exception listing for HCC from 2002 to 2012.
Results: Wait time, defined as time from initial HCC diagnosis to LT, was less than 6 months in 32.4%, 6 to 18 months in 53.7%, and greater than 18 months in 13.9%. Waitlist dropout was observed in 18.4% at a median of 11.3 months. Probabilities of HCC recurrence at 1 and 5 years were 6.4% and 15.5% with wait time of less than 6 or greater than 18 months (n = 343) versus 4.5% and 9.8% with wait time of 6 to 18 months (n = 397), respectively (P = 0.049). When only pre-LT factors were considered, wait time of less than 6 or greater than 18 months (HR, 1.6; P = 0.043) and AFP greater than 400 at HCC diagnosis (HR, 3.0; P < 0.001) predicted HCC recurrence in multivariable analysis.
Conclusions: This large multicenter study provides evidence of an association between very short (<6 months) or very long (>18 months) wait times and an increased risk for HCC recurrence post-LT. The so-called sweet spot of 6 to 18 months should be the target to minimize HCC recurrence.
Answer: Yes, smaller adults do wait longer for liver transplantation, particularly in the UK. The study comparing the UK and the USA data found that in the UK, small patients have been significantly disadvantaged in access to transplantation, especially in early years and in adult-only transplant units. This contrasts with the USA, where smaller patients have never been disadvantaged, and transplantation rates are steadily increasing. The lack of size-matched organs for smaller adults and the overall decrease in rates of transplantation in the UK may be remedied by careful consideration of allocation policy and increased use of innovative techniques (PUBMED:19681972). |
Instruction: Is chronic kidney disease associated with a high ankle brachial index in adults at high cardiovascular risk?
Abstracts:
abstract_id: PUBMED:21123957
Is chronic kidney disease associated with a high ankle brachial index in adults at high cardiovascular risk? Aim: Chronic kidney disease (CKD) is an important risk factor for cardiovascular disease (CVD) events. A high ankle brachial index (ABI), a marker of lower-extremity arterial stiffness, is associated with CVD events. It remains unknown whether a high ABI is associated with CKD. The objective of this study was to determine the association of CKD with high ABI in adults at high CVD risk.
Methods: The study enrolled hospital-based patients at high CVD risk and measured kidney function and ABI. The glomerular filtration rate (GFR) was estimated using the Modification of Diet in Renal Disease (MDRD) equation, and ABI was categorized as low (< 0.90), low-normal (0.90 to 1.09), normal (1.10 to 1.40), and high (≥ 1.40 or incompressible). Logistic regression was used to evaluate the associations of CKD with ABI categories.
Results: Among 6412 participants, 25% had CKD, 25% had an ABI measurement < 0.90, and 1% had an ABI > 1.40. In models adjusted for age, sex, hypertension, diabetes, body mass index, low-density and high-density lipoprotein cholesterol, and smoking, only low ABI was associated with an increased risk of CKD; however, both low ABI (OR: 2.1, 1.6-2.8) and high ABI (OR: 2.4, 1.0-6.4) were associated with an increased risk of CKD in diabetic individuals. Additionally, only low ABI was associated with more advanced reductions in eGFR.
Conclusions: High ABI values are associated with an increased risk of CKD in diabetic individuals at high cardiovascular risk. Future studies are required to determine whether high ABI might lead to diminished kidney function through nonatherosclerotic pathways and to understand the mechanisms linking them to CVD events and diabetes.
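Editorial note (illustration only): the adjusted odds ratios reported above are the exponentiated coefficients of a logistic regression with ABI category as a factor and the normal range as the reference. A minimal sketch on synthetic data is shown below; all variable names, category frequencies, and coefficients are hypothetical and are not taken from the study.

```python
# Illustrative sketch (synthetic data): adjusted odds ratios for ABI categories versus
# an outcome such as CKD, obtained from a logistic regression with a reference category.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "abi_cat": rng.choice(["low", "low_normal", "normal", "high"], n,
                          p=[0.25, 0.35, 0.35, 0.05]),
    "age": rng.normal(65, 10, n),
    "diabetes": rng.integers(0, 2, n),
})
# Simulate an outcome so that low and high ABI raise its odds (hypothetical effect sizes)
logit = (-1.5 + 0.7 * (df["abi_cat"] == "low") + 0.9 * (df["abi_cat"] == "high")
         + 0.03 * (df["age"] - 65))
df["ckd"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = smf.logit("ckd ~ C(abi_cat, Treatment(reference='normal')) + age + diabetes",
                  data=df).fit()
print(np.exp(model.params))   # exponentiated coefficients are the adjusted odds ratios
```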
abstract_id: PUBMED:30682724
High ankle-brachial index and risk of cardiovascular or all-cause mortality: A meta-analysis. Background And Aims: Studies on high ankle-brachial index (ABI) to predict mortality risk have yielded conflicting results. This meta-analysis aimed to evaluate the association between abnormally high ABI and risk of cardiovascular or all-cause mortality.
Methods: Pubmed and Embase databases were systematically searched for relevant articles published up to August 15, 2018. Longitudinal observational studies that evaluated the association between abnormally high ABI at baseline and risk of cardiovascular or all-cause mortality were included. Pooled results were expressed as risk ratio (RR) with 95% confidence intervals (CI) for the abnormal high versus the reference normal ABI category.
Results: Eighteen studies enrolling 60,467 participants were included. Abnormally high ABI was associated with an increased risk of all-cause mortality (RR 1.50; 95% CI 1.27-1.77) and cardiovascular mortality (RR 1.84; 95% CI 1.54-2.20). The pooled RR of all-cause mortality was 1.45 (95% CI 1.16-1.82) for the general population, 1.67 (95% CI 1.03-2.71) for chronic kidney disease (CKD)/hemodialysis patients, and 1.55 (95% CI 1.10-2.20) for suspected or established cardiovascular disease (CVD) patients, respectively. The pooled RR of cardiovascular mortality was 1.84 (95% CI 1.43-2.38) for the general population, 4.28 (95% CI 2.18-8.40) for CKD/hemodialysis patients, and 1.58 (95% CI 1.22-2.05) for suspected or established CVD patients, respectively.
Conclusions: Abnormally high ABI is independently associated with an increased risk of all-cause mortality. However, interpretation of the association between abnormally high ABI and cardiovascular mortality should be done with caution because of the likelihood of publication bias.
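Editorial note (illustration only): pooled risk ratios like those above are typically obtained by inverse-variance weighting of per-study log risk ratios, often with a DerSimonian-Laird estimate of between-study variance for a random-effects model. The abstract does not state which pooling model was used, so the sketch below is a generic example on hypothetical study values, not a reconstruction of this meta-analysis.

```python
# Generic random-effects (DerSimonian-Laird) pooling of risk ratios; the per-study
# values below are hypothetical, not the 18 studies in the meta-analysis.
import numpy as np

# Hypothetical per-study risk ratios with 95% CIs: (RR, lower, upper)
studies = [(1.4, 1.1, 1.8), (1.7, 1.2, 2.4), (1.3, 0.9, 1.9), (2.0, 1.4, 2.9)]

log_rr = np.array([np.log(rr) for rr, lo, hi in studies])
se = np.array([(np.log(hi) - np.log(lo)) / (2 * 1.96) for rr, lo, hi in studies])

# Fixed-effect (inverse-variance) weights and heterogeneity statistic Q
w = 1 / se**2
fixed = np.sum(w * log_rr) / w.sum()
q = np.sum(w * (log_rr - fixed)**2)

# DerSimonian-Laird estimate of the between-study variance tau^2
c = w.sum() - np.sum(w**2) / w.sum()
tau2 = max(0.0, (q - (len(studies) - 1)) / c)

# Random-effects pooling
w_re = 1 / (se**2 + tau2)
pooled = np.sum(w_re * log_rr) / w_re.sum()
pooled_se = np.sqrt(1 / w_re.sum())
lo95, hi95 = np.exp(pooled - 1.96 * pooled_se), np.exp(pooled + 1.96 * pooled_se)
print(f"Pooled RR = {np.exp(pooled):.2f} (95% CI {lo95:.2f}-{hi95:.2f})")
```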
abstract_id: PUBMED:34724144
Ankle-brachial index predicts renal outcomes and all-cause mortality in high cardiovascular risk population: a nationwide prospective cohort study in CORE project. Background: Low ankle-brachial index (ABI) related ischemic events are common among individuals with chronic kidney disease (CKD). It is also associated with an increased risk of rapid renal function decline. The presence of peripheral artery disease (PAD) with low ABI among patients with high cardiovascular (CV) risk increases limb loss and mortality.
Aims: To estimate the association between abnormal ABI and renal endpoints and all-cause mortality.
Methods: A multicenter prospective cohort study was conducted among subjects with high CV risk or established CV disease in Thailand. The subjects were divided into 3 groups based on baseline ABI: > 1.3, 0.91-1.3, and ≤ 0.9, respectively. The primary composite outcome consisted of an estimated glomerular filtration rate (eGFR) decline of over 40%, eGFR less than 15 mL/min/1.73 m2, doubling of serum creatinine, and initiation of dialysis. The secondary outcome was all-cause mortality. Cox regression analysis and Kaplan-Meier curves were used.
Results: A total of 5543 subjects (3005 men and 2538 women) were included. A Cox proportional hazards model showed a significant relationship between low ABI (≤ 0.9) and both the primary composite outcome and all-cause mortality. Compared with the normal ABI group (ABI 0.91-1.3), subjects with low ABI at baseline had a significantly higher risk of the primary composite outcome (1.42-fold; 95% CI 1.02-1.97) and of all-cause mortality (2.03-fold; 95% CI 1.32-3.13) after adjustment for covariates.
Conclusion: Our study suggested that PAD independently predicts the incidence of renal progression and all-cause mortality among Thai patients with high CV risk.
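Editorial note (illustration only): the adjusted hazard ratios above come from a Cox proportional hazards model with ABI group as a covariate. The sketch below shows the general workflow on simulated data using the lifelines library; the variable names, coding, and effect sizes are hypothetical and not from the CORE cohort.

```python
# Minimal sketch (synthetic data, hypothetical coding): estimating adjusted hazard
# ratios for a low-ABI indicator with a Cox proportional hazards model.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "low_abi": rng.integers(0, 2, n),        # 1 = ABI <= 0.9, 0 = normal (hypothetical)
    "age": rng.normal(65, 10, n),
    "diabetes": rng.integers(0, 2, n),
})
# Simulate event times so that low ABI roughly doubles the hazard
hazard = 0.02 * np.exp(0.7 * df["low_abi"] + 0.02 * (df["age"] - 65))
df["time"] = rng.exponential(1 / hazard)
df["event"] = (df["time"] < 5).astype(int)   # administrative censoring at 5 years
df.loc[df["event"] == 0, "time"] = 5.0

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()   # the exp(coef) column gives the adjusted hazard ratios
```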
abstract_id: PUBMED:32448827
State of the Art Review: Brachial-Ankle PWV. The brachial-ankle pulse wave velocity (brachial-ankle PWV), measured simply by wrapping pressure cuffs around the four extremities, is a convenient marker for assessing the stiffness of medium- to large-sized arteries. The accuracy and reproducibility of its measurement have been confirmed to be acceptable. Risk factors for cardiovascular disease, especially advanced age and high blood pressure, are reported to be associated with an increase in arterial stiffness. Furthermore, arterial stiffness might be involved in a vicious cycle with the development/progression of hypertension, diabetes mellitus and chronic kidney disease. Increased arterial stiffness is thought to contribute to the development of cardiovascular disease via pathophysiological abnormalities induced in the heart, brain, kidney, and also the arteries themselves. A recent individual participant data meta-analysis conducted in Japan demonstrated that the brachial-ankle PWV is a useful marker to predict future cardiovascular events in Japanese subjects without a previous history of cardiovascular disease, independent of the conventional risk-assessment model. The cutoff point may be 16.0 m/s in individuals with a low risk of cardiovascular disease (CVD), and 18.0 m/s in individuals with a high risk of CVD and subjects with hypertension. In addition, the method of measurement of the brachial-ankle PWV can also be used to calculate the inter-arm systolic blood pressure difference and the ankle-brachial pressure index, which are also useful markers for cardiovascular risk assessment.
abstract_id: PUBMED:25350835
Association of interleg difference of ankle brachial index with overall and cardiovascular mortality in chronic hemodialysis patients. Background: The ankle-brachial index (ABI) is associated with peripheral vascular atherosclerosis, adverse cardiovascular outcomes, and all-cause mortality. However, there were limited data available on studying the effect of interleg ABI difference.
Methods: We investigated the association of the interleg ABI difference with overall and cardiovascular mortality in chronic hemodialysis in a retrospective observational cohort of 369 Taiwanese patients undergoing chronic hemodialysis.
Results: An interleg ABI difference of ≥0.15 in hemodialysis patients had significant predictive power for all-cause and cardiovascular mortality in crude analysis. The hazard ratio (HR) for all-cause mortality was 3.00 [95% confidence interval (CI), 1.91-4.71]; the HR for cardiovascular mortality was 3.13 (95% CI, 1.82-5.38). After adjustment for confounding variables, this difference continued to have significant predictive power for all-cause mortality but lost its predictive power for fatal cardiac outcomes. ABI <0.9 and high brachial-ankle pulse wave velocity were independently associated with an interleg ABI difference of ≥0.15 in hemodialysis patients. Moreover, in subgroup analysis, we found that this difference was an independent factor for overall and cardiovascular mortality, particularly in elderly patients, female patients, or those with ABI <0.9.
Conclusion: An interleg ABI difference of ≥0.15 was an independent risk factor for overall mortality in hemodialysis patients, but it may affect cardiovascular mortality through the effect of peripheral vascular disease.
abstract_id: PUBMED:24529128
Fibroblast growth factor 23, the ankle-brachial index, and incident peripheral artery disease in the Cardiovascular Health Study. Background: Fibroblast growth factor 23 (FGF23) has emerged as a novel risk factor for mortality and cardiovascular events. Its association with the ankle-brachial index (ABI) and clinical peripheral artery disease (PAD) is less known.
Methods: Using data (N = 3143) from the Cardiovascular Health Study (CHS), a cohort of community dwelling adults >65 years of age, we analyzed the cross-sectional association of FGF23 with ABI and its association with incident clinical PAD events during 9.8 years of follow up using multinomial logistic regression and Cox proportional hazards models respectively.
Results: The prevalence of cardiovascular disease (CVD) and traditional risk factors such as diabetes, coronary artery disease, and heart failure increased across higher quartiles of FGF23. Compared with those with an ABI of 1.1-1.4, FGF23 per doubling at baseline was associated with prevalent PAD (ABI < 0.9), although this association was attenuated after adjusting for CVD risk factors and kidney function (OR 0.91, 95% CI 0.76-1.08). FGF23 was not associated with high ABI (>1.4) (OR 1.06, 95% CI 0.75-1.51). Higher FGF23 was associated with incident PAD events in unadjusted, demographic-adjusted, and CVD risk factor-adjusted models (HR 2.26, 95% CI 1.28-3.98; highest versus lowest quartile). The addition of estimated glomerular filtration rate and urine albumin-to-creatinine ratio to the model, however, attenuated these findings (HR 1.46, 95% CI 0.79-2.70).
Conclusions: In community dwelling older adults, FGF23 was not associated with baseline low or high ABI or incident PAD events after adjusting for confounding variables. These results suggest that FGF23 may primarily be associated with adverse cardiovascular outcomes through non atherosclerotic mechanisms.
abstract_id: PUBMED:33979425
High ankle-brachial index predicts cardiovascular events and mortality in hemodialysis patients with severe secondary hyperparathyroidism. Introduction: Vascular calcification related to severe secondary hyperparathyroidism (SHPT) is an important cause of cardiovascular and bone complications, leading to high morbidity and mortality in patients with chronic kidney disease (CKD) undergoing hemodialysis (HD). The present study aimed to analyze whether ankle-brachial index (ABI), a non-invasive diagnostic tool, is able to predict cardiovascular outcomes in this population.
Methods: We selected 88 adult patients on HD for at least 6 months, with serum iPTH > 1,000 pg/mL. We collected clinical data, biochemical and hormonal parameters, and ABI (sonar-Doppler). Calcification was assessed by lateral radiography of the abdomen and by the simple vascular calcification score (SVCS). This cohort was monitored prospectively between 2012 and 2019 for cardiovascular outcomes (death, myocardial infarction (MI), stroke, and calciphylaxis) to estimate the accuracy of ABI in this setting.
Results: The baseline values were: iPTH 1770±689 pg/mL, P 5.8±1.2 mg/dL, corrected Ca 9.7±0.8 mg/dL, and 25(OH)vit D 25.1±10.9 ng/mL. Sixty-five percent of patients had ABI > 1.3 (ranging from 0.6 to 3.2); 66% had SVCS ≥ 3, and 45% had aortic calcification (Kauppila ≥ 8). The prospective evaluation (51.6±24.0 months) recorded the following cardiovascular outcomes: 11% deaths, 17% nonfatal MI, one stroke, and 3% calciphylaxis. After adjustments, patients with ABI ≥ 1.6 had an 8.9-fold higher risk of cardiovascular events (p=0.035), and those with ABI ≥ 1.8 had a 12.2-fold higher risk of cardiovascular mortality (p=0.019).
Conclusion: Vascular calcification and arterial stiffness were highly prevalent in our population. We suggest that ABI, a simple and cost-effective diagnostic tool, could be used on an outpatient basis to predict cardiovascular events in patients with severe SHPT undergoing HD.
abstract_id: PUBMED:24499079
Rate of ankle-brachial index decline predicts cardiovascular mortality in hemodialysis patients. Chronic kidney disease is a risk factor for cardiovascular mortality and for morbidity from cardiovascular events (CVEs). We obtained baseline data regarding blood biochemistry, ankle-brachial index (ABI), brachial-ankle pulse wave velocity (baPWV) and echocardiographic parameters from 300 patients on hemodialysis in 2005. We also measured ABI and baPWV annually from June 2005 until June 2012 and calculated rates of change in ABI and baPWV to identify factors associated with CVEs. Seventy-three patients died of cardiovascular disease and 199 CVEs occurred in 164 patients during the study period. Cardiac, cerebrovascular and peripheral artery disease (PAD) events occurred in 124, 43 and 32 patients, respectively, and 30 patients had more than two types of CVEs. Analysis using the Cox proportional hazards model showed that a higher rate of decline in ABI (hazard ratio [HR], 4.034; P < 0.001) was the most significant risk factor for decreased patient survival. Multivariate Cox analysis revealed that a higher rate of ABI decline (HR, 2.342; P < 0.001) was a significant risk factor for cardiac events, and that a lower baseline ABI was a risk factor for cerebrovascular (HR, 0.793; P = 0.03) and PAD (HR, 0.595; P < 0.0001) events. Our findings suggest that the rate of decline in ABI and the baseline ABI value are strong correlates of survival and CVE morbidity among patients on hemodialysis in Japan.
abstract_id: PUBMED:23329891
Inflammation and oxidative stress are associated with the prevalence of high ankle-brachial index in metabolic syndrome patients without chronic renal failure. Aims: A high ankle-brachial index (ABI) is a marker of increased cardiovascular morbidity and mortality, but the relationship and mechanism linking high ABI and metabolic syndrome (MetS) are unclear. The objectives of this study were to determine the relationship between MetS and high ABI and its possible mechanism.
Methods: 341 participants without CRF were recruited. Among them, 58 participants (ABI ≥ 1.3) were included in the high ABI group and the other 283 participants (0.9 < ABI < 1.3) were included in the normal ABI group. Furthermore, these 341 participants were also divided into a MetS group (n = 54) and a non-MetS group (n = 287). All participants underwent examinations including body mass index (BMI), ABI, and related biochemical parameters.
Results: Compared with the non-MetS group, the prevalence of high ABI was higher in the MetS group (27.8% vs. 15%, p < 0.05). Participants with 3-4 metabolic risk factors had a higher prevalence of high ABI than those with 0-1 metabolic risk factors (27.8% vs. 12.7%, p < 0.05). The prevalence of high ABI was higher in overweight participants than in those with normal body weight, and participants with hypertension also had a higher prevalence of high ABI than normotensive participants. BMI, high-sensitivity C-reactive protein (hsCRP) and superoxide dismutase (SOD) were all higher in the high ABI group than in the normal ABI group (p < 0.05).
Conclusions: A greater number of metabolic risk factors increased the risk of high ABI. Inflammation and oxidative stress are associated with the prevalence of high ABI in metabolic syndrome patients without chronic renal failure.
abstract_id: PUBMED:28197971
Abnormal ankle-brachial index and risk of cardiovascular or all-cause mortality in patients with chronic kidney disease: a meta-analysis. The prognostic role of the ankle-brachial index (ABI) in patients with chronic kidney disease (CKD) is controversial. We aimed to evaluate whether abnormal ABI was an independent predictor of cardiovascular or all-cause mortality in CKD patients with or without hemodialysis by conducting a meta-analysis. We systematically searched the Pubmed and Embase databases for prospective observational studies that investigated baseline abnormal ABI and subsequent cardiovascular or all-cause mortality risk in CKD patients with or without hemodialysis. An ABI value of 0.9 to 1.3 was defined as normal. Pooled hazard ratios (HRs) with 95% confidence intervals (CIs) were calculated for the abnormal vs. normal ABI category. Six studies enrolling 5820 patients were identified and analyzed. Overall, abnormal ABI was associated with an increased risk of all-cause mortality (HR 2.26; 95% CI 1.60-3.18) and cardiovascular mortality (HR 3.58; 95% CI 2.53-5.06). Subgroup analysis indicated that abnormally low ABI was associated with a 2.45-fold increase in all-cause mortality and a 5.18-fold increase in cardiovascular mortality; similarly, abnormally high ABI was associated with a 1.94-fold increase in all-cause mortality and a 4.04-fold increase in cardiovascular mortality. In addition, the effect of abnormal ABI on all-cause mortality was more pronounced among hemodialysis patients (HR 3.06; 95% CI 2.30-4.07) but not in CKD patients (HR 1.42; 95% CI 0.98-2.05). Abnormally low and high ABI are independently associated with cardiovascular or all-cause mortality risk in maintenance hemodialysis patients. This meta-analysis highlighted a U-shaped relationship between ABI and mortality risk in CKD patients undergoing hemodialysis. However, the findings of this meta-analysis were undermined by the small number of included studies.
Answer: Yes, chronic kidney disease (CKD) is associated with a high ankle brachial index (ABI) in adults at high cardiovascular risk. A study found that high ABI values are associated with an increased risk of CKD in diabetic individuals at high cardiovascular risk (PUBMED:21123957). Additionally, a meta-analysis indicated that abnormally high ABI was associated with an increased risk of all-cause mortality and cardiovascular mortality, with the pooled risk ratio for all-cause mortality being higher in CKD/hemodialysis patients compared to the general population (PUBMED:30682724). Furthermore, another study suggested that peripheral artery disease (PAD), indicated by low ABI, independently predicts the incidence of renal progression and all-cause mortality among Thai patients with high cardiovascular risk (PUBMED:34724144).
Moreover, high ABI has been shown to predict cardiovascular events and mortality in hemodialysis patients with severe secondary hyperparathyroidism (PUBMED:33979425), and the rate of ABI decline has been identified as a significant risk factor for cardiovascular mortality in hemodialysis patients (PUBMED:24499079). Lastly, a meta-analysis highlighted an U-shaped relationship between ABI and mortality risk in CKD patients undergoing hemodialysis, with both abnormally low and high ABI being independently associated with increased cardiovascular or all-cause mortality risk (PUBMED:28197971).
These findings suggest that both low and high ABI are important markers in adults with CKD at high cardiovascular risk, with high ABI being particularly associated with an increased risk of adverse outcomes in this population. |
Instruction: Could interferon still play a role in metastatic renal cell carcinoma?
Abstracts:
abstract_id: PUBMED:22964169
Could interferon still play a role in metastatic renal cell carcinoma? A randomized study of two schedules of sorafenib plus interferon-alpha 2a (RAPSODY). Background: Sorafenib has proven efficacy in metastatic renal cell carcinoma (mRCC). Interferon (IFN) has antiangiogenic activity that is thought to be both dose- and administration-schedule dependent.
Objective: To compare two different schedules of IFN combined with sorafenib.
Design, Setting, And Participants: Single-stage, prospective, noncomparative, randomized, open-label, multicenter, phase 2 study on previously untreated patients with mRCC and Eastern Cooperative Oncology Group performance status 0-2.
Intervention: Sorafenib 400mg twice daily plus subcutaneous IFN, 9 million units (MU) three times a week (Arm A) or 3 MU five times a week (Arm B).
Outcome Measurements And Statistical Analysis: Primary end points were progression-free survival (PFS) for each arm and safety. Data were evaluated according to an intent-to-treat analysis.
Results And Limitations: A total of 101 patients were evaluated. Median PFS was 7.9 mo in Arm A and 8.6 mo in Arm B (p=0.049) and the median duration of response was 8.5 and 19.2 mo, respectively (p=0.0013). Nine partial responses were observed in Arm A, and three complete and 14 partial responses were observed in Arm B (17.6% vs 34.0%; p=0.058); 24 and 21 patients (47% and 42%), respectively, achieved stable disease. The most common grade 3-4 toxicities were fatigue plus asthenia (28% vs 16%; p=0.32) and hand-foot skin reactions (20% vs 18%).
Conclusions: Sorafenib plus frequent low-dose IFN showed good efficacy and tolerability. Further investigations should be warranted to identify a possible positioning of this intriguing regimen (6% complete response rate) in the treatment scenario of mRCC.
abstract_id: PUBMED:19891125
What role do combinations of interferon and targeted agents play in the first-line therapy of metastatic renal cell carcinoma? Interferons (IFNs) are a class of cytokines with pleotropic actions that regulate a variety of cellular activities. Clinical trials with recombinant IFNs (IFN-alpha2a and IFN-alpha2b) have demonstrated clinical activity in patients with advanced renal cell carcinoma (RCC). Their efficacy is characterized by a low overall tumor regression rate of < 15%, progression-free survival of 4-5 months, and overall median survival of 10-18 months. This cytokine became the standard of care for patients with metastatic RCC and was then used as the comparator arm in a series of phase II and III clinical trials that have defined a new treatment paradigm for patients with advanced RCC. This paradigm uses the tyrosine kinase inhibitors (TKIs) sorafenib and sunitinib, the mammalian target of rapamycin (mTOR) inhibitor temsirolimus, and the vascular endothelial growth factor monoclonal antibody bevacizumab. These 3 categories of agents were then investigated in combination with IFN-alpha in a series of preclinical and clinical studies. The collective data from these reports suggest the combination of IFN-alpha and bevacizumab is active and has a role in RCC therapy, whereas combinations with the TKIs or mTOR inhibitors have limited efficacy and/or excessive toxicity. The clinical and preclinical studies leading to these conclusions are reviewed herein.
abstract_id: PUBMED:19906524
Redefining the role of interferon in the treatment of malignant diseases. Interferon (IFN) is a cytokine with a long history of use as immunotherapy in the treatment of various solid tumours and haematological malignancies. The initial use of IFN in cancer therapy was based on its antiproliferative and immunomodulatory effects, and it has been shown more recently to have cytotoxic and anti-angiogenic properties. These features make it a rational anticancer therapy; however, advances in our understanding of the molecular mechanisms involved in cancer development and growth and the availability of effective, alternative therapies have led to IFN therapy being superseded in many cancers. IFN is still commonly used in renal cell carcinoma (RCC), melanoma and myeloproliferative disorders, in which its optimal dose and treatment duration remain to be established despite extensive clinical experience. Preclinical studies of the mechanism of action of IFN suggest that different antitumour effects are relevant at different doses, providing a rationale to explore the use of different dose regimens of IFN, particularly when combined with other therapies. In particular, the advent of novel anti-angiogenic therapies in RCC means that the role of IFN needs to be re-examined with a focus on how best to maximise efficacy and minimise toxicity when used with these agents. This review will focus on the therapeutic use of IFN in these disorders, provide an overview of available data and consider what the data suggest regarding the potential optimal use of IFN in the future.
abstract_id: PUBMED:27463642
The evolving role of monoclonal antibodies in the treatment of patients with advanced renal cell carcinoma: a systematic review. Introduction: While the majority of the vascular endothelial growth factor (VEGF) and mammalian target of rapamycin (mTOR) inhibitors currently used for the therapy of metastatic renal cell carcinoma (mRCC) are small molecule agents inhibiting multiple targets, monoclonal antibodies are inhibitors of specific targets, which may decrease off-target effects while preserving on-target activity. A few monoclonal antibodies have already been approved for mRCC (bevacizumab, nivolumab), while many others may play an important role in the therapeutic scenario of mRCC.
Areas Covered: This review describes emerging monoclonal antibodies for treating RCC. Currently, bevacizumab, a VEGF monoclonal antibody, is approved in combination with interferon for the therapy of metastatic RCC, while nivolumab, a Programmed Death (PD)-1 inhibitor, is approved following prior VEGF inhibitor treatment. Other PD-1 and PD-ligand (L)-1 inhibitors are undergoing clinical development.
Expert Opinion: Combinations of inhibitors of the PD1/PD-L1 axis with VEGF inhibitors or cytotoxic T-lymphocyte antigen (CTLA)-4 inhibitors have shown promising efficacy in mRCC. The development of biomarkers predictive for benefit and rational tolerable combinations are both important pillars of research to improve outcomes in RCC.
abstract_id: PUBMED:18367110
The once and future role of cytoreductive nephrectomy. The role of nephrectomy in the setting of metastatic renal cell carcinoma has long been controversial and has continued to evolve over the last two decades. The practice of cytoreductive nephrectomy has only recently been widely accepted following the publication of 2 large multi-center randomized controlled trials that established a survival benefit for those patients undergoing nephrectomy followed by interferon treatment. Half a decade later, the new paradigm looks set to be questioned with the rapid emergence of tyrosine kinase inhibitors (TKIs). This article reviews the evolution of cytoreductive nephrectomy and speculates on its role in the new frontier of molecular targeting for metastatic renal cell carcinoma.
abstract_id: PUBMED:8545353
Immunotherapy in metastatic cancer of the kidney. Evidence accumulated over the last 15 years has clearly demonstrated that metastatic renal cell cancer is an excellent model for the effect of immunotherapy in the treatment of cancer. To date, at least 2 cytokines, interleukin-2 and alpha-interferon, have been found to be effective. Objective responses are obtained in 15 to 30% of patients treated with interleukin-2 and in 10 to 30% of those treated with interferon. Complete response can be achieved in 5% of cases. Clinically, the best results are seen in patients in good general health and with lung metastases. Complete responses lasting more than 5 years are often observed. Combination protocols with both cytokines, and other combinations with infusion of activated lymphocytes, have not been shown to be more effective than one cytokine alone. It may be possible to obtain higher response rates by combining cytokines with chemotherapy protocols. Surgery still has a role to play, however, particularly in patients with an isolated, accessible metastasis. Among the perspectives for new immunotherapies, interleukin-12, a strong stimulator of the natural killer cell population, is in phase II trials. Other possibilities include the use of selected populations of lymphocytes as adoptive immunotherapy or combinations of immunotherapy and surgery. Despite the enthusiasm generated by these new techniques, it is imperative to continue rigorous clinical trials in order to develop immunotherapy as a reliable routine treatment.
abstract_id: PUBMED:34804826
Overexpressing IFITM family genes predict poor prognosis in kidney renal clear cell carcinoma. Background: The interferon-inducible transmembrane (IFITM) proteins are localized in the endolysosomal and plasma membranes, conferring cellular immunity to various infections. However, the relationship with carcinogenesis remains poorly elucidated. In the present study, we investigated the role of IFITM in kidney renal clear cell carcinoma (KIRC).
Methods: We utilized the online databases of Oncomine, UALCAN and Human Protein Atlas to analyze the expression of IFITMs and validate their levels in human KIRC cells by qPCR and western blot. Furthermore, we evaluated prognostic significance with the Gene Expression Profiling Interactive Analysis tool (Kaplan-Meier (KM) Plotter) and delineated the immune cell infiltration profile related to IFITMs with the TIMER2.0 database.
Results: IFITMs were overexpressed in KIRC and varied in subtypes and tumor grades. High expression of IFITMs indicated a poor prognosis and more immune cell infiltration, especially endothelial cells and cancer-associated fibroblasts. IFITMs were associated with immune genes, which correlated with poor prognosis of renal clear cell carcinoma. We also explored the enriched network of IFITMs co-occurrence genes and their targeted transcription factors and miRNA. The expression of IFITMs correlated with hub mutated genes of KIRC.
Conclusions: IFITMs play a crucial role in the oncogenesis of KIRC and could be a potential surrogate marker for treatment response to targeted therapies.
abstract_id: PUBMED:27034725
The role of neoadjuvant therapy in the management of locally advanced renal cell carcinoma. In the past decade, the armamentarium of targeted therapy agents for the treatment of metastatic renal cell carcinoma (RCC) has significantly increased. Improvements in response rates and survival, with more manageable side effects compared with interleukin 2/interferon immunotherapy, have been reported with the use of targeted therapy agents, including vascular endothelial growth factor (VEGF) receptor tyrosine kinase inhibitors (sunitinib, sorafenib, pazopanib, axitinib), mammalian target of rapamycin (mTOR) inhibitors (everolimus and temsirolimus) and VEGF receptor antibodies (bevacizumab). Current guidelines reflect these new therapeutic approaches with treatments based on risk category, histology and line of therapy in the metastatic setting. However, while radical nephrectomy remains the standard of care for locally advanced RCC, the migration and use of these agents from salvage to the neoadjuvant setting for large unresectable masses, high-level venous tumor thrombus involvement, and patients with imperative indications for nephron sparing has been increasingly described in the literature. Several trials have recently been published and some are still recruiting patients in the neoadjuvant setting. While the results of these trials will inform and guide the use of these agents in the neoadjuvant setting, there still remains a considerable lack of consensus in the literature regarding the effectiveness, safety and clinical utility of neoadjuvant therapy. The goal of this review is to shed light on the current body of evidence with regards to the use of neoadjuvant treatments in the setting of locally advanced RCC.
abstract_id: PUBMED:25941590
Differential potency of regulatory T cell-mediated immunosuppression in kidney tumors compared to subcutaneous tumors. In many cancers, regulatory T cells (Treg) play a crucial role in suppressing the effector immune response, thereby permitting tumor development. Indeed, in mouse models, their depletion can promote the regression of tumors of various origins, including renal cell carcinoma when located subcutaneously (SC). In the present study, we aimed to assess the importance of Treg immunosuppression in the physiologic context of metastatic renal carcinoma (Renca) disease. To that end, we inoculated renal tumors orthotopically, intra-kidney (IK), in mice. Treg depletions were performed using anti-CD4 antibody in wild-type mice or diphtheria toxin (DT) in Foxp3DTR transgenic mice. Our main observation was that Treg were not the key immunosuppressive component of the IK tumoral microenvironment, in contrast to the same tumors located SC. We demonstrated that the CD8+ effector immune response was still suppressed in IK tumors compared with SC tumors following Treg depletion. Furthermore, the level of programmed cell death protein 1 (PD-1) was increased on the surface of CD4+ T cells infiltrating IK tumors compared to SC tumors. Finally, the Treg-independent immunosuppression occurring in IK tumors was potent enough to inhibit regression of concomitant SC tumors, which are normally responsive to Treg depletion. Our findings provide further insight into the immunosuppressive nature of the immune response generated in the kidney microenvironment, suggesting that it involves mechanisms in addition to Treg. These observations might help to identify better targets within the kidney tumor microenvironment for future cancer therapies.
abstract_id: PUBMED:33850351
The current role of cytoreductive nephrectomy for metastatic renal cell carcinoma. The management of metastatic renal cell carcinoma (mRCC) continues to be a therapeutic challenge; however, the options for systemic therapy in this setting have exploded over the past 20 years. From the advent of toxic cytokine therapy to the subsequent discovery of targeted therapy (TT) and immune checkpoint inhibitors, the landscape of viable treatment options continues to progress. With the arrival of cytokine therapy, two randomized trials demonstrated a survival benefit for upfront cytoreductive nephrectomy (CN) plus interferon therapy and this approach became the standard for surgical candidates. However, it was difficult to establish the role and the timing of CN with the subsequent advent of TT, just a few years later. More recently, two randomized phase III studies completed in the TT era questioned the use of CN and brought to light the role of risk stratification while selecting patients for CN. Careful identification of the mRCC patients who are likely to have a rapid progression of the disease is essential, as these patients need prompt systemic therapy. With the continued advancement of systemic therapy using the immune checkpoint inhibitors as a first line therapy, the role of CN will continue to evolve.
Answer: Yes, interferon (IFN) could still play a role in metastatic renal cell carcinoma (mRCC). Clinical trials with recombinant IFNs, such as IFN-alpha2a and IFN-alpha2b, have demonstrated clinical activity in patients with advanced RCC, characterized by a low overall tumor regression rate of less than 15%, progression-free survival of 4-5 months, and overall median survival of 10-18 months (PUBMED:19891125). IFN has been used as a standard of care and as a comparator arm in clinical trials that have defined new treatment paradigms for advanced RCC, which include tyrosine kinase inhibitors (TKIs) and other targeted therapies (PUBMED:19891125).
A study comparing two different schedules of IFN combined with sorafenib showed that sorafenib plus frequent low-dose IFN had good efficacy and tolerability, suggesting that further investigations should be warranted to identify a possible positioning of this regimen in the treatment scenario of mRCC (PUBMED:22964169). The role of IFN in cancer therapy has been based on its antiproliferative, immunomodulatory, cytotoxic, and anti-angiogenic properties, and despite advances in alternative therapies, IFN is still commonly used in RCC, melanoma, and myeloproliferative disorders (PUBMED:19906524).
Moreover, monoclonal antibodies such as bevacizumab, a VEGF monoclonal antibody, are approved in combination with IFN for the therapy of metastatic RCC (PUBMED:27463642). The evolving role of IFN in the context of new therapeutic agents and combinations is an area of ongoing research, and the optimal dose and treatment duration of IFN remain to be established (PUBMED:19906524).
In summary, while the treatment landscape for mRCC has expanded to include various targeted therapies and immune checkpoint inhibitors, IFN still has a role, particularly in combination regimens and as part of ongoing clinical investigations to optimize its use in the current therapeutic scenario (PUBMED:19891125; PUBMED:22964169; PUBMED:19906524; PUBMED:27463642). |
Instruction: Substance use among medical students: time to reignite the debate?
Abstracts:
abstract_id: PUBMED:18807312
Substance use among medical students: time to reignite the debate? Background: Substance use among medical students could impact on the conduct, safety and efficiency of future doctors. Despite serious medicolegal, ethical and political ramifications, there is little research on the subject, especially from the Indian subcontinent. We aimed to explore the patterns of substance use among a sample of medical students from the Indian subcontinent.
Methods: An opportunistic, cross-sectional survey of medical students from 76 medical schools attending an inter-medical school festival. A brief self-reported questionnaire was used to identify current and lifetime use of tobacco, alcohol, cannabis, heroin and non-prescription drugs. Multivariable logistic regression analysis was used to identify factors associated with illicit substance use.
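As a rough, hedged illustration of the multivariable logistic regression described in these methods, the following Python sketch fits a logit model on simulated data and reports odds ratios; the statsmodels dependency, the simulated dataset and every variable name are assumptions introduced here, not the study's own code or data.

# Hedged sketch of a multivariable logistic regression of the type described;
# the data are simulated and the variable names are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "tobacco_use": rng.integers(0, 2, n),
    "alcohol_use": rng.integers(0, 2, n),
    "male":        rng.integers(0, 2, n),
})
# Simulated outcome: illicit use made more likely by tobacco and alcohol use
logit_p = -2.5 + 1.5 * df["tobacco_use"] + 1.0 * df["alcohol_use"] + 0.3 * df["male"]
df["illicit_use"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

model = smf.logit("illicit_use ~ tobacco_use + alcohol_use + male", data=df).fit()
print(np.exp(model.params))      # odds ratios
print(np.exp(model.conf_int()))  # 95% confidence intervals for the odds ratios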
Results: Responses from 2135 medical students were analysed. Current alcohol and tobacco (chewable or smoked) use was reported by 7.1% and 6.1% of the respondents, respectively. Lifetime use of illicit substances was reported by 143 (6.7%) respondents. Use of illicit substances was strongly associated with use of tobacco, alcohol and non-prescription drugs.
Conclusion: This study provides a snapshot of the problem of substance use among medical students from the Indian subcontinent. The reported prevalence of alcohol and illicit substance use in our sample was lower, while tobacco use was similar, when compared with data from western studies. Further research is needed from the Indian subcontinent to study nationwide patterns of substance use among medical students, and to identify important determinants and reinforce protective factors. Strategies need to be developed for supporting students with a substance use problem.
abstract_id: PUBMED:18235869
Psychoactive substance use among medical students in a Nigerian university. The study was aimed at determining the prevalence, pattern and factors associated with psychoactive substance use among medical students at the University of Ilorin, Nigeria. All consenting medical students were asked to complete a 22-item modified, pilot-tested, semi-structured self-report questionnaire based on the World Health Organization's guidelines for student substance use surveys. It was found that the substances most commonly in current use were mild stimulants (33.3%), alcohol (13.6%), sedatives (7.3%) and tobacco (3.2%). Except for tobacco, the use of these substances seemed to be only instrumental. Substance use was directly associated with male gender, living alone, self-reported study difficulty, being a clinical student, and being aged 25 years or more. There was an inverse relationship of substance use with religiosity and with good mental health.
abstract_id: PUBMED:36893511
Prevalence of cannabis use disorder among individuals using medical cannabis at admission to inpatient treatment for substance use disorders. Introduction: Cannabis is used for medical and recreational purposes and may result in cannabis use disorder (CUD). This study explored the prevalence of cannabis use disorder and other psychiatric comorbidities among inpatients undergoing treatment for substance use disorder who reported medical cannabis use at admission.
Methods: We assessed CUD and other substance use disorders based on DSM-5 symptoms, anxiety with the Generalized Anxiety Disorder scale (GAD-7), depression with the Patient Health Questionnaire (PHQ-9), and post-traumatic stress disorder with the PTSD Checklist for DSM-5 (PCL-5). We compared the prevalence of CUD and other psychiatric comorbidities between inpatients who endorsed the use of cannabis for medical purposes only vs those endorsing use for medical and recreational purposes.
Results: Among 125 inpatients, 42% reported medical use only, and 58% reported medical and recreational use (dual motives). For CUD, 28% of Medical-Only and 51% of Dual-Use motives patients met the diagnostic criteria for CUD (p = 0.016). High psychiatric comorbidities were present: 79% and 81% screened positive for an anxiety disorder, 60% and 61% screened positive for depression, and 66% and 57% screened positive for PTSD for the Medical-Only and Dual-Use inpatients, respectively.
Conclusions: Many treatment-seeking individuals with substance use disorder who report medical cannabis use meet criteria for CUD, particularly those reporting concurrent recreational use.
abstract_id: PUBMED:37323748
Status of Substance use among Undergraduate Medical Students in a Selected Government Medical College in Puducherry - An Explanatory Mixed Method Study. Background: Studies have shown an increase in health-risk behavior and a decline in health-promoting behavior among medical students during their stay in medical school. This study aims to determine the prevalence of and reasons for substance abuse among undergraduate medical students in a selected medical college in Puducherry.
Material And Methods: This was a facility-based explanatory mixed method study conducted from May 2019 to July 2019. Substance use was assessed using the ASSIST questionnaire and summarized as proportions with 95% CIs.
Results: A total of 379 participants were included in the study. The mean age of the study participants was 20 years (± 1.34). The most prevalent substance used was alcohol (10.8%). About 1.9% and 1.6% of the students surveyed consumed tobacco and cannabis, respectively.
Conclusion: Facilitating factors for substance use, as perceived by the participants, were stress, peer pressure, easy availability of substances, socialization, curiosity, and awareness of safe limits of alcohol and tobacco.
abstract_id: PUBMED:29072119
Prevalence, perceptions, and consequences of substance use in medical students. Background: Research regarding the health and wellness of medical students has led to ongoing concerns about patterns of alcohol and drug use during medical education. Such research, however, is typically limited to single-institution studies or was conducted more than 25 years ago.
Objective: The objective of the investigation was to assess the prevalence and consequences of medical student alcohol and drug use and students' perceptions of their medical school's substance-use policies.
Design: A total of 855 medical students representing 49 medical colleges throughout the United States participated in an online survey between December 2015 and March 2016.
Results: Data showed that 91.3% and 26.2% of medical students consumed alcohol and used marijuana respectively in the past year, and 33.8% of medical students consumed five or more drinks in one sitting in the past two weeks. Differences in use emerged regarding demographic characteristics of students. Consequences of alcohol and drug use in this sample of medical students included but were not limited to interpersonal altercations, serious suicidal ideation, cognitive deficits, compromised academic performance, and driving under the influence of substances. Forty percent of medical students reported being unaware of their medical institution's substance-use policies.
Conclusions: Findings suggest that substance use among medical students in the US is ongoing and associated with consequences in various domains. There is a lack of familiarity regarding school substance-use policies. Although there has been some progress in characterizing medical student alcohol use, less is known about the factors surrounding medical students' use of other substances. Updated, comprehensive studies on the patterns of medical student substance use are needed if we are to make the necessary changes needed to effectively prevent substance-use disorders among medical students and support those who are in need of help.
abstract_id: PUBMED:28662352
Longitudinal associations between outpatient medical care use and substance use among rural stimulant users. Background: Negative views toward substance use treatment among some rural substance users and limited treatment resources in rural areas likely affect treatment utilization. It is therefore important to determine whether accessing healthcare options other than substance use treatment, specifically outpatient medical care (OMC), is associated with reductions in substance use.
Objectives: We examined whether use of OMC was associated with reductions in substance use among rural substance users over a three-year period. We also explored whether substance user characteristics, including substance-use severity and related-problems, moderated this potential relationship.
Methods: Data were collected from an observational study of 710 (61% male) stimulant users using respondent-driven sampling. Participants were recruited from rural counties of Arkansas, Kentucky, and Ohio.
Results: We found a significant main effect of having at least one OMC visit (relative to none) on fewer days of alcohol, crack cocaine, and methamphetamine use over time. Fewer days of alcohol, crack cocaine, and methamphetamine use among participants with at least one OMC visit (relative to those with none) were observed specifically in those reporting higher Addiction Severity Index employment scores, higher psychiatric severity scores, and low education, respectively.
Conclusion: Our findings extend the results from prior studies with urban substance users to show that contact with an outpatient medical care clinic is associated with reductions in substance use over time among rural substance users with especially poorer functioning. These findings highlight the potential importance of OMCs in addressing unhealthy substance use in rural communities.
abstract_id: PUBMED:35656413
Magnitude of Substance Use and Its Associated Factors Among the Medical Students in India and Implications for Medical Education: A Narrative Review. Background: Medical students are at an increased risk of developing substance use and related problems (SURP) because of the inherent stress associated with the professional medical course, in addition to developmental risk factors. However, this area is under-researched, and a comprehensive review of the prevalence of SURP among medical undergraduates (UGs) and the associated factors is lacking from India. To fill this gap, the current review aims to examine the existing literature on the magnitude of SURP among UGs in India and its determinants.
Methods: PubMed, Medline, and Google Scholar databases were searched for the original articles studying the prevalence of SURP among medical UGs of India, published from inception till date. Non-original articles, studies on behavioral addictions, and those not directly assessing the prevalence of SURP among the medical UGs were excluded.
Results: A total of 39 studies were found eligible for the review. Alcohol (current use: 3.2%-43.8%), followed by tobacco (3.7%-28.8%) and cannabis (1.6%-15%), were the common substances used by the medical students. Among female students, an increasing trend in substance use was seen, particularly for nonprescription sedatives (use of which was even higher than among males), alcohol, and smoking. Family history, peer pressure, the transition from school to college life, and progression through the medical course were important associated factors.
Conclusion: Sensitizing medical students and college authorities, increasing the duration of training on SURP in medical curricula, and providing psychological support for the students with SURP could address this issue.
abstract_id: PUBMED:38420921
Substance use and its association with mental health among Swiss medical students: A cross-sectional study. Background: Studies on mental health and substance use among medical students indicated worrying prevalence but have been mainly descriptive.
Aim: To evaluate the prevalence of substance use in a sample of medical students and investigate whether mental health variables have an influence on substance use.
Methods: The data were collected as part of the first wave of the ETMED-L, an ongoing longitudinal open cohort study surveying medical students at the University of Lausanne (Switzerland). N = 886 students were included and completed an online survey including measures of mental health (depression, suicidal ideation, anxiety, stress, and burnout) and use of and risk related with several substances (tobacco, alcohol, cannabis, cocaine, stimulants, sedatives, hallucinogens, opioids, nonmedical prescription drugs, and neuroenhancement drugs). We evaluated the prevalence of use of each substance and then tested the association between mental health and substance use in an Exploratory Structural Equation Modeling framework.
Results: Statistical indices indicated a four-factor solution for mental health and a three-factor solution for substance use. A factor comprising risk level for alcohol, tobacco, and cannabis use - which were the most prevalent substances - was significantly associated with a burnout factor and a factor related to financial situation and side job stress. There was a significant association between a factor comprising depression, anxiety, and suicidal ideation and a factor related to the use of sedatives, nonmedical prescription drugs and neuroenhancement drugs. Although their use was less prevalent, a factor comprising the risk level of stimulants and cocaine use was significantly but more mildly related to the burnout factor. A factor comprising stress related to studies and work/life balance as well as emotional exhaustion was not related to substance use factors.
Conclusion: In this sample of medical students, the prevalence of substance use was substantial and poorer mental health status was related with higher substance use risk levels.
abstract_id: PUBMED:24971216
Who guards the guards: drug use pattern among medical students in a Nigerian university. Background: Several studies have examined the prevalence and pattern of substance use among medical students in Nigeria. Few of these studies have specifically examined the relationship between psychological distress and psychoactive substance use among these students. Yet, evidence worldwide suggests that substance use among medical students might be on the rise and may be related to their level of stress.
Aim: The present study is the first to determine the prevalence, pattern and factors associated with psychoactive substance use among medical students of Olabisi Onabanjo University, Ogun State, Nigeria.
Subjects And Methods: The World Health Organization student drug use questionnaire was used to evaluate substance use among 246 clinical medical students between September and October 2011. The General Health Questionnaire (GHQ-12) was used to assess psychological distress among these students. Statistical analysis was performed using SPSS version 16 (Chicago, USA). Proportions were compared using the chi-square test, and P < 0.05 was considered statistically significant. Fisher's exact test was used instead of the chi-square test when a cell count was less than 5.
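As a hedged sketch of the comparisons described above (chi-square test of proportions, with Fisher's exact test for small cell counts), the following Python lines show the general pattern; SciPy and the 2x2 toy counts are assumptions for illustration, not the study's data.

# Minimal sketch of a chi-square comparison of proportions with a Fisher's
# exact fallback for small cell counts; the 2x2 counts are invented examples.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Rows: male / female; columns: substance use yes / no (toy counts)
table = np.array([[90, 60],
                  [40, 56]])

chi2, p, dof, expected = chi2_contingency(table)
if (expected < 5).any():
    # fall back to Fisher's exact test when an expected count is small
    _, p = fisher_exact(table)
print(f"p = {p:.3f} ({'significant' if p < 0.05 else 'not significant'} at P < 0.05)")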
Results: Lifetime prevalence of substance use among medical students was 65% (165/246). It was found that the most commonly used substances were alcohol 63.4% (156/246), mild stimulants 15.6% (38/246), tobacco 15% (37/246) and sedatives 6.1% (15/246). Substance use was associated with gender, frequency of participation in religious activities and GHQ scores.
Conclusion: Psychoactive substance use is a major problem among medical students. Psychological well-being plays a significant role in substance use among these students. There is a need for adequate screening and assessment for substance use disorders among these students and incorporating stress management strategies in their curriculum.
abstract_id: PUBMED:27475408
Factors associated with having a medical marijuana card among Veterans with recent substance use in VA outpatient treatment. Psychiatric symptoms, somatic problems, and co-occurring substance use have been associated with medical marijuana consumption among civilian patients with substance use disorders. It is possible that these factors may impact Veterans' ability to engage in or adhere to mental health and substance use disorder treatment. Therefore, we examined whether psychiatric functioning, substance use, and somatic problems were associated with medical marijuana use among Veterans receiving substance use disorder and/or mental health treatment. Participants (n=841) completed screening measures for a randomized controlled trial and 67 (8%) reported that they had a current medical marijuana card. Most of these participants (78%) reported using marijuana to treat severe/chronic pain. Significant bivariate differences revealed that, compared to participants without a medical marijuana card, those with a card were more likely to be in a middle income bracket, unemployed, and they had a significantly higher number of recent days of marijuana use, synthetic marijuana use, and using sedatives prescribed to them. Additionally, a significantly higher proportion of participants with a medical marijuana card scored above the clinical cutoff for posttraumatic stress disorder (PTSD) symptoms, had significantly higher severity of sleep-related problems, and reported a higher level of pain. These findings highlight the co-occurrence of substance use, PTSD symptoms, sleep-related problems, and chronic pain among Veterans who use medical marijuana. Future research should investigate the inter-relationships among medical marijuana use and other clinical issues (e.g., PTSD symptoms, sleep, pain) over time, and potential implications of medical marijuana use on treatment engagement and response.
Answer: The debate on substance use among medical students is indeed a pertinent one, as evidenced by the various studies that have explored the prevalence, patterns, and factors associated with substance use in this population. Research has shown that substance use among medical students can have serious implications for their conduct, safety, and efficiency as future doctors, raising medicolegal, ethical, and political concerns (PUBMED:18807312).
Studies from different regions have reported varying prevalence rates of substance use among medical students. For instance, a study from the Indian subcontinent found that current alcohol and tobacco use was reported by 7.1% and 6.1% of respondents, respectively, with lifetime use of illicit substances reported by 6.7% of respondents (PUBMED:18807312). In contrast, a study from a Nigerian university reported that the substances most commonly in current use were mild stimulants (33.3%), alcohol (13.6%), sedatives (7.3%), and tobacco (3.2%) (PUBMED:18235869). Another study from Puducherry, India, found that the most prevalent substance use was alcohol (10.8%), with lower percentages for tobacco and cannabis use (PUBMED:37323748).
The factors associated with substance use among medical students include male gender, living alone, study difficulty, being a clinical student, being aged 25 years or more, religiosity, and mental health status (PUBMED:18235869; PUBMED:38420921). Family history, peer pressure, transition from school to college life, and progression in the medical course were also identified as important determinants (PUBMED:35656413).
In the United States, a study reported that a significant percentage of medical students consumed alcohol and marijuana, with some experiencing serious consequences such as interpersonal altercations, suicidal ideation, cognitive deficits, compromised academic performance, and driving under the influence (PUBMED:29072119).
Given the substantial prevalence of substance use and its association with poorer mental health status among medical students, as well as the potential impact on their future professional lives, it is clear that the debate on substance use among medical students should be reignited. There is a need for updated research, comprehensive studies, and the development of strategies to prevent substance use disorders and support students in need of help (PUBMED:29072119; PUBMED:18807312; PUBMED:35656413). |
Instruction: Can prostate-specific antigen and prostate-specific antigen velocity be used for prostate cancer screening in men older than 70 years?
Abstracts:
abstract_id: PUBMED:21074211
Prostate cancer in men 70 years old or older, indolent or aggressive: clinicopathological analysis and outcomes. Purpose: Currently there is a lack of consensus on screening recommendations for prostate cancer with minimal guidance on the cessation of screening in older men. We defined the clinicopathological features and outcomes for men 70 years old or older who were diagnosed with prostate cancer.
Materials And Methods: The Center for Prostate Disease Research database was queried for all men diagnosed with prostate cancer from 1989 to 2009. The patients were stratified into age quartiles and by race. Cox proportional hazard models were used to compare clinicopathological features across patient stratifications. Kaplan-Meier analysis was used to compare biochemical recurrence-free, prostate cancer specific and overall survival.
Results: Of the 12,081 men evaluated, 3,650 (30.2%) were 70 years old or older. These men had statistically significantly higher clinical stage, biopsy grade and prediagnosis prostate-specific antigen velocity (p < 0.0001). For those patients who underwent prostatectomy, pathological stage, grade and surgical margin status were all significantly higher in men 70 years old or older. Biochemical recurrence and secondary treatment were also more common in this age group (p < 0.0001). Multivariate analysis revealed age 70 years or older as a significant predictor of biochemical recurrence after prostatectomy (HR 1.45, p = 0.0054). Overall survival was lowest in men aged 70 years or older who had surgery, but interestingly the mean time to death was comparable regardless of age.
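The age-group comparisons above rely on standard time-to-event methods (Kaplan-Meier analysis and Cox proportional hazards models, as described in the methods). The following Python sketch is a hedged illustration of that type of analysis only; the lifelines package, the toy data and the column names are assumptions, not the authors' code.

# Hedged sketch of the kind of time-to-event analysis described (Kaplan-Meier
# curves and a Cox proportional hazards model); data and columns are invented.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.DataFrame({
    "months_followup": [24, 60, 36, 48, 72, 30],
    "bcr_event":       [1, 0, 1, 0, 0, 1],              # 1 = biochemical recurrence
    "age_ge_70":       [1, 0, 1, 1, 0, 0],              # 1 = aged 70 years or older
    "psa_velocity":    [0.8, 0.3, 1.2, 1.0, 0.6, 0.5],  # ng/mL/year, illustrative
})

# Kaplan-Meier estimate of recurrence-free survival in the older group
kmf = KaplanMeierFitter()
older = df["age_ge_70"] == 1
kmf.fit(df.loc[older, "months_followup"], df.loc[older, "bcr_event"], label="age >= 70")
print(kmf.median_survival_time_)

# Cox model: hazard of recurrence as a function of age group and PSA velocity;
# the summary reports hazard ratios analogous to the HR of 1.45 quoted above
cph = CoxPHFitter()
cph.fit(df, duration_col="months_followup", event_col="bcr_event")
cph.print_summary()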
Conclusions: Our findings indicate that as men age, parameters consistent with more aggressive disease become more prevalent. The etiology of this trend is unknown. However, these data may have implications for current screening and treatment recommendations.
abstract_id: PUBMED:18267331
Can prostate-specific antigen and prostate-specific antigen velocity be used for prostate cancer screening in men older than 70 years? Objectives: We evaluated the lower threshold of prostate-specific antigen (PSA) and prostate-specific antigen velocity (PSAV) in a population of men over 70 years of age.
Methods: Between January 1988 and December 2005, 4038 men over 70 years of age including 605 African-American (AA) men and 3433 non-AA men from the Duke Prostate Center Outcomes database had determination of serum PSA and PSAV. We used receiver operating characteristic (ROC) curves to display the data graphically.
Results: The median age for all men in the study was 75 years. The area under the curve (AUC) for PSA in AA men and non-AA men was 0.84 and 0.76, respectively. For PSAV the AUC was 0.71 versus 0.54, respectively. The largest relative sensitivity and specificity in AA men were achieved at the established PSA cut-point of 4.0 ng/mL: 85% and 71%, respectively. The best cut-point in non-AA men was 3.4 ng/mL, which resulted in a sensitivity and specificity of 72% and 73%, respectively. The AUC of ROC curves within various age subgroups tends to be stable regardless of how the ages are grouped. In a multivariate logistic regression model, age, PSA and PSAV were significant predictors of cancer status in the AA subset. Age and PSA were significant predictors in the non-AA subset.
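A hedged Python sketch of how an ROC curve, its AUC and a best cut-point of the kind reported above can be derived is shown below; scikit-learn, the toy data and the use of Youden's J statistic to pick the cut-point are assumptions for illustration, since the abstract does not state how the cut-points were selected.

# Illustrative sketch of deriving an ROC curve, its AUC and a "best" PSA
# cut-point; not the study's actual methodology or data.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

cancer = np.array([0, 0, 1, 0, 1, 1, 0, 1, 1, 0])                       # biopsy outcome
psa    = np.array([1.2, 2.8, 4.5, 3.1, 6.0, 3.9, 2.2, 8.4, 5.1, 4.2])  # ng/mL

auc = roc_auc_score(cancer, psa)
fpr, tpr, thresholds = roc_curve(cancer, psa)

j = tpr - fpr                      # Youden's J = sensitivity + specificity - 1
best = int(np.argmax(j))
print(f"AUC = {auc:.2f}")
print(f"cut-point ~ {thresholds[best]:.1f} ng/mL "
      f"(sensitivity {tpr[best]:.0%}, specificity {1 - fpr[best]:.0%})")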
Conclusions: The AUC of ROC curves within various age subgroups tends to be stable; therefore, we are led to believe that a PSA or PSAV cutoff for safely recommending discontinuation of PCa screening is not apparent from these data.
abstract_id: PUBMED:25284269
Impact of the 2008 U.S. Preventative Services Task Force recommendation on frequency of prostate-specific antigen screening in older men. Objectives: To evaluate the effect of the 2008 U.S. Preventative Services Task Force recommendation against prostate-specific antigen (PSA) screening in men aged 75 and older on frequency of PSA screening in elderly men.
Design: Retrospective, cross-sectional analysis.
Setting: Fifteen community primary care practices in western Massachusetts.
Participants: Men aged 65 and older with one or more annual physicals between January 1, 2006, and December 31, 2010.
Measurements: PSA testing was determined from the electronic health record. Mixed-effects logistic regression was used to model the rate of PSA testing over time for two age groups: 65 to 74, and 75 and older.
Results: Of the 7,833 men in this study, 60% were younger than 75. PSA screening rates were consistently lower in men aged 75 and older. Annual rates, adjusted for number of clinic visits, ranged from 12% to 28% in men aged 75 and older, and 37% to 49% in men aged 65 to 74. In the 2 years before the guideline was released, there was already a slow decline in screening rate in men aged 75 and older, whereas the screening rate in men aged 65 to 74 was rising. Compared to 2008, there was a 36% relative reduction in screening rate in 2009 and a 51% relative reduction in 2010 for men aged 75 and older, and a 12% relative reduction in screening rate in 2009 and a 24% relative reduction in 2010 for men aged 65 to 74.
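The relative reductions quoted above follow from comparing adjusted annual screening rates across years. A minimal sketch of that arithmetic, using placeholder rates rather than the study's exact figures, is:

# Minimal sketch of how a relative reduction in screening rate is computed;
# the example rates are placeholders, not the study's exact adjusted figures.
def relative_reduction(rate_before: float, rate_after: float) -> float:
    return (rate_before - rate_after) / rate_before

# e.g. a hypothetical drop from a 28% to an 18% adjusted annual screening rate
print(f"{relative_reduction(0.28, 0.18):.0%} relative reduction")  # about 36%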
Conclusion: The 2008 recommendation appeared to reduce PSA screening rates in older men in 2009 and 2010; there was a substantial reduction in men aged 75 and older and a more modest reduction in men aged 65 to 74.
abstract_id: PUBMED:23833155
Shared decision making in prostate-specific antigen testing with men older than 70 years. Background: Little is known about how shared decision making (SDM) is being carried out between older men and their health care providers. Our study aimed to describe the use of SDM key elements and assess their associations with prostate-specific antigen (PSA) testing among older men.
Methods: We conducted descriptive and logistic regression modeling analyses using the 2005 and 2010 National Health Interview Survey data.
Results: Age-specific prevalence of PSA testing was similar in 2005 and 2010. In 2010, 44.1% of men aged ≥70 years had PSA testing. Only 27.2% (95% confidence interval, 22.2-32.9) of them reported having discussions about both advantages and disadvantages of testing. Multiple regression analyses showed that PSA-based screening was positively associated with discussions of advantages only (P < .001) and with discussions of both advantages and disadvantages (P < .001) compared with no discussion. Discussion of scientific uncertainties was not associated with PSA testing.
Conclusions: Efforts are needed to increase physicians' awareness of and adherence to PSA-based screening recommendations. Given that discussions of both advantages and disadvantages increased the uptake of PSA testing and discussion of scientific uncertainties has no effect, additional research about the nature, context, and extent of SDM and about patients' knowledge, values, and preferences regarding PSA-based screening is warranted.
abstract_id: PUBMED:20299039
Prostate cancer screening in men 75 years old or older: an assessment of self-reported health status and life expectancy. Purpose: Opinions vary regarding the appropriate age at which to stop prostate specific antigen screening. Some groups recommend screening men with a greater than 10-year life expectancy while the United States Preventive Services Task Force recommends against screening men 75 years old or older. In this study we evaluated the influence of health status and life expectancy on prostate specific antigen screening in older men in the United States before the 2008 United States Preventive Services Task Force guidelines.
Materials And Methods: The study cohort comprised 718 men age 75 years or older without a history of prostate cancer who responded to the 2005 National Health Interview Survey, representing an estimated 4.47 million noninstitutionalized men in the United States. Life expectancy was estimated from age and self-reported health status.
Results: Overall 19% of the men were 85 years old or older and 27% reported fair or poor health. In the previous 2 years 52% had a prostate specific antigen screening test. After adjustment for age, race, education and physician access, men with fair or poor health were less likely to receive prostate specific antigen screening than those with excellent or very good health (adjusted OR 0.51, 95% CI 0.33-0.80, p = 0.003). Overall 42% of the men predicted to live less than 5 years and 65% of those predicted to live more than 10 years reported having recent prostate specific antigen screening.
Conclusions: Before the United States Preventive Services Task Force recommendation, health status and life expectancy were used to select older men for prostate specific antigen screening. However, many men expected to live less than 5 years were screened. A strict age cutoff of 75 years reduces over screening but also prohibits screening in healthy older men with a long life expectancy who may benefit from screening.
abstract_id: PUBMED:30546897
Evaluation of prostate-specific antigen density in the diagnosis of prostate cancer combined with magnetic resonance imaging before biopsy in men aged 70 years and older with elevated PSA. There is an increasing proportion of individuals aged 70 years and older, as well as an increasing life expectancy, worldwide. The present study may guide the management of older patients with elevated prostate-specific antigen (PSA). The medical records of 241 men aged >70 years who underwent multiparametric magnetic resonance imaging (mpMRI) before prostate biopsy (PBx) at our institution were reviewed retrospectively. Multiple variables were evaluated as predictors of the diagnosis of prostate cancer (PCa), including serum PSA level, digital rectal examination, size of the region of interest on mpMRI, prostate volume and PSA density. PCa was diagnosed in 162 patients (67.2%). Prostate volume and PSA density were significant predictors of PCa (P<0.001). In patients aged 70-75 and >75 years, PSA density was significantly higher in those with PCa (0.21 ng/ml/cc, P=0.014 and 0.24 ng/ml/cc, P<0.001, respectively). Similarly, PSA density was significantly higher in patients with significant PCa (0.24 ng/ml/cc, P=0.004 and 0.29 ng/ml/cc, P<0.001, respectively). The cut-off value of PSA density was calculated using receiver operating characteristic curves; the area under the curve for PSA density was 0.698, and the best cut-off value was 0.20 ng/ml/cc. These results indicate that combining PSA density with mpMRI before PBx is helpful and can serve as a decision-making model for selecting patients for PBx.
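Since PSA density is simply serum PSA divided by prostate volume, a short hedged sketch of the calculation and of applying the reported 0.20 ng/ml/cc cut-off is given below; the two example patients are invented for illustration.

# Hedged illustration of the PSA-density calculation and the 0.20 ng/ml/cc
# cut-off reported above; the two example patients are invented.
def psa_density(psa_ng_ml: float, prostate_volume_cc: float) -> float:
    # PSA density = serum PSA divided by prostate volume
    return psa_ng_ml / prostate_volume_cc

CUTOFF = 0.20  # ng/ml/cc, best cut-off value reported in the abstract

for psa, volume in [(8.0, 55.0), (9.5, 38.0)]:
    density = psa_density(psa, volume)
    side = "above" if density >= CUTOFF else "below"
    print(f"PSA {psa} ng/mL, volume {volume} cc -> "
          f"density {density:.2f} ng/ml/cc ({side} the cut-off)")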
abstract_id: PUBMED:27459245
Estimating the harms and benefits of prostate cancer screening as used in common practice versus recommended good practice: A microsimulation screening analysis. Background: Prostate-specific antigen (PSA) screening and concomitant treatment can be implemented in several ways. The authors investigated how the net benefit of PSA screening varies between common practice versus "good practice."
Methods: Microsimulation screening analysis (MISCAN) was used to evaluate the effect on quality-adjusted life-years (QALYs) if 4 recommendations were followed: limited screening in older men, selective biopsy in men with elevated PSA, active surveillance for low-risk tumors, and treatment preferentially delivered at high-volume centers. Outcomes were compared with a base model in which annual screening started at ages 55 to 69 years and were simulated using data from the European Randomized Study of Screening for Prostate Cancer.
Results: In terms of QALYs gained compared with no screening, for 1000 screened men who were followed over their lifetime, recommended good practice led to 73 life-years (LYs) and 74 QALYs gained compared with 73 LYs and 56 QALYs for the base model. In contrast, common practice led to 78 LYs gained but only 19 QALYs gained, for a greater than 75% relative reduction in QALYs gained from unadjusted LYs gained. The poor outcomes for common practice were influenced predominantly by the use of aggressive treatment for men with low-risk disease, and PSA testing in older men also strongly reduced potential QALY gains.
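One reading of the "greater than 75% relative reduction" figure is the drop from unadjusted life-years gained to QALYs gained under common practice; a minimal arithmetic sketch using the reported values (not part of the MISCAN model itself) is:

# Back-of-the-envelope check of the ">75% relative reduction" quoted above,
# using the reported common-practice figures (78 LYs vs 19 QALYs gained per
# 1000 screened men); an interpretation of the comparison, not MISCAN output.
life_years_gained = 78
qalys_gained = 19
relative_reduction = (life_years_gained - qalys_gained) / life_years_gained
print(f"{relative_reduction:.0%}")  # about 76%, i.e. greater than 75%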
Conclusions: Commonly used PSA screening and treatment practices are associated with little net benefit. Following a few straightforward clinical recommendations, particularly greater use of active surveillance for low-risk disease and reducing screening in older men, would lead to an almost 4-fold increase in the net benefit of prostate cancer screening. Cancer 2016;122:3386-3393. © 2016 American Cancer Society.
abstract_id: PUBMED:36462252
Prostate cancer screening with Prostate-Specific Antigen (PSA) in men over 70 years old in an urban health zone, 2018-2020: A cross-sectional study Objectives: To describe the epidemiology and estimate the cost of Prostate-Specific Antigen (PSA) screening tests in men ≥ 70 years old in an urban health zone.
Methods: A cross-sectional study was performed. We obtained every PSA test made in the health zone from 2018 to 2020, and classified them retrospectively as screening (PSAc) or not according to pre-established criteria, reviewing electronic health records. Testing rates were calculated by centres and clinical specialities. The standard population was provided by the city register of inhabitants (VM70). Cost estimation was made using our health system's price list.
Results: A total of 2,036 PSA tests from 888 men ≥ 70 years old were obtained, and 350 met the screening classification criteria. Six adenocarcinomas were diagnosed from those tests. We estimated 76.07 PSAc/1000 VM70-year from any centre, 1.45 tests for each screened individual, and a 15.71% screening prevalence. The standard population was 1534 men (mean 2018-2020, SD 45.37). Patients who were screened (median age 75, SD 4.04) were younger than those not screened. We estimated a total screening test cost of 9,751 €.
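As a rough consistency sketch, the reported rate, tests-per-person and prevalence figures can be reproduced from the abstract's own numbers under the assumption of a 3-year observation window and about 241 screened individuals (a figure implied by, not stated in, the abstract):

# Rough consistency check of the reported figures (76.07 PSAc/1000 VM70-year,
# 1.45 tests per screened individual, 15.71% prevalence); assumptions noted.
screening_tests = 350
standard_population = 1534   # mean number of men >= 70 years, 2018-2020
years = 3
screened_individuals = 241   # implied by 350 tests / 1.45 tests per person

rate_per_1000_py = screening_tests / (standard_population * years) * 1000
print(f"{rate_per_1000_py:.2f} screening tests per 1000 VM70-year")      # ~76.1
print(f"{screening_tests / screened_individuals:.2f} tests per person")  # ~1.45
print(f"{screened_individuals / standard_population:.2%} prevalence")    # ~15.71%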
Conclusions: The epidemiology and cost of PSA screening tests in men ≥ 70 years old are reported, both in primary health care and in the hospital. PSA screening tests are common practice amongst professionals attending elderly men in our health zone, mostly in primary care. The screening testing rate in men without prostate cancer is similar to that reported in the literature.
abstract_id: PUBMED:36606360
Breast and prostate cancer screening rates by cognitive status in US older adults. Introduction: For most older adults with dementia, the short-term harms and burdens of routine cancer screening likely outweigh the delayed benefits. We aimed to provide a more updated assessment of the extent that US older adults with dementia receive breast and prostate cancer screenings.
Methods: Using the Health and Retirement Study (HRS) Wave 12 (2014-2015) linked to Medicare, we examine rates of breast and prostate cancer screenings in adults 65+ years by cognitive status. We used claims data to identify eligibility for screening and receipt of screening. We used a validated method using HRS data to define cognitive status.
Results: The analytic sample included 2439 women in the breast cancer screening cohort and 1846 men in the prostate cancer screening cohort. Average ages were 76.8 years for women and 75.6 years for men, with 9.0% and 7.6% with dementia in each cohort, respectively. Among women with dementia, 12.3% were screened for breast cancer. When stratified by age, 10.6% of those 75+ and have dementia were screened for breast cancer. When stratified by predicted life expectancy, 10.4% of those with predicted life expectancy of <10 years and have dementia were screened for breast cancer. Among men with dementia, 33.9% were screened for prostate cancer. When stratified by age, 30.9% of those 75+ and have dementia were screened for prostate cancer. When stratified by predicted life expectancy, 34.4% of those with predicted life expectancy of <10 years and have dementia were screened for prostate cancer. Using multivariable logistic regression, dementia was associated with lower odds of receiving breast cancer screening (OR 0.36, 95% CI 0.23-0.57) and prostate cancer screening (OR 0.58, 95% CI 0.36-0.96).
Discussion: Our results suggest potential over-screening in older adults with dementia. Better supporting dementia patients and caregivers to make informed cancer screening decisions is critical.
abstract_id: PUBMED:28083474
Metastatic Prostate Cancer Diagnosed by Bone Marrow Aspiration in an Elderly Man Not Undergoing PSA Screening. Prostate cancer screening by PSA testing remains controversial, particularly in the elderly. Practice guidelines from most clinical societies suggest discontinuing PSA screening at age 70 while the USPSTF recommends against screening at any age. Recent reports have demonstrated an increased incidence of metastatic prostate cancer, with men aged 75 or older accounting for roughly half of those newly diagnosed at an incurable stage. We herein describe the case of an elderly gentleman with no history of prostate cancer screening who presented with anorexia and back pain of unclear etiology. Evaluation with bone marrow aspiration revealed a diagnosis of metastatic prostate cancer.
Answer: The use of prostate-specific antigen (PSA) and prostate-specific antigen velocity (PSAV) for prostate cancer screening in men older than 70 years has been evaluated in several studies, but the findings suggest that the decision to use these measures for screening in this age group is complex.
One study found that in a population of men over 70 years of age, the area under the curve (AUC) for PSA was 0.84 for African-American men and 0.76 for non-African-American men, indicating a fair level of accuracy in predicting prostate cancer. For PSAV, the AUC was 0.71 versus 0.54, respectively. The study concluded that the AUC of ROC curves within various age subgroups tends to be stable, suggesting that a PSA or PSAV cutoff for safely recommending discontinuation of prostate cancer screening is not apparent from these data (PUBMED:18267331).
Another study indicated that men 70 years old or older diagnosed with prostate cancer had statistically significantly higher clinical stage, biopsy grade, and prediagnosis PSA velocity, suggesting more aggressive disease in this age group. Multivariate analysis revealed age 70 years or older as a significant predictor of biochemical recurrence after prostatectomy. However, the study also noted that overall survival was lowest in men aged 70 years or older who had surgery, but the mean time to death was comparable regardless of age (PUBMED:21074211).
Furthermore, research has shown that following the 2008 U.S. Preventative Services Task Force recommendation against PSA screening in men aged 75 and older, there was a significant reduction in PSA screening rates in older men, indicating a shift in practice patterns following the guidelines (PUBMED:25284269).
In summary, while PSA and PSAV can be used for prostate cancer screening in men older than 70 years, the decision to screen should be individualized, taking into account the potential for more aggressive disease in this age group, the stability of AUC across age subgroups, and the recent trends in screening practices following recommendations against routine screening in older men. |
Instruction: Oral contrast media in CT: improvement by addition of guar?
Abstracts:
abstract_id: PUBMED:9465948
Oral contrast media in CT: improvement by addition of guar? Purpose: To evaluate the additional effect of guar with iotrolan as an oral contrast medium.
Method: In a clinical, double-blind, randomised study, a viscous iotrolan (11.2 mg iodine/ml)/guar (4 g/l) suspension was compared with aqueous solutions of pure iotrolan (11.2 mg iodine/ml) and meglumine ioxithalamate (12 mg iodine/ml). The contrast media were evaluated with respect to filling, distribution, transit time, artifacts, radiodensity, patient acceptance and side effects.
Results: The addition of guar delayed the transit time of the contrast media. Consequently, more homogeneous filling of the bowel with fewer artifacts was observed in comparison to the aqueous contrast media. The results for the pure iotrolan solution were comparable to those for meglumine ioxithalamate, except for a higher radiodensity in the distal small intestine. The colon showed better filling with the non-viscous contrast media in the given time frame. Pure iotrolan had the best patient acceptance. Two patients considered the iotrolan/guar solution impossible to drink; the other 18 patients found its taste and consistency just about acceptable.
Conclusion: In spite of the advantages discussed, the poorer subjective acceptance makes the guar/iotrolan solution unsuitable for routine diagnosis unless taste and consistency are greatly improved. Individual use is recommended in selected cases and for long-term examinations.
abstract_id: PUBMED:17786895
Oral administration of intravenous contrast media: a tasty alternative to conventional oral contrast media in computed tomography Purpose: Many patients dislike oral contrast media due to their bad taste. The aim of the present study was to identify a solution that tastes better while providing the same opacification in order to offer oncological patients an alternative to the routinely used bad tasting oral contrast media.
Materials And Methods: In a single blinded, prospective clinical study, the orally administered intravenous contrast media iohexol (Omnipaque), iopromide (Ultravist), and iotrolan (Isovist) as well as the oral contrast media sodium amidotrizoate (Gastrografin) and ioxithalamate (Telebrix) were each compared to the oral contrast medium lysine amidotrizoate as the reference standard at a constant dilution. The density values of all contrast media with the same dilutions were first measured in a phantom study. The patient study included 160 patients who had undergone a prior abdominal CT scan with lysine amidotrizoate within 6 months. The patients rated their subjective taste impression on a scale of 0 (very bad) to 10 (excellent). In addition, adverse events and opacification were recorded and prices were compared.
Results: The phantom study revealed identical density values. Patients assigned much higher taste impression scores of 8 and 7 to iohexol and iotrolan, respectively, compared with a score of 3 for the conventional lysine amidotrizoate (p < 0.05). Iopromide and sodium amidotrizoate did not differ significantly from lysine amidotrizoate. Opacification and the adverse events experienced did not differ significantly among the contrast media. Iotrolan (ca. 120 euro/100 ml) as well as iohexol and iopromide (ca. 70 euro/100 ml) are more expensive than the conventional oral contrast media (ca. 10-20 euro/100 ml).
Conclusion: Orally administered solutions of non-ionic contrast media improve patient comfort due to the better taste and provide the same opacification in comparison to conventional oral contrast media. At present, their use should be limited to individual cases due to the higher costs.
abstract_id: PUBMED:9687954
Evaluation of the effects of bioadhesive substances as addition to oral contrast media: an experimental study Purpose: To evaluate the additional effect of bioadhesives in combination with iotrolan and barium as oral contrast media in an animal model.
Method: The bioadhesives Noveon, CMC, Tylose and Carbopol 934 were added to iotrolan and barium. The solutions were administered to rabbits via a feeding tube. The animals were examined by computed tomography (CT) and radiography after 0.5, 4, 12 and 24 hours and, in some cases, after 48 hours. Mucosal coating and contrast filling of the bowel were evaluated.
Results: The addition of bioadhesives to oral contrast media produced long-term contrast in the small intestine and colon, but no improvement in continuous filling and coating of the gastrointestinal tract was detected. Mucosal coating was seen only in short regions of the caecum and small intestine. In CT, the best coating results were observed with Tylose and CMC; in radiography, additionally with Carbopol and Noveon. All contrast media were well tolerated.
Conclusion: The evaluated contrast medium solutions with bioadhesives showed long-term contrast but no improvement in coating in comparison to conventional oral contrast media.
abstract_id: PUBMED:22806052
Evaluation of guar gum derivatives as gelling agents for microbial culture media. Guar gum, a galactomannan, has been reported to be an inexpensive substitute of agar for microbial culture media. However, its use is restricted probably because of (1) its highly viscous nature even at high temperatures, making dispensing of the media to Petri plates difficult and (2) lesser clarity of the guar gum gelled media than agar media due to impurities present in guar gum. To overcome these problems, three guar gum derivatives, carboxymethyl guar, carboxymethyl hydroxypropyl guar and hydroxypropyl guar, were tested as gelling agents for microbial growth and differentiation. These were also evaluated for their suitability for other routine microbiological methods, such as, enumeration, use of selective and differential media, and antibiotic sensitivity test. For evaluation purpose, growth and differentiation of eight fungi and eight bacteria grown on the media gelled with agar (1.5%), guar gum (4%) or one of the guar gum derivatives (4%), were compared. All fungi and bacteria exhibited normal growth and differentiation on all these media. Generally, growth of most of the fungi was better on guar gum derivatives gelled medium than on agar medium. The enumeration carried out for Serratia sp. and Pseudomonas aeruginosa by serial dilution and pour plate method yielded similar counts in all the treatments. Likewise, the selective succinate medium, specific for P. aeruginosa, did not allow growth of co-inoculated Bacillus sp. even if gelled with guar gum derivatives. The differential medium, Congo red mannitol agar could not differentiate between Agrobacterium tumefaciens and Rhizobium meliloti on color basis, if gelled with guar gum or any of its derivatives However, for antibiotic sensitivity tests for both Gram-positive and -negative bacteria, guar gum and its derivatives were as effective as agar.
abstract_id: PUBMED:16162142
Guar gum: a cheap substitute for agar in microbial culture media. Aims: To determine the possibility of using guar gum, a colloidal polysaccharide, as a cheap alternative to agar for gelling microbial culture media.
Methods And Results: As illustrative examples, 12 fungi and 11 bacteria were cultured on media solidified with either guar gum or agar. All fungi and bacteria exhibited normal growth and differentiation on the media gelled with guar gum. Microscopic examination of the fungi and bacteria grown on agar or guar gum gelled media did not reveal any structural differences. However, growth of most of the fungi was better on guar gum media than agar, and correspondingly, sporulation was also more advanced on the former. Bacterial enumeration studies carried out for Serratia sp. and Pseudomonas sp. by serial dilution and pour-plate method yielded similar counts on both agar and guar gum. Likewise, a selective medium, succinate medium used for growth of Pseudomonas sp. did not support growth of Bacillus sp. when inoculated along with Pseudomonas on both agar or guar gum supplemented medium.
Conclusions: Guar gum, a galactomannan, which is 50 times cheaper than Difco-bacto agar, can be used as a gelling agent in place of agar in microbial culture media.
Significance And Impact Of The Study: As media gelled with guar gum do not melt at temperatures as high as 70 degrees C, they can be used for the isolation and maintenance of thermophiles.
abstract_id: PUBMED:6238047
Oral cholecystographic contrast media--a comparative study. Three oral cholecystographic contrast media have been compared, mainly with regard to their ability to outline the gall bladder and also the frequency of undesirable side-effects that they produce. No significant differences were found between the media, and the final choice of a suitable medium for use in X-ray departments may depend on their relative cost. In times of financial restraint, such a decision might increasingly involve hospital pharmacists.
abstract_id: PUBMED:25065767
Guar gum solutions for improved delivery of iron particles in porous media (part 1): porous medium rheology and guar gum-induced clogging. The present work is the first part of a comprehensive study on the use of guar gum to improve delivery of microscale zero-valent iron particles in contaminated aquifers. Guar gum solutions exhibit peculiar shear thinning properties, with high viscosity in static conditions and lower viscosity in dynamic conditions: this is beneficial both for the storage of MZVI dispersions, and also for the injection in porous media. In the present paper, the processes associated with guar gum injection in porous media are studied performing single-step and multi-step filtration tests in sand-packed columns. The experimental results of single-step tests performed by injecting guar gum solutions prepared at several concentrations and applying different dissolution procedures evidenced that the presence of residual undissolved polymeric particles in the guar gum solution may have a relevant negative impact on the permeability of the porous medium, resulting in evident clogging. The most effective preparation procedure which minimizes the presence of residual particles is dissolution in warm water (60°C) followed by centrifugation (procedure T60C). The multi-step tests (i.e. injection of guar gum at constant concentration with a step increase of flow velocity), performed at three polymer concentrations (1.5, 3 and 4g/l) provided information on the rheological properties of guar gum solutions when flowing through a porous medium at variable discharge rates, which mimic the injection in radial geometry. An experimental protocol was defined for the rheological characterization of the fluids in porous media, and empirical relationships were derived for the quantification of rheological properties and clogging with variable injection rate. These relationships will be implemented in the second companion paper (Part II) in a radial transport model for the simulation of large-scale injection of MZVI-guar gum slurries.
abstract_id: PUBMED:7436626
Liver slice uptake of intravenous and oral biliary contrast media. Using the liver slice technique, the uptake of five intravenous (including iodipamide, ioglycamide, iotroxamide, and iodoxamide) and one oral (iopodate) biliary contrast media into the rat liver was investigated. For all six compounds a saturable high-affinity and a nonsaturable low-affinity uptake system could be identified. There is no great difference in the liver uptake of the five intravenous compounds, but the oral compound iopodate is taken up by the rat liver to a much higher extent than the intravenous compounds. Since the liver slice uptake of the biliary contrast media did not clearly depend on metabolic energy, the presented results favor intracellular binding rather than an active carrier as the uptake mechanism. Human serum albumin strongly reduces hepatic uptake of all six compounds. The inhibition of the uptake was directly dependent on the degree of serum albumin binding. No evidence for a carrier role of serum albumin within the uptake process could be found. The uptake mechanism of the liver for the biliary contrast media appears not to be very specific, since liver slice uptake could be inhibited by iopanoate, bromosulfthalein, ouabain, and taurocholate and by each of the contrast media itself.
abstract_id: PUBMED:25027836
An evaluation of the use of oral contrast media in abdominopelvic CT. Objectives: To evaluate the diagnostic efficacy of different oral contrast media (OCM) for abdominopelvic CT examinations performed for follow-up general oncological indications. The objectives were to establish anatomical image quality criteria for abdominopelvic CT; use these criteria to evaluate and compare image quality using positive OCM, neutral OCM and no OCM; and evaluate possible benefits for the medical imaging department.
Methods: Forty-six adult patients attending a follow-up abdominopelvic CT for general oncological indications and who had a previous abdominopelvic CT with positive OCM (n = 46) were recruited and prospectively placed into either the water (n = 25) or no OCM (n = 21) group. Three radiologists performed absolute visual grading analysis (VGA) to assess image quality by grading the fulfilment of 24 anatomical image quality criteria.
Results: Visual grading characteristics (VGC) analysis of the data showed comparable image quality with regards to reproduction of abdominal structures, bowel discrimination, presence of artefacts, and visualization of the amount of intra-abdominal fat for the three OCM protocols.
Conclusion: All three OCM protocols provided similar image quality for follow-up abdominopelvic CT for general oncological indications.
Key Points: • Positive oral contrast media are routinely used for abdominopelvic multidetector computed tomography • Experimental study comparing image quality using three different oral contrast materials • Three different oral contrast materials result in comparable CT image quality • Benefits for patients and medical imaging department.
abstract_id: PUBMED:16919558
Contribution of computed tomography with oral media contrast to the diagnosis of esophago-pericardial fistula. The esophago-pericardial fistula is a very rare and usually fatal complication of esophageal cancers. We report a case of a 56-year-old man who presented with chest pain 1 month after concurrent radiochemotherapy for squamous cell esophageal carcinoma. Thoracic computed tomography (CT) with oral iodinated contrast media revealed the esophago-pericardial fistula, visualizing the fistulous tract. We conclude that CT with oral contrast media may be the first imaging technique of choice to confirm the diagnosis of esophago-pericardial fistula.
Answer: The addition of guar to oral contrast media in CT has been evaluated in several studies, with mixed results. In one study, a viscous iotrolan/guar suspension was compared with aqueous solutions of pure iotrolan and meglumine ioxithalamate. The addition of guar delayed the transit time of the contrast media, resulting in a more homogeneous filling of the bowel with fewer artifacts compared to the aqueous contrast media. However, the colon showed better filling with non-viscous contrast media within the given time frame. Despite some advantages, the iotrolan/guar solution had less subjective acceptance among patients, with two finding it impossible to drink and the others considering the taste and consistency barely acceptable. Therefore, the study concluded that the guar/iotrolan solution is not suitable for routine diagnosis unless taste and consistency are greatly improved, though it may be recommended for individual use in selected cases and long-term examinations (PUBMED:9465948).
In another study, guar gum derivatives were evaluated as gelling agents for microbial culture media, which is not directly related to oral contrast media in CT but provides insight into the properties of guar gum and its derivatives. The study found that guar gum derivatives could be used as gelling agents and were as effective as agar in various microbiological applications (PUBMED:22806052).
A separate study evaluated the effects of bioadhesive substances, including guar, as an addition to oral contrast media in an animal model. The study found that while the addition of bioadhesives to oral contrast media provided long-term contrast in the small intestine and colon, there was no improvement in continuous filling and coating of the gastrointestinal tract compared to conventional oral contrast media (PUBMED:9687954).
Overall, the addition of guar to oral contrast media in CT has shown some potential benefits in terms of bowel filling and reducing artifacts, but issues with patient acceptance due to taste and consistency need to be addressed before it can be recommended for routine use. Additionally, the cost and preparation of guar-containing contrast media may also be factors to consider (PUBMED:9465948; PUBMED:17786895). |
Instruction: Should we perform an echocardiogram in hypertensive patients classified as having low and medium risk?
Abstracts:
abstract_id: PUBMED:16321664
Should we perform an echocardiogram in hypertensive patients classified as having low and medium risk? Background: Left ventricular hypertrophy is an important predictor of cardiovascular risk and its detection contributes to risk stratification. However, echocardiography is not a routine procedure and electrocardiography (ECG) underestimates its prevalence.
Objective: To evaluate the prevalence of echocardiographic left ventricular hypertrophy in low and medium risk non-treated hypertensive subjects, in order to find out the percentage of them who would be reclassified as high risk patients.
Methods: A cross-sectional, multicenter study was performed in hospital-based hypertension units. An echocardiogram was performed in 197 previously untreated hypertensive patients, aged > 18 years, classified as having low (61%) or medium (39%) risk, according to the OMS/ISH classification. The presence of left ventricular hypertrophy was considered if the left ventricular mass index was ≥134 or ≥110 g/m² in men and women, respectively (Devereux criteria). A logistic regression analysis was performed to identify factors associated with left ventricular hypertrophy.
Results: The prevalence of left ventricular hypertrophy was 23.9% (95% CI:17.9-29.9), 25.6% in men and 22.6% in women. In the low risk group its prevalence was 20.7% and in the medium risk group 29.5%. Factors associated with left ventricular hypertrophy were: years since the diagnosis of hypertension, OR:1.1 (95% CI:1.003-1.227); systolic blood pressure, OR:1.08 (95% CI:1.029-1.138); diastolic blood pressure, OR:0.9 (95% CI:0.882-0.991); and family history of cardiovascular disease, OR:4.3 (95% CI:1.52-12.18).
Conclusions: These findings underline the importance of performing an echocardiogram in low and medium risk untreated hypertensive patients in whom treatment would otherwise be delayed for even one year.
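The odds ratios and 95% confidence intervals in the Results above are standard outputs of a logistic regression. The sketch below shows, under the usual Wald approximation, how such an interval is obtained from a fitted coefficient; the coefficient and standard error used here are hypothetical illustrations, not estimates from the study.

```python
# How an odds ratio and Wald 95% CI are derived from a logistic regression
# coefficient. The coefficient and standard error are hypothetical values
# chosen for illustration, not estimates from the study above.
import math

beta = 0.095   # fitted log-odds coefficient, e.g. per year since diagnosis
se = 0.052     # its standard error
z = 1.959964   # ~97.5th percentile of the standard normal distribution

odds_ratio = math.exp(beta)
ci_low = math.exp(beta - z * se)
ci_high = math.exp(beta + z * se)
print(f"OR = {odds_ratio:.2f} (95% CI: {ci_low:.3f}-{ci_high:.3f})")
```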
abstract_id: PUBMED:18047919
Prognostic relevance of metabolic syndrome in hypertensive patients at low-to-medium risk. Background: The prognostic impact of metabolic syndrome (MetS) in the hypertensive population at low-medium risk is unknown. In this study, we evaluated the prognostic relevance of MetS in hypertensive patients at low-medium risk.
Methods: The occurrence of nonfatal and fatal cardiac and cerebrovascular events was evaluated in 802 patients with mild to moderate essential hypertension at low-medium risk according to the 2003 World Health Organization/International Society of Hypertension statement on the management of hypertension. Among these patients, 218 (27.2%) had MetS according to a modified National Cholesterol Education Program Adult Treatment Panel III (NCEP ATP III) definition (body mass index in place of waist circumference).
Results: During follow-up (6.9 +/- 3.1 years; range, 0.5 to 13.1 years, mean +/- SD), 58 first cardiovascular events occurred. The event rates per 100 patient-years in patients without and with MetS were 0.87 and 1.51, respectively. Event-free survival was significantly different between groups (P = .03). After adjustment for several covariates, Cox regression analysis showed that cardiovascular risk was significantly higher in patients with than in patients without MetS (relative risk, 2.64; 95% confidence interval, 1.52 to 4.58; P = .001). Other independent predictors of outcome were age, smoking habit, 24-h systolic BP, and LDL cholesterol.
Conclusions: Hypertensive patients at low-medium risk with MetS are at higher cardiovascular risk than those without MetS. Metabolic syndrome may be a useful tool for clinicians to identify subjects who are at increased risk when traditional assessment may indicate low-medium risk.
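Event rates per 100 patient-years, as quoted in the Results above, come from dividing the number of events by the accumulated follow-up time. The sketch below is a generic calculation with placeholder inputs (the total patient-years figure is assumed, not taken from the paper) and adds an exact Poisson 95% confidence interval for the rate.

```python
# Generic incidence-rate calculation per 100 patient-years with an exact
# Poisson 95% CI. The event count and follow-up time are placeholders,
# not the raw data behind the rates quoted above.
from scipy.stats import chi2

events = 58            # number of first cardiovascular events
patient_years = 5530.0 # total follow-up accumulated by the cohort (hypothetical)

rate = events / patient_years * 100  # per 100 patient-years
lower = chi2.ppf(0.025, 2 * events) / (2 * patient_years) * 100
upper = chi2.ppf(0.975, 2 * (events + 1)) / (2 * patient_years) * 100
print(f"{rate:.2f} events per 100 patient-years (95% CI {lower:.2f}-{upper:.2f})")
```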
abstract_id: PUBMED:12172313
Change in cardiovascular risk profile by echocardiography in low- or medium-risk hypertension. Background: Clinical decision-making in hypertensive patients is largely based upon assessment of total cardiovascular risk. World Health Organization-International Society of Hypertension (WHO-ISH) guidelines suggest delaying or withholding drug treatment in individuals assessed as at low risk on the basis of a suggested work-up that does not include echocardiography.
Objective: To assess the impact of echocardiography on risk stratification in never-treated individuals classified as at low cardiovascular risk.
Design: A retrospective analysis of a prospective survey.
Setting: Outpatient hypertension clinics of three community hospitals.
Patients: A total of 792 hypertensive adults classified as at low or medium risk, drawn from a larger sample of 1322 never-treated hypertensive patients.
Main Outcome Measures: Change in risk class and need of immediate treatment after echocardiographic evaluation of left ventricular hypertrophy.
Results: Those at low and medium risk according to WHO-ISH (to receive delayed treatment) represented 17 and 43%, respectively, of the whole hypertensive population. The prevalence of left ventricular hypertrophy on echocardiography was 21 and 32% in low- and medium-risk groups, respectively (29% on average).
Conclusions: In untreated hypertensive individuals without overt target-organ damage, in whom treatment would be postponed or avoided according to current WHO-ISH guidelines, echocardiography modifies the risk classification in 29% of the cases, identifying a need for immediate drug treatment. In low-risk untreated hypertensive individuals, echocardiography commonly alters risk stratification based on the initial WHO-ISH work-up.
abstract_id: PUBMED:38116193
Development and Validation of a Risk Prediction Model to Estimate the Risk of Stroke Among Hypertensive Patients in University of Gondar Comprehensive Specialized Hospital, Gondar, 2012 to 2022. Background: A risk prediction model to predict the risk of stroke has been developed for hypertensive patients. However, the discriminating power is poor, and the predictors are not easily accessible in low-income countries. Therefore, developing a validated risk prediction model to estimate the risk of stroke could help physicians to choose optimal treatment and precisely estimate the risk of stroke.
Objective: This study aims to develop and validate a risk prediction model to estimate the risk of stroke among hypertensive patients at the University of Gondar Comprehensive Specialized Hospital.
Methods: A retrospective follow-up study was conducted among 743 hypertensive patients between September 01/2012 and January 31/2022. The participants were selected using a simple random sampling technique. Model performance was evaluated using discrimination, calibration, and Brier scores. Internal validity and clinical utility were evaluated using bootstrapping and a decision curve analysis.
Results: Incidence of stroke was 31.4 per 1000 person-years (95% CI: 26.0, 37.7). Combinations of six predictors were selected for model development (sex, residence, baseline diastolic blood pressure, comorbidity, diabetes, and uncontrolled hypertension). In multivariable logistic regression, the discriminatory power of the model was 0.973 (95% CI: 0.959, 0.987). Calibration plot illustrated an overlap between the probabilities of the predicted and actual observed risks after 10,000 times bootstrap re-sampling, with a sensitivity of 92.79%, specificity 93.51%, and accuracy of 93.41%. The decision curve analysis demonstrated that the net benefit of the model was better than other intervention strategies, starting from the initial point.
Conclusion: An internally validated, accurate prediction model was developed and visualized in a nomogram. The model was then converted into an offline mobile web-based application to facilitate clinical applicability. The authors recommend that other researchers externally validate the model.
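Bootstrap internal validation of a prediction model, as described in the Methods above, is commonly done by estimating and subtracting the optimism of the apparent discrimination. The sketch below illustrates one common version of that procedure on synthetic data; the predictors, model and resample count are stand-ins and do not reproduce the study's actual variables or dataset.

```python
# Sketch of bootstrap optimism correction for the AUC of a logistic model,
# run on synthetic data. The six predictors stand in for variables such as
# sex, residence, baseline diastolic BP, comorbidity, diabetes and
# uncontrolled hypertension; none of this reproduces the study's dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 743
X = rng.normal(size=(n, 6))
w = np.array([0.8, 0.5, 0.4, 0.6, 0.7, 0.9])
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ w - 2))))

model = LogisticRegression(max_iter=1000).fit(X, y)
apparent_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

optimism = []
for _ in range(200):  # the study used 10,000 resamples; fewer here to keep it quick
    idx = rng.integers(0, n, n)
    boot = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    auc_boot = roc_auc_score(y[idx], boot.predict_proba(X[idx])[:, 1])
    auc_orig = roc_auc_score(y, boot.predict_proba(X)[:, 1])
    optimism.append(auc_boot - auc_orig)

corrected_auc = apparent_auc - float(np.mean(optimism))
print(f"Apparent AUC {apparent_auc:.3f}, optimism-corrected AUC {corrected_auc:.3f}")
```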
abstract_id: PUBMED:37481586
Pragmatic clinic-based investigation of echocardiogram parameters in asymptomatic patients with type 2 diabetes in routine clinical practice and its association with suggestive coronary artery disease: a pilot study. Background: Patients with diabetes mellitus (DM) have cardiovascular diseases (CVD) as a major cause of mortality and morbidity. The primary purpose of this study was to assess the echocardiographic parameters that showed alterations in patients with type 2 diabetes mellitus(T2DM) with suggestive coronary artery disease (CAD) determined by electrocardiography and the secondary was to assess the relationship of these alterations with established cardiovascular risk factors.
Methods: This cross-sectional, observational pilot study included 152 consecutive patients with T2DM who attended a tertiary DM outpatient care center. All patients underwent clinical examination and history, anthropometric measurements, demographic survey, determination of the Framingham global risk score, laboratory evaluation, basal electrocardiogram, echocardiogram, and measurement of carotid intima-media thickness (CIMT).
Results: From the overall sample, 134 (88.1%) patients underwent an electrocardiogram. They were divided into two groups: patients with electrocardiograms suggestive of CAD (n = 11 [8.2%]) and those with normal or non-ischemic alterations on electrocardiogram (n = 123 [91.79%]). In the hierarchical multivariable logistic model examining all selected independent factors that entered into the model, sex, high triglyceride levels, and presence of diabetic retinopathy were associated with CAD in the final model. No echocardiographic parameters were significant in multivariate analysis. The level of serum triglycerides (threshold) related to an increased risk of CAD was ≥ 184.5 mg/dl (AUC = 0.70, 95% CI [0.51-0.89]; p = 0.026).
Conclusion: Our pilot study demonstrated that no echocardiogram parameters could predict or determine CAD. The combination of CIMT and Framingham risk score is ideal to determine risk factors in asymptomatic patients with T2DM. Patients with diabetic retinopathy and hypertriglyceridemia need further investigation for CAD. Further prospective studies with larger sample sizes are needed to confirm our results.
abstract_id: PUBMED:36096877
Development of a short-form Chinese health literacy scale for low salt consumption (CHLSalt-22) and its validation among hypertensive patients. Background: With the accelerated pace of people's life and the changing dietary patterns, the number of chronic diseases is increasing and occurring at a younger age in today's society. The speedily rising hypertensive patients have become one of the main risk factors for chronic diseases. People should focus on health literacy related to salt consumption and reach a better quality of life. Currently, there is a lack of local assessment tools for low salt consumption in mainland China.
Objective: To develop a short-form version of the Chinese Health Literacy Scale For Low Salt Consumption instrument for use in mainland China.
Methods: A cross-sectional study was conducted on a sample of 1472 people in Liaoxi, China. Participants completed a sociodemographic questionnaire, the Chinese version of the CHLSalt-22, the Measuring Change in Restriction of Salt (Sodium) in Diet in Hypertensives scale (MCRSDH-SUST), the Brief Illness Perception Questionnaire (BIPQ), and the Benefit-Finding Scales (BFS) to test the hypothesis. Exploratory and confirmatory factor analyses were performed to examine the underlying factor structure of the CHLSalt-22. One month later, 37 patients who participated in the first test were re-tested to evaluate test-retest reliability.
Results: The CHLSalt-22 demonstrated adequate internal consistency, good test-retest reliability, satisfactory construct validity, convergent validity and discriminant validity. The CHLSalt-22 count scores were correlated with age, sex, body mass index (BMI), education level, income, occupation, the Measuring Change in Restriction of Salt (sodium) in Diet in Hypertensives (MCRSDH-SUST), the Brief Illness Perception Questionnaire (BIPQ), and the Benefit-Finding Scales (BFS).
Conclusion: The results indicate that the Chinese Health Literacy Scale For Low Salt Consumption (CHLSalt-22) version has good reliability and validity and can be considered a tool to assess health literacy related to salt consumption in health screenings.
abstract_id: PUBMED:32213225
Plasma magnesium and the risk of new-onset hyperuricaemia in hypertensive patients. We aimed to evaluate the relationship of plasma Mg with the risk of new-onset hyperuricaemia and examine any possible effect modifiers in hypertensive patients. This is a post hoc analysis of the Uric acid (UA) Sub-study of the China Stroke Primary Prevention Trial (CSPPT). A total of 1685 participants were included in the present study. The main outcome was new-onset hyperuricaemia defined as a UA concentration ≥417 μmol/l in men or ≥357 μmol/l in women. The secondary outcome was a change in UA concentration defined as UA at the exit visit minus that at baseline. During a median follow-up duration of 4.3 years, new-onset hyperuricaemia occurred in 290 (17.2%) participants. There was a significant inverse relation of plasma Mg with the risk of new-onset hyperuricaemia (per SD increment; OR 0.85; 95% CI 0.74, 0.99) and change in UA levels (per SD increment; β -3.96 μmol/l; 95% CI -7.14, -0.79). Consistently, when plasma Mg was analysed as tertiles, a significantly lower risk of new-onset hyperuricaemia (OR 0.67; 95% CI 0.48, 0.95) and less increase in UA levels (β -8.35 μmol/l; 95% CI -16.12, -0.58) were found among participants in tertile 3 (≥885.5 μmol/l) compared with those in tertile 1 (<818.9 μmol/l). Similar trends were found in males and females. Higher plasma Mg levels were associated with a decreased risk of new-onset hyperuricaemia in hypertensive adults.
abstract_id: PUBMED:23940194
Hypertensive retinopathy and risk of stroke. Although assessment of hypertensive retinopathy signs has been recommended for determining end-organ damage and stratifying vascular risk in persons with hypertension, its value remains unclear. In this study, we examine whether hypertensive retinopathy predicts the long-term risk of stroke in those with hypertension. A total of 2907 participants with hypertension aged 50 to 73 years at the 1993 to 1995 examination, who had gradable retinal photographs, no history of diabetes mellitus, stroke, and coronary heart disease at baseline and data on incident stroke, were included from the Atherosclerosis Risk in Communities (ARIC) Study. Retinal photographs were assessed for hypertensive retinopathy signs and classified as none, mild, and moderate/severe. Incident events of any stroke, cerebral infarction, and hemorrhagic stroke were identified and validated. After a mean follow-up period of 13.0 years, 165 persons developed incident stroke (146 cerebral infarctions and 15 hemorrhagic strokes). After adjusting for age, sex, blood pressure, and other risk factors, persons with moderate hypertensive retinopathy were more likely to have stroke (moderate versus no retinopathy: multivariable hazard ratios, 2.37 [95% confidence interval, 1.39-4.02]). In participants with hypertension on medication with good control of blood pressure, hypertensive retinopathy was related to an increased risk of cerebral infarction (mild retinopathy: hazard ratio, 1.96 [95% confidence interval, 1.09-3.55]; and moderate retinopathy: hazard ratio, 2.98 [95% confidence interval, 1.01-8.83]). Hypertensive retinopathy predicts the long-term risk of stroke, independent of blood pressure, even in treated patients with hypertension with good hypertension control. Retinal photographic assessment of hypertensive retinopathy signs may be useful for assessment of stroke risk.
abstract_id: PUBMED:34283210
Hypertensive Retinopathy and the Risk of Stroke Among Hypertensive Adults in China. Purpose: This study aimed to investigate the association between hypertensive retinopathy and the risk of first stroke, examine possible effect modifiers in hypertensive patients, and test the appropriateness of the Keith-Wagener-Barker (KWB) classification for predicting stroke risk.
Methods: In total, 9793 hypertensive participants (3727 males and 6066 females) without stroke history from the China Stroke Primary Prevention Trial were included in this study. The primary outcome was first stroke.
Results: Over a median follow-up of 4.4 years, 592 participants experienced their first stroke (509 ischemic, 77 hemorrhagic, and six unclassifiable strokes). In total, 5590 participants were diagnosed with grade 1 retinopathy (57.08%), 1466 with grade 2 retinopathy (14.97%), 231 with grade 3 retinopathy (2.36%), and three with grade 4 retinopathy (0.03%). Grades 1 and 2 were merged and classified as mild retinopathy, and grades 3 and 4 were merged and classified as severe retinopathy. There was a significant positive association between hypertensive retinopathy and the risk of first stroke and first ischemic stroke, and no effect modifiers were found. The hazard ratios (HRs) for first stroke were as follows: mild versus no retinopathy, 1.26 (95% confidence interval [CI], 1.01-1.58, P = 0.040), and severe versus no retinopathy, 2.40 (95% CI, 1.49-3.84, P < 0.001). The HRs for ischemic stroke were as follows: severe versus no retinopathy, 2.35 (95% CI, 1.41-3.90, P = 0.001), and nonsignificantly increased HRs for mild versus no retinopathy, 1.26 (95% CI, 0.99-1.60, P = 0.057).
Conclusions: There was a significant positive association between hypertensive retinopathy and the risk of first stroke in patients with hypertension, indicating that hypertensive retinopathy may be a predictor of the risk of stroke. A simplified two-grade classification system based on the KWB classification is recommended for predicting stroke risk.
abstract_id: PUBMED:29444719
The significance of atherosclerosis in hypertensive patients with epistaxis. Background: The relationship between hypertension and epistaxis is controversial and poorly understood. The present research investigated atherosclerosis as a potential risk factor in hypertensive patients with epistaxis.
Methods: A prospective study of 141 hypertensive patients with epistaxis was conducted. The laboratory tests included full blood count, lipid profile and coagulation profile. All patients underwent funduscopic examination of the eye and were classified in terms of four retinopathy grades.
Results: There were strong positive correlations between the number of nosebleeds and retinopathy grade and low-density lipoprotein cholesterol level. There were weak correlations between the number of nosebleeds and blood pressure readings and triglycerides levels. Patients with grade III retinopathy, suggesting atherosclerosis, suffered from more frequent nosebleeds than other patients.
Conclusion: Atherosclerosis is one of the potential risk factors in hypertensive patients with epistaxis. This may have an impact on treatment choices.
Answer: Based on the provided abstracts, there is evidence to support the performance of an echocardiogram in hypertensive patients classified as having low and medium risk. The prevalence of left ventricular hypertrophy (LVH), which is an important predictor of cardiovascular risk, was found to be 23.9% in a study population of untreated hypertensive patients classified as low or medium risk according to the OMS/ISH classification. The study suggests that an echocardiogram could reclassify a significant percentage of these patients as high risk, which would impact their treatment plan (PUBMED:16321664).
Another study found that hypertensive patients at low-medium risk with metabolic syndrome (MetS) are at higher cardiovascular risk than those without MetS, indicating that additional tools like MetS may be useful for clinicians to identify subjects at increased risk when traditional assessment may indicate low-medium risk (PUBMED:18047919).
Furthermore, a retrospective analysis of a prospective survey showed that echocardiography modifies the risk classification in 29% of untreated hypertensive individuals without overt target-organ damage, identifying a need for immediate drug treatment (PUBMED:12172313).
These findings suggest that echocardiography can provide valuable information for risk stratification in hypertensive patients initially classified as low or medium risk, potentially leading to earlier and more appropriate management of their condition. |
Instruction: A survey of women obtaining emergency contraception: are they interested in using the copper IUD?
Abstracts:
abstract_id: PUBMED:35873134
Attitudes Toward the Copper IUD in Sweden: A Survey Study. Background: While the efficacy and safety of the contraceptive copper intrauterine device (IUD) have been affirmed, alongside its importance for the prevention of unintended pregnancies, some studies have pointed to negative attitudes toward the device. In recent years, social media communication about it has included claims about systemic side effects, unsubstantiated by medical authorities. Research from the Swedish context is sparse. This study investigates attitudes toward the copper IUD and any correlations between negative attitudes toward or experiences of the device, and (1) sociodemographic characteristics, (2) the evaluation of the reliability of different sources of information, and (3) trust in healthcare and other societal institutions.
Methods: A survey was distributed online to adult women in Sweden (n = 2,000). Aside from descriptive statistics, associations between negative attitudes toward or experiences of the copper IUD and sociodemographic and other variables were calculated using logistic regressions and expressed as odds ratios (ORs) with 95% confidence intervals (95% CIs). Open survey responses (n = 650) were analyzed thematically.
Results: While many reported positive attitudes toward and experiences of the IUD, 34.7% of all respondents reported negative attitudes and 45.4% of users reported negative experiences. Negative attitudes were strongly correlated with negative experiences. Negative attitudes and experiences were associated with low income, but no conclusive associations were identified with other socioeconomic variables. Negative attitudes and experiences were associated with lower levels of confidence in and satisfaction with healthcare, as well as lower self-assessed access and ability to assess the origin and reliability of information about the IUD. In open responses, negative comments were prevalent and included references to both common and unestablished perceived side-effects. Respondents pointed to problematic aspects of information and knowledge about the copper IUD and called for improved healthcare communication and updated research.
Conclusion: Healthcare provider communication about the copper IUD should promote reproductive autonomy and trust by providing clear information about potential side effects and being open to discuss women's experiences and concerns. Further research on copper IUD dissatisfaction and ways in which health professionals do and may best respond to it is needed.
abstract_id: PUBMED:21477687
A survey of women obtaining emergency contraception: are they interested in using the copper IUD? Background: This study aims to determine if women presenting for emergency contraception (EC) at family planning clinics may be interested in using the copper intrauterine device (IUD) for EC.
Study Design: This convenience sample survey was offered to women who presented for EC at four participating clinics in urban Utah. Anonymous written questionnaires were distributed. The outcome variable of interest was interest in using the copper IUD for EC.
Results: Of survey respondents, 320 (34.0%) of 941 said they would be interested in an EC method that was long term, highly effective and reversible. Interested women were not significantly different from noninterested women in relation to age, marital status, education, household income, gravidity, previous abortions, previous sexually transmitted infections (STIs) or relationship status. One hundred twenty women (37.5% of those interested or 12.8% of all those surveyed) would wait an hour, undergo a pelvic exam to get the method and would still want the method knowing it was an IUD. However, only 12.3% of these women could also pay $350 or more for the device. Multivariable regression found the following predictors of interest in the IUD among EC users: non-Hispanic minorities (OR=2.12, 95% CI=1.14-3.93), desire to never be pregnant in the future (OR=2.87, 95% CI=1.38-5.66) and interest in adoption (OR=1.96, 95% CI=1.00-5.73) or abortion (OR=2.68, 95% CI=1.24-4.14) if pregnant when presenting for EC.
Conclusion: While one third of EC users surveyed at family planning clinics were interested in a long-term, highly effective method of contraception, only a small portion of all EC users may be interested in the copper IUD for EC. Cost is a potential barrier.
abstract_id: PUBMED:25242973
Levonorgestrel-releasing IUD versus copper IUD in control of dysmenorrhea, satisfaction and quality of life in women using IUD. Background: As a conventional treatment, the levonorgestrel-releasing IUD can help in the management of dysmenorrhea by reducing the synthesis of endometrial prostaglandins.
Objective: This study was performed to assess the frequency of dysmenorrhea, satisfaction and quality of life in women using Mirena IUDs as compared to those using copper IUDs.
Materials And Methods: This double-blind randomized clinical trial was performed between 2006 and 2007 on 160 women aged 20 to 35 years who attended Shahid Ayat Health Center of Tehran and were clients using IUDs for contraception. 80 individuals in group A received a Mirena IUD and 80 individuals in group B received a copper (380-A) IUD. Demographic data, assessment of dysmenorrhea, and follow-up 1, 3 and 6 months after IUD placement were recorded in questionnaires designed for this purpose. To assess the quality of life, the SF36 questionnaire was answered by the attending groups, and to assess satisfaction, a test with 3 questions was answered by clients.
Results: Dysmenorrhea significantly decreased in both groups six months after IUD insertion as compared to the first month (p<0.001). However, statistically, the Mirena IUD reduced dysmenorrhea faster and earlier compared to the copper IUD (p<0.003). There was no significant difference between the two groups in satisfaction and quality of life outcomes.
Conclusion: There was no difference between the two groups in terms of satisfaction and quality of life; therefore, the Mirena IUD is not a preferred contraception method.
abstract_id: PUBMED:12179197
Effective, despite misconceptions. The IUD with progesterone must be replaced once a year, while the IUD with copper may be left in for up to 10 years. Many women have misconceptions about the intrauterine device, or IUD. The IUD is a safe and effective contraceptive method for many women. It is more effective than the condom, diaphragm, or spermicides, and also is more convenient than these methods, since it doesn't interfere with different sexual activities. The IUD is a small, T-shaped plastic device containing either copper or progesterone, and is inserted by a doctor into a woman's uterus. A thin plastic thread (1-2 inches long) protrudes through the cervix so that the user can cross-check to make sure the device is in place. Depending on the IUD type, small amounts of copper or progesterone are slowly released to prevent fertilization of the egg, or to prevent a fertilized egg from attaching to the uterine wall. The IUD with progesterone must be replaced once a year, while the IUD with copper may be left in for up to 10 years. The IUD is a good choice for women who can't take oral contraceptives and for women who have completed their families but want surgical sterilization. Because the IUD carries a slightly higher risk of ectopic pregnancy (a fertilized egg that implants outside the uterus), it is not usually recommended for women who already are at increased risk of the condition, such as those who have had a pelvic infection, a previous ectopic pregnancy or multiple sex partners. Common side effects of the IUD include some discomfort while the IUD is being inserted and some cramping and spotting during the first few weeks after insertion. Menstrual periods usually become slightly longer and heavier in women using the copper IUD, but become lighter in women using the progesterone IUD. Occasionally, the IUD will slip partially out of the uterus or be expelled entirely in the first few months.
abstract_id: PUBMED:33644745
Clinical availability of the copper IUD in rural versus urban settings: A simulated patient study. Objective: To assess the proportion of Washington state clinics that offer the copper IUD in rural vs urban settings.
Study Design: We employed a simulated patient model to survey clinics in the Health Resources and Services Administration 340B database, primarily to assess the availability of the copper IUD.
Results: We successfully surveyed 194/212 (92%) clinics. More urban than rural clinics reported copper IUD availability (76/97 [78%] vs 49/97 [51%]; p < 0.01).
Conclusions: Rural clinics are less likely than urban clinics to have the copper IUD available.
Implications: The frequency of unintended pregnancies is high in the United States. We should focus our attention on decreasing barriers to the copper IUD as a long-acting reversible contraceptive, particularly for women living in rural settings.
abstract_id: PUBMED:307872
Acute-phase proteins in women with a copper IUD (author's transl) Acute-phase (AP) proteins haptoglobin, alpha1-antitrypsin and C-reactive protein were measured in 50 women before and 10 to 30 weeks after insertion of a copper T-200. No statistically significant increase in AP proteins was found. Since these proteins are synthesized in the liver, the results indicate the lack of a systemic humoral reaction of the organism in women with a copper-containing IUD.
abstract_id: PUBMED:7389354
Five years private practice experience of nulliparous women using copper IUD's. This is a study of 296 nulliparous women who used copper IUDs as their method of contraception. At the time of first insertion the women had an average age of 21.6 years. The gross cumulative pregnancy rate rose to 11.8 over 5 years. The gross cumulative expulsion rate rose to 13.6 over 5 years and the removal rate for bleeding and pain to 20.4. The net rates were lower over 5 years (pregnancy 8.6, expulsion 11.2 and bleeding and pain 17.3). 29 women expelled their IUDs, of whom 20 underwent reinsertion. 43 women requested removal of their IUDs because of bleeding and pain, of whom 6 requested a reinsertion at a later date. Taking reinsertions into account, the continuation rate over 5 years was 55.2. Inability to insert the IUD was encountered in 28 women (8.6%). In 20 of the women cervical stenosis precluded the insertion of the IUD without local anesthetic, and in the other 8, the pain and/or syncope encountered during sounding the uterus precluded the continuation of the procedure. Insertion problems were encountered in 31 women (10.5%). Syncope occurred in 4 women and one of the women developed a 'grand mal' seizure. Clinically significant pelvic infection occurred within 30 days of insertion in 5 women (approximately 1% of insertions), and one woman developed pelvic infection from gonorrhoea.
abstract_id: PUBMED:788452
Cervical bacterial flora in women fitted with a copper-releasing intra-uterine contraceptive device (IUD). A bacterial culture was taken from the cervix in 85 sexually active women before, and 3 and 6 months after, insertion of either a copper-releasing or an inert intra-uterine contraceptive device (IUD). Sixty had a Copper-T (TCu-200) and 25 a Lippes loop D. Although in more than a quarter of the patients the bacterial flora increased slightly in diversity and abundance after IUD insertion, there was no difference in effect between the TCu-200 and Lippes loop D.
abstract_id: PUBMED:2323218
The influence of copper surface area on menstrual blood loss and iron status in women fitted with an IUD. The influence of copper surface area on menstrual blood loss (MBL) was evaluated in 34 healthy women (mean age 36.4 +/- 1.4 yr, range 27-46 yr), who were fitted with a Multiload intrauterine device (IUD) with either 250 mm2 (MLCu-250) or 375 mm2 (MLCu-375) copper wire. MBL prior to IUD insertion was 54.4 +/- 10.3 ml for women subsequently fitted with a MLCu-250 and 56.9 +/- 6.9 ml for women fitted with a MLCu-375. An increase (p less than 0.01) in MBL was recorded 3 months after IUD insertion for both the women fitted with a MLCu-250 (86.4 +/- 10.3 ml) and a MLCu-375 (81.1 +/- 8.3 ml). This increase in MBL remained unchanged throughout the study period of one year. At no point were there any significant differences in MBL or increase in MBL between women fitted with a MLCu-250 or MLCu-375. There were no significant differences in serum ferritin, blood hemoglobin, hematocrit or erythrocyte indices before IUD insertion in the women grouped according to type of IUD, nor were any significant changes recorded in any of these parameters after IUD insertion. Thus, our findings that the increase in copper surface area from 250 mm2 to 375 mm2 had no effect on MBL were also substantiated by the hematological findings.
abstract_id: PUBMED:3123135
Early chorionic activity in women bearing inert IUD, copper IUD and levonorgestrel-releasing IUD. Early chorionic activity was assessed in the premenstrual days by means of serum HCG beta-fraction. As control, a group of women with no contraceptive use was studied; early chorionic activity was detected in 31.8% of the cycles. In the group bearing an inert IUD the incidence was 20%, which did not differ from the control; while in the medicated IUD groups (Cu-IUD and LNG-IUD) the incidences were 4.8% and nil, respectively. Both medicated IUD groups showed a significant difference when compared with the control, as well as the inert IUD groups. The meaning of these findings, pointing out differences in the main mechanism of action between inert and medicated IUDs, is discussed.
Answer: According to a survey study conducted to determine if women presenting for emergency contraception (EC) at family planning clinics might be interested in using the copper intrauterine device (IUD) for EC, it was found that a portion of these women were indeed interested in a long-term, highly effective method of contraception. Of the survey respondents, 320 (34.0%) of 941 expressed interest in an EC method that was long term, highly effective, and reversible. Among those interested, 120 women (37.5% of those interested or 12.8% of all surveyed) would be willing to wait an hour, undergo a pelvic exam, and would still want the method knowing it was an IUD. However, the cost was a significant barrier, as only 12.3% of these women could also afford to pay $350 or more for the device. The study concluded that while one-third of EC users surveyed at family planning clinics were interested in a long-term, highly effective method of contraception, only a small portion of all EC users may be interested in the copper IUD for EC, with cost being a potential barrier (PUBMED:21477687). |
Instruction: Does the culture of a medical practice affect the clinical management of diabetes by primary care providers?
Abstracts:
abstract_id: PUBMED:19299263
Does the culture of a medical practice affect the clinical management of diabetes by primary care providers? Objectives: The financing and organization of primary care in the United States has changed dramatically in recent decades. Primary care physicians have shifted from solo practice to larger group practices. The culture of a medical practice is thought to have an important influence on physician behavior. This study examines the effects of practice culture and organizational structure (while controlling for patient and physician characteristics) on the quality of physician decision-making.
Methods: Data were obtained from a balanced factorial experiment which employed a clinically authentic video-taped scenario of diabetes with emerging peripheral neuropathy.
Results: Our findings show that several key practice culture variables significantly influence clinical decision-making with respect to diabetes. Practice culture may contribute more to whether essential examinations are performed than patient or physician variables or the structural characteristics of clinical organizations.
Conclusions: Attention is beginning to focus on physician behavior in the context of different organizational environments. This study provides additional support for the suggestion that organization-level interventions (especially focused on practice culture) may offer an opportunity to reduce health care disparities and improve the quality of care.
abstract_id: PUBMED:28638617
Clinical pharmacists in primary care: Provider satisfaction and perceived impact on quality of care provided. Purpose: The purpose of this study is to evaluate primary care provider satisfaction and perceived impact of clinical pharmacy services on the disease state management in primary care.
Methods: A cross-sectional survey with 24 items and 4 domains was distributed anonymously to pharmacy residency program directors across the United States who were requested to forward the survey to their primary care provider colleagues. Primary care providers were asked to complete the survey.
Results: A total of 144 primary care providers responded to the survey, with 130 reporting a clinical pharmacist within their primary care practice and 114 completing the entire survey. Primary care providers report pharmacists positively impact quality of care (mean = 5.5 on Likert scale of 1-6; standard deviation = 0.72), high satisfaction with pharmacy services provided (5.5; standard deviation = 0.79), and no increase in workload as a result of clinical pharmacists (5.5; standard deviation = 0.77). Primary care providers would recommend clinical pharmacists to other primary care practices (5.7; standard deviation = 0.59). Primary care providers perceived specific types of pharmacy services to have the greatest impact on patient care: medication therapy management (38.6%), disease-focused management (29.82%), and medication reconciliation (11.4%). Primary care providers indicated the most valuable disease-focused pharmacy services as diabetes (58.78%), hypertension (9.65%), and pain (11.4%).
Conclusion: Primary care providers report high satisfaction with and perceived benefit of clinical pharmacy services in primary care and viewed medication therapy management and disease-focused management of diabetes, hypertension, and pain as the most valuable clinical pharmacy services. These results can be used to inform development or expansion of clinical pharmacy services in primary care.
abstract_id: PUBMED:26294052
Primary Care Providers' Knowledge and Practices of Diabetes Management During Ramadan. There are an estimated 3.5 million Muslims in North America. During the holy month of Ramadan, healthy adult Muslims are to fast from predawn to after sunset. While there are exemptions for older and sick adults, many adults with diabetes fast during Ramadan. However, there are risks associated with fasting and specific management considerations for patients with diabetes. We evaluated provider practices and knowledge regarding the management of patients with diabetes who fast during Ramadan. A 15-question quality improvement survey based on a literature review and the American Diabetes Association guidelines was developed and offered to providers at the outpatient primary care and geriatric clinics at an inner-city hospital in New York City. Forty-five providers completed the survey. Most respondents did not ask their Muslim patients with diabetes if they were fasting during the previous Ramadan. Knowledge of fasting practices during Ramadan was variable, and most felt uncomfortable managing patients with diabetes during Ramadan. There is room for improvement in educating providers about specific cultural and medical issues regarding fasting for patients with diabetes during Ramadan.
abstract_id: PUBMED:31196189
Knowledge, attitude and practice among non-ophthalmic health care providers regarding eye management of diabetics in private sector of Riyadh, Saudi Arabia. Background: The levels of knowledge, attitude and practice among primary physicians concerning both diabetic retinopathy screening and treatment of sight threatening diabetic retinopathy have been studied by different groups, such as medical students, pharmacists, Primary Health Care staff and opticians. In some studies, the levels were very high, while in others it was noted to be less than desired.
Aim: This study's intent is to estimate and improve level of Knowledge (K), Attitude (A) and Practice (P) among non-ophthalmic health care providers regarding eye management of diabetes and barriers that people with diabetes face in Saudi Arabia.
Methods: This cross-sectional survey targeted medical doctors (except ophthalmologists) working at private sector institutions in Riyadh. They were interviewed using closed-ended questions for knowledge (8), attitude (5), practice (5), and reasons for their current KAP status comprised of 8 questions. The level of Knowledge was assessed as good if its score was (> 50%); positive attitude (> 50%) and excellent practice (> 75%) were estimated and associated to the risk factors.
Results: Out of the 355 participants that were interviewed, the percentages of good knowledge, positive attitude and excellent practice concerning diabetic retinopathy (DR)were 193 [54.3% (95% CI 49.2-59.5)], 111 [31.3% (95% CI 26.4-36.1)], and 145 [40.8% (95% CI 35.7-46.0) participants, respectively. Gender, place of work and type of doctor were not significantly associated with the level of KAP. Salient reasons for low KAP status included a busy schedule (54.6%), less resources (75.2%), inadequate periodic training in eye care (69%), and absence of retinal evaluation training (49.6%).
Conclusions: Improving KAP level is urgently needed. Addressing underlying causes of low KAP could enhance eye care of people with diabetes. Additionally, training for primary health care providers for early detection of DR and timely management of sight threatening diabetic retinopathy (STDR) is necessary.
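The proportions with 95% confidence intervals quoted in the Results above are consistent with the standard normal-approximation (Wald) interval for a binomial proportion. The sketch below recomputes them from the published counts (193, 111 and 145 out of 355); the authors' exact method is not stated, so treat this only as a plausibility check.

```python
# Wald 95% confidence interval for a binomial proportion, applied to the
# knowledge/attitude/practice counts quoted above (193, 111 and 145 out of
# 355 respondents). This reproduces the reported intervals to one decimal,
# but the original authors' exact method is not stated.
import math

def wald_ci(successes, n, z=1.959964):
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(p - half, 0.0), min(p + half, 1.0)

for label, k in [("good knowledge", 193), ("positive attitude", 111), ("excellent practice", 145)]:
    p, lo, hi = wald_ci(k, 355)
    print(f"{label}: {100*p:.1f}% (95% CI {100*lo:.1f}-{100*hi:.1f})")
```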
abstract_id: PUBMED:34785433
Discontinuity in care: Practice closures among primary care providers and patient health care utilization. We evaluate the consequences for patients of being matched to a new primary care provider due to practice closures. Using an event study and population-level data of patients and providers in Denmark, we find that the transition between providers is smooth; among re-matched patients, there is little change in primary care utilization at the extensive margin. Second, we document a 17% increase in fee-for-service per visit and a large increase in the probability that the patient initiates drug therapy targeting chronic and underdiagnosed diseases (hypertension, hyperlipidemia, and diabetes). Additionally, the re-matched patients are more likely to be admitted to inpatient care for these diseases. The increase in therapeutic initiation is not primarily because the new providers are relatively predisposed to prescribing these drugs. Instead, it appears that when patients match to new providers, there is a consequential reassessment of patients' medical needs which leads to the initiation of new treatment.
abstract_id: PUBMED:36908444
Primary care providers' knowledge, attitudes, and practices related to prediabetes in China: A cross-sectional study. Background: The management of prediabetes has great clinical significance, and primary care providers (PCPs) play important roles in the management and prevention of diabetes in China. Nevertheless, little is known about PCPs' knowledge, attitudes, and practices (KAP) regarding prediabetes. This cross-sectional study aimed to assess the KAP regarding prediabetes among PCPs in the Central China region.
Methods: This cross-sectional study was conducted using self-administered KAP questionnaires among PCPs from the Central China region.
Results: In total, 720 PCPs completed the survey. Most physicians (85.8%) claimed to be aware of the adverse effects of prediabetes and reported positive attitudes toward prediabetes prevention, but the PCPs' knowledge of prediabetes and management practices showed substantial gaps. The prediabetes knowledge level and practice subscale scores of the PCPs were only 54.7% and 32.6%, respectively, of the corresponding optimal scores. Female PCPs showed higher prediabetes knowledge level scores (p = 0.04) and better practice scores (p = 0.038). Knowledge and attitude scores were inversely correlated with participants' age and duration of practice (p < 0.001). The PCPs who served in township hospitals had significantly higher knowledge and attitude scores than those who served in village clinics (p < 0.001). Furthermore, knowledge and practice scores increased with increasing professional titles. Recent continuing medical education (CME) attendance had a significant positive influence on knowledge of prediabetes (p = 0.029), but more than four-fifths of the surveyed PCPs did not participate in diabetes-related CME in the past year.
Conclusions: Substantial gaps were observed in PCPs' knowledge and practices regarding prediabetes in the Central China region. CME programmes were under-utilized by PCPs. Structured programmes are required to improve PCPs' prediabetes-related knowledge and practices in China.
abstract_id: PUBMED:35550154
Assessing competence of mid-level providers delivering primary health care in India: a clinical vignette-based study in Chhattisgarh state. Background: The global commitment to primary health care (PHC) has been reconfirmed in the declaration of Astana, 2018. India has also seen an upswing in national commitment to implement PHC. Health and wellness centres (HWCs) have been introduced, one at every 5000 population, with the fundamental purpose of bringing a comprehensive range of primary care services closer to where people live. The key addition in each HWC is of a mid-level healthcare provider (MLHP). Nurses were provided a 6-month training to play this role as community health officers (CHOs). But no assessments are available of the clinical competence of this newly inducted cadre for delivering primary care. The current study was aimed at providing an assessment of competence of CHOs in the Indian state of Chhattisgarh.
Methods: The assessment involved a comparison of CHOs with rural medical assistants (RMAs) and medical officers (MOs), the two main existing clinical cadres providing primary care in Chhattisgarh. Standardized clinical vignettes were used to measure the knowledge and clinical reasoning of providers. Ten ailments were included, based on primary care needs in Chhattisgarh. Each part of the clinical vignettes was standardized using expert consultations and standard treatment guidelines. The sample size was adequate to detect a 15% difference between scores of the different cadres, and the assessment covered 132 CHOs, 129 RMAs and 50 MOs.
Results: The overall mean scores of CHOs, RMAs and MOs were 50.1%, 63.1% and 68.1%, respectively. They were statistically different (p < 0.05). The adjusted model also confirmed the above pattern. CHOs performed well in clinical management of non-communicable diseases and malaria. CHOs also scored well in clinical knowledge for diagnosis. Around 80% of prescriptions written by CHOs for hypertension and diabetes were found correct.
Conclusion: The non-physician MLHP cadre of CHOs deployed in rural facilities under the current PHC initiative in India exhibited the potential to manage ambulatory care for illnesses. Continuous training inputs, treatment protocols and medicines are needed to improve performance of MLHPs. Making comprehensive primary care services available close to people is essential to PHC and well-trained mid-level providers will be crucial for making it a reality in developing countries.
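The comparison of mean vignette scores across the three cadres described above is the kind of analysis a one-way ANOVA handles. The sketch below shows such a comparison on synthetic score vectors; the numbers are placeholders chosen only to mirror the reported ordering of means (CHO < RMA < MO), not the study's data.

```python
# One-way ANOVA comparing mean clinical-vignette scores across three cadres.
# The score vectors are synthetic placeholders that only mimic the reported
# ordering of means (CHO < RMA < MO); they are not the study's data.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
cho = rng.normal(50.1, 10, 132)  # community health officers
rma = rng.normal(63.1, 10, 129)  # rural medical assistants
mo = rng.normal(68.1, 10, 50)    # medical officers

f_stat, p_value = f_oneway(cho, rma, mo)
print(f"F = {f_stat:.1f}, p = {p_value:.3g}")
```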
abstract_id: PUBMED:31427177
How are medication related problems managed in primary care? An exploratory study in patients with diabetes and primary care providers. Background: Medication self-management is important for patients who are controlling diabetes. Achieving medication self-management goals, may depend on treatment complexity and patients' capacities such as health literacy, knowledge and attitude.
Objectives: The aims of this study were to explore how patients with diabetes self-manage their medications, how patients seek support when experiencing problems and how primary healthcare providers identify patients' medication related problems and provide support.
Methods: Semi-structured interviews were conducted among patients with diabetes receiving primary care and with their primary healthcare providers - GPs, nurses, pharmacists and technicians - between January and June 2017. A purposive sampling strategy was used to identify and select participants. An interview guide based on the Cycle of Complexity model was developed. Interviews were audiotaped and transcribed verbatim. Transcripts were coded with a combination of deductive and inductive codes. A thematic analysis was performed to identify categories and themes in the data. Findings were compared with the Cycle of Complexity model.
Results: Twelve patients and 27 healthcare providers were included in the study. From the transcripts 95 codes, 6 categories and 2 major themes were extracted. Patients used practical solutions and gaining knowledge to manage their medication. Their problems were often related to stress and concerns about using medications. A trusted relationship with the healthcare provider was essential for patients to share problems and ask for support. Informal support was sought from family and peer-patients. Healthcare providers perceive problem identification as challenging. They relied on patients coming forward, computer notifications, clinical parameters and gut-feeling. Healthcare providers were able to offer appropriate support if a medication management problem was known.
Conclusion: Patients are confident of finding their way to manage their medications. However, sharing problems with healthcare providers requires a trusted relationship. This is acknowledged by both patients and healthcare providers.
abstract_id: PUBMED:28994631
Health Care Resource Utilization for Outpatient Cardiovascular Disease and Diabetes Care Delivery Among Advanced Practice Providers and Physician Providers in Primary Care. Although effectiveness of diabetes or cardiovascular disease (CVD) care delivery between physicians and advanced practice providers (APPs) has been shown to be comparable, health care resource utilization between these 2 provider types in primary care is unknown. This study compared health care resource utilization between patients with diabetes or CVD receiving care from APPs or physicians. Diabetes (n = 1,022,588) or CVD (n = 1,187,035) patients with a primary care visit between October 2013 and September 2014 in 130 Veterans Affairs facilities were identified. Using hierarchical regression adjusting for covariates including patient illness burden, the authors compared number of primary or specialty care visits and number of lipid panels and hemoglobinA1c (HbA1c) tests among diabetes patients, and number of primary or specialty care visits and number of lipid panels and cardiac stress tests among CVD patients receiving care from physicians and APPs. Physicians had significantly larger patient panels compared with APPs. In adjusted analyses, diabetes patients receiving care from APPs received fewer primary and specialty care visits and a greater number of lipid panels and HbA1c tests compared with patients receiving care from physicians. CVD patients receiving care from APPs received more frequent lipid testing and fewer primary and specialty care visits compared with those receiving care from physicians, with no differences in the number of stress tests. Most of these differences, although statistically significant, were numerically small. Health care resource utilization among diabetes or CVD patients receiving care from APPs or physicians appears comparable, although physicians work with larger patient panels.
abstract_id: PUBMED:30861333
Implementation and Evaluation of Diabetes Clinical Practice Guidelines in a Primary Care Clinic Serving a Hispanic Community. Background: Diabetes is a major health concern in the United States. Poor-quality diabetes care leads to negative outcomes affecting patients and healthcare systems. Research shows that evidence-based clinical practice guidelines from the American Diabetes Association, the Standards of Medical Care in Diabetes-2017, have improved outcomes in the management of diabetes.
Aims: The aim of this improvement project was to improve diabetes care and outcomes in a primary care clinic serving a Hispanic community in Miami-Dade, Florida. Specific objectives of the project were to improve primary care provider attitudes and knowledge of American Diabetes Association guidelines and outcomes in patients with diabetes.
Methods: This two-phase quality improvement project used a pre- and post-test study design. In Phase I, the Diabetes Attitude Survey was administered to determine primary care provider attitudes and knowledge of guidelines. Based on survey results, a 2-hr educational intervention was developed to inform providers regarding the American Diabetes Association clinical practice guidelines. In Phase II, data were collected for 19 clinical quality diabetes outcome measures from agency reports. Guidelines were implemented into practice, and intervention effectiveness was evaluated.
Results: Statistical analysis (paired-sample t-test) revealed postintervention improvements in provider knowledge and attitudes and in patient outcomes. Findings suggest the intervention led to improvements in the quality of diabetes care in this Hispanic clinic.
Linking Evidence To Action: Project success supports the importance of evidence-based clinical practice guideline use to achieve better diabetes outcomes. Implementation of the American Diabetes Association Standards of Medical Care in Diabetes to improve patient outcomes is strongly recommended. It is equally important that clinical agencies educate and update primary care providers to achieve best practice and best patient diabetes outcomes. Other healthcare professionals can use this project design to provide culturally competent care to patients with diabetes.
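For the pre-/post-intervention comparison described above, a paired-sample t-test of the following general form could be used; the numbers below are made up for illustration and are not the project's survey data.

from scipy import stats

pre = [3.1, 2.8, 3.4, 3.0, 2.9, 3.2]   # hypothetical pre-intervention attitude scores
post = [3.6, 3.1, 3.9, 3.4, 3.3, 3.5]  # hypothetical post-intervention scores for the same providers

res = stats.ttest_rel(post, pre)  # paired test: same providers measured twice
print(f"t = {res.statistic:.2f}, p = {res.pvalue:.4f}")  # p < 0.05 would indicate a significant change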
Answer: Yes, the culture of a medical practice does affect the clinical management of diabetes by primary care providers. A study examining the effects of practice culture and organizational structure on the quality of physician decision-making found that practice culture variables significantly influence clinical decision-making with respect to diabetes. The study suggests that practice culture may contribute more to whether essential examinations are performed than patient or physician variables or the structural characteristics of clinical organizations (PUBMED:19299263). This indicates that the environment in which primary care providers operate can have a notable impact on how they manage diabetes care for their patients. |
Instruction: Do treatment quality indicators predict cardiovascular outcomes in patients with diabetes?
Abstracts:
abstract_id: PUBMED:24205325
Do treatment quality indicators predict cardiovascular outcomes in patients with diabetes? Background: Landmark clinical trials have led to optimal treatment recommendations for patients with diabetes. Whether optimal treatment is actually delivered in practice is even more important than the efficacy of the drugs tested in trials. To this end, treatment quality indicators have been developed and tested against intermediate outcomes. No studies have tested whether these treatment quality indicators also predict hard patient outcomes.
Methods: A cohort study was conducted using data collected from more than 10,000 diabetes patients in the Groningen Initiative to Analyze Type 2 Treatment (GIANTT) database and the Dutch Hospital Data register. The included quality indicators measured glucose-, lipid-, blood pressure- and albuminuria-lowering treatment status and treatment intensification. The hard patient outcome was the composite of cardiovascular events and all-cause death. Associations were tested using Cox regression adjusting for confounding, reporting hazard ratios (HR) with 95% confidence intervals.
Results: Lipid and albuminuria treatment status, but not blood pressure-lowering treatment status, were associated with the composite outcome (HR = 0.77, 95% CI 0.67-0.88; HR = 0.75, 95% CI 0.59-0.94). Glucose-lowering treatment status was associated with the composite outcome only in patients with an elevated HbA1c level (HR = 0.72, 95% CI 0.56-0.93). Treatment intensification with glucose-lowering but not with lipid-, blood pressure- and albuminuria-lowering drugs was associated with the outcome (HR = 0.73, 95% CI 0.60-0.89).
Conclusion: Treatment quality indicators measuring lipid- and albuminuria-lowering treatment status are valid quality measures, since they predict a lower risk of cardiovascular events and mortality in patients with diabetes. The quality indicators for glucose-lowering treatment should only be used for restricted populations with elevated HbA1c levels. Intriguingly, the tested indicators for blood pressure-lowering treatment did not predict patient outcomes. These results question whether all treatment indicators are valid measures to judge quality of health care and its economics.
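As a rough illustration of the kind of analysis described in the abstract above (not the authors' actual GIANTT code), the sketch below fits a Cox proportional hazards model relating a binary treatment-quality indicator to a composite outcome; the file name, column names and confounder list are hypothetical.

import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical extract: one row per patient, with follow-up time in years, a
# composite event flag (cardiovascular event or death), a 0/1 quality indicator
# (e.g. lipid-lowering treatment given when indicated) and assumed confounders.
df = pd.read_csv("cohort.csv")

cph = CoxPHFitter()
cph.fit(
    df[[
        "followup_years", "composite_event",
        "lipid_indicator_met",              # exposure of interest (0/1)
        "age", "sex", "diabetes_duration",  # assumed confounders
    ]],
    duration_col="followup_years",
    event_col="composite_event",
)
cph.print_summary()  # hazard ratios with 95% confidence intervals, as reported in the abstract

A hazard ratio below 1 for the indicator column would correspond to the protective associations reported above.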
abstract_id: PUBMED:23386729
Treatment quality indicators predict short-term outcomes in patients with diabetes: a prospective cohort study using the GIANTT database. Objective: To assess whether quality indicators for treatment of cardiovascular and renal risk factors are associated with short-term outcomes in patients with diabetes.
Design: A prospective cohort study using linear regression adjusting for confounders.
Setting: The GIANTT database (Groningen Initiative to Analyse Type 2 Diabetes Treatment) containing data from primary care medical records from The Netherlands.
Participants: 15 453 patients with type 2 diabetes mellitus diagnosed before 1 January 2008. Mean age 66.5 years, 47.5% men.
Exposure: Quality indicators assessing current treatment (CT) status or treatment intensification (TI) for patients with diabetes with elevated cardiovascular or renal risk factors.
Main Outcome Measures: Low-density lipoprotein cholesterol (LDL-C), systolic blood pressure (SBP), and albumin:creatinine ratio (ACR) before and after assessment of treatment quality.
Results: Use of lipid-lowering drugs was associated with better LDL-C levels (-0.41 mmol/litre; 95% CI -0.48 to -0.34). Use of blood pressure-lowering drugs and use of renin-angiotensin system inhibitors in patients with elevated risk factor levels were not associated with better SBP and ACR outcomes, respectively. TI was also associated with better LDL-C (-0.82 mmol/litre; CI -0.93 to -0.71) in patients with elevated LDL-C levels, and with better SBP (-1.26 mm Hg; CI -2.28 to -0.24) in patients with two elevated SBP levels. Intensification of albuminuria-lowering treatment showed a tendency towards better ACR (-2.47 mmol/mg; CI -5.32 to 0.39) in patients with elevated ACR levels.
Conclusions: Quality indicators of TI were predictive of better short-term cardiovascular and renal outcomes, whereas indicators assessing CT status showed association only with better LDL-C outcome.
abstract_id: PUBMED:30439793
Lasso Regression for the Prediction of Intermediate Outcomes Related to Cardiovascular Disease Prevention Using the TRANSIT Quality Indicators. Background: Cardiovascular disease morbidity and mortality are largely influenced by poor control of hypertension, dyslipidemia, and diabetes. Process indicators are essential to monitor the effectiveness of quality improvement strategies. However, process indicators should be validated by demonstrating their ability to predict desirable outcomes. The objective of this study is to identify an effective method for building prediction models and to assess the predictive validity of the TRANSIT indicators.
Methods: On the basis of blood pressure readings and laboratory test results at baseline, the TRANSIT study population was divided into 3 overlapping subpopulations: uncontrolled hypertension, uncontrolled dyslipidemia, and uncontrolled diabetes. A classic statistical method, a sparse machine learning technique, and a hybrid method combining both were used to build prediction models for whether a patient reached therapeutic targets for hypertension, dyslipidemia, and diabetes. The final models' performance for predicting these intermediate outcomes was established using cross-validated areas under the curve (cvAUC).
Results: At baseline, 320, 247, and 303 patients were uncontrolled for hypertension, dyslipidemia, and diabetes, respectively. Among the 3 techniques used to predict reaching therapeutic targets, the hybrid method had a better discriminative capacity (cvAUCs=0.73 for hypertension, 0.64 for dyslipidemia, and 0.79 for diabetes) and succeeded in identifying indicators with a better capacity for predicting intermediate outcomes related to cardiovascular disease prevention.
Conclusions: Even though this study was conducted in a complex population of patients, a set of 5 process indicators were found to have good predictive validity based on the hybrid method.
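As a hedged sketch of the sparse (lasso-type) modelling and cross-validated AUC evaluation described above (not the TRANSIT analysis itself), an L1-penalised logistic regression could be evaluated as follows; the features and outcome below are random stand-ins.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))    # stand-in for process-indicator features
y = rng.integers(0, 2, size=300)  # stand-in for "therapeutic target reached" (0/1)

# L1 penalty shrinks uninformative indicator coefficients to zero, which is how a
# reduced set of predictive indicators can be selected.
lasso_logit = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
cv_auc = cross_val_score(lasso_logit, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC:", round(cv_auc.mean(), 2))

With real indicator data, a cvAUC well above 0.5 (as in the values reported above) would indicate useful discriminative capacity.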
abstract_id: PUBMED:33553682
Treatment outcomes of drug resistant tuberculosis patients with multiple poor prognostic indicators in Uganda: A countrywide 5-year retrospective study. Background: Comorbid conditions and adverse drug events are associated with poor treatment outcomes among patients with drug-resistant tuberculosis (DR-TB). This study aimed at determining the treatment outcomes of DR-TB patients with poor prognostic indicators in Uganda.
Methods: We reviewed treatment records of DR-TB patients from 16 treatment sites in Uganda. Eligible patients had confirmed DR-TB, a treatment outcome in 2014-2019, and at least one of 15 pre-defined poor prognostic indicators at treatment initiation or during therapy. The pre-defined poor prognostic indicators were HIV co-infection, diabetes, heart failure, malignancy, psychiatric illness/symptoms, severe anaemia, alcohol use, cigarette smoking, low body mass index, elevated creatinine, hepatic dysfunction, hearing loss, resistance to fluoroquinolones and/or second-line aminoglycosides, previous exposure to second-line drugs (SLDs), and pregnancy. Tuberculosis treatment outcomes were treatment success, mortality, loss to follow-up, and treatment failure as defined by the World Health Organisation. We used logistic and Cox proportional hazards regression analysis to determine predictors of treatment success and mortality, respectively.
Results: Of 1122 DR-TB patients, 709 (63.2%) were male and the median (interquartile range, IQR) age was 36.0 (28.0-45.0) years. A total of 925 (82.4%) had ≥2 poor prognostic indicators. Treatment success and mortality occurred among 806 (71.8%) and 207 (18.4%) patients, whereas treatment loss to follow-up and failure were observed among 96 (8.6%) and 13 (1.2%) patients, respectively. Mild (OR: 0.57, 95% CI 0.39-0.84, p = 0.004), moderate (OR: 0.18, 95% CI 0.12-0.26, p < 0.001) and severe anaemia (OR: 0.09, 95% CI 0.05-0.17, p < 0.001) and previous exposure to SLDs (OR: 0.19, 95% CI 0.08-0.48, p < 0.001) predicted lower odds of treatment success, while the number of poor prognostic indicators (HR: 1.62, 95% CI 1.30-2.01, p < 0.001, for every additional poor prognostic indicator) predicted mortality.
Conclusion: Among DR-TB patients with multiple poor prognostic indicators, mortality was the most frequent unsuccessful outcome. Every additional poor prognostic indicator increased the risk of mortality, while anaemia and previous exposure to SLDs were associated with lower odds of treatment success. The management of anaemia among DR-TB patients needs to be evaluated in prospective studies. DR-TB programs should also optimise DR-TB treatment the first time it is initiated.
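A minimal sketch of the logistic part of the analysis described above (odds of treatment success), assuming hypothetical variable names rather than the Ugandan DR-TB dataset:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("drtb_cohort.csv")  # hypothetical extract: one row per patient

# Logistic regression of treatment success on assumed prognostic indicators.
res = smf.logit(
    "treatment_success ~ C(anaemia_grade) + prior_sld_exposure + age + hiv_status",
    data=df,
).fit()

odds_ratios = np.exp(res.params)   # OR < 1 means lower odds of treatment success
conf_int = np.exp(res.conf_int())  # 95% confidence intervals on the odds-ratio scale
print(pd.concat([odds_ratios, conf_int], axis=1))

The mortality analysis would follow the same pattern with a Cox proportional hazards model and time to death as the outcome.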
abstract_id: PUBMED:18695594
Quality indicators for the prevention and management of cardiovascular disease in primary care in nine European countries. Background: With free movement of labour in Europe, European guidelines on cardiovascular care and the enlargement of the European Union to include countries with disparate health care systems, it is important to develop common quality standards for cardiovascular prevention and risk management across Europe.
Methods: Panels from nine European countries (Austria, Belgium, Finland, France, Germany, Netherlands, Slovenia, United Kingdom and Switzerland) developed quality indicators for the prevention and management of cardiovascular disease in primary care. A two-stage modified Delphi process was used to identify indicators that were judged valid for necessary care.
Results: Forty-four out of 202 indicators (22%) were rated as valid. These focused predominantly on secondary prevention and management of established cardiovascular disease and diabetes. Less agreement on indicators of preventive care or on indicators for the management of hypertension and hypercholesterolaemia in patients without established disease was observed. Although 85% of the 202 potential indicators assessed were rated valid by at least one panel, lack of consensus among panels meant that the set that could be agreed upon among all panels was much smaller.
Conclusion: Indicators for the management of established cardiovascular disease have been developed, which can be used to measure the quality of cardiovascular care across a wide range of countries. Less agreement on how the quality of preventive care should be assessed was observed, probably caused by differences in health systems, culture and attitudes to prevention.
abstract_id: PUBMED:35260956
Quality Indicators for High-Need Patients: a Systematic Review. Background: Healthcare systems are increasingly implementing programs for high-need patients, who often have multiple chronic conditions and complex social situations. Little, however, is known about quality indicators that might guide healthcare organizations and providers in improving care for high-need patients. We sought to conduct a systematic review to identify potential quality indicators for high-need patients.
Methods: This systematic review (CRD42020215917) searched PubMed, CINAHL, and EMBASE; the guideline clearinghouses ECRI and GIN; and Google Scholar. We included publications suggesting, evaluating, and utilizing indicators to assess quality of care for high-need patients. Critical appraisal of the indicators addressed the development process, endorsement and adoption, and characteristics such as feasibility. We standardized indicators by patient population subgroups to facilitate comparisons across different indicator groups.
Results: The search identified 6964 citations. Of these, 1382 publications were obtained as full text, and 53 studies met inclusion criteria. We identified over 1700 quality indicators across studies. Quality indicator characteristics varied widely. The scope of the selected indicators ranged from detailed criterion (e.g., "annual eye exam") to very broad categories (e.g., "care coordination"). Some publications suggested disease condition-specific indicators (e.g., diabetes), some used condition-independent criteria (e.g., "documentation of the medication list in the medical record available to all care agencies"), and some publications used a mixture of indicator types.
Discussion: We identified and evaluated existing quality indicators for a complex, heterogeneous patient group. Although some quality indicators were not disease-specific, we found very few that accounted for social determinants of health and behavioral factors. More research is needed to develop quality indicators that address patient risk factors.
abstract_id: PUBMED:38113808
Potency of quality indicators in Dutch and international diabetes registries. Background: Diabetes mellitus forms a slow pandemic. Cardiovascular risk and quality of diabetes care are strongly associated. Quality indicators improve diabetes management and reduce mortality and costs. Various national diabetes registries render national quality indicators. We describe diabetes care indicators for Dutch children and adults with diabetes, and compare them with indicators established by registries worldwide.
Methods: Indicator scores were derived from the Dutch Pediatric and Adult Registry of Diabetes. Indicator sets of other national diabetes registries were collected and juxtaposed with global and continental initiatives for indicator sets.
Results: This observational cohort study included 3738 patients representative of the Dutch diabetic outpatient population. The Dutch Pediatric and Adult Registry of Diabetes harbors ten quality indicators comprising treatment volumes, HbA1c control, foot examination, insulin pump therapy, and real-time continuous glucose monitoring. Worldwide, nine national registries record quality indicators, with great variety between registries. HbA1c control is recorded most frequently, and no indicator is reported among all registries.
Conclusions: Wide variety among quality indicators recorded by national diabetes registries hinders international comparison and interpretation of quality of diabetes care. The potential of quality evaluation will be greatly enhanced when diabetes care indicators are aligned in an international standard set with variation across countries taken into consideration.
abstract_id: PUBMED:24202130
Improving outcomes for ESRD patients: shifting the quality paradigm. The availability of life-saving dialysis therapy has been one of the great successes of medicine in the past four decades. Over this time period, despite treatment of hundreds of thousands of patients, the overall quality of life for patients with ESRD has not substantially improved. A narrow focus by clinicians and regulators on basic indicators of care, like dialysis adequacy and anemia, has consumed time and resources but not resulted in significantly improved survival; also, frequent hospitalizations and dissatisfaction with the care experience continue to be seen. A new quality paradigm is needed to help guide clinicians, providers, and regulators to ensure that patients' lives are improved by the technically complex and costly therapy that they are receiving. This paradigm can be envisioned as a quality pyramid: the foundation is the basic indicators (outstanding performance on these indicators is necessary but not sufficient to drive the primary outcomes). Overall, these basics are being well managed currently, but there remains an excessive focus on them, largely because of publically reported data and regulatory requirements. With a strong foundation, it is now time to focus on the more complex intermediate clinical outcomes-fluid management, infection control, diabetes management, medication management, and end-of-life care among others. Successfully addressing these intermediate outcomes will drive improvements in the primary outcomes, better survival, fewer hospitalizations, better patient experience with the treatment, and ultimately, improved quality of life. By articulating this view of quality in the ESRD program (pushing up the quality pyramid), the discussion about quality is reframed, and also, clinicians can better target their facilities in the direction of regulatory oversight and requirements about quality. Clinicians owe it to their patients, as the ESRD program celebrates its 40th anniversary, to rekindle the aspirations of the creators of the program, whose primary goal was to improve the lives of the patients afflicted with this devastating condition.
abstract_id: PUBMED:21536606
Review: relation between quality-of-care indicators for diabetes and patient outcomes: a systematic literature review. The authors conducted a systematic literature review to assess whether quality indicators for diabetes care are related to patient outcomes. Twenty-four studies were included that formally tested this relationship. Quality indicators focusing on structure or processes of care were included. Descriptive analyses were conducted on the associations found, differentiating for study quality and level of analysis. Structure indicators were mostly tested in studies with weak designs, showing no associations with surrogate outcomes or mixed results. Process indicators focusing on intensification of drug treatment were significantly associated with better surrogate outcomes in three high-quality studies. Process indicators measuring numbers of tests or visits conducted showed mostly negative results in four high-quality studies on surrogate and hard outcomes. Studies performed on different levels of analysis and studies of lower quality gave similar results. For many widely used quality indicators, there is insufficient evidence that they are predictive of better patient outcomes.
abstract_id: PUBMED:36129044
Longitudinal Adherence to Diabetes Quality Indicators and Cardiac Disease: A Nationwide Population-Based Historical Cohort Study of Patients With Pharmacologically Treated Diabetes. Background: Evidence of the cardiovascular benefits of adherence to quality indicators in diabetes care over a period of years is lacking. Methods and Results: We conducted a population-based, historical cohort study of 105 656 people aged 45 to 80 with pharmacologically treated diabetes who were free of cardiac disease in 2010. Data were retrieved from electronic medical records of the 4 Israeli health maintenance organizations. The association between the level of adherence to national quality indicators (2006-2010: adherence assessment) and the incidence of the cardiac outcome, ischemic heart disease or heart failure (2011-2016: outcome assessment), was estimated using Cox proportional hazards models. During 529 551 person-years of follow-up, 19 246 patients experienced cardiac disease. An inverse dose-response association between the level of adherence and the risk of cardiac morbidity was shown for most of the quality indicators. The associations were modified by age, with stronger associations among younger patients (<65 years). Low adherence to low-density lipoprotein cholesterol testing (≤2 years) during the first 5 years was associated with a 41% increased risk of cardiac morbidity among younger patients. Patients who had uncontrolled low-density lipoprotein cholesterol in all of the first 5 years had hazard ratios of 1.60 (95% CI, 1.49-1.72) and 1.23 (95% CI, 1.14-1.32) among patients aged <65 and ≥65 years, respectively, compared with those who achieved the target level. Patients who failed to achieve target levels of glycated hemoglobin or blood pressure had an increased risk (hazard ratios, 1.50-1.69) for cardiac outcomes. Conclusions: Longitudinal adherence to quality indicators in diabetes care is associated with a reduced risk of cardiac morbidity. Implementation of programs that measure and enhance quality of care may improve the health outcomes of people with diabetes.
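To illustrate the kind of age-modified Cox analysis described above (a sketch under assumed column names, not the Israeli health maintenance organization data), the same model could be fitted separately in the two age strata:

import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("adherence_cohort.csv")  # hypothetical extract: one row per patient

# Fit the same Cox model within each age stratum to examine effect modification by age.
for label, stratum in [("<65", df[df["age"] < 65]), (">=65", df[df["age"] >= 65])]:
    cph = CoxPHFitter()
    cph.fit(
        stratum[["followup_years", "cardiac_event", "adherence_level", "sex"]],
        duration_col="followup_years",
        event_col="cardiac_event",
    )
    hr = cph.hazard_ratios_["adherence_level"]
    print(f"age {label}: HR per one-step higher adherence = {hr:.2f}")

A stronger protective hazard ratio in the younger stratum would mirror the effect modification by age reported in the abstract.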
Answer: Yes, treatment quality indicators do predict cardiovascular outcomes in patients with diabetes. A cohort study using data from over 10,000 diabetes patients in the Groningen Initiative to Analyze Type 2 Treatment (GIANTT) database and Dutch Hospital Data register found that lipid and albuminuria treatment status were associated with a lower risk of cardiovascular events and mortality in patients with diabetes. Glucose-lowering treatment status was associated with the composite outcome only in patients with an elevated HbA1c level. Treatment intensification with glucose-lowering drugs was also associated with the outcome (PUBMED:24205325).
Another study using the GIANTT database showed that quality indicators of treatment intensification were predictive of better short-term cardiovascular and renal outcomes, whereas indicators assessing current treatment status showed association only with better LDL-C outcome (PUBMED:23386729).
Furthermore, a nationwide population-based historical cohort study of patients with pharmacologically treated diabetes in Israel demonstrated that longitudinal adherence to quality indicators in diabetes care is associated with reduced risk of cardiac morbidity. Patients who failed to achieve target levels of glycated hemoglobin or blood pressure had an increased risk for cardiac outcomes (PUBMED:36129044).
These findings suggest that certain treatment quality indicators, particularly those related to lipid and albuminuria-lowering treatment status, as well as glucose-lowering treatment intensification in specific populations, are valid measures for predicting cardiovascular outcomes in patients with diabetes. However, the evidence for blood pressure-lowering treatment indicators was not as strong, indicating that not all treatment indicators may be equally valid for predicting patient outcomes (PUBMED:24205325). |