Instruction: Is handheld optical coherence tomography reliable in infants and young children with and without nystagmus?

Abstracts:

abstract_id: PUBMED:24222299 Is handheld optical coherence tomography reliable in infants and young children with and without nystagmus? Purpose: To evaluate the reliability of the spectral domain handheld OCT (HH-OCT) in assessing foveal morphology in children with and without nystagmus. Methods: Forty-nine subjects with nystagmus (mean age 43.83 months; range, 1-82 months) and 48 controls (mean age 43.02 months; range, 0 to 83 months) were recruited and scanned using HH-OCT. A minimum of two separate volumetric scans of the fovea were obtained on the same examination day. The images were imported into ImageJ software, where manual retinal layer segmentation of the central foveal B-scan was performed. Agreement between scans was assessed by determining the intraclass correlation coefficients (ICC) and Bland-Altman plots. Results: Both the nystagmus and control groups showed an excellent degree of reproducibility between two examinations, with ICCs greater than 0.96 for central macular thickness (CMT) and greater than 0.8 for the outer nuclear layer and outer segment of the photoreceptors. The nerve fiber layer, ganglion cell layer, outer plexiform layer, inner segment of the photoreceptors, and retinal pigment epithelium were less reliable, with ICCs of less than 0.7. There was no difference in the reliability of scans obtained in children with nystagmus as compared with controls, and both groups had good intereye agreement, with ICCs greater than 0.94 for CMT. Conclusions: We have shown for the first time that the HH-OCT provides reliable measurements in children with and without nystagmus. This is important, as the HH-OCT will have a greater diagnostic and prognostic role in young children with nystagmus and other eye diseases in the future.

abstract_id: PUBMED:26458476 The use of handheld spectral domain optical coherence tomography in pediatric ophthalmology practice: Our experience of 975 infants and children. Purpose: Optical coherence tomography (OCT) is an important imaging tool for assessing retinal architecture. In this article, we report a single center's experience of using handheld spectral domain (SD)-OCT in a pediatric population using the Envisu 2300 (Bioptigen Inc., Research Triangle Park, NC, USA). Methods: We studied SD-OCT images from 975 patients imaged from January 2011 to December 2014. The variety of cases that underwent SD-OCT was analyzed. Case examples from different clinical scenarios were selected to showcase unique examples of many diseases. Results: Three hundred and sixty-eight infants (37.7%) were imaged for retinopathy of prematurity, 362 children (37.1%) underwent the test for evaluation of suboptimal vision or unexplained vision loss, 126 children (12.9%) for evaluation of nystagmus or night blindness, 54 children (5.5%) for an intraocular tumor or a mass lesion such as retinoblastoma, and 65 children (6.7%) for other diseases of the pediatric retina. The unique findings in the retinal morphology seen with some of these diseases are discussed. Conclusion: The handheld SD-OCT is useful in the evaluation of pediatric retinal diseases. The test is useful in the assessment of vision development in premature children and in the evaluation of unexplained vision loss and amblyopia, nystagmus and night blindness, and intraocular tumors (including retinoblastoma).
abstract_id: PUBMED:27628287 Handheld Optical Coherence Tomography in Paediatric Ophthalmology: Experience of the Department of Ophthalmology in Giessen. Introduction: Optical coherence tomography is an important tool for the imaging and analysis of retinal structures. The usability of conventional table-top devices is limited in children. We report on our experience with a handheld spectral domain optical coherence tomography device (HH-SD-OCT, Bioptigen™) in infants and young children in our daily practice. Methods: Between October 2014 and April 2016, we investigated 259 patients. Indications and diagnoses were assessed. Individual examples are shown to demonstrate the advantages and disadvantages of the novel technique. Results: It was possible to examine 259 children of at least 7 weeks of age (median: 1.59 years; ± 1.32 SD) with a mean investigation time of 18.3 minutes (± 8.3 SD). The most frequent indication was retinal assessment in premature infants (32.8%). Nystagmus, retinal dystrophies, reduced visual acuity and albinism accounted for an additional 37.4% of all indications. Conclusions: Handheld OCT is a beneficial complement for the diagnosis of diseases in paediatric ophthalmology. As a complement to established methods like wide-field fundus photography, HH-SD-OCT allows the physician to assess and follow up new objective structural information. As the Bioptigen does not have an eye tracker, it is challenging to orient the scan in the posterior retinal pole, particularly in cases of unstable fixation. This complicates follow-up investigations, which can only be performed with considerable additional programming and analysis effort.

abstract_id: PUBMED:27685500 FOVEAL HYPOPLASIA: DIAGNOSIS USING OPTICAL COHERENCE TOMOGRAPHY ANGIOGRAPHY. Purpose: To describe the clinical presentation and imaging findings of a patient with foveal hypoplasia. Methods: Observational case report of one patient. The clinical presentation of the patient was documented by fluorescein angiography, optical coherence tomography, and optical coherence tomography angiography. Results: A 55-year-old healthy white female with bilateral foveal hypoplasia without nystagmus is presented. There were no signs suggestive of ocular albinism. Absence of the foveal depression was noted on both clinical examination and optical coherence tomography. Lack of a foveal avascular zone was confirmed on fluorescein angiography and optical coherence tomography angiography. Conclusion: To the best of our knowledge, this is the first report of findings in foveal hypoplasia examined by angio-optical coherence tomography. Optical coherence tomography angiography is an easy, rapid, and noninvasive tool that allows imaging of the retinal microvasculature without intravenous dye injection.

abstract_id: PUBMED:15183803 Optical coherence tomography under general anesthesia in a child with nystagmus. Purpose: To describe a method that permits optical coherence tomography scanning to be performed under general anesthesia. Design: Observational case report. Methods: A 7-year-old child underwent optical coherence tomography scanning under general anesthesia. He was positioned prone, with the neck extended to enable vertical facial alignment on the scanner. An anti-Trendelenburg tilt on the operating table minimized the degree of neck extension required.
Intravenous fluids, modest ventilator airway pressures, and compression stockings were used to minimize perioperative hemodynamic disturbance and the potential for orthostatic hypotension associated with the anti-Trendelenburg tilt. Results: High-quality, diagnostically important optical coherence tomography scans were obtained. Conclusion: With appropriate patient positioning and anesthetic management, optical coherence tomography scanning can be performed on uncooperative adults and children under general anesthesia.

abstract_id: PUBMED:24161406 Potential of handheld optical coherence tomography to determine cause of infantile nystagmus in children by using foveal morphology. Objective: To investigate the feasibility of handheld (HH) ultra-high-resolution spectral-domain optical coherence tomography (SD-OCT) in young children with nystagmus, to determine its sensitivity and specificity in classifying foveal abnormalities, and to investigate its potential to determine the cause of infantile nystagmus with the use of foveal morphology. Design: Prospective, case-control study. Participants And Controls: A total of 50 patients with nystagmus and 50 healthy control subjects (mean age, 3.2 years; range, 0-8 years). Methods: Each patient was scanned using HH SD-OCT (Bioptigen Inc., Research Triangle Park, NC) without sedation, and foveal morphology was classified into 1 of 4 categories: (1) typical foveal hypoplasia (predicting clinical diagnosis of albinism, PAX6 mutations, or isolated foveal hypoplasia); (2) atypical foveal hypoplasia (predicting achromatopsia); (3) other foveal changes (corresponding to retinal dystrophies); and (4) normal fovea (predicting idiopathic or manifest latent nystagmus). An independent interpretation of the HH SD-OCT scans by masked examiners was performed, and the sensitivity and specificity of the predicted diagnosis were calculated. Main Outcome Measures: The success rate of image acquisition and the sensitivity and specificity of the HH SD-OCT in classifying foveal abnormalities. Results: In 94% of examinations, HH SD-OCT was successful. Twenty-three patients had typical foveal hypoplasia (category 1). Of these patients, 21 were diagnosed with albinism and 2 were diagnosed with PAX6 mutations. Five patients were classified as atypical (category 2) and diagnosed with achromatopsia. Six patients had other abnormal foveal morphology (category 3) and were diagnosed with retinal dystrophy. Sixteen patients had normal foveal morphology (category 4). Of these patients, 12 were diagnosed with idiopathic nystagmus and 4 were diagnosed with manifest latent nystagmus. Sensitivities of HH SD-OCT for classifying typical foveal hypoplasia, atypical foveal hypoplasia, other abnormal foveal morphology, and normal morphology were 92.8%, 86.7%, 41.1%, and 88.4%, respectively, with specificities of 91.4%, 94.8%, 97.7%, and 95.1%, respectively. Conclusions: We demonstrate excellent feasibility of HH SD-OCT in the diagnosis of conditions associated with infantile nystagmus. The HH SD-OCT classification of foveal abnormalities was highly sensitive and specific. This classification was used to determine the underlying cause of infantile nystagmus. Handheld SD-OCT in early childhood can facilitate focused investigations and earlier diagnosis. This is important in an era when potentially time-sensitive treatment, such as gene therapy, is imminent.
abstract_id: PUBMED:32832248 Reliability and Recommended Settings for Pediatric Circumpapillary Retinal Nerve Fiber Layer Imaging Using Hand-Held Optical Coherence Tomography. Purpose: To investigate the feasibility and reliability of 3-dimensional full circumpapillary retinal nerve fiber layer (cpRNFL) analysis in children, with and without glaucoma, without the use of sedation, and to recommend a protocol for hand-held optical coherence tomography use. Methods: A cohort of pediatric glaucoma patients and normal children were imaged with hand-held optical coherence tomography to assess the feasibility of obtaining the full cpRNFL. Two consecutive scans were acquired in a smaller sample to investigate test-retest repeatability and interassessor reproducibility. The cpRNFL thickness was assessed in four quadrants, at several visual angles from the optic nerve center. Results: Scanning was attempted in both eyes of 90 children with pediatric glaucoma and 180 controls to investigate feasibility (mean age, 6.98 ± 4.42 years). Scanning was not possible in 68 eyes of children with glaucoma, mainly owing to nystagmus, unclear optical media, or high refractive errors. Where three-dimensional imaging was possible, success at obtaining the full cpRNFL was 67% in children with glaucoma and 89% in controls. Seventeen children with pediatric glaucoma and 34 controls contributed to the reliability analysis (mean age, 6.3 ± 3.63 years). For repeatability, intraclass correlation coefficients across quadrants ranged from 0.63 to 0.82 at 4° and improved to 0.88 to 0.94 at 6°. Intraclass correlation coefficients for reproducibility were also highest at 6° (>0.97 across all quadrants). Conclusions: We demonstrate that acquisition and measurement of cpRNFL thickness values using 3-dimensional hand-held optical coherence tomography volumes in awake children is both feasible and reliable and is optimal at 6° from the optic nerve center. Translational Relevance: Our recommended protocol provides guidance on how pediatric optic nerve pathologies are managed by clinicians.

abstract_id: PUBMED:30157795 Optical coherence tomography findings of falciform retinal detachment complicated with persistent fetal vasculature. Background: Falciform retinal detachment (FRD) usually causes pronounced retinal wrinkles, and the prognosis of visual function is poor. In the present study, we report a rare case of FRD in which optical coherence tomography (OCT) findings revealed relatively good visual function. Case Presentation: This study involved a 22-year-old female who had previously been diagnosed with FRD at 2 years of age, and who presented with microphthalmus in both eyes with pronounced retinal folds from the optic disc to the inferior-temporal side. Based on the clinical findings, we diagnosed it as persistent fetal vasculature (PFV). We found the visual function in her left eye to be relatively poor, yet from 6 to 22 years of age, the corrected visual acuity in that eye remained at 0.08. Although nystagmus was present, Goldmann perimetry showed a relatively wider visual field than expected. Optical coherence tomography (OCT) findings revealed that the retinal layer structure near the FRD was relatively well maintained, except for the temporal peripheral region. Conclusions: Our findings reveal that OCT examination can be considered useful for predicting visual function in cases of FRD.

abstract_id: PUBMED:25803824 Applications of optical coherence tomography in pediatric clinical neuroscience.
For nearly two centuries, the ophthalmoscope has permitted examination of the retina and optic nerve, the only axons directly visualized by the physician. The retinal ganglion cells project their axons, which travel along the innermost retina to form the optic nerve, marking the beginning of the anterior visual pathway. Both the structure and function of the visual pathway are essential components of the neurologic examination, as it can be involved in numerous acquired, congenital and genetic central nervous system conditions. The development of optical coherence tomography now permits the pediatric neuroscientist to visualize and quantify the optic nerve and retinal layers with unprecedented resolution. As optical coherence tomography becomes more accessible and integrated into research and clinical care, the pediatric neuroscientist may have the opportunity to utilize and/or interpret results from this device. This review describes the basic technical features of optical coherence tomography and highlights its potential clinical and research applications in pediatric clinical neuroscience, including optic nerve swelling, optic neuritis, tumors of the visual pathway, vigabatrin toxicity, nystagmus, and neurodegenerative conditions.

abstract_id: PUBMED:28481040 Optical coherence tomography morphology and evolution in cblC disease-related maculopathy in a case series of very young patients. Purpose: To describe the retinal structure of a group of patients affected by methylmalonic aciduria with homocystinuria, cblC type, caused by mutations in the MMACHC gene, using spectral domain optical coherence tomography (SD-OCT). Methods: Young patients (n = 11, age 0-74 months) with cblC disease, detected by newborn screening or clinically diagnosed within 40 days of life, underwent molecular analysis and complete ophthalmic examination, including fundus photography and SD-OCT. In one case, we also performed fluorescein angiography (FA) and standard electroretinography (ERG). Results: Molecular analysis of the MMACHC gene fully confirmed cblC disease in nine of 11 patients. Two patients harboured only a single heterozygous pathogenic MMACHC mutation, and large unbalanced rearrangements were excluded by array-CGH analysis in both. All patients except two showed a bilateral maculopathy. In general, retinal changes were first observed before one year of age and progressed to a well-established maculopathy. Measurable visual acuities ranged from normal vision, in keeping with age, to bilateral, severe impairment of central vision. Nystagmus was present in six patients. Spectral domain optical coherence tomography (SD-OCT) showed macular thinning with severe alterations in outer, and partial sparing of inner, retinal layers. Conclusion: Patients affected by cblC disease may frequently show an early onset maculopathy with variable ophthalmoscopic appearance. Spectral domain optical coherence tomography (SD-OCT) broadens the knowledge of subtle retinal alterations during the disease's progression and helps to shed light on the pathological mechanism of maculopathy development.

Answer: Yes, handheld optical coherence tomography (HH-OCT) has been shown to be reliable in infants and young children with and without nystagmus.
A study evaluating the reliability of spectral domain HH-OCT in assessing foveal morphology in children with and without nystagmus found an excellent degree of reproducibility between two examinations with intraclass correlation coefficients (ICCs) greater than 0.96 for central macular thickness (CMT) and greater than 0.8 for the outer nuclear layer and outer segment of the photoreceptors. The study concluded that HH-OCT provides reliable measurements in children with and without nystagmus, which is important for its future diagnostic and prognostic role in young children with nystagmus and other eye diseases (PUBMED:24222299). Additionally, other studies have reported on the usefulness of HH-OCT in pediatric ophthalmology practice, including its application in a variety of pediatric retinal diseases, and have highlighted its utility in the assessment of vision development in premature children, evaluation of unexplained vision loss and amblyopia, nystagmus and night blindness, and intraocular tumors (PUBMED:26458476). Furthermore, experiences with HH-SD-OCT in infants and young children have demonstrated that it is a beneficial complement for the diagnosis of diseases in pediatric ophthalmology, despite challenges in orienting the scan in the posterior retinal pole in cases of unstable fixation (PUBMED:27628287). In summary, the evidence suggests that HH-OCT is a reliable tool for evaluating retinal morphology in infants and young children, including those with nystagmus, and it is increasingly being integrated into pediatric ophthalmology practice for the diagnosis and management of various eye conditions.
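Editorial note on the statistic behind these reliability claims: the intraclass correlation coefficient (ICC) quoted in PUBMED:24222299 and PUBMED:32832248 can be read, in its common one-way random-effects form, as the share of total measurement variance that comes from true between-subject differences rather than scan-to-scan noise. This is a standard textbook definition offered for orientation, not a formula stated in the abstracts themselves:

$$\mathrm{ICC} = \frac{\sigma_b^2}{\sigma_b^2 + \sigma_w^2}$$

where $\sigma_b^2$ is the between-subject variance and $\sigma_w^2$ the within-subject (repeat-scan) variance. On this reading, an ICC above 0.96 for central macular thickness implies that under this model only about 4% of the observed variance is measurement noise, whereas for layers with ICCs below 0.7 more than 30% of the variance is noise, which is why the abstracts call those layers less reliable.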
Instruction: Intrauterine insemination: is the timing correct?

Abstracts:

abstract_id: PUBMED:1018498 Perspectives of intrauterine contraception. The author describes current research in intrauterine contraception. It proceeds in two directions: improving the design of inert devices and constructing an optimal bioactive IUD. The design of the inert IUD should be adapted as much as possible to the anatomical and functional conditions of the uterus. In developing new bioactive devices, the aim is to find the optimal dosage of metals and hormones, as well as new materials to act as carriers of bioactive substances. With new constructions and the determination of the optimal dosage of antifertility and antifibrinolytic substances, it is hoped to reduce the three main insufficiencies of intrauterine contraception: bleeding, expulsion and pregnancy.

abstract_id: PUBMED:6778115 Intrauterine contraceptive devices: evaluation by sonography. The sonographic features of intrauterine contraceptive devices (IUDs), which allow identification of their intrauterine location, were evaluated in vitro and clinically in 50 patients. Posterior acoustic shadowing was seen in 90% of patients, entrance-exit reflections in 65%, and type-specific morphology in 94% of patients with Lippes Loops and in 81% with Copper 7 devices. Recognition of the characteristic features of the IUD and identification of endometrial mucosa around the IUD will allow unequivocal and accurate determination of its correct intrauterine location.

abstract_id: PUBMED:25153438 Repeated surgery for recurrent intrauterine adhesions? Recurrence of intrauterine adhesions is high even when a correct hysteroscopic treatment is performed. Primary and secondary means of preventing intrauterine adhesions are therefore important. Patient information is a key factor in the obstetric prognosis of this disease.

abstract_id: PUBMED:32175351 A preliminary study on a patented intrauterine stent in the treatment of recurrent intrauterine adhesions with poor prognosis. Background: Management of intrauterine adhesions (IUAs) is challenging, mainly because there is no ideal method to prevent their recurrence. Recurrence of moderate to severe IUAs after conventional treatments entails a poorer prognosis in these patients. This study aimed to assess the safety and effectiveness of a patented intrauterine stent as a barrier in the treatment of recurrent IUAs with poor prognosis. Methods: This was a retrospective, observational study of 13 women with IUAs, admitted to the Third Xiangya Hospital of Central South University from June 2018 to September 2019. After conventional treatments, moderate to severe adhesions were still evident. A patented intrauterine stent was then used as a barrier to prevent the reformation of adhesions after hysteroscopic adhesiolysis (HA). All cases received a second-look hysteroscopy after 2-3 menstrual cycles. American Fertility Society (AFS) scores were noted for all cases. Menstrual pattern, recurrence of adhesions, and reproductive outcomes were also evaluated. Results: The mean number of previous HA procedures among the 13 patients was 2.7, and the mean AFS score after conventional treatments was still 8.7. Before the use of the stent, 46.2% of the patients had amenorrhea caused by IUAs. After using the intrauterine stent as a barrier, the mean AFS score was 1.7. The menstrual improvement rate was 100%, while the rate of resumption of normal menstrual volume was 53.8%.
The second-look hysteroscopy revealed a correct position of the stents and no obvious recurrence of IUAs in all cases. After the removal of the stent, the patients were followed up for 2-13 months (mean: 7 months); no amenorrhea was observed, the recurrence rate of IUAs was 25.0%, and one patient became pregnant. Conclusions: Even in recurrent-IUA patients with a very poor prognosis, the patented intrauterine stent proved very effective in preventing the recurrence of adhesions, on condition that its correct position inside the uterine cavity is achieved. Given the extremely poor condition of the studied population, the recurrence rate of IUAs and the pregnancy rate after removal of the stent are still discouraging, although the improvement in menstruation is encouraging.

abstract_id: PUBMED:24947600 The intrauterine device and the intrauterine system. Intrauterine contraception is used by about 100 million women worldwide, making it the most popular form of fertility regulation. In UK community contraception clinics, however, long-acting reversible contraception has increased to 28% of users, and intrauterine contraception accounts for only 8% of methods used by women accessing these services. Potential exists to increase uptake of these more effective methods. In this chapter, we review the clinical advantages, disadvantages and cost-effectiveness of intrauterine contraception. We discuss the management of complications along with advice for trainers, and briefly consider issues in developing countries.

abstract_id: PUBMED:618165 Ability to conceive after intentional interruption of intrauterine contraception. Out of 3415 first insertions, the intrauterine contraceptive device was removed in 152 women (4.45%) owing to planned pregnancy. Nineteen cases were lost to follow-up, so that 133 remained for further analysis. Of these 133 women, 120 (90.23%) became pregnant and the remaining 13 (9.77%) did not. In the first three months following removal of the intrauterine contraceptive device, 77 women (57.89%) conceived; 3-6 months after removal, 22 (16.54%); 6-12 months, 18 (13.53%); and after one year, 3 women (2.26%) were recorded pregnant. The ability to conceive is analysed according to the age of the women surveyed, the length of use of intrauterine contraceptive devices, and the number of previous births and abortions. It is concluded that the use of intrauterine contraception has no influence on the later ability and speed of conception and that, for this reason, this contraceptive method can be considered fully reversible.

abstract_id: PUBMED:7065991 Intrauterine devices: insertion and removal: practical procedures. Intrauterine devices (IUDs) remain a popular and common method of contraception in Australia. Their insertion and removal are commonplace in general practice. Dr Ken Waldron believes the success of the IUD depends, in part, on observing correct insertion techniques.

abstract_id: PUBMED:17425833 Use of three-dimensional ultrasound in evaluating the intrauterine position of a levonorgestrel-releasing intrauterine system. This paper reports the first case of three-dimensional (3-D) transvaginal ultrasonography (TVS) imaging of a malpositioned levonorgestrel-releasing intrauterine system (LNG-IUS).
In patients carrying an LNG-IUS, the application of 3-D TVS with the adjunctive volume contrast imaging in the coronal plane (VCI-C) and inversion rendering modes clearly displays the spatial position of the LNG-IUS in relation to the uterine cavity much better than two-dimensional ultrasound.

abstract_id: PUBMED:7095705 Intrauterine contraception. Risks, indications, trends. Intrauterine contraception is used by approximately 1 million fertile women in Germany. IUDs work through various factors, i.e. impaired activity of spermatozoa, defects in the lining surface epithelium of the endometrium, and inhibitory effects towards the blastocyst itself. The main risks of intrauterine contraception are perforation of the uterus at the time of insertion, irregular bleeding, ascending infections and ectopic pregnancies. If intrauterine pregnancy occurs despite the presence of an IUD (2-4%), the rate of abortion is considerably high. In a few of these cases, septic sequelae may complicate the spontaneous abortion. After removal of an IUD of any type, fertility immediately returns to normal, with the exception of cases suffering from inflammatory complications in association with an IUD in situ. The indication for the application of an IUD should be seriously considered; the use of a formal sheet for informed consent is advisable.

abstract_id: PUBMED:7467621 Intrauterine growth curves for German-speaking countries (author's transl). The classification of a newborn baby according to gestational age and intrauterine growth is important for correct diagnosis of fetal disease or impending dangerous situations. Besides the known gestational age, a valid intrauterine growth curve is a prerequisite for this classification. New intrauterine growth curves derived from newborns in the four German-speaking countries are presented and their general use is encouraged.

Answer: The provided abstracts do not contain information directly related to the timing of intrauterine insemination (IUI). Intrauterine insemination is a fertility treatment that involves placing sperm inside a woman's uterus to facilitate fertilization. The timing of IUI is crucial and typically coincides with ovulation to maximize the chances of conception. However, none of the abstracts provided discuss the timing or protocols for IUI. The abstracts focus on various aspects of intrauterine contraception (PUBMED:1018498, PUBMED:6778115, PUBMED:25153438, PUBMED:32175351, PUBMED:24947600, PUBMED:618165, PUBMED:7065991, PUBMED:17425833, PUBMED:7095705, PUBMED:7467621), including the design and effectiveness of intrauterine devices (IUDs), the management of intrauterine adhesions, the use of sonography to evaluate IUD position, and the reversibility of fertility after IUD removal. They also discuss the risks and complications associated with IUDs, such as perforation, irregular bleeding, infections, and ectopic pregnancies, as well as the importance of informed consent and proper insertion techniques. For accurate information on the timing of intrauterine insemination, one would need to consult sources specifically addressing fertility treatments and protocols for IUI.
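An aside on the arithmetic in PUBMED:618165: the conception figures are simple proportions of the 133 women available for analysis, and the reported percentages can be reproduced directly. The sketch below is purely illustrative; the variable names are ours, not the paper's:

    # Conception after elective IUD removal (PUBMED:618165), out of 133 women analysed.
    analysed = 133
    conceived = {"0-3 months": 77, "3-6 months": 22, "6-12 months": 18, "after 1 year": 3}
    total_pregnant = sum(conceived.values())                   # 120 women
    print(f"overall: {100 * total_pregnant / analysed:.2f}%")  # 90.23%
    for interval, n in conceived.items():
        print(f"{interval}: {100 * n / analysed:.2f}%")        # 57.89, 16.54, 13.53, 2.26

Each printed value matches the percentage quoted in the abstract, which serves as a quick consistency check on the transcription.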
Instruction: Cytokine Profile in Gout: Inflammation Driven by IL-6 and IL-18?

Abstracts:

abstract_id: PUBMED:3033070 Interleukin 1 (IL 1) as a mediator of crystal arthritis. Stimulation of T cell and synovial fibroblast mitogenesis by urate crystal-induced IL 1. We reported before that monosodium urate (MSU) crystals were potent stimulators of endogenous pyrogen (EP) production from human and rabbit mononuclear phagocytes, and proposed that this property of MSU crystals may be important in the pathogenesis of gout. EP activity is now attributed to interleukin 1 (IL 1) peptides, but IL 1 is not the only pyrogenic monocyte-derived cytokine, since both interferon-alpha (alpha-IFN) and tumor necrosis factor (TNF) are also pyrogenic in rabbits. Using a T cell comitogenic assay based on a murine helper T cell clone that does not respond to IFN or TNF, we now report the release of IL 1 activity from human blood monocytes and synovial fluid mononuclear cells (MNC) following stimulation with MSU crystals. MSU-induced supernatants with IL 1 activity were neutralized with rabbit antiserum to human IL 1 and also stimulated the growth ([3H]thymidine incorporation) of long-term fibroblast-like cell lines derived from human synovial rheumatoid exudate. Two other crystals associated with articular inflammation were tested: hydroxyapatite was a much less potent stimulus compared with MSU crystals, and calcium pyrophosphate dihydrate did not stimulate IL 1 release from human monocytes or synovial fluid MNC. As a model for the inflammatory consequences of acute and chronic overproduction of IL 1, gout is the only sterile inflammatory disease where the local and systemic pathology is compatible with such overproduction; raised IL 1 levels have been found at the site of inflammation, and a necessary etiologic agent, crystalline urate, has been shown unequivocally to be a direct activator of mononuclear IL 1 release.

abstract_id: PUBMED:16406656 Plasma interleukin (IL)-18 (interferon-gamma-inducing factor) and other inflammatory cytokines in patients with gouty arthritis and monosodium urate monohydrate crystal-induced secretion of IL-18. To determine whether levels of interleukin (IL)-18, together with those of IL-1beta, tumor necrosis factor-alpha, IL-6, and IL-8, are elevated in the plasma of patients with gouty arthritis, the plasma concentrations of those cytokines were measured in 31 males with gouty arthritis. Further, CD14+ cells were obtained from human blood, thioglycolate medium-induced peritoneal cells were obtained from caspase 1-deficient mice, and the two were cultured separately in the presence of monosodium urate monohydrate (MSU) crystals. In addition, in an in vivo animal experiment, MSU crystals were injected into subcutaneous air pouches of IL-18-deficient mice. The plasma concentrations of IL-18, IL-6, and IL-8 were elevated in the presence of gouty arthritis in the gout patients. In the in vitro study, the presence of MSU crystals stimulated CD14+ cells (monocytes) to secrete IL-18 and increased the activity of caspase 1 in CD14+ cells, whereas there was no significant effect on IL-18 messenger RNA in CD14+ cells and only a slight induction of IL-18 secretion from thioglycolate medium-induced caspase 1-deficient peritoneal cells.
In the in vivo experiment, MSU crystals injected into the air pouch promoted neutrophil accumulation along with an increase in concentrations of keratinocyte-derived chemokine (KC) and macrophage inflammatory protein (MIP)-1alpha in air-pouch fluids in both IL-18-deficient and wild-type mice. However, there was no increase in the concentration of IL-18 in air-pouch fluids in either mouse strain. Our results suggest that plasma IL-18, IL-6, IL-8, and C-reactive protein (CRP) levels reflect local inflammation associated with gouty arthritis, though IL-18 does not play an important role in neutrophil accumulation. Further, they suggest that MSU crystals accelerate the processing of IL-18 from an inactive to an active form via the activation of caspase 1.

abstract_id: PUBMED:22288582 Immunology in clinic review series; focus on autoinflammatory diseases: update on monogenic autoinflammatory diseases: the role of interleukin (IL)-1 and an emerging role for cytokines beyond IL-1. Summary: The disease-based discovery of the molecular basis for autoinflammatory diseases has led not only to a rapidly growing number of clinically and genetically identifiable disorders, but has also revealed key inflammatory pathways, such as the potent role of the alarm cytokine interleukin (IL)-1 in human disease. Following its initial failures in the treatment of sepsis and the moderate success in the treatment of rheumatoid arthritis, IL-1 blocking therapies had a renaissance in the treatment of a number of autoinflammatory conditions, and IL-1 blocking therapies have been Food and Drug Administration (FDA)-approved for the treatment of the autoinflammatory cryopyrin-associated periodic syndromes (CAPS). CAPS and deficiency of the IL-1 receptor antagonist (DIRA), both genetic conditions with molecular defects in the IL-1 pathway, have provided a pathogenic rationale for IL-1 blocking therapies, and the impressive clinical results confirmed the pivotal role of IL-1 in human disease. Furthermore, IL-1 blocking strategies have shown clinical benefit in a number of other genetically defined autoinflammatory conditions, and in diseases with clinical similarities to the monogenic disorders but no yet-identified genetic cause. The discovery that IL-1 is triggered not only by infectious danger signals but also by danger signals released from metabolically 'stressed' or even dying cells has extended the concept of autoinflammation to disorders such as gout, and to those that were previously not considered inflammatory, such as type 2 diabetes, coronary artery disease, obesity and some degenerative diseases, and has provided the conceptual framework to target IL-1 in these diseases.
Despite the tremendous success of IL-1 blocking therapy, the use of these agents in a wider spectrum of autoinflammatory conditions has uncovered disease subsets that are not responsive to IL-1 blockade, including the recently discovered proteasome-associated autoinflammatory syndromes: chronic atypical neutrophilic dermatitis with lipodystrophy and elevated temperatures (CANDLE), Japanese autoinflammatory syndrome with lipodystrophy (JASL), Nakajo-Nishimura syndrome (NNS), and joint contractures, muscle atrophy, and panniculitis-induced lipodystrophy (JMP). These subsets urge the continued quest to characterize additional dysregulated innate immune pathways that cause autoinflammatory conditions.

abstract_id: PUBMED:24407823 Anti-interleukin-1 therapy in the management of gout. Gout is the most common inflammatory arthritis in humans. Current treatment options to control the pain and inflammation of acute and chronic gout include nonsteroidal anti-inflammatory drugs, colchicine, and corticosteroids. However, patients are commonly unresponsive to, intolerant of, or have contraindications to current treatments. Interleukin-1 (IL-1), a proinflammatory cytokine, plays a major role in mediating gouty inflammation. This role of IL-1 has led investigators to explore a new class of anti-inflammatory drugs that inhibit IL-1 signal transduction. IL-1 inhibitors currently in trials for gout include anakinra, rilonacept, and canakinumab. Anakinra is an IL-1 receptor antagonist that inhibits the activity of both IL-1α and IL-1β, rilonacept is a soluble decoy receptor, and canakinumab is an anti-IL-1β monoclonal antibody. In case cohorts, anakinra was found to be efficacious in combating acute gout pain and inflammation, whereas rilonacept has been found to be efficacious for reducing the risk of recurrent attacks. Canakinumab has been shown to be efficacious both in reducing pain and inflammation in acute attacks and in reducing the risk of recurrent attacks. All three IL-1 inhibitors are generally well tolerated. This article reviews the current IL-1 inhibitors and the results of trials in which they have been tested for the management of acute and chronic gouty inflammation.

abstract_id: PUBMED:30066887 Urate crystals induce macrophage PAF-AH secretion which is differentially regulated by TGFβ1 and hydrocortisone. The aim of the present study was to establish the role of platelet-activating factor acetyl hydrolase (PAF-AH) in the resolution phase of gout using an established in vitro mononuclear cell model. The effects of signalling pathway inhibitors on PAF-AH secretion, as well as the effects of the common treatments hydrocortisone and colchicine, an antibody against the anti-inflammatory cytokine transforming growth factor β1 (TGFβ1), and the transcriptional inhibitor actinomycin D, were also investigated. The effect of recombinant PAF-AH on cytokine secretion by these cells was also determined. Human peripheral blood-derived monocytes were isolated and differentiated into macrophages. Monocytes and macrophages were stimulated with monosodium urate monohydrate (MSU) crystals or lipopolysaccharide in the presence or absence of AEG3482 [a c-Jun N-terminal kinase (JNK) inhibitor], MG132 (a proteasome inhibitor), hydrocortisone or colchicine. Cultures were then analysed for PAF-AH secretion using ELISA. A 6-fold upregulation of PAF-AH secretion was observed following macrophage exposure to MSU crystals for 24 h (29.3±6 vs. 5.4±0.3 ng/ml unstimulated; P<0.05).
Following 72 h, PAF-AH levels decreased significantly (11.1±1.8; P<0.01). Secretion was further enhanced following pre-treatment with the JNK protein kinase inhibitor AEG3482 prior to MSU crystal stimulation (P<0.05) and was abrogated when cells were preincubated with actinomycin D or the proteasome inhibitor MG132 (50, 100 and 200 µM). The addition of recombinant PAF-AH (2.5-10 ng/ml) to MSU crystal-stimulated immature monocyte cultures significantly decreased pro-inflammatory interleukin (IL)-1β (unstimulated 687±124 vs. stimulated 113±30 pg/ml) and IL-6 secretion (unstimulated 590±50 vs. stimulated 182±19 pg/ml). Treatment of MSU crystal-stimulated macrophages with hydrocortisone (2 µM) also significantly decreased PAF-AH release (P<0.05). Addition of a neutralising anti-TGFβ1 antibody decreased PAF-AH dose-dependently, with the highest inhibition observed at 1 µg/ml (P<0.05). The results suggest that PAF-AH may have an anti-inflammatory role in the resolution phase of gout.

abstract_id: PUBMED:25728511 Interleukin-1 receptor associated kinase inhibitors: potential therapeutic agents for inflammatory- and immune-related disorders. The various cells of the innate immune system quickly counter-attack invading pathogens and mount a "first line" defense through their transmembrane receptors, including Toll-like receptors (TLRs) and interleukin receptors (IL-Rs), which results in the secretion of pro-inflammatory cytokines. Although such inflammatory responses are beneficial in pathological conditions, their overstimulation may cause severe inflammatory damage, making this defense system a "double-edged sword". IRAK-4 has been evaluated as an indispensable element of the IL-R and TLR pathways that can regulate abnormal levels of cytokines, and it could therefore be employed to manage immune- and inflammation-related disorders. Historically, the identification of selective and potent inhibitors has been challenging; thus, only a limited number of small-molecule IRAK-4 inhibitors are available in the literature. Recently, IRAK-4 attracted great attention when Ligand® Pharmaceuticals and Nimbus Discovery® reported the beneficial potential of IRAK-4 inhibitors in pre-clinical evaluation for various inflammatory- and immune-related disorders, including, but not limited to, rheumatoid arthritis, inflammatory bowel disease, psoriasis, gout, asthma and cancer.

abstract_id: PUBMED:26564518 Research Advances in NLRP3 Inflammasome-related Regulatory Mechanisms. The inflammasome is one of the pattern recognition receptors whose activation directly relates to the maturation and secretion of the proinflammatory cytokines interleukin (IL)-1β and IL-18. Thus, it plays an important role in humoral immunity. A growing number of studies have found that the inflammasome has a close relationship with the pathogenesis of various diseases, including atherosclerosis, diabetes, and gout. However, the activation of the inflammasome and its specific regulatory mechanisms remain unclear. This article reviews the possible regulatory mechanisms of the NLRP3 inflammasome in terms of oxidative stress, endoplasmic reticulum stress, and autophagy.

abstract_id: PUBMED:17588171 The efficacy of anakinra in an adolescent with colchicine-resistant familial Mediterranean fever. Colchicine is the treatment of choice in familial Mediterranean fever (FMF) for the prevention of both attacks and secondary amyloidosis. The overall nonresponder rate is about 5-10%.
Anakinra is known to have good effectiveness in a severe autoinflammatory syndrome [chronic infantile neurological cutaneous and articular (CINCA) syndrome] and other recurrent hereditary periodic fevers. Pyrin, the protein involved in FMF, has a role in activating the proinflammatory cytokine interleukin (IL)-1beta. We report the effectiveness of the addition of an IL-1-receptor inhibitor (anakinra) to colchicine in controlling the febrile attacks and acute phase response in an adolescent with FMF resistant to colchicine.

abstract_id: PUBMED:31900383 Interleukin-1β is a potential therapeutic target for periodontitis: a narrative review. Interleukin (IL)-1β, a pro-inflammatory cytokine, is elevated in and participates in periodontitis. Not only has the link between IL-1β and periodontitis been supported by clinical evidence, but increased IL-1β also triggers a series of inflammatory reactions and promotes bone resorption. Currently, IL-1β blockade has become a therapeutic strategy for autoimmune and autoinflammatory diseases such as rheumatoid arthritis, cryopyrin-associated periodic syndromes, gout and type II diabetes mellitus. It is speculated that IL-1β may be a potential therapeutic target for periodontitis. The review focuses on the production, mechanism, present treatments and future potential strategies for IL-1β in periodontitis.

abstract_id: PUBMED:24343224 Mechanisms of spontaneous resolution of acute gouty inflammation. Acute gout is an auto-inflammatory disease characterized by self-limiting inflammation in response to the deposition of monosodium urate (MSU) crystals in the joints or tissues. Recognition of MSU triggers activation of the NLRP3 inflammasome, release of active interleukin (IL)-1β, and amplification of the inflammatory response by the surrounding tissue, followed by recruitment and activation of inflammatory leukocytes. The shutdown of this inflammatory response is linked to a number of regulatory events, ranging from crystal coating and apoptotic cell clearance through to pro-inflammatory cytokine regulation and transforming growth factor β1 (TGFβ1) production. This review highlights mechanisms that limit acute inflammation triggered by MSU crystals and suggests areas for further research.

Answer: The cytokine profile in gout is characterized by elevated levels of several pro-inflammatory cytokines, including interleukin-6 (IL-6) and interleukin-18 (IL-18), which reflect the local inflammation associated with gouty arthritis. Plasma concentrations of IL-18, IL-6, and IL-8 are elevated in patients with gouty arthritis, suggesting their involvement in the inflammatory response (PUBMED:16406656). Monosodium urate (MSU) crystals, which are a key etiological agent in gout, can stimulate the secretion of IL-18 from CD14+ cells (monocytes) and increase the activity of caspase 1, which is involved in the processing of IL-18 from an inactive to an active form (PUBMED:16406656). IL-1 is another critical cytokine in the pathogenesis of gout, as MSU crystals are potent stimulators of IL-1 release from mononuclear phagocytes, contributing to the inflammation seen in gout (PUBMED:3033070). The role of IL-1 in gout is further supported by the therapeutic success of IL-1 blocking therapies, such as anakinra, rilonacept, and canakinumab, in managing acute and chronic gouty inflammation (PUBMED:24407823). While IL-1β is a key mediator of inflammation in gout, the involvement of IL-6 and IL-18 in the inflammatory process is also significant.
IL-6 is a pleiotropic cytokine that can contribute to the acute phase response and inflammation, while IL-18 is known to induce interferon-gamma production and has been implicated in the inflammatory response in gout (PUBMED:16406656). In summary, the cytokine profile in gout is characterized by the involvement of multiple pro-inflammatory cytokines, with IL-1β playing a central role in the inflammatory response. IL-6 and IL-18 are also elevated in gout and contribute to the inflammation, supporting the notion that gouty arthritis is driven by a complex network of cytokines, including IL-6 and IL-18.
Instruction: Oxidative stress in Helicobacter pylori infection: does supplementation with vitamins C and E increase the eradication rate?

Abstracts:

abstract_id: PUBMED:19674132 Oxidative stress in Helicobacter pylori infection: does supplementation with vitamins C and E increase the eradication rate? Aim: This study aims to assess the antioxidant property of vitamins E and C in Helicobacter pylori infection, and to determine if adding them to standard triple therapy plus bismuth subcitrate increases the H. pylori eradication rate. Methods: This study included 160 patients infected with H. pylori, who were randomized into one of two groups. Patients in group A (n = 80) received lansoprazole (30 mg, b.i.d.), amoxicillin (1000 mg, b.i.d.), clarithromycin (500 mg, b.i.d.), and bismuth subcitrate (300 mg, q.i.d.) for 14 days, while patients in group B (n = 80) received vitamin C (500 mg, b.i.d.) and vitamin E (200 IU, b.i.d.) for 30 days, in addition to lansoprazole (30 mg, b.i.d.), amoxicillin (1000 mg, b.i.d.), clarithromycin (500 mg, b.i.d.), and bismuth subcitrate (300 mg, q.i.d.) for 14 days. Total antioxidant capacity (TAC) was evaluated with a Randox kit. The success rate was calculated using both intention-to-treat (ITT) and per-protocol (PP) analyses. Results: One hundred and sixty patients were analyzed using ITT analysis. One hundred and fifty-three patients completed the study. In group A, H. pylori eradication was achieved in 48 (60%) of the 80 patients included in the ITT analysis, and in 48 (64%) of the 75 patients included in the PP analysis. In group B, H. pylori eradication was achieved in 73 (91.25%) of the 80 patients included in the ITT analysis and in 73 (93.5%) of the 78 patients included in the PP analysis. The eradication rate was significantly higher in group B than in group A (p < .005). TAC was at the lower limit of normal in both groups, and the difference between them was not statistically significant (p > .05). Conclusion: In group B, the H. pylori eradication rate was 91.25%, which is higher than the ideal 80% eradication rate. The results of the present study show that adding the prescribed doses of vitamins E and C to antimicrobial therapy is effective in eradicating H. pylori infection.

abstract_id: PUBMED:21810287 Effect of vitamins C and E supplementation on Helicobacter pylori eradication: a meta-analysis. Vitamins C and E can act as potent antioxidants to reduce the damage caused by reactive oxygen species in the gastric mucosa. Whether vitamin supplements added to a Helicobacter pylori eradication regimen could improve the rate of eradication remains uncertain. Therefore, we performed a meta-analysis to evaluate the efficacy of vitamin C and E supplementation for the eradication of H. pylori. Searches were conducted in the databases PubMed, EMBASE and Cochrane Library. Randomised controlled trials (RCT) that fulfilled the inclusion criteria and addressed the clinical questions of this analysis were further assessed. Of the six RCT included, five had a low methodological quality. Of the six RCT, three compared the efficacy of the eradication regimen v. the eradication regimen plus vitamins C and E. The result of the meta-analysis showed a non-significant difference in the eradication rate of H. pylori between the two groups (risk ratio (RR) 0.93, P = 0.76). Another three RCT compared the eradication regimen v. the eradication regimen plus vitamin C only, and there too there was no significant difference in the eradication rate (RR 0.83, P = 0.32).
In conclusion, vitamin C and/or E supplements added to the H. pylori eradication regimen could not improve the eradication rate. However, the currently available data do not permit a definitive conclusion about the effectiveness of antioxidant vitamins on H. pylori eradication, owing to the small sample sizes and low-to-moderate methodological quality.

abstract_id: PUBMED:21740452 Supplementing vitamins C and E to standard triple therapy for the eradication of Helicobacter pylori. What Is Known And Objective: Helicobacter pylori eradication rates of currently accepted triple therapy regimens vary between geographic locations and do not exceed 70-80%. Eradication rates are much lower in locations where uncontrolled antibiotic use is common, such as Turkey. In the present study, we aimed to test whether supplementing vitamins C and E to standard triple therapy, including a proton pump inhibitor plus amoxicillin plus clarithromycin, increased the H. pylori eradication rate. Methods: Two hundred patients infected with H. pylori were randomized into two groups in an open-label trial. In group A, patients (n = 160) were given standard triple therapy, including lansoprazole 30 mg BID plus amoxicillin 1000 mg BID plus clarithromycin 500 mg BID for 14 days, plus vitamin C 500 mg BID plus vitamin E 200 IU BID for 30 days. In group B, patients (n = 40) were given standard triple therapy for 14 days. The success of H. pylori eradication was defined as a negative ¹⁴C-urea breath test result 4-6 weeks after the completion of therapy. Comparisons were by both intention-to-treat (ITT) and per-protocol (PP) analysis. Results And Discussion: Two hundred patients (137 women, 63 men) were analysed using ITT analysis and 195 patients completed the study. In group A, H. pylori eradication was achieved in 132 of the 160 patients (82.5%) included in ITT analysis and 132 of the 157 patients (84%) included in PP analysis. In group B, H. pylori eradication was achieved in 18 of the 40 patients (45%) included in ITT analysis and 18 of the 38 patients (47.4%) included in PP analysis. Eradication rates were significantly higher in group A than in group B (P < 0.005). Eradication rates did not differ significantly between men and women in either group. What Is New And Conclusion: Adding vitamins C and E to standard triple therapy increases the eradication rate of H. pylori. Vitamins C and E may increase the eradication rate by increasing the effectiveness of the antibiotics, through decreasing oxidative stress in the gastric mucosa and strengthening the immune system.

abstract_id: PUBMED:9748875 Levels of vitamins A, E and C in serum and gastric juice in relation to gastric mucosa and occurrence of Helicobacter pylori. Colonization of the gastric mucosa with Helicobacter pylori (H.p.) reduces the vitamin C concentration of gastric juice. Eradication of H.p. does not exert a significant effect on changes in the concentration of vitamins A, E and C in gastric juice or serum within four weeks after completed treatment. Despite this, a rising trend of vitamin E in gastric juice was recorded after eradication. Substitution of vitamins C and E in gastritis associated with H.p. colonization has a favourable effect and may reduce the risk of malignant transformation.

abstract_id: PUBMED:26510082 Influence of vitamin C and E supplementation on the eradication rates of triple and quadruple eradication regimens for Helicobacter pylori infection.
Background/aims: In our study, we aimed to assess the effect of vitamin E and C supplementation added to triple and quadruple Helicobacter pylori eradication regimens. Materials And Methods: Four hundred patients with H. pylori infection were classified into four groups. Patients in group A (n=100) received amoxicillin, clarithromycin, and lansoprazole for 2 weeks. In group B, patients (n=100) received vitamins C and E for a month, in addition to amoxicillin, clarithromycin, and lansoprazole for 2 weeks. Patients in group C (n=100) received amoxicillin, clarithromycin, lansoprazole, and bismuth subcitrate for 2 weeks, whereas those in group D (n=100) received vitamins C and E for a month, in addition to amoxicillin, clarithromycin, lansoprazole, and bismuth subcitrate for 2 weeks. H. pylori eradication was assessed with the C14 urea breath test 2 months after the end of therapy. The eradication rate was assessed using per-protocol (PP) and intention-to-treat (ITT) analyses. Results: Three hundred forty-eight patients finished the study. Eradication of H. pylori was achieved in 63 of 84 patients (75%) by PP and 63 of 100 (63%) by ITT analysis in group A, 60 of 84 (71.4%) by PP and 60 of 100 (60%) by ITT analysis in group B, 72 of 89 (80.9%) by PP and 72 of 100 (72%) by ITT analysis in group C, and 76 of 91 (83.5%) by PP and 76 of 100 (76%) by ITT analysis in group D. There was no remarkable difference between groups A and B (p>0.05). Similar results were also found between groups C and D (p>0.05). Conclusion: This study revealed that supplementing vitamins C and E to either the triple or quadruple therapy did not provide an additional advantage for achieving significantly higher eradication rates for H. pylori.

abstract_id: PUBMED:21856267 Efficacy of vitamin supplementation to therapy on Helicobacter pylori eradication in patients with low antioxidant capacity. Background And Objectives: It has been shown that supplementation of vitamins C and E to therapy increased the Helicobacter pylori eradication rate. In the present study, we aimed to evaluate whether supplementation of antioxidant vitamins to therapy increases H. pylori eradication rates in patients with chronic stress and low antioxidant capacity. Methods: This study included 120 patients who presented to the gastroenterology outpatient clinic with H. pylori-positive nonulcer dyspepsia and low total antioxidant capacity. Patients in group A (n=80) were given lansoprazole (30 mg, BID), amoxicillin (1000 mg, BID), and clarithromycin (500 mg, BID) for 14 days, as well as vitamin C (500 mg, BID) and vitamin E (200 IU, BID) for 30 days. Patients in group B (n=40) were given lansoprazole (30 mg, BID), amoxicillin (1000 mg, BID), and clarithromycin (500 mg, BID) for 14 days. Results: Total antioxidant capacity was lower than normal in all patients. One hundred and fifteen patients (77 in group A, 38 in group B) were analyzed with per-protocol analysis. In group A, H. pylori eradication was achieved in 63.8% of the patients included in the intention-to-treat analysis and in 66.2% of the patients included in the per-protocol analysis. In group B, H. pylori eradication was achieved in 42.5% of the patients included in the intention-to-treat analysis and in 44.7% of the patients included in the per-protocol analysis. Eradication rates were significantly higher in group A than in group B (P<0.005). Conclusions: Supplementation with vitamins C and E increased the H. pylori eradication rate of standard triple therapy.
abstract_id: PUBMED:20389061 Bioavailability of vitamins E and C: does Helicobacter pylori infection play a role? Background: While the association between the bioavailability of vitamins E and C and Helicobacter pylori infection has been extensively researched in gastritis and gastric cancer patients, little is known about this relationship in asymptomatic adults. Aim: To investigate the effect of H. pylori infection on the bioavailability of vitamins E and C in asymptomatic adults. Methods: Volunteers from the University of Toronto, aged 18-45 years, were screened for their H. pylori infection status. H. pylori-negative (n = 32) and asymptomatic H. pylori-positive (n = 27) participants received vitamin C (500 mg) and vitamin E (400 IU) supplements daily for 28 days. Plasma vitamin C, vitamin E, and thiol concentrations were assessed before (baseline) and after supplementation. Results: Postsupplementation plasma levels of vitamins C and E were significantly higher than presupplementation levels in both groups. Yet, changes in plasma vitamins E and C were not significantly different between the two groups [vitamin C (μmol/l): 13.97 ± 16.86 vs. 20.87 ± 27.66, p > 0.05; vitamin E (μmol/l): 15.52 ± 9.4 vs. 14.47 ± 15.77, p > 0.05 for the H. pylori-negative and H. pylori-positive groups, respectively]. In addition, no significant difference was found in plasma thiol levels between groups (p > 0.05). Conclusion: These findings suggest that H. pylori does not influence antioxidant bioavailability in the asymptomatic stages of infection, until a factor or combination of factors triggers the inflammation cascade, which may lead to increased oxidative stress and possibly reduced bioavailability of vitamins E and C.

abstract_id: PUBMED:24834261 Evaluation of the benefit of addition of clidinium C to a Helicobacter pylori eradication regimen. Aim: This study aimed to evaluate the success of H. pylori eradication therapy in patients with dyspepsia using therapeutic regimens with and without clidinium C. Background: Helicobacter pylori infections are reported in all parts of the world. Appropriate antibiotic therapy can treat the infection, but the ideal treatment regimen has not been specified. Patients And Methods: In a randomized, double-blind clinical trial, 250 patients with dyspepsia were enrolled. All patients were treated with omeprazole, metronidazole, amoxicillin and bismuth (OMAB) for two weeks. One tablet of clidinium C before each meal was added to this regimen in the intervention group (A). A urea breath test (UBT) was carried out 8-12 weeks after treatment for evaluation of H. pylori eradication. Results: 132 patients in the intervention group (A) and 118 patients in the control group (B) were enrolled in the study. The rate of eradication in group A was significantly higher than in group B (62.1% vs. 50%, p=0.04). Conclusion: The results support the effect of clidinium C in increasing Helicobacter pylori eradication, but further studies need to be performed.

abstract_id: PUBMED:12390211 Vitamin C and E supplements to lansoprazole-amoxicillin-metronidazole triple therapy may reduce the eradication rate of metronidazole-susceptible Helicobacter pylori infection. Aim: To test whether vitamin C and E supplements to triple therapy can improve the Helicobacter pylori eradication rate and gastric inflammation. Methods: A total of 104 H.
pylori-infected patients were randomized to receive either lansoprazole, amoxicillin, and metronidazole twice daily for 1 week (triple-only group) or lansoprazole, amoxicillin, metronidazole plus vitamin C (250 mg) and vitamin E (200 mg) twice daily for 1 week, followed immediately by vitamin C and E once daily for 6 consecutive weeks (triple-plus-vitamin group). Eight weeks after the completion of triple therapy, patients were assessed for the effectiveness of H. pylori eradication. The severity of gastric inflammation in histology was assessed for the acute and chronic inflammation scores. Results: Intention-to-treat and per-protocol eradication rates were 59.1% and 64.4% in the triple-only group, and 40% and 44% in the triple-plus-vitamin group. In the patients infected with metronidazole-susceptible isolates, the triple-only group had a higher intention-to-treat eradication rate than those in the triple-plus-vitamin group (80% vs. 53.1%, p < .01). However, for the metronidazole-resistant isolates, the intention-to-treat eradication rates between the two groups were not different (26.3% vs. 21.7%, p = NS). The improvements in both acute and chronic inflammation scores in histology were not different between the two groups. Conclusion: Adding vitamin C and E to triple therapy did not improve the H. pylori eradication rate and gastric inflammation. For patients with metronidazole-susceptible strain infection, adding these vitamins may even reduce the eradication rate of triple therapy. abstract_id: PUBMED:35946262 Oxidative stress and the importance of H. pylori eradication in patients with functional dyspepsia. Background: To investigate the thiol and disulphide levels in Helicobacter pylori-positive patients with non-ulcer dyspepsia and investigate the change in these levels with eradication therapy. Methods: This is a prospective observational study. A total of 320 patients diagnosed with dyspepsia according to Rome IV criteria were included in the study. First, blood samples were drawn from patients to determine their serum thiol and disulphide levels. Endoscopic biopsy was performed on all patients and the biopsy specimens obtained were examined pathologically. Patients positive for H. pylori were administered eradication therapy. Blood samples were drawn from these patients for the second time, and their serum thiol and disulphide levels were measured. Pre- and post-treatment thiol-disulphide levels of patients in whom H. pylori eradication treatment succeeded were compared with those of patients in whom it failed. Results: The mean plasma disulphide level decreased significantly from 14.0 ± 6.6 to 10.9 ± 5.9 μmol/L in H. pylori-positive patients that responded to the H. pylori eradication treatment (P = 0.033). On the other hand, there was an insignificant increase in the mean serum thiol level (341.4 ± 30.5 vs. 342.6 ± 29.8 μmol/L; P = 0.273) and an insignificant decrease in the mean serum disulphide level (15.2 ± 2.5 vs. 14.8 ± 2.3 μmol/L; P = 0.163) in H. pylori-positive patients that did not respond to the H. pylori eradication treatment. Conclusion: The inflammation caused by H. pylori shifted the thiol-disulphide equilibrium in the cell redox system towards the direction of disulphide. The study findings suggest that the restoration of this homeostatic balance with eradication therapy relieved the organism of oxidative stress. Answer: The impact of supplementing vitamins C and E on the eradication rate of Helicobacter pylori (H.
pylori) infection has been investigated in several studies, with varying results. Some studies have found that adding vitamins C and E to standard triple therapy can increase the eradication rate of H. pylori. For instance, one study showed that patients who received vitamins C and E in addition to standard triple therapy had a significantly higher eradication rate (91.25% in intention-to-treat analysis) compared to those who received only the standard therapy (60% in intention-to-treat analysis) (PUBMED:19674132). Another study reported that supplementing vitamins C and E to standard triple therapy increased the eradication rate to 82.5% in intention-to-treat analysis, which was significantly higher than the rate in the control group (45%) (PUBMED:21740452). Additionally, supplementation with vitamins C and E was found to increase H. pylori eradication rates in patients with low antioxidant capacity (PUBMED:21856267). However, other studies have reported no significant benefit from the addition of vitamins C and E to eradication regimens. A meta-analysis found no significant difference in the eradication rate of H. pylori between groups receiving eradication regimen with or without vitamins C and E (PUBMED:21810287). Similarly, another study revealed that supplementing vitamins C and E to either triple or quadruple therapies did not provide an additional advantage for achieving significantly higher eradication rates for H. pylori (PUBMED:26510082). Moreover, one study suggested that adding vitamin C and E to triple therapy might even reduce the eradication rate of metronidazole-susceptible H. pylori infection (PUBMED:12390211). In conclusion, while some studies suggest that vitamins C and E supplementation may enhance the eradication rate of H. pylori, particularly in certain populations or under specific conditions, other studies do not support this finding. The evidence is mixed, and further research with larger sample sizes and higher methodological quality may be needed to draw definitive conclusions about the effectiveness of antioxidant vitamins on H. pylori eradication (PUBMED:21810287).
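As an illustration of how such between-group comparisons are usually quantified, the sketch below computes an odds ratio with a Woolf (log-normal) 95% confidence interval for the group A versus group B ITT outcomes reported in the four-arm trial above (63/100 vs 60/100). This is a standard textbook calculation, not necessarily the exact test used in the cited trials:

```python
import math

def odds_ratio_ci(a_success, a_fail, b_success, b_fail, z=1.96):
    """Odds ratio with Woolf (log-normal) 95% CI from a 2x2 table."""
    or_ = (a_success * b_fail) / (a_fail * b_success)
    se_log_or = math.sqrt(1/a_success + 1/a_fail + 1/b_success + 1/b_fail)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Group A vs group B, ITT: 63/100 vs 60/100 eradicated
or_, lo, hi = odds_ratio_ci(63, 37, 60, 40)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # CI straddles 1, consistent with p > 0.05
```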
Instruction: Pre-interns: ready to perform? Abstracts: abstract_id: PUBMED:34223318 Willingness to perform bystander cardiopulmonary resuscitation: A scoping review. Background: Despite the proven effectiveness of rapid initiation of cardiopulmonary resuscitation (CPR) for patients with out-of-hospital cardiac arrest (OHCA) by bystanders, fewer than half of the victims actually receive bystander CPR. We aimed to review the evidence on barriers and facilitators for bystanders to perform CPR. Methods: This scoping review was conducted as part of the continuous evidence evaluation process of the International Liaison Committee on Resuscitation (ILCOR), and followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews. This review included studies assessing barriers or facilitators for lay rescuers to perform CPR in actual emergency settings and excluded studies that overlapped with other ILCOR systematic reviews/scoping reviews (e.g., dispatcher-instructed CPR). The key findings were classified into three kinds of factors: personal factors; CPR knowledge; and procedural issues. Results: We identified 18 eligible studies. Of these studies addressing the reduced willingness to respond to cardiac arrest, 14 related to "personal factors", 3 to "CPR knowledge", and 2 to "procedural issues". On the other hand, we identified 5 articles assessing factors increasing bystanders' willingness to perform CPR. However, we observed significant heterogeneity among study populations, methodologies, factor definitions, outcome measures utilized and outcomes reported. Conclusions: We found that a number of factors were present in actual settings which either inhibit or facilitate lay rescuers' performance of CPR. Interventional strategies to improve CPR performance of lay rescuers in actual settings should be established, taking these factors into consideration. abstract_id: PUBMED:26193106 Pre-eclampsia Pre-eclampsia or toxaemia of pregnancy is a multi-organ disorder in the second half of pregnancy. Approximately 1-3% of all pregnancies in the Netherlands are complicated by this condition. The disease is characterised by vascular damage resulting in hypertension and proteinuria with high morbidity for both mother and child. The underlying cause is a poorly developed placenta. To date, the only real treatment comprises medicinal protection against complications, and the disorder can be cured only through termination of pregnancy. Complications range from severe hypertension to maternal mortality due to cerebral haemorrhage. Long-term consequences can be severe for both mother and child. For instance, the risk of cardiovascular disease in mothers in later life is significantly increased. Many risk factors have been identified, including diabetes, BMI and age above 40. The association between periodontal disease and pre-eclampsia emphasises the importance of good oral hygiene in the periconceptional period. abstract_id: PUBMED:34760522 How to perform first trimester combined screening for pre-eclampsia. Pre-eclampsia is a common condition that affects 2-5% of pregnancies and is associated with significant maternal and perinatal morbidity, and cost to the healthcare system. In those at high risk for pre-eclampsia, interventions such as aspirin intake have been shown to reduce prevalence of disease. However, to implement these interventions, it is essential that high-risk women be identified early in pregnancy.
A combined first trimester screening model for pre-eclampsia that includes maternal history, mean arterial pressure, uterine artery Doppler and PlGF has been shown to be the most effective at identifying the population at greatest risk of pre-eclampsia. This article will explore the most appropriate methods of measuring the components of first trimester screening and discuss the use of aspirin for prevention of pre-eclampsia in pregnancy. abstract_id: PUBMED:31181585 Pre-eclampsia/Eclampsia. Pre-eclampsia is a multifactorial and multisystemic disease specific to gestation. It is classically diagnosed by the presence of hypertension associated with proteinuria manifested in a previously normotensive pregnant woman after the 20th week of gestation. Pre-eclampsia is also considered in the absence of proteinuria if there is target organ damage. The present review takes a general approach focused on aspects of practical interest in the clinical and obstetric care of these women. Thus, it explores the still unknown etiology, current aspects of pathophysiology and of the diagnosis, the approach to disease prediction, its adverse outcomes and prevention. Management is based on general principles, on nonpharmacological and on pharmacological clinical treatment of severe or nonsevere situations with emphasis on the hypertensive crisis and eclampsia. Obstetric management is based on preeclampsia without or with signs of clinical and/or laboratory deterioration, stratification of gestational age into < 24 weeks, between 24 and less than 34 weeks, and ≥ 34 weeks of gestation, and guidance on route of delivery. An immediate puerperium approach and repercussions in the future life of pregnant women who develop preeclampsia are also presented. abstract_id: PUBMED:36427183 PERFORM: a non-interventional study assessing the patients' treatment starting with 1L palbociclib in HR+/HER2- ABC. The prospective, non-interventional PERFORM study describes and analyzes the effectiveness of palbociclib in combination with endocrine therapy (aromatase inhibitor or fulvestrant) as first-line treatment for patients with locally advanced or metastatic HR+/HER2- breast cancer in the real-world setting in Germany and Austria. PERFORM will reflect current patient characteristics and routine treatment patterns including treatment sequences and time to subsequent (chemo)therapy. In addition, second-line treatment effectiveness and patient-relevant end points such as longitudinal patient-reported outcome measurements beyond disease progression will be analyzed. Accounting for the heterogeneous real-world patient population, data on clinicopathologic subgroups underrepresented in clinical trials, such as elderly or male patients, will be analyzed. Taken together, PERFORM will close knowledge gaps from clinical trials in the real world. abstract_id: PUBMED:12647282 Physiopathology of pre-eclampsia Background: Pre-eclampsia is a pregnancy-specific disorder that presents major health problems for both mothers and babies. It is a major cause of maternal mortality, especially in developing countries, where it may account for 80% of maternal deaths. In this review we have discussed the putative pathogenetic mechanisms of this severe illness. Methods: Review of the international literature on pre-eclampsia. Results: Pre-eclampsia occurs predominantly in primigravidas and is associated with increased foetal and maternal risk.
Generally speaking, one could hypothesise that during pre-eclampsia the oxidative, coagulative and vasomotor balance is altered by increased sensitivity to angiotensin II associated with reduced synthesis of vasodilator prostaglandins, sympathetic nervous system hyperactivity, hyperlipidaemia associated with raised lipoperoxide synthesis and incomplete implantation of cytotrophoblast in maternal spiral arteries. General vasoconstriction could then be a consequence with related utero-placental ischaemia and endothelial damage. These features are clinically characterised by arterial hypertension, proteinuria and sodium retention. The predisposition to develop pre-eclampsia seems to be at least partially hereditary and the risk seems to be transmissible through sons. To explain the susceptibility of some women to develop pre-eclampsia, specific genetic polymorphisms have been associated with increased risk. Conclusions: Many factors are involved in the pathogenesis of pre-eclampsia. Occasionally these factors seem to act together, sometimes they do not and, consequently, it might be difficult to postulate a common pathogenesis for the disorder. Good obstetrical practice will identify known risk factors but will not enable physicians to determine the risk value for a particular pregnancy. Moreover, we are far from understanding the exact mechanisms involved, let alone acting upon them. The achievement of these objectives constitutes a remarkable challenge for research. abstract_id: PUBMED:25789896 Pre-interns: ready to perform? Background: There is an expectation that interns can perform the core procedural skill of male catheterisation; however, it is unclear if our medical graduates are competent to do so, because there is no formal practical skills exit assessment in our current programme. Aim: We sought to investigate the level of experience, the self-reported confidence, and measured competency of our pre-intern (PRINT) students to perform the procedural skill of male catheterisation. Methods: We asked 100/147 (68%) PRINT students to complete a questionnaire to elucidate their experience and confidence prior to being practically assessed on a plastic manikin, using a faculty-validated 26-item checklist. Students were also invited to attend focus groups to help identify factors that had contributed to their practical performance. Results: Between 2010 and 2012, 100/147 (68%) PRINT students completed a questionnaire prior to being formatively assessed. The mean score for self-reported confidence was 78.3/100 (95% CI 74.8-81.8), and the mean performance score was 85.6/100 (95% CI 83.2-87.9); however, the correlation coefficient between the confidence score and performance score was weak (r = 0.18). Three focus groups were conducted, with a total of 12/100 (12%) students attending. Although students reported that they had sound knowledge of the skill, the lack of opportunity to perform the skill in the clinical setting had led to mediocre performance outcomes. Conclusion: We found no significant correlation among the level of experience, the self-reported ability and actual performance when students were assessed under direct observation. abstract_id: PUBMED:26947204 Illness representations, knowledge and motivation to perform presymptomatic testing for late-onset genetic diseases.
This study addresses the relation between illness representations, knowledge and motivation to perform the presymptomatic testing (PST) of subjects at risk for Familial Amyloidotic Polyneuropathy (FAP), Huntington's disease (HD) and Machado-Joseph disease (MJD), compared with subjects at risk for Hereditary Hemochromatosis (HH). The sample comprised a clinical group of 213 subjects at genetic risk for FAP, HD and MJD, and a comparison group of 31 subjects at genetic risk for HH, who answered three open-ended questions about illness representations, knowledge about the disease, and motivation to perform PST. People at risk for FAP, HD and MJD use more metaphors, make more references to the family, are more concerned with the future, and more often report being motivated by curiosity and a desire to learn than those at risk for HH. These at-risk subjects correspond to the profile of the somatic individual or personhood, wherein the unsubjectivation of the disease can function as a coping mechanism. abstract_id: PUBMED:34162648 Knowledge, training and willingness to perform bystander cardiopulmonary resuscitation among university students in Chongqing, China: a cross-sectional study. Objective: To evaluate knowledge of cardiopulmonary resuscitation (CPR) among Chinese college students and their attitude towards participating in CPR training and willingness to perform bystander CPR. Design: A cross-sectional study. Participants: A total of 1128 college students were selected through a multistage stratified random sampling method from 12 universities in Chongqing, China. Primary And Secondary Outcomes: Primary outcomes included CPR knowledge and willingness to participate in training and perform bystander CPR; secondary outcomes included CPR training experience and obstacles to training and performing CPR. Results: The average score on CPR knowledge was 2.078 (±1.342). Only 45.5% of the respondents were willing to participate in CPR training. Women, respondents who were postgraduate or above, those majoring in liberal arts, and those with a high CPR knowledge level were more willing to participate in CPR training. A total of 47.2% of the respondents were willing to provide simple assistance, such as checking the consciousness and breathing of the patient and dialling 120 (medical emergency call). Only 34.1% indicated their willingness to perform bystander CPR on strangers. Perceived behavioural control, behavioural attitudes and subjective norms are positive predictors of willingness to provide bystander CPR. Conclusions: CPR knowledge and training rate were low among Chongqing college students. Willingness to participate in training and perform bystander CPR was also low. Improving legislation, strengthening training programmes, incorporating CPR training into the school curriculum and reshaping the social and public culture of offering timely help to those in need are recommended strategies to improve bystander CPR performance. abstract_id: PUBMED:30565446 The right of a doctor to inform an employer of contraindications to perform a certain type of work by an employee Background: The aim of this study is to analyze the legal regulations that pertain to the duty of doctor-patient privilege and the possibility of waiving it in order to provide information about the patient's health status to an employer.
Material And Methods: The material comprises current Polish legal regulations relating to the medical duty of doctor–patient privilege, as well as provisions defining the scope of information on the patient's health status that may be transferred to a patient's employer. Results: The doctor has no right to provide the employer with information on contraindications to performing a specific job if the information was obtained in the course of providing health services that do not fall within the catalogue of preliminary, preventive or control examinations. The basis for the doctor providing information to the employer cannot be an exception to the obligation of medical confidentiality, which is governed by art. 40 § 2 pts 1–6 of the Act of Medical Profession, which lists the cases in which professional secrecy may be waived, making it impossible to apply the cited article in cases other than those strictly specified in its content. Conclusions: Polish law specifies the circumstances in which it is possible to provide an employer with medical information on the patient's health. However, there are regulations that allow information on contraindications to performing a certain type of work to be transmitted if the information has been obtained in a manner other than as provided in the ordinance on medical examinations of employees. Med Pr 2018;69(6):613–620 Answer: The study titled "Pre-interns: ready to perform?" (PUBMED:25789896) investigated the level of experience, self-reported confidence, and measured competency of pre-intern (PRINT) students in performing the procedural skill of male catheterisation. The study found that despite the students reporting sound knowledge of the skill, the lack of opportunity to perform the skill in a clinical setting led to mediocre performance outcomes. The study concluded that there was no significant correlation among the level of experience, self-reported ability, and actual performance when students were assessed under direct observation. This suggests that pre-interns may not be fully ready to perform certain procedural skills, highlighting the need for more practical experience and training before entering the intern phase.
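The weak confidence-performance association reported in PUBMED:25789896 (r = 0.18) is a plain Pearson correlation coefficient. The sketch below shows the computation on invented paired scores; only the formula, not the data, reflects the study's analysis:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented self-reported confidence vs assessed performance scores (0-100)
confidence  = [80, 90, 70, 85, 60, 95, 75, 88]
performance = [84, 83, 88, 90, 85, 82, 91, 86]
print(f"r = {pearson_r(confidence, performance):.2f}")
```

In the study itself, r = 0.18 indicates essentially no linear relation between self-reported confidence and measured competence.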
Instruction: Continuous retinal vessel diameter measurements: the future in retinal vessel assessment? Abstracts: abstract_id: PUBMED:24767302 Retinal vessel oxygen saturation and vessel diameter in retinitis pigmentosa. Purpose: To assess retinal vessel oxygen saturation and retinal vessel diameter in retinitis pigmentosa. Methods: A retinal oximeter (Oxymap ehf., Reykjavik, Iceland) was used to measure retinal vessel oxygen saturation and vessel diameter in ten patients with retinitis pigmentosa (RP) (mean age 49 years, range 23-71 years). Results were compared with age- and gender-matched healthy individuals. All patients had advanced stage of the disease with visual fields restricted to the macular region. Results: Oxygen saturation in retinal venules was 58.0 ± 6.2% in patients with RP and 53.4 ± 4.8% in healthy subjects (p = 0.017). Oxygen saturation in retinal arterioles was not significantly different between groups (p = 0.65). The mean diameter of retinal arterioles was 8.9 ± 1.6 pixels in patients with RP and 11.4 ± 1.2 in healthy controls (p < 0.0001). The corresponding diameters for venules were 10.1 ± 1.2 (RP) and 15.3 ± 1.7 (healthy, p < 0.0001). Conclusions: Increased venous saturation and decreased retinal vessel diameter suggest decreased oxygen delivery from the retinal circulation in retinitis pigmentosa. This is probably secondary to tissue atrophy and reduced oxygen consumption. abstract_id: PUBMED:20435593 Continuous retinal vessel diameter measurements: the future in retinal vessel assessment? Purpose: To establish an alternative method, sequential and diameter response analysis (SDRA), to determine dynamic retinal vessel responses and their time course in serial stimulation compared with the established method of averaged diameter responses and standard static assessment. Methods: SDRA focuses on individual time and diameter responses, taking into account the fluctuation in baseline diameter, providing improved insight into reaction patterns when compared with established methods as delivered by retinal vessel analyzer (RVA) software. SDRA patterns were developed with measurements from 78 healthy nonsmokers and subsequently validated in a group of 21 otherwise healthy smokers. Fundus photography and retinal vessel responses were assessed by RVA, intraocular pressure by contact tonometry, and blood pressure by sphygmomanometry. Results: Compared with the RVA software method, SDRA demonstrated a marked difference in retinal vessel responses to flickering light (P < 0.05). As a validation of that finding, SDRA showed a strong relation between baseline retinal vessel diameter and subsequent dilatory response in both healthy subjects and smokers (P = 0.001). The RVA software was unable to detect this difference or to find a difference in retinal vessel arteriovenous ratio between smokers and nonsmokers (P = 0.243). However, SDRA revealed that smokers' vessels showed both an increased level of arterial baseline diameter fluctuation before flicker stimulation (P = 0.005) and an increased stiffness of retinal arterioles (P = 0.035) compared with those in nonsmokers. These differences were unrelated to intraocular pressure or systemic blood pressure. Conclusions: SDRA shows promise as a tool for the assessment of vessel physiology. Further studies are needed to explore its application in patients with vascular diseases. 
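As a rough illustration of the per-vessel quantities SDRA works with, baseline fluctuation and flicker-evoked dilation can be summarized from a diameter time series as below. This is a simplified sketch assuming a uniformly sampled trace; the actual RVA/SDRA algorithms are more involved:

```python
from statistics import mean, stdev
import random

def diameter_response(trace, fs_hz, baseline_s, stim_s):
    """Summarize one flicker trial from a vessel-diameter trace.

    trace      : diameter samples (arbitrary units, uniformly sampled)
    fs_hz      : sampling rate in Hz
    baseline_s : seconds of pre-stimulus baseline at the start of the trace
    stim_s     : seconds of flicker stimulation that follow
    """
    n_base = int(baseline_s * fs_hz)
    n_stim = int(stim_s * fs_hz)
    baseline = trace[:n_base]
    stimulus = trace[n_base:n_base + n_stim]
    base_mean = mean(baseline)
    return {
        "baseline_mean": round(base_mean, 2),
        "baseline_fluctuation_sd": round(stdev(baseline), 2),  # pre-stimulus variability
        "peak_dilation_pct": round(100.0 * (max(stimulus) - base_mean) / base_mean, 2),
    }

# Hypothetical 25 Hz trace: 10 s baseline, then 20 s of flicker with mild dilation
random.seed(0)
base = [120 + random.gauss(0, 1.0) for _ in range(250)]
stim = [122 + random.gauss(0, 1.0) for _ in range(500)]
print(diameter_response(base + stim, fs_hz=25, baseline_s=10, stim_s=20))
```

Expressing the dilation relative to each vessel's own fluctuating baseline is exactly the point SDRA makes against simple averaged responses.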
abstract_id: PUBMED:11098458 Retinal vessel reaction to 100% O2 breathing: functional imaging using the retinal vessel analyzer with 10 volunteers Background: Retinal vessel diameter assessment is complicated by various components, among them dynamic changes due to vasomotor effects. Measurement of these diameters was usually obtained from fundus photographs. Functional diameter changes induced by external stimuli were difficult to evaluate because of their dynamic nature. The Retinal Vessel Analyzer (RVA) allows continuous on-line measurement of those dynamic changes. Whether functional changes due to 100% O2 breathing can be assessed by RVA is investigated in this study. Materials And Methods: Continuous on-line registration of retinal arterial and venous branch vessels was obtained in 10 healthy volunteers. A baseline was taken during the first minute. Then for 5 minutes 100% O2 was delivered by mask. Further recording ensued for 4 minutes, while breathing room air. Vessel diameter change as a percentage of baseline was calculated for each individual and for a mean of the group. Results: Each individual demonstrated vasoconstriction. The mean diameter reduction for the group was 6.5% for arteries and 15% for veins. Conclusions: RVA allows assessment of functional retinal branch vessel reactions. Retinal branch vessel diameters are determinants of capillary perfusion. RVA might be able to demonstrate an individual vessel's regulation potential by purposeful stimulation to constrict and dilate. This property could be helpful in understanding pathophysiologic processes as well as improving diagnosis and therapeutic effects in diseases influencing ocular perfusion such as diabetes, retinal vessel occlusion or even glaucoma. Further evaluation of effects of systemic diseases might be an additional application of functional retinal vessel diameter assessment by RVA. abstract_id: PUBMED:30259682 Retinal vessel oxygen saturation and vessel diameter in healthy individuals during high-altitude exposure. Purpose: To assess changes of retinal vessel oxygen saturation and vessel diameter in healthy individuals during high-altitude exposure. Methods: Retinal oxygen saturation and vessel diameter were obtained at sea level (SL, 40 m) and high altitude (HA, 3681 m) on 17 healthy individuals from Beijing (six males, 28.06 ± 8.06 years) using Oxymap T1 and then compared with 21 residents from Yushu (10 males, 28.63 ± 6.00 years). Systemic and ocular parameters were also measured before and after high-altitude exposure. Data were presented as mean ± SD and analysed using paired and independent Student t-test with significance accepted at p < 0.05. Results: Short-term high-altitude exposure of the Beijing group significantly affected all the systemic and ocular parameters, as well as retinal oxygen saturation and vessel diameter, both overall and across individual quadrants, with the exception of retinal venous oxygen saturation and retinal arterial diameter. However, these changes were not evident in those permanently living at HA. Pearson's correlation analysis revealed correlations between retinal oxygen saturation and systemic and ocular parameters (all p < 0.05). The multivariate linear regression analysis indicated that retinal arterial oxygen saturation was significantly associated with peripheral arterial oxygen saturation (SpO2) and subfoveal choroidal thickness.
Conclusion: Short-term exposure to HA induces retinal microcirculation disturbance and auto-regulatory response in healthy individuals, which is probably attributed to arterial SpO2 and endothelial dysfunction under hypoxic conditions. abstract_id: PUBMED:30057553 Experimental Characterization and Correlation of Mayer Waves in Retinal Vessel Diameter and Arterial Blood Pressure. Retinal vessels show various biological temporal variations that can impact diagnosis using a static vessel analysis. In this study, Mayer waves in the retinal vessel diameter and arterial blood pressure (BP) signals were characterized, and the temporal correlation between these two modalities was investigated. The arterial and venous vessel diameters and arterial BP were recorded simultaneously on human subjects. The obtained vessel diameters showed vasomotion amplitudes over time. The vessel diameter and BP signals contained multiple signals in the frequency domain and varied over time. The signal characteristics were similar within the measurements. The BP and arterial and venous vessel diameters were correlated. The highest correlation values between the signals were observed for shifts of 1 or 0 periods. The spectrum and amplitudes of the Mayer waves showed a high variability. The Mayer waves in the retinal vessel diameters showed the same characteristics as those in the arterial BP. A temporal dependency between the oscillations in the arterial BP and retinal vessel diameters was shown. abstract_id: PUBMED:29492727 Retinal vessel diameter obtained by optical coherence tomography is spared in Parkinson's disease. Purpose: To define the alterations in retinal vessel diameter in Parkinson's disease (PD) by optical coherence tomography (OCT). Methods: This is a case-control study including 41 eyes of 41 patients with diagnosis of PD and 35 eyes of 35 age- and sex-matched control subjects. All subjects underwent complete neurological and ophthalmological examinations before measurements. Retinal vessel diameters and peripapillary retinal nerve fiber layer (pRNFL) thicknesses were evaluated with spectral domain OCT (SD-OCT) with a circular scan centered at the optic disc. The diameters of the superior nasal and temporal arteries and veins, and inferior nasal and temporal arteries and veins were measured and then compared between the groups. Correlations with the duration of the disease, usage of levodopa, and pRNFL thicknesses between retinal vessel diameters were examined with Pearson and Spearman correlation analysis. Results: Average pRNFL thickness is significantly decreased in PD compared to age- and sex-matched controls (p < 0.05). At all measurement points, retinal artery diameter measurements were decreased in the PD group compared to controls, but the differences did not reach statistical significance. Diameters of the retinal veins also did not show any significant difference in the PD and control groups. Superior temporal artery diameter was significantly decreased in patients using levodopa compared to nonusers (p = 0.022). There were no statistically significant correlations between pRNFL thicknesses or disease duration with retinal vessel diameters in PD group. Conclusions: Parkinson's disease does not seem to have an impact on the retinal vessel diameters obtained by SD-OCT. 
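The lag analysis in the Mayer-wave study above (PUBMED:30057553) amounts to correlating the two signals at integer shifts and locating the maximum. A minimal numpy sketch on synthetic ~0.1 Hz oscillations (all signal parameters are invented):

```python
import numpy as np

def lagged_correlation(x, y, max_lag):
    """Pearson correlation of x against y for shifts of -max_lag..+max_lag samples."""
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag < 0:
            a, b = x[:lag], y[-lag:]
        elif lag > 0:
            a, b = x[lag:], y[:-lag]
        else:
            a, b = x, y
        out[lag] = float(np.corrcoef(a, b)[0, 1])
    return out

# Synthetic ~0.1 Hz Mayer waves: vessel diameter lagging blood pressure by 2 s
fs = 5.0                       # 5 Hz sampling
t = np.arange(0, 120, 1 / fs)  # 2 minutes
rng1, rng2 = np.random.default_rng(1), np.random.default_rng(2)
bp   = np.sin(2 * np.pi * 0.1 * t) + 0.2 * rng1.standard_normal(t.size)
diam = np.sin(2 * np.pi * 0.1 * (t - 2.0)) + 0.2 * rng2.standard_normal(t.size)

corrs = lagged_correlation(bp, diam, max_lag=25)
best = max(corrs, key=corrs.get)
print(f"best lag = {best} samples ({best / fs:+.1f} s), r = {corrs[best]:.2f}")
```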
abstract_id: PUBMED:12195319 Reproducibility of measurements with the retinal vessel analyzer under optimal conditions Background: To evaluate the reproducibility of measurements with the Retinal Vessel Analyzer (RVA) in healthy subjects and to describe which measuring conditions should be guaranteed to obtain optimal results. Methods: The diameters of retinal arteries and veins of 20 healthy subjects (M:F = 11:9; mean age: 33 ± 12 years) were measured with the RVA at baseline, after 2 hours, and after 2 weeks. Results: No statistically significant differences in the diameter of retinal arteries and veins between the single measurements were present. Short-term variability of arterial diameter was 1%, and the intraclass correlation coefficient kappa was 0.96. Long-term variability was 1.8%, kappa was 0.98. In retinal veins, a short-term variability of 1% was calculated, with a kappa of 0.97. Long-term variability was 1.5%, with a kappa of 0.98. Conclusion: Due to the high reproducibility of its measurements, the RVA appears to be a useful device for both analysis and follow-up of retinal vessel diameters. abstract_id: PUBMED:35683496 Exploring Retinal Blood Vessel Diameters as Biomarkers in Multiple Sclerosis. We aimed to determine whether retinal vessel diameters and retinal oxygen saturation in newly diagnosed patients with multiple sclerosis (pwMS) are different from those of a healthy population. Retinal blood vessel diameters were measured using imaging with a spectrophotometric non-invasive retinal oximeter. Twenty-three newly diagnosed untreated relapsing-remitting MS (RRMS) patients (mean age: 32.2 ± 7.5 years, age range = 18-50 years, 56.5% female) were measured and compared to 23 age- and sex-matched healthy controls (HCs) (mean age: 34.8 ± 8.1 years). Patients with optic neuritis were excluded. Retinal venular diameter (143.8 µm versus 157.8 µm: mean; p = 0.0013) and retinal arteriolar diameter (112.6 µm versus 120.6 µm: mean; p = 0.0089) were smaller in pwMS when compared with HCs, respectively. There was no significant difference in the oxygen saturation in retinal venules and arterioles in pwMS (mean: 60.0% and 93.7%; p = 0.5980) compared to HCs (mean: 59.3% and 91.5%; p = 0.8934), respectively. There was a significant difference in the median low contrast visual acuity (2.5% contrast) between the pwMS and the HC groups (p = 0.0143). Retinal arteriolar and venular diameter may have potential as objective biomarkers for MS. abstract_id: PUBMED:25809154 Peripapillary retinal vessel diameter correlates with mfERG alterations in retinitis pigmentosa. Purpose: To investigate the relationship between the peripapillary retinal vessel diameter and the residual retinal function, measured by mfERG, in patients with retinitis pigmentosa (RP). Patients And Methods: A cross-sectional study based on 23 patients with RP (43 eyes) and 20 controls (40 eyes) was performed. Retinal vessel diameters were measured using a computer-based program of the retinal vessel analyser (RVA; IMEDOS Systems UG, Jena, Germany). We evaluated the mean diameter in all four major retinal arterioles (D-A) and venules (D-V) within 1.0-1.5 optic disc diameters from the disc margin.
The data were compared with the N1 amplitudes (measured from the baseline to the trough of the first negative wave), with the N1P1 amplitudes (measured from the trough of the first negative wave to the peak of the first positive wave) of the mfERG overall response and with the mfERG responses averaged in zones [zone 1 (0°-3°), zone 2 (3°-8°), zone 3 (8°-15°) and zone 4 (15°-24°)]. Results: Mean (±SD) D-A and D-V were narrower in patients with RP [84.86 μm (±13.37 μm) and 103.35 μm (±13.65 μm), respectively] when compared to controls [92.81 μm (±11.49 μm) and 117.67 μm (±11.93 μm), respectively; the p-values between groups were p = 0.003 for D-A and p < 0.001 for D-V, linear mixed-effects model]. The RP group revealed clear differences compared to the controls: D-A and D-V became narrower with reduced mfERG responses. D-V correlated significantly with the overall mfERG N1 amplitudes (p = 0.013) and with N1P1 amplitudes (p = 0.016). D-V correlated with the mfERG amplitudes averaged in zones (zones 2, 3 and 4; p ≤ 0.040) and N1P1 mfERG amplitudes (zones 1, 2, 3 and 4; p ≤ 0.013). Conclusions: Peripapillary retinal vessel diameter is reduced in RP proportionally to functional alterations. abstract_id: PUBMED:33497297 Retinal Vessel Diameter Changes in COVID-19 Infected Patients. Purpose: To evaluate the longitudinal changes in retinal vessel diameters in patients with coronavirus disease 2019 (COVID-19). Methods: This study included 25 patients with COVID-19 (Group 1) and 25 healthy subjects (Group 2). The diameters of peripapillary temporal and nasal retinal arteries and veins were measured at baseline and at 4 months after remission. Results: The baseline diameters of the inferior temporal vein and the artery were increased in group 1 compared to controls (p = .007 and p = .041, respectively). There was also an increase in the diameters of the inferior and superior nasal veins and arteries in group 1 at baseline (p = .001, p = .019, p = .037, and p = .008, respectively). Retinal vessel diameters decreased after remission in all quadrants in comparison to baseline measurements (all p < .05). Conclusion: Increased retinal vessel diameters were measured in COVID-19 patients during the disease. Measurement of retinal vessel diameters may be a noninvasive method of estimating the vascular risk. Answer: Continuous retinal vessel diameter measurements have shown promise as a future method in retinal vessel assessment. The sequential and diameter response analysis (SDRA) method has demonstrated marked differences in retinal vessel responses to flickering light compared with established methods, indicating its potential for improved insight into reaction patterns of retinal vessels (PUBMED:20435593). This method has also revealed differences in baseline retinal vessel diameter and subsequent dilatory response, as well as increased stiffness of retinal arterioles in smokers compared to nonsmokers, which were not detected by the standard retinal vessel analyzer (RVA) software (PUBMED:20435593). The Retinal Vessel Analyzer (RVA) allows continuous on-line measurement of dynamic changes in retinal vessel diameter, which can be induced by external stimuli such as 100% O2 breathing (PUBMED:11098458). This continuous measurement can help assess functional retinal branch vessel reactions, which are important for understanding pathophysiological processes and improving diagnosis and therapeutic effects in diseases influencing ocular perfusion (PUBMED:11098458).
Moreover, the reproducibility of measurements with the RVA under optimal conditions has been established, indicating its usefulness for both analysis and follow-up of retinal vessel diameters (PUBMED:12195319). Additionally, continuous measurements have allowed for the characterization and correlation of Mayer waves in retinal vessel diameter and arterial blood pressure, showing a temporal dependency between the oscillations in these two modalities (PUBMED:30057553). In conclusion, continuous retinal vessel diameter measurements, as facilitated by methods like SDRA and RVA, offer a more dynamic and potentially insightful approach to assessing retinal vessel physiology and pathology. These methods could become integral in the future of retinal vessel assessment, providing a more nuanced understanding of retinal vascular health and disease.
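The reproducibility figures behind this paragraph (short-term variability near 1%, intraclass correlation above 0.9) are standard repeat-measurement statistics. The sketch below computes a within-subject coefficient of variation and a one-way random-effects ICC(1,1) from invented repeat diameters; the formulas are textbook ones and not necessarily the exact estimators used in the cited studies:

```python
from statistics import mean

def cv_percent(repeats):
    """Mean within-subject coefficient of variation (%) over subjects."""
    cvs = []
    for reps in repeats:
        m = mean(reps)
        sd = (sum((r - m) ** 2 for r in reps) / (len(reps) - 1)) ** 0.5
        cvs.append(100.0 * sd / m)
    return mean(cvs)

def icc_1_1(repeats):
    """One-way random-effects ICC(1,1) for n subjects x k repeats."""
    n, k = len(repeats), len(repeats[0])
    grand = mean(r for reps in repeats for r in reps)
    subj_means = [mean(reps) for reps in repeats]
    ms_between = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    ms_within = sum((r - m) ** 2
                    for reps, m in zip(repeats, subj_means)
                    for r in reps) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Invented arterial diameters (um), two repeat measurements per subject
data = [(108, 109), (121, 120), (95, 96), (132, 131), (114, 115), (101, 100)]
print(f"CV = {cv_percent(data):.2f}%, ICC = {icc_1_1(data):.3f}")
```

Small within-subject scatter against a large between-subject spread is what drives both the ~1% variability and the near-unity ICC values quoted above.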
Instruction: Is chlamydia screening and testing in Britain reaching young adults at risk of infection? Abstracts: abstract_id: PUBMED:26290483 Is chlamydia screening and testing in Britain reaching young adults at risk of infection? Findings from the third National Survey of Sexual Attitudes and Lifestyles (Natsal-3). Background: In the context of widespread opportunistic chlamydia screening among young adults, we aimed to quantify chlamydia testing and diagnosis among 16-24 year olds in Britain in relation to risk factors for prevalent chlamydia infection. Methods: Using data from sexually experienced (≥1 lifetime sexual partner) 16-year-old to 24-year-old participants in Britain's third National Survey of Sexual Attitudes and Lifestyles (conducted 2010-2012), we explored socio-demographic and behavioural factors associated with prevalent chlamydia infection (detected in urine; n=1832), self-reported testing and self-reported diagnosis in the last year (both n=3115). Results: Chlamydia prevalence was 3.1% (95% CI 2.2% to 4.3%) in women and 2.3% (1.5% to 3.4%) in men. A total of 12.3% of women and 5.3% of men had a previous chlamydia diagnosis. Factors associated with prevalent infection were also associated with testing and diagnosis (e.g., increasing numbers of sexual partners), with some exceptions. For example, chlamydia prevalence was higher in women living in more deprived areas, whereas testing was not. In men, prevalence was higher in 20-24 than 16-19 year olds but testing was lower. Thirty per cent of women and 53.7% of men with ≥2 new sexual partners in the last year had not recently tested. Conclusions: In 2010-2012 in Britain, the proportion of young adults reporting chlamydia testing was generally higher in those reporting factors associated with chlamydia. However, many of those with risk factors had not been recently tested, leaving potential for undiagnosed infections. Greater screening and prevention efforts among individuals in deprived areas and those reporting risk factors for chlamydia may reduce undiagnosed prevalence and transmission. abstract_id: PUBMED:21961932 Improving STD testing behavior among high-risk young adults by offering STD testing at a vocational school. Background: Chlamydia trachomatis infection (CT) is the most prevalent bacterial STD. Sexually active adolescents and young adults are the main risk group for CT. However, STD testing rates in this group are low since exposed individuals may not feel at risk, owing, at least in part, to the infection's largely asymptomatic nature. Designing new testing environments that are more appealing to young people who are most at risk of acquiring chlamydia can be an important strategy to improve overall testing rates. Here we evaluate the effect of a school-based sexual health program conducted among vocational school students, aiming to obtain better access for counseling and enhance students' STD testing behavior. Methods: Adolescents (median age 19 years) attending a large vocational school were provided with sexual health education. Students filled in a questionnaire measuring CT risk and were offered STD testing. Using univariate and multivariate analysis, we assessed differences between men and women in STD-related risk behavior, sexual problems, CT testing behavior and determinants of CT testing behavior. Results: Of 345 participants, 70% were female. Of the 287 sexually active students, 75% were at high risk for CT; one third of women reported sexual problems.
Of sexually active participants, 61% provided a self-administered specimen for STD testing. Independent determinants for testing included STD-related symptoms and no condom use. All CT diagnoses were in the high-CT-risk group. In the high-risk group, STD testing showed an increased uptake, from 27% (previous self-reported test) to 65% (current test). CT prevalence was 5.7%. Conclusions: Vocational school students are a target population for versatile sexual health prevention. When provided with CT testing facilities and education, self-selection mechanisms seemed to increase the CT testing rate dramatically in this high-CT-risk population expressing sexual problems. Considering the relative ease of testing and treating large numbers of young adults, offering tests at a vocational school is feasible in reaching adolescents for STD screening. Although cost-effectiveness remains an issue, counseling is effective in increasing test rates. abstract_id: PUBMED:22510331 Where do young men want to access STI screening? A stratified random probability sample survey of young men in Great Britain. Objectives: Rates of sexually transmitted infections (STIs) in UK young people remain high in men and women. However, the National Chlamydia Screening Programme has had limited success in reaching men. The authors explored the acceptability of various medical, recreational and sports venues as settings to access self-collected testing kits for STIs and HIV among men in the general population and those who participate in sport. Methods: A stratified random probability survey of 411 (weighted n=632) men in Great Britain aged 18-35 years using computer-assisted personal and self-interviews. Results: Young men engaged well with healthcare with 93.5% registered with, and 75.3% having seen, a general practitioner in the last year. 28.7% and 19.8% had previously screened for STIs and HIV, respectively. Willingness to access self-collected tests for STIs (85.1%) and HIV (86.9%) was high. The most acceptable pick-up points for testing kits were general practice 79.9%, GUM 66.8% and pharmacy 65.4%. There was a low acceptability of sport venues as pick-up points in men as a whole (11.7%), but this was greater among those who participated in sport (53.9%). Conclusions: Healthcare settings were the most acceptable places for accessing STI and HIV self-testing kits. Although young men frequently access general practice, currently little STI screening occurs in this setting. There is considerable potential to screen large numbers of men and find high rates of infection through screening in general practice. While non-clinical settings are acceptable to a minority of men, more research is needed to understand how these venues could be used most effectively. abstract_id: PUBMED:18504140 Screening young adults for prevalent chlamydial infection in community settings. Purpose: Community-based testing may identify young adults in the general population with sexually transmitted chlamydial infection. To develop selective screening guidelines appropriate for community settings, the authors conducted a cross-sectional analysis of the National Longitudinal Study of Adolescent Health Wave III (April 2, 2001, to May 9, 2002). Methods: Separately for women and men, we developed three predictive models by using unconditional multiple logistic regression for survey data.
To account for racial/ethnic disparity in prevalence, initial models included identical predictor characteristics plus information on 1) respondent's race/ethnicity; or 2) respondent's most recent partner's race/ethnicity; or 3) no information on race/ethnicity. Results: Chlamydia trachomatis diagnosis was available for 10,928 (88.6%) of the sexually experienced respondents. A combination of five characteristics for women and six characteristics for men identified approximately 80% of infections when testing ≤50% of the population. Information regarding race/ethnicity dramatically affected algorithm performance. Conclusion: The use of race/ethnicity in any screening algorithm is problematic and controversial, but the model without race information missed many diagnoses in the minority groups. Universal screening in high-prevalence regions and selective screening in low-prevalence regions may be one method of reaching the affected populations while avoiding the stigma of guidelines incorporating race/ethnicity. abstract_id: PUBMED:22941864 Repeat genital Chlamydia trachomatis testing rates in young adults in England, 2010. Objectives: To explore patterns of repeat chlamydia testing among young people in England and factors associated with testing positive at repeat test. Methods: We analysed chlamydia testing among 15- to 24-year-olds in England in a single calendar year (2010) using data from the genitourinary medicine clinic activity dataset (GUMCAD) and tests reported through the National Chlamydia Screening Programme (NCSP). Case records were linked using patient clinic numbers (GUMCAD), or by matching date of birth, gender and postcode (NCSP). Individuals could not be linked between datasets. The incidence of repeat testing was estimated using survival analysis. Risk factors for testing positive at repeat test were explored using multivariable logistic regression. Results: 1 235 058 tests in the NCSP dataset and 502 095 in GUMCAD were included. The incidence of repeat testing was 18.4 and 26.1 per 100 person-years in the NCSP dataset and GUMCAD, respectively. Among NCSP repeat tests, the proportion testing positive was higher in those reporting recent change of sexual partner (adjusted OR males 1.44; females 1.52), and among those with a positive compared to a negative baseline test (adjusted OR males 2.57; females 1.95). Conclusions: We observed moderate levels of repeat testing within a year. Considering the frequency of partner change among young people, more could be done to encourage re-testing upon change of sexual partner. Increasing re-testing following a positive test could probably identify unresolved or repeat infections that may otherwise go untreated. Work to establish the optimum approach to repeat testing in England is now warranted. abstract_id: PUBMED:33598210 A review of current guidelines and research on the management of sexually transmitted infections in adolescents and young adults. Adolescents and young adults are at high risk for sexually transmitted infections (STIs). Providers have the responsibility to accurately manage these infections to prevent medical complications and the spread of STIs. This article will review the epidemiology, screening recommendations, diagnostic tests, and treatment guidelines for STIs most commonly encountered in this population: Chlamydia trachomatis, Neisseria gonorrhoeae, Herpes simplex virus, and Trichomonas vaginalis, as well as the sexually associated infection bacterial vaginosis.
This review will discuss ongoing research that explores ways to improve the management of STIs in adolescents and young adults. abstract_id: PUBMED:33300564 Optimizing Screening for Anorectal, Pharyngeal, and Urogenital Chlamydia trachomatis and Neisseria gonorrhoeae Infections in At-Risk Adolescents and Young Adults in New Orleans, Louisiana and Los Angeles, California, United States. Background: Public health organizations have inconsistent recommendations for screening adolescents and young adults for Chlamydia trachomatis and Neisseria gonorrhoeae infections. Guidelines suggest different combinations of anorectal, pharyngeal, and urogenital testing based on age, sex, and sexual activity. Further evaluation of how identity and behaviors impact the anatomic distribution of C. trachomatis and N. gonorrhoeae infection is needed to optimize future screening practices. Methods: We assessed the positivity of C. trachomatis and N. gonorrhoeae infections at different anatomic sites in a cohort of at-risk sexually active adolescents and young adults aged 12-24 years in New Orleans, Louisiana and Los Angeles, California. Participants were tested for C. trachomatis and N. gonorrhoeae at 3 sites (anorectum, pharynx, and urethra/cervix) every 4 months using self-collected swabs. We stratified anatomic distributions of infection into 4 gender and sexual behavior categories: (1) cisgender men who have sex with men and transgender women (MSMTW); (2) cisgender heterosexual males; (3) cisgender heterosexual females; and (4) gender minorities assigned female at birth. Results: While three-site testing detected all infections, two-site (anorectum and urethra/cervix) testing identified 92%-100% of C. trachomatis or N. gonorrhoeae infections in participants assigned female at birth and cisgender heterosexual males. For MSMTW, two-site anorectal and pharyngeal testing vs single-site anorectal testing increased the proportion of individuals with either infection from 74% to 93%. Conclusions: Sexual behavior and gender identity may influence detection of C. trachomatis and N. gonorrhoeae infections at specific anatomic testing sites. Testing guidelines should incorporate sexual behavior and gender identity. Clinical Trials Registration: NCT03134833. abstract_id: PUBMED:29439753 Genital Chlamydia trachomatis infections in young adults - a school-based bio-behavioural study in urban areas, Poland, 2012 to 2015. Context and Aim: One of the most common sexually transmitted pathogens disproportionately affecting young people is Chlamydia trachomatis (CT). This study aimed to assess prevalence of CT among sexually active students (aged 18-19 years) in their final years of high school education in Warsaw and Krakow. Methods: The sample was selected from 61 clusters, each cluster representing one school. We described city, sex, type of school and their association with CT prevalence. To account for non-responders we applied inverse probability weighting. Results: Our study population consisted of 3,136 young adults eligible for CT screening, of whom 2,326 reported having had sexual intercourse within the past 12 months. Of the 950 students who agreed to be tested, 39 were infected with CT. Weighted prevalence of CT was 3.9% (95% confidence interval (CI): 2.7-5.1); however, prevalence in the students in Warsaw (6.6%; 95% CI: 3.5-12.4) was six times higher (prevalence ratio (PR) = 5.9; 95% CI: 2.0-17.3) than in Krakow (1.1%; 95% CI: 0.5-2.6).
In both settings, female students attending vocational-technical schools were most affected; the prevalence in this group was more than five times higher (PR = 5.2; 95% CI: 1.7-15.6) compared with female peers in high schools and more than three times higher (PR = 3.3; 95% CI: 1.0-10.7) compared with male peers attending vocational-technical schools. Conclusion: Our study suggested a prevalence of CT infection among young people in Poland comparable with the European average, supporting implementation of a CT control programme as recommended in international guidelines. abstract_id: PUBMED:36300428 Closing the Equity Gap: An Intervention to Improve Chlamydia and Gonorrhea Testing for Adolescents and Young Adults in Primary Care. Background: Chlamydia trachomatis and Neisseria gonorrhoeae are the most reported sexually transmitted infections in the United States. Testing rates remain suboptimal and may be subject to implicit bias. We evaluated the effectiveness of an opt-out chlamydia and gonorrhea testing program for adolescents and young adults in improving testing rates and promoting equity. Methods: An opt-out testing program that standardized testing to once annually for 14- to 24-year-old patients was implemented across 28 federally qualified health centers spanning 4 specialties. A quasi-experimental design using interrupted time series analyses evaluated testing and infection rates between baseline, intervention, and pandemic-associated test shortage periods. Reductions in testing inequities based on sex, race, ethnicity, insurance, and language preference were also examined. Results: A total of 57 452 encounters during the baseline, 17 320 during the intervention, and 26 993 during the test supply shortage periods were included. Testing increased from 66.8% to 81.0% (14.2% absolute increase) between baseline and intervention periods. Pediatric clinics demonstrated the largest improvement compared to other settings (absolute increase 30.9%). We found significant reductions in testing inequities for language preference (P < .001), and uninsured and publicly insured individuals (P < .001). More cases of chlamydia and gonorrhea were detected in the intervention period (chlamydia-29.7; gonorrhea-7.4 per 1000 patients) than in the baseline period (chlamydia-20.7; gonorrhea-4.4 per 1000 patients; P < .001). Conclusions: An opt-out approach increased testing, reduced inequities between some groups and detected more infections than a risk-based approach. Opt-out testing should be considered as an approach to increase detection of chlamydia and gonorrhea and promote equity. abstract_id: PUBMED:21489313 Urine-based testing for Chlamydia trachomatis among young adults in a population-based survey in Croatia: feasibility and prevalence. Background: We assessed the feasibility of collecting urine samples for testing for genital Chlamydia trachomatis infection in a population-based survey, and prevalence of this infection among young people aged 18-25 in Croatia. In Croatia, as in the other countries of Eastern Europe, there is a lack of data on prevalence of C. trachomatis in the general population, including young adults. Methods: We sampled participants using a nationally representative, multi-stage stratified probability sample of young men and women. Detection of C. trachomatis DNA in urine samples was performed by using a real-time PCR assay COBAS® TaqMan® CT Test, v2.0.
Results: Overall, 1005 young adults participated in the behavioural part of the survey, and 27.9% of men and 37.5% of women who were sexually experienced agreed to provide urine samples for testing for C. trachomatis. Using multivariate analysis, women were significantly more likely to provide urine samples than men (aOR = 1.53, 95% CI 1.14-2.06) as were those who reported no condom use at last intercourse (aOR = 1.95, 95% CI 1.44-2.62). Prevalence of C. trachomatis infection among those who were sexually experienced was 7.3% in men and 5.3% in women. Conclusions: Population-based surveys that use probabilistic sampling are a feasible way to obtain population estimates of C. trachomatis prevalence among young adults in Croatia, but it is challenging to obtain an adequate response rate. The prevalence of C. trachomatis among young adults in Croatia found in this study was higher than that found in other European countries with similar survey response rates. Answer: Chlamydia screening and testing in Britain is reaching young adults at risk of infection to some extent, but there are gaps in coverage. The findings from the third National Survey of Sexual Attitudes and Lifestyles (Natsal-3) indicate that while the proportion of young adults reporting chlamydia testing was generally higher in those reporting factors associated with chlamydia, many of those with risk factors had not been recently tested. This suggests that there is potential for undiagnosed infections, particularly among individuals in deprived areas and those reporting risk factors for chlamydia (PUBMED:26290483).
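The selective-screening result cited earlier (PUBMED:18504140: roughly 80% of infections identified while testing no more than 50% of the population) reflects a generic coverage/yield calculation: rank individuals by predicted risk, test the top fraction, and count the infections captured. A schematic sketch with invented risk scores and infection labels:

```python
def screening_yield(risk_scores, infected, test_fraction):
    """Fraction of infections captured when testing the top `test_fraction`
    of the population ranked by predicted risk."""
    order = sorted(range(len(risk_scores)), key=lambda i: risk_scores[i], reverse=True)
    n_tested = int(round(test_fraction * len(order)))
    tested = set(order[:n_tested])
    captured = sum(1 for i in tested if infected[i])
    return captured / sum(infected)

# Invented cohort: predicted risk from a logistic model, and true infection status
risk  = [0.02, 0.31, 0.05, 0.22, 0.40, 0.01, 0.15, 0.09, 0.27, 0.03]
truth = [0,    1,    0,    0,    1,    0,    1,    0,    1,    0]
print(f"testing 50% of the cohort captures {screening_yield(risk, truth, 0.5):.0%} of infections")
```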
Instruction: Intra-abdominal drainage after pancreatic resection: is it really necessary? Abstracts: abstract_id: PUBMED:25805951 Endoscopic ultrasound-guided drainage of postoperative intra-abdominal abscesses. Although endoscopic ultrasound (EUS)-guided drainage has become the standard procedure for pancreatic pseudocysts in recent years and is generally regarded as a safe and effective method, there have been few reports of EUS-guided drainage of postoperative intra-abdominal abscesses. Here we report our experience with 4 cases of postoperative intra-abdominal abscesses for which EUS-guided drainage was performed between May 2011 and May 2014. Distal pancreatectomy had been performed in 3 cases, whereas low anterior resection for rectal cancer was performed in the remaining case. All patients underwent transgastric naso-cystic drainage, which resulted in clinical improvement without complications, even when performed within 4 weeks after surgery. On average, the naso-cystic drain was removed 10 days after placement, with no abscess recurrence. Based on these findings, we believe that EUS-guided drainage of postoperative intra-abdominal abscesses is a safe and effective method, although further large-scale investigations are required to confirm our findings. abstract_id: PUBMED:25009383 Is routine drainage necessary after pancreaticoduodenectomy? With the development of imaging technology and surgical techniques, pancreatic resections to treat pancreatic tumors, ampulla tumors, and other pancreatic diseases have increased. Pancreaticoduodenectomy, one type of pancreatic resection, is a complex surgery with the loss of pancreatic integrity and various anastomoses. Complications after pancreaticoduodenectomy such as pancreatic fistulas and anastomosis leakage are common and significantly associated with patient outcomes. Pancreatic fistula is one of the most important postoperative complications; this condition can cause intraperitoneal hemorrhage, septic shock, or even death. An effective way has not yet been found to avoid the occurrence of pancreatic fistula. In most medical centers, the frequency of pancreatic fistula has remained between 9% and 13%. The early detection and routine drainage of anastomotic fistulas, pancreatic fistulas, bleeding, or other intra-abdominal fluid collections after pancreatic resections are considered important and effective ways to reduce postoperative complications and the mortality rate. However, many recent studies have argued that routine drainage after abdominal operations, including pancreaticoduodenectomies, does not affect the incidence of postoperative complications. Although inserting drains after pancreatic resections continues to be a routine procedure, its necessity remains controversial. This article reviews studies of the advantages and disadvantages of routine drainage after pancreaticoduodenectomy and discusses the necessity of this procedure. abstract_id: PUBMED:24824188 Intra-abdominal drainage after pancreatic resection: is it really necessary? A meta-analysis of short-term outcomes. Introduction: Pancreatic fistula (PF) is the most dreaded complication in patients after pancreatic resection. The use of operative site drains has long been considered routine in pancreatic surgery in order to remove any collections and to act as a warning of hemorrhage or anastomotic leakage.
To date, few studies have investigated the potential benefit and safety of routine drainage compared with no drainage after pancreatic resection, and the evidence in the literature is not clear. Methods: A systematic review of the literature was carried out performing an unrestricted search in MEDLINE, EMBASE and Cochrane Library up to 28th February 2014. Reference lists of retrieved articles and review articles were manually searched for other relevant studies. The currently available data regarding the incidence of post-operative short-term outcomes after pancreatic resection were meta-analyzed according to the presence or absence of the intra-abdominal drainage. Results: Overall, 7 studies were included in the meta-analysis: 2 randomized controlled trials (RCTs) and 5 non-RCTs, comprising 2704 patients in total. Intra-abdominal drainage was shown to increase the PF (OR 2.31, 95% CI 1.52-3.51), the total post-operative complications (OR 1.52, 95% CI 1.30-1.78) and the re-admission (OR 1.30, 95% CI 1.06-1.61) rates. No significant association was found between the presence/absence of drainage and biliary and enteric fistula, post-operative hemorrhage, intra-abdominal infected collection, wound infection and overall mortality rates. Conclusion: The meta-analysis shows that the presence of an intra-abdominal drain does not improve the post-operative outcome after pancreatic resection. abstract_id: PUBMED:9841995 Is intra-abdominal drainage necessary after pancreaticoduodenectomy? Closed suction drains after pancreaticoduodenectomy are theoretically used to drain potential collections and anastomotic leaks. It is unknown whether such drains are effective, harmful, or affect the outcome after this operation. Eighty-nine consecutive patients underwent pancreaticoduodenectomy for presumed periampullary malignancy and were retrospectively reviewed. Thirty-eight had no intra-abdominal drains and 51 had drains placed at the conclusion of the operation. We analyzed patient, nutritional, laboratory, and operating room factors with end points being complications and length of hospital stay. Intra-abdominal complications were defined as intra-abdominal abscess and pancreatic or biliary fistula. Postoperative interventions were defined as CT-guided drainage and reoperation. Analysis was by Student's t test and chi-square test. Two of eight surgeons contributed 92% of the patients without drains. The groups were equivalent with respect to demographic, nutritional, and operative factors. Time under anesthesia was significantly shorter in the group without drains (P = 0.0001). There was no statistical difference in the rate of fistula, abscess, CT drainage, or length of hospital stay. Intra-abdominal drainage did not significantly alter the risk of fistula, abscess, or reoperation or the necessity for CT-guided intervention after pancreaticoduodenectomy. Routine use of drains after pancreaticoduodenectomy may not be necessary and should be subjected to a randomized trial. abstract_id: PUBMED:7730425 Actinomycosis: value of percutaneous drainage of intra-abdominal abscesses. Abdominal actinomycosis was localized in the liver with intra-abdominal abscesses. The disease had evolved for several years before diagnosis. Percutaneous interventional radiology allowed biopsies to be taken, which confirmed the diagnosis. Percutaneous drainage of the intra-abdominal abscesses combined with antibiotic therapy was successful, without the need for repeat surgery.
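For readers who want to see how per-study odds ratios like those in PUBMED:24824188 are combined, here is a minimal inverse-variance fixed-effect pooling sketch in Python. The standard errors are recovered from the reported confidence intervals on the log scale; the three input studies are hypothetical, not the seven actually meta-analyzed.

import math

def pooled_or_fixed(study_ors_cis, z=1.96):
    """Inverse-variance fixed-effect pooling of odds ratios.
    Each entry: (OR, lower_ci, upper_ci); SE recovered from the CI width."""
    num = den = 0.0
    for or_, lo, hi in study_ors_cis:
        log_or = math.log(or_)
        se = (math.log(hi) - math.log(lo)) / (2 * z)  # CI width on log scale
        w = 1 / se ** 2
        num += w * log_or
        den += w
    pooled = num / den
    se_pooled = math.sqrt(1 / den)
    return (math.exp(pooled),
            math.exp(pooled - z * se_pooled),
            math.exp(pooled + z * se_pooled))

# Hypothetical per-study ORs (drain vs no-drain) for pancreatic fistula
print(pooled_or_fixed([(2.1, 1.2, 3.7), (2.6, 1.4, 4.8), (1.9, 0.9, 4.0)]))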
abstract_id: PUBMED:34094952 Prophylactic Intra-Peritoneal Drainage After Pancreatic Resection: An Updated Meta-Analysis. Introduction: Prophylactic intra-peritoneal drainage has been considered to be an effective measure to reduce postoperative complications after pancreatectomy. However, routinely placed drainage during abdominal surgery may be unnecessary or even harmful to some patients, due to the possibility of increasing complications. There is thus still controversy about prophylactic intra-peritoneal drainage after pancreatectomy. This meta-analysis aimed to analyze the incidence of complications after either pancreaticoduodenectomy (PD) or distal pancreatectomy (DP) in the drain group and no-drain group. Methods: Data were retrieved from four electronic databases (PubMed, EMBASE, the Cochrane Library and Web of Science) up to December 2020, including the outcomes of individual treatment after PD and DP, mortality, morbidity, clinically relevant postoperative pancreatic fistula (CR-POPF), bile leak, wound infection, postoperative hemorrhage, delayed gastric emptying (DGE), intra-abdominal abscess, reoperation, interventional radiology (IR), and readmission. The Cochrane Collaboration Handbook and the criteria of the Newcastle-Ottawa scale were used to assess the quality of studies included. Results: We included 15 studies after strict screening. Thirteen studies with 16,648 patients were analyzed to assess the effect of drain placement on patients undergoing different surgical procedures, and 4 studies with 6,990 patients were analyzed to assess the effect of drain placement on patients with different fistula risk. For patients undergoing PD, the drain group had lower mortality but a higher rate of CR-POPF than the no-drain group. For patients undergoing DP, the drain group had higher rates of CR-POPF, wound infection and readmission. There were no significant differences in bile leak, hemorrhage, DGE, intra-abdominal abscess, and IR in either overall or each subgroup. For the low-risk subgroup, the rates of hemorrhage, DGE and morbidity were higher after drainage. For the high-risk subgroup, the rate of hemorrhage was higher while the rates of reoperation and morbidity were lower in the drain group. Conclusions: Intraperitoneal drainage may benefit some patients undergoing PD, especially those with high pancreatic fistula risk. For DP, current evidence suggests that routine drainage might not benefit patients, but no clear conclusions can be drawn because of the study limitations. abstract_id: PUBMED:19500396 Intra-abdominal pressure alterations after large pancreatic pseudocyst transcutaneous drainage. Background: Acute pancreatitis leads to abdominal hypertension and compartment syndrome. Weeks after the episode, pancreatic fluid sometimes organizes into pseudocysts, fluid collections adjacent to or within the gland. The aims of the present study were to evaluate the intra-abdominal pressure (IAP) induced by large pancreatic pseudocysts and to examine the effect of their transcutaneous drainage on IAP. Methods: Twenty-seven patients with a pancreatic pseudocyst were included. Nine patients with pseudocysts greater than 1 L (group A) had CT drainage and eighteen (volume less than 1 L) were the control group. The measurements of group A were taken 6 hours before and every morning after the drainage, while for group B, two measurements were performed, one on the day of the initial CT and one 7 days after. Abdominal compliance (Cabd) was calculated. Data were analyzed using Student's t-test.
Results: Baseline IAP for group A was 9.3 mmHg (S.D. 1.7 mmHg), while the first post-drainage day (PDD) IAP was 5.1 mmHg (S.D. 0.7 mmHg). The second PDD IAP was 5.6 mmHg (S.D. 0.8 mmHg), the third 6.4 mmHg (S.D. 1.2 mmHg), the fourth 6.9 mmHg (S.D. 1.6 mmHg), the fifth 7.9 mmHg (S.D. 1.5 mmHg), the sixth 8.2 mmHg (S.D. 1.4 mmHg), and the seventh 8.2 mmHg (S.D. 1.5 mmHg). Group B had baseline IAP 8.0 mmHg (S.D. 1.2 mmHg) and final 8.2 mmHg (S.D. 1.4 mmHg). Cabd after drainage was 185.6 ml/mmHg (SD 47.5 ml/mmHg). IAP values were reduced between the baseline and all the post-drainage measurements in group A. IAPs seem to stabilize after the 5th post-drainage day. Baseline IAP was higher in group A than in group B, while the two values, at day 7, were equivalent. Conclusion: The drainage of large pancreatic pseudocysts reduces IAP. Moreover, IAP seems to rise again shortly after drainage, but remains below the initial value. More chronic changes to the IAP are related to the abdominal cavity's properties and have to be further studied. abstract_id: PUBMED:25987799 Systematic review and meta-analysis of prophylactic abdominal drainage after pancreatic resection. Aim: To investigate whether prophylactic abdominal drainage is necessary after pancreatic resection. Methods: PubMed, Web of Science, and the Cochrane Library were systematically searched to obtain relevant articles published before January 2014. Publications were retrieved if they met the selection criteria. The outcomes of interest included: mortality, morbidity, postoperative pancreatic fistula (POPF), clinically relevant pancreatic fistula (CR-PF), abdominal abscess, reoperation rate, the rate of interventional radiology drainage, and the length of hospital stay. Subgroup analyses were also performed for pancreaticoduodenectomy (PD) and for distal pancreatectomy. Begg's funnel plot and the Egger regression test were employed to assess potential publication bias. Results: Nine eligible studies involving a total of 2794 patients were identified and included in this meta-analysis. Of the included patients, 1373 received prophylactic abdominal drainage. A fixed-effects model meta-analysis showed that placement of prophylactic drainage did not have beneficial effects on clinical outcomes, including morbidity, POPF, CR-PF, reoperation, interventional radiology drainage, and length of hospital stay (Ps > 0.05). In addition, prophylactic drainage did not significantly increase the risk of abdominal abscess. Overall analysis showed that omitting prophylactic abdominal drainage resulted in higher mortality after pancreatectomy (OR = 1.56; 95%CI: 0.93-2.92). Subgroup analysis of PD showed similar results to those in the overall analysis. Elimination of prophylactic abdominal drainage after PD led to a significant increase in mortality (OR = 2.39; 95%CI: 1.22-4.69; P = 0.01). Conclusion: Prophylactic abdominal drainage after pancreatic resection is still necessary, though more evidence from randomized controlled trials assessing prophylactic drainage after PD and distal pancreatectomy is needed. abstract_id: PUBMED:32360000 Reduction of intra-abdominal pressure after percutaneous catheter drainage of pancreatic fluid collection predicts survival. Objective: Intra-abdominal hypertension (IAH) can adversely affect the outcome in patients with acute pancreatitis (AP). The effect of percutaneous drainage (PCD) on IAH has not been studied. We studied the effect of PCD on IAH in patients with acute fluid collections.
Material And Methods: Consecutive patients with AP undergoing PCD between Jan 2016 and May 2018 were evaluated for severity markers, clinical course, hospital and ICU stay, and mortality. Patients were divided into two groups: with IAH and with no IAH (NIAH). The two groups were compared for severity scores, organ failure, hospital and ICU stay, reduction in IAP and mortality. Results: Of the 105 patients, IAH was present in 48 (45.7%) patients. Patients with IAH more often had severe disease, BISAP ≥2, higher APACHE II scores and a higher computed tomography severity index (CTSI). The IAH group more often had organ failure (OF) (87.5% vs. 70.2%, p = 0.033), prolonged ICU stay (12.5 vs. 6.75 days, p = 0.007) and higher mortality (52.1% vs. 15.8%, p < 0.001). After PCD, IAP decreased significantly more in the IAH group (21.85 ± 4.53 mmHg to 12.5 ± 4.42 mmHg) than in the NIAH group (12.68 ± 2.72 mmHg to 8.32 ± 3.18 mmHg), p < 0.001. Reduction of IAP in patients with IAH by >40% at 48 h after PCD was associated with better survival (63.3% vs. 36.7%, p = 0.006). Conclusion: We observed that patients with IAH have a poor outcome. PCD decreases IAP, and a fall in IAP of >40% of the baseline value predicts a better outcome after PCD in patients with acute fluid collections. abstract_id: PUBMED:30764599 Endoscopic Ultrasound-Guided Transgastric Drainage of an Intra-Abdominal Abscess following Gastrectomy. In recent years, endoscopic ultrasound (EUS)-guided transgastric drainage has been performed as a less invasive procedure for pancreatic fistulas and intra-abdominal abscesses occurring after surgery. However, there are no reports of EUS-guided transgastric drainage of intra-abdominal abscesses following gastrectomy. This case report describes 2 patients who developed an intra-abdominal abscess following gastrectomy and underwent EUS-guided transgastric drainage. Both patients underwent laparoscopy-assisted distal gastrectomy with Billroth-I reconstruction for gastric cancer. The intra-abdominal abscesses were caused by postoperative pancreatic fistula that developed following gastrectomy. One patient underwent naso-cystic drainage and the other underwent only a needle puncture of the abscess cavity. EUS-guided drainage was performed safely and effectively, although 1 patient developed gastroduodenal anastomotic leakage related to this procedure. In summary, EUS-guided transgastric drainage is safe and technically feasible even in post-gastrectomy patients. However, it is necessary to be careful if this procedure is performed in the early period following gastrectomy. Answer: The necessity of intra-abdominal drainage after pancreatic resection is a subject of debate. Some studies suggest that routine drainage may not be necessary and could potentially increase the risk of complications. A meta-analysis of short-term outcomes found that intra-abdominal drainage increased the rates of pancreatic fistula (PF), total post-operative complications, and re-admission (PUBMED:24824188). Another study concluded that routine use of drains after pancreaticoduodenectomy may not be necessary and should be subjected to a randomized trial (PUBMED:9841995). Similarly, a systematic review and meta-analysis indicated that prophylactic abdominal drainage did not have beneficial effects on clinical outcomes such as morbidity, postoperative pancreatic fistula (POPF), clinically relevant pancreatic fistula (CR-PF), reoperation, interventional radiology drainage, and length of hospital stay (PUBMED:25987799).
However, other studies have shown that prophylactic intra-peritoneal drainage may benefit some patients, particularly those undergoing pancreaticoduodenectomy (PD) with a high pancreatic fistula risk (PUBMED:34094952). In contrast, for distal pancreatectomy (DP), routine drainage might not benefit patients, although firm conclusions are precluded by limitations of the available studies. Furthermore, endoscopic ultrasound (EUS)-guided drainage has been reported as a safe and effective method for postoperative intra-abdominal abscesses, even when performed within 4 weeks after surgery (PUBMED:25805951). EUS-guided transgastric drainage has also been performed safely and effectively for intra-abdominal abscesses following gastrectomy (PUBMED:30764599). In summary, the current evidence suggests that the necessity of intra-abdominal drainage after pancreatic resection is not clear-cut and may depend on individual patient factors, the type of surgery performed, and the risk of complications such as pancreatic fistula. Further large-scale investigations and randomized controlled trials are required to confirm the findings and provide more definitive guidance on the practice of intra-abdominal drainage after pancreatic resection.
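One quantitative detail from the pseudocyst study above (PUBMED:19500396) can be made explicit: abdominal compliance (Cabd) is conventionally volume change per unit pressure change. Assuming that definition (the abstract does not spell out its formula), a toy Python calculation reproduces the reported order of magnitude:

def abdominal_compliance(drained_volume_ml, iap_before_mmhg, iap_after_mmhg):
    """Compliance as volume change per unit pressure change (ml/mmHg)."""
    return drained_volume_ml / (iap_before_mmhg - iap_after_mmhg)

# Hypothetical drained volume of ~780 ml with IAP falling from 9.3 to 5.1 mmHg
print(abdominal_compliance(780, 9.3, 5.1))  # ~185.7, close to the reported 185.6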
Instruction: Is low antiepileptic drug dose effective in long-term seizure-free patients? Abstracts: abstract_id: PUBMED:14513159 Is low antiepileptic drug dose effective in long-term seizure-free patients? Objective: To investigate the value of leaving seizure-free patients on low-dose medication. Method: This was an exploratory prospective randomized study conducted at our University Hospital. We evaluated the frequency of seizure recurrence and its risk factors following complete or partial antiepileptic drug (AED) withdrawal in patients who had been seizure-free for at least two years with focal, secondarily generalized and undetermined generalized epilepsies. To this end, patients were divided into two groups: Group 1 (complete AED withdrawal), and Group 2 (partial AED withdrawal). Partial AED withdrawal was defined as a 50% reduction of the initial dose. Medication was tapered off slowly in both groups. The follow-up period was 24 months. Results: Ninety-four patients were followed up: 45 were assigned to complete (Group 1) AED withdrawal and 49 to partial (Group 2) AED withdrawal. The seizure recurrence frequency after two years of follow-up was 34.04% in Group 1 and 32.69% in Group 2. Survival analysis showed that the probability of remaining seizure-free at 6, 12, 18 and 24 months after randomization did not differ between the two groups (p = 0.8). Group 1: 0.89, 0.80, 0.71 and 0.69; Group 2: 0.86, 0.82, 0.75 and 0.71. The analysis of risk factors for seizure recurrence showed that more than 10 seizures prior to seizure control was a significant predictive factor for recurrence after AED withdrawal (hazard ratio = 2.73). Conclusion: Leaving seizure-free patients on a low AED dose did not reduce the risk for seizure recurrence. That is, once the decision to withdraw AEDs has been made, withdrawal should be complete. abstract_id: PUBMED:34536736 Long-term response and response patterns to antiepileptic drugs in patients with newly diagnosed epilepsy. Objective: This study investigated the long-term response and response patterns to antiepileptic drugs (AEDs) in patients with newly diagnosed epilepsy. Methods: Patients who had been newly diagnosed with epilepsy and had at least 3-year follow-up records were enrolled. Their long-term response and response patterns to AEDs were retrospectively analyzed. Patients were divided into two groups, a controlled group and an uncontrolled group, according to whether 3-year seizure freedom (3YSF) was achieved. Multiple logistic regression analyses were used to identify risk factors associated with a poor drug response. Results: Of the 472 patients with epilepsy, 180 achieved immediate seizure control, 36 achieved early seizure control, 118 achieved late seizure control, and 138 did not achieve 3YSF. Patients who achieved 3YSF (334/472, 70.8%) were categorized into the controlled group. Among them, 53.9% (180/334) achieved 3YSF immediately, 10.8% (36/334) achieved 3YSF within 6 months, and 35.3% (118/334) achieved 3YSF after 6 months. Also in this group, 228 (228/472, 48.3%), 84 (84/472, 17.8%), 15 (15/472, 3.2%), and 7 (7/472, 1.5%) patients achieved 3YSF on the first, second, third, and fourth regimen, respectively. Multivariate analyses showed that multiple seizure types (odds ratio [OR] = 3.903, 95% confidence interval [CI]: 2.098-7.264; P < 0.001) and polytherapy (OR = 5.093, 95% CI: 3.183-8.149; P < 0.001) were independent risk factors for a poor drug response. Conclusion: The 3YSF rate in this cohort was 70.8%.
More than half of the patients achieved long-term remission immediately after treatment. The probability of attaining 3YSF decreased with the increase in number of drug regimens, especially in patients who experienced failure of two treatment regimens. abstract_id: PUBMED:26144543 The safety and long-term efficacy of zonisamide as adjunctive therapy for focal epilepsy. Epilepsy is a chronic disease which affects 1% of the population worldwide. As treatment is required for many years or decades, the long-term efficacy and tolerability of antiepileptic drugs are particularly important. Zonisamide (ZNS) is a second-generation antiepileptic drug with a unique structure and multiple mechanisms of action. Here, recently published long-term outcomes of patient cohorts with focal epilepsy undergoing ZNS treatment are discussed. Whereas ZNS monotherapy maintains similar seizure control efficacy to carbamazepine after 2 years of treatment, seizure-free rates are low in pharmacoresistant patients undergoing late adjunctive therapy. In preselected patient populations derived from double-blind studies, long-term seizure reduction and responder rates support sustained ZNS efficacy, good adherence and long-term retention. Adverse effects include somnolence and weight decrease, but data suggest that long-term treatment with ZNS is safe with only rare newly occurring adverse effects, and good long-term tolerability also regarding mood, behavior, cognition and bone maturation. abstract_id: PUBMED:27839670 Long-term outcomes of surgical treatment for epilepsy in adults with regard to seizures, antiepileptic drug treatment and employment. Purpose: There is Class I evidence for short-term efficacy of epilepsy surgery from two randomized controlled studies of temporal lobe resection. Long-term outcome studies are observational. The aim of this narrative review was to summarise long-term outcomes taking the study methodology into account. Methods: A PubMed search was conducted identifying articles on long-term outcomes of epilepsy surgery in adults with regard to seizures, antiepileptic drug treatment and employment. Definitions of seizure freedom were examined in order to identify the proportions of patients with sustained seizure freedom. The quality of the long-term studies was assessed. Results: In a number of high-quality studies 40-50% of patients had been continuously free from seizures with impairment of consciousness 10 years after resective surgery, with a higher proportion seizure-free at each annual follow-up. The proportion of seizure-free adults in whom AEDs have been withdrawn varied widely across studies, from 19-63% after around 5 years of seizure freedom. Few long-term vocational outcome studies were identified and results were inconsistent. Some investigators found no postoperative changes, others found decreased employment for patients with continuing seizures, but no change or increased employment for seizure-free patients. Having employment at baseline and postoperative seizure freedom were the strongest predictors of employment after surgery. Conclusions: Long-term studies of outcomes after epilepsy surgery are by necessity observational. There is a need for more prospective longitudinal studies of both seizure and non-seizure outcomes, considering individual patient trajectories in order to obtain valid outcome data needed for counselling patients about epilepsy surgery. abstract_id: PUBMED:26189956 Clinical features of long-term low-dose levetiracetam treatment for epilepsy. 
Background: The aim of this study was to assess the rate of response to long-term low-dose levetiracetam (LEV) treatment and the clinical factors associated with response. Methods: The response to low-dose LEV of 43 patients with epilepsy (22 male, 21 female; age range, 5-39 years; median age, 13 years) was retrospectively assessed. Patients aged <15 years received <20 mg/kg/day LEV, whereas those aged ≥15 years received <1000 mg/day LEV. Clinical features were compared between responders to low-dose LEV, responders to the recommended dose, and non-responders. Results: Of the 43 patients who received low-dose LEV, 13 (30%) showed improvement, defined as seizure cessation or >75% seizure reduction over 6 months for patients with monthly, weekly, and daily seizures; and over 1 year for patients with yearly seizures. Efficacy was maintained for >1 year in 10 (77%) of the 13 patients. Long-term response to low-dose LEV was significantly associated with older age at onset and fewer previous treatments with ineffective anti-epileptic drugs. All patients showing long-term response to low-dose LEV developed only focal seizures. Conclusions: Titration of LEV starting from a low dose may be effective in selected patients. Once patients respond to low-dose treatment, maintenance of the effective dosage may prolong response. abstract_id: PUBMED:28235655 Association between antiepileptic drug dose and long-term response in patients with refractory epilepsy. Seizures in patients with medically refractory epilepsy remain a substantial clinical challenge, not least because of the dearth of evidence-based guidelines as to which antiepileptic drug (AED) regimens are the most effective, and what doses of these drugs to employ. We sought to determine whether there were regions in the dosage range of commonly used AEDs that were associated with superior efficacy in patients with refractory epilepsy. We retrospectively analyzed treatment records from 164 institutionalized, developmentally disabled patients with refractory epilepsy, averaging 17 years of follow-up per patient. We determined the change in seizure frequency in within-patient comparisons during treatment with the most commonly used combinations of 12 AEDs, and then analyzed the response to treatment by quartile of the dose range for monotherapy with carbamazepine (CBZ), lamotrigine (LTG), valproate (VPA), or phenytoin (PHT), and the combination LTG/VPA. We found that of the 26 most frequently used AED regimens, only LTG/VPA yielded superior efficacy, similar to an earlier study. For the monotherapies, patients who were treated in the lowest quartile of the dose range had significantly better long-term reduction in seizure frequency compared to those treated in the 2nd and 3rd quartiles of the dose range. Patients with paired exposures to CBZ in both the lowest quartile and a higher quartile of dose range experienced an increase in seizure frequency at higher doses, while patients treated with LTG/VPA showed improved response with escalation of LTG dosage. We conclude that in this population of patients with refractory epilepsy, LTG/VPA was the most effective AED combination. The best response to AEDs used in monotherapy was observed at low dosage. This suggests that routine exposure to maximally tolerated AED doses may not be necessary to identify those patients with drug-resistant seizures who will have a beneficial response to therapy.
Rather, responders to a given AED regimen may be identified with exposure to low AED doses, with careful evaluation of the response to subsequent titration to identify non-responders or those with exacerbation of seizure frequency at higher doses. abstract_id: PUBMED:26820223 Effectiveness of antiepileptic therapy in patients with PCDH19 mutations. Purpose: PCDH19 mutations cause epilepsy and mental retardation limited to females (EFMR) or Dravet-like syndromes. Especially in the first years of life, epilepsy is known to be highly pharmacoresistant. The aim of our study was to evaluate the effectiveness of antiepileptic therapy in patients with PCDH19 mutations. Methods: We report a retrospective multicenter study of antiepileptic therapy in 58 female patients aged 2-27 years (mean age 10.6 years) with PCDH19 mutations and epilepsy. Results: The most effective drugs after 3 months were clobazam and bromide, with a responder rate of 68% and 67%, respectively, where response was defined as seizure reduction of at least 50%. Defining long-term response as the proportion of responders after 12 months of treatment with a given drug in relation to the number of patients treated for at least 3 months, the most effective drugs after 12 months were again bromide and clobazam, with a long-term response of 50% and 43%, respectively. Seventy-four percent of the patients became seizure-free for at least 3 months, 47% for at least one year. Significance: The most effective drugs in patients with PCDH19 mutations were bromide and clobazam. Although epilepsy in PCDH19 mutations is often pharmacoresistant, three quarters of the patients became seizure-free for at least 3 months and half of them for at least one year. However, assessing the effectiveness of the drugs is difficult because a possible age-dependent spontaneous seizure remission must be considered. abstract_id: PUBMED:29276500 Initial Response to Antiepileptic Drugs in Patients with Newly Diagnosed Epilepsy As a Predictor of Long-term Outcome. Objective: To investigate the correlation between initial response to antiepileptic drugs (AEDs) and long-term outcomes after 3 years in patients with newly diagnosed epilepsy. Methods: This prospective study included 204 patients with newly diagnosed epilepsy, who were followed up for at least 36 months. The long-term seizure freedom at 36 months (36MSF) was evaluated in patients with seizure freedom 6 months (6MSF) or 12 months (12MSF) after initial treatment vs those with no seizure freedom after the initial 6 months (6MNSF) or 12 months (12MNSF). Univariate analysis and a multiple logistic regression model were used to analyze the association of potential confounding variables with the initial response to AEDs. Results: The number of patients with 36MSF was significantly higher for patients that had 6MSF (94/131, 71.8%) than those that had 6MNSF [16/73, 21.9%; χ2 = 46.862, p < 0.0001, odds ratio (OR) = 9.051]. The number of patients with 36MSF was significantly higher in patients that had 12MSF (94/118, 79.7%) than those that had 12MNSF (19/86, 22.1%; χ2 = 66.720, p < 0.0001, OR = 13.811). The numbers of patients that had 36MSF were not significantly different between patients that experienced 6MSF and 12MSF or between patients that had 6MNSF and 12MNSF. Abnormalities observed in magnetic resonance imaging or computed tomography and the number of seizures before treatment correlated with poor initial 6-month response to AEDs.
Significance: The initial 6-month response to AEDs is a valuable predictor of long-term response in patients with newly diagnosed epilepsy. The number of seizures before treatment and brain-imaging abnormalities are two prognostic predictors of initial 6-month seizure freedom. abstract_id: PUBMED:30340125 Long-term outcome of seizure-free patients and risk factors of relapse following antiepileptic drug withdrawal. Objective: This study aimed to investigate the long-term outcome in seizure-free patients and to explore the risk factors of seizure relapse following antiepileptic drug (AED) withdrawal. Methods: This study included 161 patients who had been seizure-free for at least 2 years. These patients were monitored over a follow-up period of at least 3 years or until seizure relapse after AED withdrawal. Patients were grouped into a seizure-free group and a relapse group. Risk factors of seizure relapse were analyzed. Results: During the follow-up period, 72 patients (44.7%) relapsed while 89 patients (55.3%) did not. The average time of relapse was 19.12 ± 27.17 months after the initiation of AED withdrawal, and the majority of cases occurred within the first 24 months of AED withdrawal (73.6%). Univariate analysis showed that the likelihood of relapse was higher in patients with uncontrolled seizures beyond the first 6 months of AED therapy initiation, patients with a history of perinatal injury, patients with multiple seizure types, and patients who had been treated with a combination of AEDs. Multiple logistic regression analysis revealed that uncontrolled seizures beyond the first 6 months of AED treatment and a history of perinatal injury were independent risk factors of seizure relapse. Conclusion: In the majority of cases, epilepsy relapse occurred within the first two years after the initiation of AED withdrawal. The independent risk factors of seizure relapse were uncontrolled seizures beyond the first 6 months of AED therapy and a history of perinatal injury. abstract_id: PUBMED:34979339 Efficacy and prognosis of long-term, high-dose steroid therapy for Lennox-Gastaut syndrome. Objective: Lennox-Gastaut syndrome (LGS) is a severe form of developmental and epileptic encephalopathy that is highly resistant to treatment with conventional anti-epileptic drugs and non-pharmacological therapies. In the present study, we aimed to investigate the efficacy of long-term, high-dose steroid therapy and its effect on prognosis in children with LGS. Methods: This prospective study included patients with LGS who received long-term, high-dose steroid therapy beginning in November 2016. Prednisolone (60 mg per day) was administered for 2 weeks, following which the dosage was reduced to 60 mg on alternate days for 12 weeks. The drug was then slowly tapered over the next 3 months. The primary outcome was a reduction in seizure frequency relative to baseline at 14 weeks. The secondary outcome was whether patients had become seizure-free at 1 year. Results: Among 44 patients, 30 (68.2%) experienced a reduction in seizure frequency of more than 50%, including 26 (59.1%) with complete seizure control who were classified as the responder group. The remaining 14 (31.8%) were classified as the non-responder group after 14 weeks of treatment. Twenty patients (45.5%, 20/44) remained seizure-free after 1 year of treatment. However, 10 patients (33.3%, 10/30) in the responder group relapsed within a year.
Improvements in electroencephalography (EEG) findings tended to be consistent with seizure outcomes. All patients experienced weight gain and cushingoid facies as side effects, but most adverse effects were mild and transient. Conclusion: Long-term, high-dose steroid therapy can be considered an effective treatment option for children with intractable LGS. Answer: The effectiveness of low antiepileptic drug (AED) doses in long-term seizure-free patients is not clearly supported by the provided abstracts. One study found that leaving seizure-free patients on a low AED dose did not reduce the risk of seizure recurrence compared to complete AED withdrawal. The study concluded that once the decision to withdraw AEDs has been made, withdrawal should be complete (PUBMED:14513159). Another study indicated that the best response to AEDs used in monotherapy was observed at low dosage, suggesting that routine exposure to maximally tolerated AED doses may not be necessary to identify those patients with drug-resistant seizures who will have a beneficial response to therapy (PUBMED:28235655). However, this study was focused on patients with refractory epilepsy, which may not be directly applicable to long-term seizure-free patients. In contrast, a study on long-term low-dose levetiracetam treatment found that titration starting from a low dose may be effective in selected patients and that maintenance of the effective dosage may prolong response in patients who developed only focal seizures (PUBMED:26189956). This suggests that low-dose AED treatment can be beneficial for some patients, but the generalizability of this finding to all long-term seizure-free patients is uncertain. Overall, the evidence from the abstracts does not provide a definitive answer to the question of the effectiveness of low AED doses in long-term seizure-free patients. The decision to maintain or withdraw AEDs, and at what dose, should be individualized based on the patient's history, seizure type, and other clinical factors.
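The survival probabilities quoted in PUBMED:14513159 (e.g., 0.89, 0.80, 0.71 and 0.69 at 6-24 months) are product-limit (Kaplan-Meier) estimates. A minimal Python sketch of the estimator, run on invented follow-up data rather than the trial's, shows how such a curve is built:

def kaplan_meier(times, events):
    """Product-limit estimate of remaining seizure-free over time.
    times: months to recurrence or censoring; events: 1 = recurrence, 0 = censored."""
    at_risk = len(times)
    surv, curve = 1.0, []
    # process recurrences before censorings at tied times
    for t, e in sorted(zip(times, events), key=lambda te: (te[0], -te[1])):
        if e:
            surv *= (at_risk - 1) / at_risk
            curve.append((t, surv))
        at_risk -= 1
    return curve

# Hypothetical follow-up for ten patients after AED withdrawal
times = [3, 5, 6, 8, 12, 12, 18, 24, 24, 24]
events = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0]
print(kaplan_meier(times, events))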
Instruction: Does positron emission tomography prevent nontherapeutic pulmonary resections for clinical stage IA lung cancer? Abstracts: abstract_id: PUBMED:18355490 Does positron emission tomography prevent nontherapeutic pulmonary resections for clinical stage IA lung cancer? Background: The American College of Surgeons Oncology Group (ACOSOG) Z0050 trial demonstrated that positron emission tomography (PET) prevents nontherapeutic thoracotomies in a substantial fraction of patients with known or suspected non-small cell lung cancer (NSCLC). However, the benefit of PET in clinical stage IA patients has been questioned due to the lower prevalence of metastases and poor ability to discriminate benign from malignant lung lesions. This study evaluates whether PET prevents nontherapeutic pulmonary resections in clinical stage IA patients by finding advanced disease or by declaring a nodule as benign. Methods: We reanalyzed all patients with clinical stage IA NSCLC from ACOSOG Z0050. The clinical, PET, and pathologic stages were compared for this prospective cohort. Results: One hundred twenty-two clinical stage IA patients were evaluated and 78.7% (96 of 122; 95% confidence interval [CI], 70.4 to 85.6) were eventually shown to have cancer. PET correctly showed 7.4% (9 of 122; 95% CI, 3.4 to 13.5) of patients to have advanced disease (stages IIIA to IV). However, due to a high false positive rate, the positive predictive value for advanced disease was only 33.3% (9 of 27; 95% CI, 16.5 to 54.0). The negative predictive value of PET to predict benign lesions was only 57% (16 of 28; 95% CI, 37.2 to 75.5). Thus, 43% (12 of 28; 95% CI, 24.5 to 62.8) of patients with a PET-negative primary lesion actually had cancer, and all of these had resectable disease (stages IA to IIB). Conclusions: In clinical stage IA lung cancer patients, PET prevents nontherapeutic pulmonary resections less than 10% of the time. If a strategy of no surgery and serial computed tomographic scans is chosen for PET-negative lesions, over 40% of patients with NSCLC will have surgery delayed. A prospective trial comparing PET versus resection for clinical stage IA lesions would clarify the value of PET for these patients. abstract_id: PUBMED:19154893 Point: Clinical stage IA non-small cell lung cancer determined by computed tomography and positron emission tomography is frequently not pathologic IA non-small cell lung cancer: the problem of understaging. Objective: There is increasing interest in limited resection for clinical stage IA non-small cell lung cancer. The purpose of this study was to evaluate the accuracy of the diagnosis of clinical stage IA non-small cell lung cancer when determined by both computed tomography and positron emission tomography scans and to determine factors associated with understaging. Methods: A retrospective review of a prospectively maintained database of patients with non-small cell lung cancer was performed. Patients with clinical stage IA cancer determined by preoperative computed tomography and positron emission tomography scan were reviewed. The influence of the following factors was analyzed with regard to accuracy of clinical staging: tumor size, location, histology, and positron emission tomography positivity. Results: Of the 266 patients identified, cancer was correctly staged in 65%. Final pathologic stages also included IB (15%), IIA (2.6%), IIB (4.1%), IIIA (4.9%), IIIB (7.5%), and IV (0.8%). Positive lymph nodes were found in 11.7% of patients.
Pathologic T classification changed in 28.2% of patients. Cancer in patients with clinical tumor size greater than 2 cm (n = 68) was significantly more likely to be understaged than in patients with tumors 2 cm or less (49% vs 29%, P = .003). Cancer in patients with a positron emission tomography-positive (positron emission tomography +VE) primary evaluation (n = 218) was also more likely to be understaged (39% vs 15%, P = .001). Of patients with positron emission tomography +VE tumors greater than 2 cm, cancer was clinically understaged in 55%, compared with 32% for positron emission tomography +VE tumors 2 cm or less, and only 17% for positron emission tomography negative (-VE) tumors less than 2 cm. Conclusion: Clinical stage IA lung cancer is frequently understaged. Size greater than 2 cm and positron emission tomography positivity are risk factors for understaging. Limited resection should be undertaken with caution in such patients. abstract_id: PUBMED:23465675 Use of high-resolution computed tomography and positron emission tomography/computed tomography in the management of stage IA adenocarcinoma. Preoperative determination of malignant behavior is critical in choosing suitable therapeutic strategies such as sublobar resection for patients with small lung cancers. The aim of the present review was to evaluate high-resolution computed tomography and fluorodeoxyglucose-positron emission tomography/computed tomography as tools for management of clinical stage IA adenocarcinoma. abstract_id: PUBMED:24021765 Prognostic role of positron emission tomography and high-resolution computed tomography in clinical stage IA lung adenocarcinoma. Background: This multicenter study aimed to validate the ability of (18)F-fluorodeoxyglucose positron emission tomography/computed tomography (PET/CT) plus high-resolution computed tomography (HR-CT) to predict the malignant behavior and prognosis of early adenocarcinomas of the lung. Methods: We calculated maximum standardized uptake values (maxSUV) from PET/CT images and ground-glass opacity (GGO) ratios on HR-CT images before complete surgical intervention in 610 patients with clinical stage IA lung adenocarcinoma. Pathologic invasiveness and survival were compared with clinical factors and radiographic findings including the maxSUV, which was revised to correct for interinstitutional discrepancies that confer limitations upon multicenter PET studies. Results: Analyses of receiver-operating characteristic curves revealed optimal maxSUV and GGO ratio cutoffs to predict recurrence of 2.9 and 25%, respectively. Both the maxSUV and GGO ratio reflected tumor invasiveness, nodal metastasis, recurrence, and patient survival, and were significant prognostic factors for recurrence-free and cancer-specific survival on multivariate Cox analysis (all, p < 0.001). The combination of maxSUV and GGO ratio is a better predictor of malignant tumor grade than either alone. Conclusions: The combination of maxSUV and GGO ratio as well as each alone are important predictors of prognosis in patients with clinical stage IA adenocarcinoma of the lung and should be considered before selecting therapeutic strategies.
An understanding of malignant aggressiveness is critical for the selection of suitable therapeutic strategies, such as sublobar resection. The objective of this study was to examine the malignant biological behavior of clinical stage IA adenocarcinoma and to select therapeutic strategies using high-resolution computed tomography, fluorodeoxyglucose-positron emission tomography/computed tomography, and a pathologic analysis in the setting of a multicenter study. Methods: We performed high-resolution computed tomography and fluorodeoxyglucose-positron emission tomography/computed tomography in 502 patients with clinical T1N0M0 adenocarcinoma before they underwent surgery with curative intent. We evaluated the relationships between clinicopathologic characteristics and maximum standardized uptake values on fluorodeoxyglucose-positron emission tomography/computed tomography, ground-glass opacity ratio, and tumor disappearance rate on high-resolution computed tomography and component of bronchioloalveolar carcinoma on surgical specimens, as well as between these and surgical findings. We used a phantom study to correct the serious limitation of any multi-institution study using positron emission tomography/computed tomography, namely, a discrepancy in maximum standardized uptake values among institutions. Results: Analyses of receiver operating characteristic curves identified an optimal cutoff value to predict high-grade malignancy of 2.5 for revised maximum standardized uptake values, 20% for ground-glass opacity ratio, 30% for tumor disappearance rate, and 30% for bronchioloalveolar carcinoma ratio. Maximum standardized uptake values and bronchioloalveolar carcinoma ratio, tumor disappearance rate, and ground-glass opacity ratio mirrored the pathologic aggressiveness of tumor malignancy, nodal metastasis, recurrence, and prognosis, including disease-free and overall survival. Conclusions: Maximum standardized uptake value is a significant preoperative predictor for surgical outcomes. High-resolution computed tomography and fluorodeoxyglucose-positron emission tomography/computed tomography findings are important to determine the appropriateness of sublobar resection for treating clinical stage IA adenocarcinoma of the lung. abstract_id: PUBMED:19000513 Positron emission tomography for the study of solitary pulmonary nodules. Solitary pulmonary nodules are a major clinical challenge for all doctors involved in their study. The aim is always to avoid missing malignant lesions but also to avoid performing unnecessary diagnostic tests. The most recent research suggests that in the near future chest computed tomography will become even more widely used for the early detection of lung cancer. If this occurs, there is likely to be a marked increase in the number of solitary pulmonary nodules detected, making it essential to develop techniques that enable us to manage this problem with an optimal risk-benefit ratio. We review the underlying principles of positron emission tomography and the advances that have been made in its use for the study of solitary pulmonary nodules. In addition, we discuss the possible causes of false positives and negatives in this technique and the strategies aimed at increasing diagnostic yield. abstract_id: PUBMED:20373056 Clinical stage of lung cancer revised by accidental pneumothorax and positron emission tomography. Clinical lung cancer staging is an important basis for choosing treatment modalities and predicting outcomes. However, it is not always accurate.
We report an interesting clinical staging process in a case of lung cancer. According to admission computed tomography findings, the patient was preliminarily diagnosed as stage IV with nonprimary-tumor lobe metastasis and chest wall invasion. Computed tomography findings after accidental pneumothorax resulting from percutaneous puncture biopsy, together with positron emission tomography findings, ruled out the chest wall invasion and nonprimary-tumor lobe metastasis. His diagnosis was revised to stage IIa, which was subsequently confirmed by histological staging after surgery and follow-up. Our experience indicates that computed tomography scanning after artificial pneumothorax is a useful tool for identifying whether or not the chest wall is invaded directly by an adjacent tumor, and positron emission tomography scanning needs to be considered for accurately evaluating the nature of concurrent pulmonary nodules in patients with lung cancer. abstract_id: PUBMED:24994703 Detection of non-aggressive stage IA lung cancer using chest computed tomography and positron emission tomography/computed tomography. Objectives: In contrast to lung cancer with ground-glass opacity, the radiological investigation of solid lung cancer has not been well examined. The aim of this study was to explore chest computed tomography (CT) and positron emission tomography (PET)/CT findings with regard to outcomes after lung cancer surgery in order to radiologically classify clinical stage IA lung cancers by tumour aggressiveness. Methods: Three hundred and fifteen clinical stage IA patients were analysed. Four groups were defined by tumour solidity on CT and by the standardized uptake value (SUV) index on PET-CT (tumour maximum SUV/mean right liver lobe SUV). We analysed the association between radiological findings and both pathological invasiveness and postoperative outcome. Results: Group A (n = 84) had an SUV index <1.0 and non-solid tumours, Group B (n = 24) had an SUV index <1.0 and solid tumours, Group C (n = 54) had an SUV index ≥1.0 and non-solid tumours, while Group D (n = 153) had an SUV index ≥1.0 and solid tumours. Invasive lung cancer was found in 2/84 (2.4%) patients in Group A, 1/24 (4.2%) in Group B, 13/54 (24.1%) in Group C and 58/153 (37.9%) in Group D (P < 0.01). The 5-year recurrence-free rate was 100% in Groups A and B, 90.3% in C and 65.7% in D (P < 0.01). The cancer-specific survival rate was 100% in A and B, 94.6% in C and 81.7% in D (P < 0.01). Conclusions: The present results suggest that preoperative PET/CT and thin-section CT findings provide important information for the selection of surgical procedures for clinical stage IA lung cancers. In clinical stage IA lung cancers displaying solid or non-solid density in thin-section findings, an SUV index <1.0 may be a better criterion for detecting non-aggressive lung cancer even in solid lung cancers. abstract_id: PUBMED:19359282 The maximum standardized uptake values on positron emission tomography to predict the Noguchi classification and invasiveness in clinical stage IA adenocarcinoma measuring 2 cm or less in size. This study investigated whether the standardized uptake value (SUV) of the tumor correlated with the Noguchi classification and tumor invasiveness in patients with clinical stage IA adenocarcinoma ≤2 cm in size.
Fifty-four patients who underwent a curative surgical resection for clinical stage IA adenocarcinoma ≤2 cm from April 2005 to December 2008 had integrated positron emission tomography (PET)-computed tomography (CT) with 18F-fluorodeoxyglucose (FDG) as part of the preoperative workup. The relationships between the maximum SUV (SUVmax) and Noguchi classification, pathological results of intratumoral lymphatic or vascular invasion of tumor cells, and pleural invasion were examined. In comparison to tumors with an SUVmax >1.0, tumors with an SUVmax ≤1.0 were more frequently classified as Noguchi type A or B (P<0.0001). Tumors with an SUVmax >1.0 had more intratumoral lymphatic or vascular invasion of tumor cells and pleural invasion (P=0.0005 and P=0.0002). These results suggest that SUVmax is an important predictor for the Noguchi classification and tumor invasiveness in patients with clinical stage IA adenocarcinoma ≤2 cm in size. abstract_id: PUBMED:16782729 [F-18]Fluorodeoxyglucose positron emission tomography can predict pathological tumor stage and proliferative activity determined by Ki-67 in clinical stage IA lung adenocarcinomas. Objective: To predict the malignant grade of lung cancer by fluorodeoxyglucose positron emission tomography (FDG-PET) scanning, we investigated the correlation between FDG uptake and pathological tumor stage, proliferative activities determined by Ki-67 and cyclin D1, and an alteration of p53, in clinical stage (c-stage) IA lung adenocarcinomas. Methods: FDG-PET was performed for 71 patients with c-stage IA lung adenocarcinomas. FDG uptake was measured by a contrast ratio (CR) between the tumor and contralateral lung. Ki-67, cyclin D1 and p53 staining scores were examined by immunohistochemistry. Results: The lesions with ground-glass opacity were found in 26 patients, and solid lesions in 45 by computed tomography. The pathological tumor stages (p-stage) were stage IA in 59 and more advanced stages in 12. The latter had a significantly higher CR value than the former (P < 0.001). Patients with CR ≥0.55 could be predicted to be at advanced tumor stages, with a sensitivity of 0.83 and a specificity of 0.82. The CR and staining scores of Ki-67 were significantly correlated with each other (P < 0.0001), and both values were significantly higher in advanced tumor stages than in p-stage IA, and were also significantly higher in tumors with intratumoral lymphatic, vascular and pleural involvements than in those without such features (P < 0.05-0.0001). Conclusions: In c-stage IA lung adenocarcinomas, the FDG uptake can predict p-stage and tumor proliferative activity determined by Ki-67. For c-stage IA lung adenocarcinomas showing CR ≥0.55, mediastinoscopy or neoadjuvant chemotherapy is indicated. Answer: Positron emission tomography (PET) has been shown to prevent nontherapeutic pulmonary resections in a subset of patients with clinical stage IA lung cancer, but its effectiveness is limited. According to a reanalysis of patients with clinical stage IA non-small cell lung cancer (NSCLC) from the ACOSOG Z0050 trial, PET correctly identified advanced disease in 7.4% of patients, preventing nontherapeutic resections in these cases. However, due to a high false positive rate, the positive predictive value for advanced disease was only 33.3%, and the negative predictive value of PET to predict benign lesions was only 57%.
This means that 43% of patients with a PET-negative primary lesion actually had cancer, and all of these had resectable disease (PUBMED:18355490). Furthermore, clinical stage IA lung cancer determined by computed tomography (CT) and PET is frequently not pathologic IA, indicating a problem of understaging. A study found that cancer was correctly staged in only 65% of patients, with positive lymph nodes found in 11.7% of patients, suggesting that limited resection should be undertaken with caution in such patients (PUBMED:19154893). In summary, while PET can prevent some nontherapeutic pulmonary resections by identifying advanced disease or declaring a nodule as benign, its limitations due to false positives and negatives mean that it prevents nontherapeutic resections less than 10% of the time for clinical stage IA lung cancer patients (PUBMED:18355490). Therefore, PET does provide some benefit in preventing nontherapeutic resections, but its effectiveness is not absolute, and a significant proportion of patients with NSCLC will have surgery delayed if a strategy of no surgery and serial CT scans is chosen for PET-negative lesions (PUBMED:18355490).
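The predictive values in the Z0050 reanalysis (PUBMED:18355490) are simple proportions with binomial confidence intervals: PPV for advanced disease was 9/27 and NPV for a benign lesion was 16/28. A short Python sketch using the Wilson score interval (the abstract most likely reports exact binomial CIs, so the bounds differ slightly) reproduces them:

import math

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% CI for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

print(9 / 27, wilson_ci(9, 27))    # PPV 33.3%; CI ~(0.19, 0.52) vs reported 16.5-54.0
print(16 / 28, wilson_ci(16, 28))  # NPV 57.1%; CI ~(0.39, 0.74) vs reported 37.2-75.5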
Instruction: Inter- and intra-observer variability in cervical measurement by ultrasound in the first and second trimesters of pregnancy: does it matter? Abstracts: abstract_id: PUBMED:24846957 Inter- and intra-observer variability in cervical measurement by ultrasound in the first and second trimesters of pregnancy: does it matter? Objectives: The aim of this study was to evaluate the inter- and intra-observer variabilities in the cervical length (CL) measurement by transvaginal ultrasound (TVU) in the first and second trimesters of pregnancy. Materials And Methods: A total of 55 singleton pregnant women were evaluated by ultrasound in the first or second trimester. Each patient was blindly evaluated by three of the four observers, who performed three measurements each. Results: In the first and second trimesters, patients were evaluated at 12.3 weeks (mean) and 21.3 weeks (mean), respectively. The intra-class correlation coefficients (ICC) between observers in the first and second trimesters were 0.76 and 0.72, respectively, and the ICCs within observers were 0.889 and 0.899, respectively. In both trimesters, the main source of variance was the patient and the second source of variance was the interaction, Patient×Observer (12.0% and 13.5%, respectively). The observer and measurement explained a small proportion of variance in both trimesters (1.4% and 0.4%, respectively in the 1st trimester, and 3.3% and 0%, respectively in the 2nd trimester). The residual variances were 10.6% and 11.2%, and the standard errors of measurement were 1.78 mm and 1.82 mm for the first and second trimesters, respectively. Conclusions: TVU is a reliable and reproducible method by which to predict preterm birth in both the first and second trimesters. abstract_id: PUBMED:32164477 Intra- and interobserver reproducibility of second trimester ultrasound cervical length measurement in a general population. Objectives: To evaluate the reproducibility of ultrasound cervical length (CL) measurement in the second trimester. Methods: A set of 565 cervical ultrasound images was collected at 19 + 0-24 + 6 weeks' gestation. Two senior maternal-fetal specialists measured CL in each image on three occasions 2 weeks apart. In the interval between the first and following two measures, the clinicians reviewed 20 images together to agree on the criteria for measurement. Measurements were analyzed for intra- and inter-observer disagreement. The robustness of patient classification when the CL measure was used with different cutoff thresholds was analyzed. Results: Average intra-observer deviation was 2.8 mm for clinician 1 and 3.7 mm for clinician 2. Inter-observer deviation between the two clinicians was 5.2 and 3.2 mm before and after reviewing measurement criteria together. When cutoffs were used to classify a cervix as "short", disagreement ranged from 22 to 70% depending on the operator and threshold used. Conclusion: Ultrasound CL measurements by experts showed moderate intra- and inter-observer reproducibility. The use of specific cutoffs to classify patients as high or low risk resulted in wide disagreements. The results stress the importance of training and quality assessment when considering universal screening application of CL measurement. abstract_id: PUBMED:32392356 Second trimester cervical length measurements with transvaginal ultrasound: A prospective observational agreement and reliability study.
Introduction: Universal screening for preterm delivery by adding transvaginal ultrasound measurement of cervical length to routine second trimester ultrasound has been proposed. The aim is to estimate inter- and intraobserver agreement and reliability of second trimester transvaginal ultrasound measurements of cervical length performed by specially trained midwife sonographers. Material And Methods: This is a prospective reliability and agreement study performed in seven Swedish ultrasound centers. In total, 18 midwife sonographers specially trained to perform ultrasound measurements of cervical length and 286 women in the second trimester were included. In each center, two midwife sonographers measured cervical length a few minutes apart in the same woman, the number of women examined per examiner pair varying between 24 and 30 (LIVE study). Sixteen midwife sonographers measured cervical length twice ≥2 months apart on 93 video clips (CLIPS study). The main outcome measures were mean difference, limits of agreement, intraclass correlation coefficient, intra-individual standard deviation, repeatability, Cohen's kappa and Fleiss kappa. Results: The limits of agreement and intraclass correlation coefficient of the best examiner pair in the LIVE study were -4.06 to 4.72 mm and 0.91, and those of the poorest were -11.11 to 11.39 mm and 0.31. In the CLIPS study, median (range) intra-individual standard deviation was 2.14 mm (1.40-3.46), repeatability 5.93 mm (3.88-9.58), intraclass correlation coefficient 0.84 (0.66-0.94). Median (range) interobserver agreement for cervical length ≤25 mm in the CLIPS study was 94.6% (84.9%-98.9%) and Cohen's kappa 0.56 (0.12-0.92), median (range) intraobserver agreement was 95.2% (87.1%-98.9%) and Cohen's kappa 0.68 (0.27-0.93). Conclusions: Agreement and reliability of cervical length measurements differed substantially between examiner pairs and examiners. If cervical length measurements are used to guide management there is potential for both over- and under-treatment. Uniform training and rigorous supervision and quality control are advised. abstract_id: PUBMED:28960294 Impact of maternal weight on the intra-observer and inter-observer reproducibility of fetal ultrasonography measurements in the third trimester. Objective: To evaluate the effect of maternal weight on the intra- and inter-observer reproducibility of third-trimester ultrasonography fetal measurements. Methods: The present prospective study, performed at a tertiary hospital, enrolled patients at between 35+0 weeks and 36+6 weeks of singleton pregnancies between January 1, 2015, and July 1, 2016. Fetal ultrasonography measurements were evaluated twice by a first observer and a third time by a second observer. Intra- and inter-observer consistency were analyzed using the Cronbach α reliability coefficient, and measurement reproducibility was compared with patients stratified by a body mass index (BMI, calculated as weight in kilograms divided by the square of height in meters) of below 25 or at least 25. Results: The study included 197 patients (133 with BMI <25 and 64 with BMI ≥25). Among patients with a BMI below 25, the reliability coefficients for bi-parietal diameter, head circumference, abdominal circumference, and femur length measurements were 0.97, 0.95, 0.98, and 0.96, respectively, for intra-observer consistency, and were 0.97, 0.93, 0.98, and 0.95, respectively, for inter-observer consistency. 
Among patients with a BMI of at least 25, these values were 0.97, 0.96, 0.98, and 0.97, respectively, for intra-observer consistency, and 0.97, 0.94, 0.98, and 0.96, respectively, for inter-observer consistency. Conclusion: High intra- and inter-observer reproducibility was observed for third-trimester fetal ultrasonography measurements, including for patients who were overweight. abstract_id: PUBMED:12230448 Intra- and interobserver reproducibility of ultrasound measurements of cervical length and width in the second and third trimesters of pregnancy. Objectives: To determine intra- and interobserver reproducibility of ultrasound measurements of cervical length and width in the second and third trimesters of pregnancy. Design: Twenty healthy women in the second or third trimester of pregnancy underwent transvaginal ultrasound examination of the cervix by two examiners. Three replicate measurements of cervical length and width were taken by each observer for each woman. Intraobserver repeatability was expressed as the difference between the highest and lowest measurement value obtained by one observer, and as the repeatability coefficient and intraclass correlation coefficient. Interobserver reproducibility was expressed as the difference between the mean of the three measurements of each observer, limits of agreement, and interclass correlation coefficient. The repeatability coefficient/limits of agreement define the range within which 95% of the differences between two measurements by the same observer/two observers are likely to fall. Results: Intra- and interobserver differences did not vary in any systematic way over the range of values measured. For cervical length measurements the repeatability coefficient was +/- 5.4 mm for Observer 1 and +/- 5.9 mm for Observer 2. The intraclass correlation coefficient for cervical length measurements was 0.93 for both observers. The corresponding coefficients for cervical width measurements were +/- 3.9 mm and +/- 7.3 mm, and 0.97 and 0.91. The interclass correlation coefficient for cervical length measurements was 0.76, the mean interobserver difference was 0.4 mm, and the limits of agreement were -10.0 mm to 10.8 mm. For cervical width measurements the interclass correlation coefficient was 0.74, the mean interobserver difference was -0.02 mm, and the limits of agreement were -12.4 mm to 12.4 mm. There was no systematic difference between the first, second and third cervical length measurements and no systematic bias between the two observers, and the results were not affected by which of the two examiners started the examination (three-way analysis of variance). The same was true of cervical width measurements. Conclusions: Our results do not support the contention that the first measurement of cervical length is the longest. There is substantial intra- and interobserver variability in the results of measurements of cervical length and width, even when experienced observers perform the measurements under standardized conditions. It is important to consider the possibility of 'measurement error' when ultrasound measurements of cervical length and width are used in clinical practice to monitor women at high risk of delivering preterm or to screen for preterm birth. abstract_id: PUBMED:30590249 Transrectal three-dimensional fetal volumetry and crown-rump length measurement during early gestation in mares: Intra- and inter-observer reliability and agreement.
The aim of this study was to investigate whether transrectal three-dimensional (3D) ultrasound is a reliable technique to measure equine fetal volume (FV) during early gestation in mares. In total 149 warmblood mares were examined once transrectally on days 45 ± 1 of pregnancy with a portable 3D ultrasound device (Voluson® i, GE Healthcare, Zipf, Austria). The following measurements were performed: Two-dimensional (2D) and 3D crown-rump length (CRL), FV using Virtual Organ Computer-aided AnaLysis (VOCAL™) software with rotational angles of 6° and 30°. To analyze intra- and inter-observer reliability (intraclass correlation coefficient (ICC)) and agreement (Bland-Altman's limits of agreement (LoA)) of FV measurements, images from 60 horse fetuses were selected and repeatedly analyzed by the same examiner (A) and by a second examiner (B). The time for each ultrasound examination (2D and 3D) and for the FV measurements was recorded. The 3D measured CRL was larger (P < 0.001) than the 2D CRL. Repeated measurements of 3D CRL showed a higher reliability (ICC = 0.91 (0.88-0.94), P < 0.001) and agreement (mean = 0.13%, 95% LoA: -7.45 to +7.19) compared to reliability (ICC = 0.50 (0.36-0.61), P < 0.001) and agreement (mean = -1.54%, 95% LoA: -23.29 to +20.21) of the CRL measurement in 2D mode. For intra-observer examinations, reliability was highest when using a rotational angle of 30° (ICC = 0.98 (0.97-0.99), P < 0.001). The inter-observer reliability of 3D measurements was good (ICC = 0.85 (0.67-0.92), P < 0.001). The extra time needed to perform the 3D scan ranged from 1 to 9 min and FV measurements lasted 03:30 ± 00:46 and 08:10 ± 01:05, for rotational angles of 30° and 6°, respectively. In conclusion, this study showed a high level of intra- and inter-observer reliability and agreement for FV measurements using VOCAL™. Furthermore, the 6° and 30° rotational angles can be used interchangeably, but test duration, reliability and agreement were better with the 30° rotational step method. The CRL measurements obtained with 3D mode probably reflect the true CRL, compared to the 2D measured CRL. abstract_id: PUBMED:28708574 4D ultrasound study of fetal movement early in the second trimester of pregnancy. Aim: To assess the frequency of fetal movement and reproducibility of fetal movement counting in normal singleton pregnancies early in the second trimester using four-dimensional (4D) ultrasound. Methods: Twenty-nine singleton pregnancies were studied for 15 min employing 4D ultrasound at 14-16 (19 cases) and 17-19 (10 cases) weeks of gestation. The frequencies of eight fetal movements (head anteflexion, head retroflexion, body rotation, hand to face movement, general movement, isolated arm movement, isolated leg movement and mouthing movement) were evaluated. Results: The most frequent fetal movements were isolated arm movements at 14-16 and 17-19 weeks' gestation. There was a significant difference only in the frequency of mouthing movement between 14-16 and 17-19 weeks' gestation (P<0.05). All fetal movements showed intra- and inter-class correlation coefficients greater than 0.87, with good intra- and inter-observer agreements. Conclusion: The difference in the frequency of mouthing movement at 14-16 and 17-19 weeks' gestation may be due to increasing fetal swallowing because of the increasing amniotic fluid early in the second trimester of pregnancy. To the best of our knowledge, this is the first study on the reproducibility assessment of fetal movement counting using 4D ultrasound.
However, the data and their interpretation in the present study should be taken with some degree of caution because of the small number of subjects studied. Further studies involving a larger sample size are needed to assess the reproducibility of fetal movement counting using 4D ultrasound. abstract_id: PUBMED:28031323 Inter-observer and intra-observer agreement between embryologists during selection of a single Day 5 embryo for transfer: a multicenter study. Study Question: What is the inter-observer and intra-observer agreement between embryologists when selecting a single Day 5 embryo for transfer? Summary Answer: The inter-observer and intra-observer agreement between embryologists when selecting a single Day 5 embryo for transfer was generally good, although not optimal, even among experienced embryologists. What Is Known Already: Previous research on the morphological assessment of early stage (two pronuclei to Day 3) embryos has shown varying levels of inter-observer and intra-observer agreement. However, single blastocyst transfer is now becoming increasingly popular and there are no published data that assess inter-observer and intra-observer agreement when selecting a single embryo for Day 5 transfer. Study Design, Size, Duration: This was a prospective study involving 10 embryologists working at five different IVF clinics within a single organization between July 2013 and November 2015. Participants/materials, Setting, Methods: The top 10 embryologists were selected based on their yearly Quality Assurance Program scores for blastocyst grading and were asked to morphologically grade all Day 5 embryos and choose a single embryo for transfer in a survey of 100 cases using 2D images. A total of 1000 decisions were therefore assessed. For each case, Day 5 images were shown, followed by a Day 3 and Day 5 image of the same embryo. Subgroup analyses were also performed based on the following characteristics of embryologists: the level of clinical embryology experience in the laboratory; amount of research experience; number of days per week spent grading embryos. The agreement between these embryologists and the one that scored the embryos on the actual day of transfer was also evaluated. Inter-observer and intra-observer variability was assessed using the kappa coefficient to evaluate the extent of agreement. Main Results And The Role Of Chance: This study showed that all 10 embryologists agreed on the embryo chosen for transfer in 50 out of 100 cases. In 93 out of 100 cases, at least 6 out of the 10 embryologists agreed. The inter-observer and intra-observer agreement among embryologists when selecting a single Day 5 embryo for transfer was generally good as assessed by the kappa scores (kappa = 0.734, 95% CI: 0.665-0.791 and 0.759, 95% CI: 0.622-0.833, respectively). The subgroup analyses did not substantially alter the inter-observer and intra-observer agreement among embryologists. The agreement when Day 3 images were included alongside Day 5 images of the same embryos resulted in a change of mind at least three times by each embryologist (on average for <10% of cases) and resulted in a small decrease in inter-observer and intra-observer agreement between embryologists (kappa = 0.676, 95% CI: 0.617-0.724 and 0.752, 95% CI: 0.656-0.808, respectively). The assessment of the inter-observer agreement with regard to morphological grading of Day 5 embryos showed only a fair-to-moderate agreement, which was observed across all subgroup analyses.
The highest overall kappa coefficient was seen for the grading of the developmental stage of an embryo (0.513; 95% CI: 0.492-0.538). The findings were similar when the individual embryologists were compared with the embryologist who made the morphological assessments of the available embryos on the actual day of transfer. Limitations, Reasons For Caution: All embryologists had already completed their training and were working under one organization with similar policies between the five clinics. Therefore, the inter-observer agreement might not be as high between embryologists working in clinics with different policies or with different levels of training. Wider Implications Of The Findings: The generally good, although not optimal uniformity between participating embryologists when selecting a Day 5 embryo for transfer, as well as the surprisingly low agreement when morphologically grading Day 5 embryos could be improved, potentially resulting in increased pregnancy rates. Future studies need to be directed toward technologies that can help achieve this. Study Funding/competing Interests: None declared. Trial Registration Number: Not applicable. abstract_id: PUBMED:30955055 Perineal ultrasound for the measurement of urethral mobility: a study of inter- and intra-observer reliability. Introduction And Hypothesis: Ultrasound measurement of urethral mobility is an attractive approach to directly visualize bladder neck descent (BND) during stress. BND assessed by transperineal ultrasound appears to be associated with stress urinary incontinence (SUI) severity. This study evaluated the inter- and intra-observer reliability of ultrasound BND measurement and its correlation with clinical examination. Methods: We included 50 women from the multicenter randomized 3PN study ("Prenatal Perineal Prevention"). BND was measured by two operators either during pregnancy (at 20 weeks of gestation) or 2 months after delivery. Two measurements were taken by each operator. Intraclass correlation coefficients were used for analysis. Urethral mobility was clinically assessed by measuring the point Aa of the POP-Q classification during maximum strain (Valsalva maneuver) with an empty bladder. Results: Ultrasound analysis showed high intra-observer reliability in the overall population: intraclass correlation coefficients (ICC) = 0.75 (0.59-0.85) and 0.73 (0.55-0.84) for each operator. Intra-observer agreements were considered moderate to high in the post- and antepartum groups. Inter-observer agreements were moderate in the antepartum period [ICC = 0.58 (0.26-0.78) for the first measurement and 0.68 (0.42-0.84) for the second] but low in the postpartum period [ICC = 0.15 (0.10-0.41) and 0.21 (0.10-0.58)]. Correlations between ultrasound and clinical measurements were considered low to moderate (Spearman coefficient, rho = 0.34 and 0.50 for post- and antepartum periods, respectively). Conclusions: Inter-observer reliability of ultrasound urethral mobility measurements by the transperineal route is moderate antepartum and low postpartum. The correlation with point Aa is low to moderate. abstract_id: PUBMED:26496894 Agreement between transperineal ultrasound measurements and digital examinations of cervical dilatation during labor. Background: To compare 2D transperineal ultrasound assessment of cervical dilatation with vaginal examination and to investigate intra-observer variability of the ultrasound method.
Methods: A prospective observational study was performed at Skane University Hospital, Lund, Sweden between October 2013 and June 2014. Women with one fetus in cephalic presentation at term had the cervical dilatation assessed with ultrasound and digital vaginal examinations during labor. Inter-method agreement between ultrasound and digital examinations and intra-observer repeatability of ultrasound examinations were tested. Results: Cervical dilatation was successfully assessed with ultrasound in 61/86 (71 %) women. The mean difference between cervical dilatation and ultrasound measurement was 0.9 cm (95 % CI 0.47-1.34). Interclass correlation coefficient (ICC) was 0.83 (95 % CI 0.72-0.90). Intra-observer repeatability was analysed in 26 women. The intra-observer ICC was 0.99 (95 % CI 0.97-0.99). The repeatability coefficient was ± 0.68 (95 % CI 0.45-0.91). Conclusion: The mean ultrasound measurement of cervical dilatation was approximately 1 cm less than clinical assessment. The intra-observer repeatability of ultrasound measurements was high. Answer: Inter- and intra-observer variability in cervical measurement by ultrasound in the first and second trimesters of pregnancy does matter, as it can impact the reliability and reproducibility of the measurements, which are critical for predicting preterm birth and making clinical decisions. Studies have shown that transvaginal ultrasound (TVU) is a reliable and reproducible method for predicting preterm birth in both the first and second trimesters, with intra-class correlation coefficients (ICCs) indicating good agreement both between and within observers (PUBMED:24846957). However, the level of agreement and reliability can vary substantially between examiner pairs and examiners, suggesting that uniform training and rigorous supervision and quality control are necessary to minimize the potential for over- and under-treatment based on cervical length measurements (PUBMED:32392356). In the second trimester, ultrasound cervical length (CL) measurements by experts showed moderate intra- and inter-observer reproducibility, but the use of specific cutoffs to classify patients as high or low risk resulted in wide disagreements, emphasizing the importance of training and quality assessments (PUBMED:32164477). Similarly, another study found substantial intra- and interobserver variability in measurements of cervical length and width, even when experienced observers perform the measurements under standardized conditions, which is important to consider in clinical practice (PUBMED:12230448). In contrast, a study on the impact of maternal weight on the reproducibility of fetal ultrasonography measurements in the third trimester found high intra- and inter-observer reproducibility for fetal measurements, including for overweight patients (PUBMED:28960294). This suggests that certain factors, such as maternal weight, may not significantly affect measurement reproducibility in some contexts. Overall, these findings indicate that while ultrasound measurements for cervical length can be reliable, there is a need for standardization and quality control to ensure consistent and accurate use of these measurements in clinical practice. Variability in measurements can have significant implications for patient care, and efforts should be made to minimize this variability through training and adherence to standardized measurement protocols. The short computational sketches below illustrate how the agreement statistics cited throughout these abstracts (ICC, limits of agreement, repeatability, and kappa) are derived.
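The ICC figures quoted in the abstracts above come from variance-components models. As a rough illustration only (not the exact two-way models used in the studies), the sketch below computes a one-way random-effects ICC(1,1) and the standard error of measurement (SEM) from replicate measurements; all data values are invented.

import numpy as np

def icc_oneway(ratings):
    """ICC(1,1): one-way random effects, single measurement.
    ratings is an (n_subjects, k_measurements) array."""
    r = np.asarray(ratings, dtype=float)
    n, k = r.shape
    grand = r.mean()
    ms_between = k * ((r.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((r - r.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
    return icc, ms_within

# Invented cervical-length replicates (mm): 5 women, 3 measurements each.
ratings = [[34.1, 35.0, 34.6],
           [28.5, 27.2, 28.0],
           [41.0, 42.3, 41.5],
           [36.2, 34.9, 35.8],
           [30.8, 31.5, 30.2]]
icc, ms_within = icc_oneway(ratings)
print(f"ICC(1,1) = {icc:.3f}")
print(f"SEM = {np.sqrt(ms_within):.2f} mm")  # standard error of measurement

The studies above fit richer two-way models that also partition out the Patient×Observer interaction; the one-way form is shown because it is the simplest to write down.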
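The limits of agreement and repeatability coefficients reported in the LIVE/CLIPS and cervical length/width studies can be computed directly. A minimal numpy sketch, using invented paired measurements:

import numpy as np

# Invented paired cervical-length measurements (mm) from two observers.
obs1 = np.array([34.1, 28.5, 41.0, 36.2, 30.8, 38.4])
obs2 = np.array([35.0, 27.2, 42.3, 34.9, 31.5, 37.1])

diff = obs1 - obs2
bias = diff.mean()                 # mean inter-observer difference
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # Bland-Altman 95% limits of agreement
print(f"bias = {bias:.2f} mm, LoA = {loa[0]:.2f} to {loa[1]:.2f} mm")

# Within-subject SD from duplicate measurements by one observer gives the
# repeatability coefficient: 2.77 * Sw covers 95% of test-retest differences.
replicates = np.array([[34.1, 35.0], [28.5, 27.2], [41.0, 42.3]])
sw = np.sqrt(((replicates[:, 0] - replicates[:, 1]) ** 2 / 2).mean())
print(f"repeatability coefficient = {2.77 * sw:.2f} mm")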
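The kappa statistics quoted above (for example, for classifying a cervix as "short" at the 25 mm cutoff in the CLIPS study) measure chance-corrected agreement. A minimal sketch with invented values:

import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa for two binary ratings (chance-corrected agreement)."""
    a, b = np.asarray(a), np.asarray(b)
    po = (a == b).mean()                       # observed agreement
    pe = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())  # chance agreement
    return (po - pe) / (1 - pe)

# Two observers classify the same cervices as short (True) if CL <= 25 mm.
cl_obs1 = np.array([24.0, 31.5, 22.8, 27.0, 19.5, 33.2])
cl_obs2 = np.array([26.1, 30.9, 23.5, 24.4, 20.0, 34.0])
print(f"kappa = {cohens_kappa(cl_obs1 <= 25, cl_obs2 <= 25):.2f}")

Note how a measurement difference of a couple of millimetres near the cutoff flips the category; this is why the abstracts report wide classification disagreement despite moderate ICCs.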
Instruction: Is local resection adequate for T1 stage ampullary cancer? Abstracts: abstract_id: PUBMED:33253806 A "dual-guide" bioinspired drug delivery strategy of a macrophage-based carrier against postoperative triple-negative breast cancer recurrence. Recurrence after tumor resection is mainly caused by post-operative inflammation and residual cancer cells, which is a serious obstacle to breast cancer treatment. Traditional nanoparticles rely primarily on the enhanced permeability and retention (EPR) effect in well-vascularized tumors. In this study, a macrophage-based carrier is designed to enhance the efficiency of targeting to recurrent tumors through a "dual-guide" strategy. After tumor resection, a burst of inflammatory factors occurs in the resection wound, which can recruit monocytes/macrophages rapidly. Combined with the tropism of monocyte chemoattractant protein, a large number of macrophage-mediated carriers will be recruited to surgical recurrence sites. Octaarginine (RRRRRRRR, R8)-modified liposomes in macrophages contain two agents with different pharmacological mechanisms, paclitaxel (PTX) and resveratrol (Res), which have enhanced therapeutic effects. In vitro study demonstrated that macrophage-mediated carriers approach 4T1 cells through an inflammatory gradient and reach recurrence tumors through a "dual-guide" strategy. Then, membrane fusion and inflammation-triggered release deliver the drug into the recurrent tumor cells. In vivo experiments show that macrophage-based carriers exhibit effective tumor-targeting ability, especially in post-operation situations. More importantly, macrophage-mediated liposomes encapsulated with PTX and Res inhibit tumor recurrence in both ectopic and orthotopic 4T1 post-operative recurrence models. abstract_id: PUBMED:12576817 Cyclooxygenase-2 is highly expressed in carcinoma in situ and T1 transitional cell carcinoma of the bladder. Purpose: We describe cyclooxygenase-2 (COX-2) expression patterns in patients with carcinoma in situ and/or stage T1 transitional cell carcinoma. We determined whether expression is associated with clinical outcome in these patients. Materials And Methods: Immunostaining for COX-2 was performed on paraffin embedded bladder biopsy specimens from 2 independent groups of patients without muscle invasive carcinoma, including 39 with carcinoma in situ and 34 with stage T1 tumors. Immunoreactivity was scored as the percent of carcinoma in situ cells with cytoplasmic staining for COX-2 in the carcinoma in situ group and as the percent of stage T1 cells expressing COX-2 in the stage T1 transitional cell carcinoma group. We evaluated other molecular alterations, including E-cadherin, p21 and p53, because evidence suggests a biological association of COX-2 with alterations in these molecular markers. Results: In the carcinoma in situ group 5 patients (13%) had no immunoreactivity, while 2 (5%), 5 (13%) and 27 (69%) had 10%, 20% and 30% or greater carcinoma in situ cells positive for COX-2, respectively. In the transitional cell carcinoma group 1 (3%), 4 (12%) and 29 (85%) patients had 10%, 20% and 30% or greater positive cells, respectively. COX-2 expression was not associated with any clinical or pathological parameters, or with molecular markers regardless of the cutoff used. It was also not associated with clinical outcomes in the stage T1 transitional cell carcinoma group.
In the carcinoma in situ group COX-2 expression was significantly associated with disease recurrence using cutoffs of 0% and greater than 10% positive cells, and with disease progression using a greater than 20% cutoff. However, it was not associated with bladder cancer related survival. Conclusions: COX-2 is expressed in a high percent of patients with carcinoma in situ and stage T1 transitional cell carcinoma, supporting the rationale for chemoprevention studies with selective COX-2 inhibitors. We could not substantiate a role for COX-2 immunohistochemistry for the staging and prognosis of carcinoma in situ and/or stage T1 transitional cell carcinoma. abstract_id: PUBMED:22592183 Overexpressions of RACK1 and CD147 associated with poor prognosis in stage T1 pulmonary adenocarcinoma. Background: RACK1 has been shown to be able to interact with some key cellular proteins involved in tumor development and progression. Our study showed that the expressions of RACK1 and CD147 are correlated with each other. The purpose of this study is to clarify the relationship between expression of RACK1 and CD147 in 180 patients with operable stage T1 human pulmonary adenocarcinoma and their clinicopathological features and prognostic significance. Methods: DNA transfection and RNA interference of RACK1 were conducted to produce pulmonary adenocarcinoma cell lines with differential RACK1 expression. Western blot and RT-PCR were used to quantify RACK1 and CD147 expression at the protein and mRNA levels in pulmonary adenocarcinoma cell lines. Immunohistochemistry, double-labeling immunofluorescence, confocal laser scanning microscopy, and Western blot were used to correlate the clinicopathological significance of RACK1 and CD147 expression in cases of stage T1 pulmonary adenocarcinoma. Results: We detected high levels of RACK1 and CD147 mRNA as well as protein expression in pulmonary adenocarcinoma in vitro. In pulmonary adenocarcinoma, the expression of RACK1 and CD147 were correlated both in vitro and in vivo. Our clinicopathological analysis demonstrated that RACK1 or CD147 expression correlated with a higher incidence of lymph node metastasis and lower differentiation than tumors that were negative for expression of either RACK1 or CD147. The expression of RACK1 and CD147 was not associated with the patient age or gender. Multivariate analysis demonstrated that the co-overexpression of RACK1 and CD147 was an independent prognostic factor for stage T1 pulmonary adenocarcinoma (P = 0.012). Conclusions: Tumor invasiveness is associated with expression of RACK1 and CD147 in pulmonary adenocarcinoma. The co-expression of RACK1 and CD147 could be an important prognostic biomarker for stage T1 pulmonary adenocarcinoma. abstract_id: PUBMED:18587253 Does the expression of fascin-1 and tumor subclassification help to assess the risk of recurrence and progression in t1 urothelial urinary bladder carcinoma? Introduction: To evaluate the prognostic value of T1 subclassification and fascin-1 expression in T1 human urothelial cell carcinoma of the bladder. Materials And Methods: In a prospective study with 105 consecutive patients, T1 tumors were subclassified into 2 groups according to the depth of tumor invasion. The tunica muscularis mucosae was used as a landmark. The expression of fascin-1 was examined by using an anti-fascin-1 mouse monoclonal antibody and was evaluated semiquantitatively for both intensity and distribution. The patients were followed up for 27.3 +/- 13.7 months.
Results: The T1 tumor subclassification was feasible in 99 patients (94%). T1a tumor was detected in 77 patients (73%), T1b tumor in 22 patients (21%). An invasive tumor was found in 5 patients (4.8%) during the restaging transurethral resection of the bladder. The risk of understaging in patients with T1b tumor was 18%. There was no significant difference in time to recurrence between the T1a and T1b groups. The progression-free survival rates were significantly different between both groups (p = 0.0034). No correlation was found between fascin-1 positivity and the depth of tumor invasion. Fascin-1 positivity did not correlate with recurrence or the progression-free intervals. In the multivariate analysis, only the extent of lamina propria invasion was an independent predictor of the tumor progression. The fascin positivity was not an independent prognostic factor relating to the risk of recurrence or progression. Conclusion: The finding of T1b tumor was connected with a significantly higher risk of progression and understaging. The fascin-1 expression did not correlate with the depth of tumor invasion or with the tumor recurrence or progression. abstract_id: PUBMED:17403305 Effect of tumor burden on differentiation of T lymphocytes in the peripheral blood of patients with colorectal cancer Objective: To analyze the effect of tumor burden on the differentiation of T1 and T2 cells and its implication in patients with colorectal cancer. Methods: Peripheral venous blood samples were obtained from 20 patients with primary colorectal cancer before and 7 days after the operation, radical operation in 17 patients and palliative resection in 3 patients. Twenty sex and age-matched patients with benign diseases treated in the same period were used as controls. T1 and T2 cells in the peripheral blood were evaluated by detecting the intracellular interferon-gamma and interleukin-4 production with 4-color flow cytometry. Lymphocyte subsets in the peripheral blood were also measured by flow cytometry. Results: At the time of diagnosis, the percentage of T1 and T2 cells in the peripheral lymphocytes of the case group was significantly lower than that of the control group (P = 0.006, and P = 0.017). There was no significant difference in the T, CD4(+) T, CD8(+) T, B, and NK cells between the two groups. After getting rid of the tumor burden, the percentage of T1 cells increased, however, not significantly (P > 0.05). And the percentage of T2 cells increased significantly (P = 0.020). The percentages of T1 cells in the patients with tumors > or = 5 cm and in the patients with poorly differentiated tumors tended to be lower than those of the patients with tumors < 5 cm and of the patients with well or moderately differentiated tumors (P = 0.064 and P = 0.072). The percentage of T1 cells in the patients with lymph node metastasis and stage III-IV tumors was significantly lower than that of the patients without lymph node metastasis and those with stage I-II tumors (P = 0.033 and P = 0.033). No significant differences were found between the population of T1 cells and such factors as tumor size, serosa invasion, and distant metastasis. Conclusion: Tumor load significantly suppresses the differentiation of T1 and T2 cells in patients with colorectal cancer, and may be an important factor in the development of immunosuppression. After getting rid of the tumor burden, the percentages of T1 and T2 increase in a short time, and the immunologic function is improved.
Determination of T1 may be helpful to indicate the prognosis of colorectal cancer. abstract_id: PUBMED:8578256 T-antigens in primary non-invasive and superficially invasive human urinary bladder tumors: the correlation to tumor recurrence and tumor progression. A mini-review. As cellular carbohydrate structures are involved in multiple cellular functions, alterations in these structures have been studied in an effort to find markers and predictors of the clinical course of disease in human cancers. Special interest has been given to recurrence and progression of cancer disease. At present no such marker or predictor exists for the prediction of recurrence and progression of initially non-invasive (Ta) or superficially invasive (T1) human urinary bladder tumors. Over the years the T- (Thomsen-Friedenreich) antigen (the disaccharide Gal beta(1-3)GalNAc alpha(1-O)-R) has been investigated for its possible role as such a marker. An overview is given. It is concluded that there is a general, but not an individual correlation between T-antigen expression and recurrence or progression of initially non- or superficially invasive human bladder tumors. abstract_id: PUBMED:16892011 Survivin nuclear labeling index: a superior biomarker in superficial urothelial carcinoma of human urinary bladder. The caspase family proteases are key proapoptotic proteins while the inhibitor of apoptosis proteins (IAP) prevent apoptosis by antagonizing the caspases or other key proapoptotic proteins. Limited studies of IAPs suggested their deregulation contributed to urothelial neoplasia. However, the expression status and biologic or prognostic significance of the caspase and IAP family proteins in urothelial neoplasms is not clear. In the present study, we first systematically evaluated the expression profile of the major apoptosis regulators, including caspases (CASP3, 6, 7, 8, 9, 10, and 14), IAPs (survivin/BIRC5, CIAP1, CIAP2, XIAP, and LIVIN), APAF1, SMAC, and BCL2, as well as proliferation markers Ki67 and PHH3, in Ta/T1 human urinary bladder urothelial carcinomas and normal urothelium samples by immunohistochemistry. The analysis showed that survivin/BIRC5 nuclear labeling index (BIRC5-N), but not cytoplasmic staining, was the only apoptotic marker which correlated significantly with tumor grade, stage, and patient outcome. We further analyzed the prognostic value of BIRC5-N in 101 Ta/T1 urinary bladder urothelial carcinomas by univariate analysis, which showed that BIRC5-N as well as the more classical prognosticators (stage, grade, and Ki67 index) were of prognostic significance. However, multivariate analysis by Cox proportional hazard regression demonstrated BIRC5-N was a stronger prognosticator than tumor grade, stage, and Ki67 labeling index. BIRC5-N index of 8% or more predicted unfavorable disease-specific survival (relative risk (RR)=6.6, 95% confidence interval=1.6-26.7, P=0.0080) as well as progression-free survival (RR=4.4, 95% confidence interval=1.3-14.6, P=0.0151). We conclude that BIRC5-N is a superior biologic and prognostic marker for Ta/T1 urothelial carcinomas of urinary bladder. abstract_id: PUBMED:36914018 Local delivery of artesunate dimer liposomes incorporated injectable hydrogel for H2O2 and pH-independent chemodynamic therapy. Chemodynamic therapy (CDT) has emerged as a powerful tumor treatment option by inducing the imbalance of redox homeostasis in cancer cells. 
Nevertheless, the therapeutic outcomes were greatly limited because of insufficient endogenous H2O2 and upregulated cellular antioxidant defense in the tumor microenvironment (TME). Herein, a liposome-incorporated in-situ alginate hydrogel locoregional treatment strategy was developed, which involves using hemin-loaded artesunate dimer liposomes (HAD-LP) as a redox-triggered self-amplified C-center free radical nanogenerator to enhance CDT. First, HAD-LP based on artesunate dimer glycerophosphocholine (ART-GPC) was prepared by a thin film method. Their spherical structure was confirmed by dynamic light scattering (DLS) and transmission electron microscopy (TEM). The generation of C-center free radicals from HAD-LP was carefully evaluated using the methylene blue (MB) degradation method. The results suggested that the hemin was reduced to heme under the action of glutathione (GSH), which could catalyze the breakage of the endoperoxide of ART-GPC derived dihydroartemisinin (DHA) to generate toxic C-centered free radicals in a H2O2 and pH-independent manner. Moreover, the change of intracellular GSH and free radical level was monitored through ultraviolet spectroscopy and confocal laser scanning microscopy (CLSM). It was revealed that the hemin reduction induced GSH depletion and elevated free radical level, disrupting cellular redox homeostasis. After co-incubation with MDA-MB-231 or 4T1 cells, HAD-LP was found to be highly cytotoxic. In order to prolong retention and improve antitumor efficacy, HAD-LP was mixed with alginate and injected intratumorally into 4T1 tumor-bearing mice. The injected HAD-LP and alginate mixture formed an in-situ hydrogel and achieved the best antitumor effect, with a growth inhibition rate of 72.6%. Together, the hemin-loaded artesunate dimer liposome-incorporated alginate hydrogel possessed effective antitumor activity through redox-triggered C-center free radical generation induced apoptosis in a H2O2 and pH-independent manner, which might be a promising candidate in the application of chemodynamic anti-tumor therapy. abstract_id: PUBMED:36477656 Epithelial TGFβ engages growth-factor signalling to circumvent apoptosis and drive intestinal tumourigenesis with aggressive features. The pro-tumourigenic role of epithelial TGFβ signalling in colorectal cancer (CRC) is controversial. Here, we identify a cohort of born to be bad early-stage (T1) colorectal tumours, with aggressive features and a propensity to disseminate early, that are characterised by high epithelial cell-intrinsic TGFβ signalling. In the presence of concurrent Apc and Kras mutations, activation of epithelial TGFβ signalling rampantly accelerates tumourigenesis and shares transcriptional signatures with those of the born to be bad T1 human tumours and predicts recurrence in stage II CRC. Mechanistically, epithelial TGFβ signalling induces a growth-promoting EGFR-signalling module that synergises with mutant APC and KRAS to drive MAPK signalling that re-sensitises tumour cells to MEK and/or EGFR inhibitors. Together, we identify epithelial TGFβ signalling both as a determinant of early dissemination and a potential therapeutic vulnerability of CRCs with born to be bad traits. abstract_id: PUBMED:32492507 Folate-receptor mediated pH/reduction-responsive biomimetic nanoparticles for dually activated multi-stage anticancer drug delivery.
Overcoming the cell membrane barriers and achieving efficient payload release in the cytoplasm have been major challenges for anticancer drug delivery and therapeutic efficacy with nanosystems. In this study, bovine serum albumin (BSA) was modified with folic acid and histamine, which was then used as the nanocarrier for the antitumor agent doxorubicin (DOX). The DOX-loaded nanoparticles (DOX/FBH-NPs) were prepared via a crosslinking method, and the release of DOX from these nanoparticles (NPs) exhibited pH/reduction-responsive behaviors in vitro. These NPs interacted with the folate receptor overexpressed on the cell membrane of 4T1 cells and achieved enhanced endocytosis. Afterwards, these NPs exhibited pH-responsiveness within endo-lysosomes and escaped from endosomes due to the "proton sponge" effect, and then complete release of DOX was triggered by the high concentration of glutathione (GSH) in the cytoplasm. Thus, DOX/FBH-NPs exhibited excellent cytotoxicity against 4T1 cells in vitro. Benefiting from the enhanced permeability and retention (EPR) effect and folate receptor-mediated endocytosis, these NPs achieved satisfactory tumor-targeting effects in vivo and efficient delivery of DOX to tumor tissues. As a result, these NPs exhibited enhanced antitumor effects and reduced side effects in vivo. In conclusion, these BSA-based NPs modified with both folic acid and histamine showed enhanced tumor-targeting effects in vivo with good biocompatibility and intracellular pH/reduction-responsive behaviors, providing a promising strategy for the efficient delivery of antitumor agents. Answer: The provided abstracts do not directly address the adequacy of local resection for T1 stage ampullary cancer. The abstracts focus on various aspects of cancer treatment and prognosis, including drug delivery strategies, expression of certain proteins in cancer cells, the impact of tumor burden on immune cell differentiation, and the development of biomimetic nanoparticles for drug delivery. However, none of the abstracts discuss surgical treatment options or the adequacy of local resection for T1 stage ampullary cancer. Therefore, based on the information provided in these abstracts, it is not possible to determine whether local resection is adequate for T1 stage ampullary cancer. Additional information from clinical guidelines or studies specifically addressing the treatment of T1 stage ampullary cancer would be required to answer this question. (The sketch below illustrates the Cox proportional hazards machinery that several of the prognostic abstracts above rely on.)
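Several of the abstracts above (for example, the fascin-1 and survivin studies) express prognosis as hazard or relative-risk estimates from Cox proportional hazards models. As a generic illustration of that machinery, here is a lifelines sketch; the data and column names are invented placeholders, not taken from the studies.

import pandas as pd
from lifelines import CoxPHFitter

# Invented toy dataset; column names are placeholders, not from the studies above.
df = pd.DataFrame({
    "months": [12, 30, 45, 7, 60, 24, 18, 52],  # follow-up time
    "event":  [1, 0, 0, 1, 0, 1, 1, 0],         # 1 = progression/death observed
    "marker_high": [1, 0, 1, 1, 0, 1, 0, 0],    # e.g. a biomarker index above a cutoff
    "grade": [3, 1, 2, 3, 1, 2, 3, 1],
})

# A small penalizer keeps the fit stable on a tiny toy dataset.
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="months", event_col="event")
cph.print_summary()  # the exp(coef) column is the hazard ratio with its 95% CI

In the published analyses, a hazard ratio such as the survivin study's RR of 6.6 is exactly this exp(coef) term for the marker covariate, adjusted for the other columns in the model.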
Instruction: Does the multidimensional grading system (BODE) correspond to differences in health status of patients with COPD? Abstracts: abstract_id: PUBMED:18046907 Does the multidimensional grading system (BODE) correspond to differences in health status of patients with COPD? Study Objectives: To verify whether and to what extent the body mass index, airflow obstruction, dyspnea, and exercise capacity (BODE) index correlates with a disease-specific index of health status in patients with COPD. Design: Cross-sectional study. Setting: University-affiliated hospital. Patients: One hundred patients with stable COPD recruited from the outpatient clinic of a single institution. Measurements: The BODE index was calculated for each patient using variables obtained within 2 weeks of enrollment. At enrollment, all patients completed the St George's Respiratory Questionnaire (SGRQ). The Kruskal-Wallis test was used to compare health status scores with clinical and functional categories of COPD. The Spearman correlation coefficient (r) was calculated to assess the association between health status scores and clinical or functional variables. Results: Categorizing the BODE scores into 4 quartiles, we found that higher BODE quartiles were associated with higher (worse) SGRQ scores. The differences among the BODE quartiles in health status indexes were significant for total SGRQ as well as all 3 of the SGRQ subscales. In all sections of the SGRQ, scores were moderately to strongly associated with the BODE quartiles (r = 0.27-0.46). In contrast, the association between the SGRQ total, impacts, activity and Global Initiative for Chronic Obstructive Lung Disease (GOLD) stages were weaker (r = 0.27-0.28). There was no significant association between SGRQ symptoms and GOLD stages. Conclusions: The BODE scoring system corresponds to important differences in health status of patients with COPD. This grading system is better correlated with the health status indexes of the SGRQ than the GOLD staging criteria. abstract_id: PUBMED:26457459 Evaluation of Left Ventricular Function and its Relationship With Multidimensional Grading System (BODE Index) in Patients With COPD. Cardiovascular disease (CVD) is one of the main causes of morbidity and mortality in chronic obstructive pulmonary disease (COPD) patients; however, data regarding left ventricle (LV) function in COPD are limited. In this study, we aimed to evaluate LV systolic function and its relation to the BODE index in COPD patients using two-dimensional speckle tracking echocardiography (2D-STE). The study involved 125 COPD patients and 30 control subjects. All patients underwent 2D-echocardiography, pulmonary function tests and 6-minute walk tests. The patients were divided into four quartiles according to BODE index score. COPD patients had lower mitral annulus systolic velocity (Sm), average global longitudinal strain (GLS), average global longitudinal strain rate systolic (GLSRs), average GLSR early diastolic (GLSRe), average GLSR late diastolic (GLSRa), tricuspid annular plane systolic excursion (TAPSE) and peak systolic myocardial velocity (Sm-RV) (p < 0.001, p < 0.001, p < 0.001, p < 0.001, p < 0.001, p < 0.001 and p = 0.002 respectively) than control subjects. There were significant differences between BODE index quartiles in terms of Sm, average GLS and average GLSRs. Patients were divided into two groups according to median value of GLS (> -18.6 and ≤ -18.6).
BODE index quartiles were found to be independent predictors of decreased GLS in multivariate logistic regression analysis (p = 0.030). Increased BODE index was associated with impaired LV mechanics in patients with COPD. abstract_id: PUBMED:24333803 BODE index or geriatric multidimensional assessment for the prediction of very-long-term mortality in elderly patients with chronic obstructive pulmonary disease? a prospective cohort study. Background: a multidimensional approach-the BODE index-has been proposed for prognostic purposes in chronic obstructive pulmonary disease (COPD) and theoretically seems to be well suited for elderly people, but there is a lack of data in this population, especially with respect to long-term survival. The objective of this study is to evaluate whether the BODE index can predict both long (5 years) and very-long (10 and 15 years)-term mortality in an unselected population of elderly people with COPD better than a set of variables commonly taken into account in a geriatric multidimensional assessment (MDA). Methods: this was a multicentre, prospective, population study. We used data from the SaRA study, which included 563 elderly people with COPD whose vital status was ascertained for up to 15 years after enrolment. The discriminative capacity of the BODE index in predicting mortality was derived from Cox proportional hazard models including the components of the BODE index and compared with that of an alternative model based on MDA variables: age, gender, physical disability, cognitive function and mood status. Results: at 5 years, the HRs for mortality were 1.04 (95% CI: 0.60-1.79), 1.88 (1.10-3.22) and 3.55 (2.15-5.86) for quartiles 2-4, respectively, compared with quartile 1 of the BODE index. The corresponding figures for 10-year mortality were 1.50 (1.01-2.24), 2.11 (1.39-3.20) and 3.903 (2.62-5.82), and for 15-year mortality were 1.68 (1.19-2.36), 2.08 (1.44-3.01) and 3.78 (2.64-5.41). Similar results were obtained using variables included in the usual MDA. Conclusions: Both the 'classic' MDA and the BODE index are comparably associated with mortality, even at very long term, in elderly people with COPD. abstract_id: PUBMED:28947025 Patient Related Outcomes-BODE (PRO-BODE): A composite index incorporating health utilization resources predicts mortality and economic cost of COPD in real life. Multidimensional scores were proposed for defining COPD outcomes, but without any incorporation of the economic COPD cost to clinical indices. Aim: using mortality as an outcome, the hypothesis that adding total health care cost to the BODE index would better predict mortality in COPD was investigated. Methods: 275 COPD patients were surveyed. Anthropometrics, lung function, the BODE and the Charlson Comorbidity Index were determined. History of exacerbations, ER visits, hospitalizations and mortality were also determined over the next three years, with their rates graded and added to the BODE index according to a simple algorithm. The novel PRO-BODE index ranged 0-10 points; its relationship to annual total COPD cost and survival was assessed by linear regression analysis. Results: total COPD cost showed the highest relationship with survival (r = -0.58), even higher than that of age and of the BODE index (r = -0.28 and r = -0.21, respectively). The integrated PRO-BODE score proved proportional to the cost of care and inversely proportional to the length of survival.
Conclusions: PRO-BODE is a novel composite index that helps predict the real-life impact of COPD over three years, in terms of both patient survival and the economic burden of COPD. abstract_id: PUBMED:27920567 Evaluation of BODE index and its relationship with systemic inflammation mediated by proinflammatory biomarkers in patients with COPD. Introduction: BODE index, a multidimensional grading system which is based on Body mass index, airway Obstruction, Dyspnea scale, and Exercise capacity, has been increasingly used for the evaluation of chronic obstructive pulmonary disease (COPD). Many of the systemic manifestations of COPD are shown to be mediated by elevated levels of proinflammatory biomarkers. Objective: We aimed to investigate the relationship between the BODE index, its components, disease severity, and proinflammatory biomarkers like C-reactive protein (CRP), tumor necrosis factor (TNF)-α, and interleukin (IL)-6. Materials And Methods: A cross-sectional study which included 290 clinically stable COPD patients and 80 smoker controls was conducted. Medical history, body mass index, pulmonary function tests, 6-minute walking test, and modified scale of Medical Research Council dyspnea scale were evaluated. BODE scores were determined. Systemic inflammation was evaluated with the measurement of CRP, TNF-α, and IL-6 in the serum samples of all studied subjects. The correlation between inflammatory biomarkers and BODE index was assessed in COPD patients. Results: We found a significant relationship between COPD stages and BODE index. Our analysis showed significant association between systemic biomarkers and components of the BODE index. Both TNF-α and CRP levels exhibited weak but significant correlation with BODE index. Serum IL-6 concentrations exhibited significant correlation with 6-minute walking test, modified scale of Medical Research Council, and BODE index (r=0.201, P=0.004; r=0.068, P=0.001; and r=0.530, P=0.001, respectively). Also, an inverse and significant correlation was observed between BODE index and FEV1 (r=-0.567, P=0.001). IL-6 exhibited a highly significant and inverse correlation with FEV1 (r=-0.580, P=0.001). Conclusion: BODE index should be considered for evaluating patients with COPD. Also, IL-6 seems to be a potential biomarker that may enable determination of the severity and prediction of the course of the disease. abstract_id: PUBMED:20461145 Modified BODE indexes: Agreement between multidimensional prognostic systems based on oxygen uptake. Aim: It has been recently shown that the original BODE index has a high degree of correlation with two modified BODE indexes using maximal oxygen uptake expressed either as mL/min/kg (mBODE) or as the percentage predicted (mBODE%). In this study we investigated the agreement between the two modified BODE indexes (mBODE and mBODE%) in patients with stable chronic obstructive pulmonary disease (COPD). Methods: A total of 169 patients with stable COPD were enrolled in this cross-sectional study. Differences between the two mBODE indexes were assessed using kappa coefficients and Bland-Altman plots. One out of every three patients underwent the six-minute walking test to investigate the agreement with the original BODE index. Results: Correlations between the two mBODE indexes with each other (r = 0.96, P < 0.001) and with the original BODE index (mBODE r = 0.88, P < 0.001; mBODE% r = 0.93, P < 0.001) were excellent.
However, the two mBODE indexes were significantly different from each other (mBODE 5.27 +/- 2.3 versus mBODE% 4.31 +/- 2.5; P < 0.001). The kappa coefficients were significantly lower (entire study group k = 0.5, P < 0.001) for every GOLD stage. The mean difference between the two mBODE indexes was 0.8 +/- 0.6 units. Differences with the original BODE were higher for the mBODE (1.8 +/- 0.9) than for the mBODE% (0.6 +/- 0.8). Conclusions: The new mBODE indexes are highly correlated but significantly different from each other. The differences between the novel indexes deserve further scrutiny. abstract_id: PUBMED:34175803 Multidimensional indices in the assessment of chronic obstructive pulmonary disease. Chronic obstructive pulmonary disease (COPD), a very common disease, is the third leading cause of death worldwide. Due to the significant heterogeneity of clinical phenotypes of COPD there is no single method suitable for predicting patients' health status and outcomes, and therefore multidimensional indices, assessing different components of the disease, were developed and are recommended for clinical practice by international guidelines. Several indices have been widely accepted: BODE and its modifications, ADO, DOSE, CODEX, COTE. They differ in their composition and aim, while they are more accurate and better validated in specific settings and populations. We review the characteristics, strengths and limitations of these indices, and we discuss their role in routine management of patients with COPD, as well as in specific clinical scenarios, such as resuscitation and ceiling of care, or decisions to offer more invasive treatments. This analysis may help clinicians to use those indexes in a more practical and appropriate way. abstract_id: PUBMED:25622995 Behavior of Quality of Life (SGRQ) in COPD Patients According to BODE Scores. Introduction: To evaluate if the association between the BODE index and deterioration in health-related quality of life is linear. To determine possible associations between the BODE index and health status evaluated by the Saint George's Respiratory Questionnaire (SGRQ) at all levels of disease severity in COPD. Methods: A cross-sectional study was carried out on 253 patients from two Latin American respiratory centers (Brazil and Chile) with a clinical diagnosis of COPD, based on GOLD criteria. Assessment included the BODE index and the SGRQ questionnaire. Results: Patients had a BODE index of 3.1±2.0; FEV1 (%) of 49±19.2; BMI (kg/m(2)) of 24.7±5.1; 6MWT distance (meters) of 444±96. Significant correlations were found between the BODE index and SGRQ total scores (r=0.5; P<0.001), impact (r=0.45; P<0.001) and activity (r=0.5; P<0.001). From BODE score zero, HRQOL was already compromised in all SGRQ domains. SGRQ scores (total and domain) increased progressively for individual components of the BODE index, with the decrease in airflow limitation (P<0.05), BMI (P<0.002) and 6MWT (P<0.05), and with the increase in the Modified Medical Research Council (MMRC) score (P<0.05). Conclusion: There is an association between health-related quality of life, as assessed by the SGRQ, and the BODE index within the entire spectrum of COPD severity. Even in early disease stages and BODE index zero, health-related quality of life is already impaired.
Background: Chronic obstructive pulmonary disease (COPD) is characterized by progressive deterioration of respiratory function along with systemic effects which have a great impact on health-related quality of life (HRQoL). Classification of severity of airflow limitation in COPD does not represent the clinical consequences of COPD. Hence, combined COPD assessment should be preferred. BODE index (Body mass index, Airflow obstruction, Dyspnea and Exercise capacity) has recently been proposed to provide useful prognostic information. Objectives: To find out correlations between the BODE index and HRQoL, and between GOLD classification of COPD severity and HRQoL in stable COPD patients, and to compare between these two correlations. Materials And Methods: A longitudinal observational study was carried out with 114 stable COPD patients recruited over 10 months at the outpatient clinic of a tertiary care hospital in Kolkata, India. Patients were classified according to GOLD classification of severity of airflow limitation after performing spirometry. BODE index was calculated for each patient. Saint George's Respiratory Questionnaire (SGRQ) was used to assess the HRQoL. Results: BODE scores were categorized into four quartiles, quartile one to four with scores of 0-2, 3-4, 5-6 and 7-10, respectively. Higher BODE quartiles were associated with higher total SGRQ scores and SGRQ subscale scores (symptom, activity and impact). Very strong correlations were found between BODE quartiles and total SGRQ scores (rho = 0.914; P < 0.01). In contrast, GOLD classes showed moderate correlation with total SGRQ scores (rho = 0.590; P < 0.01). Conclusions: BODE index was strongly correlated with the HRQoL in stable COPD patients and it was better than GOLD classes of COPD severity to reflect the health status in patients with stable COPD. abstract_id: PUBMED:30573956 Reassessing the BODE score as a criterion for listing COPD patients for lung transplantation. Background: The BODE score (incorporating body mass index, airflow obstruction, dyspnea and exercise capacity) is used for the timing of listing for lung transplantation (LTx) in COPD, based on survival data from the original BODE cohort. This has limitations, because the original BODE cohort differs from COPD patients who are candidates for LTx and the BODE does not include parameters that may influence survival. Our goal was to assess whether parameters such as age, smoking status and diffusion indices significantly influence survival in the absence of LTx, independently of the BODE. Methods: In the present cohort study, the BODE was prospectively assessed in COPD patients followed in a tertiary care hospital with an LTx program. The files of 469 consecutive patients were reviewed for parameters of interest (age, gender, smoking status and diffusing capacity of the lungs for carbon monoxide [DL,CO]) at the time of BODE assessment, as well as for survival status. Their influence on survival independent of the BODE score was assessed, as well as their ability to predict survival in patients aged less than 65 years. Results: A Cox regression model showed that the BODE score, age and DL,CO were independently related to survival (P-values <0.001), as opposed to smoking status. Survival was better in patients aged less than 65 in the first (P=0.004), third (P=0.002) and fourth BODE quartiles (P=0.008). The difference did not reach significance in the second quartile (P=0.13). Median survival for patients aged less than 65 in the fourth BODE quartile was 55 months.
According to a receiver operating characteristic curve analysis, the BODE score as well as FEV1 and DL,CO fared similarly in predicting survival status at 5 years in patients aged less than 65 years. Conclusion: Age and DL,CO add to the BODE score to predict survival in COPD. Assessing survival using tools tested in cohorts of patients younger than 65 years is warranted for improving the listing of patients for LTx. Answer: Yes, the multidimensional grading system BODE, which stands for body mass index, airflow obstruction, dyspnea, and exercise capacity, corresponds to differences in health status of patients with COPD. The BODE index has been found to correlate with disease-specific indices of health status in patients with COPD. A study showed that higher BODE quartiles were associated with higher (worse) scores on the St George's Respiratory Questionnaire (SGRQ), which is a disease-specific index of health status. The differences among the BODE quartiles in health status indexes were significant for the total SGRQ as well as all three of its subscales. The BODE scoring system was better correlated to the health status indexes of the SGRQ than the GOLD staging criteria, indicating that it corresponds to important differences in the health status of patients with COPD (PUBMED:18046907).
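The quartile-based correlation and the ROC comparison described in the two abstracts above follow a standard statistical recipe that is easy to sketch. The snippet below uses synthetic data and invented variable names purely for illustration: it derives BODE quartiles from raw scores, correlates them with SGRQ totals, and summarizes a score's discrimination of 5-year mortality as an ROC AUC. Spearman's rank correlation is the natural choice for ordinal quartiles and is assumed here.

```python
# Minimal sketch of the BODE/SGRQ correlation and ROC survival analysis.
# All data are synthetic; variable names are hypothetical.
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

bode_score = rng.integers(0, 11, size=114)          # BODE ranges 0-10
bode_quartile = np.digitize(bode_score, [3, 5, 7])  # bins: 0-2, 3-4, 5-6, 7-10
sgrq_total = 30 + 8 * bode_quartile + rng.normal(0, 6, size=114)

rho, p = spearmanr(bode_quartile, sgrq_total)
print(f"Spearman rho = {rho:.3f}, p = {p:.3g}")

# ROC analysis: how well does the raw BODE score separate
# patients who died within 5 years from those who survived?
died_within_5y = rng.random(114) < 0.10 + 0.05 * bode_quartile
auc = roc_auc_score(died_within_5y, bode_score)
print(f"ROC AUC for 5-year mortality = {auc:.2f}")
```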
Instruction: Longer femoral necks in the young: a predictor of further increases in hip fracture incidence? Abstracts: abstract_id: PUBMED:8769033 Longer femoral necks in the young: a predictor of further increases in hip fracture incidence? Aims: To determine whether femoral neck length, a risk factor for hip fracture, is likely to increase in the coming decades in normal elderly New Zealand women. Methods: Femoral neck length was measured from dual-energy x-ray absorptiometry scans of the proximal femur in white women aged 19-88 years (n = 211). Since this length remains constant throughout adult life, it is possible to predict future trends in this index in the elderly, from its measurement at the present time in younger adults. Results: Femoral neck length was inversely related to age (p = 0.018) such that a 3.7 mm (5.7%) difference occurred over the age-range of this sample. Conclusions: Femoral neck length in elderly New Zealand women is likely to increase in the coming decades. As a result of the dependence of hip fracture risk on femoral neck length, future age-adjusted hip fracture incidence will increase by about 50% over the next 50 years unless significant public health initiatives are taken, either to increase bone density or to reduce fall frequency. abstract_id: PUBMED:18523625 An approach to compare the quality of cancellous bone from the femoral necks of healthy and osteoporotic patients through compression testing and microcomputed tomography imaging. It is estimated that osteoporosis is responsible for about 300,000 hip fractures per year in the United States. Effective prevention of these fractures has been demonstrated using bisphosphonates. However, their mechanism of action has not been elucidated. Furthermore, the precise effect of bisphosphonates on the femoral neck and surrounding areas has never been studied. We are interested in establishing a protocol to analyze the bone quality of proximal femurs from patients treated with bisphosphonates. Following hip replacement surgery, the aim is to determine whether imaging and compression testing of cancellous bone from the discarded femoral necks can accurately assess the bone's microarchitectural and biomechanical properties, respectively. To validate the technique, it was first tested on an untreated population. A bone biopsy trephine was used to extract cylindrical cores of trabecular bone from the centre of femoral necks. Densitometry, microcomputed tomography, and compression testing were used to assess the quality of bone in these samples. The compressive strength was found to be directly proportional to the modulus (i.e. stiffness) of the samples, thus reproducing previous findings. The relative porosity and, to a lesser extent, the bone mineral density were capable of predicting the quality of cancellous bone. In conclusion, a protocol to analyze the bone quality in human femoral necks using microCT and biomechanical compression testing was successfully established. It will be applied in a clinical setting to analyze bones from bisphosphonate-treated patients following total hip replacement. abstract_id: PUBMED:35737120 Incidence and treatment of intracapsular femoral neck fractures in Germany. Introduction: Intracapsular femoral neck fractures are one of the most common fractures in Germany. Nevertheless, the epidemiology and treatment modalities are not described comprehensively. 
For this reason, this study highlights the epidemiology of femoral neck fractures in different age groups and summarizes treatment strategies within the period from 2009 to 2019 based on nationwide data. Materials And Methods: In this retrospective cohort study, all cases of intracapsular femoral neck fractures (ICD-10: S72.0) between 2009 and 2019 in Germany were analyzed with regard to epidemiology, incidence and treatment. Operation and procedure classification system (OPS) codes in combination with intracapsular femoral neck fracture as the main diagnosis were included in the analysis. Data were provided by the Federal Statistical Office of Germany (Destatis). Results: A total of 807,834 intracapsular femoral neck fractures were detected over the eleven-year period, with a mean incidence of 110.0 per 100,000 inhabitants annually. In 68.8% of all fractures, the patients were female. Most patients were older than 70 years (82.4%), and 56.7% were older than 80 years. The overall increase in fracture numbers between 2009 and 2019 was 23.2%. Joint replacement was performed most often (80.4%). Hemiarthroplasty (56.8%) and total hip arthroplasty (22.8%) were the most common procedures with an increase of 27.1 and 38.6%, respectively. The proportion of cemented hemiarthroplasties was 86.2%, while 51.3% of all total hip arthroplasties were totally or partially cemented. Osteosyntheses were mainly conducted using dynamic compression screws (34.0%), conventional screws (31.3%) and nails (22.2%). Conclusion: The incidence of intracapsular femoral neck fractures in Germany has been increasing continuously within the last decade. In particular, patients over 80 years suffered predominantly from this type of fracture. The majority were treated with a joint replacement procedure, mainly cemented hemiarthroplasty. abstract_id: PUBMED:24852205 The incidence of and risk factors for developing atypical femoral fractures in Japan. The long-term treatment with anti-resorptive drugs for osteoporotic patients is suggested to be associated with an increase in atypical femoral fractures (AFFs). However, their incidence, patient characteristics, and risk factors have not been fully elucidated, especially in Asian countries. This retrospective observational cohort study found fourteen AFFs in ten patients (four bilateral fractures) among 2,238 hip and femoral shaft fractures treated in our associated hospitals between 2005 and 2010; this incidence (0.63%) was similar to that reported in Caucasians. Of the ten patients with AFFs, nine (90%) and six (60%) were using bisphosphonates (BPs) and glucocorticoids (GCs), respectively, compared to 14.3 and 8.6% for patients with typical femoral fractures who were using these agents. As comorbid conditions, five patients had collagen disease (CD) and two had diabetes. A fracture location-, age- and gender-matched (1:3) case-control study revealed that administration of BPs and GCs, and suffering from collagen disease (CD), were significant risk factors for developing AFFs [odds ratios 36.0 (95% confidence intervals 3.8-342.2), 13.0 (2.3-74.1) and 9.0 (1.6-50.3), respectively]. Interestingly, all of the patients with atypical subtrochanteric femoral fractures, defined as those within 5 cm of the lesser trochanter, were taking GCs due to CD, and the age of these patients (average of 54.8 years) was significantly lower than that of those with atypical diaphyseal femoral fractures (average of 77.2 years, p < 0.05).
In conclusion, the incidence of AFFs in the Japanese population was similar to that in Caucasians, and taking BPs and GCs and suffering from CD were risk factors for developing AFFs. abstract_id: PUBMED:37119329 Trends in incidence, health care consumption, and costs for proximal femoral fractures in the Netherlands between 2000 and 2019: a nationwide study. This study evaluated the incidence rates and societal burden of hip fractures in The Netherlands. Although incidence in the elderly population is decreasing and hospital stay is at an all-time low, the burden of medical costs and crude numbers of proximal femoral fractures are still rising in our aging population. Purpose: The aim of this nationwide study was to provide an overview of the incidence rate and economic burden of acute femoral neck and trochanteric fractures in The Netherlands. Methods: Data of patients who sustained acute proximal femoral fractures in the period January 1, 2000, to December 31, 2019, were extracted from the National Medical Registration of the Dutch Hospital Database. The incidence rate, hospital length of stay (HLOS), health care and lost productivity costs, and years lived with disability (YLD) were calculated for age- and sex-specific groups. Results: A total of 357,073 patients were included. The overall incidence rate increased by 22% over the 20-year study period from 16.4 to 27.1/100,000 person-years (py). The age-specific incidence rate in elderly > 65 years decreased by 16% (from 649.1 to 547.6/100,000 py). The incidence rate in men aged > 90 has surpassed the incidence rate in women. HLOS decreased in all age groups, hip fracture subtypes, and sexes from a mean of 18.5 to 7.2 days. The mean health care costs, over the 2015-2019 period, were lower for men (€17,723) than for women (€23,351) and increased with age to €26,639 in women aged > 80. Annual cumulative costs reached €425M, of which 73% was spent on women. Conclusion: The total incidence of hip fractures in The Netherlands has increased by 22%. Although incidence in the elderly population is decreasing and HLOS is at an all-time low, the burden of medical costs and crude numbers of proximal femoral fractures are still rising in our aging population. abstract_id: PUBMED:25472855 Wintertime surgery increases the risk of conversion to hip arthroplasty after internal fixation of femoral neck fracture. Unlabelled: The study demonstrates that wintertime surgeries are associated with impaired fracture healing and an increased risk of conversion to hip arthroplasty after osteosynthesis of femoral neck fracture. Furthermore, the results raise the possibility of an association between seasonal changes in vitamin D levels and impaired healing of femoral neck fractures. Introduction: Although the changes in vitamin D levels and calcitropic hormones influencing bone metabolism are seasonal, the effect of seasons on hip fracture healing is unknown. We assessed the effects of seasonal periodicity on conversion to hip arthroplasty after primary osteosynthesis of femoral neck fracture. Methods: This nationwide retrospective observational cohort study involved 2779 patients aged ≥ 60 years who underwent internal screw fixation for primary femoral neck fracture and were discharged in 2000. Cases requiring conversion to arthroplasty during the 8-year follow-up derived from the Hungarian health insurance database were registered. Risk factors assessed included sex, age, fracture type, season of primary surgery and surgical delay.
Competing-risks regression analysis was used for data analysis. Results: During the observation period, 190 conversions to hip arthroplasty (6.8%) were identified, yielding an overall incidence of 19.5 per 1000 person-years. The crude incidence rates of conversions after osteosynthesis in winter, spring, summer and fall were 28.6, 17.8, 16.9 and 14.7 per 1000 person-years, respectively. In addition to younger age, female sex and intracapsular fracture displacement, wintertime primary osteosynthesis significantly increased the risk of conversion (fall vs. winter, hazard ratio (HR): 0.50, 95% confidence interval [95% CI 0.33-0.76]; spring vs. winter, HR: 0.63, [95% CI 0.44-0.92]; summer vs. winter, HR: 0.62, [95% CI 0.42-0.91]). Conclusions: Our study demonstrates that wintertime primary osteosynthesis increases the risk of conversion surgeries. The results may help improve the outcome of primary fixation of femoral neck fractures. abstract_id: PUBMED:22643705 Atypical femoral fractures: epidemiology, etiology, and patient management. Purpose Of Review: To review the definition, epidemiology, and putative pathophysiology of atypical femoral fractures and propose strategies for the management of patients with atypical fractures as well as patients on long-term bisphosphonates without atypical fractures. Recent Findings: Recent epidemiologic evidence shows that the absolute incidence of atypical femoral fractures is small compared with the incidence of typical hip fractures. However, long-term bisphosphonate use may be an important risk factor for atypical fractures, and minimal additional antifracture benefit has been demonstrated for treatment durations longer than 5 years for patients with postmenopausal osteoporosis. This review gives advice to aid clinicians in the management of patients with incipient or complete atypical fractures. Summary: Extremely limited evidence is available for how best to manage patients with atypical fractures. A comprehensive metabolic approach for the management of patients on long-term bisphosphonates will help to prevent oversuppression of bone remodeling that is implicated in the pathogenesis of these fractures. abstract_id: PUBMED:37786250 The increasing complexity of femoral fragility fractures: incidence, fracture patterns and management over a 10-year period. Introduction: The purpose of this study was to determine whether there have been changes in the numbers and complexity of femoral fragility fractures presenting to our department over a period of 10 years. Methods: Patients >60 years presenting with femoral fragility fractures to our institution in 2018-2019 (397 fractures) were compared with respect to demographic data, incidence rates, fracture classification and surgical management with a historical cohort from 2009-2010 (335 fractures). Pathological and high-velocity fractures were excluded. Results: The gender proportion and average age (83.1 vs. 82.7 years) were unchanged. The number of femoral fractures increased by 19% but the overall incidence in people >60 years fell by 6% (p = 0.41). The proportion of unstable trochanteric fractures (31A2 and A3) increased from 22% to 55% (p < 0.001). The proportion of displaced intracapsular fractures increased from 53% to 72% (p < 0.001). The incidence of stable trochanteric fractures fell from 12.4 to 7.3/10,000 patients >60 years (p = 0.0006) while the incidence of unstable trochanteric fractures (31A2 and 31A3) increased from 3.5 to 8.9/10,000 patients >60 years (p < 0.0001).
The proportion of trochanteric fractures treated with an intramedullary (IM) nail increased from 9% to 35% (p = 0.0001). The number of shaft and distal femoral fractures increased by 41% although the incidence did not change significantly. Periprosthetic fractures comprised 70% of femoral shaft fractures in both cohorts. Conclusions: The increasing number and complexity of femoral fragility fractures, especially unstable trochanteric fractures and periprosthetic fractures, is likely to have an impact on implant use, theatre time and cost. abstract_id: PUBMED:30039251 Incidence, risk factors, and fracture healing of atypical femoral fractures: a multicenter case-control study. The incidence of atypical femoral fractures (AFFs) was 2.95% among 6644 hip and femoral fractures. Independent risk factors included the use of bisphosphonates (BPs), osteopenia or osteoporosis, rheumatoid arthritis, increased femoral curvatures, and thicker femoral cortices. Patients with AFFs and BP treatment were more likely to have problematic healing than those with typical femoral fractures (TFFs) and no BP treatment. Introduction: To determine the incidence and risk factors of atypical femoral fractures (AFFs), we performed a multicenter case-control study. We also investigated the effects of bisphosphonates (BPs) on AFF healing. Methods: We retrospectively reviewed the medical records and radiographs of 6644 hip and femoral fractures of patients from eight tertiary referral hospitals. All the radiographs were reviewed to distinguish AFFs from TFFs. Univariate and multivariate logistic regression analyses were performed to identify risk factors, and interaction analyses were used to investigate the effects of BPs on fracture healing. Results: The incidence of AFFs among 6644 hip and femoral fractures was 2.95% (90 subtrochanter and 106 femoral shaft fractures). All patients were females with a mean age of 72 years, and 75.5% were exposed to BPs for an average duration of 5.2 years (range, 1-17 years). The use of BPs was significantly associated with AFFs (p < 0.001, odds ratio = 25.65; 95% confidence interval = 10.74-61.28). Other independent risk factors for AFFs included osteopenia or osteoporosis, rheumatoid arthritis, increased anterior and lateral femoral curvatures, and thicker lateral femoral cortex at the shaft level. Interaction analyses showed that patients with AFFs using BPs had a significantly higher risk of problematic fracture healing than those with TFFs and no BP treatment. Conclusions: The incidence of AFFs among 6644 hip and femoral fractures was 2.95%. Osteopenia or osteoporosis, use of BPs, rheumatoid arthritis, increased anterior and lateral femoral curvatures, and thicker lateral femoral cortex were independent risk factors for the development of AFFs. Patients with AFFs and BP treatment were more likely to have problematic fracture healing than those with TFFs and no BP treatment. abstract_id: PUBMED:2011721 Incidence of proximal femoral fractures in the Canton of Vaud Hip fractures place a major and increasing burden on health services in Western countries. Reported incidence rates vary considerably from one geographic area to another. No published data are available for Switzerland or surrounding countries, but such descriptive indicators are indispensable in orienting national or regional policies. 
To fill this gap and to assess the similarity of hip fracture incidence in Switzerland and other countries, we collected data from several sources in 26 public and private hospitals in the Canton of Vaud (total population: 538,000) for 1986, which allowed us to calculate the incidence (for people over twenty years old) and assess related parameters. A total of 577 hip fractures were identified among the resident population, indicating a crude average annual incidence rate of 140 per 100,000 (95% confidence interval: 128, 152). Corresponding rates for males and females were 58 (47, 68) and 213 (193, 232). Standardized rates and international comparisons show that Swiss rates are slightly lower than those of most industrial countries. More detailed results of relative risks for various study variables are presented and the pathogenesis of hip fractures is discussed. Answer: The study conducted on a sample of white women aged 19-88 years in New Zealand found that femoral neck length, which is a risk factor for hip fracture, was inversely related to age. The results indicated that there was a 3.7 mm (5.7%) difference in femoral neck length over the age range of the sample. Since femoral neck length remains constant throughout adult life, the study concluded that femoral neck length in elderly New Zealand women is likely to increase in the coming decades. This increase in femoral neck length could lead to a future age-adjusted hip fracture incidence increase by about 50% over the next 50 years unless significant public health initiatives are taken to either increase bone density or reduce fall frequency (PUBMED:8769033). Therefore, based on this study, it can be inferred that longer femoral necks in the young may indeed be a predictor of further increases in hip fracture incidence, especially if no preventive measures are implemented.
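Since several of the abstracts above report crude incidence rates with 95% confidence intervals, it may help to see how such figures are derived. The sketch below reproduces the arithmetic behind the Vaud estimate of 140 per 100,000; the adult population of roughly 412,000 is back-calculated from the reported rate and case count, so treat it as an assumption for illustration.

```python
# Crude incidence rate with a normal-approximation Poisson CI,
# using the case count reported in the Vaud study.
import math

cases = 577
person_years = 412_000  # adults >20 years observed for one year (assumed)

rate = cases / person_years * 100_000
se = rate / math.sqrt(cases)  # delta-method SE of the rate
lo, hi = rate - 1.96 * se, rate + 1.96 * se
print(f"{rate:.0f} per 100,000 (95% CI {lo:.0f}, {hi:.0f})")
# prints roughly 140 per 100,000, close to the reported CI of (128, 152)
```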
Instruction: Are clinical, laboratory, and imaging markers suitable predictors of vesicoureteral reflux in children with their first febrile urinary tract infection? Abstracts: abstract_id: PUBMED:25132949 Are clinical, laboratory, and imaging markers suitable predictors of vesicoureteral reflux in children with their first febrile urinary tract infection? Purpose: This study was conducted to determine the predictive value of clinical, laboratory, and imaging variables for the diagnosis of vesicoureteral reflux in children with their first febrile urinary tract infection. Materials And Methods: One hundred fifty-three children with their first febrile urinary tract infection were divided into two groups according to the results of voiding cystourethrography: 60 children with vesicoureteral reflux and 93 children without. The sensitivity, specificity, positive and negative predictive value, likelihood ratio (positive and negative), and accuracy of the clinical, laboratory, and imaging variables for the diagnosis of vesicoureteral reflux were determined. Results: Of the 153 children with febrile urinary tract infection, 60 patients (39.2%) had vesicoureteral reflux. There were significant differences between the two groups regarding fever >38℃, suprapubic pain, C-reactive protein quantitative level, number of red blood cells in the urine, and results of renal ultrasound and dimercaptosuccinic acid renal scanning (p < 0.05). There were significant positive correlations between fever >38.2℃ and dimercaptosuccinic acid renal scanning and vesicoureteral reflux. Also, there were significant positive correlations between the erythrocyte sedimentation rate, positive urinary nitrite test, hyaline cast, and renal ultrasound and high-grade vesicoureteral reflux. Conclusions: This study revealed fever >38.2℃ and dimercaptosuccinic acid renal scanning as the best predictive markers for vesicoureteral reflux in children with their first febrile urinary tract infection. In addition, erythrocyte sedimentation rate, positive urinary nitrite test, hyaline cast, and renal ultrasound are the best predictive markers for high-grade vesicoureteral reflux. abstract_id: PUBMED:31591046 An update on renal scarring after urinary tract infection in children: what are the risk factors? Aim: The aim of this study was to present updated information on clinical, laboratory, and imaging risk factors and predictors of renal scarring after first or recurrent febrile UTIs. Methods: PubMed was searched for current data on possible risk factors and predictors of renal scarring after febrile urinary tract infections in children. Results: Recurrence of acute pyelonephritis is an independent risk factor for renal scarring, while the duration of fever before treatment initiation is mainly associated with acute pyelonephritis and its severity. Severe vesicoureteral reflux is an important independent risk factor for the development of renal scarring after a febrile urinary tract infection. Conclusions: Certain clinical parameters could be used to identify children at high risk for renal scarring after febrile urinary tract infection, helping clinicians to reserve dimercaptosuccinic acid scan for selected cases. abstract_id: PUBMED:36253233 Renal scars in children with febrile urinary tract infection - Looking for associated factors.
Introduction: Febrile urinary tract infection (UTI) is a common childhood infection related to renal scarring and potentially long-term complications like chronic kidney disease. It would be of great benefit to find a correlation between easily accessible factors in the acute phase of a febrile UTI and the development of renal scar formation and/or decreased renal function in order to identify children at risk of future complications. Objective: The aim of this study was to identify factors associated with the development of decreased split renal function (DSRF) and/or permanent renal scar formation in children with febrile UTI. Study Design: The medical records of 212 children aged 0 months to 15 years with febrile UTI admitted to the Pediatric Department of Lillebaelt Hospital, Kolding from January 2011 to September 2014 were systematically reviewed. We analyzed clinical, laboratory, and radiologic findings. Statistical analysis was performed to identify factors associated with renal scar formation and DSRF on nuclear imaging at 6-month follow-up. Results: A total of 113 medical records were eligible for further analysis: 99 girls and 14 boys, with 34 patients younger than 12 months. In total 30 patients (26.5%) had abnormal follow-up imaging (DSRF less than 45% and/or renal scarring). Nine patients (8%) had renal scarring. Four patients (3.5%) had renal scarring only, 21 patients (18.6%) had DSRF only, and five patients (4.4%) had both renal scarring and DSRF. Patients with renal scar formation on follow-up imaging had significantly higher C-reactive protein (CRP) than patients with no scarring (p < 0.01). CRP and absolute neutrophil count (ANC) were significantly higher in patients with abnormal follow-up imaging (p < 0.01 and p = 0.010), and these patients more often had positive nitrite on urine dipstick compared to patients with normal kidneys on follow-up (p = 0.048). Temperature above 38.5 °C and CRP >50 mg/L in combination were also associated with a higher risk of abnormal follow-up imaging (p = 0.016). Discussion: This study contributes further knowledge to the ongoing debate regarding renal scarring but also reveals the possibility of associated factors for the development of DSRF following a febrile UTI in children. However, due to the retrospective design as well as the small number of events in our study, definite conclusions on whether the above-mentioned factors are indeed prognostic for the development of renal scarring or DSRF following a febrile UTI cannot be drawn.
Results: Of 278 children analyzed, 172 (61.9%) had acute pyelonephritis. There was VUR in 101 (36.3%) children, including 73 (26.3%) with grades III-V VUR. RS was identified in 75 (27.0%) children. To detect VUR, TDA and PCT had the highest sensitivity for grades I-V VUR (80.2%) and III-V VUR (94.5%), respectively, whereas AAP had the highest specificity for I-V VUR (77.4%) and III-V VUR (78.0%), respectively. TDA and PCT had the highest sensitivity (100%) for detecting RS. The highest cost and radiation dose were associated with TDA, whereas AAP had the least expenditure and radiation exposure. By multivariate analysis, PCT and VUR, especially grades III-V, were independent predictors of RS. Conclusions: There is no perfect guideline for children with a first febrile UTI. The PCT protocol has good ability for detecting high-grade VUR and RS. Considering the available imaging modalities and the need to reduce cost and radiation burden, the clinical suggestions in the AAP guidelines represent a reasonable protocol. abstract_id: PUBMED:9499305 Usefulness of renal scintigraphy in the assessment of the first febrile urinary infection in children. Objective: The objective of this study was to compare the findings of renal ultrasonography and 99mTc-DMSA renal scintigraphy in children with their first acute febrile urinary tract infection to determine which method is better in detecting patients at risk of renal injury or reflux. Patients And Methods: Thirty-three children between 0.2 and 12.3 years of age with their first acute febrile urinary tract infection were studied by means of clinical and laboratory assessment, renal ultrasonography and 99mTc-DMSA renal scintigraphy. In 24 patients (72.7%) voiding cystourethrography was performed. The patients were divided into two groups, those under 2 years of age (n = 14) and those over 2 years old (n = 19). Results: Cortical scintigraphy showed renal changes in 23 patients (69.7%) and ultrasonography showed renal changes in 2 (6.1%; p < 0.05). Children over 2 years of age had a higher incidence of renal lesions than did younger children (84.2% vs 50%; p < 0.05). There were no differences between girls and boys. Reflux was demonstrated in 13 patients (54.2%). Among kidneys with abnormal cortical scintigraphy, vesicoureteral reflux was present in 76.5% of the studies. Furthermore, of those with abnormal ultrasonography, vesicoureteral reflux was present in 17.6%. Conclusions: We found a high incidence of renal involvement in children with their first acute febrile urinary tract infection. Cortical scintigraphy is more sensitive than ultrasonography in detecting renal changes. The incidence of vesicoureteral reflux in febrile urinary tract infection is high. When there is a renal cortical defect, the risk of reflux is higher. This suggests that cortical scintigraphy should be added to the initial examination of children with their first acute febrile urinary tract infection and this could be supplemented by voiding cystourethrography alone, with ultrasonography having a secondary role. abstract_id: PUBMED:35760670 Model for predicting high-grade vesicoureteral reflux in young children presenting with febrile urinary tract infection. Introduction: High-grade vesicoureteral reflux (VUR) is associated with long-term renal outcome, and can be missed if voiding cystourethrogram (VCUG) is not performed. The current febrile urinary tract infection (UTI) guidelines have narrowed down the indications for VCUG.
This study aimed to determine the predictors and develop a model to predict high-grade VUR (grades III-V) in young children presenting with febrile UTI. Methods: We retrospectively reviewed 324 children aged 2-60 months who presented with first or recurrent febrile UTI and underwent both renal and bladder ultrasonography (RBUS) and VCUG during 2004-2013. Multivariate logistic regression was used to identify independent predictors of high-grade VUR, and a predictive model and scoring system were developed. Sensitivity and specificity of the predictors were evaluated. Results: Sixty-two (67.4%) of 92 children with VUR had high-grade VUR. The independent risk factors for high-grade VUR were recurrent UTI (p < 0.001), non-Escherichia coli (non-E. coli) pathogen (p = 0.011), and abnormal RBUS (p < 0.001) including pelvicalyceal dilation only (p = 0.003), and with ureteric dilation (p < 0.001). Abnormal RBUS showed the highest sensitivity (58.1%), while ureteric dilation had the highest specificity (96.6%) for predicting high-grade VUR. Combining these predictors improved the specificity but decreased the sensitivity. The prediction score was calculated as follows: recurrent UTI = 3 points + non-E. coli = 1 point + pelvicalyceal dilation only = 1 point + pelvicalyceal and ureteric dilation = 3 points. A cutoff score of 1 showed sensitivity of 82.3% and specificity of 63.7%. The specificity would increase (87.8, 92.7, 97.7, 100, 100, and 100%) for higher scores of 2, 3, 4, 5, 6 and 7, respectively. Discussion: High-grade VUR could be predicted by a simple model using single or combined predictors. Sensitivity and specificity varied according to each predictor. The specificity of ureteral dilation was as high as that of recurrent UTI. Weighting each presenting predictor and summing the scores provided more detail on sensitivity and specificity. Conclusions: Children with febrile UTI who have the predictor(s) of high-grade VUR are recommended to undergo VCUG. The high-grade VUR prediction score may aid decision-making regarding whether to perform VCUG in a particular child. abstract_id: PUBMED:12529459 Imaging studies after a first febrile urinary tract infection in young children. Background: Guidelines from the American Academy of Pediatrics recommend obtaining a voiding cystourethrogram and a renal ultrasonogram for young children after a first urinary tract infection; renal scanning with technetium-99m-labeled dimercaptosuccinic acid has also been endorsed by other authorities. We investigated whether imaging studies altered management or improved outcomes in young children with a first febrile urinary tract infection. Methods: In a prospective trial involving 309 children (1 to 24 months old), an ultrasonogram and an initial renal scan were obtained within 72 hours after diagnosis, contrast voiding cystourethrography was performed one month later, and renal scanning was repeated six months later. Results: The ultrasonographic results were normal in 88 percent of the children (272 of 309); the identified abnormalities did not modify management. Acute pyelonephritis was diagnosed in 61 percent of the children (190 of 309). Thirty-nine percent of the children who underwent cystourethrography (117 of 302) had vesicoureteral reflux; 96 percent of these children (112 of 117) had grade I, II, or III vesicoureteral reflux. Repeated scans were obtained for 89 percent of the children (275 of 309); renal scarring was noted in 9.5 percent of these children (26 of 275).
Conclusions: An ultrasonogram performed at the time of acute illness is of limited value. A voiding cystourethrogram for the identification of reflux is useful only if antimicrobial prophylaxis is effective in reducing reinfections and renal scarring. Renal scans obtained at presentation identify children with acute pyelonephritis, and scans obtained six months later identify those with renal scarring. The routine performance of urinalysis, urine culture, or both during subsequent febrile illnesses in all children with a previous febrile urinary tract infection will probably obviate the need to obtain either early or late scans. abstract_id: PUBMED:33481100 Is ultrasonography mandatory in all children at their first febrile urinary tract infection? Background: This study investigated whether performing kidney ultrasound (KUS) only in children presenting with either a pathogen other than E. coli at their first febrile urinary tract infection (fUTI) or experiencing fUTI recurrence would increase missed diagnoses of kidney anomalies. Methods: Patients aged 2-36 months with fUTI who underwent KUS evaluation from 2 January 2013 to 30 June 2018 were enrolled. Cystourethrography was performed after pathological KUS or recurring fUTIs. Thereafter, we retrospectively assessed the detection rate of kidney anomalies by performing KUS only in patients with an atypical pathogen at the first fUTI or with recurring fUTIs. Results: Among the 263 patients included, the isolated pathogen was E. coli in 223 cases (84.8%) and atypical in 40 cases (15.2%). KUS detected kidney anomalies in 14/223 (6.3%) of fUTIs caused by E. coli and in 11/40 (27.5%) of fUTIs caused by an atypical pathogen (OR 5.5, 95%CI 2.5-14.5). Cystourethrography was performed in 40 patients and vesicoureteral reflux (VUR) was found in 20 cases. None of the high-grade VUR diagnoses or other kidney anomalies would have been missed through a different diagnostic protocol that required the presence of an atypical pathogen at the first fUTI or a fUTI recurrence to perform the KUS. Conclusions: A diagnostic protocol that requires the presence of an atypical pathogen at the first fUTI or a second episode of fUTI to perform the KUS would allow a reduction in the number of negative ultrasounds with a negligible risk of missed diagnoses of kidney anomalies. abstract_id: PUBMED:28814261 Recent Studies on the Care of First Febrile Urinary Tract Infection in Infants and Children for the Pediatric Hospitalist. Background: Urinary Tract Infection (UTI) is a common cause of bacterial infection in young children, and accounts for a significant number of pediatric hospitalizations. Objective: To review recent publications focusing on the care of children hospitalized with their first febrile UTI. Methods: A PubMed search was performed including publications from 2011-2016 on first febrile UTI in childhood. Abstracts were reviewed for relevance to the care of hospitalized children and their follow-up. Relevant articles underwent full review by all authors and articles excluded from results included those without novel data analysis, primary improvement-based reports and studies with poor design or analysis. Included articles were categorized as "diagnosis", "management", "imaging" or "follow-up". Results: Of 406 articles initially identified, 40 studies were included. One technical report with a systematic review was also included.
Major topics addressed included the role of urinalysis in screening for UTI, use of parenteral antimicrobial therapy, the role of antimicrobial prophylaxis in prevention of recurrent UTI, and ideal follow-up and imaging approach following diagnosis of febrile UTI. Conclusion: Recent literature on first febrile UTI addresses a broad range of areas regarding the care of hospitalized children, though some questions remain unanswered. Overall, studies support increased attention to the potential risks, expense and invasiveness of various approaches for evaluation. Proposed updates to practice included: utilization of urinalysis for screening and diagnosis, transitioning to oral antimicrobials based on clinical improvement and limiting the routine use of voiding cystourethrogram and antimicrobial prophylaxis. abstract_id: PUBMED:31454101 Updated Italian recommendations for the diagnosis, treatment and follow-up of the first febrile urinary tract infection in young children. Aim: Our aim was to update the recommendations for the diagnosis, treatment and follow-up of the first febrile urinary tract infection in young children, which were endorsed in 2012 by the Italian Society of Pediatric Nephrology. Methods: The Italian recommendations were revised on the basis of a review of the literature published from 2012 to October 2018. We also carried out an ad hoc evaluation of the risk factors to identify children with high-grade vesicoureteral reflux or renal scarring, which were published in the previous recommendations. When evidence was not available, the working group held extensive discussions, during various meetings and through email exchanges. Results: Four major modifications have been introduced. The method for collecting urine for culture and its interpretation has been re-evaluated. We have reformulated the algorithm that guides clinical decisions to proceed with voiding cystourethrography. The suggested antibiotics have been revised, and we have recommended further restrictions of the use of antibiotic prophylaxis. Conclusion: These updated recommendations have now been endorsed by the Italian Society of Pediatric Nephrology and the Italian Society for Pediatric Infectivology. They can also be used to compare other recommendations that are available, as a worldwide consensus in this area is still lacking. Answer: Yes, clinical, laboratory, and imaging markers are suitable predictors of vesicoureteral reflux (VUR) in children with their first febrile urinary tract infection (UTI). A study found that fever greater than 38.2℃ and dimercaptosuccinic acid renal scanning were the best predictive markers for VUR in children with their first febrile UTI. Additionally, erythrocyte sedimentation rate, positive urinary nitrite test, hyaline cast, and renal ultrasound were identified as the best predictive markers for high-grade VUR (PUBMED:25132949). Other studies have also identified risk factors and predictors for renal scarring after febrile UTIs, which may be associated with VUR. Recurrence of acute pyelonephritis and severe VUR are important independent risk factors for the development of renal scarring after a febrile UTI (PUBMED:31591046). Moreover, factors such as higher C-reactive protein levels, absolute neutrophil count, and positive nitrite in urine dipstick have been associated with abnormal follow-up imaging, indicating a higher risk of renal scarring (PUBMED:36253233). 
In terms of imaging, procalcitonin (PCT) protocols and voiding cystourethrography (VCUG) have been compared with various guidelines to detect VUR and renal scarring. The PCT protocol showed good ability for detecting high-grade VUR and renal scarring, suggesting that it could be a valuable tool in the diagnostic process (PUBMED:24648130). Renal scintigraphy has also been shown to be more sensitive than ultrasonography in detecting renal changes associated with VUR (PUBMED:9499305). A predictive model for high-grade VUR in young children presenting with febrile UTI has been developed, which includes factors such as recurrent UTI, non-Escherichia coli pathogen, and abnormal renal and bladder ultrasonography (RBUS) findings (PUBMED:35760670). Lastly, guidelines from the American Academy of Pediatrics and other authorities have recommended imaging studies, such as VCUG and renal ultrasonography, after a first febrile UTI in young children, although the value of these studies in altering management or improving outcomes has been debated (PUBMED:12529459). In conclusion, a combination of clinical signs, laboratory tests, and imaging studies can serve as suitable predictors for VUR in children experiencing their first febrile UTI, aiding in the identification of those at risk for VUR and associated renal scarring.
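Of the markers reviewed in this answer, the additive prediction score from PUBMED:35760670 is concrete enough to express directly in code. The sketch below is a minimal illustration of that scoring system; the function and argument names are invented, while the point values and cutoff behaviour come from the abstract (the two ultrasound dilation categories are mutually exclusive).

```python
# Minimal sketch of the high-grade VUR prediction score (hypothetical names).
def vur_risk_score(recurrent_uti: bool,
                   non_e_coli_pathogen: bool,
                   pelvicalyceal_dilation_only: bool,
                   pelvicalyceal_and_ureteric_dilation: bool) -> int:
    """Sum the published point values for each predictor present."""
    score = 0
    score += 3 if recurrent_uti else 0
    score += 1 if non_e_coli_pathogen else 0
    score += 1 if pelvicalyceal_dilation_only else 0
    score += 3 if pelvicalyceal_and_ureteric_dilation else 0
    return score

def recommend_vcug(score: int, cutoff: int = 1) -> bool:
    # At the published cutoff of 1: sensitivity 82.3%, specificity 63.7%.
    # Raising the cutoff trades sensitivity for specificity
    # (specificity reaches 100% at scores of 5 and above).
    return score >= cutoff

score = vur_risk_score(recurrent_uti=False, non_e_coli_pathogen=True,
                       pelvicalyceal_dilation_only=True,
                       pelvicalyceal_and_ureteric_dilation=False)
print(score, recommend_vcug(score))  # 2 True -> VCUG would be recommended
```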
Instruction: Is it worthwhile to operate on subclinical right varicocele in patients with grade II-III varicocele in the left testicle? Abstracts: abstract_id: PUBMED:16047585 Is it worthwhile to operate on subclinical right varicocele in patients with grade II-III varicocele in the left testicle? Purpose: To determine whether repair of subclinical varicoceles in the right testicle results in significant seminal improvement in patients with clinical left varicocele. Methods: Patients were divided into two groups: Group I (unilateral varicocelectomy) and Group II (bilateral varicocelectomy-subclinical right varicocele). The mean sperm concentration before treatment was higher in Group I (21.01 +/- 19.1) compared to Group II (5.7 +/- 10.7) (p = 0.04). Results: An increase in volume was detected in the left testicle of patients in Group I (17 +/- 7.9 vs. 22.81 +/- 8.2; p = 0.04) and in the right testicle of patients in Group II (18.4 +/- 6.2 vs. 22.3 +/- 6.5; p = 0.04). Although the mean postoperative sperm concentration in Group I increased slightly (25.7 +/- 22.8), the mean sperm concentration in Group II increased significantly (30.32 +/- 9.8; p = 0.03). The pregnancy rate was higher in Group II (66.7%) compared to Group I (33.3%). Conclusions: Even a small, subclinical unrepaired varicocele continues to have a detrimental effect on bilateral testis function in a patient with grade II-III left varicocele. abstract_id: PUBMED:24124170 Physical exam and ultrasound characteristics of right varicocoeles in adolescents with left varicocoeles. The published incidence of bilateral varicocoeles in adults is much higher than among adolescents. We investigated our adolescent varicocoele database to determine (i) the presence of subclinical and palpable right varicocoeles in boys with a left varicocoele, (ii) the effect, if any, of a right varicocoele on asymmetry measurements and (iii) the difference in duplex Doppler ultrasound (DDUS) findings between left varicocoeles and contralateral right varicocoeles. Among 506 boys with a known left varicocoele who underwent DDUS, the incidence of right clinical and subclinical varicocoeles as well as the influence of Tanner stage, varicocoele grade, laterality, peak retrograde flow (PRF) and maximum vein diameter (MVD) on testicular volume and asymmetry was examined using univariate and multivariate linear regression. Right retrograde flow was detected in 204 of 506 (40.3%) boys. Varicocoele grades were subclinical in 89 (18%), grade I in 51 (10%), grade II in 63 (12%) and grade III in 1 (0.2%). In patients with bilaterally palpable varicocoeles, increasing left grade was associated with greater testicular asymmetry (p = 0.02) and increasing right grade was associated with less testicular asymmetry (p = 0.01). Mean PRF and MVD values were lower in right as compared to left varicocoeles. Contrary to common perception, both subclinical and palpable right varicocoeles are frequently associated with left varicocoeles in adolescents. Although the contribution of right varicocoeles to fertility is controversial, these right varicocoeles can influence clinically relevant parameters, particularly testicular asymmetry, and thus influence which left varicocoeles undergo surgery. abstract_id: PUBMED:38402451 Significant reduction in left testicular volume shown by ultrasonography in infertile patients with left-side grade 2-3 varicocele as compared to less than grade 2: An evidence for indication of varicocelectomy.
Purpose: Testicular volume (TV) is known to be one of the main parameters for testicular function (TF). This study was conducted to re-evaluate the indications for a varicocelectomy based on a survey of preoperative TV results in left-side varicocele patients considered to reflect the detrimental effects of a varicocele on TF. Methods: TV results of infertile patients determined using ultrasonography by a single expert physician were retrospectively evaluated. Results: Of 590 examined patients, 424 had no varicocele findings (Group A), while 148 had a left-side varicocele (Group B). Group B was subdivided based on varicocele grade into Group B0 (subclinical), B1 (grade 1), B2 (grade 2), and B3 (grade 3). Comparisons of left-side TV showed no significant differences for grade among Groups A, B0, and B1, whereas that for Groups B2 and B3 was significantly lower as compared with Group A (p < 0.01, 0.02, respectively). The median TV of Group B I (composed of Groups B0 and B1) was 9.8 cm3, while that of Group B II (Groups B2 and B3) was significantly lower at 8.4 cm3 (p < 0.05). In contrast, a comparison of right TV values identified no significant differences among the groups (p = 0.918). Conclusion: A varicocelectomy should be performed for patients with a grade 2 or 3 varicocele to ameliorate testicular function. abstract_id: PUBMED:31555558 Cut-off values of the Johnsen score and Copenhagen index as histopathological prognostic factors for postoperative semen quality in selected infertile patients undergoing microsurgical correction of bilateral subclinical varicocele. Background: It remains challenging to determine which individuals are likely to benefit from microsurgical correction of subclinical varicocele, as basic semen parameters often do not improve postoperatively. We aimed to develop an easily accessible tool for prognostic stratification of infertile men indicated for microsurgical correction of bilateral subclinical varicocele characterized by prolonged and clear venous reflux and no other cause for infertility. Methods: We retrospectively analyzed the testicular biopsy, seminal analysis, and ultrasound evaluation records of 20 men managed between 2006 and 2014. Subclinical varicocele was diagnosed through bilateral testicular palpation and auscultation of venous reflux using a Doppler stethoscope, with confirmation on color Doppler sonography. We conducted receiver operating characteristic curve analysis to identify the optimal combinations of cut-offs for the Johnsen score, Copenhagen index, and testicular volume defining histological patterns with positive prognostic value for improved postoperative reproductive capacity. Results: Positive prognostic value was noted for the following combinations of parameters: (I) Johnsen score >8.2 in the left testicle and right testicular volume >12.8 mL predicted improved sperm concentration; (II) Johnsen score >8.2 and Copenhagen index digit II <2.5 bilaterally predicted improved total sperm motility; (III) Johnsen score >9.1 and Copenhagen index digit III <1.5 bilaterally predicted improved progressive sperm motility; (IV) Johnsen score >7.9 and right testicular volume >13.6 mL predicted improved sperm morphology.
Conclusions: The Johnsen score and Copenhagen index as histopathological prognostic factors can be easily obtained upon evaluation of testicular biopsy specimens and can serve as a simple and reliable tool to establish a more realistic prognosis for reproductive capacity in men who undergo microsurgical correction of subclinical varicocele with no other detectable cause for infertility. abstract_id: PUBMED:30024042 Comparison of clinical outcome of bilateral and unilateral varicocelectomy in infertile males with left clinical and right subclinical varicocele: A meta-analysis of randomised controlled trials. Varicocele is the most common cause of male infertility. Varicoceles are classified into two types: clinical and subclinical varicoceles. Some researchers reported right subclinical varicoceles are often accompanied by left clinical varicoceles. However, the treatment is controversial. Our aim was to compare the clinical outcome of unilateral varicocelectomy (UV) and bilateral varicocelectomy (BV) in infertile males with left clinical and right subclinical varicocele. A total of four randomised controlled trials (RCTs) were included in this study, including 637 cases with left clinical and right subclinical varicocele (318 cases in the BV group and 319 cases in the UV group). The fixed-effects model combined difference in progressive sperm motility between the two groups was 6.42% (95% CI: 5.09, 7.75). The random-effects model combined difference in normal sperm morphology between the two groups was 2.04% (95% CI: 0.60, 3.48). The odds ratio shown by the fixed-effects model for spontaneous pregnancy rate was 1.73 (95% CI: 1.24, 2.43). No statistically significant difference between the two groups was found in sperm concentration and sperm motility. Thus, BV may be superior to UV for infertile male patients with left clinical and right subclinical varicocele. However, more properly conducted RCTs are still needed. abstract_id: PUBMED:29210007 Bilateral is superior to unilateral varicocelectomy in infertile males with left clinical and right subclinical varicocele: a prospective randomized controlled study. Purpose: The purpose of this study is to compare the effect of bilateral versus unilateral varicocelectomy on seminal response and spontaneous pregnancy rates in infertile male patients with left clinical and right subclinical varicocele. Methods: A total of 358 infertile men with left clinical and right subclinical varicocele were randomized to a group that underwent bilateral (n = 179) or a group that underwent unilateral microsurgical subinguinal varicocelectomy (n = 179). Baseline data regarding male age, female partner age, grade of varicocele, body mass index, bilateral testicular volume, serum follicle-stimulating hormone, luteinizing hormone and total testosterone levels, infertility duration and semen parameters were gathered. One year after the surgery, semen parameters including sperm volume, sperm concentration, normal sperm morphology, progressive motility and sperm DNA fragmentation index were recorded and any pregnancy was also documented via telephone calls and hospital visits. Results: We found the baseline characteristics were comparable between the two groups. The seminal parameters had significant improvements 1 year postoperatively in both groups. However, the bilateral group showed significantly greater improvements than the unilateral group in sperm concentration, normal sperm morphology and progressive motility.
In addition, the pregnancy rate was significantly higher in the bilateral group after the surgery (42.5 versus 26.0%, bilateral versus unilateral group). Conclusion: Our study indicated that bilateral is superior to unilateral varicocelectomy in infertile males with left clinical and right subclinical varicocele, being associated with greater improvements in sperm concentration, normal sperm morphology and progressive motility and spontaneous pregnancy rate after the surgery. abstract_id: PUBMED:19615680 Right subclinical varicocele: how to manage in infertile patients with clinical left varicocele? In this prospective study, we compared the results of bilateral varicocelectomy and left varicocelectomy for treatment of infertile patients with subclinical right and clinical left varicoceles with oligoasthenospermia. Because there was more improvement in semen parameters and pregnancy rate in patients who underwent the bilateral surgery, we concluded that right subclinical varicocele should be surgically treated in conjunction with the left one. abstract_id: PUBMED:28255399 Association between Body Mass Index and frequency and grade of varicocele in southeast Iran. Background: Varicocele is associated with impaired spermatogenesis. It may also be associated with the patients' weight. We aimed to determine the relationship between body mass index (BMI) and the frequency and grade of varicocele among a sample of Iranian men in southeast Iran. Methods: This cross-sectional study was conducted during 2010 and 2011 among 167 men who were referred to the University-affiliated Urology Clinics in Zahedan, Iran. Expert urologists examined the patients for the existence of varicocele and determined its grade as I to III. The age, grade of varicocele, and the side of varicocele as well as the weight and height of the patients were measured and recorded. The patients were categorized into different age groups. Data were analyzed by using SPSS software. Results: The patients' mean age was 27.9 ± 7.6 years and their mean BMI was 23.5 ± 4.7 kg/m2. Varicocele was more frequent in the age group of 20-30 years. Most patients had grade II varicocele without a significant difference between the age groups (P=0.11). The mean BMI of patients with right varicocele was not different from that of patients with left varicocele (P=0.34). The BMI of patients with bilateral varicocele was not different from that of those having right (P=0.94) or left (P=0.17) varicocele.
Results: There was no significant difference between the left and right testicular volumes in the control patients. The testicular volumes of patients with a grade I varicocele were similar to those in control patients. Patients with a grade II varicocele had a significantly smaller left testis than the controls at Tanner stages 4 and 5 (P < or = 0.05). Patients with a grade III varicocele had a significantly smaller left testis than controls at each Tanner stage (P < or = 0.05) and significantly smaller right testis than controls at Tanner stages 4 and 5 (P < or = 0.05). Conclusion: The presence of a grade I varicocele in adolescence appears to have no effect on normal testicular growth. Some patients with a grade II varicocele are at risk of left testicular volume loss with time and should have their testicular volume measured annually. Patients with grade III varicocele are at risk of bilateral testicular volume loss; a careful evaluation and early surgical intervention are recommended in this group of patients. abstract_id: PUBMED:19371942 Efficacy of bilateral and left varicocelectomy in infertile men with left clinical and right subclinical varicoceles: a comparative study. Objectives: To determine whether it is necessary to perform bilateral varicocelectomy (BV) in infertile men with left clinical and right subclinical varicoceles by comparing the outcomes of BV with those of left varicocelectomy (LV) in these patients. Methods: A total of 104 infertile men with left clinical and right subclinical varicoceles were randomly divided into 2 groups: BV (n = 51) and LV (n = 53). Both BV and LV were performed using a retroperitoneal approach with ligation of the dilated internal spermatic veins. The sperm concentration, sperm motility, normal morphology, serum testosterone level, bilateral testicular volume, and spontaneous pregnancy rate were measured pre- and postoperatively. Statistical analysis was performed using analysis of variance and the chi(2) test, with significance determined by P < .05. Results: The patients in the 2 groups had comparable preoperative age, partner age, sperm concentration, sperm motility, normal morphology, left varicocele grade, serum testosterone level, and bilateral testicular volume (P > .05). Both BV and LV resulted in significant increases in sperm concentration, sperm motility, and normal morphology (P < .05). No significant changes in serum testosterone level or bilateral testicular volume were observed after varicocelectomy in the 2 groups (P > .05). No significant differences were found in the postoperative sperm concentration, sperm motility, normal morphology, bilateral testicular volume, serum testosterone level, and spontaneous pregnancy rate between the 2 groups (P > .05). Conclusions: These findings suggest that no benefit is realized with BV compared with LV in infertile men with left clinical and right subclinical varicoceles. Answer: The question of whether it is worthwhile to operate on a subclinical right varicocele in patients with a grade II-III varicocele in the left testicle has been explored in several studies, and the evidence suggests that there may be benefits to bilateral varicocelectomy in such cases. One study found that even a small, subclinical unrepaired varicocele continues to have a detrimental effect on bilateral testis function in a patient with grade II-III left varicocele. 
The study reported that patients who underwent bilateral varicocelectomy (subclinical right and clinical left varicocele) experienced significant seminal improvement, including increased sperm concentration and higher pregnancy rates, compared to those who underwent unilateral varicocelectomy (clinical left varicocele only) (PUBMED:16047585). Another study indicated that bilateral varicocelectomy is superior to unilateral varicocelectomy in infertile males with left clinical and right subclinical varicocele, showing greater improvements in sperm concentration, normal sperm morphology, progressive motility, and spontaneous pregnancy rate after surgery (PUBMED:29210007). A meta-analysis of randomized controlled trials also supported the superiority of bilateral varicocelectomy over unilateral varicocelectomy for infertile male patients with left clinical and right subclinical varicocele, although it noted that more well-conducted trials are needed (PUBMED:30024042). However, one study concluded that there was no significant benefit of bilateral varicocelectomy compared with unilateral varicocelectomy in infertile men with left clinical and right subclinical varicoceles, as no significant differences were found in postoperative sperm parameters or spontaneous pregnancy rates between the two groups (PUBMED:19371942). In summary, while there is some evidence to suggest that operating on a subclinical right varicocele in patients with a grade II-III varicocele in the left testicle can be beneficial, the decision should be individualized, and further research may be needed to fully establish the advantages of bilateral varicocelectomy in this patient population.
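For readers who want to check the arithmetic behind comparisons like the 42.5% versus 26.0% pregnancy rates quoted above, the Python sketch below runs a standard two-proportion chi-squared test. The group sizes are hypothetical placeholders chosen only to reproduce those percentages; they are not the actual cohort sizes of any cited study.

```python
# Minimal sketch of a two-proportion comparison, as used across the
# varicocelectomy studies above. Counts are illustrative assumptions.
from scipy.stats import chi2_contingency

# rows: bilateral vs unilateral group; columns: pregnancy yes / no
bilateral = [34, 46]    # 34/80  = 42.5% (hypothetical n = 80)
unilateral = [26, 74]   # 26/100 = 26.0% (hypothetical n = 100)

chi2, p, dof, expected = chi2_contingency([bilateral, unilateral])
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```

With these made-up counts the test returns a p-value of roughly 0.03, the kind of result the abstracts summarize as a statistically significant difference in pregnancy rate.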
Instruction: Can Emotional and Behavioral Dysregulation in Youth Be Decoded from Functional Neuroimaging? Abstracts: abstract_id: PUBMED:26731403 Can Emotional and Behavioral Dysregulation in Youth Be Decoded from Functional Neuroimaging? Introduction: High comorbidity among pediatric disorders characterized by behavioral and emotional dysregulation poses problems for diagnosis and treatment, and suggests that these disorders may be better conceptualized as dimensions of abnormal behaviors. Furthermore, identifying neuroimaging biomarkers related to dimensional measures of behavior may provide targets to guide individualized treatment. We aimed to use functional neuroimaging and pattern regression techniques to determine whether patterns of brain activity could accurately decode individual-level severity on a dimensional scale measuring behavioural and emotional dysregulation at two different time points. Methods: A sample of fifty-seven youth (mean age: 14.5 years; 32 males) was selected from a multi-site study of youth with parent-reported behavioral and emotional dysregulation. Participants performed a block-design reward paradigm during functional Magnetic Resonance Imaging (fMRI). Pattern regression analyses consisted of Relevance Vector Regression (RVR) and two cross-validation strategies implemented in the Pattern Recognition for Neuroimaging toolbox (PRoNTo). Medication was treated as a binary confounding variable. Decoded and actual clinical scores were compared using Pearson's correlation coefficient (r) and mean squared error (MSE) to evaluate the models. Permutation test was applied to estimate significance levels. Results: Relevance Vector Regression identified patterns of neural activity associated with symptoms of behavioral and emotional dysregulation at the initial study screen and close to the fMRI scanning session. The correlation and the mean squared error between actual and decoded symptoms were significant at the initial study screen and close to the fMRI scanning session. However, after controlling for potential medication effects, results remained significant only for decoding symptoms at the initial study screen. Neural regions with the highest contribution to the pattern regression model included cerebellum, sensory-motor and fronto-limbic areas. Conclusions: The combination of pattern regression models and neuroimaging can help to determine the severity of behavioral and emotional dysregulation in youth at different time points. abstract_id: PUBMED:24468022 Behavioral and emotional dysregulation trajectories marked by prefrontal-amygdala function in symptomatic youth. Background: Neuroimaging measures of behavioral and emotional dysregulation can yield biomarkers denoting developmental trajectories of psychiatric pathology in youth. We aimed to identify functional abnormalities in emotion regulation (ER) neural circuitry associated with different behavioral and emotional dysregulation trajectories using latent class growth analysis (LCGA) and neuroimaging. Method: A total of 61 youth (9-17 years) from the Longitudinal Assessment of Manic Symptoms study, and 24 healthy control youth, completed an emotional face n-back ER task during scanning. LCGA was performed on 12 biannual reports completed over 5 years of the Parent General Behavior Inventory 10-Item Mania Scale (PGBI-10M), a parental report of the child's difficulty regulating positive mood and energy. 
Results: There were two latent classes of PGBI-10M trajectories: a high and decreasing (HighD; n=22) course and a low and decreasing (LowD; n=39) course of behavioral and emotional dysregulation over the 12 time points. Task performance was >89% in all youth, but more accurate in healthy controls and LowD versus HighD (p<0.001). During ER, LowD had greater activity than HighD and healthy controls in the dorsolateral prefrontal cortex, a key ER region, and greater functional connectivity than HighD between the amygdala and ventrolateral prefrontal cortex (p's<0.001, corrected). Conclusions: Patterns of function in lateral prefrontal cortical-amygdala circuitry in youth denote the severity of the developmental trajectory of behavioral and emotional dysregulation over time, and may be biological targets to guide differential treatment and novel treatment development for different levels of behavioral and emotional dysregulation in youth. abstract_id: PUBMED:37444121 Emotional Regulation and Adolescent Concussion: Overview and Role of Neuroimaging. Emotional dysregulation symptoms following a concussion are associated with an increased risk for emotional dysregulation disorders (e.g., depression and anxiety), especially in adolescents. However, predicting the emergence or worsening of emotional dysregulation symptoms after concussion and the extent to which this predates the onset of subsequent psychiatric morbidity after injury remains challenging. Although advanced neuroimaging techniques, such as functional magnetic resonance imaging and diffusion magnetic resonance imaging, have been used to detect and monitor concussion-related brain abnormalities in research settings, their clinical utility remains limited. In this narrative review, we have performed a comprehensive search of the available literature regarding emotional regulation, adolescent concussion, and advanced neuroimaging techniques in electronic databases (PubMed, Scopus, and Google Scholar). We highlight clinical evidence showing the heightened susceptibility of adolescents to experiencing emotional dysregulation symptoms following a concussion. Furthermore, we describe and provide empirical support for widely used magnetic resonance imaging modalities (i.e., functional and diffusion imaging), which are utilized to detect abnormalities in circuits responsible for emotional regulation. Additionally, we assess how these abnormalities relate to the emotional dysregulation symptoms often reported by adolescents post-injury. Yet, it remains to be determined if a progression of concussion-related abnormalities exists, especially in brain regions that undergo significant developmental changes during adolescence. We conclude that neuroimaging techniques hold potential as clinically useful tools for predicting and, ultimately, monitoring the treatment response to emotional dysregulation in adolescents following a concussion. abstract_id: PUBMED:35551054 Implementation of a Brief Dialectical Behavioral Therapy Skills Group in High Schools for At-Risk Youth: Protocol for a Mixed Methods Study. Background: Adolescence is a developmental period marked by engagement in risk-taking behaviors, especially among impulsive or emotionally dysregulated youth. Thus, interventions that teach skills to reduce the risk of negative outcomes associated with emotional dysregulation are required.
Social and emotional learning (SEL) programs have been developed to address both adolescent emotional dysregulation and risk-taking behaviors; however, current programs have mostly been implemented among younger youth and are used as a tier 1 universal intervention rather than a targeted tier 2 intervention for youth identified with emotional regulation difficulties. Objective: This study aimed to address the need for SEL programming that can be delivered in schools, particularly for older youth who have difficulties with emotional or behavioral dysregulation, to reduce the risk of health-risk behaviors among this population. Methods: Here, we outline the implementation of an SEL intervention titled Going 4 Goals, a 9-session adaptation of the Dialectical Behavioral Therapy for Adolescents (DBT-A) program delivered to at-risk high school students in a school setting. The primary objectives of the study are to test whether participating in the skills group intervention produces significant increases in the core DBT-A skills of mindfulness, emotional regulation, distress tolerance, and interpersonal effectiveness, while also producing significant decreases in substance use and risky behaviors. These primary outcomes are based on changes in participant scores between baseline and after the intervention and follow-ups at 1, 3, and 6 months compared with a control group of youth participating in the school's health curriculum at the same time points. Qualitative interviews will also be conducted with intervention participants and school staff to examine acceptability and facilitators of and barriers to the intervention. Results: A total of 171 participants across 13 groups had been enrolled in the intervention, with data collection ending December 2021. Data analysis will begin in the spring of 2022, with expected results to be published in the spring of 2023. Conclusions: This paper describes the protocol of the 9-session school-based adaptation of the DBT-A intervention and discusses the strengths and limitations of the study and future directions. International Registered Report Identifier (IRRID): DERR1-10.2196/32490. abstract_id: PUBMED:25715064 White matter structure in youth with behavioral and emotional dysregulation disorders: a probabilistic tractographic study. Importance: Psychiatric disorders in youth characterized by behavioral and emotional dysregulation are often comorbid and difficult to distinguish. An alternative approach to conceptualizing these disorders is to move toward a diagnostic system based on underlying pathophysiologic processes that may cut across conventionally defined diagnoses. Neuroimaging techniques have potential for the identification of these processes. Objective: To determine whether diffusion imaging, a neuroimaging technique examining white matter (WM) structure, can identify neural correlates of emotional dysregulation in a sample of youth with different psychiatric disorders characterized by behavioral and emotional dysregulation. Design, Setting, And Participants: Using global probabilistic tractography, we examined relationships between WM structure in key tracts in emotional regulation circuitry (ie, cingulum, uncinate fasciculus, and forceps minor) and (1) broader diagnostic categories of behavioral and emotional dysregulation disorders (DDs) and (2) symptom dimensions cutting across conventional diagnoses in 120 youth with behavioral and/or emotional DDs, a referred sample of the Longitudinal Assessment of Manic Symptoms (LAMS) study.
Thirty age- and sex-matched typically developing youth (control participants) were included. Multivariate multiple regression models were used. The study was conducted from July 1, 2010, to February 28, 2014. Main Outcomes And Measures: Fractional anisotropy as well as axial and radial diffusivity were estimated and imported into a well-established statistical package. We hypothesized that (1) youth with emotional DDs and those with both behavioral and emotional DDs would show significantly lower fractional anisotropy compared with youth with behavioral DDs in these WM tracts and (2) there would be significant inverse relationships between dimensional measures of affective symptom severity and fractional anisotropy in these tracts across all participants. Results: Multivariate multiple regression analyses revealed decreased fractional anisotropy and decreased axial diffusivity within the uncinate fasciculus in youth with emotional DDs vs those with behavioral DDs, those with both DDs, and the controls (F(6,160) = 2.4; P = .032; all pairwise comparisons, P < .002). In the same model, greater severity of manic symptoms was positively associated with higher fractional anisotropy across all affected youth (F(3,85) = 2.8; P = .044). Conclusions And Relevance: These findings suggest that abnormal uncinate fasciculus and cingulum WM structure may underlie emotional, but not behavioral, dysregulation in pediatric psychiatric disorders and that a different neural mechanism may exist for comorbid emotional and behavioral DDs. abstract_id: PUBMED:33743945 Emotional Dysregulation: A Trauma-Informed Approach. Traumatic experiences, subsequent traumatic stress, and other trauma reactions are common among youth who experience emotional dysregulation. This article highlights key considerations for the delivery of care to emotionally dysregulated youth with histories of trauma. An initial, trauma-informed assessment is critical to identify those youth with emotional dysregulation best served by evidence-based, trauma-focused treatments. Trauma-informed approaches to severely emotionally dysregulated youth, including youth in in-patient and residential settings, can improve emotional and behavioral outbursts while maintaining the safety of the milieu. Finally, incorporating awareness of trauma is considered when prescribing psychopharmacologic interventions in severely emotionally dysregulated youth. abstract_id: PUBMED:29675767 High Risk for Severe Emotional Dysregulation in Psychiatrically Referred Youth with Autism Spectrum Disorder: A Controlled Study. To assess prevalence and severity of emotional dysregulation (ED) in psychiatrically referred youth with autism spectrum disorder (ASD). ASD youth (N = 123) were compared to youth with attention-deficit/hyperactivity disorder (ADHD) and controls. The majority of psychiatrically referred youth with ASD had a positive Child Behavior Checklist-ED (CBCL-ED) profile that was significantly higher than in youth with ADHD (82% vs. 53%; p < 0.001). The severe emotional dysregulation (SED) profile was significantly greater in ASD youth than in ADHD youth (44% vs. 15%; p < 0.001). In the presence of an SED profile, ASD youth suffered from greater severity of autism, associated psychopathology, and psychosocial dysfunction. There was a greater than expected prevalence of SED in psychiatrically referred youth with ASD, which identifies distinct clinical correlates associated with severe morbidity and dysfunction.
abstract_id: PUBMED:31004740 Neuroimaging Evidence for Right Orbitofrontal Cortex Differences in Adolescents With Emotional and Behavioral Dysregulation. Objective: To characterize the structural and functional neurobiology of a large group of adolescents exhibiting a behaviorally and emotionally dysregulated phenotype. Method: Adolescents aged 14 years from the IMAGEN study were investigated. Latent class analysis (LCA) on the Strengths and Difficulties Questionnaire (SDQ) was used to identify a class of individuals with elevated behavioral and emotional difficulties ("dysregulated"; n = 233) who were compared to a matched sample from a low symptom class (controls, n = 233). Whole-brain gray matter volume (GMV) images were compared using a general linear model with 10,000 random label permutations. Regional GMV findings were then probed for functional differences from three functional magnetic resonance imaging (fMRI) tasks. Significant brain features then informed mediation path models linking the likelihood of psychiatric disorders (DSM-IV) with dysregulation. Results: Whole-brain differences were found in the right orbitofrontal cortex (R.OFC; p < .05; k = 48), with dysregulated individuals exhibiting lower GMV. The dysregulated group also exhibited higher activity in this region during successful inhibitory control (F(1,429) = 7.53, p < .05). Path analyses indicated significant direct effects between the likelihood of psychopathologies and dysregulation. Modeling the R.OFC as a mediator returned modest partial effects, suggesting that the path linking the likelihood of an anxiety or conduct disorder diagnosis to dysregulation is partially explained by this anatomical feature. Conclusion: A large sample of dysregulated adolescents exhibited lower GMV in the R.OFC relative to controls. Dysregulated individuals also exhibited higher regional activations when exercising inhibitory control at performance levels comparable to those of controls. These findings suggest a neurobiological marker of dysregulation and highlight the role of the R.OFC in impaired emotional and behavioral control. abstract_id: PUBMED:24838017 Exploring the link between emotional and behavioral dysregulation: a test of the emotional cascade model. The emotional cascade model (Selby, Anestis, & Joiner, 2008) posits that the link between emotional and behavioral dysregulation may be through emotional cascades, which are repetitive cycles of rumination and negative affect that result in an intensification of emotional distress. Dysregulated behaviors, such as non-suicidal self-injury, are used to reduce aversive emotions and distract the person from ruminative processes. The aim of the present study was to investigate the emotional cascade model in a non-Western sample of Turkish university students. Accordingly, a structural equation model was tested, and the results demonstrated that the emotional cascades were indeed associated with dysregulated behaviors, even when the effect of current symptoms of depression and anxiety on behavioral dysregulation was statistically controlled. Furthermore, thought suppression had a stronger relationship with all symptom patterns as compared to rumination, which may point to a cultural difference. Possible implications of the findings are discussed. abstract_id: PUBMED:23960247 The psychotherapeutic implications of functional neuroimaging studies of anxiety disorders.
Objective: To explore and discuss the psychotherapeutic implications of functional neuroimaging findings in anxiety disorders. Conclusions: Therapeutic implications of the neural system dysfunctions found in functional neuroimaging studies of anxiety disorders include: the need to address emotional dysregulation, which can probably be found in all anxiety disorders except obsessive-compulsive disorder; the probable salience of enhancing implicit emotion regulation in improving overall treatment outcome; the importance of addressing insecure attachment in establishing and maintaining the therapeutic relationship; the need to enhance the reward-approach system especially in chronic treatment-resistant patients; the need to address traumatic memory in anxiety disorders other than PTSD among those with a traumatic history. There is likely to be a need for therapeutic techniques and strategies of different psychotherapy schools to be utilized to adequately address the various neural dysfunctions in patients with anxiety disorders. Answer: Yes, emotional and behavioral dysregulation in youth can be decoded from functional neuroimaging. Studies have used functional Magnetic Resonance Imaging (fMRI) and pattern regression techniques to determine whether patterns of brain activity could accurately decode individual-level severity on a dimensional scale measuring behavioral and emotional dysregulation at different time points (PUBMED:26731403). Furthermore, neuroimaging measures of behavioral and emotional dysregulation can yield biomarkers denoting developmental trajectories of psychiatric pathology in youth, with functional abnormalities in emotion regulation neural circuitry associated with different behavioral and emotional dysregulation trajectories (PUBMED:24468022). Additionally, advanced neuroimaging techniques, such as fMRI and diffusion magnetic resonance imaging, have been used to detect and monitor concussion-related brain abnormalities in research settings, which are associated with an increased risk for emotional dysregulation disorders, especially in adolescents (PUBMED:37444121). White matter structure in youth with behavioral and emotional dysregulation disorders has also been studied using diffusion imaging, which can identify neural correlates of emotional dysregulation in a sample of youth with different psychiatric disorders characterized by behavioral and emotional dysregulation (PUBMED:25715064). Moreover, neuroimaging evidence has pointed to right orbitofrontal cortex differences in adolescents with emotional and behavioral dysregulation, suggesting a neurobiological marker of dysregulation and highlighting the role of the orbitofrontal cortex in impaired emotional and behavioral control (PUBMED:31004740). In summary, functional neuroimaging has been instrumental in decoding and understanding the neural correlates and potential biomarkers of emotional and behavioral dysregulation in youth, which can guide individualized treatment and inform the development of new therapeutic interventions.
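The decoding approach described in PUBMED:26731403 can be illustrated with a short scikit-learn pipeline. The original work used Relevance Vector Regression inside the MATLAB PRoNTo toolbox, which has no direct scikit-learn equivalent, so kernel ridge regression stands in for it here; the data are synthetic and the array shapes (57 subjects, one vectorized activation map per subject) are assumptions made for the sketch.

```python
# Hedged sketch of decoding a dimensional symptom score from brain
# activation patterns: cross-validated pattern regression plus a
# permutation test, loosely mirroring the pipeline in PUBMED:26731403.
import numpy as np
from scipy.stats import pearsonr
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import (KFold, cross_val_predict,
                                     permutation_test_score)

rng = np.random.default_rng(0)
X = rng.normal(size=(57, 5000))   # 57 subjects x voxel-wise features
y = rng.normal(size=57)           # dimensional dysregulation scores

model = KernelRidge(kernel="linear", alpha=1.0)
cv = KFold(n_splits=5, shuffle=True, random_state=0)

# Decoded scores from cross-validation, compared with actual scores
y_hat = cross_val_predict(model, X, y, cv=cv)
r, _ = pearsonr(y, y_hat)
mse = np.mean((y - y_hat) ** 2)

# Permutation test to estimate the significance of the decoding
score, perm_scores, p_value = permutation_test_score(
    model, X, y, cv=cv, n_permutations=100,
    scoring="neg_mean_squared_error")
print(f"r = {r:.2f}, MSE = {mse:.2f}, permutation p = {p_value:.3f}")
```

On random data such as this, r should hover near zero and the permutation p-value should be non-significant; a real effect of the kind reported in the abstract would appear as a positive correlation that survives the permutation test.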
Instruction: Can a trained non-physician provider perform transrectal ultrasound-guided prostatic biopsies as effectively as an experienced urologist? Abstracts: abstract_id: PUBMED:22726849 Can a trained non-physician provider perform transrectal ultrasound-guided prostatic biopsies as effectively as an experienced urologist? Unlabelled: WHAT'S KNOWN ON THE SUBJECT? AND WHAT DOES THE STUDY ADD?: PSA testing has resulted in a large number of patients being referred to urologists for investigation of potential prostate cancer. Despite limited evidence, non-physician providers now perform a number of routine urological procedures such as transrectal ultrasound-guided prostatic biopsies (TRUSP) in a bid to help relieve this increasing workload. In the largest series to date, we provide evidence that an adequately trained non-physician provider is able to perform TRUSP as effectively as an experienced urologist after an initial learning curve. Objective: To evaluate differences in cancer detection rates between a trained non-physician provider (NPP) and an experienced urologist performing transrectal ultrasound-guided prostatic biopsies (TRUSP) at a single UK institution. Patients And Methods: We retrospectively analysed a prospectively accrued database of patients (n = 440) referred for investigation of an abnormal digital rectal examination and/or a raised age-specific prostate-specific antigen (PSA) value undergoing first-time outpatient prostatic biopsies who were sequentially allocated to either an NPP-led or a physician-led TRUSP clinic. Differences in overall and risk-stratified prostate cancer detection rates were evaluated according to TRUSP operator. Continuous variables were analysed using the Mann-Whitney U test, whereas categorical variables were analysed using Pearson's chi-squared test. A multivariate binary logistic regression model was fitted for predictors of a positive biopsy. Results: In all, 57.3% (126/220) of patients who underwent physician-led TRUSP were diagnosed with prostate cancer compared with 52.7% (116/220) in the NPP-led clinic (P = 0.338). Sub-group analysis revealed a lower cancer detection rate in men presenting with a low PSA level (<9.9 ng/mL) during the first 50 independent TRUSP procedures performed by the NPP (P = 0.014). This initial difference was lost with increasing case volume, suggesting the presence of a learning curve. Multivariate logistic regression analysis revealed age (odds ratio (OR) 1.054, 95% confidence interval (95% CI) 1.025-1.084, P ≤ 0.001), presenting PSA level (OR 1.05, 95% CI 1.02-1.081, P = 0.001), prostatic volume (OR 0.969, 95% CI 0.958-0.981, P ≤ 0.001) and clinical stage (OR 1.538, 95% CI 1.046-2.261, P = 0.029) to be predictors of a positive prostatic biopsy outcome. The choice of TRUSP operator was not predictive of a positive prostatic biopsy (OR 0.729, 95% CI 0.464-1.146, P = 0.171). Conclusion: An adequately trained NPP is able to perform TRUSP as effectively as an experienced urologist after an initial learning curve of 50 cases. abstract_id: PUBMED:9894260 Diagnostic value of 10 systematized and ultrasound-guided transrectal prostatic biopsies. Objective: To study the improvement of prostatic cancer detection provided by ten strictly systematized transrectal prostatic biopsies. Material And Methods: This prospective study was conducted in 162 patients submitted to a series of 10 ultrasound-guided transrectal prostatic biopsies due to the presence of elevated PSA and/or an abnormality on digital rectal examination.
Five biopsies were taken from different sites in each lobe: 2 biopsies were inserted between the 3 biopsies usually performed in the standard protocol, while maintaining the angle of entry of the needle recommended in this protocol. Results: The complication rate was 1.85% with the 10-biopsy technique. Prostatic cancer was detected in 40.1% of the 162 patients. The percentage diagnostic improvement provided by the 10-biopsy protocol in the overall patient population was +3.1% compared to the standard protocol. The highest percentage diagnostic improvement was observed in the group of patients with PSA ≤ 10 ng/ml (+4.9%) and in the group of patients with an ultrasound prostatic volume ≤ 40 cc (+4%). Conclusion: The most marked diagnostic improvement appears to be related more to sampling of prostatic zones presenting a statistically increased risk of cancer (peripheral zone) than to the increased number of prostatic biopsies performed. abstract_id: PUBMED:25013831 Correlation of transrectal ultrasonographic findings with histopathology in prostatic cancer. Objective: To estimate the incidence of hyperechoic, hypoechoic and isoechoic prostatic cancer in TRUS (transrectal ultrasound)-guided prostatic biopsy specimens. Materials And Methods: Four hundred and ninety-three patients with raised serum prostate-specific antigen (PSA) and abnormal DRE findings were subjected to TRUS-guided prostate biopsy. Lateralized sextant biopsy plus prostatic cores from suspicious areas were obtained. Results: Out of 493 patients who were enrolled in the study, 65 (13.18%) patients showed hyperechoic lesions on TRUS and 211 (42.79%) patients had hypoechoic lesions on TRUS. Conclusion: Our study revealed that hyperechoic lesions on transrectal ultrasonography carry a greater chance of prostatic cancer than reported in the previous literature, so we suggest taking additional biopsies of hyperechoic lesions, and perhaps this should be part of the standard protocol in patients with suspected prostate cancer. abstract_id: PUBMED:29368429 Accuracy of standardized 12-core template biopsies versus non-standardized biopsies for detection of Epstein Grade 5 prostate cancer regarding the histology of the prostatectomy specimen. Objective: To evaluate the effectiveness of EAU Guideline compliant transrectal ultrasound-guided 12-core prostate biopsies for detection of highly aggressive Epstein Grade 5 (Gleason Score 9-10) prostate cancer. Methods: Two hundred ninety-nine patients, treated by radical prostatectomy for prostate cancer, have been prospectively recorded in a database and were evaluated for this study. Pre-operatively, all patients received transrectal ultrasound-guided biopsies according to inhomogeneous templates chosen by the referring urologist. We evaluated the outcomes according to a stratified group-analysis: Group 1 received less than 12 biopsies, Group 2 received more than 12 biopsies, and Group 3 received exactly 12 biopsies, according to the EAU Guidelines template. After surgical removal of the prostate, 12 EAU Guideline-templated biopsies were performed in all prostatectomy specimens, directly after the surgery. Pre-operative and post-operative Epstein Grade 5 biopsy detection rates were thereafter correlated with these prostatectomy specimens.
Results: In prostatectomy specimens, 12 patients (4.0%) were Epstein Grade 1, 31 patients (10.5%) were Epstein Grade 2, 190 patients (63.5%) were Epstein Grade 3, 27 patients (9%) were Epstein Grade 4, and 39 patients (13%) were Epstein Grade 5. The detection rate of Epstein Grade 5 compared to the radical prostatectomy specimen was: Group 1: 23.0% pre-operatively and 61.5% post-operatively, Group 2: 33.3% pre-operatively and 58.3% post-operatively; and Group 3: 57.1% pre-operatively and 64.2% post-operatively. Conclusion: Detection rates of highly aggressive Epstein Grade 5 prostate cancer vary considerably according to the biopsy technique. EAU Guideline compliant 12-core template biopsies increase the detection rates of Epstein Grade 5 prostate cancer. abstract_id: PUBMED:1705293 Correlation of ultrasound guided and digitally directed transrectal biopsies of palpable prostatic abnormalities. The authors evaluated 51 patients with palpable prostatic abnormalities detected during digital rectal examination. These findings consisted of a nodule or an area of induration. Each palpable abnormality was confined to 1 prostatic lobe and there was no suggestion of extracapsular extension of neoplasm or systemic metastatic disease. All patients underwent 7.0 MHz sagittal ultrasound guided transrectal biopsy followed by digitally directed transrectal biopsy. Biopsies were obtained only from the area of interest. The procedure was performed in the outpatient clinic without use of sedation or anesthesia. Digitally directed biopsies were positive for adenocarcinoma in 9 lesions. Ultrasound guided biopsies detected adenocarcinoma in 23 lesions, including all those detected by the blind digitally directed technique. This study demonstrates greater diagnostic accuracy using 7.0 MHz ultrasound guided techniques, and its routine use is warranted in the evaluation of palpable prostatic abnormalities. abstract_id: PUBMED:37179346 The significance of transrectal ultrasound and urologist dually guided pelvic floor muscle exercise in improving urinary continence after radical prostatectomy. Background: To determine whether transrectal ultrasound and urologist dually guided pelvic floor muscle exercise is associated with immediate, early and long-term urinary continence after radical prostatectomy. Materials And Methods: Data from 114 patients with localized prostate cancer (PC) who underwent radical prostatectomy (RP) at Henan Cancer Hospital from November 2018 to April 2021 were included in the retrospective study. Of the 114 patients, 50 patients in the observation group underwent transrectal ultrasound and urologist dually guided PFME, and 64 patients in the control group underwent verbally guided PFME. Contractile function of the external urinary sphincter in the observation group was evaluated. The immediate, early and long-term urinary continence rates were assessed in both groups, and the factors affecting urinary continence were analyzed. Results: The urinary continence rate at 2 weeks and 1, 3, 6 and 12 months in the observation group after RP was significantly higher than that in the control group (52.0% vs. 29.7%, 70.0% vs. 39.1%, 82.0% vs. 57.8%, 88.0% vs. 70.3%, 98.0% vs. 84.4%; p < 0.05). The contractile function of the external urinary sphincter was obviously correlated with urinary continence at multiple visits after RP, except for the 12-month visit.
Transrectal ultrasound and urologist dually guided PFME was verified to be an independent positive factor for urinary continence at 2 weeks and 1, 3, 6 and 12 months using logistic regression analysis. However, transurethral resection of the prostate (TURP) was a negative factor for postoperative urinary continence at different times. Conclusions: Transrectal ultrasound and urologist dually guided PFME had a significant role in improving immediate, early and long-term urinary continence after RP and acted as an independent prognostic factor. abstract_id: PUBMED:37592459 Ultrasound Guided, Freehand Transperineal Prostate Biopsy: An Alternative to the Transrectal Approach. Introduction: There is growing interest in the use of transperineal prostate biopsy due to the advantages of decreased infection risk and improved cancer detection rates. However, brachytherapy stepper units and templates may increase costs and operative time for the practicing urologist. We present the safety, feasibility and early outcomes of a single urologist's experience with ultrasound guided freehand transperineal prostate biopsy as an alternative to transrectal ultrasound guided biopsy. Methods: A retrospective review of all prospectively performed ultrasound guided freehand transperineal prostate biopsies between January 1, 2012 and April 30, 2014 was performed. Primary outcome measurements were safety and feasibility. Results: A total of 274 ultrasound guided freehand transperineal prostate biopsies were performed in 244 patients. Operative and total operating room use times were 7.9 and 17.5 minutes, respectively, with an average of 14.4 cores obtained during each procedure. The overall cancer detection rates for all procedures, those in biopsy naïve patients and those performed for active surveillance were 62.8%, 56.4% and 89%, respectively. New diagnoses of prostate cancer occurred in 41.2% of patients with 10% positive after a previous negative transrectal ultrasound guided biopsy. Complications (Clavien grade I or greater) including systemic infection, urinary retention and hematuria or pain requiring physician or hospital intervention did not occur. Conclusions: The use of ultrasound guided freehand transperineal prostate biopsy for the suspicion or surveillance of prostate cancer is feasible and safe. The results were encouraging with respect to the primary outcome measurements. Ultrasound guided freehand transperineal prostate biopsy with the patient under local anesthesia is currently under investigation. Large, prospective, randomized, multiple operator studies to evaluate the comparative effectiveness of freehand transperineal prostate biopsy and transrectal ultrasound guided biopsy techniques are recommended. abstract_id: PUBMED:16800139 The role of transrectal ultrasound in prostatic cryotherapy and brachytherapy. Objectives: Transrectal ultrasound is the method that gives a direct image of the prostate, its limits, structural and morphologic anomalies, and anatomical relations. Therefore, prostate volume is easily determined, being the first step for the application of certain therapeutic procedures. Prostatic cryotherapy and brachytherapy have been developed over the last years as minimally invasive options for the treatment of prostate cancer. Transrectal ultrasound of the prostate has allowed the application of these technologies in the daily practice, guaranteeing high efficacy and safety indexes. Cryosurgery is the controlled freezing of tissues.
Prostatic transrectal ultrasound is the only method able to show the real-time evolution of prostatic cryoablation, allowing the urologist to control the evolution of the ice ball and to reach the targeted anatomical structures guaranteeing the oncological objectives, and diminishing complications and sequelae. Brachytherapy, as a local intraprostatic radiotherapy, needs exact volume and dose calculations before the implant of the radioactive source within the gland. With transrectal ultrasound of the prostate, ultrasound-tomographic cuts are made for prostatic volume calculation and planimetry. Once dosimetry is completed, real-time transrectal ultrasound control is necessary to perform the implant of the needles loaded with the seeds. Today, prostate cryotherapy and brachytherapy would be inconceivable without transrectal ultrasound. abstract_id: PUBMED:2690834 Transrectal prostatic ultrasound: lesions diagnosed. From January 1, 1988, to June 30, 1989, 486 cases of transrectal prostatic ultrasound were done at San Pablo Hospital. These patients had been referred by the urologist and attending physician due to abnormal digital prostatic examination findings such as a palpable nodule, an enlarged prostate or prostatic induration. The lesions that have been diagnosed by transrectal prostatic ultrasound include cancer, prostatitis, benign prostatic hypertrophy, cyst, abscess, hematomas and calcifications. From the total of 486 cases studied, 72 patients showed low-density lesions in the peripheral zone. Thirty-seven of these 72 cases (51.4%) proved to be malignant. Transrectal prostatic sonography can be easily performed in 10 to 15 minutes. The prostate lesions mentioned above can easily be diagnosed using a short focus 7 MHz transrectal transducer. Prostatitis and early carcinoma present a similar appearance on ultrasound, which can be differentiated by transrectal needle biopsy of the prostate; this can be sonographically guided with ease using the same equipment in an outpatient setting. abstract_id: PUBMED:28635568 Additional value of magnetic resonance-targeted biopsies to standard transrectal ultrasound-guided biopsies for detection of clinically significant prostate cancer. Objective: The aim of this study was to evaluate the additional value of magnetic resonance imaging-targeted biopsy (MRI-TB) to standard transrectal ultrasound-guided biopsy (SB) for detection of clinically significant prostate cancer (PCa). An additional aim was to compare the biopsy results to MRI evaluation using a Likert scale. Materials And Methods: Patients (n = 53) with localized PCa newly diagnosed by clinical routine SB were prospectively included. The majority of the patients were scheduled for curative therapy before enrollment. The patients underwent multiparametric MRI (mpMRI) at 3 T using an endorectal coil followed by two MRI-TBs, using ultrasound with cognitive fusion. All included patients underwent MRI-TB, even those who had low to very low suspicion of significant PCa on mpMRI. The detection rate of significant cancer on SB versus SB + MRI-TB was compared in the 53 included patients and with whole-mounted histopathology as reference in 34 cases. Comparison of the biopsy results to MRI evaluation and interreader agreement calculation of five-point Likert score evaluation were performed. Results: In total, 32 significant (Gleason ≥7) PCa were detected by SB, while SB + MRI-TB detected an additional five significant PCa. MRI-TB alone detected 20 and missed 17 significant PCa.
Ten of the significant PCa cases missed by MRI-TB had a Likert score of 3 or lower. Interreader agreement using the Likert scale was high, with a kappa value of 0.77 (95% confidence interval 0.63-0.92, p < 0.0001). Conclusion: Detection of significant PCa increased by adding MRI-TB to SB. This may not be of enough clinical value to justify the use of targeted biopsies in this patient group. Answer: Yes, a trained non-physician provider (NPP) can perform transrectal ultrasound-guided prostatic biopsies (TRUSP) as effectively as an experienced urologist after an initial learning curve. A study evaluating the cancer detection rates between a trained NPP and an experienced urologist found that there was no significant difference in overall prostate cancer detection rates between the two groups. Specifically, 57.3% of patients who underwent physician-led TRUSP were diagnosed with prostate cancer compared with 52.7% in the NPP-led clinic (P = 0.338). However, there was a lower cancer detection rate in men presenting with a low PSA level during the first 50 independent TRUSP procedures performed by the NPP, which suggests the presence of a learning curve. This initial difference in detection rates was lost with increasing case volume, indicating that the NPP's performance improved with experience. The study concluded that an adequately trained NPP is able to perform TRUSP as effectively as an experienced urologist after an initial learning curve of 50 cases (PUBMED:22726849).
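The multivariate model reported in PUBMED:22726849 (age, PSA, prostate volume and clinical stage predicting a positive biopsy, with operator type non-significant) can be sketched as a logistic regression. The code below uses statsmodels on synthetic data; the column names, effect sizes and sample generation are assumptions made for illustration, not the study's actual variables.

```python
# Hedged sketch of a multivariate binary logistic regression for
# positive-biopsy prediction, loosely following PUBMED:22726849.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 440  # matches the cohort size reported in the abstract
df = pd.DataFrame({
    "age": rng.normal(65, 8, n),
    "psa": rng.lognormal(2.0, 0.6, n),
    "volume": rng.normal(45, 15, n),
    "stage": rng.integers(1, 5, n),          # ordinal clinical stage
    "npp_operator": rng.integers(0, 2, n),   # 1 = NPP-led biopsy
})
# Synthetic outcome following the reported direction of effects; the
# operator term has no built-in effect, mirroring the null finding.
lin = 0.05 * df.age + 0.05 * df.psa - 0.03 * df.volume + 0.4 * df.stage - 4
df["positive"] = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

fit = smf.logit("positive ~ age + psa + volume + stage + npp_operator",
                data=df).fit()
print(np.exp(fit.params))  # odds ratios per unit increase
```

Exponentiating the fitted coefficients gives odds ratios directly comparable in form to those quoted in the abstract (e.g., OR 1.054 per year of age).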
Instruction: Do mental health problems increase sickness absence due to other diseases? Abstracts: abstract_id: PUBMED:11583462 Do mental health problems increase sickness absence due to other diseases? Background: The incidence of mental health problems is rising. Large sex differences are found for mental health problems, which is also reflected in sick-leave. The aim of this study was to analyse the association of mental health problems with sickness absence in general. Methods: Employed women (n = 1407) in four occupational groups, namely metal workers, enrolled nurses, medical secretaries and nurses, were included. Sick-leave data were collected through social insurance and employers' registers. Five indicators reflecting different aspects of mental health problems were analysed in relation to sickness absence. Results: Women with mental health problems, irrespective of indicator used, had higher levels of sickness absence than women without mental health problems. The association was found for frequency, incidence, length and duration of sickness absence, and it was also found with respect to less severe forms of mental health problems. For example, women with worries had a frequency of 3.5 sick-leave spells per year, while women without worries had 2.5 spells. Furthermore, women with self-reported mental illness (SF-36) had a 1.93 times increase in sick-leave length over 1 year. Conclusions: We conclude that mental health problems need to be recognised in relation to sickness absence, rehabilitation and prevention in all diagnoses. abstract_id: PUBMED:33615899 Increase in sickness absence due to mental disorders in Finland: trends by gender, age and diagnostic group in 2005-2019. Aims: Mental disorders are among the key public health challenges and cause a significant share of sickness absence. The aim of this study was to examine gender- and age-specific trends in sickness absence in Finland among non-retired persons aged 16-67 years during 2005-2019 by main diagnostic groups. Special focus was put on the development of sickness absence due to mental and behavioural disorders. Methods: Data on compensated sickness allowance days were retrieved from the database of the Social Insurance Institution of Finland, and data on the non-retired population aged 16-67 years from the database of Statistics Finland for years 2005-2019. Yearly age-standardised sickness absence rates (yearly sickness absence days per person in the population at risk) according to diagnostic group were calculated for women and men in age groups 16-34, 35-49 and 50-67 years. Results: A steep increase in sickness absence due to mental disorders was observed between 2016 and 2019 in all age groups among both genders, but the increase was more prominent among women. The age group 16-34 years also showed a longer-term gradual increase. In all examined gender and age groups, the increase was mainly a consequence of an increase in sickness absence due to depression and anxiety disorders. Conclusions: The increase in sickness absence due to mental disorders is an early sign of threats to the work ability and productivity of the working-age population. Several factors may simultaneously drive the development. The specific reasons for the recent trend need to be studied. abstract_id: PUBMED:38289869 Explanations for sickness absence due to common mental disorders: A narrative study of young health and social care workers. Over recent decades, sickness absence due to common mental disorders has increased among young workers.
The phenomenon is mostly understood on the basis of epidemiological research, and knowledge regarding the viewpoints of young workers themselves is lacking. Our study explored the explanations for mental health-related sickness absence in the narrative accounts of young workers in high-risk health and social care occupations. Semi-structured narrative interviews were conducted with 23 Finnish young workers (aged 21-34), with self-reported sickness absence related to common mental disorders over the previous year. Our analysis identified three narrative explanations for the onset of mental health problems leading to sick leave: work as the sole cause, work as an additional cause and work as a trigger. These findings indicate that mental health-related sickness absences form a complex phenomenon related to various life and work-related circumstances. More comprehensive preventive measures are needed in the health and social care sector to help tackle mental health problems among young workers. abstract_id: PUBMED:32467313 Differences between women and men in the relationship between psychosocial stressors at work and work absence due to mental health problem. Objectives: Women have a higher incidence of mental health problems compared with men. Psychosocial stressors at work are associated with mental health problems. However, few prospective studies have examined the association between these stressors and objectively measured outcomes of mental health. Moreover, evidence regarding potential differences between women and men in this association is scarce and inconsistent. This study investigates whether psychosocial stressors at work are associated with the 7.5-year incidence of medically certified work absence due to a mental health problem, separately for women and men. Methods: Data from a prospective cohort of white-collar workers in Canada (n=7138; 47.3% women) were used. We performed Cox regression models to examine the prospective association between self-reported psychosocial stressors at work (job strain model) at baseline and the 7.5-year HR of medically certified work absence of ≥5 days due to a mental health problem. Results: During follow-up, 11.9% of participants had a certified work absence, with a twofold higher incidence among women. Women (HR 1.40, 95% CI 1.01 to 1.93) and men (HR 1.41, 95% CI 0.97 to 2.05) exposed to high strain (high demands and low control) had a higher incidence of work absence compared with those unexposed. Among women only, those exposed to an active job situation (high demands and high control) also had a higher risk (HR 1.82, 95% CI 1.29 to 2.56). Conclusions: Prevention efforts aimed at reducing psychosocial stressors at work could help lower the risk of work absence for both women and men. However, important differences between women and men need to be further studied in order to orient these efforts. abstract_id: PUBMED:29343233 Alcohol use and sickness absence due to all causes and mental- or musculoskeletal disorders: a nationally representative study. Background: Previous studies have not distinguished between different alcohol-use histories, which could have contributed to the current inconsistent evidence regarding the relationship between alcohol use and subsequent sickness absence. We thus examined alcohol use and subsequent diagnosis-specific sickness absence in groups with different levels of alcohol use, as well as in lifelong abstainers, former drinkers, and people with clinical alcohol use disorders. 
Methods: The data of the population-based Health 2000 Survey (BRIF8901) of 3666 Finns aged 30-55 were linked with national registers on medically certified sickness absences lasting for > 10 working days (long-term) for all causes (2000-2010) and for mental or musculoskeletal disorders (2004-2010), as well as with registers on pensions and death (2000-2010). Alcohol use was assessed by questionnaire. Chronic somatic diseases were evaluated at baseline in a clinical examination, and common mental and alcohol use disorders using the Composite International Diagnostic Interview (CIDI). Cox regression analyses were conducted with censoring for death and retirement from work. Results: During an average 10-year follow-up, 56.0% of the participants had at least one long-term sickness absence period. Compared with light drinkers, those having an alcohol use disorder had an increased risk of all-cause sickness absence (HR = 1.27; 95% CI = 1.04-1.54) and sickness absence due to mental disorders (HR = 2.16; 95% CI = 1.39-3.35), when somatic and mental disorders as well as demographic, lifestyle-related and occupational factors at baseline were accounted for. Lifelong abstainers did not differ from light drinkers. High-volume drinking (HR = 1.52; 95% CI = 1.03-2.25) and former drinking (HR = 1.57; 95% CI = 1.15-2.15) were also associated with long-term sickness absence due to mental disorders. Alcohol use was not predictive of sickness absence due to musculoskeletal disorders. Conclusions: These results highlight the need to distinguish between former drinking and lifelong abstinence, as only former drinking was associated with sickness absence. Alcohol use disorder and high-volume drinking were strongly predictive of sickness absence due to mental disorders. Identifying people with excessive alcohol use, e.g. in occupational health care, and mapping and supporting their mental health may help in preventing sickness absences. abstract_id: PUBMED:29546193 Parent-reported Mental Health Problems and Mental Health Services Use in South Australian School-aged Children. Background: Monitoring and reporting childhood mental health problems and mental health services utilization over time provide important information to identify mental health related issues and to guide early intervention. This paper aims to describe the recent prevalence of parent-reported mental health problems among South Australian (SA) children; to identify characteristics associated with mental health problems; and to describe mental health services utilization and its related characteristics among this population. Methods: Parent-reported mental health problems were assessed against the first item of the Strengths and Difficulties Questionnaire. School-aged children were randomly sampled monthly and data were collected using a surveillance system between 2005 and 2015. Associations between mental health problems and various factors were analysed using univariable analysis and multivariable logistic regression modelling. Results: Prevalence of parent-reported mental health problems among children was 9.1% and 9.3% for children aged 5 to 11 years and children aged 12 to 15 years, respectively. No change in prevalence was observed during the past decade. Mental health problems were associated with male sex, long-term illness or pain, negative school experiences, not living with biological parents, and living in a rental dwelling. Less than half (48.7%) of the children with mental health problems received professional help.
An increasing trend was found in mental health services utilisation among children aged 5 to 15 years. Utilization of mental health services was associated with male sex, older age, long-term illness or pain, and feeling unhappy at school. Conclusion: This study reports the prevalence of parent-reported mental health problems and mental health services utilisation among SA school-aged children. Identified characteristics associated with mental health problems and mental health services utilisation provide useful information for the planning of catered population initiatives. abstract_id: PUBMED:26106512 Relationship between Comorbid Health Problems and Musculoskeletal Disorders Resulting in Musculoskeletal Complaints and Musculoskeletal Sickness Absence among Employees in Korea. Background: To investigate the relationship between musculoskeletal disorders and comorbid health problems, including depression/anxiety disorder, insomnia/sleep disorder, fatigue, and injury by accident, and to determine whether certain physical and psychological factors reduce comorbid health problems. Methods: In total, 29,711 employees were selected from respondents of the Third Korean Working Conditions Survey and categorized into two groups: Musculoskeletal Complaints or Musculoskeletal Sickness Absence. Four self-reported health indicators (overall fatigue, depression/anxiety, insomnia/sleep disorder, and injury by accident) were selected as outcomes, based on their high prevalence in Korea. We used multiple logistic regression analysis to determine the relationship between comorbid health problems, musculoskeletal complaints, and sickness absence. Results: The prevalence of musculoskeletal complaints and musculoskeletal sickness absence due to muscular pain was 32.26% and 0.59%, respectively. Compared to the reference group, depression/anxiety disorder and overall fatigue were 5.2-6.1 times more prevalent in the Musculoskeletal Complaints Group and insomnia/sleep disorder and injury by accident were 7.6-11.0 times more prevalent in the Sickness Absence Group. When adjusted for individual and work-related physical factors, the prevalence of all four comorbid health problems was slightly decreased in both groups. Conclusion: Increases in overall fatigue and depression/anxiety disorder were observed in the Musculoskeletal Complaints Group, while increases in insomnia/sleep disorder and injury by accident were observed in the Sickness Absence Group. For management of musculoskeletal complaints and sickness absence in the workplace, differences in health problems between employees with musculoskeletal complaints and those with sickness absence as well as the physical and psychological risk factors should be considered. abstract_id: PUBMED:24174636 Effort-reward imbalance and medically certified absence for mental health problems: a prospective study of white-collar workers. Objective: Little is known about the effects of psychosocial work factors on objectively assessed mental health problems leading to medically certified absence. Only one study has evaluated the prospective effects of effort-reward imbalance (ERI) at work with regard to this outcome. The present study aimed to evaluate the effects of ERI on the incidence of medically certified absence for mental health problems. Methods: The study included 2086 white-collar workers (63.3% women) employed in public organisations in Quebec City. Participants were followed over a 9-year period.
Medical absences from work were collected from employers' files and psychosocial factors were measured using the ERI questionnaire. Cox regression models were used to estimate the incidence of certified sickness absence due to mental health problems that lasted 5 workdays or more, while controlling for confounders. Results: Workers exposed to ERI had a higher risk of a first spell of medically certified absence for mental health problems (HR=1.38, 95% CI 1.08 to 1.76) compared with unexposed workers. Low reward was significantly associated with a high risk among men (HR=2.80, 95% CI 1.34 to 5.89) but not among women (HR=1.24, 95% CI 0.90 to 1.73). Effort at work had no effect on certified absence. All these effects were adjusted for potential confounders. Conclusions: ERI and low reward at work were prospectively associated with medically certified absence for mental health problems. These effects seem to differ by gender. Primary prevention that is aimed at reducing these stressors should be considered to help reduce the incidence of such severe mental health problems. abstract_id: PUBMED:26647948 Selected aspects of absence at work and work-related health problems in Polish enterprises. Workers' working conditions, work-related health problems and sickness absence are interdependent factors. Both workers' health problems and their absence are adverse events which generate significant costs for both Poland's Social Insurance Institution (ZUS) and employers. Despite the related burdens, it is difficult to assess the number of workers who experience work-related health problems, to indicate the share of those workers who have been unfit for work owing to such disorders and to indicate the types of workers' disorders which are caused by factors the workers are exposed to in the working environment. This article presents the findings of surveys carried out in selected production and service-providing companies, assessing the scale and nature of work-related health problems and their links with workers' sickness absence. abstract_id: PUBMED:26188192 Time to recurrence of mental health-related absence from work. Background: It is unclear when occupational health providers should re-evaluate workers after mental health-related absences from work. Aims: To investigate the time to recurrence of mental health-related absences, stratified by International Classification of Diseases-Tenth Revision (ICD-10) diagnostic categories. Methods: A 10-year observational study of workers employed at a steel mill. Sickness absence data were retrieved from an occupational health register. Mental health-related absences were defined as absence due to emotional disturbance or mental and behavioural disorders. The first mental health-related absence since baseline was called the index episode. Recurrences were defined as mental health-related absences occurring >28 days after recovery from the index episode. The frequency of recurrent mental health-related absence was assessed by the recurrence density (RD) per 1000 person-years. The time to recurrent mental health-related absence was investigated by Kaplan-Meier survival analysis. Results: Of 15,461 workers, 391 had recurrent mental health-related absences. RD was 30.5, 34.3, 29.9 and 37.7 per 1000 person-years after index episodes due to emotional disturbance, mood disorders, neurotic disorders and other psychiatric disorders, respectively. RDs did not differ across ICD-10 diagnostic categories.
The median time to recurrent mental health-related absence was 15.2 months [95% confidence interval (CI) 12.6-17.7] and was shortest for mood disorders (5.2, 95% CI 1.4-8.9 months) and specific psychiatric disorders (5.3, 95% CI 1.0-13.1 months). Conclusions: Based on this observational study, we suggest that occupational and primary health care providers consider reviewing the mental health status of workers 6 months after recovery from mental health-related absence. Answer: Yes, mental health problems do increase sickness absence due to other diseases. The study conducted on employed women across four occupational groups found that women with mental health problems had higher levels of sickness absence than those without mental health problems, regardless of the indicator used. This association was observed for frequency, incidence, length, and duration of sickness absence, and was also true for less severe forms of mental health problems (PUBMED:11583462). Furthermore, an increase in sickness absence due to mental disorders was observed in Finland between 2016 and 2019 across all age groups and genders, with a more prominent increase among women. The increase was mainly due to sickness absence resulting from depression and anxiety disorders (PUBMED:33615899). Narrative accounts from young workers in high-risk health and social care occupations also revealed that mental health-related sickness absences are a complex phenomenon related to various life and work-related circumstances (PUBMED:38289869). Additionally, psychosocial stressors at work are associated with mental health problems and have been linked to a higher incidence of work absence due to mental health problems in both women and men (PUBMED:32467313). Alcohol use disorders and high-volume drinking were found to be predictive of sickness absence due to mental disorders, indicating that mental health problems can lead to increased sickness absence (PUBMED:29343233). Overall, these findings suggest that mental health problems are indeed associated with increased sickness absence due to a range of diseases, highlighting the need for recognition, rehabilitation, and prevention strategies in relation to mental health in the workplace.
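A brief methodological aside on the survival statistics quoted above: the recurrence density and the Kaplan-Meier median time to recurrence reported in PUBMED:26188192 follow from standard arithmetic. The sketch below illustrates both calculations; the follow-up records are invented toy data and the helper functions are our illustration, not the study's analysis code.

```python
# Sketch: recurrence density (RD) and Kaplan-Meier median, in the spirit of
# PUBMED:26188192. Each record is (months of follow-up after recovery,
# 1 if a recurrent absence was observed, else 0). Toy data, invented here.
records = [(5.2, 1), (8.9, 1), (12.6, 1), (15.2, 1), (24.0, 0), (36.0, 0)]

def recurrence_density(records):
    """Recurrences per 1000 person-years of follow-up."""
    events = sum(event for _, event in records)
    person_years = sum(months for months, _ in records) / 12.0
    return 1000.0 * events / person_years

def km_median(records):
    """Kaplan-Meier median time to recurrence, in months (assumes no tied times)."""
    at_risk = len(records)
    survival = 1.0
    for months, event in sorted(records):
        if event:
            survival *= (at_risk - 1) / at_risk
        at_risk -= 1
        if survival <= 0.5:
            return months  # first time the KM curve reaches 50% or below
    return None  # median not reached within follow-up

print(f"RD: {recurrence_density(records):.1f} per 1000 person-years")
print(f"KM median: {km_median(records)} months")
```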
Instruction: Oral myiasis: does an indication for surgical treatment still exist? Abstracts: abstract_id: PUBMED:22862986 Oral myiasis: does an indication for surgical treatment still exist? Two case reports. Objective: Oral myiasis is a rare infection for which a treatment protocol has not yet been established. This article presents 2 cases treated with a combination of topical application of sulfuric ether and surgery. The reasons for the use of surgical therapy, as well as the possible advantages and disadvantages of drug-based treatments, are discussed. Case Report: Two cases of oral myiasis are described, the first observed in a 9-year-old child with hypotonic cerebral palsy and the second in a 52-year-old alcohol-dependent adult, both showing infection in the gingival sulcus. Both cases were successfully treated in a process that involved topical application of sulfuric ether, mechanical removal of larvae, and surgical debridement. Conclusions: Oral myiasis can be treated effectively with surgery after topical application of sulfuric ether. The use of drugs may suggest a therapeutic alternative, but still requires further study and experience to be implemented, especially in individuals with neurological disorders. abstract_id: PUBMED:32492640 Treatment of facial myiasis in an elderly patient with oral squamous cell carcinoma: Case report. Myiasis is caused by the infestation of fly larvae in human tissues, with immunodeficiency, poor hygiene, and malignant neoplasias among its chronic predisposing conditions. Objective: To describe a clinical case of myiasis associated with oral squamous cell carcinoma (OSCC) in an elderly patient. Case Presentation: A 60-year-old black male patient with OSCC, a smoker and alcohol-dependent, refused initial cancer treatment and sought hospital care with an extensive facial lesion and approximately 150 larvae in the extraoral region. The treatment consisted of the administration of 6 mg of ivermectin combined with surgical removal of the larvae. Subsequently, palliative chemotherapy began. Discussion: Myiasis can sometimes be associated with OSCC and it commonly occurs in individuals who live in unhealthy locations and present poor hygiene and low immunity. In addition to medication, the surgical removal of the larvae is important in the treatment of myiasis. Conclusion: Patient adherence to cancer treatment is essential to avoid the negative evolution of the tumor. The social component linked to the development of myiasis makes it an important public health topic, as it is related directly to the lack of primary care. Surgical and drug treatments are required to cure myiasis. abstract_id: PUBMED:23417950 Cordylobia anthropophaga: a rare surgical emergency in the UK. A 38-year-old healthy Caucasian man presented to the surgical team with a puncture wound on the right side of his scalp. He had been on a month's holiday to Uganda and South Africa. He recalled a painful swelling progressively increasing over 2 weeks. This was associated with sharp shooting pains and swelling on the right side of the face. He sought medical opinion while on holiday, and broad-spectrum oral antibiotics were advised. The condition did not settle with the treatment, so he consulted his general practitioner (GP) on return. The GP referred him to the surgical team for management of a scalp wound. On examination, a live pulsating larva was seen within a painful 1.5×2 cm puncture wound on the right side of the scalp. The larva was removed.
Microbiology identified the larva as Cordylobia anthropophaga. No further treatment was advised. To our knowledge, the patient remains well to date. abstract_id: PUBMED:11050502 Myiasis in a pregnant woman and an effective, sterile method of surgical extraction. Background: Cutaneous myiasis caused by Dermatobia hominis involves the infestation of tissue with dipterous fly larvae and is common in many tropical and subtropical areas. We describe a patient in her sixth month of pregnancy who returned from Belize with myiasis detected in the right popliteal fossa. Multiple surgical techniques have been described in the past outlining various ways to extract the botfly larva. No single standardized technique for surgical extraction of larvae has been adopted. Objective: To describe a highly effective, sterile method of extraction used in a pregnant patient with botfly infestation. Methods: A combination of injection with plain 1% lidocaine, sterile occlusion with polymyxin B sulfate ointment and a cruciform incision was used to extract the larva. Results: The botfly larva was easily and completely extracted without remnants of the larval body being left in the skin. Conclusion: The sterile technique we describe allows for quick and easy extraction of the larva without risk of secondary infection or need for antibiotics. This method is especially appropriate for pregnant patients or those with medical conditions precluding a completely competent immune response to potential secondary infection. abstract_id: PUBMED:25324659 Dermatobia hominis: Small Migrants Hidden in Your Skin. Myiasis is a parasitic infestation of vertebrate animal tissues due to maggots of two-winged flies (Diptera) that feed on living or necrotic tissue. Dermatobia hominis occurs widely in tropical parts of Latin America; it is the most common cause of furuncular myiasis in this region. The continuous increase in international travel has increased the possibility of observing this pathology outside endemic countries, especially in travelers returning from the tropics. If clinicians are aware of the possibility of the disease and its treatment options, this dermatosis can be easily managed. However, diagnostic delay is very common because the disease is often misdiagnosed as a bacterial skin infection. Here, we report 2 cases of furuncular myiasis caused by D. hominis in travelers returning to Italy from Latin America. Surgical and noninvasive treatment approaches are also described. abstract_id: PUBMED:27438206 Excision of furuncular myiasis larvae using a punch: a simple, practical and aesthetic method. Myiasis refers to Diptera larvae infesting vertebrate animals. There are two forms of the disease: primary and secondary. In primary myiasis, fly larvae invade and develop in healthy tissue; in secondary myiasis, flies lay their eggs in skin ulcerations, and the larvae develop in tissue necrosis products. Furuncular myiasis is a type of primary myiasis. Treatment for it consists of techniques such as the production of localized hypoxia to force the emergence of the larvae, and mechanical or surgical removal of the maggots. These techniques, however, are painful and often unsuccessful. We propose a new technique for extraction of myiasis larvae, which might facilitate the surgical procedure and constitute a virtually painless and aesthetic option for the patient. abstract_id: PUBMED:12785708 Rectal prolapse in a free-ranging mountain gorilla (Gorilla beringei beringei): clinical presentation and surgical management. 
A juvenile female mountain gorilla (Gorilla beringei beringei) of the Mubare tourist group in Bwindi Impenetrable National Park, Uganda, developed a severe, complete rectal prolapse that did not spontaneously resolve. Eight months prior, a juvenile female mountain gorilla of the Mubare group developed a mild, complete rectal prolapse that resolved spontaneously within 24 hr. Field guides reported that spontaneously resolving prolapses had been seen previously in two other juveniles, one of which was from the Mubare group. The tissue became increasingly necrotic and maggot-infested over the course of 1 wk. Surgical intervention involved amputation of the affected rectal tissues and suturing the viable portion to the anal sphincter muscle with simple interrupted absorbable sutures. The surgery was performed in the field in accordance with Uganda Wildlife Authority policies. Antibiotics and anthelmintics were administered systemically, and the gorilla returned to the group. The gorilla appeared to recover fully after 3 wk. Histology of the resected rectal tissue confirmed intense inflammation and necrosis with myiasis but did not reveal an underlying etiology. abstract_id: PUBMED:21530939 Myiasis in uterine prolapse, successful treatment. We describe the case of an elderly female with total genital prolapse and superinfected uterine myiasis. The successful treatment included mechanical extraction of the larvae, antibiotics, and ivermectin (the latter reported here for the first time), along with surgical correction of the prolapse. abstract_id: PUBMED:16634929 Ophthalmomyiasis interna anterior in a cat: surgical resolution. A 4-year-old Domestic short-haired cat was presented for severe anterior uveitis in the right eye associated with a Cuterebra spp. larva in the anterior chamber. This report describes successful surgical removal of the parasite with preservation of vision. abstract_id: PUBMED:19122066 Surgical removal of a Cuterebra larva from the cervical trachea of a cat. A case of cervical intratracheal cuterebriasis is reported. The cat was presented with intermittent dyspnea of 3 days' duration. The larva was located during tracheoscopy but was not retrievable. Surgical exploration of the cervical region was performed, and the larva was removed. All clinical signs resolved upon recovery from surgery. The larva was identified as a second instar Cuterebra sp. Answer: Yes, there is still an indication for surgical treatment in cases of oral myiasis. The abstracts provided demonstrate that surgical intervention, often in combination with other treatments such as topical application of substances or administration of drugs, is a viable and sometimes necessary approach to treating myiasis in various anatomical locations, including the oral cavity. For instance, in the case reports described in abstract PUBMED:22862986, oral myiasis was successfully treated with a combination of topical application of sulfuric ether and surgical debridement. The authors discuss the potential advantages and disadvantages of drug-based treatments, suggesting that while drugs may offer a therapeutic alternative, their use still requires further study, particularly in individuals with neurological disorders. Similarly, in the case of facial myiasis associated with oral squamous cell carcinoma (OSCC) in an elderly patient, the treatment involved the administration of ivermectin along with surgical removal of the larvae (PUBMED:32492640).
This indicates that surgical removal of larvae is an important aspect of myiasis treatment, especially when associated with other conditions such as OSCC. Other abstracts, while not specifically addressing oral myiasis, also highlight the role of surgical intervention in the management of myiasis. For example, the extraction of furuncular myiasis larvae using a punch is described as a simple, practical, and aesthetic method (PUBMED:27438206), and the surgical removal of a Cuterebra larva from the cervical trachea of a cat is reported (PUBMED:19122066). In conclusion, the evidence from these case reports and clinical experiences suggests that surgical treatment remains a relevant option for managing oral myiasis, often in conjunction with other therapeutic modalities.
Instruction: Spinal anaesthesia for Caesarean section: does injection speed have an effect on the incidence of hypotension? Abstracts: abstract_id: PUBMED:28494915 Injection speed of spinal anaesthesia for Caesarean delivery in Asian women and the incidence of hypotension: A randomised controlled trial. Study Objective: The purpose of this investigation was to determine if a slower speed of spinal anaesthesia injection would reduce the incidence of hypotension. Study Design: Randomised controlled trial. Setting: Tertiary level hospital in Malaysia. Patients: 77 patients undergoing elective Caesarean delivery. Intervention: Differing speeds of spinal injection. Measurements: Systolic blood pressure was assessed every minute for the first 10 min and incidence of hypotension (reduction in blood pressure of >30% of baseline) was recorded. The use of vasopressor and occurrence of nausea/vomiting were also recorded. Main Results: 36 patients in the SLOW group and 41 patients in the FAST group were recruited into the study. There was no significant difference in blood pressure drop of >30% (p=0.497) between the two groups. There was no difference in the amount of vasopressor used or the incidence of nausea/vomiting between the two groups. Conclusion: In our study population, there was no difference in the incidence of hypotension and nausea/vomiting when the spinal injection time was prolonged from 15 s to 60 s. Trial Registration: ClinicalTrials.gov NCT02275897. Registered on 15 October 2014. abstract_id: PUBMED:16226864 Spinal anaesthesia for Caesarean section: does injection speed have an effect on the incidence of hypotension? Objective: To study the effect of injection rate on spinal blockade and the haemodynamics of spinal bupivacaine for Caesarean section. Study Design: Prospective and randomized. Patients And Methods: Sixty ASA I-II patients scheduled for elective Caesarean section were randomized to receive either fast (20 seconds, group R, n=30) or slow (60 seconds, group L, n=30) spinal injection of 10 mg of hyperbaric bupivacaine 0.5% associated with 25 μg of fentanyl and 100 μg of morphine. Sensory and motor blockade and haemodynamic parameters were recorded. Results: Profiles of sensory and motor block were similar in both groups. Variations of arterial blood pressure and total dose of ephedrine were not different between the R and L groups. However, the number of patients with systolic blood pressure lower than 100 mmHg was significantly lower in the L group than in the R group (p=0.04). Incidences of adverse effects were similar in both groups. Conclusion: The results of the present study suggest that a slow rate of injection induces a lower incidence of hypotension with spinal bupivacaine for Caesarean section. abstract_id: PUBMED:28704246 The Effect of Glycopyrrolate on the Incidence of Hypotension and Vasopressor Requirement During Spinal Anesthesia for Cesarean Delivery: A Meta-analysis. Background: The objective of this meta-analysis was to determine the efficacy of glycopyrrolate at reducing spinal hypotension during cesarean delivery. Methods: A literature search was performed to identify randomized controlled trials investigating the effect of glycopyrrolate on spinal-induced hypotension during cesarean delivery. Primary outcomes were intraoperative hypotension and vasopressor requirement (phenylephrine equivalents). Secondary outcomes included heart rate (HR), nausea and vomiting, dry mouth, and Apgar scores.
Risk ratios (RRs) and mean differences (MDs) were calculated using random-effects modeling with 95% confidence intervals for primary outcomes and 99% confidence intervals for secondary outcomes. Results: Five randomized controlled trials met our inclusion criteria. A total of 311 patients were included: 153 received glycopyrrolate and 158 placebo. The incidence of spinal-induced hypotension was no different with prophylactic glycopyrrolate compared to control (RR, 0.93 [0.71-1.21]; P = .59), but the total phenylephrine dose required was significantly reduced with glycopyrrolate (MD, -62.64 μg [-107.61 to -17.66 μg]; P = .006). The maximal HR achieved in the glycopyrrolate group was significantly higher compared to controls (MD, 15.85 bpm [5.40-26.31]; P < .0001); however, the incidence of bradycardia was not statistically different. The incidence of intraoperative nausea and vomiting was not different between groups; however, glycopyrrolate increased the risk of dry mouth (RR, 5.15 [1.82-14.57]; P < .0001). Apgar scores at 1 and 5 minutes did not differ between groups. Conclusions: Prophylactic glycopyrrolate does not reduce the incidence of spinal-induced hypotension but results in a modest reduction in vasopressor requirements while increasing maternal HR. abstract_id: PUBMED:15321104 Effect of injection rate on hypotension associated with spinal anesthesia for cesarean section. Maternal hypotension is a common problem during cesarean section under spinal anesthesia. In a prospective observational study, we evaluated the influence of injection speed on maternal hypotension. Hyperbaric bupivacaine 10 mg, sufentanil 2 μg and morphine 200 μg (total volume 4 mL) were injected either quickly (<15 s) or slowly (120 s) in 50 women scheduled for elective cesarean section. Hypotension (systolic arterial pressure (SAP) <100 mmHg or <70% of baseline) was promptly treated with 5 mg ephedrine boluses. Slow injection significantly reduced the incidence of hypotension (68% in the 120 s group and 92% in the other, P = 0.03). In addition, onset of hypotension was delayed, had a shorter duration and required less ephedrine in the 120 s group (11.6 mg vs. 19.6 mg, P = 0.019). Anesthesia was satisfactory for all women. We conclude that a 2 mL/min injection rate may be a simple and effective way to reduce the incidence and severity of hypotension during cesarean section under spinal anesthesia. abstract_id: PUBMED:17718270 The incidence and risk factors of hypotension and bradycardia associated with spinal anesthesia. Objectives: Hypotension and bradycardia after conduction of spinal anesthesia are common and may lead to intraoperative cardiac arrest or death. The present study was carried out to investigate the incidence and risk factors of hypotension and/or bradycardia in patients receiving spinal anesthesia. Material And Method: The authors prospectively studied 1,220 patients to identify the incidence of hypotension (>30% decrease in systolic blood pressure) and bradycardia (heart rate < 60 beats/min) after spinal anesthesia. Historical, clinical and physiologic data were correlated with the incidences by univariate analysis. Logistic regression with a forward stepwise algorithm was performed to identify independent variables. A p value < 0.05 was considered significant. Results: The incidences of hypotension and bradycardia were 36.8% and 4.9%, respectively.
The risk factors for hypotension included increasing age (OR = 1.019; 95% CI 1.017-1.031), analgesia level ≥ T4 dermatome (OR = 2.068; 95% CI 1.486-2.879), body mass index ≥ 30 (OR = 1.534; 95% CI 1.120-2.100), cesarean section (OR = 1.723; 95% CI 1.244-2.386) and prehydration fluid < 500 mL (OR = 1.472; 95% CI 1.071-2.023). The risk factors for bradycardia were increasing age (OR = 1.042; 95% CI 1.023-1.061) and analgesic level ≥ T4 dermatome (OR = 2.246; 95% CI 1.101-4.584). Conclusion: The incidence of hypotension and bradycardia may increase with increasing age and an analgesic level ≥ T4 dermatome. Three other factors related to hypotension after spinal anesthesia were body mass index ≥ 30, cesarean section, and prehydration fluid of less than 500 mL. abstract_id: PUBMED:22220225 Hemodynamic effects of continuous intravenous injection and bolus plus continuous intravenous injection of oxytocin in cesarean section. Background: Oxytocin may cause adverse cardiovascular effects, including tachycardia and hypotension, whereas the optimal dose of oxytocin at elective cesarean section is unclear. To determine the lowest effective dose of oxytocin, we studied the hemodynamic effects of three doses during spinal anesthesia for elective single cesarean delivery. Methods: Sixty women received oxytocin by continuous (0.5 IU/min) or bolus-continuous (2 or 5 IU prior to 0.25 IU/min continuous intravenous injection) intravenous injection after clamping of the umbilical cord. We compared changes in heart rate (HR), mean arterial pressure (MAP) and estimated blood loss (EBL). Uterine tone (UT) was assessed by palpation on a linear analog scale (LAS) at 5, 10, 15, 20 and 25 minutes after the oxytocin injection. In addition, oxytocin-related side-effects such as nausea and vomiting were recorded. Results: Marked changes in hemodynamic variables such as HR and MAP occurred in the bolus-continuous groups but not in the continuous groups. Although we did not observe a difference in EBL between the groups, UT was significantly increased in the bolus-continuous groups when compared with the continuous groups. In addition, the changes in HR and MAP were smaller in the 2 IU bolus-continuous group than in the 5 IU group. Conclusions: Although bolus-continuous injection of oxytocin resulted in more hemodynamic changes than continuous injection, bolus-continuous injection had a greater effect on uterine contraction. Furthermore, the 2 IU bolus-continuous injection showed smaller hemodynamic changes than the 5 IU bolus-continuous injection. abstract_id: PUBMED:30622224 Sequential intrathecal injection of fentanyl and hyperbaric bupivacaine at different rates: does it make a difference? A randomized controlled trial. Background: Previous studies have shown that sequential intrathecal injection of fentanyl and hyperbaric bupivacaine for cesarean section (CS) anesthesia provides a superior anesthetic effect compared with use of bupivacaine alone, and prolongs postoperative analgesia. Herein, we investigated whether rapid intrathecal injection of fentanyl followed by slow injection of hyperbaric bupivacaine affects the duration of postoperative analgesia, the effectiveness of anesthesia, and hemodynamic status. Methods: Fifty-six parturients with American Society of Anesthesiologists physical status I or II, aged 18-40 years, and scheduled to undergo elective CS were randomly assigned to 2 groups of 28 patients each.
The normal sequential group received sequential intrathecal injections of fentanyl and hyperbaric bupivacaine at the same rate, each with a 5 ml syringe. The rapid sequential group received a rapid intrathecal injection of fentanyl with an insulin syringe, followed by a slow injection of hyperbaric bupivacaine with a 5 ml syringe. The onset of sensory block, the timing of the first rescue analgesia, the doses of rescue analgesics, the degree of postoperative pain, the onset and duration of motor block, the incidence and duration of hypotension, and spinal anesthesia-related complications were recorded. Results: While both approaches had comparable spinal anesthesia-related complications, incidence and duration of hypotension, and doses of ephedrine, the rapid sequential group exhibited a more rapid onset of sensory block, a higher sensory level, and more prolonged postoperative analgesia. Conclusions: Rapid sequential injection of fentanyl and hyperbaric bupivacaine produced superior anesthesia and more prolonged postoperative analgesia than sequential injections of both at the same rate. abstract_id: PUBMED:29952519 The Effect of Reverse Trendelenburg Position on Incidence of Hypotension after Spinal Anesthesia. Objective: To determine whether a 10-degree head-up tilt position during conduction of spinal anesthesia, continued throughout the operation, reduces the incidence of hypotension compared with the horizontal position in elective cesarean section parturients. Material And Method: In this randomized, double-blind, controlled trial, forty-four parturients were equally allocated to either a 10-degree head-up tilt or a horizontal position during conduction of spinal anesthesia, maintained throughout the operation. Blood pressure, heart rate, incidence of hypotension, ephedrine consumption, and anesthesia level were compared. Results: The incidence of hypotension and the percentage of parturients who required ephedrine were comparable in the control and study groups (72.73% vs. 45.45%, p = 0.066) but lower before cord clamping in the study group (68.18% vs. 36.36%, p = 0.03). The sensory block levels were identical, but more parturients in the study group required repositioning in order to reach the required anesthetic level. Conclusion: A ten-degree head-up tilt position during conduction and maintenance of spinal anesthesia for cesarean section reduced the incidence of hypotension and ephedrine consumption without any adverse effects in either the parturients or the newborns. abstract_id: PUBMED:25741842 Lower incidence of hypotension following spinal anesthesia with 6% hydroxyethyl starch preload compared to 9‰ saline solution in caesarean delivery. Background: Hypotension is a common complication following spinal anesthesia for cesarean delivery. Techniques to prevent hypotension include preloading intravenous fluid. Aim: To compare the effect of two preloading regimens, 6% hydroxyethyl starch (HES) and 9‰ saline solution, to prevent hypotension after spinal anesthesia in cesarean delivery. Methods: 105 patients undergoing intrathecal anesthesia for elective cesarean delivery were randomized and allocated to receive a preload of 500 ml HES 130/0.4 (HES Group) or a preload of 1500 ml 9‰ saline solution (CR group). Blood pressure and heart rate were recorded at baseline and after spinal anesthesia (every minute for the first 10 min, every 3 min for the next 10 min, and then every 5 min for the last 20 min).
The primary outcome was to compare the incidence of hypotension (defined as a 20% reduction in systolic arterial pressure from baseline) between the two preloading regimens. Vasopressor requirements (i.v. bolus of 6 mg ephedrine) were also compared. Results: The incidence of hypotension was 87% in the CR group and 69% in the HES group (p = 0.028). Ephedrine requirement, incidence of nausea and/or vomiting, and neonatal outcome did not significantly differ between the two groups. Conclusion: The incidence of hypotension was lower after preloading with 500 mL of HES 130/0.4 than after preloading with 1500 mL of 9‰ saline solution. abstract_id: PUBMED:1528096 Incidence of hypotension induced by spinal anesthesia with xylocaine for cesarean section and postpartum tubal ligation. The incidence of hypotension induced by spinal anesthesia in 154 ASA class I or II patients having Cesarean section and postpartum bilateral tubal ligation was studied. Xylocaine (2%) 60-65 mg was used for spinal anesthesia. Hypotension induced by spinal anesthesia developed in 41.0% of the patients in the Cesarean section group and in 13.5% of the tubal ligation group. We found no correlation between the time after delivery and the occurrence of hypotension induced by spinal anesthesia. In addition, 20.3% of the patients in the tubal ligation group required intravenous narcotics because of inadequate sensory blockade during skin incision, but only 9.0% in the Cesarean section group needed the same supplement, in their case due to visceral pain during intra-abdominal manipulation. We conclude that spinal anesthesia with 2% xylocaine is safe and effective in both Cesarean section and postpartum tubal ligation, although a significant difference in spinal hypotension existed. Answer: The effect of injection speed during spinal anesthesia for Cesarean section on the incidence of hypotension has been studied with varying results. Some studies suggest that a slower rate of injection may reduce the incidence of hypotension. For instance, a study found that the number of patients with systolic blood pressure lower than 100 mmHg was significantly lower in the group receiving a slow injection (60 seconds) compared to the fast injection (20 seconds) group (PUBMED:16226864). Another study concluded that a slow injection rate (2 mL/min) significantly reduced the incidence of hypotension, delayed its onset, shortened its duration, and required less ephedrine for treatment (PUBMED:15321104). However, another study found no significant difference in the incidence of hypotension when spinal injection time was prolonged from 15 seconds to 60 seconds (PUBMED:28494915). This suggests that the injection speed may not be a critical factor in all patient populations. Additional factors, such as the use of prophylactic medications like glycopyrrolate, have been studied for their effects on spinal-induced hypotension. Glycopyrrolate did not reduce the incidence of hypotension but resulted in a modest reduction in vasopressor requirements while increasing maternal heart rate (PUBMED:28704246). In summary, while some evidence indicates that a slower injection speed during spinal anesthesia for Cesarean section may reduce the incidence of hypotension, other studies have not found a significant difference. This suggests that the relationship between injection speed and hypotension may vary and could be influenced by other factors such as patient characteristics, the specific anesthetic protocol used, and the use of prophylactic medications.
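A numerical footnote to the incidence comparisons above: assuming an even 25/25 split of the 50 women in PUBMED:15321104 (which reproduces the reported 68% and 92% exactly as 17/25 and 23/25), the slow-versus-fast contrast can be expressed as a risk ratio with a log-scale 95% confidence interval. The sketch below is our illustration of that standard arithmetic, not the authors' analysis code.

```python
import math

def risk_ratio(events_a, n_a, events_b, n_b):
    """Risk ratio of group A vs. group B with an approximate 95% CI
    computed on the log scale (standard large-sample formula)."""
    rr = (events_a / n_a) / (events_b / n_b)
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypotension: slow (120 s) injection 17/25 vs. fast (<15 s) 23/25,
# back-calculated from the 68% vs. 92% reported in PUBMED:15321104.
rr, lo, hi = risk_ratio(17, 25, 23, 25)
print(f"RR = {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```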
Instruction: Do reward-processing deficits in schizophrenia-spectrum disorders promote cannabis use? Abstracts: abstract_id: PUBMED:24913137 Do reward-processing deficits in schizophrenia-spectrum disorders promote cannabis use? An investigation of physiological response to natural rewards and drug cues. Background: Dysfunctional reward processing is present in individuals with schizophrenia-spectrum disorders (SSD) and may confer vulnerability to addiction. Our objective was to identify a deficit in patients with SSD on response to rewarding stimuli and determine whether this deficit predicts cannabis use. Methods: We divided a group of patients with SSD and nonpsychotic controls into cannabis users and nonusers. Response to emotional and cannabis-associated visual stimuli was assessed using self-report, event-related potentials (using the late positive potential [LPP]), facial electromyography and skin-conductance response. Results: Our sample comprised 35 patients with SSD and 35 nonpsychotic controls. Compared with controls, the patients with SSD showed blunted LPP response to pleasant stimuli (p = 0.003). Across measures, cannabis-using controls showed greater response to pleasant stimuli than to cannabis stimuli whereas cannabis-using patients showed little bias toward pleasant stimuli. Reduced LPP response to pleasant stimuli was predictive of more frequent subsequent cannabis use (β = -0.24, p = 0.034). Limitations: It is not clear if the deficit associated with cannabis use is specific to rewarding stimuli or nonspecific to any kind of emotionally salient stimuli. Conclusion: The LPP captures a reward-processing deficit in patients with SSD and shows potential as a biomarker for identifying patients at risk of heavy cannabis use. abstract_id: PUBMED:24398065 Do motivation deficits in schizophrenia-spectrum disorders promote cannabis use? An investigation of behavioural response to natural rewards and drug cues. Deficits in incentive motivation are often present in both Schizophrenia Spectrum Disorders (SSD) and substance-use disorders. The current study aims to test whether the presence of such deficits confers vulnerability to cannabis use in individuals with SSD. SSD patients (n=35) and healthy controls (n=35) were each divided into a group with (n=20) and a group without (n=15) current cannabis use disorder. Subjects performed a behavioural task designed for schizophrenia patients in which they could seek exposure to pleasant and cannabis visual stimuli on the basis of internal representations of these stimuli. Intensity of cannabis use was assessed by self-report. SSD patients were significantly less likely than controls to exert effort to try to re-view pleasant stimuli but were not significantly less likely to work to avoid unpleasant stimuli. Lack of response to re-view pleasant stimuli significantly predicted higher subsequent cannabis self-administration in patients but not controls, after controlling for degree of prior exposure to cannabis. Deficits in incentive motivation may be an aspect of SSD which promotes cannabis use in this population. abstract_id: PUBMED:33644820 Cannabinoids, reward processing, and psychosis. Background: Evidence suggests that an overlap exists between the neurobiology of psychotic disorders and the effects of cannabinoids on neurocognitive and neurochemical substrates involved in reward processing. 
Aims: We investigate whether the psychotomimetic effects of delta-9-tetrahydrocannabinol (THC) and the antipsychotic potential of cannabidiol (CBD) are underpinned by their effects on the reward system and dopamine. Methods: This narrative review focuses on the overlap between altered dopamine signalling and reward processing induced by cannabinoids, pre-clinically and in humans. A systematic search was conducted of acute cannabinoid drug-challenge studies using neuroimaging in healthy subjects and those with psychosis. Results: There is evidence of increased striatal presynaptic dopamine synthesis and release in psychosis, as well as abnormal engagement of the striatum during reward processing. Although acute THC challenges have elicited a modest effect on striatal dopamine, studies of cannabis users generally indicate impaired presynaptic dopaminergic function. Functional MRI studies have identified that a single dose of THC may modulate regions involved in reward and salience processing such as the striatum, midbrain, insula, and anterior cingulate, with some effects correlating with the severity of THC-induced psychotic symptoms. CBD may modulate brain regions involved in reward/salience processing in an opposite direction to that of THC. Conclusions: There is evidence to suggest modulation of reward processing and its neural substrates by THC and CBD. Whether such effects underlie the psychotomimetic/antipsychotic effects of these cannabinoids remains unclear. Future research should address these unanswered questions to understand the relationship between endocannabinoid dysfunction, reward processing abnormalities, and psychosis. abstract_id: PUBMED:32508368 Cannabis Use and its Association with Psychological Disorders. Purpose Of Review: This is a comprehensive review of the association between cannabis use and psychological disorders. It reviews the latest and seminal evidence available and attempts to gauge the strength of this association. Recent Findings: Cannabis is a flowering plant with psychoactive properties, attributed to cannabinoids that naturally occur within the plant. These act through the CB1 and CB2 receptors to inhibit GABA and glutamate release, as well as through other forms of neuromodulation of the endocannabinoid system (eCBs), a system that is otherwise involved in different pathways, including reward, memory, learning, and pain. Recent societal changes have increased the use of both medical and recreational cannabis. Patients with mental illness are considered more vulnerable and are prone to reward-seeking behavior. Cannabis use disorder (CUD) has been shown to have an increased prevalence in individuals with mental illness, creating an explosive cocktail. Approximately 1 in 4 patients with schizophrenia are also diagnosed with CUD. Cannabis use is associated with 2-4 times the likelihood of developing psychosis in healthy individuals. It has also been associated with multiple poor prognostic factors in schizophrenia, as well as in patients with a history of psychosis who do not meet diagnostic criteria for schizophrenia. Cannabis has been linked with anxiety; THC has been shown to elicit anxiety; however, anxiety is also a trigger for cannabis use. However, a recent large meta-analysis did not find a convincing link between cannabis and anxiety.
This was reiterated in a recent epidemiological study that did not find such a correlation; however, it did identify a link between cannabis use and substance use disorder, alcohol use disorder, drug use disorder, and nicotine dependence. Similarly, contradictory data exist regarding the link between depression and cannabis use. Summary: Cannabis use is increasing with recent societal shifts; however, its interaction with mental health is less well understood. CUD is highly prevalent in individuals with mental health disorders, especially those with other substance abuse disorders. There is evidence to support that cannabis use may trigger and worsen psychosis and schizophrenia. The link with depression and anxiety is less clear and needs further investigation. Personality disorder is linked with substance use disorder and shares similar risk factors with CUD. abstract_id: PUBMED:36340643 Prevalence of cannabis use in people with psychosis in KwaZulu-Natal, South Africa. Background: There is a high prevalence of cannabis use in patients with schizophrenia spectrum and other psychotic disorders, with comorbid cannabis use in this population being associated with poorer long-term outcomes. Aim: To determine the prevalence of cannabis use in patients with schizophrenia spectrum and other psychotic disorders. Setting: The study was conducted at a psychiatric hospital in Durban, KwaZulu-Natal Province, South Africa. Methods: A review of clinical records of patients admitted to the hospital for the period June 2018 to June 2020 was conducted. Results: A total of 370 clinical records were reviewed, of which 48.9% reported current and 51.1% lifetime cannabis use. Being male was significantly associated with current and lifetime cannabis use (OR = 4.90, 95% CI 2.49-9.62 and OR = 6.27, 95% CI 3.28-11.95, respectively). Current alcohol use was also associated with current cannabis use (CCU) (OR = 3.06, 95% CI 1.78-5.28), and age 45 years and older was associated with lower odds of cannabis use (OR = 0.30, 95% CI 0.09-0.96). Forty-eight per cent of participants were admitted three or more times, and readmission was associated with cannabis use (p = 0.01). There was a lack of association between cannabis use, readmission and human immunodeficiency virus (HIV) status, after controlling for variables such as alcohol use and gender. Conclusion: Almost 50% of people admitted with schizophrenia spectrum and other psychotic disorders have comorbid current and lifetime cannabis use. There is a need for dual diagnosis units to address comorbid substance use in people with psychotic disorders, as it leads to poorer outcomes. Contribution: The study found that there is a high prevalence of cannabis use in people with psychosis. Therefore, it is imperative that we revise treatment programs in our psychiatric units, and there is an urgent need for dual diagnosis programs that address substance use in this group of patients. abstract_id: PUBMED:32362972 Cannabis Use in Inpatients With Schizophrenia Spectrum Disorders at a Community Hospital. Background: Cannabis is the second most used recreational drug in the United States and one of the most used substances in patients with schizophrenia spectrum disorder (SSD). Unfortunately, the increased use is likely to continue as more states legalize recreational use of cannabis. Although the association between cannabis and schizophrenia has been studied extensively, the understanding of the relationship is still evolving.
In this study, we sought to determine the prevalence of, and potential factors associated with, cannabis use (CU) among inpatients with SSD at a community teaching hospital. Methods: We performed a retrospective review of the electronic medical charts of patients discharged from the psychiatric unit of our hospital from July 1, 2017 through October 31, 2017. Patients were included in this study if: 1) they were ≥ 18 years old; 2) they had a discharge diagnosis of SSD; and 3) they had urine drug testing performed. Pertinent sociodemographic and clinical variables, including substance use status and hospital length of stay (LOS), were abstracted. Univariate frequencies and summary statistics were performed. Odds ratios (ORs) were determined by logistic regression in bivariate and multivariate analyses. Results: Three hundred sixty-five (52.2%) patients had a discharge diagnosis of SSD, and only 322 had a urine toxicology result for cannabinoids and were included in the analysis. Of the 322 patients, 41.5% (n = 133) screened positive for cannabinoids. Of the 133 patients, 78% were African American, 15% were Hispanic and 5% were White; 77% were male and the median age was 36 years. Bivariate analyses showed tobacco use (OR: 2.8, 95% confidence interval (CI): 1.7 - 4.6), alcohol use (OR: 3.4, 95% CI: 2.9 - 7.0), younger age (OR: 2.8, 95% CI: 1.8 - 4.5), male gender (OR: 2.9, 95% CI: 2.2 - 3.2), unemployment (OR: 3.91, 95% CI: 3.49 - 7.35), homelessness (OR: 3.18, 95% CI: 2.76 - 3.84) and LOS (OR: 3.46, 95% CI: 2.93 - 4.31) were significantly associated with CU. The results of the multivariate analysis were similar to those of the bivariate analysis. Conclusions: CU appears to be prevalent among patients with SSD. Clinicians and public health professionals are encouraged to understand the health implications of its use in patients with mental illness, especially against the backdrop of current marijuana laws. abstract_id: PUBMED:21938097 Cannabis and schizophrenia spectrum disorders: a review of clinical studies. Cannabis is the most widely used illicit substance in the world, and the number of users has increased by 10% over the last decade worldwide. Therefore, it is important to review the evidence on the psychoactive properties of cannabis and its possible association with schizophrenia spectrum disorders (SSD). We searched MEDLINE with the key words cannabis and schizophrenia. The search was limited to articles published in English over the last 10 years (1999-2009). Bibliographies of cited literature were also searched. Data sources included reviews published in core clinical journals, cohort studies, interventional studies, case-control studies, cross-sectional analyses and epidemiological data. Results are discussed under 2 topics. Firstly, evidence related to the biochemical functioning of cannabinoids and their relationship to the endocannabinoid system is discussed briefly. Secondly, the evidence from clinical studies on cannabis, psychosis proneness and SSD is discussed in detail. The discussion is structured to fit the evidence from the results section to 3 plausible hypotheses on cannabis use and SSD. The evidence for and against each hypothesis is discussed. Despite new evidence, the exact relationship between cannabis and SSD is unclear. There is no firm evidence that cannabis causes SSD. The evidence for the argument that schizophrenic patients are attracted to cannabis is also not strong.
The most plausible explanation is that cannabis use and psychosis proneness may have synergistic effects in a vulnerable minority. abstract_id: PUBMED:28415138 The roles of valuation and reward processing in cognitive function and psychiatric disorders. In neuroeconomics, valuation refers to the process of assigning values to states and actions on the basis of the animal's current representation of the environment, while reward processing corresponds to processing the feedback received from the environment to update the values of states and actions. In this article, we review the brain circuits associated with valuation and reward processing and argue that these are fundamental processes critical to many cognitive functions. Specifically, we focus on the role of valuation and reward processing in attention, memory, decision making, and learning. Next, the extant neuroimaging literature on a number of psychiatric disorders is reviewed (i.e., addiction, pathological gambling, schizophrenia, and mood disorders), and an argument is made that associated deficits in cognitive functions can be explained in terms of abnormal valuation and reward processing. The review concludes with the impact of this framework in clinical settings and prescriptions for future research, in particular with regard to the conversions of qualitatively different valuation systems into a system of common currency. abstract_id: PUBMED:33250788 Optimizing Behavioral Paradigms to Facilitate Development of New Treatments for Anhedonia and Reward Processing Deficits in Schizophrenia and Major Depressive Disorder: Study Protocol. Background: Behavioral tasks focusing on different subdomains of reward processing may provide more objective and quantifiable measures of anhedonia and impaired motivation compared with clinical scales. Typically, single tasks are used in relatively small studies to compare cases and controls in one indication, but they are rarely included in larger multisite trials. This is due to limited systematic standardization as well as the challenges of deployment in international studies and stringent adherence to the high regulatory requirements for data integrity. The Reward Task Optimization Consortium (RTOC) was formed to facilitate operational implementation of reward processing tasks, making them suitable for use in future large-scale, international, multisite drug development studies across multiple indications. The RTOC clinical study aims to conduct initial optimization of a set of tasks in patients with major depressive disorder (MDD) or schizophrenia (SZ). Methods: We will conduct a multicenter study across four EU countries. Participants (MDD = 37, SZ = 37, with ≤80 age- and gender-matched healthy volunteers) will attend a study visit comprising screening, self-report and clinically rated assessments of anhedonia and symptom severity, and three reward processing tasks; specifically, the Grip Strength Effort task, the Doors task, and the Reinforcement Learning Working Memory task. The Grip Strength Effort and Doors tasks include simultaneous electroencephalography/event-related potential recordings. Outcomes will be compared using a two-way group design of MDD and SZ with matched controls, respectively. Further analyses will include anhedonia assessment scores as covariates. Planned analyses will assess whether our findings replicate previously published data, and multisite deployment will be evaluated through assessments of quality and conduct. 
A subset of participants will complete a second visit, to assess test-retest reliability of the task battery. Discussion: This study will evaluate the operational deployment of three reward processing tasks to the regulatory standards required for use in drug development trials. We will explore the potential of these tasks to differentiate patients from controls and to provide a quantitative marker of anhedonia and/or impaired motivation, establishing their usefulness as endpoints in multisite clinical trials. This study should demonstrate where multifaceted reward deficits are similar or divergent across patient populations. Registration: ClinicalTrials.gov (NCT04024371). abstract_id: PUBMED:36201975 Aberrant reward processing to positive versus negative outcomes across psychotic disorders. Several studies of reward processing in schizophrenia have shown reduced sensitivity to positive, but not negative, outcomes although inconsistencies have been reported. In addition, few studies have investigated whether patients show a relative deficit to social versus nonsocial rewards, whether deficits occur across the spectrum of psychosis, or whether deficits relate to negative symptoms and functioning. This study examined probabilistic implicit learning via two visually distinctive slot machines for social and nonsocial rewards in 101 outpatients with diverse psychotic disorders and 48 community controls. The task consisted of two trial types: positive (optimal to choose a positive vs. neutral machine) and negative (optimal to choose a neutral vs. negative machine), with two reward conditions: social (faces) and nonsocial (money) reward conditions. A significant group X trial type interaction indicated that controls performed better on positive than negative trials, whereas patients showed the opposite pattern of better performance on negative than positive trials. In addition, both groups performed better for social than nonsocial stimuli, despite lower overall task performance in patients. Within patients, worse performance on negative trials showed significant, small-to-moderate correlations with motivation and pleasure-related negative symptoms and social functioning. The current findings suggest reward processing disturbances, particularly decreased sensitivity to positive outcomes, extend beyond schizophrenia to a broader spectrum of psychotic disorders and relate to important clinical outcomes. Answer: Yes, reward processing deficits in schizophrenia spectrum disorders (SSD) appear to promote cannabis use. Studies have found that individuals with SSD exhibit dysfunctional reward processing, which may confer vulnerability to addiction, including cannabis use (PUBMED:24913137). Specifically, patients with SSD show a blunted response to pleasant stimuli, which has been predictive of more frequent subsequent cannabis use (PUBMED:24913137). Additionally, SSD patients are less likely than controls to exert effort to re-view pleasant stimuli, and this lack of response significantly predicts higher subsequent cannabis self-administration in patients (PUBMED:24398065). The relationship between cannabinoids, reward processing, and psychosis is complex, with evidence suggesting that cannabinoids can modulate regions involved in reward and salience processing, potentially contributing to the psychotomimetic effects of delta-9-tetrahydrocannabinol (THC) and the antipsychotic potential of cannabidiol (CBD) (PUBMED:33644820). 
Cannabis use disorder (CUD) has a high prevalence in individuals with mental illness, and cannabis use is associated with an increased likelihood of developing psychosis in healthy individuals and poorer prognostic factors in schizophrenia (PUBMED:32508368). Furthermore, nearly half of the people admitted with schizophrenia spectrum and other psychotic disorders have comorbid current and lifetime cannabis use, which is associated with poorer outcomes (PUBMED:36340643). The prevalence of cannabis use among inpatients with SSD at a community hospital was found to be high, with several factors such as tobacco use, alcohol use, younger age, male gender, unemployment, homelessness, and length of hospital stay being significantly associated with cannabis use (PUBMED:32362972). In summary, reward processing deficits in SSD are linked to an increased propensity for cannabis use, which can exacerbate the symptoms and outcomes of psychotic disorders. The evidence suggests that addressing reward processing deficits and comorbid substance use is crucial for improving treatment programs for individuals with SSD (PUBMED:24913137; PUBMED:24398065; PUBMED:33644820; PUBMED:32508368; PUBMED:36340643; PUBMED:32362972).
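The odds ratios cited in this answer (for example, OR = 4.90 for male sex and current cannabis use in PUBMED:36340643) come from standard contingency-table and logistic-regression analyses. As a rough sketch of the bivariate case, the helper below computes an odds ratio with a Woolf (log-scale) 95% confidence interval from a 2×2 table; the cell counts are hypothetical placeholders, not data from the cited studies.

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table with a Woolf (log-scale) 95% CI.
    Layout: a = exposed with outcome, b = exposed without,
            c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical counts: male sex (exposure) vs. current cannabis use (outcome).
or_, lo, hi = odds_ratio(a=120, b=110, c=45, d=95)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```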
Instruction: Cost-Value Analysis and the SAVE: A Work in Progress, But an Option for Localised Decision Making? Abstracts: abstract_id: PUBMED:26100285 Cost-Value Analysis and the SAVE: A Work in Progress, But an Option for Localised Decision Making? Background: Cost-value analysis aims to address the limitations of the quality-adjusted life-year (QALY) by incorporating the strength of public concerns for fairness in the allocation of scarce health care resources. To date, the measurement of value has focused on equity weights to reflect societal preferences for the allocation of QALY gains. Another approach is to use a non-QALY-based measure of value, such as an outcome 'equivalent to saving the life of a young person' (a SAVE). Objective: This paper assesses the feasibility and validity of using the SAVE as a measure of value for the economic evaluation of health care technologies. Methods: A web-based person trade-off (PTO) survey was designed and implemented to estimate equivalent SAVEs for outcome events associated with the progression and treatment of early-stage breast cancer. The estimated equivalent SAVEs were applied to the outputs of an existing decision analytic model for early breast cancer. Results: The web-based PTO survey was undertaken by 1094 respondents. Validation tests showed that 68% of eligible responses revealed consistent ordering of responses and 32% displayed ordinal transitivity, while 37% of the respondents who showed consistency and ordinal transitivity approached cardinal transitivity. Using consistent and ordinally transitive responses, the mean incremental cost per SAVE gained was £3.72 million. Conclusion: Further research is required to improve the validity of the SAVE, which may include a simpler web-based survey format or a face-to-face format to facilitate more informed responses. A validated method for estimating equivalent SAVEs is unlikely to replace the QALY as the globally preferred measure of outcome, but the SAVE may provide a useful alternative for localized decision makers with relatively small, constrained budgets, for example, in programme budgeting and marginal analysis. abstract_id: PUBMED:23986737 Why we should talk about option generation in decision-making research. Most empirical studies on decision-making start from a set of given options for action. However, in everyday life there is usually no one asking you to choose between A, B, and C. Recently, the question of how people come up with options has been receiving growing attention. However, so far there has been neither a systematic attempt to define the construct of "option" nor an attempt to show why decision-making research really needs this construct. This paper aims to fill that void by developing definitions of "option" and "option generation" that can be used as a basis for decision-making research in a wide variety of decision-making settings, while clarifying how these notions relate to familiar psychological constructs. We conclude our analysis by arguing that there are indeed reasons to believe that option generation is an important and distinct aspect of human decision-making. abstract_id: PUBMED:10178653 The role of cost-consequence analysis in healthcare decision-making.
Most commonly cost-effectiveness ratios (CERs) are used to indicate value; however, researchers have recently shown that CER estimates are rarely used by decision-makers in making formulary decisions. In this article, a cost-consequence approach to estimating the value for money of a new treatment for a specific disease is described. Using a cost-consequence approach, the impact of the new treatment on lifetime resource use and costs (including specific healthcare service use and costs, and productivity losses) and health outcomes (including disease symptoms, life expectancy and quality of life) for an individual or group of individuals is estimated and presented in a tabular format. The cost-consequence format is more likely to be approachable, readily understandable and applied by healthcare decision-makers than a simple CER. The decision-maker may use selected items from the cost-consequence analysis to compute composite measures of drug value, such as cost per life-year gained or cost per quality-adjusted life-year (QALY) gained. In general, the cost-consequence approach, by making the impact of the new treatment as comprehensive and transparent as possible, will enable decision-makers to select the components most relevant to their perspective and will also give them confidence that the data are credible to use as the basis for resource allocation decisions. abstract_id: PUBMED:34243829 Broadening the Concept of Value: A Scoping Review on the Option Value of Medical Technologies. Objectives: A recent debate in health economics and outcomes research community identified option value as one of the elements warranting consideration in the assessment of medical technologies. To conduct a scoping review of contributions on option value in the healthcare sector and identify relevant conceptual aspects and methods used to incorporate it in standard economic evaluations. Methods: A systematic search was conducted up to July 2020 to identify contributions from electronic bibliographic database and gray literature. Data on the proposed definitions of option value, theoretical implications of its use in economic evaluations, and methods used to estimate it were extracted and analyzed. Results: We found 57 eligible studies. Three different definitions emerged: insurance value, real option value, and option value of survival. Focusing on the latter (24 studies), we analyzed in depth 8 empirical applications across 7 therapeutic areas. The most relevant methodological challenges were on the perspective used in economic evaluations and how to robustly manage forecasting uncertainty, update cost-effectiveness thresholds, and avoid double-counting issues. For empirical studies assessing the total value of the technology, including option value, estimates ranged from +7% to +469% of its conventional value. Conclusions: This review synthesizes theoretical and empirical aspects on option value of healthcare technologies and proposes a terminology to distinguish 3 different concepts identified. Future work should focus primarily on agreeing on whether option value should be included in economic evaluations and, if so, on developing and validating reliable methods for its ex-ante estimation. abstract_id: PUBMED:25657628 Option generation in decision making: ideation beyond memory retrieval. According to prescriptive decision theories, the generation of options for choice is a central aspect of decision making. 
Too narrow a representation of the problem may indeed limit the opportunity to evaluate promising options. However, despite the theoretical and applied significance of this topic, the cognitive processes underlying option generation are still unclear. In particular, while a cued recall account of option generation emphasizes the role of memory and executive control, other theoretical proposals stress the importance of ideation processes based on various search and thinking processes. Unfortunately, relevant behavioral evidence on the cognitive processes underlying option generation is scattered and inconclusive. In order to reach a better understanding, we carried out an individual-differences study employing a wide array of cognitive predictors, including measures of episodic memory, semantic memory, cognitive control, and ideation fluency. The criterion tasks consisted of three different poorly-structured decision-making scenarios, and the participants were asked to generate options to solve these problems. The main criterion variable of the study was the number of valid options generated, although the diversity and quality of the generated options were also examined. The results showed that option generation fluency and diversity in the context of ill-structured decision making are supported by ideation ability even after taking into account the effects of individual differences in several other aspects of cognitive functioning. Thus, ideation processes, possibly supported by search and thinking processes, seem to contribute to option generation beyond basic associative memory retrieval. The findings of the study also indicate that generating more options may have multifaceted consequences for choice, increasing the quality of the best option generated but decreasing the mean quality of the options in the generated set. abstract_id: PUBMED:30963894 Better than expected: the influence of option expectations during decision-making. Our choices often arise from a consideration of options presented in a sequence (e.g. the products in a supermarket row). However, whether the precise sequential order of option presentation affects decision-making remains poorly understood. A recent model of choice proposes that, in a set of options presented sequentially, those that are better than expected will be perceived as more valuable, even when options are objectively equivalent within the set. Inspired by this proposal, we devised a novel decision-making task where we manipulated the order of option presentation together with expectations about option value. Even when we compared trials that were exactly equivalent except for option order, we observed a striking preference for options that were better than expected. Our findings show that expectations about options affect which option will be favoured within a sequence, an influence which is manifested as a preference for better-than-expected options. The findings have potential practical implications, as, for example, they may help policymakers in devising nudge strategies that rely on ad hoc option orders. abstract_id: PUBMED:16981193 Expected value of information and decision making in HTA.
Decision makers within a jurisdiction facing evidence of positive but uncertain incremental net benefit of a new health care intervention have viable options, where no further evidence is anticipated, to: (1) adopt the new intervention without further evidence; (2) adopt the new intervention and undertake a trial; or (3) delay the decision and undertake a trial. Value of information methods have been shown previously to allow optimal design of clinical trials in comparing option (2) against option (1), by trading off the expected value and cost of sample information. However, this previous research has not considered the effect of the cost of reversal on the expected value of information in comparing these options. This paper demonstrates that, where a new intervention is adopted, the expected value of information is reduced under optimal decision making with costs of reversing decisions. Further, the paper shows that comparing the expected net gain of optimally designed trials for option (2) vs (1) conditional on the cost of reversal, and (3) vs (1) conditional on the opportunity cost of delay, allows systematic identification of an optimal decision strategy and trial design. abstract_id: PUBMED:30264544 The Cognitive Underpinnings of Option Generation in Everyday Life Decision-Making: A Latent Variable Analysis. The ability to generate options for action is crucial for everyday life decision-making. In this article, we propose and test a model of the cognitive underpinnings of option generation in everyday life situations. We carried out a laboratory study using measures of a wide range of cognitive functions and asked participants (N = 157) to generate options for actions for different everyday life decision-making scenarios. The results of a latent variable analysis show that the cognitive underpinnings of option generation are consistent across different everyday life situations and, hence, option generation can be conceptualized as a general construct. Moreover, the results of a confirmatory factor analysis reveal that, when controlling for the shared variance among the cognitive processes assessed, verbal fluency, working memory capacity, ideation fluency, and processing speed predicted option generation. These findings suggest that option generation in everyday life situations can be distinguished from other cognitive constructs, such as divergent thinking (in terms of ideas' originality) and long-term memory. abstract_id: PUBMED:23898252 Neurocognitive mechanisms underlying value-based decision-making: from core values to economic value. Value plays a central role in practically every aspect of human life that requires a decision: whether we choose between different consumer goods, whether we decide which person we marry or which political candidate gets our vote, we choose the option that has more value to us. Over the last decade, neuroeconomic research has mapped the neural substrates of economic value, revealing that activation in brain regions such as ventromedial prefrontal cortex (VMPFC), ventral striatum or posterior cingulate cortex reflects how much an individual values an option and which of several options he/she will choose. However, while great progress has been made exploring the mechanisms underlying concrete decisions, neuroeconomic research has been less concerned with the questions of why people value what they value, and why different people value different things.
Social psychologists and sociologists have long been interested in core values, motivational constructs that are intrinsically linked to the self-schema and are used to guide actions and decisions across different situations and different time points. Core value may thus be an important determinant of individual differences in economic value computation and decision-making. Based on a review of recent neuroimaging studies investigating the neural representation of core values and their interactions with neural systems representing economic value, we outline a common framework that integrates the core value concept and neuroeconomic research on value-based decision-making. abstract_id: PUBMED:14720105 Cost-effectiveness analysis and the formulary decision-making process. Background: Faced with high drug expenditures in an environment of cost containment, drug formulary systems, particularly in managed care, have become more dependent on pharmacoeconomic evaluations to assess the value of new products. Within pharmacoeconomics (PE), cost-effectiveness analysis (CEA) is the most commonly used method. However, current methodological concerns about CEA have limited its practical contribution to the formulary process. Advances in analysis are likely to improve the relevance of CEA over time. Objective: The purpose of this paper is to review CEA, its limitations, and its applications in formulary decision making in order to promote greater utility of CEA for managed care pharmacists. Summary: Enhancements to CEA, such as the development of modeling software, rank-order stability analysis, cost-consequence analysis (CCA), and budget impact analysis are discussed. A combined method of CCA-CEA and standardized guidelines are suggested to improve the impact of CEA in the drug formulary process. Conclusion: Along with advances in its methodology and relevant standardized guidelines, CEA will gain increased importance in formulary decision making, helping to assure the goal of cost containment while ensuring quality of care. Answer: Cost-value analysis, which aims to address the limitations of the quality-adjusted life-year (QALY) by incorporating public concerns for fairness in health care resource allocation, is explored through the concept of the SAVE (an outcome 'equivalent to saving the life of a young person'). The feasibility and validity of using the SAVE as a measure of value for economic evaluation of health care technologies are assessed in a study involving a web-based person trade-off (PTO) survey related to early-stage breast cancer. The survey, completed by 1094 respondents, showed that while 68% of eligible responses revealed consistent ordering of responses and 32% displayed ordinal transitivity, only 37% of respondents showing consistency and ordinal transitivity approached cardinal transitivity. The mean incremental cost per SAVE gained was calculated at £3.72 million using consistent and ordinally transitive responses. The conclusion suggests that while further research is needed to improve the validity of the SAVE, it may offer a useful alternative for localized decision-makers with constrained budgets, although it is unlikely to replace the QALY as the globally preferred measure of outcome (PUBMED:26100285).
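As an aside for readers who want to see the arithmetic behind a headline figure such as the mean incremental cost per SAVE gained of £3.72 million, the Python sketch below shows the generic calculation: the incremental cost of a strategy divided by the sum of its outcome gains converted into SAVE equivalents. The function, the event categories and every number here are hypothetical illustrations, not data from the study.

```python
# Minimal sketch of a cost-per-SAVE calculation. All figures are invented
# placeholders; the study's decision model and PTO weights are not reproduced.

def cost_per_save(delta_cost, outcome_gains, save_equivalents):
    """delta_cost: incremental cost of the new strategy (GBP).
    outcome_gains: outcome event -> incremental number of events avoided/gained.
    save_equivalents: outcome event -> how many such events the person
    trade-off survey judged equivalent to one SAVE."""
    # Convert each outcome gain into SAVEs, then sum across outcomes.
    saves_gained = sum(gain / save_equivalents[event]
                       for event, gain in outcome_gains.items())
    return delta_cost / saves_gained

# Hypothetical example: GBP 1,000,000 extra spend avoids 120 recurrences and
# 15 deaths; the survey rated 400 avoided recurrences ~ 1 SAVE and
# 50 avoided deaths ~ 1 SAVE.
icer_save = cost_per_save(
    1_000_000,
    {"recurrence": 120, "death": 15},
    {"recurrence": 400, "death": 50},
)
print(f"Incremental cost per SAVE gained: £{icer_save:,.0f}")
```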
Instruction: Do baseline haemoglobin-rates point to the efficacy of blood saving techniques? Abstracts: abstract_id: PUBMED:11573126 Do baseline haemoglobin-rates point to the efficacy of blood saving techniques? Objective: We tested the hypothesis that blood saving techniques are more effective in patients with a low baseline haemoglobin. Methods: We retrospectively analysed 1665 total hip replacements and 848 total knee replacements. Patients were divided into three groups according to the baseline haemoglobin (Hb): group A comprised patients with an Hb of more than 15 g/dl, group B those with an Hb between 15 g/dl and 13 g/dl, and group C those with an Hb of less than 13 g/dl. The blood saving techniques applied (preoperative blood donation, isovolemic haemodilution, and mechanical autotransfusion) aimed at maximizing the proportion of patients who underwent surgery without transfusion of homologous blood. Results: The higher the baseline haemoglobin, the larger was the proportion of patients without homologous blood transfusion. The lower the Hb, the higher was the effectiveness of blood saving techniques. In total hip replacement, preoperative blood donation significantly increased the proportion of patients without homologous blood transfusion in all three groups. However, in total knee replacement, preoperative blood donation significantly increased the proportion of patients without homologous blood transfusion only in groups B and C. In both surgical procedures, the combination of preoperative blood donation with other blood saving techniques did not further increase the proportion of patients without homologous blood transfusion. In total hip replacement, mechanical autotransfusion as well as isovolemic haemodilution reduced homologous transfusion in patients with low baseline haemoglobin; the effect was more pronounced when both techniques were combined. Conclusion: Our data indicate that the likelihood of homologous blood transfusion is greater in patients with a low baseline haemoglobin. However, in hip arthroplasty, patients with a high baseline haemoglobin also profit from blood saving techniques, while in knee arthroplasty the efficiency of blood saving techniques in patients with a high baseline haemoglobin is negligible. It therefore seems sensible to identify, for each kind of surgery, the patients who profit most from the different blood saving techniques. abstract_id: PUBMED:17013214 State-of-the-art blood saving techniques. Preoperative autologous donation, the use of erythropoietin, acute normovolaemic haemodilution, acceptance of minimal perioperative haemoglobin levels, the use of specific drugs (aprotinin, antifibrinolytics), cell saving and a meticulous surgical technique aimed at minimizing blood loss have all been described as blood saving techniques. Each has proved effective in reducing the need for allogeneic blood transfusions. With an appropriate selection of patients, all techniques can be used efficiently. abstract_id: PUBMED:17301113 Point of care estimation of haemoglobin in neonates. Objective: To evaluate whether measurement of haemoglobin concentration in neonates using point of care testing agrees with laboratory measurement. Design: 127 paired blood samples taken from babies on a neonatal intensive care unit for full blood count and blood gas analysis by point of care testing were reviewed according to current practice.
A comparison was made between the laboratory and blood gas analyser haemoglobin measurements to assess limits of agreement and look for any systematic difference. Setting: Neonatal Unit, Jessop Wing, Royal Hallamshire Hospital, Sheffield, UK. Patients: Babies staying on the neonatal unit who currently have contemporaneous blood samples taken for full blood count and blood gas analysis by point of care testing. Intervention: Results from blood samples were reviewed. Main Outcome Measure: Comparison between laboratory and point of care testing haemoglobin concentrations. Results: The mean laboratory haemoglobin concentration was 155 g/l (range 30-226 g/l); the mean point of care testing haemoglobin concentration was 157 g/l (range 30-228 g/l). The mean (SD) difference between paired samples was 2 (11) g/l; 95% CI -4.0 to 0.1 g/l; and limits of agreement -23 to 19 g/l. Conclusions: The blood gas analyser on the neonatal unit at Royal Hallamshire Hospital, Sheffield, gives a useful estimation of haemoglobin concentration compared with laboratory measurement, with a smaller sample volume. Although this does not replace a full blood count, it is a useful adjunct to neonatal care monitoring. abstract_id: PUBMED:32229211 Analytical techniques for the detection of glycated haemoglobin underlining the sensors. An increase in blood glucose concentration results in a rise in the proportion of glycated haemoglobin. Therefore, the percentage of glycated haemoglobin in the blood could function as a biomarker for the average glucose level over the past three months and can be used to detect diabetes. The study of glycated haemoglobin tends to be complex, as there are about three hundred distinct assay techniques available for evaluating glycated haemoglobin, which contributes to some differences in the values recorded from similar samples. This review outlines distinct analytical methods that have evolved in the recent past for precise recognition of glycated proteins. abstract_id: PUBMED:35883408 Anaemia in Lambs and Kids Reared Indoors on Maternal Milk and the Impact of Iron Supplementation on Haemoglobin Levels and Growth Rates. This study aimed to assess iron deficiency anaemia in new-born lambs and goat kids and was carried out in two parts: (1) Twenty blood samples were taken from one-month-old lambs and kids under different systems and were tested for haemoglobin. Three groups of lambs were compared: indoor reared on maternal milk, indoor reared on milk replacer, and outdoor reared on maternal milk. Indoor-reared kids were compared: those fed on maternal milk versus those fed on milk replacer. Indoor-reared kids and lambs on maternal milk showed significantly lower haemoglobin levels than those on milk replacer or reared outdoors. (2) On farms with indoor-reared lambs or goat kids on maternal milk, an intervention trial was carried out: animals were randomly assigned at 1-8 days of age to receive either 300 mg (lambs) or 150 mg (goat kids) of iron as intramuscular iron dextran, and growth rates were compared after one and two months. Haemoglobin levels at one month were also compared in randomly selected animals from both groups. Treated lambs and kids showed higher haemoglobin levels at one month of age and a numerically increased growth rate that was statistically significant for twin lambs. Iron dextran improves haemoglobin levels in these animals and may lead to higher growth rates, especially in twin lambs.
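The neonatal study above summarises agreement between the laboratory analyser and the point-of-care device in Bland-Altman terms: the bias (mean of the paired differences), its SD, and the 95% limits of agreement. Here is a minimal Python sketch of that calculation; the eight paired values are invented for illustration and are not the study's data.

```python
# Bland-Altman agreement statistics for paired haemoglobin measurements
# (laboratory vs point-of-care). Values below are made up for illustration.
import numpy as np

lab = np.array([155, 140, 168, 122, 190, 147, 133, 176], dtype=float)   # g/l
poct = np.array([158, 138, 171, 120, 195, 150, 130, 180], dtype=float)  # g/l

diff = poct - lab
bias = diff.mean()                 # systematic difference between methods
sd = diff.std(ddof=1)              # SD of the paired differences
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd  # 95% limits of agreement

print(f"bias = {bias:.1f} g/l, limits of agreement {loa_low:.1f} to {loa_high:.1f} g/l")
```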
abstract_id: PUBMED:37990621 Baseline haemoglobin variability by measurement technique in pregnant people on the day of childbirth. Point-of-care haemoglobin measurement devices may play an important role in the antenatal detection of anaemia in pregnant people and may be useful in guiding blood transfusion during resuscitation in obstetric haemorrhage. We compared baseline haemoglobin variability of venous and capillary HemoCue® haemoglobin, and Masimo® Rad-67 Pulse CO-Oximeter haemoglobin, with laboratory haemoglobin in people on the day of their planned vaginal birth. A total of 180 people undergoing planned vaginal birth were enrolled in this prospective observational study. Laboratory haemoglobin was compared with HemoCue and Masimo Rad-67 Pulse CO-Oximeter measurements using Bland-Altman analysis, calculating mean difference (bias) and limits of agreement. Five (2.8%) people had anaemia (haemoglobin < 110 g/l). The comparison of laboratory haemoglobin with HemoCue venous haemoglobin showed an acceptable bias (SD) of 0.7 (7.54) g/l (95% CI -0.43 to 1.79), with limits of agreement -14.10 to 15.46 g/l and an acceptable agreement range of 29.6 g/l. The comparison of laboratory with HemoCue capillary haemoglobin showed an unacceptable bias (SD) of 13.3 (14.12) g/l (95% CI 11.17 to 15.34), with limits of agreement -14.42 to 40.93 g/l and an unacceptable agreement range of 55.3 g/l. The comparison of laboratory with Masimo haemoglobin showed an unacceptable bias (SD) of -14.0 (11.15) g/l (95% CI -15.63 to -12.34), with limits of agreement -35.85 to 7.87 g/l and an acceptable agreement range of 43.7 g/l. Venous HemoCue, with its acceptable bias and limits of agreement, should be applied more widely in the antenatal setting to detect, manage and risk stratify pregnant people with anaemia. HemoCue capillary measurement under-estimated haemoglobin and Masimo haemoglobin measurement over-estimated it, limiting their clinical use. Serial studies are needed to determine if the accuracy of venous HemoCue haemoglobin measurement is sustained in other obstetric settings. abstract_id: PUBMED:24967765 Accuracy and reliability of the i-STAT point-of-care device for the determination of haemoglobin concentration before and after major blood loss. We investigated the accuracy of i-STAT® (Abbott Point of Care Inc., Princeton, NJ, USA) haemoglobin (Hb) measurement in surgical patients with an estimated blood loss of ≥25% of total blood volume. Blood tests for i-STAT® Hb, laboratory Hb (Sysmex XE-2100™, Sysmex Corporation, Kobe, Japan) and total plasma proteins were obtained at the start of surgery (T=0) and when an estimated 25% total blood volume loss had occurred (T=1). Thirty-one patients were recruited. The coefficient of variation of the paired i-STAT® Hb estimates was 2.8% and 2.9% at T=0 and T=1, respectively. The mean difference between i-STAT® and laboratory Hb was -7.6 g/l (standard deviation 6.5) at T=0 and -5.1 g/l (standard deviation 12) at T=1. The mean total plasma protein difference (total plasma protein T=0 minus T=1) was 13.6 g/l (95% confidence interval 10.2 to 17.0). There was poor correlation between total plasma protein and bias in i-STAT® measurements. The i-STAT® Hb had an acceptable coefficient of variation, but the Hb levels were lower than those estimated by the laboratory. The standard deviation of i-STAT® Hb was greater after ≥25% estimated total blood volume loss.
Clinicians should not use the i-STAT® Hb in isolation for clinical decision-making when considering blood transfusion in a situation of 25% or greater blood loss. abstract_id: PUBMED:31070468 Can the blood gas analyser results be believed? A prospective multicentre study comparing haemoglobin, sodium and potassium measurements by blood gas analysers and laboratory auto-analysers. Blood gas analysers are point-of-care testing devices used in the management of critically ill patients. Controversy remains over the agreement between the results obtained from blood gas analysers and laboratory auto-analysers for haematological and biochemistry parameters. We conducted a prospective analytical observational study in five intensive care units in Western Australia, in patients who had a full blood count (FBC), urea, electrolytes and creatinine (UEC), and a blood gas performed within 1 h of each other during the first 24 h of their intensive care unit admission. The main outcome measure was the agreement in haemoglobin, sodium, and potassium results between laboratory haematology and biochemistry auto-analysers and blood gas analysers. A total of 219 paired tests were available for haemoglobin and sodium, and 215 for potassium. There was no statistically significant difference between the results of the blood gas and laboratory auto-analysers for haemoglobin (mean difference -0.35 g/L, 95% confidence interval (CI) -1.20 to 0.51, P = 0.425). Although the mean differences between the two methods were statistically significant for sodium (mean difference 1.49 mmol/L, 95% CI 1.23-1.76, P < 0.0001) and potassium (mean difference 0.19 mmol/L, 95% CI 0.15-0.24, P < 0.0001), the mean biases on the Bland-Altman plots were small and independent of the magnitude of the measurements. The two methods of measurement for haemoglobin, sodium and potassium agreed with each other under most clinical situations when their values were within or close to the normal range, suggesting that routine concurrent blood gas and formal laboratory testing for haemoglobin, sodium and potassium concentrations in the intensive care unit is unwarranted. abstract_id: PUBMED:34462218 Baseline haemoglobin and thromboelastometry are predictive of red blood cell requirements and one-year mortality in liver transplantation. Background: To determine the predictive capacity of baseline haemoglobin and maximum clot firmness (MCF) EXTEM thromboelastometry for intraoperative red blood cell (RBC) requirements and their influence on mortality. Methods: 591 adult liver transplant (LT) recipients from ten Spanish centres were reviewed. The main outcomes were the percentage of patients who received RBC and massive transfusion (≥ 6 RBC units), RBC units transfused, and mortality. Results: 76% received a donor after brain death graft and 24% a controlled donor after circulatory death graft. Median (interquartile range) RBC transfusion was 2 (0-4) units, and 63% of patients were transfused. Comparing transfused and non-transfused patients, the mean (standard deviation) baseline haemoglobin was 10.4 (2.1) vs. 13.0 (1.9) g/dl (p = 0.001), and EXTEM MCF was 51 (11) vs. 55 (9) mm (p = 0.001). Haemoglobin and EXTEM MCF were inversely associated with the need for transfusion, with an odds ratio (OR) of 0.558 (95% CI 0.497-0.627, p < 0.001) and an OR of 0.966 (95% CI 0.945-0.987, p = 0.002), respectively. Pre-operative baseline haemoglobin ≤ 10 g/dL predicted RBC transfusion with a sensitivity of 93% and a specificity of 47%.
Massive transfusion (MT) was received by 19% of patients. Haemoglobin ≤10 g/dL predicted MT with a sensitivity of 73% and a specificity of 52%. One-year patient and graft survival were significantly lower in patients who required MT (78% and 76%, respectively) vs. those who did not (94% and 93%, respectively). Discussion: Whereas EXTEM MCF is less determinant in predicting RBC requirements, efforts are required to improve preoperative haemoglobin up to 10 g/dl in patients awaiting LT. abstract_id: PUBMED:38290276 Optimizing haemoglobin measurements in VLBW newborns: Insights from a comparative retrospective study. Introduction: Haemoglobin level assessment is a crucial part of neonatal intensive care practice, but the painful experience of repeated heel-prick and venepuncture blood sampling may negatively affect the neonatal clinical course. To date, the reliability of haemoglobin levels obtained by point-of-care testing (POCT) compared with the standard blood cell count remains controversial. Materials And Methods: A retrospective study was conducted on all inborn premature infants (gestational age < 32 weeks) admitted to the NICU of the IRCCS Giannina Gaslini Institute during the period May 2021-April 2023. We considered blood samplings that occurred within the first 28 days of life, recording the laboratory haemoglobin levels (Hblab) (reference method), the point-of-care haemoglobin levels (HbPOCT) (alternative method) and the type of puncture (arterial, venous and capillary). A Bland-Altman analysis was performed to evaluate the Hb agreement; this analysis determines the bias (mean difference between the reference and alternative methods) and the limits of agreement (LOA; lower, l-LOA; upper, u-LOA) of the measures. An acceptable limit of agreement was 1 g/dl according to the existing literature. Results: We considered 845 blood samplings from 189 enrolled patients. The comparison between the reference and the alternative method showed good agreement for the capillary sampling technique, with an l-LOA of -0.717 (-0.776; -0.659) and a u-LOA of 0.549 (0.490; 0.607); these results were not achieved with the other techniques, whose LOAs exceeded the ±1 g/dl threshold (venous < arterial). Conclusions: The reliability of capillary POCT-measured haemoglobin levels may reduce care-related costs and the number of painful experiences, with obvious positive effects on daily neonatal life in the NICU and on the developing brain structures. Answer: Yes, baseline haemoglobin rates do point to the efficacy of blood saving techniques. A study analyzing total hip replacements and total knee replacements found that patients with a higher baseline haemoglobin were more likely to undergo surgery without the need for homologous blood transfusion. Conversely, the lower the baseline haemoglobin, the higher was the effectiveness of blood saving techniques. Specifically, in total hip replacement, preoperative blood donation significantly increased the patient population without homologous blood transfusion across all baseline haemoglobin groups. In total knee replacement, this was only significant in groups with lower baseline haemoglobin (groups B and C). Additionally, mechanical autotransfusion and isovolemic haemodilution reduced the need for homologous transfusion in patients with low baseline haemoglobin, particularly when both techniques were combined (PUBMED:11573126).
State-of-the-art blood saving techniques, such as preoperative autologous donation, erythropoietin use, acute normovolaemic haemodilution, and the use of specific drugs, have all been effective in reducing the need for allogeneic blood transfusions. With appropriate patient selection, all these techniques can be used efficiently (PUBMED:17013214). In conclusion, baseline haemoglobin levels are indicative of the potential need for blood transfusions and the efficacy of blood saving techniques. Patients with lower baseline haemoglobin levels benefit more from these techniques, suggesting that preoperative evaluation of haemoglobin levels can help in planning and optimizing the use of blood saving strategies for different types of surgery (PUBMED:11573126).
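Several abstracts in this set quote sensitivity and specificity for a haemoglobin cutoff (for example, baseline Hb ≤ 10 g/dL predicting red-cell transfusion with 93% sensitivity and 47% specificity). The short Python sketch below shows how such figures fall out of a 2x2 table; the cell counts are assumptions chosen only to reproduce percentages of that order, not the studies' actual data.

```python
# Sensitivity/specificity of a binary cutoff from a 2x2 table.
# Counts are hypothetical, scaled to mimic the reported 93%/47%.

def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # below cutoff among transfused patients
    specificity = tn / (tn + fp)  # above cutoff among non-transfused patients
    return sensitivity, specificity

sens, spec = sens_spec(tp=93, fn=7, tn=47, fp=53)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")
```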
Instruction: Nucleated red blood cells: a marker for fetal asphyxia? Abstracts: abstract_id: PUBMED:11064007 Nucleated red blood cells as a marker in acute and chronic fetal asphyxia. Objective: To investigate the variations of nucleated red blood cell (NRBC) counts in acute and chronic fetal hypoxia and to examine whether it is possible to establish a cutoff value for the number of NRBCs for the prediction of fetal acidosis. Method: We prospectively studied 77 pregnant women. Patients were grouped into acute fetal distress (n=11), chronic fetal distress (n=21), and controls (n=45). At delivery the umbilical cord was double clamped and blood samples were collected. Results: The mean NRBC count in the chronic fetal distress group was higher than in the acute fetal distress group. NRBC counts were found to be correlated with umbilical cord pH (r=-0.57; P<0.001). The cutoff value predicting fetal acidosis was determined as 14/100 leukocytes (sensitivity 87%, specificity 81%) using ROC analysis. Conclusion: The duration and the severity of fetal asphyxia may be predicted by the number of NRBCs per 100 leukocytes. abstract_id: PUBMED:8885733 Nucleated red blood cells: an update on the marker for fetal asphyxia. Objective: Our goal was to update our experience with nucleated red blood cells as a marker for fetal asphyxia and to determine whether a relationship exists between the presence of nucleated red blood cells and long-term neurologic impairment. Study Design: Nucleated red blood cell data from 153 singleton term neurologically impaired neonates were compared with cord blood nucleated red blood cells of 83 term nonasphyxiated newborns. Newborns with anemia, intrauterine growth restriction, and maternal diabetes were excluded. The group of neurologically impaired neonates was separated into the following subgroups: group I, persistent nonreactive fetal heart rate pattern from admission to delivery (n = 69); group II, reactive fetal heart rate on admission followed by tachycardia with decelerations and absent variability (n = 47); group III, reactive fetal heart rate on admission followed by an acute prolonged deceleration (n = 37). The first and highest nucleated red blood cell value and the time of nucleated red blood cell disappearance were assessed. Results: The mean number of initial nucleated red blood cells was significantly higher in the group of neurologically impaired neonates (30.3 +/- 77.5, range 0 to 732 per 100 white blood cells) than in the control group (3.4 +/- 3.0, range 0 to 12 per 100 white blood cells) (p < 0.000001). When the group of neurologically impaired neonates was separated on the basis of timing of the neurologic impairment, distinct nucleated red blood cell patterns were observed. Significant differences were obtained between each of the three groups of neurologically impaired neonates and the normal group with respect to initial nucleated red blood cells (group I, 48.6 +/- 106.9; group II, 11.4 +/- 9.8; group III, 12.6 +/- 13.4; p < or = 0.000002). Maximum nucleated red blood cell values were higher in group I (mean 51.5 +/- 108.9) than in groups II and III combined (mean 12.7 +/- 11.9) (p = 0.0005). Group I also had a longer clearance time (119 +/- 123 hours) than groups II and III combined (mean 59 +/- 64 hours) (p < 0.001). Conclusion: Our ongoing study indicates that nucleated red blood cells identify the presence of fetal asphyxia. When fetal asphyxia is present, distinct nucleated red blood cell patterns are observed that relate to the timing of fetal injury.
In general, intrapartum injuries are associated with lower nucleated red blood cell values. Thus our data continue to support the concept that nucleated red blood cell levels may assist in determining the timing of fetal neurologic injury. abstract_id: PUBMED:26673879 Successful intrauterine treatment and good long-term outcome in an extremely severe case of fetal hemolytic disease. A 34-year-old multiparous woman presented with anti-Rh-D antibodies (1:512) and fetal hydrops at the 21st week of gestation. Ultrasound revealed massive fetal skin edema, ascites, hepatomegaly, placentomegaly, and anhydramnios. No fetal movements were observed. The fetal heart was enlarged, with reportedly decreased contractility. The Doppler parameters were abnormal: the peak systolic velocity in the middle cerebral artery (MCA PSV) was increased (84 cm/s, 3 MoM), and absent end diastolic flow (AEDF) was reported in the umbilical artery. Ultrasound examination indicated severe fetal anemia and heart failure. Umbilical vein puncture was performed and the fetal blood count was determined (RBC 0.01 × 10⁶/µl, Ht 0.1%, PLT 67 × 10³/µl, WBC 2.1 × 10³/µl, indeterminable hemoglobin level). Packed red blood cells (group O Rh-negative, 30 ml) were immediately transfused to the fetus. Altogether, seven intrauterine transfusions were performed. Fetal hydrops disappeared gradually during the next few weeks. The male neonate (1860 g, 45 cm, Apgar score 3-4) was delivered after the last transfusion at the 34th week of gestation due to intrauterine asphyxia. The infant was discharged after 21 days, in good condition, on breastfeeding. There was one 10 mm focus of periventricular leukomalacia in the brain, diagnosed based on trans-fontanel ultrasound, without any signs of damage to other organs. At the age of 5 years, the child is healthy, with no abnormalities in his neurodevelopmental parameters. abstract_id: PUBMED:20455723 Perinatal asphyxia is associated with the umbilical cord nucleated red blood cell count in pre-eclamptic pregnancies. Nucleated red blood cells are commonly present in the blood of newborns. Our objective was to investigate the value of the umbilical cord nucleated red blood cell (NRBC) count in predicting fetal asphyxia in pre-eclamptic women. NRBCs were counted in umbilical cord blood samples of neonates born to 43 pre-eclamptic and 25 healthy pregnant women. Pre-eclamptic women were further subgrouped based on the presence or absence of intrauterine growth restriction. The NRBC count differed significantly between pre-eclamptic women with and without intrauterine growth restriction, and controls (26.3 +/- 7.5; 17.1 +/- 6.8; and 9.9 +/- 2.7; p < 0.001). An NRBC count of 18.5 or above could predict fetal asphyxia with a sensitivity of 94.4% and a specificity of 80.0%. The umbilical cord NRBC count is effective in predicting fetal asphyxia in pre-eclamptic women. abstract_id: PUBMED:8210772 A conceptual approach to placental pathology and pregnancy outcome. This report focuses on the relationship of placental pathology to unfavorable pregnancy outcome. Relevant literature is cited and data from the author's investigations are reported and tabulated. The reader will find detailed information on placental lesions that have not been completely investigated or discussed previously. Particular considerations include placental meconium staining, edema, acute and chronic intrauterine infections, placental fetal vasculopathy with fetal nucleated red blood cells, and chorangiosis or other placental dysmaturity.
These pathologic changes often signify the pathogenesis of cerebral palsy and other developmental disorders. Almost 90% of neurodevelopmental disorders are initiated before the intrapartum period. Prenatal asphyxia or severe chronic fetal hypoxia is probably present in these cases. Most investigations of these afflictions are invalid because they do not include placental study with well-designed epidemiologic methods. The pathologic placental findings that are most strongly associated with perinatal asphyxia include chronic ischemic changes, fetal nucleated red blood cells, intravillous hemorrhage, fetal fibrin vascular intimal cushions, meconium staining, and placental intervillous fibrin. Chorioamnionitis is a pathologic entity rather than a clinical syndrome as defined by obstetricians. When it causes severe prematurity, chorioamnionitis is also associated with cerebral palsy. abstract_id: PUBMED:16582539 The timing of neonatal brain damage. Although neonatal morbidity and mortality are lower than in the past, the risk of pre-natal and neonatal brain damage has not been eliminated. In order to optimize pre-natal, perinatal and neonatal care, it is necessary to detect the factors responsible for brain damage and obtain information about their timing. Knowledge of the timing of asphyxia, infections and circulatory abnormalities would enable obstetricians and neonatologists to improve prevention in pre-term and full-term neonates. Cardiotocography has been criticized as being too indirect a sign of fetal condition and as having various technical pitfalls, though its reliability seems to be improved by association with pulse oximetry, fetal blood pH and electrocardiography. Neuroimaging is particularly useful to determine the timing of hypoxic-ischemic brain damage. Cranial ultrasound has been used to determine the type and evolution of brain damage. Magnetic resonance has also been used to detect antenatal, perinatal and neonatal abnormalities and their timing on the basis of standardized assessment of brain maturation. Advances in the interpretation of neonatal electroencephalograms have also made this technique useful for determining the timing of brain lesions. Nucleated red blood cell count in cord blood has been recognized as an important indication of the timing of pre-natal hypoxia, and even abnormal lymphocyte and thrombocyte counts may be used to establish pre-natal asphyxia. Cord blood pH and base excess are well-known markers of fetal hypoxia, but are best combined with heart rate and blood pressure. Other markers of fetal and neonatal hypoxia useful for determining the timing of brain damage are assays of lactate and markers of oxidative stress in cord blood and neonatal blood. Cytokines in blood and amniotic fluid may indicate chorioamnionitis or post-natal infections. The determination of activin and protein S100 has also been proposed. Obstetricians and neonatologists can therefore now rely on various methods for monitoring the risk of brain damage in the antenatal and post-natal periods. abstract_id: PUBMED:8445176 Some placental considerations related to neurodevelopmental and other disorders. Many newborns who appear normal at birth later manifest substantial neurologic and other disease. Pathologists are able to explain some of that sad enigma. Placental pathology frequently reveals the pathogenesis of cerebral palsy, mental retardation, and other neurodevelopmental disorders. This requires recognition of gross placental abnormalities and insightful light microscopic examination.
Chorioamnionitis is now proven to be the major cause of premature onset of labor and prematurity. There is an important need for investigation of the pathogenetic processes associated with ascending intrauterine infection. Major complications therein include bacterially mediated fetal hypoperfusion resulting from placental and umbilical vasoconstriction. Placentas of 10% of newborns have villitis of unknown etiology. The importance of villitis is incompletely known. The fetus may discharge meconium on more than one occasion, particularly so when the fetus is postmature. Clinicians may not recognize that meconium discharge has occurred if the event occurred 4 days or more prior to delivery. Intra-amniotic meconium associated with oligohydramnios probably causes placental and umbilical vasoconstriction. Meconium probably thus contributes to the pathogenesis of pulmonary vasoconstriction, persistent fetal circulation, necrotizing enterocolitis, and damage to the fetal brain, liver, and kidneys. Fetal hypoxia and asphyxia may be acutely or chronically acquired. Major placental lesions associated with neonatal asphyxia include chronic ischemic change, nucleated red blood cells, intravillous hemorrhages, intimal vascular fibrin cushions, meconium staining, and intervillous fibrin. abstract_id: PUBMED:5085836 Red-cell potassium in chronic fetal asphyxia. N/A abstract_id: PUBMED:7503172 Nucleated red blood cells: a marker for fetal asphyxia? Objective: Our purpose was to determine whether a relationship exists between the presence of nucleated red blood cells, hypoxic ischemic encephalopathy, and long-term neonatal neurologic impairment. Study Design: Nucleated red blood cell data from 46 singleton term neurologically impaired neonates were compared with cord blood nucleated red blood cells of 83 term nonasphyxiated newborns. The neurologically impaired neonates group was also separated as follows: nonreactive, nonreactive fetal heart rate from admission to delivery; tachycardia, reactive fetal heart rate on admission followed by tachycardia with decelerations; rupture, uterine rupture. The first and highest nucleated red blood cell values and the time to nucleated red blood cell disappearance were assessed. Results: The neurologically impaired neonates group exhibited a significantly higher number of nucleated red blood cells per 100 white blood cells (34.5 +/- 68) than did the control group (3.4 +/- 3.0) (p < 0.00001). When the neurologically impaired neonates were separated as to the basis for the neurologic impairment, distinct nucleated red blood cell patterns were observed. Overall, the nonreactive group exhibited the highest mean nucleated red blood cell count (51.4 +/- 87.5) and the longest clearance times (236 +/- 166 hours). Conclusion: In this limited population, nucleated red blood cell data appear to aid in identifying the presence of fetal asphyxia. When asphyxia was present, distinct nucleated red blood cell patterns were identified that were in keeping with the observed basis for the fetal injury. In general, the closer the birth was to the asphyxial event, the lower was the number of nucleated red blood cells. Thus our data suggest that cord blood nucleated red blood cells could assist in the timing of fetal neurologic injury. abstract_id: PUBMED:3362753 Intravascular "mulberry-like" bodies: morphological, immunohistochemical, and ultrastructural observations of an incidental finding caused by autolysis?
Intravascular "mulberry-like" bodies in a stillborn female infant with moderate maceration are reported. The histogenesis of these structures is discussed based on light-microscopic, immunohistochemical and ultrastructural findings. No demonstrable causal relation between the intravascular lesions and fetal death was found, the cause of death being attributed to intrauterine asphyxia. It is concluded, that intravascular "mulberry-bodies" most likely represent artifacts due to red blood cell autolysis. Answer: Yes, nucleated red blood cells (NRBCs) have been identified as a marker for fetal asphyxia. Studies have shown that the presence and count of NRBCs in the umbilical cord blood at delivery can be indicative of fetal hypoxia or asphyxia. For instance, one study found that the mean NRBC counts were higher in the chronic fetal distress group compared to the acute fetal distress group, and there was a correlation between NRBC counts and umbilical cord pH. A cutoff value predicting fetal acidosis was determined as 14 NRBCs per 100 leukocytes, with a sensitivity of 87% and specificity of 81% (PUBMED:11064007). Another study updated the experience with NRBCs as a marker for fetal asphyxia and found that the mean number of initial NRBCs was significantly higher in neurologically impaired neonates compared to a control group. The study also observed distinct NRBC patterns that related to the timing of fetal injury, suggesting that NRBC levels may assist in determining the timing of fetal neurologic injury (PUBMED:8885733). In pre-eclamptic pregnancies, the umbilical cord NRBC count was found to be effective in predicting fetal asphyxia, with a significant difference in NRBC count between pre-eclamptic women with and without intrauterine growth restriction, and controls. A NRBC count of 18.5 or above could predict fetal asphyxia with high sensitivity and specificity (PUBMED:20455723). Furthermore, NRBC count in cord blood has been recognized as an important indication of the timing of prenatal hypoxia, and it has been suggested that even abnormal lymphocyte and thrombocyte counts may be used to establish prenatal asphyxia (PUBMED:16582539). Overall, these studies support the concept that NRBCs are a marker for fetal asphyxia and may provide valuable information regarding the timing and severity of the hypoxic event.
Instruction: Does metformin induce ovulation in normoandrogenic anovulatory women? Abstracts: abstract_id: PUBMED:15547527 Does metformin induce ovulation in normoandrogenic anovulatory women? Objective: This study was undertaken to evaluate the efficacy of metformin in women with anovulation who do not have evidence for hyperandrogenism and classic polycystic ovary syndrome. Study Design: A randomized trial of metformin (1500 mg daily) and placebo in 24 anovulatory women was undertaken for 3 months. Assessments of changes in hormone levels and insulin sensitivity were carried out. Abnormal hormonal values were defined by levels exceeding the range in normal ovulatory controls. Results: Anovulatory women had normal androgen and luteinizing hormone levels but had higher serum insulin and lower insulin sensitivity compared with controls. Over 3 months, there were 16 ovulatory cycles with metformin and only 4 with placebo (P < .05). Success of ovulation did not correlate with changes in androgen, insulin, or insulin sensitivity parameters. Conclusion: Metformin may be useful for inducing ovulation in anovulatory women who do not have hyperandrogenism. This effect may be independent of a lowering of androgen or insulin levels. abstract_id: PUBMED:31994179 The cumulative success of ovulation induction therapy with gonadotrophins in therapy-naïve anovulatory women: An observational study. Background: The use of gonadotrophins as a first-line treatment for anovulatory infertility has been limited by a perception of a risk of multi-fetal gestation and ovarian hyperstimulation syndrome (OHSS). However, it has recently been recognised as an acceptable first-line treatment if appropriate monitoring is performed. Aims: To determine the cumulative live birth rate, incidence of multiple gestation, cycle cancellation rate and incidence of OHSS for therapy-naïve anovulatory women undergoing ovulation induction with gonadotrophins. Materials And Methods: A prospective observational study of 258 patients undergoing ovulation induction with a 'low-dose step-up' protocol was performed over a three-year period across two fertility centres (40% of patients were currently or recently prescribed metformin). Results: Twenty-six percent of patients required concurrent use of luteinising hormone. The cumulative pregnancy and live birth rates were 22.5% and 18.2%, 40.3% and 34.5%, and 47.7% and 41.1% after completion of the first, second and third cycles of stimulation, respectively, with a median duration of stimulation of 15 days. No patients developed OHSS, and 10.5% of cycles were cancelled due to an excessive or absent follicular response. The multiple pregnancy rate was 2%. The cumulative pregnancy rate was reduced for women over 35 years of age (23.8% vs 55.3%, P = 0.006) and for women with a body mass index greater than 25 kg/m2 (40.6% vs 56.7%, P = 0.027). Conclusions: This study demonstrated that ovulation induction with gonadotrophin therapy, in the context of appropriate monitoring, is a safe and effective treatment for young therapy-naïve patients with anovulatory infertility.
Background: Here we describe the consensus guideline methodology, summarise the evidence-based recommendations we provided to the World Health Organisation (WHO) for their consideration in the development of global guidance, and present a narrative review on the management of anovulatory infertility in women with polycystic ovary syndrome (PCOS). Objective And Rationale: The aim of this paper was to present an evidence base for the management of anovulatory PCOS. Search Methods: The evidence to support providing recommendations involved a collaborative process for: (i) identification of priority questions and critical outcomes, (ii) retrieval of up-to-date evidence and existing guidelines, (iii) assessment and synthesis of the evidence and (iv) the formulation of draft recommendations to be used for reaching consensus with a wide range of global stakeholders. For each draft recommendation, the methodologist evaluated the quality of the supporting evidence, which was then graded as very low, low, moderate or high for consideration during consensus. Outcomes: Evidence was synthesized and we made recommendations across the definition of PCOS, including hyperandrogenism, menstrual cycle regulation and ovarian assessment. Metabolic features and the impact of ethnicity were covered. Management includes lifestyle changes, bariatric surgery, pharmacotherapy (including clomiphene citrate (CC), aromatase inhibitors, metformin and gonadotropins), as well as laparoscopic surgery. In-vitro fertilization (IVF) was considered, as were the risks of ovulation induction and of pregnancy in PCOS. Approximately 80% of women who suffer from anovulatory infertility have PCOS. Lifestyle intervention is recommended first in women who are obese, largely on the basis of general health benefits. Bariatric surgery can be considered where the body mass index (BMI) is ≥35 kg/m2 and lifestyle therapy has failed. Carefully conducted and monitored pharmacological ovulation induction can achieve good cumulative pregnancy rates, and multiple pregnancy rates can be minimized with adherence to recommended protocols. CC should be first-line pharmacotherapy for ovulation induction, and letrozole can also be used as first-line therapy. Metformin alone has limited benefits in improving live birth rates. Gonadotropins and laparoscopic surgery can be used as second-line treatment. There is no clear evidence for the efficacy of acupuncture or herbal mixtures in women with PCOS. For women with PCOS who fail lifestyle and ovulation induction therapy or have additional infertility factors, IVF can be used with the safer gonadotropin releasing hormone (GnRH) antagonist protocol. If a GnRH-agonist protocol is used, metformin as an adjunct may reduce the risk of ovarian hyperstimulation syndrome. Patients should be informed of the potential side effects of ovulation induction agents and of IVF on the foetus, and of the risks of multiple pregnancy. Increased risks for the mother during pregnancy and for the child, including the exacerbating impact of obesity on adverse outcomes, should also be discussed. Wider Implications: This guidance generation and evidence-synthesis analysis has been conducted in a manner to be considered for global applicability for the safe administration of ovulation induction for anovulatory women with PCOS. abstract_id: PUBMED:11502783 Sensitization to insulin induces ovulation in nonobese adolescents with anovulatory hyperandrogenism.
In nonobese girls with an adolescent variant of polycystic ovary syndrome, insulin-sensitizing treatment reduces hyperinsulinism, dyslipidemia, and hyperandrogenism and restores eumenorrhea; however, the effect on anovulation is unknown. We assessed whether metformin treatment is capable of inducing ovulation in nonobese adolescents with anovulatory hyperandrogenism after precocious pubarche. The study population consisted of 18 adolescents (mean age, 16 yr; body mass index, 21.4 kg/m2; 3-7 yr beyond menarche) with hyperinsulinemic hyperandrogenism. All girls received metformin for 6 months in a daily dose of 1275 mg. Before inclusion, persistent anovulation was documented by weekly serum progesterone measurements less than 4 ng/ml (months -3 and -1); the ovulation rate was assessed similarly after 2, 4 and 6 months on metformin; a premenstrual progesterone level greater than 8 ng/ml was used as the ovulation marker. Regular menses were reported by 16 of 18 girls within 4 months on metformin, and all girls were eumenorrheic after 6 months on metformin. Of the 18 girls, 1 (6%) ovulated after 2 months on metformin, 7 (39%) after 4 months, and 14 (78%) after 6 months; ovulation induction failed in the girls with the lowest birth weight or most severe hyperandrogenism. Metformin treatment was well tolerated. In conclusion, sensitization to insulin was found to be an effective approach to induce ovulation in nonobese adolescents with anovulatory hyperandrogenism. abstract_id: PUBMED:23084977 Ovulation induction in the management of anovulatory polycystic ovary syndrome. The aim of this brief review is to describe the management of anovulatory infertility in the polycystic ovary syndrome (PCOS). This has traditionally involved the use of clomiphene citrate (CC), and then gonadotropin therapy or laparoscopic ovarian surgery, in those who are clomiphene resistant (The Thessaloniki ESHRE/ASRM-Sponsored PCOS Consensus Workshop Group, 2008). Recently developed therapeutic approaches include aromatase inhibitors and the potential use of in vitro maturation (IVM) of oocytes collected from unstimulated (or minimally stimulated) polycystic ovaries. Unfortunately, the early promise of the insulin-sensitizing drugs has not translated into significant improvement in outcomes, and they are therefore not prescribed unless the patient has an impairment of glucose tolerance (The Thessaloniki ESHRE/ASRM-Sponsored PCOS Consensus Workshop Group, 2008). There has been an unfortunate shift away from mono-follicular ovulation induction towards IVF; however, mono-follicular ovulation induction remains the first-line approach for the management of anovulatory PCOS, and in vitro fertilization treatment (IVF) should be reserved for those who fail to respond or who have additional infertility factors (The Thessaloniki ESHRE/ASRM-Sponsored PCOS Consensus Workshop Group, 2008). Superovulation for IVF presents significant risks for women with polycystic ovaries, namely the potentially life-threatening complication of ovarian hyperstimulation syndrome (OHSS). Carefully conducted and monitored ovulation induction can achieve good cumulative conception rates; furthermore, multiple pregnancy rates can be minimized with strict adherence to criteria that limit the number of follicles that are permitted to ovulate. abstract_id: PUBMED:15840746 Prospective parallel randomized, double-blind, double-dummy controlled clinical trial comparing clomiphene citrate and metformin as the first-line treatment for ovulation induction in nonobese anovulatory women with polycystic ovary syndrome.
Context: Although metformin has been shown to be effective in the treatment of anovulation in women with polycystic ovary syndrome (PCOS), clomiphene citrate (CC) is still considered to be the first-line drug to induce ovulation in these patients. Objective: The goal of this study was to compare the effectiveness of metformin and CC administration as a first-line treatment in anovulatory women with PCOS. Design: We describe a prospective parallel randomized, double-blind, double-dummy controlled clinical trial. Setting: The study was conducted at the University "Magna Graecia" of Catanzaro, Catanzaro, Italy. Patients: One hundred nonobese anovulatory women with PCOS and primary infertility participated. Interventions: We administered metformin hydrochloride (850 mg twice daily) plus placebo (group A) or placebo plus CC (150 mg for 5 d from the third day of a progesterone withdrawal bleeding) (group B) for 6 months each. Main Outcome Measures: The main outcome measures were ovulation, pregnancy, abortion, and live-birth rates. Results: The subjects of groups A (n = 45) and B (n = 47) were studied for a total of 205 and 221 cycles, respectively. The ovulation rate was not statistically different between the two treatment groups (62.9 vs. 67.0%, P = 0.38), whereas the pregnancy rate was significantly higher in group A than group B (15.1 vs. 7.2%, P = 0.009). The difference found between groups A and B regarding the abortion rate was significant (9.7 vs. 37.5%, P = 0.045), whereas a positive trend was observed for the live-birth rate (83.9 vs. 56.3%, P = 0.07). The cumulative pregnancy rate was significantly higher in group A than group B (68.9 vs. 34.0%, P < 0.001). Conclusions: Six-month metformin administration is significantly more effective than six-cycle CC treatment in improving fertility in anovulatory nonobese PCOS women. abstract_id: PUBMED:28725153 Ovulation induction by metformin among obese versus non-obese women with polycystic ovary syndrome. Objectives: There is some evidence that the efficacy of metformin as an ovulation stimulation agent depends on the body mass index (BMI) of the treated anovulatory women with polycystic ovary syndrome (PCOS). The aim of this study was to examine the likelihood of successful ovulation among obese (BMI ⩾30 kg/m2) versus non-obese (BMI <30 kg/m2) women with PCOS. Methods: A total of 243 medical charts of women with PCOS who visited King Khaled University Hospital (KKUH) in Riyadh, Saudi Arabia, between 2006 and 2012 were reviewed. Patients' sociodemographic, laboratory, and medical data were collected. Descriptive statistics and multiple logistic regression analyses were performed to compare the patients' baseline data and successful ovulation among the obese and non-obese anovulatory women with PCOS, respectively. Results: One hundred and nine women with PCOS who were prescribed metformin for ⩾3 months were included in the study. Almost 60% of the women who were included in the study were obese. The likelihood of ovulation among obese women with PCOS was 77.9% lower (odds ratio = 0.221, 95% CI 0.052-0.947, P = 0.042) than that of their non-obese counterparts. Conclusion: The findings of this study suggest that metformin is more effective as an ovulation stimulation agent when administered to non-obese women with PCOS. Lifestyle modifications such as diet and exercise should be emphasized as an integral part of any treatment plan for PCOS.
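The obese-versus-non-obese comparison above is reported as an odds ratio with a 95% confidence interval from multiple logistic regression. For orientation, here is a stdlib-only Python sketch of the usual log-odds-ratio normal approximation for a raw 2x2 table; the cell counts are invented, and an adjusted OR such as the study's 0.221 would not be exactly reproducible from a 2x2 table alone.

```python
# Crude odds ratio and 95% CI for ovulation, obese vs non-obese (hypothetical).
import math

a, b = 20, 45   # obese: ovulated, did not ovulate (invented counts)
c, d = 30, 14   # non-obese: ovulated, did not ovulate (invented counts)

or_ = (a * d) / (b * c)
se = math.sqrt(1/a + 1/b + 1/c + 1/d)          # SE of log(OR)
lo = math.exp(math.log(or_) - 1.96 * se)
hi = math.exp(math.log(or_) + 1.96 * se)
print(f"OR = {or_:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```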
abstract_id: PUBMED:27151490 Mono-ovulation in women with polycystic ovary syndrome: a clinical review on ovulation induction. Polycystic ovary syndrome (PCOS) affects 5-10% of women of reproductive age and is the most common cause of anovulatory infertility. The treatment approaches to ovulation induction vary in efficacy, treatment duration and patient friendliness. The aim was to determine the most efficient, evidence-based method to achieve mono-ovulation in women diagnosed with PCOS. Publications in English providing information on treatment, efficacy and complication rates were included until September 2015. Systematic reviews, meta-analyses and randomized controlled trials were favoured over cohort and retrospective studies. Clomiphene citrate is recommended as primary treatment for PCOS-related infertility. It induces ovulation in three out of four patients, the risk of multiple pregnancies is modest and the treatment is simple and inexpensive. Gonadotrophins are highly efficient in a low-dose step-up regimen. Ovulation rates are improved by lifestyle interventions in overweight women. Metformin may improve the menstrual cycle within 1-3 months, but does not improve the live birth rate. Letrozole is effective for ovulation induction, but is an off-label drug in many countries. Ovulation induction in women with PCOS should be individualized with regard to weight, treatment efficacy and patient preferences with the aim of achieving mono-ovulation and subsequently the birth of a singleton baby. abstract_id: PUBMED:11120504 Induction of ovulation in infertile women with hyperandrogenism and insulin resistance. The polycystic ovary syndrome is a common cause of anovulatory infertility. Women with severe insulin resistance are a unique subset of polycystic ovary syndrome. The syndrome of hyperandrogenism, insulin resistance, and acanthosis nigricans (HAIR-AN syndrome) is one presentation of the insulin-resistant subset of polycystic ovary syndrome. Insulin resistance and hyperandrogenism are caused by genetic and environmental factors. In women with anovulatory infertility caused by hyperandrogenism and insulin resistance, clomiphene citrate treatment often fails to result in pregnancy. For these women, weight loss and insulin sensitizers can be effective methods of inducing ovulation and pregnancy and may reduce the number of clomiphene-resistant women with polycystic ovary syndrome who are treated with gonadotropins, ovarian surgery, or in vitro fertilization-embryo transfer. abstract_id: PUBMED:16129615 Patient-tailored conventional ovulation induction algorithms in anovulatory infertility. Conventional treatment of normogonadotropic anovulatory infertility is ovulation induction using the antiestrogen clomiphene citrate, followed by follicle-stimulating hormone. Multiple follicle development, associated with ovarian hyperstimulation, and multiple pregnancy remain the major complications. Cumulative singleton and multiple pregnancy rate data after different induction treatments are needed. Newer ovulation induction interventions, such as insulin-sensitizing drugs, aromatase inhibitors and laparoscopic ovarian electrocoagulation, should be compared with conventional strategies. Ovulation induction efficiency might improve if patient subgroups with altered chances for success or complications with new or conventional techniques could be identified, using multivariate prediction models based on initial screening characteristics. 
This would make ovulation induction more cost-effective, safe and convenient, enabling doctors to advise patients on the most effective and patient-tailored treatment strategy. Answer: Yes, metformin does induce ovulation in normoandrogenic anovulatory women. A randomized trial comparing metformin with placebo in 24 anovulatory women with normal androgen levels found that over 3 months, there were significantly more ovulatory cycles with metformin (16 cycles) than with placebo (4 cycles). The success of ovulation with metformin did not correlate with changes in androgen, insulin, or insulin sensitivity parameters, suggesting that metformin's effect on ovulation may be independent of its effects on these levels (PUBMED:15547527). Additionally, metformin has been shown to be effective in inducing ovulation in nonobese adolescents with anovulatory hyperandrogenism, with a study reporting that 78% of participants ovulated after 6 months of treatment (PUBMED:11502783). However, the efficacy of metformin as an ovulation induction agent may be influenced by the body mass index (BMI) of the treated women, with evidence suggesting that metformin is more effective in non-obese women with PCOS compared to obese women (PUBMED:28725153). Furthermore, metformin has been compared to clomiphene citrate (CC) as a first-line treatment for ovulation induction in nonobese anovulatory women with PCOS, and the study found that metformin was significantly more effective than CC in improving fertility outcomes (PUBMED:15840746). Overall, these studies support the use of metformin for inducing ovulation in anovulatory women, including those who are normoandrogenic and nonobese.
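The progesterone criterion used in the adolescent metformin study above (PUBMED:11502783) reduces to simple threshold arithmetic. A minimal Python sketch of that criterion follows; the function names and the month-6 progesterone values are hypothetical illustrations, not data from the study, although they are chosen so that 14 of 18 cycles ovulate, matching the reported 78% rate.

def cycle_ovulated(progesterone_ng_ml, threshold=8.0):
    # A premenstrual serum progesterone level greater than 8 ng/ml was the ovulation marker
    return progesterone_ng_ml > threshold

def ovulation_rate(progesterone_values):
    # Fraction of girls whose premenstrual progesterone exceeds the marker
    ovulated = sum(cycle_ovulated(p) for p in progesterone_values)
    return ovulated / len(progesterone_values)

# Hypothetical premenstrual progesterone values (ng/ml) after 6 months on metformin
month6 = [9.2, 3.1, 10.5, 12.0, 8.4, 15.1, 9.9, 2.7, 11.3, 8.9,
          13.6, 10.2, 3.5, 9.4, 14.8, 8.1, 12.7, 3.9]
print(f"Ovulation rate: {ovulation_rate(month6):.0%}")  # 14/18 -> 78%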
Instruction: Study of medical students' malpractice fear and defensive medicine: a "hidden curriculum"? Abstracts: abstract_id: PUBMED:24868307 Study of medical students' malpractice fear and defensive medicine: a "hidden curriculum"? Introduction: Defensive medicine is a medical practice in which health care providers' primary intent is to avoid criticism and lawsuits, rather than providing for patients' medical needs. The purpose of this study was to characterize medical students' exposure to defensive medicine during medical school rotations. Methods: We performed a cross-sectional survey study of medical students at the beginning of their third year. We gave students Likert scale questionnaires, and their responses were tabulated as a percent with 95% confidence interval (CI). Results: Of the 124 eligible third-year students, 102 (82%) responded. Most stated they rarely worried about being sued (85.3% [95% CI=77.1% to 90.9%]). A majority felt that faculty were concerned about malpractice (55.9% [95% CI=46.2% to 65.1%]), and a smaller percentage stated that faculty taught defensive medicine (32.4% [95% CI=24.1% to 41.9%]). Many students believed their satisfaction would be decreased by malpractice concerns (MC) and lawsuits (51.0% [95% CI=41.4% to 60.5%]). Some believed their choice of medical specialty would be influenced by MC (21.6% [95% CI=14.7% to 30.5%]), and a modest number felt their enjoyment of learning medicine was lessened by MC (23.5% [95% CI=16.4% to 32.6%]). Finally, a minority of students worried about practicing and learning procedures because of MC (16.7% [95% CI=10.7% to 25.1%]). Conclusion: Although third-year medical students have little concern about being sued, they are exposed to malpractice concerns and taught considerable defensive medicine by faculty. Most students believe that fear of lawsuits will decrease their future enjoyment of medicine. However, less than a quarter of students felt their specialty choice would be influenced by malpractice worries and that malpractice concerns lessened their enjoyment of learning medicine. [West J Emerg Med. 2014;15(3):293-298.].
This qualitative study measured the students' perception of the hidden curriculum through semi-structured interviews. All of the interviews were recorded and then transcribed. These transcripts were divided into sentences and phrases, which were labelled as units. Similar units were aggregated and named as codes, and similar codes were then aggregated into themes. Results: Four main themes emerged: role modeling, personal attitudes and beliefs, hierarchy, and social and ethical culture. The results of the present study showed that it is necessary to discuss the hidden curriculum. We are unaware of the hidden curriculum, but even when we are aware of it, we are unwilling to act. Information about issues related to the hidden and informal curriculum, as well as knowledge of students' viewpoints, is necessary. It seems necessary to provide students with information about the hidden curriculum and to encourage patient-centered curricula, such as an integrated curriculum, early in training. abstract_id: PUBMED:34457773 Exploring the Hidden Curriculum's Impact on Medical Students: Professionalism, Identity Formation and the Need for Transparency. The hidden curriculum within medical education has been a topic of recent debate. Consensus opinion regarding the continued relevance of this term, what constitutes the hidden curriculum, and the nature of its impact does not exist. Further research is required to contribute to this debate. This work sets out to investigate which factors beyond taught cognitive knowledge influence medical students in clinical and educational environments and examine how this occurs. Semi-structured focus group interviews were conducted with 39 students from one UK medical school. Fourteen faculty were interviewed individually to triangulate data. Data were analysed using constructivist thematic analysis, informed by grounded theory convention. The presence of the hidden curriculum was clearly demonstrated, acting through role modelling, organizational culture, stereotyping and professional dress. Mentioned frequently were the influences of the hidden curriculum on student professionalism and identity development. Professionalism was perceived as being negatively impacted by the hidden curriculum and seen as an imposition from senior faculty to control students. Students believe medical identity formation begins prior to medical school, in a process known as "anticipatory socialization", a previously unstudied identity transition. Students felt covert institutional agendas negatively impacted their identity, pushing them further from the identity their institution was encouraging them to acquire. Key messages for educators include the need to explore the hidden curriculum through discussion with students. Improving transparency of organizational culture may allow students to interpret institutional agendas in the way institutions formally intend, reducing orthogonal interpretations of organizational culture and subsequent impact upon identity formation. abstract_id: PUBMED:34457924 Students' and Faculty Perspectives Toward the Role and Value of the Hidden Curriculum in Undergraduate Medical Education: a Qualitative Study from Saudi Arabia. A hidden medical curriculum is defined as the unwritten, unofficial, and unintended learning that students experience in medical school along with more formal aspects of education. This term describes the behaviours, attitudes, assumptions and beliefs conveyed by teachers, peers and the surrounding environment.
However, more research is needed to evaluate its impact on student and faculty interactions in this context. We conducted this qualitative study utilizing focus group and semi-structured interviews of students and faculty to evaluate the perspectives of medical students and faculty toward the role and impact of the hidden medical curriculum in medical education at Alfaisal University, Riyadh, Saudi Arabia. Data were analysed using open, axial and selective coding within a thematic framework analysis. Interviewees consisted of 24 students in years 1-3 during the spring semester 2018-2019, 8 faculty members and 4 teaching assistants. We identified six core themes of the hidden curriculum at Alfaisal University: role and behavioural modelling, value-based teaching, interpersonal faculty-student interactions, effects of diversity and socialization, teaching methodologies and the hidden curriculum, and mentoring and student support systems. Although some of the themes identified in these focus group interviews were similar to previously published studies, the novel themes that we identified were diversity, socialization and interpersonal faculty-student interactions. We conclude that identifying the issues pertaining to the hidden curriculum is important for the development of medical students and for nurturing and upholding the values that we want to instil in our future physicians. abstract_id: PUBMED:25694279 Assessment of hidden curriculum components by medical students. Background: The hidden curriculum refers to the unwritten, unofficial, and often unintended lessons, values, and perspectives that students learn at the university, which influence the acquisition of professional skills. Aim: To analyze the perception of the influence of the hidden curriculum on the education of medical students at the Universidad de Concepción, Chile. Material And Methods: Qualitative investigation with a case study approach. Seventeen graduated medical students were selected by probability sampling. A semi-structured interview was used to collect the information and a content analysis was applied. Results: Forty-seven percent of participants recognized having fulfilled their academic expectations. As favorable factors for academic achievement, the students underlined clinical practice and access to patients and to clinical fields. As negative factors, they identified the lack of commitment, educational support and over-specialization of their mentors. Conclusions: The results show the strengths and weaknesses of the educational environment of undergraduate medical students. This information should be used to modify teaching environments. abstract_id: PUBMED:34315681 Bursting the Hidden Curriculum Bubble: A Surgical Near-Peer Mentorship Pilot Program for URM Medical Students. The hidden curriculum of unspoken professional expectations negatively impacts medical student interest in surgery. Medical student mentorship and early surgical exposure have been shown to demystify the hidden curriculum. Although residents and faculty play a vital role, near-peer mentorship may aid in uncovering the hidden curriculum and promoting medical student interest in surgery, especially for those learners who are underrepresented in medicine. We developed and implemented a formalized near-peer mentorship program composed of quarterly small-group Surgical Peer Teacher-led lessons and one-on-one Surgical Support Team mentorship meetings covering surgical curriculum topics for medical students at an academic medical school.
This structured near-peer mentorship model provides a mechanism to demystify surgical culture, increase early access to surgical mentorship, and develop mentorship skills amongst students. This program aims to uncover the surgical hidden curriculum to improve surgical career support and interest among medical students with less exposure and access to physician role models. This longitudinal mentorship model is student-run and can be easily adapted to enhance existing support models at medical schools. Future studies will evaluate utilization, impact on surgical specialty interest, and efficacy in demystifying the surgical hidden curriculum. abstract_id: PUBMED:35186428 A Novel Curricular Design Exposing Clinical Medical Students to the Hidden Curriculum. Background: The Curriculum Committee of a medical school introduced a longitudinal course for clinical medical students addressing the hidden curriculum as a way to enhance the overall learning environment in undergraduate medical education. Methods: This novel design included podcasts, virtual online sessions, and self-reflection videos. Outcomes: Students and faculty viewed it as successful. abstract_id: PUBMED:28127752 Malpractice Claim Fears and the Costs of Treating Medicare Patients: A New Approach to Estimating the Costs of Defensive Medicine. Objective: To estimate the cost of defensive medicine among elderly Medicare patients. Data Sources: We use a 2008 national physician survey linked to respondents' elderly Medicare patients' claims data. Study Design: Using a sample of survey respondent/beneficiary dyads stratified by physician specialty, we estimated cross-sectional regressions of annual costs on patient covariates and a medical malpractice fear index formed from five validated physician survey questions. Defensive medicine costs were calculated as the difference between observed patient costs and those under hypothetical alternative levels of malpractice concern, and then aggregated to estimate average defensive medicine costs per beneficiary. Data Collection Methods: The physician survey was conducted by mail. Patient claims were linked to survey respondents and reweighted to approximate the elderly Medicare beneficiary population. Principal Findings: Higher levels of the malpractice fear index were associated with higher patient spending. Based on the measured associations, we estimated that defensive medicine accounted for 8 to 20 percent of total costs under alternative scenarios. The highest estimate is associated with a counterfactual of no malpractice concerns, which is unlikely to be socially optimal as some extrinsic incentives to avoid medical errors are desirable. Among specialty groups, primary care physicians contributed the most to defensive medicine spending. Higher costs resulted mostly from more hospital admissions and greater postacute care. Conclusions: Although results are based on measured associations between malpractice fears and spending, and may not reflect the true causal effects, they suggest defensive medicine likely contributes substantial additional costs to Medicare. abstract_id: PUBMED:27909633 The Hidden Curriculum: Exposing the Unintended Lessons of Medical Education. Introduction: The hidden curriculum is a set of ethical, moral, and value-based teachings communicated to doctors-in-training, providing a basis for their future interactions with patients, peers, and colleagues.
The aim of our study is to introduce the concept of the hidden curriculum to a cohort of third-year medical students and to subsequently evaluate their understanding. In particular, we sought to measure and benchmark the degree of hidden curriculum recognition within a Canadian medical education context. With the help of student feedback, we elicited ideas for future directions. Methods: One hundred and fifty-four third-year medical students completing their obstetrics and gynaecology core clinical rotation attended a workshop on the hidden curriculum. Students completed two sets of evaluations: a voluntary anonymous pre- and post-workshop questionnaire evaluating their knowledge and opinions regarding the hidden curriculum, and a mandatory workshop evaluation. Answers to pre- and post-workshop questionnaires were compared using the Mann-Whitney U test, and thematic analysis was used to code the students' comments to identify common themes. Results: A standardized workshop on the hidden curriculum significantly improved students' understanding and highlighted the importance of the hidden curriculum. Voluntary student comments (N = 108) were categorized according to five themes: 1) students who were not sensitized to the hidden curriculum (8; 7.4%); 2) students who were sensitized but unaware of the hidden curriculum (12; 11.1%); 3) students who were sensitized and aware of the hidden curriculum (34; 31.5%); 4) comments on teaching methodologies/environment (43; 39.8%); and 5) suggestions for enhancement (11; 10.2%). Conclusions: A simple, cost-effective intervention, such as a workshop, can effectively assess and address the hidden curriculum. Many students are highly sensitized to and are aware of the positive and negative effects of role modeling on their development. The students are calling for similar interventions to be directed at the postgraduate and faculty level. Answer: The study of medical students' malpractice fear and defensive medicine indeed touches upon the concept of the "hidden curriculum" in medical education. The hidden curriculum refers to the unwritten, unofficial, and often unintended lessons, values, and perspectives that students learn alongside the formal curriculum. It encompasses behaviors, attitudes, assumptions, and beliefs conveyed by teachers, peers, and the surrounding environment, which can significantly influence students' professional development and identity formation (PUBMED:34457924).
Another study highlighted the impact of the hidden curriculum on medical students' professionalism and identity development, with students feeling that covert institutional agendas negatively impacted their identity (PUBMED:34457773). Efforts to address the hidden curriculum include exposing students to its elements through novel curricular designs, such as longitudinal courses, podcasts, virtual sessions, and self-reflection videos, which have been viewed as successful by students and faculty (PUBMED:35186428). Additionally, near-peer mentorship programs have been developed to demystify the hidden curriculum and promote medical student interest in surgery, particularly for underrepresented students (PUBMED:34315681). In summary, the study of medical students' malpractice fear and defensive medicine is intertwined with the hidden curriculum, which shapes students' learning experiences and professional identities in ways that are not always explicitly acknowledged or intended within the formal curriculum.
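The malpractice survey above reports each Likert response as a percent with a 95% CI; for example, 87 of 102 respondents (85.3%) rarely worried about being sued, with a reported interval of 77.1% to 90.9%. That interval is consistent with a Wilson score interval, although the paper does not state which method was used; the sketch below is an illustration under that assumption.

from math import sqrt

def wilson_ci(successes, n, z=1.96):
    # Wilson score interval for a binomial proportion (z = 1.96 for 95% coverage)
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

lo, hi = wilson_ci(87, 102)  # 87/102 is approximately 85.3%
print(f"85.3% (95% CI {lo:.1%} to {hi:.1%})")  # prints 77.1% to 90.9%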
Instruction: Is three-dimensional power Doppler ultrasound useful in the assessment of placental perfusion in normal and growth-restricted pregnancies? Abstracts: abstract_id: PUBMED:18254148 Is three-dimensional power Doppler ultrasound useful in the assessment of placental perfusion in normal and growth-restricted pregnancies? Objectives: To investigate three-dimensional (3D) power Doppler ultrasound indices in the assessment of placental perfusion and their relationship to gestational age (GA), placental position and umbilical artery Doppler flow velocity waveform (FVW) patterns in normal and intrauterine growth-restricted (IUGR) pregnancies. Methods: Forty-five pregnant women at 23-37 weeks' gestation were studied, of whom 30 had IUGR and 15 were controls. Nine of the IUGR group had normal umbilical FVWs (NED), nine had abnormal patterns but positive end-diastolic velocities (PED) and 12 had absent or reversed end-diastolic velocities (AED). Placental position was assessed as being posterior, anterior or lateral. 3D power Doppler indices related to placental perfusion (vascularization index (VI), vascularization flow index (VFI) and flow index (FI)) were obtained from five different sampling sites in each placenta. Results: VI, VFI and FI were not significantly dependent on GA or placental position. VI and VFI were significantly lower in PED and AED cases compared with controls, while FI was reduced in the AED group only. VI and VFI showed high variability between different sampling sites within each placenta, while the variability of FI was much lower. Conclusions: 3D power Doppler sonography can provide new insights into placental pathophysiology. FI, which identifies the most severe cases of placental impairment, appears to be the most reliable index because of its low intraplacental variability. Further studies are needed to verify its accuracy when applied in clinical practice as a substitute for or an adjunct to umbilical artery Doppler studies. abstract_id: PUBMED:29793097 Quantitative assessment of placental perfusion by three-dimensional power Doppler ultrasound for twins with selective intrauterine growth restriction in one twin. Objective: We quantitatively assessed placental perfusion using three-dimensional (3D) power Doppler ultrasound for twins with selective intrauterine growth restriction (sIUGR) in one twin. Study Design: A prospective cross-sectional study was performed for 104 normal monochorionic diamniotic twins and 30 twins with sIUGR. Twins with sIUGR were divided into two groups: in group I (19 twins), both fetuses had continuous forward end-diastolic blood flow in the umbilical artery; in group II (11 twins), the smaller fetus had a loss or a reverse end-diastolic blood flow in the umbilical artery, but the larger fetus had continuous forward end-diastolic blood flow. Volume data of the placenta was acquired in each twin at the plane where the umbilical cord entered the placenta by 3D power Doppler ultrasound, and imaging software was used to calculate the placental vascularization index (VI), flow index (FI), and vascularization flow index (VFI). Results: Among smaller fetuses, the sIUGR and control groups showed significant differences in VI and VFI, as well as significant differences between sIUGR group II and control group in FI. No such differences in placental perfusion were found among larger fetuses. 
Conclusions: 3D power Doppler ultrasound could sensitively display the varying degrees of changes in placental perfusion, which could be used for clinical monitoring of placental perfusion in sIUGR twins. abstract_id: PUBMED:25074797 Assessment of placental volume and vascularization at 11-14 weeks of gestation in a Taiwanese population using three-dimensional power Doppler ultrasound. Background: The placental volume and vascular indices are crucial in helping doctors to evaluate early fetal growth and development. Inadequate placental volume or vascularity might indicate poor fetal growth or gestational complications. This study aimed to evaluate the placental volume and vascular indices during the period of 11-14 weeks of gestation in a Taiwanese population. Methods: From June 2006 to September 2009, three-dimensional power Doppler ultrasound was performed in 222 normal pregnancies from 11-14 weeks of gestation. Power Doppler ultrasound was applied to the placenta and the placental volume was obtained by a rotational technique (VOCAL). The three-dimensional power histogram was used to assess the placental vascular indices, including the mean gray value, the vascularization index, the flow index, and the vascularization flow index. The placental vascular indices were then plotted against gestational age (GA) and placental volume. Results: Our results showed that the linear regression equation for placental volume using gestational week as the independent variable was placental volume = 18.852 × GA - 180.89 (r = 0.481, p < 0.05). All the placental vascular indices showed a constant distribution throughout the period 11-14 weeks of gestation. A tendency for a reduction in the placental mean gray value with gestational week was observed, but without statistical significance. Conclusion: All the placental vascular indices estimated by three-dimensional power Doppler ultrasonography showed a constant distribution throughout gestation. abstract_id: PUBMED:19327824 Placental vascular sonobiopsy using three-dimensional power Doppler ultrasound in normal and growth restricted fetuses. Objective: To investigate placental vascular sonobiopsy using three-dimensional (3D) power Doppler ultrasound to assess placental vascularization in normal and growth restricted fetuses. Methods: Placental vascular sonobiopsy using 3D power Doppler ultrasound with the VOCAL imaging analysis program was performed on 208 normal fetuses between 12 and 40 weeks of gestation and 13 pregnancies with fetal growth restriction (FGR) at 22-39 weeks' gestation. Only pregnancies with an entirely visualized anterior placenta were included in the study. 3D power Doppler indices related to placental vascularization (vascularization index (VI), flow index (FI) and vascularization flow index (VFI)) were calculated in each placenta. Intra- and inter-class correlation coefficients and intra- and inter-observer agreements of measurements were assessed. Results: A weak linear relationship was found between the gestational age and VI, FI, and VFI, respectively. VI values in 8 of 13 FGR pregnancies (61.5%), FI value in one FGR pregnancy (7.7%) and VFI values of 6 FGR pregnancies (46.2%) were below -1.5SD of the reference ranges for VI, FI and VFI, respectively. After 32 weeks of gestation, VI, FI, and VFI values in 10 FGR pregnancies were significantly lower compared to 79 normal pregnancies, respectively (P < 0.01). 
All 3D power Doppler indices (VI, FI and VFI) showed a correlation greater than 0.85, with good intra- and inter-observer agreements. Conclusion: Our findings suggest that placental vascular sonobiopsy using 3D power Doppler ultrasound may provide new information on the assessment of placental vascularization in normal and FGR pregnancies, while placental perfusion is reduced in FGR compared to normal pregnancy. However, the data and their interpretation in our study should be taken with some degree of caution because of the small number of FGR subjects studied. Further studies involving a larger sample size of FGR pregnancies are needed to confirm the usefulness of placental vascular sonobiopsy using 3D power Doppler ultrasound in clinical practice. abstract_id: PUBMED:27577707 3D power Doppler ultrasound assessment of placental perfusion during uterine contraction in labor. Introduction: To assess placental perfusion during spontaneous or induced uterine contraction in labor at term using placental vascular sonobiopsy (PVS) by 3D power Doppler ultrasound with the VOCAL imaging analysis program. Method: PVS was performed in 50 normal pregnancies (32 in the spontaneous labor group [SLG], and 18 in the induced labor group with oxytocin or prostaglandin F2α [ILG]) at 37-41 weeks of gestation to assess placental perfusion during uterine contraction in labor. Only pregnancies with an entirely visualized anterior placenta were included in the study. Data acquisition was performed before, during (at the peak of contraction), and after uterine contraction. 3D power Doppler indices such as the vascularization index (VI), flow index (FI), and vascularization flow index (VFI) were calculated in each placenta. Results: There were no abnormal fetal heart rate tracings during contraction in either group. VI and VFI values were significantly reduced during uterine contraction in both groups (SLG, -33.4% [range -97.0 to 15.2%]; ILG, -49.6% [range -78.2 to -4.0%]) (P < 0.001). The FI value in the ILG group was significantly lower during uterine contraction (P = 0.035), whereas it did not change during uterine contraction in the SLG group. After uterine contraction, all vascular indices returned almost to the same level as that before uterine contraction. However, the FI value in ILG (-8.6% [range -19.7 to 16.0%]) was significantly lower than that in SLG (2.4% [range -13.4 to 38.1%]) after uterine contraction (P < 0.05). All 3D power Doppler indices (VI, FI, and VFI) during uterine contraction (at the peak of contraction) showed a correlation greater than 0.7, with good intra- and inter-observer agreements. Discussion: Our findings suggest that uterine contraction in both spontaneous and induced labors causes a significant reduction in placental perfusion. Reduced placental blood flow tends to be more marked in induced than in spontaneous uterine contraction. To the best of our knowledge, this is the first study on the non-invasive assessment of placental perfusion during uterine contraction in labor using 3D power Doppler ultrasound. However, the data and their interpretation in the present study should be taken with some degree of caution because of the small number of subjects studied. Further studies involving a larger sample size are needed to assess placental perfusion and vascularity using PVS during normal and abnormal uterine contractions in normal and high-risk pregnancies.
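The percent changes reported in the contraction study above are plain relative changes of each index against its pre-contraction baseline. A minimal Python sketch of that computation follows; the VI values are hypothetical, chosen only to land near the reported median change in the spontaneous labor group.

def percent_change(baseline, value):
    # Relative change of a vascular index against its pre-contraction baseline
    return (value - baseline) / baseline * 100

# Hypothetical VI readings for one placenta: before, at contraction peak, after
vi_before, vi_peak, vi_after = 12.0, 8.0, 11.8
print(f"VI at peak: {percent_change(vi_before, vi_peak):+.1f}%")   # -33.3%
print(f"VI after:   {percent_change(vi_before, vi_after):+.1f}%")  # -1.7%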
abstract_id: PUBMED:32964472 Three-dimensional power Doppler ultrasonography indicates that increased placental blood perfusion during the third trimester is associated with the risk of macrosomia at birth. Purpose: To investigate the association between placental blood perfusion and the occurrence of macrosomia at birth. Methods: This was a prospective cohort study including women with singleton pregnancies that aimed to measure placental blood perfusion using three-dimensional (3D) power Doppler ultrasonography in the second and third trimesters. We acquired three indices of placental blood flow, the vascularization index (VI), flow index (FI), and vascularization flow index (VFI), along with routine two-dimensional (2D) biometric measurements, including abdominal circumference (AC) and estimated fetal weight (EFW). Pregnancy outcomes were divided into two groups: newborns with a normal birth weight and those with macrosomia. We then compared all of the recorded variables between these two groups. We also determined the predictive efficiency of each variable using receiver-operating characteristic (ROC) curves. Results: The placental 3D power Doppler indices, including VI and FI, were significantly higher in the third trimester of pregnancies developing macrosomia, but not during the second trimester, as compared to those with a normal birth weight. ROC curve analysis for third-trimester VI and FI suggested a slight ability to predict macrosomia; this was also the case for AC and EFW. Interestingly, VI showed high sensitivity and low specificity, while FI showed low sensitivity and high specificity; the same pattern held for AC and EFW, respectively. Conclusions: Three-dimensional power Doppler ultrasound indices were significantly higher during the third trimester in pregnancies developing macrosomia. However, these indices had only a moderate ability to predict macrosomia. abstract_id: PUBMED:26955765 3D power Doppler ultrasound in early diagnosis of preeclampsia.
Introduction: Preeclampsia is a known cause of maternal, fetal and neonatal morbidity and mortality. We therefore evaluated the predictive value of 3D power Doppler (3DPD) indices of the uteroplacental circulation (UPC), measured in the first and second trimesters, comparing patients who developed preeclampsia (PE) with those who did not, and tested the hypothesis that the parameters of vascularization and placental flow intensity, as determined by three-dimensional (3D) ultrasound, differ between normal and preeclamptic pregnancies; if so, these indices could provide a suitable screening method. Methods: A prospective observational study using 3D power Doppler was performed to evaluate placental perfusion in 96 pregnant women who presented for routine ultrasound between 11 and 14 weeks. The placental vascularization index (VI), flow index (FI) and vascularization flow index (VFI) were calculated from the three-dimensional Doppler histogram. All patients repeated the exam between 16 and 20 weeks. The outcome was scored as normal or preeclamptic. Results: Placental vascular indices including VI, FI and VFI were significantly lower in preeclamptic placentas compared with controls in the study performed in the second trimester (p<0.001). There was no statistically significant difference in the patients examined in the first trimester. Discussion: Our findings suggest that 3D power Doppler assessment of placental vascular indices in the second trimester has the potential to detect women at risk for subsequent development of PE. abstract_id: PUBMED:33008645 Feasibility of three dimensional power Doppler ultrasonography methods to assess placental perfusion. Objectives: Given the crucial role of the placenta in establishing a healthy pregnancy, reliable non-invasive methods to measure placental perfusion are desirable. The aim of this study is to determine the reproducibility and potential bias of different three-dimensional power Doppler (3DPD) methods assessing placental perfusion. Methods: Ten singleton pregnancies around 16 weeks of gestation, with an anteriorly located placenta and centrally inserted umbilical cord, were included in this study. Eight different combinations of a specific placental sweep and sonobiopsy method were used to evaluate placental perfusion. Vascularization index (VI), flow index (FI) and vascularization-flow index (VFI) were determined offline using the 4D-view program. Reproducibility and repeatability of the methods, expressed as correlation coefficients and Bland-Altman mean differences, were calculated. Differences between sampling methods were analyzed using the t-test or Mann-Whitney U test. Results: Intra- and inter-class correlation coefficients (CC) were highest when using a spherical, centrally placed sonobiopsy of 2 cm3 in a whole placenta sweep (method 1; IntraCC VI 0.985, FI 0.769, VFI 0.993; InterCC VI 0.986, FI 0.784, VFI 0.987). Overall, intraCCs were higher than interCCs. The lowest mean differences in VI and FI were found when comparing spherical to manual sonobiopsies, whereas the mean differences in VFI were lowest when comparing centrally versus peripherally located sonobiopsies. Comparing the three vascular indices, the best median intra- and interCC and the lowest mean differences were found for VFI. Conclusions: Three-dimensional placental vascularization analysis showed the best reproducibility using a whole placental sweep volume and a centrally located, spherical sonobiopsy of 2 cm3.
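The feasibility study above summarizes agreement between repeated 3DPD measurements with correlation coefficients and Bland-Altman mean differences. A minimal numpy sketch of the Bland-Altman computation follows; the paired VI readings are hypothetical, and the 1.96-SD limits of agreement are the conventional choice rather than a detail reported in the abstract.

import numpy as np

def bland_altman(readings_a, readings_b):
    # Bias (mean difference) and 95% limits of agreement between paired readings
    diff = np.asarray(readings_a, float) - np.asarray(readings_b, float)
    bias = diff.mean()
    spread = 1.96 * diff.std(ddof=1)
    return bias, bias - spread, bias + spread

# Hypothetical paired VI readings from two sonobiopsy methods (n = 10)
vi_a = [10.2, 8.7, 12.1, 9.5, 11.0, 7.9, 10.8, 9.9, 12.6, 8.4]
vi_b = [9.8, 9.1, 11.6, 9.9, 10.5, 8.3, 11.2, 9.4, 12.1, 8.9]
bias, lower, upper = bland_altman(vi_a, vi_b)
print(f"bias {bias:.2f}, 95% limits of agreement [{lower:.2f}, {upper:.2f}]")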
abstract_id: PUBMED:12604113 Assessment of placental fractional moving blood volume using quantitative three-dimensional power Doppler ultrasound. To test the hypothesis that the placental fractional moving blood volume is different with advancing gestational age (GA), we assessed the vascularization index (VI), flow index (FI), and vascularization-flow index (VFI) of the placenta in normal pregnancy by using three-dimensional (3-D) power Doppler ultrasound (US). We enrolled 100 healthy pregnant women with gestational age between 20 and 40 weeks for this study. Three-dimensional power Doppler ultrasonography was used to assess the VI, FI and VFI in each case. Our results showed that the linear regression equations for VI, FI and VFI, using GA as the independent variable, were VI = 0.27107 × GA - 4.02748 (r = 0.84, p < 0.0001), FI = 0.56115 × GA + 34.28945 (r = 0.49, p < 0.001), and VFI = 0.15663 × GA - 2.53810 (r = 0.82, p < 0.0001), respectively. In addition, the VI, FI and VFI values of the placental flow were also positively correlated with the fetal growth indices, namely, biparietal diameter, occipitofrontal diameter, head circumference, abdominal circumference and estimated fetal weight (all p values < 0.001). In conclusion, our study illustrates that the fractional moving blood volume of the placenta is positively correlated with the increment of gestational age and the fetal growth indices. Our data may be used as a reference in the assessment of the placental fractional moving blood volume using quantitative 3-D power Doppler US. Answer: Three-dimensional (3D) power Doppler ultrasound has been shown to be useful in the assessment of placental perfusion in both normal and growth-restricted pregnancies. Studies have demonstrated that 3D power Doppler ultrasound can provide insights into placental pathophysiology by measuring indices such as the vascularization index (VI), flow index (FI), and vascularization flow index (VFI) (PUBMED:18254148; PUBMED:29793097; PUBMED:25074797; PUBMED:19327824; PUBMED:27577707; PUBMED:32964472; PUBMED:28651459; PUBMED:33008645; PUBMED:12604113). In normal pregnancies, these indices show a constant distribution throughout gestation (PUBMED:25074797), and the fractional moving blood volume of the placenta is positively correlated with the increment of gestational age and fetal growth indices (PUBMED:12604113). In cases of intrauterine growth restriction (IUGR), significant differences in VI and VFI have been observed compared to controls, indicating reduced placental perfusion (PUBMED:18254148; PUBMED:29793097; PUBMED:28651459). Specifically, in IUGR pregnancies, the VI and FI values were significantly lower, suggesting that 3D power Doppler indices may be useful for examining changes in circulation to characterize the underlying pathology (PUBMED:28651459). Moreover, the studies have shown that 3D power Doppler ultrasound can sensitively display varying degrees of changes in placental perfusion, which could be used for clinical monitoring of placental perfusion in selective intrauterine growth restriction (sIUGR) twins (PUBMED:29793097). Additionally, the method has been found to be reproducible and potentially useful for assessing placental perfusion and vascularity during normal and abnormal uterine contractions in normal and high-risk pregnancies (PUBMED:27577707).
In conclusion, 3D power Doppler ultrasound is a valuable tool for assessing placental perfusion in both normal and growth-restricted pregnancies, providing quantitative data that can help in the monitoring and management of these conditions.
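Two of the abstracts above publish reference regressions against gestational age: placental volume = 18.852 × GA - 180.89 at 11-14 weeks (PUBMED:25074797) and the VI/FI/VFI equations at 20-40 weeks (PUBMED:12604113). A minimal Python sketch evaluating those published equations follows; it simply transcribes the coefficients, and the equations should not be extrapolated outside the gestational ranges in which they were fitted.

def placental_volume_ml(ga_weeks):
    # Placental volume vs. GA, fitted at 11-14 weeks (PUBMED:25074797)
    return 18.852 * ga_weeks - 180.89

def placental_flow_indices(ga_weeks):
    # VI, FI, VFI vs. GA, fitted at 20-40 weeks (PUBMED:12604113)
    vi = 0.27107 * ga_weeks - 4.02748
    fi = 0.56115 * ga_weeks + 34.28945
    vfi = 0.15663 * ga_weeks - 2.53810
    return vi, fi, vfi

print(f"Volume at 12 wk: {placental_volume_ml(12):.1f} ml")
for ga in (24, 32, 40):
    vi, fi, vfi = placental_flow_indices(ga)
    print(f"GA {ga} wk: VI {vi:.2f}, FI {fi:.2f}, VFI {vfi:.2f}")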
Instruction: Does oral health say anything about survival in later life? Abstracts: abstract_id: PUBMED:19614719 Does oral health say anything about survival in later life? Findings in a Swedish cohort of 80+ years at baseline. Background: Oral health is an integral part of general health; oral health contributes to and is influenced by a nexus of inputs from biological, psychological, and social functioning. Little is known about the relationship between markers of oral health and subsequent survival in late life. Objectives: The aim of this study was to examine the relative importance of oral health indicators in the context of demographic and psychosocial variables on subsequent survival during an 8-year period in a population-based sample of the oldest-old. Methods: The study sample comprised 357 individuals with a median age of 86 years who were selected from participants in the comprehensive longitudinal Origins of Variance in the Old-Old: Octogenarian Twins (OCTO-Twin) study, which examined monozygotic and dizygotic twins aged 80 years and older on five occasions at 2-year intervals. The OCTO-Twin study includes a broad spectrum of biobehavioural measures of health and functional capacity, personality, well-being, and interpersonal functioning. Oral health variables were number of teeth, per cent decayed and filled surfaces (DFS%), and periodontal disease experience. A longevity quotient (LQ), the ratio between years actually lived and those statistically expected, was determined. The survival categories were shorter than expected, as expected, or longer-than-expected. Multivariate analyses and the Kaplan-Meier method were used in the survival analyses. Results: No associations between LQ and number of teeth, edentulousness, and periodontal disease experience were found. But survival of men with severe periodontal disease experience was shorter than expected. DFS% was significantly associated with survival. Individuals with a low number of decayed and filled surfaces (DFS) had a shorter-than-expected survival time compared with those with high DFS scores. The overall predictor of survival was cognitive status, independent of age and gender when dental variables were analysed in the context of psychosocial factors. In addition, number of teeth, smoking, and better financial status in childhood and during working life were also significant predictors. Conclusions: Oral health was significantly associated with subsequent survival in a sample of oldest-old individuals, although psychosocial factors were stronger predictors. abstract_id: PUBMED:34322677 Study of Migration and Later Life Health in Nepal. Purpose: This study investigates long-term consequences of individual migration experience on later life health, specifically self-rated health and functional difficulty. Design/methodology/approach: The study uses multiple community-, household-, and individual-level data sets from the Chitwan Valley Family Study (CVFS) in Nepal. The CVFS selected a systematic probability sample of 151 neighborhoods in Western Chitwan and collected information on all households and individuals residing in the selected sample neighborhoods. This study uses data from multiple surveys featuring detailed migration histories of 1,373 older adults, and information on their health outcomes, households, and communities. 
Findings: Results of the multi-level multivariate analysis show a negative association between the number of years of migration experience and self-rated health, and a positive association between migration and functional difficulty. These findings suggest a negative relationship between migration experience and later life health. Research Limitations/implications: Although we collected health outcome measures after the measurement of explanatory and control measures (a unique strength of this study), we were unable to control for baseline health outcomes. Also, due to the lack of time-varying measures of household socioeconomic status in the survey, this investigation was unable to control for measures associated with the economic prosperity hypothesis. Future research is necessary to develop panel data with appropriately timed measures. Practical Implications: The findings provide important insights that may help shape individuals' and their families' migration decisions. Originality/value: This research provides important insight to individuals lured by potential short-term economic prospects in destination places, as well as to scholars and policy makers from migrant-sending settings that are grappling with skyrocketing medical expenses, rapid population aging, and old age security services. abstract_id: PUBMED:29349264 Life course influences on later life health in China: Childhood health exposure and socioeconomic mediators during adulthood. China's unprecedented population aging and social and economic change raise important issues concerning life course determinants of advantage or disadvantage into later life. Data from the China Health and Retirement Longitudinal Study (CHARLS) 2013 were analysed to identify the influence of childhood health on later life health, as indicated by self-rated health, and how this influence could be mediated by social and economic positions (SEP) and resources later in the life span. CHARLS provides nationally representative data on 18,000 individuals aged 45 years and above in approximately 150 districts and 450 villages. Both a multivariate logit regression model and the KHB (Karlson/Holm/Breen) method were applied to examine and decompose the life span influences on later life health. The results show that childhood health accounts for approximately half of the effect directly, with the other half operating indirectly through social and economic variations during adulthood. Relative living standard, marital status and urban residence are the most significant and important social and economic mediators for men; for women, living standard and secondary schooling are most influential, while marital status is not significant. Implications for social and economic policies to improve later life health are discussed.
Utilizing a life-course stress process perspective and a sample of 250 couples in enduring marriages over 17 years (2001-2017), a structural equation model within a dyadic framework assessed the unique influences of stressful marital interactions on loneliness and physical health and the variation in bi-directional influences of loneliness and physical health over time. Marital interactions were relatively stable across life stages, yet marital interactions appear to influence loneliness and physical health. Notable distinctions were evident across life stages (from mid-life to later adulthood and then within later adulthood). Findings are discussed with an emphasis on the implications for health promotion and prevention programs targeting couples' quality of life in later years. abstract_id: PUBMED:29292775 Class and Health Inequality in Later Life: Patterns, Mechanisms and Implications for Policy. The growth of the post-retirement population, which has occurred as a result of rapid growth in life expectancy coupled with the ageing of the baby boomer cohort, has led to significant concern. This concern, however, typically neglects the heterogeneity of later life experiences and how these are patterned by inequalities that reflect how process of social stratification continue to operate into later life. This paper draws on a programme of work, based on analysis of the English Longitudinal Study of Ageing, to empirically examine questions of inequality in later life. It begins by illustrating the patterning of health inequality. It then investigates the importance of later life contexts and events in shaping inequality through and after the retirement process. In doing so it examines the extent to which later life continues to reflect stable social structures that shape inequalities and, consequently, health and wellbeing in later life. The paper then illustrates how the effects of socioeconomic position on health in later life can be theorised as a product of class processes, borrowing in part from Bourdieu. Other dimensions of inequality, such as gender, ethnicity, area and sexuality, are not discussed here. The paper concludes with a discussion of the need for a close focus on inequalities in later life in research, policy and practice. abstract_id: PUBMED:26970523 Oral Health-A Neglected Aspect of Subjective Well-Being in Later Life. Objectives: This study examined whether oral health is a neglected aspect of subjective well-being (SWB) among older adults. The key research question was whether deterioration in oral health among dentate older adults living in England was associated with decreases in SWB, using measures of eudemonic, evaluative, and affective dimensions of well-being. Methods: This secondary analysis used data from the third (2006-2007) and fifth (2010-2011) waves of respondents aged 50 and older from the English Longitudinal Study of Ageing (ELSA). We fitted multivariable regression models to examine the effects of changes in oral impacts on daily life and edentulism (complete tooth loss) on SWB (quality of life, life satisfaction, and depressive symptomatology). Results: A worsening in both oral health measures was associated with an increase in depressive symptoms even after adjusting for time-varying confounders including declining health, activities of daily living, and reduced social support. Becoming edentate was also associated with decreases in quality of life and life satisfaction. 
Discussion: A deterioration in oral health and oral health-related quality of life increases the risk of depressive symptoms among older adults and highlights the importance of oral health as a determinant of subjective well-being in later life. abstract_id: PUBMED:21491763 Oral health impact profile: an instrument for measuring the impact of oral health on the quality of life. A Dutch-language version of the Oral Health Impact Profile, a questionnaire that measures the impact of oral health on patients' quality of life, was developed and subsequently psychometrically tested among a group of patients with complaints concerning missing dentition or their dentures. In addition, a shortened version of this so-called OHIP-NL49, the OHIP-NL14, was psychometrically tested among a group of patients with temporomandibular disorders. The psychometric characteristics of both the OHIP-NL49 and the OHIP-NL14 were very good: both the reliability and the validity were high. The conclusion was that the OHIP-NL49 and the OHIP-NL14 are well suited for determining the impact of oral health on the quality of life. abstract_id: PUBMED:25881160 Inequality in oral health related to early and later life social conditions: a study of elderly in Norway and Sweden. Background: A life course perspective recognizes influences of socially patterned exposures on oral health across the life span. This study assessed the influence of early and later life social conditions on tooth loss and oral impacts on daily performances (OIDP) of people aged 65 and 70 years. Whether social inequalities in oral health changed after the usual age of retirement was also examined. In accordance with "the latent effect life course model", it was hypothesized that adverse early-life social conditions increase the risk of subsequent tooth loss and impaired OIDP, independent of later-life social conditions. Methods: Data were obtained from two cohort studies conducted in Sweden and Norway. The 2007 and 2012 waves of the surveys were used for the present study. Early-life social conditions were measured in terms of gender, education and country of birth, and later-life social conditions were assessed by working status, marital status and size of social network. Logistic regression and Generalized Estimating Equations (GEE) were used to analyse the data. Inverse probability weighting (IPW) was used to adjust estimates for missing responses and loss to follow-up. Results: Early-life social conditions contributed to tooth loss and OIDP in each survey year and both countries, independent of later-life social conditions. Lower education correlated positively with tooth loss, but did not influence OIDP. Foreign country of birth correlated positively with oral impacts in Sweden only. Later-life social conditions were the strongest predictors of tooth loss and OIDP across survey years and countries. GEE revealed significant interactions between social network and survey year, and between marital status and survey year, on tooth loss. Conclusion: The results confirmed the latent effect life course model in that early and later life social conditions had independent effects on tooth loss and OIDP among the elderly in Norway and Sweden. Between age 65 and 70, inequalities in tooth loss related to marital status declined, and inequalities related to social network increased. abstract_id: PUBMED:24974278 Creativity in later life.
The ageing population presents significant challenges for the provision of social and health services. Strategies are needed to enable older people to cope within a society ill prepared for the impacts of these demographic changes. The ability to be creative may be one such strategy. This review outlines the relevant literature and examines current public health policy related to creativity in old age with the aim of highlighting some important issues. As well as looking at the benefits and negative aspects of creative activity in later life, these are considered in the context of the theory of "successful ageing". Creative activity plays an important role in the lives of older people, promoting social interaction, providing cognitive stimulation and giving a sense of self-worth. Furthermore, it is shown to be useful as a tool in the multi-disciplinary treatment of health problems common in later life such as depression and dementia. There are a number of initiatives to encourage older people to participate in creative activities, such as arts-based projects which may range from visual arts to dance to music to intergenerational initiatives. However, participation shows geographical variation and often the responsibility of provision falls to voluntary organisations. Overall, the literature presented suggests that creative activity could be a useful tool for individuals and society. However, further research is needed to establish the key factors which contribute to patterns of improved health and well-being, as well as to explore ways to improve access to services. abstract_id: PUBMED:28474435 A review of the oral health-related quality of life (OHRQL) model for dental hygiene: Eighteen years later. Purpose: The purpose of this paper is to review available evidence to determine the impact of the oral health-related quality of life (OHRQL) Model for Dental Hygiene on education, research, and practice. Methods: A search was conducted in PubMed using the search terms oral health-related quality of life, OHRQL, and dental hygiene and limited to publications between 1998 and 2016. Google Scholar was also used to identify citations for the two original OHRQL model papers to capture the literature widely read by practicing dental hygienists but not indexed in PubMed. A total of 40 publications met the predetermined search strategy. Full publications were obtained and reviewed to determine how this research is being used in education, research, and practice. Findings: The OHRQL is being utilized to a very limited degree in education, research, and practice. The literature shows how the OHRQL is actively being used in the school where the model originated in addition to a dental hygiene programme in Japan. Studies have been conducted on a variety of research instruments, including the OHRQL, designed to capture OHRQL outcomes. Japanese researchers have used the OHRQL in studies designed to determine the impact of oral healthcare delivery on patients' oral health-related quality of life. Conclusion: Eighteen years of the literature confirms that the OHRQL is being used minimally in education, research, and practice. Greater emphasis is needed around the collection and assessment of oral health-related quality of life measures for providing patient-centred care. Answer: Yes, oral health does appear to have a relationship with survival in later life.
A study on a Swedish cohort of individuals aged 80 and above found that oral health was significantly associated with subsequent survival, although psychosocial factors were stronger predictors. Specifically, the number of decayed and filled surfaces (DFS%) was significantly associated with survival: individuals with a low number of DFS had shorter-than-expected survival times compared to those with high DFS scores. Cognitive status, number of teeth, smoking, and better financial status in childhood and during working life were also significant predictors of survival (PUBMED:19614719). Additionally, oral health has been identified as an important determinant of subjective well-being in later life. Deterioration in oral health and oral health-related quality of life increases the risk of depressive symptoms among older adults (PUBMED:26970523). This suggests that maintaining good oral health could contribute to better overall health and well-being, which may have implications for longevity. Moreover, life course studies have shown that early-life social conditions, including childhood health, can have a lasting impact on later life health outcomes, including oral health. These early-life conditions can indirectly affect later life health through social and economic variations during adulthood (PUBMED:29349264). Therefore, oral health in later life can be seen as a reflection of both past and present health determinants, which together may influence survival. In conclusion, while oral health is one of many factors that can influence survival in later life, it is a significant aspect that is associated with both physical health and subjective well-being. Maintaining good oral health could be an important component of strategies aimed at promoting health and longevity in older populations.
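The survival associations summarized in this answer come from time-to-event analyses. As a purely illustrative sketch of how such an association could be estimated with the lifelines library — the data, column names, and effect sizes below are invented, not taken from the cited cohort — a Cox proportional hazards model can relate an oral-health covariate to survival:

```python
# Hypothetical sketch: relating an oral-health covariate to survival with a
# Cox proportional hazards model. All data and effect sizes are simulated.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 300
dfs_pct = rng.uniform(0, 80, n)        # decayed/filled surfaces (%), invented
smoker = rng.integers(0, 2, n)
# Simulate survival times whose hazard depends on the covariates.
hazard = np.exp(-0.02 * dfs_pct + 0.5 * smoker)
time = rng.exponential(5.0 / hazard)
df = pd.DataFrame({
    "years": np.minimum(time, 10),     # administrative censoring at 10 years
    "died": (time < 10).astype(int),   # event indicator
    "dfs_pct": dfs_pct,
    "smoker": smoker,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="died")
cph.print_summary()  # hazard ratios and 95% CIs per covariate
```

In such a model, a hazard ratio below 1 for dfs_pct would correspond to the pattern reported above, where higher DFS scores went with longer survival.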
Instruction: Does increasing age have a negative impact on six-month functional outcome after coronary artery bypass? Abstracts: abstract_id: PUBMED:12219018 Does increasing age have a negative impact on six-month functional outcome after coronary artery bypass? Background: Advances in medical care and improvements in catheter-based interventions for coronary artery disease have advanced the age at which patients present for coronary artery bypass graft (CABG) operations. The impact of age on functional outcome after CABG is unclear. Methods: Functional outcome was assessed at baseline and at 6 months using the Duke Activity Status Index (DASI) in 337 consecutive patients undergoing primary or redo CABG. Complete functional outcome data were available in 292 patients (86%). Results: The mean baseline DASI score was significantly lower in patients ≥70 years of age (24.8 ± 14.5 vs 30.9 ± 17.8, P = .002) and at 6 months (29.6 ± 16.5 vs 38.0 ± 17.1, P = .0001); however, the 6-month change in DASI score (7.2 ± 17.5 in patients <70 years of age vs 4.9 ± 14.0 in patients ≥70 years of age, P = .23) was comparable. When examined in terciles of age, the 6-month change in DASI score was not significantly different (P = .11) in patients aged 31 to 60 years (5.8 ± 18.0), aged 61 to 71 years (8.7 ± 15.7), or aged 72 to 91 years (4.0 ± 14.6). Conclusions: These functional outcome results demonstrate that older patients present for CABG at a significantly lower functional level and remain at a significantly lower functional level 6 months after CABG. However, the functional improvement after CABG is not significantly different across age groups. abstract_id: PUBMED:10787462 Saphenous vein graft disease treated with the Wiktor Hepamed stent: procedural outcome, in-hospital complications and six-month angiographic follow-up. Objectives: To evaluate the effectiveness of electively placed heparin-coated stents in the treatment of coronary saphenous vein bypass grafts with de novo lesions less than 15 mm in diameter in a prospective study with all eligible consecutive patients presenting to Middelheim Hospital, Antwerp, Belgium between September 1997 and August 1998. Patients And Methods: Fifty patients with 53 lesions were studied. Anginal class, risk factors, quantitative coronary angiographic measurements pre- and postprocedure, procedural outcome, in-hospital events, clinical status on discharge, and six-month clinical and angiographic follow-up (in 48 patients) were recorded. All patients received acetylsalicylic acid and ticlopidine, unless known intolerance was present. Results: On average, 1.1 stents/patient were placed in very old saphenous vein grafts (11.7 ± 3.9 years). Procedural success was 98%. Only two non-Q wave myocardial infarctions (MIs) occurred, with no Q-wave MIs and no deaths during hospital stay. Length of hospital stay was short (2.4 ± 1.7 days), and 96% of patients were free of angina on discharge. At six months' follow-up, two patients had died, one of whom died of a noncardiac cause. One patient suffered a non-Q wave MI. At six months, 86% of patients were free from angina. Minimal luminal diameter increased from 1.14 mm before stenting to 3.33 mm after stenting, decreasing to 2.52 mm at six months. Restenosis was present in 22% of patients (21.6% of lesions). Conclusions: In a selected population with coronary saphenous vein bypass graft disease, Wiktor heparin-coated stents can be delivered with an excellent periprocedural outcome.
Six-month outcome appears favourable with a low recurrence of angina (18%) and a low rate of angiographic restenosis (21.6%). abstract_id: PUBMED:23074602 Immediate results and six-month clinical outcome after percutaneous coronary intervention in patients with prior coronary artery bypass surgery. Background: Redo coronary artery bypass grafting surgery (CABG) is associated with a higher risk of mortality than the first operation. However, the impact of percutaneous coronary intervention (PCI) on the outcome in such patients is currently unclear. We evaluated the in-hospital and six-month clinical outcomes of post-CABG patients who underwent PCI in our center. Methods: Between April 2008 and July 2009, 71 post-CABG patients (16 women and 55 men) underwent 110 stent implantations (74% drug-eluting stents) for 89 lesions. Sixty percent of the PCI procedures were performed on the native coronary arteries, 32% on graft arteries, and 8% on both types of vessels. Major adverse cardiac events (MACE) were recorded in hospital and at six months' follow-up. Results: The procedural success rate was 93%, and the in-hospital MACE rate was 5.6% (1 death, 3 myocardial infarctions). At 6 months, the incidence of MACE was 5.6% (no death or myocardial infarction, but 4 target lesion revascularizations) and there were 4 (5.6%) in-stent restenoses. There was no statistically significant difference in the comparison of MACE between the patients treated in either native arteries or in the grafts (15% vs. 12%, p value = 0.8). According to the univariate analysis, hypertension and the use of the bare metal stent vs. the drug-eluting stent were the significant predictors of MACE, whereas the multivariate analysis showed that only hypertension (OR = 3.7, 95% CI 3.4-4, p value < 0.048) was the independent predictor of MACE. The mean of the left ventricular ejection fraction had no effect on the incidence of MACE (p value = 0.9). The multivariate analysis showed hypertension (p value < 0.048) and the use of the bare metal stent (p value < 0.018) were the independent predictors of MACE. Chronic total occlusion (CTO) (p value < 0.01) was the independent predictor of the success rate. The prevalence of diabetes had no impact on the incidence of MACE according to the univariate analysis (p value = 0.9). Our multivariate analysis showed that hypertension and the use of the bare metal stent were the independent predictors of MACE and that chronic total occlusion was the independent predictor of the procedural failure rate. Conclusion: PCI is preferable to redo CABG for post-CABG patients. The independent predictors of MACE were hypertension and bare metal stents. abstract_id: PUBMED:10898415 Six-month clinical and angiographic outcome after successful excimer laser angioplasty for in-stent restenosis. Objectives: This study evaluated the clinical and angiographic six-month follow-up after excimer laser coronary angioplasty (ELCA) for restenosed coronary stents. Background: Excimer laser coronary angioplasty has recently been shown to be safe and efficient for the treatment of in-stent restenosis. Methods: Ninety-six consecutive patients successfully treated with ELCA within 141 stents were included in a six-month clinical and angiographic follow-up. Results: During follow-up there was one sudden death and one patient with documented myocardial infarction. Angina pectoris classified as ≥ Canadian Cardiovascular Society II recurred in 49 patients. Follow-up angiography was obtained in 89 patients (93%) with 133 stents.
Quantitative coronary angiography revealed a mean diameter stenosis of 77 ± 10% before intervention, 41 ± 12% after laser treatment and 11 ± 12% after adjunctive percutaneous transluminal coronary angioplasty (p < 0.001). Six months after ELCA the mean diameter stenosis had increased to 60 ± 26% (p < 0.001). A ≥50% diameter stenosis was present in 48 patients (54%); in 24 of these patients diameter stenosis was ≥70%. Total occlusions occurred in an additional 10 patients (11%). There was a trend toward an increased recurrent restenosis rate in patients with diabetes mellitus and long lesions or total occlusions (p = 0.059). Forty-eight patients (50%) received medical treatment after six months. Reinterventions were necessary in 30 patients (31%), and coronary artery bypass surgery was performed in 17 patients (18%). Event-free survival was 50%. Conclusions: Excimer laser angioplasty for in-stent restenosis was associated with a high incidence of recurrent restenosis in this group of patients, suggesting that this technique is unlikely to reduce recurrent in-stent restenosis and that other approaches are necessary. abstract_id: PUBMED:10412916 The influence of age on the outcome of primary coronary artery bypass grafting. Background: With the steady increase in the number of elderly patients requiring coronary artery bypass grafting (CABG), scepticism still exists as to whether this operation is justified in older patients or not, and whether there is an upper age limit. The aim of this study was to examine the effects of increasing age on the outcome of CABG. Methods: A retrospective review was performed on 2127 consecutive patients undergoing primary CABG from January 1990 through June 1996. The patients were arbitrarily divided into age groups: 69 years or less (n=1607), 70-75 years (n=371), 76-80 years (n=129) and older than 80 years (n=20). Mortality, morbidity and long-term survival for each group were compared. Results: The groups containing the elderly patients showed an over-representation of women, as well as a higher frequency of arterial hypertension, hyperlipidemia, previous infarction and diabetes. More patients, amongst the elderly, had unstable angina and diffuse coronary disease requiring urgent surgery and coronary thrombendarterectomy compared to those <70 years. Hospital mortality did not differ between the groups: 1.8, 3.0, 2.3 and 5.0%. There was an increased incidence of low postoperative cardiac output and a higher incidence of gastro-intestinal complications amongst the elderly. The 5-year survival was 92.2% (<70 years), 87.0% (70-75 years) and 86.3% (76-80 years) and the cardiac event-free survival was 87.5% (<70 years), 78.4% (70-75 years) and 80.8% (76-80 years) at 5 years. Conclusions: An acceptable early mortality and medium-term survival (5 years) together with excellent functional medium-term results support the justification of primary CABG in older patients irrespective of age. abstract_id: PUBMED:12601438 Self-reported functional status as a predictor of coronary artery bypass graft surgery outcome in elderly patients. Background: Surgeons have traditionally relied mainly on clinical intuition in the selection of elderly candidates for coronary artery bypass grafting (CABG). The overall increasing number of patients undergoing CABG and limited resources require that a more rational approach be used to screen out candidates who are least likely to benefit from the surgery.
Hypothesis: Preoperative functional status is a more sensitive predictor of mortality and poor postoperative functional status than age. Patients And Methods: Retrospective preoperative and postoperative geriatric assessment was obtained for 123 patients who had undergone CABG at the Montreal General Hospital, Montreal, Quebec. Preoperative and postoperative health and functional status were assessed using the Canadian version of the Medical Outcomes Study 36-Item Short-Form (SF-36) 1 to 1.5 years following surgery. The questionnaires were compiled according to the method described by Stewart et al., and scores were transformed linearly to a 0 to 100 scale. In addition to functional status, the presence of comorbidities and other risk factors known to influence the outcome of CABG were recorded. Results: There was no significant difference in the likelihood of having a poor functional status before or after surgery, or death as an outcome of CABG, in the young elderly group versus the old elderly group. Preoperative functional status was found to predict postoperative functional status; however, there was no significant association between preoperative functional status and the presence of comorbidity. The presence of comorbidity did not affect the postoperative functional status, but increased the likelihood of death. Conclusion: Functional status was demonstrated to be a significant predictor of CABG outcome. When compared with age, functional status was also found to be a more reliable predictor of CABG outcome, which had not been previously demonstrated. Women were found to be more likely to have a poor preoperative functional status than their male counterparts. This may account in part for the decreased success rate of CABG in elderly women. abstract_id: PUBMED:22334836 Immediate and Six-month Clinical Outcome of Percutaneous Coronary Intervention in a Tertiary Hospital in the Sultanate of Oman. Objectives: To evaluate the clinical characteristics, angiographic profile, in-hospital and six-month clinical outcome of patients who underwent percutaneous coronary intervention in a tertiary hospital in the Sultanate of Oman. Methods: Two hundred and five consecutive patients with both acute coronary syndrome and stable coronary artery disease, who underwent percutaneous coronary intervention between January 2007 and June 2007, were retrospectively analyzed. Follow-up information was obtained from outpatient visits of these patients at six months. Main Outcome Measures: The primary end point in this study was the occurrence of major adverse cardiovascular events (MACE), defined as cardiac death, any myocardial infarction (MI), cerebrovascular accident (CVA) and target vessel revascularization (TVR) with either repeat percutaneous coronary intervention (PCI) or coronary artery bypass surgery (CABG). Secondary end points included angiographic success rate, procedural success rate, angina status, and the rate of clinical and angiographic restenosis. Results: The angiographic and procedural success rates were 98% and 95%, respectively. Fifty-one percent of patients surveyed had single vessel disease, 34% had double vessel disease and triple vessel disease was seen in 15% of patients. Type A lesion was found in 16%, Type B in 55% and Type C in 29% of patients. The majority of patients had single vessel stenting (83%).
The mean ± SD number of stents per patient was 1.6 ± 0.9. There were four in-hospital deaths (2%) and six patients (2.9%) had non-ST elevation myocardial infarction before hospital discharge. Out of 205 patients, 53 patients were lost to follow-up. Among the 148 patients followed up, 105 patients (71%) were asymptomatic at follow-up, 36 (24%) patients had stable angina and 7 (5%) had a late myocardial infarction including three patients with stent thrombosis (2%). Among the 43 patients with angina or late infarction, 28 patients underwent coronary angiogram. Angiographic in-stent restenosis was seen in 14 patients. Of them, 8 patients underwent CABG and 6 patients repeat PCI. Fourteen patients had patent stents. The remaining fifteen patients were on optimal medications, including two patients with stent thrombosis, as they refused coronary angiogram. Overall, 132 of 148 patients (105 asymptomatic/14 patent stents/13 with angina) (89%) were free from major adverse cardiac events. Considering anginal status and repeat angiograms, the composite clinical (15 patients) and angiographic (14 patients) six-month restenosis rate in percutaneous coronary intervention patients (29/148) was 19.5%. Conclusion: Results of percutaneous coronary intervention in our setup are excellent, with good immediate results, a low complication rate and a good six-month clinical outcome, and are comparable to international standards. abstract_id: PUBMED:15152290 Six-month outcomes after single- and multi-lesion percutaneous coronary intervention: results from the ROSETTA registry. Background: The American College of Cardiology/American Heart Association exercise testing guidelines suggest that routine functional testing may benefit patients at high risk of restenosis, such as those undergoing multi-lesion percutaneous coronary intervention (PCI). Objectives: To compare the six-month post-PCI clinical and procedural outcomes in patients following single- and multi-lesion PCI, and to examine the use of routine functional testing (ie, in all patients) versus selective functional testing (ie, only in those with recurrent symptoms) following multi-lesion PCI. Methods: Six-month outcomes among 562 patients after single-lesion PCI and 229 patients after multi-lesion PCI were examined. All patients were enrolled in the Routine versus Selective Exercise Treadmill Testing after Angioplasty (ROSETTA) registry, a prospective, multicentre registry examining the use of functional testing after successful PCI. Results: For single- versus multi-lesion PCI patients, respectively, rates of death (1.8% versus 2.2%, P=0.7) and myocardial infarction (0.7% versus 2.6%, P=0.03) were low in both groups. Rates of unstable angina (12.0% versus 11.7%, P=0.9) and the composite clinical end point of death, myocardial infarction or unstable angina (13.5% versus 13.9%, P=0.9) were similar. Multi-lesion PCI patients had a higher number of repeat PCI procedures (6.6% versus 13.4%, P=0.02) but there was no difference in the rates of coronary artery bypass graft surgery (3.0% versus 2.6%, P=0.7). A routine functional testing strategy was used in 28.0% of single-lesion and 31.6% of multi-lesion patients. In a multivariate analysis of the multi-lesion patients, routine functional testing was not associated with a significant reduction in the composite clinical event rate (odds ratio 0.5, 95% CI 0.2 to 1.7, P=0.27). Conclusions: During the six-month period following successful PCI, clinical event rates were similar among patients undergoing single- or multi-lesion PCI.
Routine functional testing was not associated with a statistically significant benefit in patients after multi-lesion PCI. However, additional study is required to better define the role of routine functional testing in this subgroup of patients. abstract_id: PUBMED:16784069 Body mass index: a risk factor for 30-day or six-month mortality in patients undergoing aortic valve replacement? Background And Aim Of The Study: In recent studies, the impact of a low or high body mass index (BMI) on outcome after cardiac surgery has been the subject of controversy. A retrospective study was conducted to determine the influence of BMI on 30-day and six-month mortality after aortic valve replacement (AVR). Methods: A multivariable logistic regression was performed on data from 1,241 consecutive patients (mean age 69 ± 11 years) who underwent AVR either with (n = 514; 41%) or without coronary artery bypass grafting (CABG) between 2000 and 2003. A wide spectrum of periprocedural variables was collected, including laboratory data as markers for nutritional status and comorbidity. Patients were followed up for six months after AVR (99% complete). Results: Mortality rates after 30 days and after six months were 3.9% (n = 49) and 7.6% (n = 94), respectively. A low BMI was identified as an independent risk factor for 30-day (OR (odds ratio) 0.87; CI (confidence interval) 0.8-0.94) and six-month mortality (OR 0.91; CI 0.86-0.96). The relationship between the logit function and BMI was linear; however, a BMI value of 24 was considered an appropriate cut-off point. Both models, containing BMI either as a linear or as a dichotomous variable, were equivalent. As patients with a lower BMI differ in their preoperative risk profile compared to those with a higher BMI, a saturated propensity score estimating the propensity towards having a BMI < 24 was calculated. The propensity score was not significant in the final models for 30-day and six-month mortality (0.24 and 0.73, respectively), and the OR for BMI remained largely unaltered (0.89 and 0.91, respectively). Conclusion: A BMI < 24 is predictive of an increased risk of mortality after AVR, independently of malnutrition, advanced heart disease, or valve size. Further studies are required to investigate the role of adipose tissue in extreme situations and chronic disease. It is mandatory to include BMI in outcome studies after AVR. abstract_id: PUBMED:31499539 Impact of Preoperative Functional Status on Quality of Life after Cardiac Surgery. Background: With increasing importance, health-related quality of life (HRQoL) has become a crucial outcome measure of cardiac surgery. The aim of this study was to assess the dynamics of HRQoL change within 12 months after surgery and to identify predictors of deterioration in physical and mental health. Methods: The cohort of this prospective study included 164 consecutive patients who underwent elective surgery. HRQoL was assessed on the basis of the Short-Form 36 questionnaire at three different times: upon admission and at 3 and 12 months after surgery. The minimal clinically important difference (MCID) was used to determine whether the surgery resulted in deterioration of HRQoL. Results: In general, physical and mental health status improved within the first year after cardiac surgery. However, after 12 months, 7.9 and 21.2% of patients had clinically significant poorer physical (PCS) and mental component summary (MCS) scores, based on the MCID approach.
The results of multivariate analysis identified preoperative health status, age < 70 years, coronary artery bypass grafting, and a previous neurological event as predictors of deterioration in postoperative HRQoL. The greatest risks for deterioration were higher preoperative PCS and MCS scores. Conclusion: Although we were able to demonstrate a general improvement in the HRQoL following cardiac surgery, in one-fifth of patients, there was no recovery of mental health status even after 1 year. As this effect is mainly determined by preoperative functional status, HRQoL should be an integral part of medical consultation, especially in younger patients with a positive perception of quality of life. Answer: Increasing age does have an impact on the functional outcome after coronary artery bypass graft (CABG) surgery, but this impact is nuanced. Older patients tend to present for CABG at a significantly lower functional level and remain at a lower functional level 6 months after CABG compared to younger patients. However, the functional improvement after CABG, as measured by the Duke Activity Status Index (DASI), is not significantly different across age groups. This suggests that while older patients start at a lower baseline and end at a lower level of function postoperatively, the degree of improvement they experience after CABG is comparable to that of younger patients (PUBMED:12219018). Additionally, a study examining the effects of increasing age on the outcome of CABG found that while elderly patients had an over-representation of comorbidities and required more urgent surgeries, the hospital mortality did not differ significantly between age groups. The 5-year survival and cardiac event-free survival rates were also acceptable in older patients, supporting the justification of primary CABG in this population irrespective of age (PUBMED:10412916). Another study focusing on elderly patients undergoing CABG showed that preoperative functional status was a more sensitive predictor of mortality and poor postoperative functional status than age alone. This study suggested that functional status, rather than chronological age, should be a more reliable predictor of CABG outcome (PUBMED:12601438). In summary, while older age is associated with a lower functional level before and after CABG, the relative improvement following surgery is similar across different age groups. Preoperative functional status appears to be an important predictor of outcomes, and CABG can still be justified in older patients given the acceptable mortality and medium-term survival rates.
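The central comparison in this answer — whether the 6-month change in DASI score differs between age groups — rests on ordinary two-sample and one-way ANOVA arithmetic. The sketch below simulates change scores using the means and SDs quoted from PUBMED:12219018; the per-group sample sizes are assumptions for illustration, not reported figures:

```python
# Sketch: comparing 6-month change in DASI score across age groups.
# Scores are simulated around the published means/SDs; sample sizes assumed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
young = rng.normal(7.2, 17.5, 150)   # patients < 70 years
old = rng.normal(4.9, 14.0, 142)     # patients >= 70 years
t, p = stats.ttest_ind(young, old, equal_var=False)
print(f"two-sample t-test (Welch): t = {t:.2f}, p = {p:.3f}")

# Terciles of age: one-way ANOVA across three groups.
g1 = rng.normal(5.8, 18.0, 97)       # aged 31-60
g2 = rng.normal(8.7, 15.7, 98)       # aged 61-71
g3 = rng.normal(4.0, 14.6, 97)       # aged 72-91
f, p = stats.f_oneway(g1, g2, g3)
print(f"one-way ANOVA: F = {f:.2f}, p = {p:.3f}")
```

With group means this close relative to the spread of the scores, such tests typically fail to reject equality, which is consistent with the reported P = .23 and P = .11.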
Instruction: Can we predict which hospitalised patients are in their last year of life? Abstracts: abstract_id: PUBMED:35264274 Unsolved problems and unwanted decision-making in the last year of life: A qualitative analysis of comments from bereaved caregivers. Objectives: Patients in their last year of life, as well as their relatives, often feel that existent care structures of the healthcare system do not adequately address their individual needs and challenges. This study analyzes unmet needs in terms of unsolved problems and unwanted decision-making in the health and social care of patients in their last year of life from the perspective of bereaved caregivers. Methods: This qualitative study is based on free-text comments from informal caregivers of deceased patients collected as part of the Last-Year-of-Life-Study-Cologne (LYOL-C) using a postal survey. With qualitative content analysis, a category system with main and subcategories was developed in a multi-step process. Results: Free-text commentaries and demographic data were collected from 240 bereaved caregivers. Particularly outside of hospice and palliative care services, study participants addressed the following unsolved problems: poor communication with medical and nursing staff, insufficient professional support for informal caregivers, inadequate psycho-social support for patients, and poor management of pain and other symptoms. Respondents often stated that their relative had to be cared for and die outside their own home, which the relative did not want. Significance Of Results: Our findings suggest the necessity for greater awareness of patients' and their relatives' needs in the last year of life. Addressing individual needs, integrating palliative and hospice care in acute hospitals and other healthcare structures, and identifying patients in their last year of life and their caregivers could help to achieve more targeted interventions and optimization of care. abstract_id: PUBMED:36443793 Quality of care in the last year of life: adaptation and validation of the German "Views of Informal Carers' Evaluation of Services - Last Year of Life - Cologne". Background: To inform quality improvement and strengthen services provided in the last year of life, measuring quality of care is essential. For Germany, data on care experiences in the last year of life that go beyond diagnoses and care settings are still rare. The aim of this study was to develop and validate a German version of the 'Views of Informal Carers' Evaluation of Services - Short Form (VOICES-SF)' suitable to assess the quality of care and services received across settings and healthcare providers in the German setting in the last year of life (VOICES-LYOL-Cologne). Methods: VOICES-SF was adapted and translated following the 'TRAPD' team approach. Data collected in a retrospective cross-sectional survey with bereaved relatives in the region of Cologne, Germany were used to assess validity and reliability. Results: Data from 351 bereaved relatives of adult decedents were analysed. The VOICES-LYOL-Cologne demonstrated construct validity in performing according to expected patterns, i.e. correlation of scores to care experiences and significant variability based on care settings. It further correlated with the PACIC-S9 Proxy, indicating good criterion validity. The newly added scale "subjective experiences of process and outcome of care in the last year of life" showed good internal consistency for each given care setting, except for the homecare setting. 
Test-retest analyses revealed no significant differences in satisfaction ratings according to the length of time since the patient's death. Overall, our data demonstrated the feasibility of collecting patient care experiences reported by proxy-respondents across multiple care settings. Conclusion: VOICES-LYOL-Cologne is the first German instrument to analyse care experiences in the last year of life in a comprehensive manner and encourages further research in German-speaking countries. This instrument enables the comparison of quality of care between settings and may be used to inform local and national quality improvement activities. Trial Registration: This study was registered in the German Clinical Trials Register (DRKS00011925; Date of registration: 13/06/2017). abstract_id: PUBMED:35529537 Determinants of health expenditures during the last year of life of the Mexican population aged 50 or over. During the last year of life of the elderly, health deteriorates the most, making this the period when the need for medical services, personal care requirements and related costs is at its highest. Using the Mexican Health and Aging Study, differences by sex in the determinants of out-of-pocket health expenditures during the last year of life of people 50 and over are discussed. Three levels of out-of-pocket expense disbursement are contemplated: none, medium-low, and high. Controlling for sociodemographic variables, findings suggest that the number of hospital nights is the main determinant of health expenses, even above those related to diseases that are regarded as main causes of death. This fact impacts both out-of-pocket expenses and the costs for medical institutions. It is another issue in the overhaul of the public health system. abstract_id: PUBMED:35190893 The last year of life for patients dying from cancer vs. non-cancer causes: a retrospective cross-sectional survey of bereaved relatives. Purpose: To compare health care experiences of patients with cancer or non-cancer diseases in their last year of life. Methods: A cross-sectional post-bereavement survey was conducted using an adapted German version of the VOICES questionnaire (VOICES-LYOL-Cologne). Differences in the reported experiences were assessed using a two-sided Pearson's chi-square test and Mann-Whitney U test. Results: We collected data from 351 bereaved relatives. More than half of non-cancer patients were not informed that their disease could lead to death (p < 0.001). When this was communicated, in 46.7% of non-cancer and 64.5% of cancer patients, it was reported by the hospital doctor (p = 0.050). In all, 66.9% of non-cancer and 41.6% of cancer patients were not informed about death being imminent (p < 0.001). On average, non-cancer patients had significantly fewer transitions and hospital stays in their last year of life (p = 0.014; p = 0.008, respectively). Non-cancer patients were treated more often by general practitioners, and cancer patients were treated more often by specialists (p = 0.002; p = 0.002, respectively). A substantially lower proportion of non-cancer patients were treated by at least one member of or in the setting of general or specialized palliative care (p < 0.001). Conclusions: Non-cancer patients experience disadvantages in communication regarding their care and in access to specialized palliative care in their last year of life compared to cancer patients.
With regard to the assessment of palliative care needs and the communication of an incurable disease, non-cancer patients are underserved. Early identification of patients requiring palliative care is a major public health concern and should be addressed irrespective of diagnosis. Trial Registration: Prospectively registered by the German Clinical Trials Register (DRKS00011925, date of registration: 13.06.2017). abstract_id: PUBMED:36214321 Variation in Patient-Reported Decision-Making Roles in the Last Year of Life among Patients with Metastatic Cancer: A Longitudinal Study. Objective: We investigated the variation in patient-reported decision-making roles in the past year of life among patients with metastatic solid cancer and the associations of these roles with patient quality of life and perceived quality of care. Methods: We used the last year of life data of 393 deceased patients from a prospective cohort study. Patients reported their decision-making roles, quality of life (emotional well-being, spiritual well-being, and psychological distress) and perceived quality of care (care coordination and physician communication) every 3 months until death. We used mixed effects linear regressions to investigate the associations of decision-making roles with patients' quality of life and perceived quality of care. Results: The most reported roles, on average, were patient-led (37.9%) and joint (23.4%; with physicians and/or family caregivers) decision making, followed by no patient involvement (14.8%), physician/family-led (12.9%), and patient alone (11.0%) decision making. Patient level of involvement in decision making decreased slightly as death approached (P < 0.05). Compared with no patient involvement, joint decision making was associated with better emotional well-being (β [95% confidence interval] = 1.02 [0.24, 1.81]), better spiritual well-being (1.48 [0.01, 2.95]), lower psychological distress (-1.99 [-3.21, -0.77]), higher perceived quality of care coordination (5.04 [1.24, 8.85]), and physician communication (5.43 [1.27, 9.59]). Patient-led decision making was associated with better spiritual well-being (1.69 [0.24, 3.14]) and higher perceived quality of care coordination (6.87 [3.17, 10.58]) and physician communication (6.21 [2.15, 10.27]). Conclusion: Joint and patient-led decision-making styles were reported by 61% of the patients and were associated with better quality of life and quality of care. A decrease in the level of patient involvement over time indicates reliance on family and physicians as death approached. Highlights: Among patients with metastatic cancer, the level of patient involvement in decision making decreased slightly as death approached. Joint decision making of patients with their physicians and/or family caregivers and patient-led decision making were associated with better quality of life and perceived quality of care. Patients with metastatic cancer should be encouraged to be involved in decision making together with their physicians and/or family caregivers to potentially improve their end-of-life experience. abstract_id: PUBMED:34579669 Costs of inpatient hospitalisations in the last year of life in older New Zealanders: a cohort study. Background: Rapidly ageing populations mean that many people now die in advanced age. This paper investigated public hospital and long-term care home costs in the 12 months before death in Māori and non-Māori of advanced age in New Zealand.
Methods: Data from an existing longitudinal study (LiLACS NZ) were used, in which 937 older New Zealanders were enrolled in 2010. At the time of this study, 213 Māori and 241 non-Māori in the cohort had died. National Health Index numbers were linked to the hospitalisation National Minimum Dataset to ascertain public hospitalisation and care home costs in the last year of life. Results: The average total publicly funded hospital and long-term care home costs in the 12 months prior to death were $16,211 and $17,351 for Māori and non-Māori respectively. Non-Māori tended to have long lengths of stay in their last year of life, and non-Māori men had the highest proportion with high costs and long lengths of stay in care homes. Costs in the last year of life were 8.1 times higher in comparison to costs for individuals who did not die in the same time period. Conclusion: Despite New Zealand's commitment to providing an equitable level of healthcare, this study illustrated that ethnic and gender disparities are still apparent at the end of life. This raises questions as to whether money at the end of life is being spent appropriately, and how it could potentially be more equitably targeted to meet the diverse needs of older people and their families. abstract_id: PUBMED:32576290 Health- and social care in the last year of life among older adults in Sweden. Background: In the last years of life, burden of disease and disability and need of health- and social care often increase. Social, functional and psychological factors may be important in regard to social- and health care utilization. This study aims to describe use of health- and social care during the last year of life among persons living in ordinary housing or in assisted living facilities. Methods: A retrospective study examined health- and social care utilization during the last year of life, using a subsample from the Swedish twin registries individually linked to several Swedish national quality registries (NQR). Persons that died during 2008-2009 and 2011-2012 (n = 1518) were selected. Results: Mean age at death was 85.9 ± 7.3 (range 65.1-109.0). Of the 1518 participants (women n = 888, 58.5%), 741 (49%) were living in assisted living facilities and 1061 (69.9%) had at least one hospitalization during the last year of life. The most common causes of death were cardiovascular disease (43.8%) and tumors (15.3%). A multivariable logistic regression revealed that living in ordinary housing, younger age and higher numbers of NQRs increased the likelihood of hospitalization. Conclusions: Persons in their last year of life consumed a high amount of health- and social care, although 12% did not receive any home care. Married persons received less home care than never married. Persons living in ordinary housing had higher numbers of hospitalizations compared to participants in assisted living facilities. Older persons and persons registered in fewer NQRs were less hospitalized.
Results: A total of 4869 people died over a 12-month period in one district health board area; 50.9% (n = 2478) received one or more interRAI assessments in the year before death. Diagnosis impacted on the type and timing of interRAI assessments and the recognition of end-stage disease. Conclusion: People in the last year of life experience frequent interRAI assessments. There are opportunities to identify people who might benefit from a palliative care approach. Future research is needed to understand how interRAI assessors can be supported in the application of assessment items related to palliative care. abstract_id: PUBMED:33840095 Prophylactic platelet transfusions in hospitalised patients with thrombocytopenia-Protocol for a systematic review with meta-analysis. Background: Prophylactic platelet transfusions are frequently used to prevent bleeding in hospitalised patients with thrombocytopenia. Recommendations regarding the use of prophylactic platelet transfusions in non-haematological patients are based on extrapolations, observational studies, and expert opinions, and transfusion with platelets has been associated with adverse effects. We aim to assess the overall benefits and harms of prophylactic platelet transfusions in hospitalised patients with thrombocytopenia. Methods/design: We will conduct a systematic review with meta-analyses and trial sequential analyses in compliance with the recommendations by the Cochrane Collaboration and the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach. We will prepare the manuscript in accordance with the Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA) statement. We will include randomised clinical trials assessing prophylactic platelet transfusion versus no prophylaxis or placebo in hospitalised patients with thrombocytopenia. The primary outcome is mortality at longest follow-up. Secondary outcomes include episodes of clinically important bleeding, nosocomial infections, transfusion-related adverse events, thromboembolic events, length of hospital stay, quality of life, and days alive without the use of life support. We will conduct prespecified subgroup analyses and sensitivity analyses and assess the risk of random errors by trial sequential analyses. Discussion: The proposed systematic review will provide an overview of the certainty of evidence for the benefits and harms of prophylactic platelet transfusion in hospitalised patients with thrombocytopenia. abstract_id: PUBMED:38312552 The impact of virtual reality on the psychological well-being of hospitalised patients: A critical review. Background And Objectives: Improving hospitalised patients' psychological well-being (PWB) is relevant to their recovery and overall quality of life. Virtual Reality (VR) holds the potential to emerge as a novel tool for increasing the PWB of hospitalised patients. This study critically reviewed existing research concerning the use and impact of VR on the PWB of hospitalised patients with serious illnesses. The primary aim of this study was to evaluate the efficacy and practical applicability of VR in enhancing the PWB for hospitalised patients dealing with serious illnesses. Methodology: In the initial search, the authors identified 106 sources that provided valuable insights into the broader field of VR, PWB, and the influence of VR on the psychological functioning of hospitalised patients. However, only eight articles were selected for the critical review, following the SALSA method. 
The SALSA method comprises the following steps: (S) Search, (AL) Appraisal, (S) Synthesis and (A) Analysis. Data extracted from these eight articles were subjected to reflexive thematic analysis, through which patterns were identified to examine the use and impact of VR on the PWB of hospitalised patients with serious illnesses. The thematic analysis process involves data gathering and familiarisation, code identification, and formulating and refining various themes to produce a thematic analysis report. Results: The themes that emerged from the analysis were (i) positive psychological effects (with sub-themes including the effectiveness of VR in improving psychological symptoms, its role as an equivalent or adjunctive treatment, and symptom reduction), (ii) perceptions and the adoption of novel technology, (iii) characteristics that influence the effectiveness of VR, and (iv) statistical and practical applicability and diversity of VR. Conclusion And Recommendation: The use of VR to increase the PWB of hospitalised patients is a relatively recent innovation. Nevertheless, the themes identified in this study indicate that the use of VR within the context of hospitalised patients can benefit their psychological functioning, ultimately contributing to an improvement in their overall PWB. Further in-depth studies are needed to provide a comprehensive conclusion on the potential benefits of VR on the PWB of hospitalised patients dealing with serious illnesses. Answer: Based on the provided abstracts, predicting which hospitalised patients are in their last year of life is a complex task that involves considering various factors. The abstracts suggest that there are certain indicators and care patterns that could help in identifying patients who may be nearing the end of life. For instance, the abstract from PUBMED:35264274 indicates that patients and their relatives often feel that their needs are not adequately addressed by healthcare structures, particularly outside of hospice and palliative care services. This suggests that a closer examination of the care needs and communication with medical and nursing staff could provide insights into whether a patient is in their last year of life. The abstract from PUBMED:35529537 discusses the determinants of health expenditures during the last year of life, highlighting that the number of hospital nights is a major determinant of health expenses. This could imply that patients who have frequent or prolonged hospital stays may be in their last year of life. The abstract from PUBMED:35190893 compares the healthcare experiences of patients with cancer or non-cancer diseases in their last year of life. It notes that non-cancer patients often experience disadvantages in communication regarding their care and access to specialized palliative care. This suggests that a lack of communication about the incurability of a disease and the absence of palliative care could be indicators that a patient is in their last year of life. The abstract from PUBMED:32576290 describes the use of health- and social care during the last year of life among older adults in Sweden. It indicates that older persons and those registered in fewer national quality registries were less hospitalized, which could be used as a factor in predicting the last year of life. Lastly, the abstract from PUBMED:33739596 discusses the use of interRAI assessments to identify opportunities to integrate palliative care into a plan of care. 
This suggests that frequent interRAI assessments and the recognition of end-stage disease could be used to predict if a patient is in their last year of life. In summary, while no single method is provided for predicting the last year of life for hospitalised patients, a combination of factors such as healthcare utilization patterns, communication about the incurability of a disease, the presence of end-stage disease, and the frequency of interRAI assessments could potentially be used to identify patients who may be nearing the end of their life.
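To make the synthesis concrete: indicators of the kind listed above (hospital stays, age, recognition of end-stage disease in assessments) are the sort of routinely collected features a screening model could combine into a single risk flag. The sketch below is purely hypothetical — the features, data, and coefficients are invented, and no such model appears in the abstracts:

```python
# Hypothetical sketch: combining routinely collected indicators into a
# last-year-of-life risk flag with logistic regression. Invented data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([
    rng.poisson(3, n),          # hospital admissions in the past year
    rng.poisson(10, n),         # hospital bed-nights in the past year
    rng.integers(65, 100, n),   # age
    rng.integers(0, 2, n),      # end-stage disease recognised in assessment
])
# Simulated outcome: probability rises with admissions and the end-stage flag.
logit = -4 + 0.4 * X[:, 0] + 0.05 * X[:, 1] + 1.5 * X[:, 3]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, y)
print("odds ratios per feature:", np.exp(model.coef_).round(2))
```

In practice, any such model would need validation against observed deaths and careful handling of the ethical issues around flagging patients as likely to be in their last year of life.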
Instruction: 2012 American Geriatrics Society Beers criteria: enhanced applicability for detecting potentially inappropriate medications in European older adults? Abstracts: abstract_id: PUBMED:26446832 American Geriatrics Society 2015 Updated Beers Criteria for Potentially Inappropriate Medication Use in Older Adults. The 2015 American Geriatrics Society (AGS) Beers Criteria are presented. Like the 2012 AGS Beers Criteria, they include lists of potentially inappropriate medications to be avoided in older adults. New to the criteria are lists of select drugs that should be avoided or have their dose adjusted based on the individual's kidney function and select drug-drug interactions documented to be associated with harms in older adults. The specific aim was to have a 13-member interdisciplinary panel of experts in geriatric care and pharmacotherapy update the 2012 AGS Beers Criteria using a modified Delphi method to systematically review and grade the evidence and reach a consensus on each existing and new criterion. The process followed an evidence-based approach using Institute of Medicine standards. The 2015 AGS Beers Criteria are applicable to all older adults with the exclusion of those in palliative and hospice care. Careful application of the criteria by health professionals, consumers, payors, and health systems should lead to closer monitoring of drug use in older adults. abstract_id: PUBMED:24917083 2012 American Geriatrics Society Beers criteria: enhanced applicability for detecting potentially inappropriate medications in European older adults? A comparison with the Screening Tool of Older Person's Potentially Inappropriate Prescriptions. Objectives: To determine the prevalence of potentially inappropriate medications (PIMs) and related factors through a comparative analysis of the Screening Tool of Older Person's Potentially Inappropriate Prescriptions (STOPP), the 2003 Beers criteria, and the 2012 AGS update of the Beers criteria. Design: Cross-sectional. Setting: Primary care. Participants: Community-dwelling persons aged 65 and older who live on the island of Lanzarote, Spain (N = 407). Measurements: Sociodemographic characteristics; independence in activities of daily living; cognitive function; Geriatric Depression Scale; clinical diagnoses; and complete data on indication, dosage, and length of drug treatments. One thousand eight hundred seventh-two prescriptions were examined, and the rate of PIMs was assessed with the three criteria. The primary endpoint was the percentage of participants receiving at least one PIM. Multivariate logistic regression was used to examine the factors related to PIMs. Results: Potentially inappropriate medications were present in 24.3%, 35.4%, and 44% of participants, according to the 2003 Beers criteria, STOPP, and 2012 Beers criteria, respectively. The profile of PIMs was also different (the most frequent being benzodiazepines in both Beers criteria lists and aspirin in the STOPP). The number of drugs was associated with risk of prescribing PIMs in all three models, as was the presence of a psychological disorder in the 2003 Beers criteria (odds ratio (OR) = 2.07, 95% confidence interval (CI) = 1.26-3.40) and the 2012 Beers criteria (OR = 2.91, 95% CI = 1.83-4.66). The kappa for degree of agreement between STOPP and the 2012 Beers criteria was 0.35 (95% CI = 0.25-0.44). 
Conclusion: The 2012 Beers criteria detected the highest number of PIMs, and given the scant overlapping with the STOPP criteria, the use of both tools may be seen as complementary. abstract_id: PUBMED:30693946 American Geriatrics Society 2019 Updated AGS Beers Criteria® for Potentially Inappropriate Medication Use in Older Adults. The American Geriatrics Society (AGS) Beers Criteria® (AGS Beers Criteria®) for Potentially Inappropriate Medication (PIM) Use in Older Adults are widely used by clinicians, educators, researchers, healthcare administrators, and regulators. Since 2011, the AGS has been the steward of the criteria and has produced updates on a 3-year cycle. The AGS Beers Criteria® is an explicit list of PIMs that are typically best avoided by older adults in most circumstances or under specific situations, such as in certain diseases or conditions. For the 2019 update, an interdisciplinary expert panel reviewed the evidence published since the last update (2015) to determine if new criteria should be added or if existing criteria should be removed or undergo changes to their recommendation, rationale, level of evidence, or strength of recommendation. J Am Geriatr Soc 67:674-694, 2019. abstract_id: PUBMED:37139824 American Geriatrics Society 2023 updated AGS Beers Criteria® for potentially inappropriate medication use in older adults. The American Geriatrics Society (AGS) Beers Criteria® (AGS Beers Criteria®) for Potentially Inappropriate Medication (PIM) Use in Older Adults is widely used by clinicians, educators, researchers, healthcare administrators, and regulators. Since 2011, the AGS has been the steward of the criteria and has produced updates on a regular cycle. The AGS Beers Criteria® is an explicit list of PIMs that are typically best avoided by older adults in most circumstances or under specific situations, such as in certain diseases or conditions. For the 2023 update, an interprofessional expert panel reviewed the evidence published since the last update (2019) and based on a structured assessment process approved a number of important changes including the addition of new criteria, modification of existing criteria, and formatting changes to enhance usability. The criteria are intended to be applied to adults 65 years old and older in all ambulatory, acute, and institutionalized settings of care, except hospice and end-of-life care settings. Although the AGS Beers Criteria® may be used internationally, it is specifically designed for use in the United States and there may be additional considerations for certain drugs in specific countries. Whenever and wherever used, the AGS Beers Criteria® should be applied thoughtfully and in a manner that supports, rather than replaces, shared clinical decision-making. abstract_id: PUBMED:31338540 2015 Beers Criteria and STOPP v2 for detecting potentially inappropriate medication in community-dwelling older people: prevalence, profile, and risk factors. Purpose: To comparatively assess the prevalence rates of potentially inappropriate medications (PIMs) obtained by the former and latest versions of American Geriatrics Society Beers Criteria (AGS BC) and screening tool of older person's potentially inappropriate prescriptions (STOPP), and analyze the factors of influence on PIM. Methods: Cross-sectional study including 582 community-dwelling older adults over the age of 65. Sociodemographic, clinical, functional, and comprehensive drug therapy data were collected. 
The primary endpoint was the percentage of patients receiving at least one PIM. Results: A total of 3626 prescriptions were analyzed. PIMs were detected in 35.4% and 47.9% of patients according to the STOPP v1 and the 2012 AGS BC, respectively. This percentage rose to 54% when 2015 AGS BC were used and reached 66.8% with STOPP v2. The kappa coefficient between STOPP v2 and its former version was lower than the one between the updated Beers Criteria and their former version (0.41 vs 0.85). The agreement was good (0.65) between both latest criteria. The number of medications, psychological disorders, and insomnia were predictors of PIM. A novel finding was that bone and joint disorders increased the odds for PIM by 78%. Conclusions: The 2015 AGS BC showed high sensitivity and good applicability to the European older patients. Both updated tools identified some pharmacological groups (benzodiazepines, PPIs, and opioids, among others) and certain health problems (insomnia, psychological disorders, and osteoarticular diseases) as factors of influence on PIM. Based on these findings, interventions aimed at promoting appropriate use of medications should be developed. abstract_id: PUBMED:32841366 Estimating the Use of Potentially Inappropriate Medications Among Older Adults in the United States. Objectives: Inappropriate prescribing of medications is common in health care, and is an important safety concern, especially for older adults, who have a high burden of comorbidity and are at greater risk for medication-related adverse events. This study aims to estimate the extent and cost of potentially inappropriate prescribing of medications to older adults in the United States. Design: A cross-sectional study. Setting: Medicare Part D Prescription Drug Program data set (2014-2018). Participants: Older adults who were enrolled in Medicare Part D Prescription Drug Program between 2014 and 2018. Measurements: Potentially inappropriate medications were identified using the 2019 American Geriatrics Society Beers Criteria®. Results: In 2018, 7.3 billion doses of potentially inappropriate medications were dispensed. The most common medications by number of doses dispensed were proton pump inhibitors, benzodiazepines, and tricyclic antidepressants, and the top five unique medications by reported spending were dexlansoprazole, esomeprazole, omeprazole, dronedarone, and conjugated estrogens. From 2014 to 2018, 43 billion doses of potentially inappropriate medications were dispensed, with a reported spending of $25.2 billion. Conclusion: Potentially inappropriate medication use among older adults is both common and costly. Careful attention to potentially inappropriate medication use and deprescribing when clinically appropriate could reduce costs and potentially improve outcomes among older adults. abstract_id: PUBMED:31601189 Prevalence of Potentially Inappropriate Medication use in older drivers. Background: Potentially Inappropriate Medication (PIM) use has been studied in a variety of older adult populations across the world. We sought to examine the prevalence and correlates of PIM use in older drivers. Methods: We applied the American Geriatrics Society 2015 Beers Criteria to baseline data collected from the "brown-bag" review of medications for participants of the Longitudinal Research on Aging Drivers (LongROAD) study to examine the prevalence and correlates of PIM use in a geographically diverse, community-dwelling sample of older drivers (n = 2949). 
Proportions of participants who used one or more PIMs according to the American Geriatrics Society 2015 Beers Criteria, and estimated odds ratios (ORs) and 95% confidence intervals (CIs) of PIM use associated with participant characteristics were calculated. Results: Overall, 18.5% of the older drivers studied used one or more PIM. The most commonly used therapeutic category of PIM was benzodiazepines (accounting for 16.6% of the total PIMs identified), followed by nonbenzodiazepine hypnotics (15.2%), antidepressants (15.2%), and first-generation antihistamines (10.5%). Compared to older drivers on four or fewer medications, the adjusted ORs of PIM use were 2.43 (95% CI 1.68-3.51) for those on 5-7 medications, 4.19 (95% CI 2.95-5.93) for those on 8-11 medications, and 8.01 (95% CI 5.71-11.23) for those on ≥12 medications. Older drivers who were female, white, or living in urban areas were at significantly heightened risk of PIM use. Conclusion: About one in five older drivers uses PIMs. Commonly used PIMs are medications known to impair driving ability and increase crash risk. Implementation of evidence-based interventions to reduce PIM use in older drivers may confer both health and safety benefits. Trial Registration: Not applicable. abstract_id: PUBMED:31168913 Awareness of Beers Criteria and knowledge of potentially inappropriate medications among community pharmacists in the Klang Valley, Malaysia. Background: Potentially inappropriate medications (PIMs) in older adults are detrimental to both clinical outcomes and health care costs, with their prominence set to increase in tandem with a fast-growing ageing population. Beers Criteria is one of the most commonly used guidelines that lists specific PIMs. Community pharmacists would therefore benefit from knowledge of Beers Criteria in detecting PIMs in primary care. This study therefore investigates the awareness of Beers Criteria and knowledge of PIMs among community pharmacists in the Klang Valley, Malaysia. Methods: The study was conducted using a self-administered questionnaire. Knowledge of PIMs was assessed using 10 clinical vignettes based on the 2015 Beers Criteria. Practice behaviour towards older customers was assessed using 10 items with a 5-point Likert scale. Descriptive and inferential statistics were used to analyse the data. Results: A total of 277 community pharmacists participated in the study. Only 27.1% of the pharmacists were aware of Beers Criteria, and of these, only 37.3% were aware of the latest 2015 update. The respondents demonstrated moderate knowledge of PIMs with a mean total score of 5.46 ± 1.89 out of a maximum of 10. Pharmacists who were aware of Beers Criteria had significantly higher scores (6.31 vs 5.14, P < .001). Only a small proportion of pharmacists (17.0%) regularly used Beers Criteria in practice. However, most pharmacists reported good practices when dealing with older customers in terms of asking relevant questions, considering age, referring to other resources, and regularly updating their knowledge through continuous professional development. Conclusion: Awareness of Beers Criteria remains low among community pharmacists in Malaysia. Pharmacists who are aware of Beers Criteria have improved knowledge regarding PIMs in the older adults. There is a need to increase the awareness and use of Beers Criteria among community pharmacists. abstract_id: PUBMED:32920684 Potentially inappropriate medications in Chinese older adults: a comparison of two updated Beers criteria. 
Background Beers criteria have become the mainstay for characterizing potentially inappropriate medications since their first publication, but the recent version, Beers 2019, is yet to be validated by clinical studies nationally. Objective To identify the prevalence and the predictors of potentially inappropriate medications in hospitalized geriatric patients based on the Beers 2019 and 2015 criteria. Setting Nanjing Drum Tower Hospital, a 3000-bed tertiary care teaching hospital in China. Method We conducted a cross-sectional study from July 1, 2018 to December 31, 2018. Data from all hospitalized patients aged ≥ 65 years were collected from the hospital database. Inappropriate prescriptions were identified using the Beers 2019 criteria and the Beers 2015 criteria. Main outcome measure Prevalence Ratio (PR) and predictors of potentially inappropriate medications. Results The prevalence of inappropriate prescriptions based on the Beers 2019 criteria was 64.80%. This result was slightly higher than that of the Beers 2015 criteria (64.31%). The most commonly encountered inappropriate prescriptions identified using the two criteria were proton-pump inhibitors. The kappa coefficient was 0.826 (p < 0.001), indicating a strong coherence between the two criteria. The most important factor associated with inappropriate medication use was the number of prescribed drugs (PR 5.17, 95% CI 2.89-8.43; PR 4.58, 95% CI 1.93-7.25). Conclusion This study showed a high prevalence of potentially inappropriate medication in the Chinese geriatric population, which was associated with the number of prescribed drugs. The predictors identified in this research might help pharmacists to detect high-risk drugs and intervene in time. abstract_id: PUBMED:29958151 Assessment of potentially inappropriate medications in elderly according to Beers 2015 and STOPP criteria and their association with treatment satisfaction. Objective: To assess and compare, according to Beers and STOPP criteria, the frequency and causes of potentially inappropriate medications (PIMs) in the elderly, as well as the relationship between PIM use and patients' treatment satisfaction, evaluated using the Treatment Satisfaction Questionnaire for Medications (TSQM). Method: 350 patients (mean age = 73.49 years), taking a total of 1893 medications, were enrolled from community pharmacies. Results: 6.2% and 20.4% of the medications were inappropriate according to the STOPP and Beers lists respectively; 103 (29.4%) and 210 (60%) patients had at least one PIM according to STOPP and Beers criteria respectively. The most common cause of PIM was a full dose of proton pump inhibitor for >8 weeks (STOPP) and using a drug that exacerbates/causes syndrome of inappropriate antidiuretic hormone secretion (Beers). The number of medications taken by the participant, advanced age, female gender, and prescription of medications for anxiety/depression, ulcers/gastroesophageal reflux, rheumatoid arthritis and epilepsy significantly increased the number of PIMs. When using STOPP criteria, all TSQM subscale scores were significantly lower among patients with PIM use compared to those without PIM. The same trend was observed for Beers criteria, with a significant difference reached for side effects and convenience subscales only.
Conclusion: Selecting the appropriate tools to assess PIMs, prescriber and patient education regarding the risks associated with PIMs, and patients' perception of and satisfaction with their treatment are critical issues to be addressed among older adults. Answer: The 2012 American Geriatrics Society (AGS) Beers Criteria were found to have enhanced applicability for detecting potentially inappropriate medications (PIMs) in European older adults when compared to the 2003 Beers criteria and the Screening Tool of Older Person's Potentially Inappropriate Prescriptions (STOPP). A study conducted in Lanzarote, Spain, with community-dwelling persons aged 65 and older, showed that the 2012 Beers criteria detected the highest number of PIMs at 44% of participants, compared to 24.3% with the 2003 Beers criteria and 35.4% with STOPP. The study also found that the profile of PIMs was different across the criteria, with benzodiazepines being the most frequent in both Beers criteria lists and aspirin in the STOPP. The number of drugs and the presence of a psychological disorder were associated with the risk of prescribing PIMs according to both the 2003 and 2012 Beers criteria. The kappa coefficient for the degree of agreement between STOPP and the 2012 Beers criteria was 0.35, indicating a fair level of agreement and suggesting that the use of both tools may be complementary (PUBMED:24917083).
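Editorial note on the agreement statistics above: the kappa values quoted (e.g., 0.35 between STOPP and the 2012 Beers criteria) are Cohen's kappa, the chance-corrected agreement between two PIM screening tools applied to the same patients. The Python sketch below shows the computation from a 2x2 agreement table; the cell counts are hypothetical, chosen only so that the marginals match the reported prevalences (44% and 35.4% of the 407 participants) and the result lands near the reported 0.35.

import numpy as np

# Hypothetical 2x2 cross-classification of the 407 participants:
# rows = 2012 Beers criteria (PIM yes/no), columns = STOPP (PIM yes/no).
table = np.array([[98, 81],    # Beers-positive row sums to 179 participants (44.0%)
                  [46, 182]])  # Beers-negative; STOPP-positive column sums to 144 (35.4%)

n = table.sum()
p_observed = np.trace(table) / n              # observed agreement
p_rows = table.sum(axis=1) / n                # marginal proportions, Beers
p_cols = table.sum(axis=0) / n                # marginal proportions, STOPP
p_expected = np.sum(p_rows * p_cols)          # agreement expected by chance alone
kappa = (p_observed - p_expected) / (1 - p_expected)
print(round(kappa, 2))                        # 0.35 with these illustrative counts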
Instruction: Bacterial vaginosis, race, and sexually transmitted infections: does race modify the association? Abstracts: abstract_id: PUBMED:18360319 Bacterial vaginosis, race, and sexually transmitted infections: does race modify the association? Background: There are significant racial disparities in the prevalence of sexually transmitted infections (STIs) in the United States. The purpose of this study was to evaluate whether the association of bacterial vaginosis and incident STI is modified by race even after adjustment for sexual practices and other potential confounding variables. Methods: We evaluated the association of bacterial vaginosis (BV) and STI acquisition in a group of 523 women at high risk for unplanned pregnancies and STI. BV was diagnosed by both Gram stain and Amsel criteria. STIs included Chlamydia trachomatis, Neisseria gonorrhoeae, pelvic inflammatory disease, trichomoniasis, syphilis, and HIV. Cox regression estimated the associations and the synergy index assessed whether race modified the association of BV and incident STI. Results: Sixteen percent of participants developed an STI during the 2-year follow-up. Compared with white women without BV at baseline, the adjusted hazard ratios were as follows: white women with BV = 0.59; African American women without BV = 1.96; and African American women with BV = 2.86. The synergy index of 3.38 implies a combined association of BV and African American race with STI in excess of each factor individually. Conclusions: African American race modifies the association of BV and incident STI. Future research should strive to determine the relative contributions of other factors, such as biologic variation, social network or the consequences of socioeconomic position, in this disparity. abstract_id: PUBMED:30913163 Association of Sexually Transmitted Diseases With Cervical Ectopy: A Systematic Review. Objective: The aim of this study was to review the available literature to explore evidence indicating an association between cervical ectopy and sexually transmitted diseases, which could help in the decision to treat or not to treat this condition. Methods: A review of the literature was conducted using the PubMed, EMBASE and clinicaltrials.gov databases on ectopy of the cervix using the terms "ectopy OR ectropium AND cervix" filtered only by language, without limit of date. A total of 71 studies were found in the initial selection, of which 56 were deleted by title, abstract, or full text. The remaining 15 articles were analyzed in this study. Results: Cervical ectopy showed a positive association with human papillomavirus, human immunodeficiency virus, bacterial vaginosis, cervical epithelial atypia, postcoital bleeding, and desquamative inflammatory vaginitis. High-quality studies reported no association between ectopy and chlamydia infection. It was also not associated with gonococcal infection and herpes simplex. Conclusions: Cervical ectopy shows a probable association with increased acquisition of some sexually transmitted diseases. Additional studies are required to confirm the possible beneficial effects of treatment and to evaluate the possible complications of these treatments. abstract_id: PUBMED:28472225 Association of Recent Bacterial Vaginosis With Acquisition of Mycoplasma genitalium. We assessed the association between recent bacterial vaginosis (BV) and incident Mycoplasma genitalium, a sexually transmitted bacterium associated with adverse female reproductive health outcomes. 
Female sex workers in Mombasa, Kenya, completed a monthly sexual behavior interview and clinical examination. During February 2005-February 2006, vaginal fluid specimens collected from women every other month were tested for M. genitalium by nucleic acid amplification testing. Vaginal microbiota were assessed monthly and categorized by Nugent score (0-3 = normal microbiota, 4-6 = intermediate microbiota disruption, and 7-10 = BV). A discrete failure time analysis for multiple events using logistic regression was employed to estimate the odds of incident M. genitalium infection at follow-up visits among women with BV (vs. normal microbiota) at the preceding visit. Among the 280 women, 54.3% were positive for human immunodeficiency virus. At baseline, 16.1% had prevalent M. genitalium infection and 40.4% had prevalent BV. There were 59 incident M. genitalium infections among 50 women, for an incidence rate of 34.6 cases per 100 person-years. Following adjustment for age, human immunodeficiency virus status, and time, prior BV was associated with a 3.5-fold increase in odds of incident M. genitalium (adjusted odds ratio = 3.49, 95% confidence interval: 1.86, 6.56). This strong association suggests that BV may enhance susceptibility to M. genitalium infection. abstract_id: PUBMED:16235170 Association between genital tract cytomegalovirus infection and bacterial vaginosis. Women with sexually transmitted diseases (STDs) and bacterial vaginosis (BV) have increased rates of cytomegalovirus (CMV) seroprevalence and CMV seroconversion. To characterize the association between genital tract CMV infection and BV, vaginal wash specimens from 52 women attending an STD clinic were analyzed. Significantly more women with BV shed CMV in the lower genital tract than did women without BV. In addition, most of the women who were shedding CMV were infected with >1 virus strain. These results suggest that local CMV replication and infection with multiple CMV strains is facilitated by the presence of BV. abstract_id: PUBMED:27275789 Association of Sexual Debut in Adolescents With Microbiota and Inflammatory Markers. Objective: To investigate the association of sexual debut and vaginal, anorectal, and oral microbiota and vaginal inflammatory markers in female adolescents. Methods: We conducted a school-based study in adolescents in Antwerp, Belgium. During three visits over 8 months, participants answered questionnaires and self-collected vaginal, anorectal, and oral swabs. Five Lactobacillus species, Lactobacillus genus, Gardnerella vaginalis, and Atopobium vaginae were quantified; and seven inflammatory markers were measured in the vaginal specimens. In the oral and anorectal specimens, Lactobacillus genus, G vaginalis, and A vaginae were ascertained. Results: Of the 93 adolescents (mean age 16.2 years) at the first visit, 41 (44.1%) had passed sexual debut (penile-vaginal intercourse) and five (5.4%) had sexual experience without passing sexual debut. Having sexual experience at the first visit was not found to be associated with species presence or concentrations (acknowledging an underpowered study because the required sample size was not attained). 
Modeling the longitudinal data on all girls showed that sexual debut was associated with increased odds of vaginal and anorectal G vaginalis (P=.021; P=.030) and A vaginae (P=.041; P=.012) with increments of interleukins (interleukin [IL]-1α P<.001, IL-1β P=.046, IL-8 P=.033) and chemokines (regulated on activation, normal T cell expressed and secreted P<.001; macrophage inflammatory protein-1β P=.040), whereas no difference was seen when modeling (before-after) the girls initiating and girls staying without sexual intercourse. The association of sexual intercourse with IL-1α (P<.001), IL-1β (P=.030), and IL-8 (P=.002) at the first visit was (greater than 70%) mediated by vaginal G vaginalis and A vaginae concentrations. Conclusion: Sexual debut in adolescents is associated with an inflammatory vaginal reaction and with the presence of bacterial vaginosis-related species. Strategies preventing the colonization of bacterial vaginosis-related organisms during early sexual debut are urgently needed and may prevent acquisition of sexually transmitted infections including human immunodeficiency virus in early life. abstract_id: PUBMED:17659883 Vaginal douching and intimate partner violence: is there an association? Objective: This study tests the hypothesis that vaginal douching among women of reproductive age is associated with exposure to intimate partner violence (IPV). Methods: The data source is a cross-sectional population-based sample of 2,002 women ages 18-45 in the Central Pennsylvania Women's Health Study. The survey included measures of IPV, douching behavior, and relevant sociodemographic and health-related covariates. Findings: Six percent of the sample reported experiencing any type of IPV in the past year, and 23% reported douching in the past year. IPV is significantly associated with douching after controlling for sociodemographic and health-related covariates. This finding holds for women with and without current reproductive capacity. Conclusions: This is the first study to identify an association between vaginal douching and IPV. Because vaginal douching is a risk factor for sexually transmitted infections, bacterial vaginosis, and adverse pregnancy outcomes, the findings could have important implications for prevention. Further research is needed to identify the reasons why women who have been exposed to IPV are more likely to douche. abstract_id: PUBMED:31329097 Molecular epidemiology of bacterial vaginosis and its association with genital micro-organisms in asymptomatic women. Introduction. Bacterial vaginosis (BV) is dysbiosis associated with an increased risk of several sexually transmitted infections. It is primarily diagnosed via Gram staining, although molecular analyses have presented higher diagnostic accuracy.Aim. This study aimed to evaluate the molecular epidemiology of BV in asymptomatic women to determine its association with several commensal and pathogenic micro-organisms of the genitalia.Methodology. The prevalence of BV was investigated through semiquantitative assessment of 201 women recruited during their routine gynaecological inspection at an outpatient clinic in Tabasco, Mexico.Results. Women with BV showed an increased prevalence of Chlamydia trachomatis (P=0.021) and Mycoplasma hominis (P=0.001). Of the BV-associated micro-organisms, Gardnerella vaginalis was significantly associated with C. 
trachomatis (P=0.005) and/or Ureaplasma parvum (P=0.003), whereas Atopobium vaginae and Megasphaera type 1 correlated significantly with Mycoplasma hominis (P=0.001). No significant association was observed between human papillomavirus (HPV) infection and BV, although there was increased prevalence of HPV59, HPV73, HPV52 and HPV58 in women displaying cervical cytological abnormalities.Conclusion. Identification of BV-associated micro-organisms via molecular analysis may help to distinguish recurrent cases from new infections and identify micro-organisms potentially associated with pharmacological resistance. abstract_id: PUBMED:9764791 Bacterial vaginosis and disturbances of vaginal flora: association with increased acquisition of HIV. Background: Cross-sectional studies suggest an association between bacterial vaginosis (BV) and HIV-1 infection. However, an assessment of a temporal effect was not possible. Objectives: To determine the association of BV and other disturbances of vaginal flora with HIV seroconversion among pregnant and postnatal women in Malawi, Africa. Design: Longitudinal follow-up of pregnant and postpartum women. Methods: Women attending their first antenatal care visit were screened for HIV after counselling and obtaining informed consent. HIV-seronegative women were enrolled and followed during pregnancy and after delivery. These women were again tested for HIV at delivery and at 6-monthly visits postnatally. Clinical examinations and collection of laboratory specimens (for BV and sexually transmitted diseases) were conducted at screening and at the postnatal 6-monthly visits. The diagnosis of BV was based on clinical criteria. Associations of BV and other risk factors with HIV seroconversion, were examined using contingency tables and multiple logistic regression analyses on antenatal data, and Kaplan-Meier proportional hazards analyses on postnatal data. Results: Among 1196 HIV-seronegative women who were followed antenatally for a median of 3.4 months, 27 women seroconverted by time of delivery. Postnatally, 97 seroconversions occurred among 1169 seronegative women who were followed for a median of 2.5 years. Bacterial vaginosis was significantly associated with antenatal HIV seroconversion (adjusted odds ratio = 3.7) and postnatal HIV seroconversion (adjusted rate ratio = 2.3). There was a significant trend of increased risk of HIV seroconversion with increasing severity of vaginal disturbance among both antenatal and postnatal women. The approximate attributable risk of BV alone was 23% for antenatal HIV seroconversions and 14% for postnatal seroconversions. Conclusions: This prospective study suggests that progressively greater disturbances of vaginal flora, increase HIV acquisition during pregnancy and postnatally. The screening and treating of women with BV could restore normal flora and reduce their susceptibility to HIV. abstract_id: PUBMED:7709322 Association between reproductive tract infections and cervical inflammatory epithelial changes. Objective: To determine the association, if any, between different reproductive tract infections and cervical inflammatory epithelial changes. Study Design: Clinical, colposcopic, cytologic, and microbiologic screening for reproductive tract infections was conducted. Subjects were screened for Chlamydia trachomatis, Neisseria. gonorrhoeae, Trichomonas vaginalis, bacterial vaginosis, yeast vaginitis, human papillomavirus, and serology for herpes simplex viruses, syphilis, and human immunodeficiency virus-1 and -2. 
The subjects were 257 women who visited a maternal and child health center between January 1992 and December 1993. Results: Cervical cytology revealed inflammatory epithelial changes in 207 women (80.5%), the highest proportion in the published series. Inflammatory epithelial changes were significantly associated with the number of reproductive tract infections. Risk increased up to 72.6 fold when women were infected with two or more agents. The positive predictive value of inflammatory epithelial changes for any infection was 88.4%, whereas the negative predictive value for any infection was 63.4%. In univariate analysis, the cervical inflammatory epithelial changes were significantly associated with infections such as chlamydia (28.0%), human papillomavirus (56.5%), Trichomonas vaginalis (18.3%), bacterial vaginosis (38.2%), and herpes simplex virus as evidenced by the presence of immunoglobulin M antibodies (28%). Multivariate analysis, however, revealed an independent association of inflammatory epithelial changes with chlamydia (odds ratio, 21.3; 95% confidence interval, 2.6, 181.3), human papillomavirus (odds ratio, 13.5; 95% confidence interval, 4.5, 39.6), and bacterial vaginosis (odds ratio, 22.6; 95% confidence interval, 2.9, 181) only. Conclusion: Cervical inflammatory epithelial changes are significantly associated with reproductive tract infection. Cervical inflammatory epithelial changes predict correctly to an extent of 81% infections with chlamydia, bacterial vaginosis, and human papillomavirus infection. A negative smear, on the other hand, rules out these infections to an extent of 83%. abstract_id: PUBMED:17763335 The association between Trichomonas vaginalis infection and level of vaginal lactobacilli, in nonpregnant women. The effect that vaginal lactobacilli have on trichomoniasis is not known. At 3 clinics for treatment of sexually transmitted disease, we recruited 521 female patients with trichomoniasis and 176 control subjects. All participants underwent physical examinations and testing for sexually transmitted infection and completed questionnaires. Low levels of vaginal lactobacilli were associated with trichomoniasis (odds ratio [OR], 2.2 [95% confidence interval {CI}, 1.4-3.2]). After adjustment for covariables, this association remained in women with a higher educational level (OR, 4.6 [95% CI, 2.2-9.5]) but not in women with a lower educational level (OR, 1.6 [95% CI, 0.7-3.4]). Vaginal lactobacilli may be associated with trichomoniasis in women with higher levels of education or a related behavioral factor. Answer: The study identified in abstract PUBMED:18360319 investigated whether race modifies the association between bacterial vaginosis (BV) and the acquisition of sexually transmitted infections (STIs). The study evaluated 523 women at high risk for unplanned pregnancies and STIs, diagnosing BV by both Gram stain and Amsel criteria. The STIs considered included Chlamydia trachomatis, Neisseria gonorrhoeae, pelvic inflammatory disease, trichomoniasis, syphilis, and HIV. The results showed that 16% of participants developed an STI during the 2-year follow-up. The adjusted hazard ratios indicated that African American women with BV had a higher risk (2.86) of developing an STI compared to white women without BV. In contrast, white women with BV had a lower risk (0.59) compared to white women without BV. African American women without BV had an increased risk (1.96) compared to white women without BV. 
The synergy index of 3.38 suggested a combined association of BV and African American race with STI that was greater than each factor individually. The study concluded that African American race does modify the association of BV and incident STI, implying that the risk of STIs associated with BV is higher among African American women than among white women. Future research was recommended to determine the relative contributions of other factors, such as biological variation, social network, or the consequences of socioeconomic position, in this disparity.
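A brief aside on the synergy index, since both the abstract and the answer lean on it: in its usual Rothman formulation it compares the joint effect on the excess-risk scale with the sum of the separate effects, S = (HR_both - 1) / ((HR_A - 1) + (HR_B - 1)), where S > 1 indicates super-additive interaction. Plugging in the adjusted hazard ratios reported above (reference group: white women without BV) reproduces the reported value, as the minimal Python sketch below shows; the formula is the standard one, though the study's exact estimation details (e.g., its confidence interval) are not given here.

# Adjusted hazard ratios from the cited study (reference: white women without BV).
hr_bv_only = 0.59     # white women with BV
hr_race_only = 1.96   # African American women without BV
hr_both = 2.86        # African American women with BV

synergy = (hr_both - 1) / ((hr_bv_only - 1) + (hr_race_only - 1))
print(round(synergy, 2))  # 3.38: the joint excess risk exceeds the sum of the parts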
Instruction: Increase in T-cell subsets of oral mucosa: a late immune response in patients with treated coeliac disease? Abstracts: abstract_id: PUBMED:11119267 Increase in T-cell subsets of oral mucosa: a late immune response in patients with treated coeliac disease? Background And Aims: In coeliac disease, the gut involvement is gluten-dependent. Following the introduction of a gluten-free diet, inflammatory cell infiltration decreases in the small intestinal mucosa. Our hypothesis was that the oral mucosa might mirror the changes found in coeliac disease similarly to the mucosa of the small intestine. Thus, the number of inflammatory cells in the oral mucosa would decrease in patients with coeliac disease on a gluten-free diet. Methods: The distribution of CD45RO+ and CD3(+) T cells, T-cell subpopulations (CD4(+), CD8(+), T-cell receptor (TCR)alpha beta+ and TCR gamma delta+ cells) and HLA DR expression were studied in the buccal mucosa of 15 untreated and 44 gluten-free diet treated coeliac disease patients, and of 19 controls. All 15 patients with untreated coeliac disease were immunoglobulin (Ig)A endomysial antibody positive and all 44 patients on gluten-free diet except one were endomysial antibody negative, as were all control subjects. Results: Untreated coeliac disease patients did not differ from controls in the densities of CD45RO+ cells, CD3(+) cells or of T-cell subsets. In contrast, in treated coeliac disease patients, a significant increase in the numbers of mast cells, CD3(+) and CD4(+) lymphocytes was found in the lamina propria of oral mucosa as compared with patients with untreated coeliac disease and controls. The increase in CD3(+) T cells was in part owing to an increase in lymphocytes expressing no TCR. No differences were found in the expression of human leucocyte antigen (HLA) DR in the epithelium or in the lamina propria in the patient groups studied or in the controls. In treated coeliac disease patients only a few TCR gamma delta+ T cells were found intraepithelially and in the lamina propria, but these cells were not detected in the lamina propria of oral mucosa of patients with untreated coeliac disease or in the controls. Conclusions: The infiltration of T cells into oral mucosa was increased in treated coeliac disease patients in spite of adherence to a gluten-free diet. Because the CD3(+) T cell count was higher than those of the TCR alpha beta+ and TCR gamma delta+ T cells, there must be other cells involved, probably natural killer (NK) cells. The increase in T-cell subsets in the treated coeliac disease patients seems not to result from poor dietary compliance, but might occur as a late immune response in coeliac disease and reflect chronic immunologic stimulation followed by regeneration of memory T cells. abstract_id: PUBMED:27178470 Mass Cytometry of the Human Mucosal Immune System Identifies Tissue- and Disease-Associated Immune Subsets. Inflammatory intestinal diseases are characterized by abnormal immune responses and affect distinct locations of the gastrointestinal tract. Although the role of several immune subsets in driving intestinal pathology has been studied, a system-wide approach that simultaneously interrogates all major lineages on a single-cell basis is lacking. We used high-dimensional mass cytometry to generate a system-wide view of the human mucosal immune system in health and disease.
We distinguished 142 immune subsets and through computational applications found distinct immune subsets in peripheral blood mononuclear cells and intestinal biopsies that distinguished patients from controls. In addition, mucosal lymphoid malignancies were readily detected as well as precursors from which these likely derived. These findings indicate that an integrated high-dimensional analysis of the entire immune system can identify immune subsets associated with the pathogenesis of complex intestinal disorders. This might have implications for diagnostic procedures, immune-monitoring, and treatment of intestinal diseases and mucosal malignancies. abstract_id: PUBMED:17660501 Concomitant increase of IL-10 and pro-inflammatory cytokines in intraepithelial lymphocyte subsets in celiac disease. Celiac disease (CD) is a small intestinal enteropathy caused by permanent intolerance to wheat gluten. Active disease is characterized by a prominent cytokine response of intraepithelial lymphocytes (IELs) to gluten-containing diet with concomitant increase in expression of pro-inflammatory IFN-gamma and down-regulatory IL-10 without increase in tumor necrosis factor-alpha (TNF-alpha) or transforming growth factor-beta1 (TGF-beta1). The aim was to understand the local immune reaction by determining which intraepithelial T cell subsets produce the different cytokines. The three major IEL-subsets gammadeltaIELs, CD4(+)alphabetaIELs and CD8(+)alphabetaIELs, as well as CD94(+)CD8(+)alphabetaIELs, selectively expanded in active CD, were retrieved from small intestinal biopsies of children with active CD and controls and analyzed quantitatively for cytokine mRNA expression. In active CD, CD8(+)alphabetaIELs showed a significant increase in expression levels of both IFN-gamma and IL-10. CD8(+)alphabetaIELs were also the IEL subset with highest expression level per cell of both cytokines and constituted the cellular source for almost all IFN-gamma and most IL-10. Expression levels of both cytokines were higher in CD94(-)CD8(+)alphabetaIELs than CD94(+)CD8(+)alphabetaIELs. TNF-alpha levels were only increased in CD4(+)alphabetaIELs, which also showed the highest expression level per cell and constituted the major source of this cytokine. Interestingly, IL-10 was increased also in CD4(+)alphabetaIELs. Cytokine levels were low in gammadeltaIELs. 'Classical' CD94(-)CD8(+)alphabeta T cells within the epithelium are responsible for the excessive production of IFN-gamma, believed to drive the formation of intestinal lesions in active CD. Production of IL-10 may be a common feature of IELs producing pro-inflammatory cytokines, thereby attempting to limit inflammation in an autocrine fashion. abstract_id: PUBMED:24852733 Histopathological findings in the oral mucosa of celiac patients. Background: Celiac disease (CD) is an immune-mediated enteropathy triggered by the ingestion of gluten in genetically susceptible subjects. Although the small intestinal mucosa is the main site of the gut's involvement in CD, other mucosal surfaces belonging to the gastrointestinal tract and the gut-associated lymphoid tissue are known to be affected. Aim: Assuming that the oral mucosa could reflect the histopathological inflammatory alterations of the intestine in CD patients, this study aimed to assess the pattern of T-cell subsets in the oral mucosa of young adults with CD. Methods: A group of 37 patients (age range 20-38 years; female:male ratio 28:9) with CD were enrolled.
Out of 37 patients, 19 patients (group A) followed a gluten free diet (GFD) (2 patients for less than one year, 6 patients for between 1 and 5 years, and 11 patients for more than 5 years), while 18 patients (group B) were still untreated. Fifteen healthy volunteers (age range 18-35 years, female:male ratio 11:4) served as controls for the CD patients. Ethical approval for the research was granted by the Ethics Committee. Biopsy specimens were taken from normal-looking oral mucosa. The immunohistochemical investigation was performed with monoclonal antibodies to CD3, CD4, CD8, and the gamma/delta-chain T cell receptor (TCR). Results: The T-lymphocytic inflammatory infiltrate was significantly (p < 0.0001) increased in group B (both compared with group A and with the control group). Conclusion: This study confirms the oral cavity to be a site of involvement of CD and its possible diagnostic potentiality in this disease. abstract_id: PUBMED:32721554 Human gut-derived commensal suppresses generation of T-cell response to gliadin in humanized mice by modulating gut microbiota. The human intestinal tract is colonized by a large number of diverse microorganisms that play various important physiologic functions. In inflammatory gut diseases including celiac disease (CeD), a dysbiotic state of microbiome has been observed. Interestingly, this perturbed microbiome is normalized towards eubiosis in patients showing recovery after treatment. The treatment has been observed to increase the abundance of beneficial microbes in comparison to non-treated patients. In this study, we investigated the effect of Prevotella histicola or Prevotella melaninogenica, isolated from the duodenum of a treated CeD patient, on the induction and maintenance of oral tolerance to gliadin, a CeD associated subgroup of gluten proteins, in NOD.DQ8.ABo transgenic mice. Conventionally raised mice on a gluten free diet were orally gavaged with bacteria before and after injection with pepsin trypsin digested gliadin (PTD-gliadin). P. histicola suppressed the cellular response to gliadin, whereas P. melaninogenica failed to suppress an immune response against gliadin. Interestingly, tolerance to gliadin in NOD.DQ8.ABo mice may be associated with gut microbiota, as mice gavaged with P. melaninogenica harbored a different microbial diversity as compared to P. histicola treated mice. This study provides experimental evidence that gut microbes like P. histicola from treated patients can suppress the immune response against gliadin epitopes. abstract_id: PUBMED:28658546 Pathogenic T cell subsets in allergic and chronic inflammatory bowel disorders. Homeostasis in the gastrointestinal tract relies on a sensitive equilibrium between permissive and protective functions. This is closely reflected in the regulation of the intestinal immune system and especially T cells in the gut. This balance, however, is susceptible to disturbances as demonstrated by pathological conditions like food allergy, celiac disease, or inflammatory bowel disease. In these allergic and chronic inflammatory bowel disorders, luminal antigens get access to the lamina propria where they trigger a dysregulated immune response with crucial involvement of different T cell subsets. We will begin this review with some comprehensive remarks on current concepts on the pathogenesis of these diseases before taking a closer look at the life cycle of intestinal T cells, consisting of priming, homing, differentiation and proliferation, and apoptosis, respectively.
Subsequently we will discuss the specific implication of distinct T cell subsets in allergic and chronic inflammatory conditions of the gastrointestinal tract in detail and comment on current and future approaches to targeted therapy in this context. abstract_id: PUBMED:21143239 Adaptive T-cell responses regulating oral tolerance to protein antigen. The term oral (or mucosal) tolerance has been classically defined as the suppression of T- and B-cell responses to an antigen by prior administration of the antigen by the oral route. In recent years, it has become clear that both innate and acquired regulatory immune responses are essential for the development of oral tolerance. As such, mucosal microenvironmental factors such as transforming growth factor- β, prostaglandins but also dietary vitamin A create conditioning of an adaptive regulatory T-cell response that suppresses subsequent antigen-specific responses. Particular resident subsets of antigen presenting dendritic cells are pivotal to convey conditioning signals next to the presentation of antigen. This review discusses the primary mechanisms of adaptive regulatory T-cell induction to ingested soluble protein antigen. However, we also discuss the limitations of our knowledge with respect to understanding the very common food hypersensitivity Celiac disease caused by an aberrant adaptive immune response to the food protein gluten. abstract_id: PUBMED:10759761 Local challenge of oral mucosa with gliadin in patients with coeliac disease. In coeliac disease, gluten-containing diet challenges over many years are sometimes required for diagnosis, especially if the initial diagnosis was equivocal. The rectal gluten challenge has been proposed to simplify coeliac disease diagnosis. We were interested in studying whether the oral mucosa could be used for local challenge with gliadin as an aid in finalizing the diagnosis of coeliac disease. The study groups consisted of 37 treated coeliac disease patients and 10 controls. The challenges on the oral mucosa were performed either supramucosally with gliadin powder (coeliac disease patients) or by submucosal injection of dissolved gliadin (10 microg/ml) (coeliac disease patients and controls). A control challenge with submucosal gliadin solvent was made in the coeliac disease patients. B and T cells, mast cells and T cell subsets were counted and HLA-DR expression was determined. Biopsies were taken from each provoked area 24 h post-challenge. A significant increase in the number of CD4+ lymphocytes in the lamina propria (observed in 27/37 patients), but a decrease in the number of mast cells was observed in treated coeliac disease patients after submucosal challenge with gliadin. Following supramucosal challenge with gliadin the counts of intraepithelial CD4+ (in 25/37 patients) and CD8+ T cells (in 27/37 patients) increased significantly and the number of CD4+ T cells in the lamina propria was also significantly increased. Control subjects were tested by submucosal gliadin challenge and no significant changes in the number of cells were observed. HLA-DR expression did not show increased positivity in coeliac disease patients on submucosal challenge. For the first time the oral mucosa has been used for immunological testing and shown to react to gliadin challenge in coeliac disease patients. Recruitment of T cells upon submucosal gliadin challenge occurred towards the lamina propria, whereas it occurred towards the epithelium in supramucosal gliadin challenge. 
The numbers of T cells increased in the lamina propria after submucosal challenge. The results suggest that local oral challenge with gliadin may be used as a diagnostic method in coeliac disease; however, further studies in untreated coeliac disease patients are needed to evaluate the usefulness of this method. abstract_id: PUBMED:34960101 Oral Consumption of Bread from an RNAi Wheat Line with Strongly Silenced Gliadins Elicits No Immunogenic Response in a Pilot Study with Celiac Disease Patients. Celiac disease (CD) is a genetically predisposed, T cell-mediated and autoimmune-like disorder caused by dietary exposure to the storage proteins of wheat and related cereals. A gluten-free diet (GFD) is the only treatment available for CD. The celiac immune response mediated by CD4+ T-cells can be assessed with a short-term oral gluten challenge. This study aimed to determine whether the consumption of bread made using flour from a low-gluten RNAi wheat line (named E82) can activate the immune response in DQ2.5-positive patients with CD after a blind crossover challenge. The experimental protocol included assessing IFN-γ production by peripheral blood mononuclear cells (PBMCs), evaluating gastrointestinal symptoms, and measuring gluten immunogenic peptides (GIP) in stool samples. The response of PBMCs to gliadin and the 33-mer peptide was not significant after E82 bread consumption. In contrast, PBMCs reacted significantly to Standard bread. This lack of immune response is correlated with the fact that, after E82 bread consumption, stool samples from patients with CD showed very low levels of GIP, and the symptoms were comparable to those of the GFD. This pilot study provides evidence that bread from RNAi E82 flour does not elicit an immune response after a short-term oral challenge and could help manage GFD in patients with CD. abstract_id: PUBMED:15841768 Enhanced local immune response in children with prolonged gastrointestinal symptoms. Aim: Recently, we reported typical endoscopic findings and an increment in gammadelta+ T cells in the foregut among children with food-sensitive enteropathy other than coeliac disease. To find out the extent to which the upregulation of the local immune response might explain gastrointestinal (GI) complaints of the foregut, we sought to examine, by the increment in gammadelta+ T cells, a 1-y consecutive series of children referred for recurrent GI complaints to a tertiary-level hospital. Methods: A 1-y cohort of 102 children scheduled for gastroduodenoscopy were examined for mucosal histology and the densities of CD3+, alphabeta+ and gammadelta+ T-cell subsets from mid-duodenal specimens. The final diagnostic categories were used in analysing the data. Results: Fifteen subjects showed villous atrophy and a high gammadelta+ T-cell density; the finding being compatible with coeliac disease (CD). At the other extreme, 20 subjects in whom diagnostic GI diseases were ruled out showed low densities and served as controls. The subjects reporting GI symptoms after an open food challenge with milk and/or cereals (n = 18) as well as children remitting with a milk- or cereal-eliminating diet but not responding to a challenge (n = 23) also expressed significantly higher densities of gammadelta+ T cells than the controls. In all, 45 of 102 children could be considered to have an elevated gammadelta+ T-cell density as an indication of locally activated immune response.
Lack of villous architecture and lymphonodular hyperplasia of the duodenal bulb as an endoscopic finding and atopic dermatitis but not the presence of DQ2 alleles showed a close association with these increased densities. Conclusion: Considering that an elevated incidence of gammadelta+ T cells is an indication of mucosal response against luminal antigens, up to half the children with prolonged GI symptoms have an immune-mediated disorder, with CD and food allergy being the most obvious clinical entities. Answer: Yes, there is an increase in T cell subsets of the oral mucosa, which appears to be a late immune response in patients with treated coeliac disease. A study found that in treated coeliac disease patients, there was a significant increase in the numbers of mast cells, CD3(+) and CD4(+) lymphocytes in the lamina propria of the oral mucosa compared with patients with untreated coeliac disease and controls. This increase in CD3(+) T cells was partly due to an increase in lymphocytes expressing no T-cell receptor (TCR), suggesting the involvement of other cells, possibly natural killer (NK) cells. The increase in T-cell subsets in treated coeliac disease patients does not seem to result from poor dietary compliance but might occur as a late immune response in coeliac disease, reflecting chronic immunologic stimulation followed by the regeneration of memory T cells (PUBMED:11119267). Furthermore, another study confirmed that the oral cavity is a site of involvement in coeliac disease and that the T-lymphocytic inflammatory infiltrate was significantly increased in untreated coeliac disease patients (group B) compared with both treated patients (group A) and the control group. This suggests the oral mucosa could reflect the histopathological inflammatory alterations of the intestine in CD patients and has potential diagnostic value for the disease (PUBMED:24852733). These findings indicate that even after the introduction of a gluten-free diet, which generally leads to a decrease in inflammatory cell infiltration in the small intestinal mucosa, there can be persistent or late immune responses in other mucosal sites such as the oral mucosa in individuals with coeliac disease.
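Methodological footnote: the oral-mucosa studies above all reduce to comparing inflammatory cell densities across three groups (untreated CD, treated CD, healthy controls). The Python sketch below illustrates one conventional way to run such a three-group comparison; the density values are fabricated for illustration, and the Kruskal-Wallis test is only an assumed choice for small immunohistochemistry samples, not necessarily the test used in the cited papers.

from scipy.stats import kruskal

# Fabricated CD4+ cell densities (cells per mm^2) in the lamina propria.
untreated = [12, 15, 9, 14, 11, 13]
treated = [28, 31, 25, 35, 30, 27]   # elevated, mirroring the reported pattern
controls = [10, 13, 8, 12, 14, 9]

h_stat, p_value = kruskal(untreated, treated, controls)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")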
Instruction: Is T-helper type 2 shift schizophrenia-specific? Abstracts: abstract_id: PUBMED:23683153 Is T-helper type 2 shift schizophrenia-specific? Primary results from a comparison of related psychiatric disorders and healthy controls. Aim: An imbalance between T-helper type 1 (Th1) and type 2 (Th2) cytokines has been implicated in schizophrenia, although empirical evidence is rare. The aim of this study was to examine if a Th1/Th2 imbalance occurs in schizophrenia and schizophrenia-related disorder. Methods: Twenty-six subjects with schizophrenia, 26 subjects with schizophrenia-related disorders, and 26 healthy controls were recruited. The Human Th1/Th2 Cytokine Cytometric Bead Array Kit-II was utilized to assess serum Th1/Th2 cytokines and ratios simultaneously. MANOVA was used to detect differences among the three diagnostic groups in distinct Th1/Th2 cytokines/ratios. Pearson/Spearman correlations were used to examine the relationships between distinct Th1/Th2 cytokines/ratios and clinical/psychopathological data in schizophrenia. Results: Interferon (IFN)-γ/interleukin (IL)-4, IFN-γ/IL-10, IL-2/IL-4, and tumor necrosis factor (TNF)-α/IL-4 ratios were significantly decreased in schizophrenia, but not in schizophrenia-related disorders compared to healthy controls. IFN-γ/IL-4 and IFN-γ/IL-10 in schizophrenia subjects positively correlated with age, but not in schizophrenia-related disorder subjects or in healthy controls. Conclusion: A clear Th2 shift was observed in schizophrenia, but not in schizophrenia-related disorders. The Th2 shift in schizophrenia appeared to be an aberrant developmental phenomenon. abstract_id: PUBMED:33407556 An analytical method for the identification of cell type-specific disease gene modules. Background: Genome-wide association studies have identified genetic variants associated with the risk of brain-related diseases, such as neurological and psychiatric disorders, while the causal variants and the specific vulnerable cell types often remain to be identified. Many disease-associated genes are expressed in multiple cell types of human brains, while the pathologic variants affect primarily specific cell types. We hypothesize a model in which what determines the manifestation of a disease in a cell type is the presence of a disease module composed of disease-associated genes, instead of individual genes. Therefore, it is essential to identify the presence/absence of disease gene modules in cells. Methods: To characterize the cell type-specificity of brain-related diseases, we construct human brain cell type-specific gene interaction networks integrating human brain nucleus gene expression data with a referenced tissue-specific gene interaction network. Then from the cell type-specific gene interaction networks, we identify significant cell type-specific disease gene modules by performing statistical tests. Results: Between neurons and glial cells, the constructed cell type-specific gene networks and their gene functions are distinct. Then we identify cell type-specific disease gene modules associated with autism spectrum disorder and find that different gene modules are formed and distinct gene functions may be dysregulated in different cells. We also study the similarity and dissimilarity in cell type-specific disease gene modules among autism spectrum disorder, schizophrenia and bipolar disorder. The functions of neuron-specific disease gene modules are associated with synapse for all three diseases, while those in glial cells are different.
To facilitate the use of our method, we develop an R package, CtsDGM, for the identification of cell type-specific disease gene modules. Conclusions: The results support our hypothesis that a disease manifests itself in a cell type through forming a statistically significant disease gene module. The identification of cell type-specific disease gene modules can promote the development of more targeted biomarkers and treatments for the disease. Our method can be applied for depicting the cell type heterogeneity of a given disease, and also for studying the similarity and dissimilarity between different disorders, providing new insights into the molecular mechanisms underlying the pathogenesis and progression of diseases. abstract_id: PUBMED:16145186 Diabetes knowledge among persons with serious mental illness and type 2 diabetes. Type 2 diabetes is an important medical condition associated with serious mental illness. The authors studied the disease-specific knowledge about diabetes in a sample of 201 psychiatric outpatients with a diagnosis of schizophrenia or major mood disorders, all of whom had type 2 diabetes. In a multivariate analysis, disease-specific diabetes knowledge was associated with higher cognitive functioning, a higher level of education, and recent receipt of diabetes education. Disease-specific diabetes knowledge predicted lower levels of perceived barriers to diabetes care. Gaps in diabetes knowledge may be reduced by specialized interventions that take into account the cognitive deficits of persons with serious mental illness. abstract_id: PUBMED:28174595 HLA DRB1*03 as a possible common etiology of schizophrenia, Graves' disease, and type 2 diabetes. Background: Autoimmune diseases and schizophrenia share many common features. Association studies confirm a shared genetic association in the human leukocyte antigen (HLA) region between schizophrenia and most autoimmune diseases. To our knowledge, the simultaneous syndromes of Graves' disease (GD) and type 2 diabetes (T2D) in schizophrenia are rare in Tunisia. Case Presentation: We report a case of a 42-year-old woman admitted to the department of psychiatry for an acute relapse of chronic schizophrenia. Her medical history revealed that she was followed for Graves' disease and for type 2 diabetes mellitus. A low-resolution HLA typing was performed by polymerase chain reaction sequence-specific primer (PCR-SSP) techniques to determine the patient's haplotype. Conclusions: Our study suggests that the HLA DRB1*03 allele may explain a common etiology underlying the co-morbidity of Graves' disease, type 2 diabetes, and schizophrenia in our patient. abstract_id: PUBMED:27061195 Studying the Genetics of Complex Disease With Ancestry-Specific Human Phenotype Networks: The Case of Type 2 Diabetes in East Asian Populations. Genome-wide association studies (GWAS) have led to the discovery of over 200 single nucleotide polymorphisms (SNPs) associated with type 2 diabetes mellitus (T2DM). Additionally, East Asians develop T2DM at a higher rate, younger age, and lower body mass index than their European ancestry counterparts. The reason behind this occurrence remains elusive. With comprehensive searches through the National Human Genome Research Institute (NHGRI) GWAS catalog literature, we compiled a database of 2,800 ancestry-specific SNPs associated with T2DM and 70 other related traits.
Manual data extraction was necessary because the GWAS catalog reports statistics such as odds ratio and P-value, but does not consistently include ancestry information. Currently, many statistics are derived by combining initial and replication samples from study populations of mixed ancestry. Analysis of all-inclusive data can be misleading, as not all SNPs are transferable across diverse populations. We used ancestry data to construct ancestry-specific human phenotype networks (HPN) centered on T2DM. Quantitative and visual analysis of network models reveals the genetic disparities between ancestry groups. Of the 27 phenotypes in the East Asian HPN, six phenotypes were unique to the network, revealing the underlying ancestry-specific nature of some SNPs associated with T2DM. We studied the relationship between T2DM and five phenotypes unique to the East Asian HPN to generate new interaction hypotheses in a clinical context. The genetic differences found in our ancestry-specific HPNs suggest different pathways are involved in the pathogenesis of T2DM among different populations. Our study underlines the importance of ancestry in the development of T2DM and its implications in pharmacogenetics and personalized medicine. abstract_id: PUBMED:22256413 Psychiatric co-morbidity with type 1 and type 2 diabetes mellitus. We evaluated the prevalence of diabetes comorbidity in Greek psychiatric patients, differences between type 1 and type 2 diabetics and the outcome of psychiatric disorder. Of 800 psychiatric patients meeting our inclusion criteria, 82 (10.2%) had diabetes mellitus; 28% type 1 and 72% type 2. The mean age at onset of mental illness was earlier for type 1 diabetics (mean 26.95, SD 9.09 years) than type 2 (mean 33.22, SD 10.71 years) (P < 0.015) and the duration of untreated mental illness was shorter (mean 2.86, SD 3.06 years compared with mean 4.13, SD 6.24 years for type 2 diabetics). Schizophrenia was the commonest psychiatric diagnosis in both types of diabetes. There was no significant difference in outcome of psychiatric disorder between the 2 types of diabetics. Existence of diabetes mellitus (regardless of type), duration of untreated mental illness and lack of patient therapeutic education were negative predictors of (unfavourable) outcome. These findings provide evidence of a high prevalence of diabetes in psychiatric patients and that having diabetes has an adverse effect on outcome of psychiatric illness. abstract_id: PUBMED:36825444 Schizophrenia and type 2 diabetes: Perceptions and understandings of illness management in everyday life. People with schizophrenia and type 2 diabetes face complex challenges in daily life and the management of both illnesses is burdensome. This qualitative interview study aimed to explore perceptions and understandings of the day-to-day management of schizophrenia and type 2 diabetes. Fourteen semi-structured interviews were conducted between January 2020 and October 2021 in the participants' respective mental health clinics, in their homes or by phone. Thematic analysis led to four themes representing participants' self-management strategies and perceived challenges. The first theme showed that participants use self-learned strategies for managing schizophrenia. In contrast, they perceived type 2 diabetes self-management as governed by a set of rules and guidelines given by health professionals. The second theme showed that both psychotic and negative symptoms present challenges to diabetes management.
The third theme illustrated that participants consider their type 2 diabetes to be a very serious illness. They worried about potential long-term consequences and expressed wishes and motivation to improve their lifestyle. The final theme showed that participants discuss challenges related to their schizophrenia with family and friends, but not those related to type 2 diabetes. In conclusion, this study highlights the importance of considering individual challenges and everyday routines when supporting this population. It underlines the need for future research to further explore the complexity of managing the illnesses and to understand the needs for treatment and support. abstract_id: PUBMED:36252338 Association of comorbid mental disorders with cardiovascular disease risk in patients with type 2 diabetes: A nationwide cohort study. Objective: To examine the association between comorbid mental disorders and cardiovascular disease (CVD) risk among patients with type 2 diabetes. Method: This retrospective cohort study was conducted using the claims data of 2,227,394 South Korean patients with type 2 diabetes. We analyzed the occurrence of CVD including myocardial infarction (MI) and ischemic stroke, CVD-specific mortality, and all-cause mortality according to comorbid mental disorders including depressive disorders, bipolar and related disorders, schizophrenia spectrum disorders, insomnia, and anxiety disorders. Results: Among the patients, 9.1% had a comorbid mental disorder. The adjusted hazard ratios (aHR) for MI, ischemic stroke, CVD-specific mortality, and all-cause mortality in patients with any mental disorder were 1.20 (95% CI, 1.17-1.24), 1.13 (95% CI, 1.11-1.16), 1.16 (95% CI, 1.12-1.20), and 1.21 (95% CI, 1.19-1.23), respectively. Each mental disorder increased the risk of all outcomes, particularly bipolar and related disorders and schizophrenia spectrum disorders. Conclusion: Comorbid mental disorders increased the CVD risk in patients with type 2 diabetes, with significantly increased risks associated with schizophrenia spectrum disorders (aHR: 1.27 for MI and 1.50 for ischemic stroke) and bipolar and related disorders (aHR: 1.27 for MI and 1.45 for ischemic stroke). abstract_id: PUBMED:36615099 Temporal Trends in Mortality Associated with Comorbid Type 2 Diabetes and Schizophrenia: The Fremantle Diabetes Study. Background: In Phase I of the community-based Fremantle Diabetes Study (FDS1), there was evidence of a deleterious interactive effect of schizophrenia and type 2 diabetes on mortality. Our aim was to investigate whether the mortality gap had improved in FDS Phase II (FDS2) conducted 15 years later. Methods: Participants with type 2 diabetes from FDS1 (n = 1291, recruited 1993-1996) and FDS2 (n = 1509, recruited 2008-2011) were age-, sex- and postcode-matched 1:4 to people without diabetes. Schizophrenia at entry and incident deaths were ascertained from validated administrative data. Results: Schizophrenia affected 50/11,195 (0.45%) of participants without diabetes and 17/2800 (0.61%) of those with type 2 diabetes (p = 0.284). During 142,304 person-years of follow-up, the mortality rate (95% CI) was lowest for the FDS2 subgroup without diabetes/schizophrenia (18.2 (16.9, 19.6)/1000 person-years) and highest in FDS2 and FDS1 subgroups with type 2 diabetes/schizophrenia (53.3 (14.5, 136.6) and 98.0 (31.8, 228.8)/1000 person-years, respectively).
Compared to the respective FDS subgroup without diabetes/schizophrenia, the mortality rate ratio was approximately 50% higher in the type 2 diabetes subgroup, and three times higher in those with type 2 diabetes/schizophrenia. In Cox regression, unadjusted hazard ratios were highest in those with type 2 diabetes/schizophrenia in FDS1 (HR (95% CI): 3.71 (1.54, 8.93)) and FDS2 (2.96 (1.11, 7.91)), increasing to 5.61 (2.33, 13.5) and 26.9 (9.94, 72.6), respectively, after adjustment for age. Conclusions: Although limited by small numbers of schizophrenia cases, these data suggest that comorbid type 2 diabetes and schizophrenia remains associated with a substantial and possibly increasing mortality gap. abstract_id: PUBMED:28046198 Deleterious Results of Safety Seeking Behaviours in Panic Disorder: Polydipsia and Diabetes Mellitus Type 2 Panic disorder is an anxiety disorder that involves recurrent panic attacks, which emerge when a harmless stimulus is interpreted as "catastrophic". In an attempt to avoid the panic attack or prevent confrontation, the patient exhibits a dysfunctional attitude and behavior, such as evasion and safety-seeking behavior (SSB). Dysfunctional behavior leads to an increase in the recurrence of panic attacks and affects the patient's life in a negative way. According to the cognitive behavioral therapy model, SSB contributes to the continuation of unrealistic beliefs (e.g. regarding physical experiences) and prevents the patient from grasping new information that may potentially contradict the unrealistic cognitions. In this paper, we present a case with a primary diagnosis of panic disorder. Interestingly, this patient developed diabetes mellitus (DM) type 2 and psychogenic polydipsia (PPD) as a consequence of his SSB. PPD is a common occurrence in patients with psychiatric disorders, especially in schizophrenia. Up to now, no case of a panic disorder with either DM or PPD has been reported in the literature. While it is accepted that major depression poses a risk for DM type 2, panic disorder may also increase this risk. Treatment of the panic disorder with cognitive behavioral therapy (CBT) resulted in improvement of PPD and DM type 2. In conclusion, the role of SSB in medical disorders accompanied by psychiatric disorders should be kept in mind when treating these patients. Answer: No, the T helper type 2 (Th2) shift is not specific to schizophrenia. According to the study (PUBMED:23683153), a clear Th2 shift was observed in schizophrenia but not in schizophrenia-related disorders when compared to healthy controls. This suggests that while a Th2 shift may be associated with schizophrenia, it is not a characteristic shared by all related psychiatric disorders. The study found that certain cytokine ratios, such as IFN-γ/IL-4 and IFN-γ/IL-10, were significantly decreased in schizophrenia, indicating a Th2 shift, but this was not the case for subjects with schizophrenia-related disorders. Therefore, the Th2 shift appears to be more closely associated with schizophrenia rather than being a common feature across various psychiatric conditions.
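The Fremantle abstract above reports crude mortality per 1,000 person-years and rate ratios between subgroups. As an editorial illustration of how such figures are typically derived, here is a minimal Python sketch; the event counts and person-years below are invented for illustration and are not the study's actual data.

```python
import math

def rate_per_1000(events: int, person_years: float) -> float:
    """Crude mortality rate per 1,000 person-years."""
    return 1000.0 * events / person_years

def rate_ratio_ci(e1, py1, e0, py0, z=1.96):
    """Rate ratio (group 1 vs group 0) with a Wald 95% CI on the log scale."""
    rr = (e1 / py1) / (e0 / py0)
    se = math.sqrt(1.0 / e1 + 1.0 / e0)  # SE of log(rate ratio), events only
    return rr, (rr * math.exp(-z * se), rr * math.exp(z * se))

# Illustrative counts only -- NOT the FDS study's actual event numbers.
events_ref, py_ref = 400, 22000.0   # e.g. no diabetes/schizophrenia
events_t2d, py_t2d = 150, 5500.0    # e.g. type 2 diabetes alone

print(f"reference rate: {rate_per_1000(events_ref, py_ref):.1f} /1000 py")
rr, (lo, hi) = rate_ratio_ci(events_t2d, py_t2d, events_ref, py_ref)
print(f"rate ratio: {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Age-adjusted hazard ratios such as those quoted in the abstract would instead come from a fitted Cox model, which this crude calculation does not replace.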
Instruction: Does the size of the vaginal probe affect measurement of pelvic floor muscle strength? Abstracts: abstract_id: PUBMED:15683371 Does the size of the vaginal probe affect measurement of pelvic floor muscle strength? Background: The most commonly used method to measure pelvic floor muscle (PFM) strength is vaginal squeeze pressure. There are, however, several apparatuses available for this purpose, and sizes of the probes differ significantly. The aim of the present investigation was to assess whether the size of two commonly used vaginal probes influences measurement of PFM strength. Methods: Twenty female physical therapy students, mean age 25.1 years (range 21-38), participated in the study. All were able to contract the PFM, as assessed by means of vaginal palpation and observation of inward movement of the perineum. Two measuring devices with different lengths and diameters, the Peritron and the Camtech, were used in order to assess vaginal squeeze pressure. Each woman performed six contractions with each apparatus. Results: Mean maximum squeeze pressure for the whole group with the Camtech was 19.7 cm H2O (95% CI: 16.5-22.9) and with the Peritron 36.5 cm H2O (95% CI: 31.7-41.3), P < 0.01. Nine women preferred the Camtech, four preferred the Peritron, and seven did not have any specific preferences. Conclusions: Measurements of vaginal squeeze pressure differ depending on the vaginal probe used. Results from published studies using various probes should, therefore, not be compared or combined in systematic reviews or meta-analyses. abstract_id: PUBMED:19495980 Multidirectional pelvic floor muscle strength measurement. Pelvic floor muscle (PFM) strength measurement provides useful information for the study of pelvic floor dysfunctions. Vaginal digital palpation, intravaginal pressure measurements, and the use of a dynamometric speculum represent currently available clinical methods for evaluating PFM strength. However, none of these methods provides a dynamic measurement of pelvic floor strength in multiple directions simultaneously. The aim of the present paper is to report the development and first measurement trial of a device that follows the vaginal canal morphology and is able to measure pelvic floor strength multidirectionally. abstract_id: PUBMED:19950758 Muscle strength measurement of pelvic floor in women by vaginal dynamometer Introduction: The pelvic floor is a mutually connected system that consists of muscles, connecting tissue and nerve components. Damage to any of these elements creates dysfunction, which manifests as stress urinary incontinence, prolapse of genital organs and faecal incontinence. Objective: The primary aim of this study was to present the possibility of objective assessment of pelvic floor muscle force in healthy and sick women using a newly designed instrument, the vaginal dynamometer, as well as to establish the correlation between the values of pelvic floor muscle force obtained by the vaginal dynamometer and the digital palpation method. Methods: The study included 90 female patients, age 20-58 years. One group consisted of healthy women (those who had given birth and those who had not), while the other consisted of sick women (who suffered from incontinence or prolapse of genital organs, operated on or not). The pelvic floor muscle strength of every woman was measured with a newly constructed device for measuring and monitoring of the pelvic floor muscle force in women, the vaginal dynamometer.
Dynamometer readings were then compared with validated clinical digital palpation (palpation with two fingers), rated on the digital pelvic assessment rating scale for muscle contractions. The vaginal dynamometer consists of a redesigned speculum which is inserted into the vagina and a sensor for measuring the force. Results: A statistically significant linear correlation was found between the muscle force measured with the vaginal dynamometer and the ratings produced by digital palpation (r = 0.92; p < 0.001). The mean muscle force of the healthy women measured by the vaginal dynamometer was 1.44 +/- 0.38 daN, and that of the sick women was 0.78 +/- 0.31 daN (t = 8.89 for df = 88; p < 0.001). The mean rating produced by digital palpation in healthy women was 4.10 (95% confidence limits 3.83-4.37), while the value in sick women was 2.41 (95% confidence limits 2.10-4.16) (Z = -6.38; p < 0.001). Conclusion: The vaginal dynamometer has been presented as an attempt to overcome the limitations of previously described techniques for muscle force measurement. The application of the vaginal dynamometer in clinical practice makes objective and numerical assessment of pelvic floor muscle force possible, independent of the subjective assessment of the examiner. The use of this instrument enables not only the diagnosis of pelvic floor muscle problems in women, but also objective monitoring of rehabilitation outcomes in gynaecological medicine. abstract_id: PUBMED:28025685 Comparison of pelvic floor muscle strength in nulliparous women and those with normal vaginal delivery and cesarean section. Introduction And Hypothesis: Weakness of the pelvic floor is quite common among women, and may occur following childbirth. The aim of this study was to compare pelvic floor muscle strength in women of reproductive age who were nulliparous or who had a cesarean section or normal vaginal delivery. Methods: In this cross-sectional study, 341 women including 96 nulliparous women, 73 women with a history of normal vaginal delivery with and without episiotomy, and 172 women with a history of elective or emergency cesarean section were recruited randomly from public health centers in Ahvaz, Iran. Data were collected using a sociodemographic questionnaire, and a checklist was used to record weight, height, body mass index, and pelvic floor muscle strength. Pelvic floor muscle strength was measured with the woman in the lithotomy position using a Peritron 9300 V perineometer. Data were analyzed using one-way analysis of variance, the least significant difference test and the chi-squared test. Results: The nulliparous women had the highest mean pelvic muscle strength (55.62 ± 15.86 cm H2O). Women who had vaginal delivery with episiotomy had the lowest pelvic muscle strength (32.71 ± 14 cm H2O). In nulliparous women pelvic floor muscle strength was higher than in women who had normal vaginal delivery with episiotomy (p < 0.001), but was not significantly different from that in women with normal vaginal delivery without episiotomy or in women with cesarean section (elective or emergency, p = 0.245). Conclusions: Nulliparous women had the highest pelvic floor muscle strength and there was no significant difference in pelvic floor muscle strength between women with normal vaginal delivery and those with cesarean section. abstract_id: PUBMED:31422064 Pelvic floor muscle strength and the incidence of pelvic floor disorders after vaginal and cesarean delivery.
Background: Pelvic floor disorders (including urinary and anal incontinence and pelvic organ prolapse) are associated with childbirth. Injury to the pelvic floor muscles during vaginal childbirth, such as avulsion of the levator ani muscle, is associated with weaker pelvic floor muscle strength. As weak pelvic floor muscle strength may be a modifiable risk factor for the later development of pelvic floor disorders, it is important to understand how pelvic floor muscle strength affects the course of pelvic floor disorders over time. Objective: To investigate the association between pelvic floor muscle strength and the incidence of pelvic floor disorders, and to identify maternal and obstetrical characteristics that modify the association. Materials And Methods: This is a longitudinal study investigating pelvic floor disorders after childbirth. Participants were recruited 5-10 years after their first delivery and were assessed for pelvic floor disorders annually for up to 9 years. Stress incontinence, overactive bladder, and anal incontinence were assessed at each annual visit using the Epidemiology of Prolapse and Incontinence Questionnaire. Pelvic organ prolapse was assessed on physical examination, and was defined as descent of the vaginal walls or cervix beyond the hymen during forceful Valsalva. The primary exposure of interest was pelvic floor muscle strength, defined as the peak pressure during a voluntary pelvic muscle contraction (measured with a perineometer). The relationship between pelvic floor muscle strength and the cumulative incidence (time to event) of each pelvic floor disorder was evaluated using lognormal models, stratified by vaginal vs cesarean delivery. The relative hazard for each pelvic floor disorder (among those women free of the disorder at enrollment and thus more than 5-10 years from first delivery) was estimated using semiparametric proportional hazard models as a function of delivery mode, pelvic floor muscle strength, and other covariates. Results: Of 1143 participants, the median age was 40 (interquartile range, 36.6-43.7) years, and 73% were multiparous. On perineometry, women with at least 1 vaginal delivery were more likely to have a low peak pressure, defined as <20 cm H2O (243 of 588 women with at least 1 vaginal delivery vs 107 of 555 women who delivered all of their children by cesarean delivery, P < .001). Among women who had at least 1 vaginal delivery, a pelvic floor muscle strength of <20 cm H2O was associated with a shorter time to event for stress incontinence (time ratio, 0.67; 95% confidence interval, 0.50-0.90), overactive bladder (time ratio, 0.67; 95% confidence interval, 0.51-0.86), and pelvic organ prolapse (time ratio, 0.76; 95% confidence interval, 0.65-0.88). No such association was found among women who delivered all of their children by cesarean delivery. Among women with at least 1 vaginal delivery and considering only pelvic floor disorders that developed during study observation (5-10 years after the first delivery), and controlling for maternal characteristics (body mass index and genital hiatus), women who had a peak pressure of <20 cm H2O had hazard ratios (relative to ≥20 cm H2O) of 1.16 (95% confidence interval, 0.74-1.81) for stress incontinence, 1.27 (95% confidence interval, 0.78-2.05) for overactive bladder, and 1.43 (95% confidence interval, 0.99-2.07) for pelvic organ prolapse.
Among women who delivered all of their children by cesarean delivery, there was no association between muscle strength and relative hazard of pelvic floor disorders when controlling for maternal characteristics. Conclusion: After vaginal delivery, but not cesarean delivery, the cumulative incidence of pelvic organ prolapse, stress incontinence, and overactive bladder is associated with pelvic muscle strength, but the associations attenuate when adjusting for genital hiatus and body mass index. abstract_id: PUBMED:36518785 Portable Dynamometer-Based Measurement of Pelvic Floor Muscle Force. Objective: In ongoing efforts to improve women's quality of life, collaborative projects between rehabilitation and engineering continue to be pursued. Using knowledge of pelvic floor muscle (PFM) physiology, assessment and training methods are developed to reduce lower urinary tract symptoms such as urinary incontinence. Therefore, this paper covers the design and implementation of a portable vaginal dynamometer. Methods: A PFM probe is designed, 3D printed, assembled, and tested in ten women to assess its acceptability and usability. The feedback from the usability study is used to optimize the PFM probe design. A vaginal dynamometer is developed based on the designed PFM probe, then tested for linearity, repeatability, hysteresis, noise and heat effect, and power consumption. The variability between the different produced PFM probe prototypes is evaluated. Results: Force measurements are made using a load cell. Wireless communication is performed through a Bluetooth low energy transceiver v5.0, with a corresponding interface on both computer and smartphone. The device operates at a 3.3 V supply and achieves a power consumption of 49.5 mW in operating mode. Two PFM probe sizes are designed to accommodate different vaginal hiatus sizes, based on usability study feedback. The proposed system allows the physiotherapist to wirelessly monitor variation in pelvic floor muscle force during assessment and/or training. Discussion/conclusion: The testing results showed that the newly designed system has the potential to measure PFM function in functional conditions such as the standing position. abstract_id: PUBMED:19656514 Vaginal probe transducer: characterization and measurement of pelvic-floor strength. The pelvic-floor muscles (PFM) play an important role in urinary and fecal continence. Several investigators have studied the PFM using intra-vaginal pressure measurements, but their methods have not been validated. We describe the characteristics of a probe transducer developed to measure PFM strength, in terms of its dynamic response and the effects of temperature variation. This probe transducer was used to evaluate changes in the contraction strength of pelvic muscles in a group of patients who participated in a PFM training program. Experiments allowed the identification of the probe's characteristics at different temperatures, definition of a calibration equation, and measurements of the dynamic response to a pressure pulse. Evaluation of patients before and after the PFM training program showed significant differences in the peak pressure achieved during the contraction (p<0.001) and in pressure-rise time (p<0.01). The tests performed with the probe allowed the characterization of the proposed transducer, and the intra-vaginal pressure measurements in volunteers undergoing a PFM training program allowed a quantitative evaluation of the PFM strength.
abstract_id: PUBMED:26522102 Pelvic reconstruction improves pelvic floor strength in pelvic organ prolapse patients. Objective: To evaluate pelvic floor muscle strength after the modified pelvic reconstruction procedure for pelvic organ prolapse (POP). Materials And Methods: Patients were assigned to two groups consisting of 37 patients diagnosed with POP and undergoing modified pelvic reconstruction (reconstruction group), and 30 patients admitted to our hospital during the same period for other surgical indications (control group). Vaginal palpation of pelvic floor muscle strength was performed according to the modified Oxford grading system in both groups before surgery, and again in the 3rd month after surgery in the reconstruction group. A comparative study was performed to evaluate the differences between the two groups and the improvement of pelvic floor muscle strength in the reconstruction group. Results: The pelvic floor muscle strength was significantly improved postoperatively when compared with preoperative results in the reconstruction group (t = -17.478, p < 0.001). However, both pre- and postoperative muscle strength in the reconstruction group was significantly lower than in the control group (pre: χ² = 63.293, p < 0.001; post: χ² = 31.550, p < 0.001). Conclusion: The modified pelvic reconstruction procedure could improve pelvic floor muscle strength in POP patients, which remains lower when compared with the normal population. Pelvic floor muscle strength should be included in the assessment of surgical outcomes in POP. abstract_id: PUBMED:28455942 The evaluation of pelvic floor muscle strength in women with pelvic floor dysfunction: A reliability and correlation study. Aims: The purposes of this study were: (i) to evaluate the reliability of vaginal palpation, vaginal manometry, vaginal dynamometry, and surface (transperineal) electromyography (sEMG) when evaluating pelvic floor muscle (PFM) strength and/or activation; and (ii) to determine the associations among PFM strength measured using these assessments. Methods: One hundred and fifty women with pelvic floor disorders participated on one occasion, and 20 women returned for the same investigations by two different raters on 3 different days. At each session, PFM strength was assessed using palpation (both the modified Oxford Grading Scale and the Levator ani testing), manometry, and dynamometry; and PFM activation was assessed using sEMG. Results: The interrater reliability of manometry, dynamometry, and sEMG (both root-mean-square [RMS] and integral average) was high (Lin's Concordance Correlation Coefficient [CCC] = 0.95, 0.93, 0.91, 0.86, respectively), whereas the interrater reliability of both palpation grading scales was low (Cohen's Kappa [k] = 0.27-0.38). The intrarater reliability of manometry (CCC = 0.96) and dynamometry (CCC = 0.96) was high, whereas intrarater reliability of both palpation scales (k = 0.78 for both) and of sEMG (CCC = 0.79 vs 0.80 for RMS vs integral average) was moderate. The Bland-Altman plot showed good inter- and intrarater agreement, with little random variability for all instruments. The correlations among palpation, manometry, and dynamometry were moderate (coefficient of determination [r²] ranged from 0.52 to 0.75); however, transperineal sEMG amplitude was only weakly correlated with all measures of strength (r² = 0.23-0.30).
Conclusions: Manometry and dynamometry are more reliable tools than vaginal palpation for the assessment of PFM strength in women with pelvic floor disorders, especially when different raters are involved. The different PFM strength measures used clinically are moderately correlated, whereas PFM activation recorded using transperineal sEMG is only weakly correlated with PFM strength. Results from perineal sEMG should not be interpreted in the context of reporting PFM strength. abstract_id: PUBMED:29566564 Association between preterm labour and pelvic floor muscle function. We hypothesised that the pressure on the cervix increases with advancing gestation and that this may lead to cervical shortening and cause preterm labour in women with weak pelvic floor muscles. The aim of this prospective study was to measure vaginal resting pressure and pelvic floor muscle strength in the first trimester of pregnancy and to investigate their effects on labour. A study was conducted on pregnant women with a low risk for preterm birth. The pelvic floor muscle strength and vaginal resting pressure were assessed in 320 pregnant women in their first trimester with a vaginal pressure measurement device. Fifty-two pregnant women were hospitalised for tocolytic therapy because of spontaneous preterm labour. Thirty-two of them (10.2%) had a preterm delivery despite the tocolytic therapy. Both the vaginal resting pressure (p = .009, 95%CI: 0.8; 5.9) and the pelvic floor muscle strength (p = .01, 95%CI: 3.5; 13.1) were significantly lower in the women with preterm labour. Impact statement What is already known on this subject? The pelvic floor muscles have an essential role in continence and provide support to the pelvic organs. They also have an impact on labour. The pelvic floor muscles should distend to allow the passage of the foetus during labour. The rotation and flexion of the foetal head is due to the pelvic floor resistance. The effect of a vaginal birth on the pelvic floor's function is readily understood. On the other hand, the effect of the pelvic floor muscle function on labour is still controversial. What do the results of this study add? This prospective study showed that there is a negative association between pelvic floor muscle strength and preterm labour. This is the first clinical study indicating that weak pelvic floor muscles may cause a preterm labour. What are the implications of these findings for clinical practice and/or further research? Pelvic floor physical therapy may be an alternative preventive strategy to reduce the risk of a spontaneous preterm birth. Answer: Yes, the size of the vaginal probe does affect the measurement of pelvic floor muscle strength. A study found that measurements of vaginal squeeze pressure differed depending on the vaginal probe used, with two different probes yielding significantly different mean maximum squeeze pressures (PUBMED:15683371). This indicates that results from studies using various probes should not be compared or combined in systematic reviews or meta-analyses due to the variation in measurements caused by the probe size.
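Several of the abstracts in this item (e.g. PUBMED:28455942) summarize agreement between raters or devices with Lin's concordance correlation coefficient and Bland-Altman limits of agreement. A minimal Python sketch of both statistics follows; the paired squeeze-pressure readings are hypothetical, not data from any of the cited studies.

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between two raters/devices."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, ddof=1)[0, 1]  # sample covariance
    return 2 * sxy / (x.var(ddof=1) + y.var(ddof=1) + (x.mean() - y.mean()) ** 2)

def bland_altman_limits(x, y):
    """Mean difference (bias) and 95% limits of agreement (Bland-Altman)."""
    d = np.asarray(x, float) - np.asarray(y, float)
    bias, sd = d.mean(), d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired readings (cm H2O) from two raters on the same women.
rater1 = [22, 35, 18, 41, 29, 33, 25, 38]
rater2 = [24, 33, 20, 43, 27, 35, 24, 40]
print(f"CCC = {lins_ccc(rater1, rater2):.3f}")
bias, (lo, hi) = bland_altman_limits(rater1, rater2)
print(f"bias = {bias:.2f}, 95% LoA = ({lo:.2f}, {hi:.2f})")
```

Unlike a plain Pearson correlation, the CCC penalizes systematic offsets between the two sets of readings, which is why it is preferred for agreement studies like these.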
Instruction: Does work stress predict insomnia? Abstracts: abstract_id: PUBMED:35414296 Work-Related Stress, Psychosocial Resources, and Insomnia Symptoms Among Older Black Workers. Objectives: To examine the association between work-related stress (job lock and job stress appraisal) and insomnia symptoms among older Black workers, as well as the extent to which psychosocial resources (mastery, social support, and religious involvement) mediate or moderate this association. Methods: This study uses Ordinary Least Squares regression analysis and data from the Health and Retirement Study (HRS) on Black workers aged 51 and older (N = 924). Results: Job lock due to financial reasons and job stress appraisal are associated with increased insomnia symptoms among older Black workers. Religious attendance buffers the harmful effects of financial job lock on sleep quality, while religiosity exacerbates the effects of job stress on insomnia symptoms. Discussion: Taken together, findings underscore the utility of the Stress Process Model for understanding diverse stress and sleep experiences in later life. Furthermore, findings have the potential to inform efficacious policies for reducing work-related stress and mitigating its harmful consequences. abstract_id: PUBMED:38288275 The relationship of shift work disorder with symptoms of depression, anxiety, and stress. Shift workers commonly suffer from disturbed sleep, which is known to affect mental health in other populations. Shift work disorder (SWD) is characterized by complaints of insomnia and/or excessive daytime sleepiness temporally associated with working non-standard schedules that occur during the usual time for sleep. Few studies have explored the extent to which workers with vs. without SWD experience worse mental health. We administered the Shift Work Disorder Screening Questionnaire to 60 adults engaged in various shift work schedules to categorize workers as being at high or low risk for SWD. Mental health outcomes were measured using the Depression Anxiety Stress Scale-21 (DASS-21). Linear regression was performed for each DASS-21 subscale, adjusting for age, sex, shift type, sleep duration, and frequency of alcohol use. Most participants (55 %) were at high risk for SWD. High-risk participants had higher depressive symptoms than low-risk participants, B = 3.59, 95 % CI [0.54, 6.65], p = .02. The estimated value for those at high risk for SWD corresponded to clinically significant mild depressive symptoms (M = 13.43), compared to those at low risk (M = 9.84). High risk for SWD was marginally associated with increased stress symptoms, B = 2.48, 95 % CI [-0.06, 5.02], p = .06. Our findings add to the body of evidence that SWD is associated with poor mental health outcomes. Providing interventions specific to the sleep impacts of SWD, including tailored cognitive behavioral therapy for insomnia, may improve shift workers' mental health. abstract_id: PUBMED:15125800 Does work stress predict insomnia? A prospective study. Objectives: The aim of this study was to investigate the one-year development of self-reported sleep problems in workers with no sleep problem at baseline, and to evaluate the role of work stress in the etiology of a new episode. Design: A prospective design was employed. Methods: A total of 816 employees with no sleeping problems during the past three months completed a baseline questionnaire concerning their general health, working hours and working conditions.
One year later they were contacted again to ascertain whether they were experiencing problems sleeping. Results: At the follow-up, the three-month point prevalence of self-reported sleep problems for this population was 14.3%. While controlling for age and gender, it was found that irregular working hours and general health were not significantly related to the development of a new episode of sleeping problems. However, stress in the form of a 'poor' psychosocial work environment increased the risk of a new episode by more than twofold (odds ratio 2.15). The attributable fraction suggested that eliminating stress could prevent 53% of the cases. Conclusion: In a population of employees with no reported sleeping problems, 14.3% developed a sleeping problem during the coming year. Even when controlling for possible confounders, stress in the form of a 'poor' psychosocial work environment doubled the risk of developing a sleep problem. abstract_id: PUBMED:30396198 Structural Equation Model for Sleep Quality of Female Shift Work Nurses. Purpose: This study aimed to develop and test a structural model for sleep quality in female shift work nurses. The hypothetical model was constructed on the basis of Spielman's 3P model of insomnia and previous research related to the sleep quality of shift nurses. Methods: This cross-sectional study used structural equation modeling and recruited 285 female shift work nurses from four general and university hospitals with over 300 beds located in C and J cities in Gyeongsangnamdo. Data were collected from September 27 to October 20, 2016, and then analyzed using descriptive statistics, Pearson's correlation, and structural equation modeling. The study used SPSS/Win 18.0 and AMOS 18.0 in processing the data. Results: The final model showed good fit to the empirical data: χ²/df=2.19, SRMR=.07, RMSEA=.07, AGFI=.85, TLI=.91, CFI=.93, GFI=.89, NFI=.87. The factors that influenced sleep quality were sleep hygiene (β=.32), perceived shift work status (β=-.16), stress response (β=.16), shift work experience (β=.15), perceived health status (β=-.14), and circadian rhythm (β=-.13), together explaining 36.0% of the variance. Conclusion: The model of sleep quality of the shift work nurses constructed in this study is recommended as a model to understand and predict the sleep quality of shift work nurses. The results suggest that strategies for improving the sleep quality of shift work nurses should focus on sleep hygiene, perceived health status, stress response, circadian rhythm, perceived shift work status, and shift work experience. abstract_id: PUBMED:26174845 A cross-sectional study on the relationships among the polymorphism of period2 gene, work stress, and insomnia. Purpose: Work-related stress factors are commonly considered to increase the risk of insomnia. However, the association between the circadian-rhythm gene period2 (per2) and insomnia remains unknown. This paper aimed to examine the relationships among the polymorphism in rs7602358 of the Per2 gene, work stress, and insomnia. Methods: In this cross-sectional study, a total of 364 Chinese workers were recruited, of whom 218 had insomnia as assessed by the Athens Insomnia Scale (AIS-5). Work stress was assessed using the General Job Stress Questionnaire (GJSQ). Per2 genotypes were determined by SNaPshot technology. Unconditional logistic regression analysis was used to examine the interaction between the polymorphism of Per2 and work stress in insomnia.
Results: The genotype AC or allele C of Per2 significantly increased the risk of insomnia (OR, 5.41; 95 % CI, 2.23-13.1 and OR, 3.29; 95 % CI, 1.55-6.98, respectively). Compared to low work stress, high work stress was associated with a higher risk of insomnia (OR, 2.73; 95 % CI, 1.77-4.22). A significant interaction between Per2 and work stress in the risk of insomnia was found. Compared to workers with low work stress and genotype AA, those with high work stress and genotype AC had a higher risk of insomnia (OR, 15.12; 95 % CI, 3.46-66.09). Conclusions: The effect of genotype AC or allele C of Per2 on insomnia was relatively stronger than that of high work stress, suggesting that an individual's susceptibility should be taken into consideration when intervening to control insomnia in workers. abstract_id: PUBMED:34075016 Work, family and nurses' perception about their own health: relationship with breast cancer and shift work. Objective: Breast cancer is a commonly diagnosed disease in nurses that, in recent years, has been linked to shift work and night work. Different components of work stress also have an impact on nurses' health and on work, family and social conciliation. The objective of this research was to analyze the family and working characteristics of Spanish nurses who perform shift work (including night shifts) in search of possible associations with manifestations of psychosomatic stress and the risk of breast cancer. Methods: A cross-sectional descriptive study was conducted through a virtual questionnaire in a sample of 966 Registered Nurses in Spain between December 2019 and November 2020. A descriptive analysis of sociodemographic and occupational variables was performed, and statistically significant differences and associations were contrasted by estimating risks and confidence intervals. Results: The number of night shifts throughout life and the number of years worked were statistically significant for the association with breast cancer. In addition, other psychosomatic manifestations such as insomnia, palpitations or extreme tiredness were highlighted. Among the aspects most valued for generating job satisfaction, relationships with co-workers were underlined. Conclusions: Rotating shift work can cause multiple clinical alterations that could lead to problems related to family conciliation, self-care or employment wellness. It is important to control the physical, psychological and emotional overload of nurses. abstract_id: PUBMED:31062863 Does removal of work stress explain improved sleep following retirement? The Finnish Retirement and Aging study. Study Objectives: Relief from work stress has been hypothesized to explain improvements in sleep duration and quality following retirement, but this has not been confirmed with longitudinal studies. By using repeat sleep data, we examined the role of removal of work-related stressors in changes in sleep at retirement. Methods: The study population consisted of 2,053 participants from the Finnish Retirement and Aging study. Participants' sleep duration, sleep difficulties (difficulties falling asleep, difficulties maintaining sleep, waking up too early in the morning, nonrestorative sleep), daytime tiredness, and sleep loss due to worry were assessed using surveys conducted once a year before and after retirement (average number of repeat surveys 3.5 [range 2-5] per participant).
We used Poisson regression with generalized estimating equations to examine the associations between work-related stressors (job strain, low work time control, effort-reward imbalance, and organizational injustice) and changes in sleep at retirement. Results: An increase in sleep duration and decrease in waking up too early in the morning, nonrestorative sleep, daytime tiredness, and sleep loss due to worry were observed shortly after retirement. No systematic associations between the work-related stressors and changes in sleep characteristics were observed. A higher number of work-related stressors before retirement was not associated with a greater magnitude of favorable changes in any of the postretirement sleep characteristics investigated. Conclusions: This longitudinal study suggests that perceived sleep improves shortly after retirement and that these changes are mainly driven by factors other than relief from work stress. abstract_id: PUBMED:38027822 Occupational stress and mental health among civil servants during COVID-19 in China: The mediation effect of social support and work-family conflict. Mental health problems of various populations during the COVID-19 pandemic have received considerable attention, but there is little research on the mental health of Chinese civil servants. The present study investigated occupational stress, mental health problems (i.e., anxiety, depression, and insomnia), social support, and work-family conflict in Chinese civil servants during the COVID-19 pandemic. A total of 327 civil servants in Wenzhou city, China, participated in an online survey, which collected data on socio-demographic characteristics, occupational stress, mental health problems (i.e., anxiety, depression, and insomnia), social support, and work-family conflict. Data were analyzed using non-parametric tests and a multiple mediation model. Self-reported risk at work and support from mental health workers were relatively low. Anxiety, depression, and insomnia prevalence were 49.24 %, 47.1 %, and 20.48 %, respectively. Those who held higher ranks at work had lower levels of anxiety and depression. In addition, those who rated their health status poorly had more mental health problems. Social support and work-family conflict mediated the relationship between occupational stress and mental health problems significantly. Stress management training, organizational-level improvement in work arrangements, and professional mental health services are warranted for Chinese civil servants during the pandemic. abstract_id: PUBMED:32895348 Secondary Data Analysis on the Factors Influencing Premenstrual Symptoms of Shift Work Nurses: Focused on the Sleep and Occupational Stress. Purpose: This study aimed to examine premenstrual symptoms (PMS) of shift nurses and identify the association between PMS, sleep, and occupational stress. Methods: This study was a secondary analysis of data from the Shift Work Nurse's Health and Turnover study. The participants were 258 nurses who were working shifts, including night shifts. PMS, sleep patterns (sleep time and sleep time variability), sleep quality, and the occupational stress of each participant were measured using the Moos Menstrual Distress Questionnaire, a sleep diary, an actigraph, the Insomnia Severity Index, and the Korean Occupational Stress Scale, respectively.
Data were analyzed using SPSS 23 and STATA 15.1 to obtain descriptive statistics, Pearson's correlation coefficients, multiple linear regression with generalized estimating equations (GEE) and Baron and Kenny's mediating analysis. Results: The average PMS score, average sleep time, average sleep time variability, average sleep quality score, and average occupational stress score of the participants were 53.95 ± 40.45, 7.52 ± 0.89 hours, 32.84 ± 8.43%, 12.34 ± 5.95, and 49.89 ± 8.98, respectively. A multiple linear regression analysis with GEE indicated that sleep time variability (B = 0.86, p = .001) and sleep quality (B = 2.36, p < .001) had negative effects on nurses' PMS. We also found that sleep quality had a complete mediating effect in the relationship between occupational stress and PMS. Conclusion: These findings indicate that both sleep time variability and sleep quality are important factors associated with PMS among shift work nurses. To improve shift nurses' PMS status, strategies are urgently needed to decrease sleep time variability and increase sleep quality. abstract_id: PUBMED:32624561 Shift work sleep disorder and job stress in shift nurses: implications for preventive interventions. Background: A growing literature has revealed a relationship between shift work, including night shifts, and disturbance of the sleep-wake cycle, leading to insomnia and/or increased daytime sleepiness in shift nurses; recent findings showed an association between shift work sleep disorders and distress, work accidents, decreased job performance and, consequently, lower quality of health service provision and lower standards of care. Objectives: To analyze, across genders, how shift nurses experience shift work sleep disorders and job stress. Methods: A cross-sectional study was performed following the STROBE Statement. The Italian version of the Job Content Questionnaire, the Bergen Insomnia Scale and the Epworth Sleepiness Scale were administered to the registered nurses employed in three Departments of General Practice and Elderly Care Medicine. Results: No significant association was found between high job strain and insomnia and daytime sleepiness. Significantly more women than men experienced high job strain, insomnia and daytime sleepiness; among women the level of social support was significantly and negatively associated with insomnia and daytime sleepiness. Conclusions: The findings of this study suggest that interventions aimed at preventing shift work sleep disorder and job stress in shift nurses should incorporate the assessment of social support across genders. Moreover, longitudinal studies are required to evaluate the effectiveness of interventions targeting social support to minimize the occurrence of insomnia and daytime sleepiness in shift nurses. Answer: Yes, work stress does predict insomnia. A prospective study found that stress in the form of a 'poor' psychosocial work environment more than doubled the risk of developing a sleep problem among employees with no reported sleeping problems at baseline (PUBMED:15125800). Additionally, work-related stress factors, such as job lock due to financial reasons and job stress appraisal, are associated with increased insomnia symptoms among older Black workers (PUBMED:35414296). Shift work disorder (SWD), characterized by complaints of insomnia and/or excessive daytime sleepiness associated with non-standard work schedules, is also linked to worse mental health outcomes, including higher depressive symptoms (PUBMED:38288275).
Furthermore, the polymorphism of the period2 gene, when combined with work stress, significantly increases the risk of insomnia, suggesting that individual susceptibility should be considered when intervening to control insomnia among workers (PUBMED:26174845). In nurses, shift work can cause multiple clinical alterations, including insomnia, which could lead to problems related to family conciliation, self-care, or employment wellness (PUBMED:34075016). Moreover, a study on female shift work nurses found that factors influencing sleep quality included sleep hygiene, perceived shift work status, stress response, shift work experience, perceived health status, and circadian rhythm (PUBMED:30396198). Another study indicated that sleep quality had a complete mediating effect in the relationship between occupational stress and premenstrual symptoms (PMS) among shift work nurses, highlighting the importance of sleep quality in the context of work stress (PUBMED:32895348). Lastly, a study on shift nurses suggested that interventions aimed at preventing shift work sleep disorder and job stress should incorporate the assessment of social support across genders (PUBMED:32624561). Overall, these findings suggest a clear association between work stress and the development or exacerbation of insomnia symptoms.
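The prospective study cited in this answer (PUBMED:15125800) reports an odds ratio of 2.15 for a 'poor' psychosocial work environment and an attributable fraction of 53%. The sketch below shows how an odds ratio with a Woolf confidence interval and Levin's population attributable fraction are typically computed; the 2x2 cell counts and the 40% exposure prevalence are assumptions for illustration, not the study's data, and the odds ratio is used here as a rough stand-in for the relative risk.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR for a 2x2 table [[exposed cases a, exposed non-cases b],
    [unexposed cases c, unexposed non-cases d]] with a Woolf 95% CI."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    return or_, (or_ * math.exp(-z * se), or_ * math.exp(z * se))

def levin_paf(p_exposed: float, rr: float) -> float:
    """Levin's population attributable fraction, given exposure prevalence
    p_exposed and relative risk rr (OR used as an approximation here)."""
    return p_exposed * (rr - 1) / (p_exposed * (rr - 1) + 1)

# Hypothetical 2x2 counts (high work stress vs new sleep problem) -- not
# the study's actual cell counts.
a, b, c, d = 45, 155, 35, 265
or_, (lo, hi) = odds_ratio_ci(a, b, c, d)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
print(f"PAF ~= {levin_paf(p_exposed=0.4, rr=or_):.0%}")  # assumed prevalence
```

Because the outcome here is fairly common, the OR overstates the relative risk somewhat, so an attributable fraction derived this way should be read as approximate.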
Instruction: The Singapore impaired glucose tolerance follow-up study: does the ticking clock go backward as well as forward? Abstracts: abstract_id: PUBMED:14578234 The Singapore impaired glucose tolerance follow-up study: does the ticking clock go backward as well as forward? Objective: To (1) document the change in glucose tolerance for subjects with normal glucose tolerance (NGT) and impaired glucose tolerance (IGT) over time, (2) identify baseline factors associated with worsening of glucose tolerance, and (3) determine whether cardiovascular disease (CVD) risk factors associated with IGT improved in tandem with glucose tolerance. Research Design: Subjects with IGT and NGT (matched for age, sex, and ethnic group) were identified from a cross-sectional survey conducted in 1992. Subjects with IGT (297) and NGT (298) (65.0%) were reexamined in 2000. Glucose tolerance (assessed by 75-g oral glucose tolerance test), anthropometric data, serum lipids, blood pressure, and insulin resistance were determined at baseline and at the follow-up examination. Results: For NGT subjects, 14.0% progressed to IGT and 4.3% to diabetes over 8 years. For IGT subjects, 41.4% reverted to NGT, 23.0% remained impaired glucose tolerant, and 35.1% developed diabetes. Obesity, hypertriglyceridemia, higher blood pressure, increased insulin resistance, and lower HDL cholesterol at baseline were associated with worsening of glucose tolerance in both IGT and NGT subjects. Those with IGT who reverted to NGT remained more obese and had higher blood pressure than those with NGT in both 1992 and 2000. However, serum triglyceride, HDL cholesterol, and insulin resistance values in 2000 became indistinguishable from those of subjects who maintained NGT throughout the study period. Conclusions: Some, but not all, CVD risk factors associated with IGT and with the risk of future diabetes normalize when glucose tolerance normalizes. Continued surveillance and treatment in subjects with IGT, even after they revert to NGT, may be important in the prevention of CVD. abstract_id: PUBMED:25512305 Circadian rhythms, the molecular clock, and skeletal muscle. Circadian rhythms are the approximate 24-h biological cycles that function to prepare an organism for daily environmental changes. They are driven by the molecular clock, a transcriptional:translational feedback mechanism that in mammals involves the core clock genes Bmal1, Clock, Per1/2, and Cry1/2. The molecular clock is present in virtually all cells of an organism. The central clock in the suprachiasmatic nucleus (SCN) has been well studied, but the clocks in the peripheral tissues, such as heart and skeletal muscle, have just begun to be investigated. Skeletal muscle is one of the largest organs in the body, comprising approximately 45% of total body mass. More than 2300 genes in skeletal muscle are expressed in a circadian pattern, and these genes participate in a wide range of functions, including myogenesis, transcription, and metabolism. The circadian rhythms of skeletal muscle can be entrained both indirectly through light input to the SCN and directly through time of feeding and activity. It is critical for the skeletal muscle molecular clock not only to be entrained to the environment but also to be in synchrony with rhythms of other tissues. When circadian rhythms are disrupted, the observed effects on skeletal muscle include fiber-type shifts, altered sarcomeric structure, reduced mitochondrial respiration, and impaired muscle function.
Furthermore, there are detrimental effects on metabolic health, including impaired glucose tolerance and insulin sensitivity, to which skeletal muscle, as a key metabolic tissue, likely contributes. These data indicate a critical role for skeletal muscle circadian rhythms in both muscle and systemic health. Future research is needed to determine the mechanisms of molecular clock function in skeletal muscle, identify the means by which skeletal muscle entrainment occurs, and provide a stringent comparison of circadian gene expression across the diverse tissue system of skeletal muscle. abstract_id: PUBMED:20562852 Disruption of the clock components CLOCK and BMAL1 leads to hypoinsulinaemia and diabetes. The molecular clock maintains energy constancy by producing circadian oscillations of rate-limiting enzymes involved in tissue metabolism across the day and night. During periods of feeding, pancreatic islets secrete insulin to maintain glucose homeostasis, and although rhythmic control of insulin release is recognized to be dysregulated in humans with diabetes, it is not known how the circadian clock may affect this process. Here we show that pancreatic islets possess self-sustained circadian gene and protein oscillations of the transcription factors CLOCK and BMAL1. The phase of oscillation of islet genes involved in growth, glucose metabolism and insulin signalling is delayed in circadian mutant mice, and both Clock and Bmal1 (also called Arntl) mutants show impaired glucose tolerance, reduced insulin secretion and defects in size and proliferation of pancreatic islets that worsen with age. Clock disruption leads to transcriptome-wide alterations in the expression of islet genes involved in growth, survival and synaptic vesicle assembly. Notably, conditional ablation of the pancreatic clock causes diabetes mellitus due to defective beta-cell function at the very latest stage of stimulus-secretion coupling. These results demonstrate a role for the beta-cell clock in coordinating insulin secretion with the sleep-wake cycle, and reveal that ablation of the pancreatic clock can trigger the onset of diabetes mellitus. abstract_id: PUBMED:25457619 Clock genes, pancreatic function, and diabetes. Circadian physiology is responsible for the temporal regulation of metabolism to optimize energy homeostasis throughout the day. Disturbances in the light/dark cycle, sleep/wake schedule, or feeding/activity behavior can affect the circadian function of the clocks located in the brain and peripheral tissues. These alterations have been associated with impaired glucose tolerance and type 2 diabetes. Animal models with molecular manipulation of clock genes and genetic studies in humans also support these links. It has been demonstrated that the endocrine pancreas has an intrinsic self-sustained clock, and recent studies have revealed an important role of clock genes in pancreatic β cells, glucose homeostasis, and diabetes. abstract_id: PUBMED:17686888 Metabolic homeostasis in mice with disrupted Clock gene expression in peripheral tissues. The role of peripheral vs. central circadian rhythms and Clock in the maintenance of metabolic homeostasis and in aging was examined using ClockΔ19+MEL mice. These have preserved suprachiasmatic nucleus and pineal gland rhythmicity but arrhythmic Clock gene expression in the liver and skeletal muscle.
ClockΔ19+MEL mice showed fasting hypoglycemia in young-adult males, fasting hyperglycemia in older females, and substantially impaired glucose tolerance overall. ClockΔ19+MEL mice had substantially reduced plasma insulin and plasma insulin/glucose nocturnally in males and during a glucose tolerance test in females, suggesting impaired insulin secretion. ClockΔ19+MEL mice had reduced hepatic expression and loss of rhythmicity of gck, pfkfb3, and pepck mRNA, which is likely to impair glycolysis and gluconeogenesis. ClockΔ19+MEL mice also had reduced glut4 mRNA in skeletal muscle, and this may contribute to poor glucose tolerance. Whole body insulin tolerance was enhanced in ClockΔ19+MEL mice, however, suggesting enhanced insulin sensitivity. These responses occurred although the ClockΔ19 mutation did not cause obesity and reduced plasma free fatty acids while increasing plasma adiponectin. These studies on clock-gene disruption in peripheral tissues and metabolic homeostasis provide compelling evidence of a relationship between circadian rhythms and the glucose/insulin and adipoinsular axes. It is, however, premature to declare that clock-gene disruption causes the full metabolic syndrome. abstract_id: PUBMED:21918578 Adipokines and adipocyte function in Clock mutant mice that retain melatonin rhythmicity. ClockΔ19+MEL mutant mice, which retain melatonin rhythmicity but lack peripheral tissue rhythmicity, have impaired glucose tolerance, but reduced plasma free fatty acids, increased plasma adiponectin, and improved insulin sensitivity. Here, we report their response to a high-fat diet and adipocyte rhythmicity and function. The diet increased epigonadal fat weight similarly (twofold) in both wild-type and ClockΔ19+MEL mice. The ClockΔ19 mutation abolished rhythmicity of Per2, Rev-erbα and peroxisome proliferator-activated receptor-γ (Pparγ) mRNA in epigonadal fat, but not Bmal1 mRNA, and reduced Rev-erbα mRNA by 59 and 70% compared to the wild-type mice on the control and high-fat diets, respectively. The mutants had increased Adipoq mRNA expression in epigonadal fat (22%; P < 0.05) on a control diet, but showed no further change on a high-fat diet, and no change in Lep, Nampt or Retn mRNA on either diet. The ClockΔ19 mutation abolished rhythmicity of genes in epigonadal fat that contribute to plasma free fatty acids for mice on both diets, and increased Lipe mRNA expression in those on the high-fat diet. The persistent melatonin rhythm and reduced plasma free fatty acids in ClockΔ19+MEL mutants may contribute to their enhanced insulin sensitivity, ameliorate the extent of impaired glucose homeostasis, and protect against the adverse effects of a high-fat diet. abstract_id: PUBMED:11024582 The changing demography of diabetes mellitus in Singapore. Diabetes mellitus has been on the rise in Singapore as Singaporeans become more affluent, our lifestyles become more sedentary and our population ages rapidly. The prevalence of diabetes mellitus rose from 2% in 1975 to 4.7% in 1984, 8.6% in 1992 and 9.0% of adults 18-69 years old in 1998. Malay and Indian women and Indian men were at higher risk, with 14.3, 14.9 and 16.7% prevalence rates, respectively. A further 15% of the adult population have impaired glucose tolerance (IGT). Diabetes was a factor in 39.7% of strokes and in 9.3% of all deaths in Singapore, and is the sixth most common cause of death.
In the Diabcare Singapore 1998 Study, 91% of participants were diagnosed with Type 2 diabetes, with mean BMI of 25.1+/-4.4 kg/m². The incidence of Type 1 diabetes in childhood is 2.46 per 100,000 children 0-12 years of age, while Type 2 diabetes in childhood is an emerging problem. The prevalence of obesity (BMI >30 kg/m²) among persons aged 18-69 years rose to 6% in 1998, up from 5.1% in 1992. The prevalence of obesity was highest among the Malays (16.2%) followed by the Indians (12.2%) and the Chinese (3.8%). About 12% of schoolchildren are obese. Increased efforts must be made to change lifestyle and eating patterns in our society, reduce childhood obesity and encourage adults to make lifelong sports and exercise part of the Singaporean way of life. Singapore has one of the world's fastest ageing populations, and even now, 32.4% of Singaporeans 60-69 years of age have diabetes. We should consider screening for diabetes in obese schoolchildren and seek to improve quality of care for people with diabetes, including enlisting the aid of community organisations to improve access to diabetes education, monitoring, support and complications screening services. abstract_id: PUBMED:7137908 Research and development in obstetrics & gynaecology in the Department of Obstetrics & Gynaecology at the National University of Singapore. This is a review of the research and achievements of the Department of Obstetrics and Gynaecology, National University of Singapore since 1949. The research activities reviewed are Fertility Control, Subfertility, Reproductive Endocrinology, In-Vitro Fertilization, Trophoblastic Disease, Prostaglandins and Perinatal Medicine. The University Department has kept abreast with the Singapore norm of two-child families by providing the most sophisticated technology and expertise. The perinatal mortality and stillbirth rates have been reduced to 11 and 6 per thousand births. Prostaglandin research in the Department includes an investigation of the involvement of these substances in various physiological processes, pharmacological studies with different prostaglandins and development of clinical applications. Between 1974 and 1981 we studied 12 intrauterine contraceptive devices, both on a departmental basis and in conjunction with international agencies. The newer medicated devices have proved to offer significant advantages over the original inert plastic configuration. Metabolic changes in Singapore women on the oral pill and injection Depoprovera were studied. Impaired glucose tolerance and raised fasting total lipids were found in women on the pill. Only marginal changes in carbohydrate metabolism were found in women on injection Depoprovera. Hypercoagulation changes in the blood occurring in Singapore women varied with the dose of the synthetic or natural oestrogen medication. Initially, the fibrinolytic activity was enough to compensate for these changes; however, after two years there were signs of decompensation. Progestogens affect coagulation via their influence on liver function. Hypercoagulation changes also occurred in pregnancy and were of greater magnitude than with oestrogen or progestogen medication. In many obstetric disorders, including hydatidiform mole, there was evidence of intravascular coagulation. The introduction of endocrine function tests has greatly improved patient care in our Department, in particular in cases related to endocrine disorders.
Application of the immunoassays to basic research has shown that physiological levels of oestradiol exert a negative feedback effect on both FSH and LH secretion in men. The greater suppression of LH than of FSH secretion by pharmacological doses of estradiol is possibly due to different control mechanisms in the pituitary for the synthesis and release of both gonadotrophins. A physiological level of testosterone per se has a definite negative feedback effect on the secretion of LH but not on FSH. Pharmacological doses of the 5-alpha-reduced metabolites of testosterone have been shown to suppress both LH and FSH, indicating that some of the actions of testosterone could be mediated by these metabolites, of which 3-alpha-androstanediol and 3-beta-androstanediol are the more likely candidates. abstract_id: PUBMED:2221812 Diabetes mellitus and its chronic complications in Singapore: an increasing healthcare problem. We studied the prevalence of diabetes mellitus in Singapore and compared it to the study conducted ten years previously. A rise in prevalence rates from 2.0% to 4.7% was demonstrated. Impaired glucose tolerance (IGT) was studied for the first time, and a prevalence rate of 0.9% was found. Findings on chronic complications of diabetes were also reported. A high frequency of coronary heart disease and hypertension was detected in both diabetic and IGT subjects. Obesity and hyperlipidaemia were identified as important risk factors. This study demonstrates the scope and impact of diabetes mellitus as a major healthcare problem in Singapore. Strategies directed at prevention and control of this disease need to be implemented so as to check its rising trend. abstract_id: PUBMED:30127499 Screening for diabetes with HbA1c: Test performance of HbA1c compared to fasting plasma glucose among Chinese, Malay and Indian community residents in Singapore. The prevalence of diabetes in Singapore is high. Screening to facilitate early detection and intervention has been shown to be cost-effective. Current clinical practice guidelines in Singapore recommend screening with fasting plasma glucose (FPG), followed by an oral glucose tolerance test (OGTT) in those with FPG 6.1-6.9 mmol/L. Glycated haemoglobin A1c (HbA1c) has robust stability at ambient temperature, and can be performed on non-fasted capillary blood samples, making it an attractive potential alternative for screening. However, limitations of HbA1c include differential performance in different races, and its performance as a screening test has not been well characterized in Asian populations. This study compares HbA1c and FPG as diabetes screening modalities in 3540 community-dwelling Singapore residents of Chinese, Malay and Indian race to detect diabetes mellitus diagnosed based on blood glucose (FPG ≥ 7.0 mmol/L, 2 hr OGTT ≥ 11.1 mmol/L). The area under the receiver-operating-characteristic curve (AUC) was higher for FPG compared to HbA1c in the overall population and in age, race and age-race strata, but these differences were not statistically significant. HbA1c ≥ 7.0% identified 95% of individuals with diabetes mellitus, and the remainder had impaired glucose tolerance (IGT). An HbA1c cut-off of 6.1% had better sensitivity (0.825) than FPG at 6.1 mmol/L. The positive predictive value of HbA1c at 6.1% was 40-50% in different age-race combinations, with a negative predictive value of about 98%.
If follow-up screening with FPG is used, a lower cut-off at 5.6 mmol/L is appropriate in identifying people with pre-diabetes, as about 85% of people with HbA1c 6.1-6.9% and FPG 5.6-6.9 mmol/L had IFG/IGT or diabetes in the study sample. HbA1c is an appropriate alternative to FPG as a first-step screening test, and the combination of HbA1c ≥ 6.1% and FPG ≥ 5.6 mmol/L would improve the identification of individuals with diabetes mellitus and prediabetes. Answer: The Singapore Impaired Glucose Tolerance Follow-up Study aimed to document changes in glucose tolerance over time in subjects with normal glucose tolerance (NGT) and impaired glucose tolerance (IGT), identify baseline factors associated with worsening glucose tolerance, and determine whether cardiovascular disease (CVD) risk factors associated with IGT improved in tandem with glucose tolerance. The study followed up with subjects initially identified in a 1992 cross-sectional survey and reexamined them in 2000. It found that for NGT subjects, 14.0% progressed to IGT and 4.3% to diabetes over 8 years. For IGT subjects, 41.4% reverted to NGT, 23.0% remained impaired glucose tolerant, and 35.1% developed diabetes. Baseline factors such as obesity, hypertriglyceridemia, higher blood pressure, increased insulin resistance, and lower HDL cholesterol were associated with worsening glucose tolerance in both IGT and NGT subjects. Those with IGT who reverted to NGT remained more obese and had higher blood pressure than those with NGT in both 1992 and 2000. However, by 2000, their serum triglyceride, HDL cholesterol, and insulin resistance values became indistinguishable from those who maintained NGT throughout the study period. The study concluded that while some CVD risk factors associated with IGT and the risk of future diabetes normalize when glucose tolerance normalizes, continued surveillance and treatment in subjects with IGT, even after they revert to NGT, may be important in the prevention of CVD (PUBMED:14578234).
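As a quick, hedged illustration of the screening-test arithmetic reported in the HbA1c abstract above (PUBMED:30127499), the sketch below computes sensitivity, specificity, PPV and NPV from a 2x2 table. The counts are reconstructions chosen to roughly match the reported n = 3540, sensitivity 0.825, PPV 40-50% and NPV of about 98%; they are not the published table.

```python
# Hypothetical 2x2 table for HbA1c >= 6.1% against OGTT/FPG-confirmed diabetes.
# Counts are reconstructions, not the published data from PUBMED:30127499.
tp, fp = 165, 220    # screen-positive: true positives, false positives
fn, tn = 35, 3120    # screen-negative: false negatives, true negatives

sensitivity = tp / (tp + fn)   # P(test positive | disease)      -> 0.825
specificity = tn / (tn + fp)   # P(test negative | no disease)
ppv = tp / (tp + fp)           # P(disease | test positive)      -> ~0.43
npv = tn / (tn + fn)           # P(no disease | test negative)   -> ~0.99

print(f"sensitivity={sensitivity:.3f}, specificity={specificity:.3f}")
print(f"PPV={ppv:.3f}, NPV={npv:.3f}")
```

Note how the high NPV coexists with a modest PPV: with a low disease prevalence, most screen-negative individuals are truly disease-free even when a sizable fraction of screen-positives are false alarms.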
Instruction: Should repeated partial splenectomy be attempted in patients with hematological diseases? Abstracts: abstract_id: PUBMED:9314241 Should repeated partial splenectomy be attempted in patients with hematological diseases? Technical pitfalls and causes of failure in Gaucher's disease. Background/purpose: The awareness of the risk of overwhelming sepsis after splenectomy prompted surgeons to attempt splenic preservation in patients who had hematologic diseases for which splenectomy was the conventional treatment. Partial splenectomy for Gaucher's disease was widely performed before the introduction of alglucerase. In sporadic cases a second partial splenectomy had also been attempted. Methods: The authors present three cases of failed repeated partial splenectomy attempted before alglucerase was available. The role of angiography in planning operative strategy and the surgical pitfalls of this unusual reintervention are discussed. Conclusion: New indications for partial splenectomy in other hematologic diseases make the experience gained with Gaucher's disease valuable for management decisions. abstract_id: PUBMED:34832592 Role of Partial Splenectomy in Hematologic Childhood Disorders. The spleen is a secondary lymphoid organ that belongs to the reticular-endothelial system, directly connected to blood circulation. The spleen is greatly involved in the immune response, especially against capsulated bacteria. Splenectomy plays a fundamental role in the treatment of numerous pediatric hematologic disorders. Taking into account all the possible complications (especially infections) linked to this procedure, alternatives to total splenectomy have been sought. Partial splenectomy has been proposed as a treatment that allows the reduction of infectious risk. This approach has proven safe and feasible in most patients, but multicentric and prospective studies are necessary to more accurately define the indications for performing partial splenectomy. However, vaccinations and antibiotic prophylaxis remain fundamental for preventing serious infections, even in the case of partial splenectomy. We review anatomical and functional properties of the spleen, with a focus on medical or surgical indications for splenectomy, aiming to give practical educational information to patients and their families after splenectomy. Furthermore, we discuss the feasibility of partial splenectomy in children with hematologic diseases who require splenectomy. abstract_id: PUBMED:33468309 Partial splenectomy in children: Long-term reoperative outcomes. Purpose: Partial, or subtotal, splenectomy (PS) has become an accepted alternative to total splenectomy (TS) for management of hematologic disorders in children, but little is known about its long-term outcomes. Here, we present our institutional experience with partial splenectomy, to determine the rate of subsequent TS or cholecystectomy and to identify whether any factors affected this need. Methods: All patients who underwent partial splenectomy at a single tertiary children's hospital were retrospectively reviewed from 2002 through 2019 after IRB approval. The primary outcomes of interest were the rate of reoperation to completion splenectomy (CS) and the rate of cholecystectomy. Secondary outcomes were positive predictor(s) for these subsequent procedures. Results: Twenty-four patients underwent PS, at a median age of 6.0 years, with a preoperative spleen size of 12.7 cm by ultrasound.
At a median follow-up time of 8.0 years, 29% of all patients and 24% of hereditary spherocytosis (HS) patients underwent completion splenectomy, at a median of 34 and 45 months, respectively. Amongst HS patients who did not have a cholecystectomy with or prior to PS, 39% underwent a delayed cholecystectomy following PS. There were no significant differences in age at index procedure, preoperative splenic volume, weight of splenic specimen removed, transfusion requirements, or preoperative or postoperative hematologic parameters (including hemoglobin, hematocrit, total bilirubin, and reticulocyte count) amongst patients of all diagnoses and HS only who underwent PS alone compared to those who went on to CS. There were no cases of OPSS or deaths. Conclusion: Partial splenectomy is a safe alternative to total splenectomy in children with hematologic disease, with a theoretical decreased susceptibility to OPSS. However, families should be counseled about a 29% chance of reoperation to completion splenectomy, and, in HS patients, a 39% chance of delayed cholecystectomy if not performed prior to or with PS. Further studies are needed to understand predictors of these outcomes. abstract_id: PUBMED:30665627 Partial splenectomy: Who, when and how. A systematic review of the 2130 published cases. Background/purpose: In order to avoid consequences of total splenectomy (including severe postsplenectomy sepsis), partial splenectomy (PS) is increasingly reported. Without guidelines and indications concerning a rarely-indicated procedure, a review of the literature should be an asset. Methods: A systematic review of all PSs from 1960 to December 2017 was performed, with special focus on surgical indications, sites of resection, approaches and techniques of vascular dissection and parenchymal section/hemostasis of the spleen, and perioperative morbidity/mortality, including complications compelling surgeons to perform total splenectomy. Results: Among 2130 PSs, indications for resection were hematological disease in 1013 cases and nonhematological conditions in 1078, including various tumors in 142 and trauma in 184. Parenchymal transection was performed using several techniques through the years, most frequently after having induced partial ischemia by splenic hilum vascular dissection/ligation. A total of 371 laparoscopic/robotic PSs were reported. Rescue total splenectomy was required in 75 patients. Conclusions: Although good results are probably overestimated by such a retrospective review, PS should be considered a procedure associated with low morbidity/mortality. Nevertheless, severe complications are also reported, and the need for total splenectomy should not be minimized. Laparoscopic/robotic procedures are increasingly performed, with good results and rare conversions. Type Of Study: Systematic review. Level Of Evidence: IV. abstract_id: PUBMED:2764400 An improved technique for partial splenectomy. Partial splenectomy should be a part of the armamentarium of every surgeon managing traumatized patients. Patients with hematologic disorders may also be candidates for partial splenic resection. An improved technique for partial splenic resection is presented. abstract_id: PUBMED:33471586 Laparoscopic Partial Splenectomy: A Critical Appraisal of an Emerging Technique. A Review of the First 457 Published Cases. Background: Laparoscopic partial splenectomy (LPS) may allow avoiding total splenectomy (TS) complications and maximizing the advantages of a mini-invasive approach.
The objective of this review is to assess the feasibility and safety of LPS and to compare this approach with alternative options. Materials and Methods: A literature review of articles reporting LPS is performed. Several parameters, including age, indication, surgical technique, devices used for splenic section/hemostasis, adverse outcomes, including morbidity/mortality, conversions to open surgery, conversions to TS, operative time (OT), and hospital stay (HS), are analyzed. Articles comparing LPS' results with those of open partial splenectomy and laparoscopic TS are also analyzed. Results: Fifty-nine articles reporting 457 LPS were included. Patients' characteristics varied widely, concerning age and indications, including hematological disease (hereditary spherocytosis, drepanocytosis), splenic focal masses, and trauma. Several technical options are reported. Mean OT and HS are 128 ± 43.7 minutes and 4.9 ± 3.8 days, respectively. No mortality and 5.7% morbidity are reported. Conversion rates to open surgery and to TS are 3.9% and 3.7%, respectively. Conclusions: LPS is feasible and safe, with no mortality, low morbidity, and low conversion rates to laparotomy and to TS. LPS may be accomplished by various techniques and tools. Major complications are sporadically reported, thus potential risks should not be underestimated. abstract_id: PUBMED:23932623 3D virtual rendering before laparoscopic partial splenectomy in children. Introduction: Partial splenectomy in children is a good surgical option for hematological diseases and focal splenic tumors because it allows the preservation of the spleen's immunological function. Furthermore, it can be performed by laparoscopy in children as it is a safe procedure, offering the benefits of a minimally invasive approach. Materials And Methods: The software VR-render LE version 0.81 is a system that enables the visualization of bidimensional 3D images with magnification of anatomical details. We have applied this system to five cases of non-parasitic splenic cysts before laparoscopic partial splenectomy. Results: The images obtained with VR rendering software permitted the preoperative reconstruction of the vascularization of the splenic hilum, allowing the surgeon safe vessel control during laparoscopic procedures. All five partial splenectomies were carried out with no complications or major blood loss. Conclusions: Laparoscopic partial splenectomy should be a first-choice procedure because it is feasible, reproducible, and safe for children; furthermore, it preserves enough splenic tissue, thereby preventing post-splenectomy infections. Volume rendering provides high anatomical resolution and can be useful in guiding the surgical procedure. abstract_id: PUBMED:31973928 Splenic function is not maintained long-term after partial splenectomy in children with sickle cell disease. Background: Partial splenectomy (PS) may allow preservation of splenic function in cases where splenectomy is indicated for hematologic diseases; however, the long-term outcomes are uncertain. We investigated the long-term outcomes of PS in patients with sickle cell disease (SCD). Methods: A single-institution retrospective chart review was performed for children with SCD who underwent PS from 1997 to 2017. For comparison, we reviewed outcomes for patients who underwent PS for hereditary spherocytosis (HS). The primary endpoint was viability of the splenic remnant as inferred by the presence of remnant perfusion on ultrasound and/or liver spleen scan.
Results: Nine patients with SCD and 26 patients with HS underwent PS at a median age of 11 (IQR, 9-14) and 7.5 (IQR, 6-13) years, respectively. All underwent laparoscopic PS, with three (7.9%) conversions to open surgery. Two SCD patients were lost to long-term follow-up. The remaining seven SCD patients had initial postoperative splenic remnant perfusion demonstrated by ultrasonography. By 42 months postoperatively, however, none had a functioning splenic remnant. The median time to loss of splenic remnant was 12.6 (IQR 9.2-28.5) months. In contrast, all HS patients demonstrated robust splenic remnant blood flow with a median follow-up of 46 (IQR 37-82) months. Conclusion: No patient with SCD who underwent PS had viable splenic tissue for more than 42 months, likely due to continued autoinfarction typical of patients with this disease. Therefore, we believe that PS to preserve splenic function is not indicated in patients with SCD. Level Of Evidence: III. abstract_id: PUBMED:15868588 Partial splenectomy before a hematopoietic stem cell transplantation in children. Unlabelled: Hematopoietic stem cell (HSC) engraftment is delayed in children with hypersplenism, and splenectomy may improve HSC engraftment. However, the use of total splenectomy in children is limited because of concerns for postsplenectomy sepsis. In this study, the authors sought to assess the role of partial splenectomy for children with hypersplenism undergoing HSC transplantation. Methods: Five children with a variety of conditions and associated hypersplenism underwent partial splenectomy before an HSC transplantation at the authors' institution between 2000 and 2003. Primary outcome measures were rates of neutrophil and platelet engraftment. Secondary outcome measures included perioperative complications, splenic regrowth, graft-versus-host disease, and infection rate. All outcomes were compared with recipients of an HSC transplant from both age-matched nonsplenectomized children (n = 497) and hypersplenic children who underwent total splenectomy (n = 10). Outcomes were compared using Wilcoxon's rank sum test. Results: The rate of both neutrophil and platelet engraftment was faster in children who underwent either partial or total splenectomy as compared with nonsplenectomized children (mean rates of neutrophil engraftment were 26, 19, and 19 days for the nonsplenectomy, total splenectomy, and partial splenectomy groups, respectively; mean rates of platelet engraftment were 97, 37, and 45 days for the nonsplenectomy, total splenectomy, and partial splenectomy groups, respectively). Graft-versus-host disease rates were similar between the 3 groups. The mean percentage of splenic regrowth after partial splenectomy was 39%. There were no perioperative complications. Conclusions: Partial splenectomy may be safely performed before HSC transplantation and, similar to total splenectomy, may improve the rate of HSC engraftment. Although this series has a limited number of patients, the use of partial splenectomy appears to be safe and may allow for splenic salvage to minimize the risk of postsplenectomy sepsis. abstract_id: PUBMED:21954176 Robot-assisted partial and total splenectomy. Background: The value of robotics in performing partial and total splenectomy has been poorly documented so far. We report herein our experience. Methods: From November 2001 to November 2009, 24 consecutive robotic splenectomies were performed by the same surgeon. All data were prospectively collected and reviewed retrospectively.
Results: Twelve men and 12 women with a median age of 48 years underwent a robotic splenectomy, three of which were partial splenectomies. The indications were: ABO incompatibility for kidney transplantation (n = 7), haematological disease (n = 7) and miscellaneous pathologies (n = 10). Mean operative time was 199 ± 65 min. Median blood loss was 75 (range 5-300) ml. There was one intraoperative complication and two conversions. The postoperative morbidity was 8.3% with no mortality. Median hospital stay was 5.5 days. Conclusions: This series reports the safety and feasibility of robotic partial and total splenectomy. Its use as an alternative to the standard laparoscopic approach is particularly beneficial in more challenging cases. Answer: The decision to attempt repeated partial splenectomy in patients with hematological diseases should be made with caution and on a case-by-case basis. While partial splenectomy has been performed as a means to preserve splenic function and reduce the risk of overwhelming sepsis after total splenectomy, there are technical challenges and potential for failure, particularly in diseases like Gaucher's disease (PUBMED:9314241). The experience gained from partial splenectomy in Gaucher's disease before the introduction of alglucerase is valuable for management decisions in other hematologic diseases (PUBMED:9314241). Partial splenectomy is considered a safe and feasible treatment in most pediatric patients with hematologic disorders, but it is important to note that multicentric and prospective studies are needed to more accurately define the indications for this procedure (PUBMED:34832592). Moreover, even with partial splenectomy, vaccinations and antibiotic prophylaxis remain essential for preventing serious infections (PUBMED:34832592). Long-term outcomes of partial splenectomy should also be considered. In children, there is a 29% chance of reoperation to completion splenectomy and, for hereditary spherocytosis patients, a 39% chance of delayed cholecystectomy if not performed with the initial partial splenectomy (PUBMED:33468309). Additionally, partial splenectomy may not maintain splenic function long-term in children with sickle cell disease, as these patients are likely to lose viable splenic tissue within 42 months postoperatively (PUBMED:31973928). The systematic review of 2130 partial splenectomy cases indicates that while the procedure is associated with low morbidity and mortality, severe complications and the need for total splenectomy should not be minimized (PUBMED:30665627). Laparoscopic and robotic approaches to partial splenectomy are increasingly performed with good results and rare conversions (PUBMED:33471586, PUBMED:21954176). In conclusion, repeated partial splenectomy may be considered in certain cases, but the risks, benefits, and long-term outcomes must be carefully weighed. The decision should involve a thorough discussion with the patient and family about the potential for reoperation and the need for ongoing prophylaxis against infection (PUBMED:33468309, PUBMED:31973928).
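The transplantation abstract above (PUBMED:15868588) compares engraftment times between groups with Wilcoxon's rank sum test. A minimal sketch of that kind of comparison, using scipy's equivalent Mann-Whitney U implementation on invented engraftment-day values (not the study's data), might look like this:

```python
# Illustrative Wilcoxon rank-sum (Mann-Whitney U) comparison of platelet
# engraftment days between two groups; values are invented placeholders,
# not data from PUBMED:15868588.
from scipy.stats import mannwhitneyu

nonsplenectomy = [88, 95, 102, 97, 110, 91, 99]  # days to platelet engraftment
partial_splenectomy = [41, 45, 48, 44, 47]

stat, p_value = mannwhitneyu(nonsplenectomy, partial_splenectomy,
                             alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
```

A rank-based test is a reasonable choice here because engraftment times in small surgical series are rarely normally distributed.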
Instruction: Can the ICF osteoarthritis core set represent a future clinical tool in measuring functioning in persons with osteoarthritis undergoing hip and knee joint replacement? Abstracts: abstract_id: PUBMED:22948262 Can the ICF osteoarthritis core set represent a future clinical tool in measuring functioning in persons with osteoarthritis undergoing hip and knee joint replacement? Objective: To determine the dimensionality, reliability, model fit, adequacy of the qualifier levels, response patterns across different factors, and targeting of the International Classification of Functioning, Disability and Health (ICF) osteoarthritis core set categories in people with osteoarthritis undergoing hip and knee arthroplasty. Methods: The osteoarthritis core set was rated in 316 persons with osteoarthritis who were either in the pre-operative or within one year post-operative stage. Rasch analyses were performed using the RUMM 2030 program. Results: Twelve of the 13 body functions categories and 13 of the 19 activity and participation categories had good model fit. The qualifiers displayed disordered thresholds, necessitating rescoring. There was uneven spread of ICF categories across the full range of the patients' scores, indicating off-targeting. Subtest analysis of the reduced ICF categories of body functions and activity and participation showed that the two components could be integrated to form one measure. Conclusion: The results suggest that it is possible to measure functioning using a unidimensional construct based on ICF osteoarthritis core set categories of body functions and activity and participation in this population. However, omission of some categories and reduction in qualifier levels are necessary. Further studies are needed to determine whether better targeting is achieved, particularly during the pre-operative and during the sub-acute care period. abstract_id: PUBMED:28911242 European validation of The Comprehensive International Classification of Functioning, Disability and Health Core Set for Osteoarthritis from the perspective of patients with osteoarthritis of the knee or hip. Purpose: To validate the International Classification of Functioning, Disability and Health Comprehensive Core Set for Osteoarthritis from the patient perspective in Europe. Materials and Methods: This multicenter cross-sectional study involved 375 patients with knee or hip osteoarthritis. Trained health professionals completed the Comprehensive Core Set, and patients completed the Short-Form 36 questionnaire. Content validity was evaluated by calculating prevalences of impairments in body function and structures, limitations in activities and participation and environmental factors, which were either barriers or facilitators. Convergent construct validity was evaluated by correlating the International Classification of Functioning, Disability and Health categories with the Short-Form 36 Physical Component Score and the SF-36 Mental Component Score in a subgroup of 259 patients. Results: The prevalences of all body function, body structure and activities and participation categories were >40%, >32% and >20%, respectively, and all environmental factors were relevant for >16% of patients. Few categories showed relevant differences between knee and hip osteoarthritis. All body function categories and all but two activities and participation categories showed significant correlations with the Physical Component Score.
Body functions from the ICF chapter Mental Functions showed higher correlations with the Mental Component Score than with the Physical Component Score. Conclusions: This study supports the validity of the International Classification of Functioning, Disability and Health Comprehensive Core Set for Osteoarthritis. Implications for Rehabilitation: Comprehensive International Classification of Functioning, Disability and Health Core Sets were developed as practical tools for application in multidisciplinary assessments. The validity of the Comprehensive International Classification of Functioning, Disability and Health Core Set for Osteoarthritis in this study supports its application in European patients with osteoarthritis. The differences in results between this European validation study and a previous Singaporean validation study underscore the need to validate the International Classification of Functioning, Disability and Health Core Sets in different regions of the world. abstract_id: PUBMED:33572081 Development International Classification of Functioning, Disability and Health Core Set for Post Total Knee Replacement Rehabilitation Program: Delphi-Based Consensus Study in Taiwan. Osteoarthritis is one of the leading causes of disability. Total knee arthroplasty (TKA) is a surgical intervention for patients with severe osteoarthritis. Post TKA rehabilitation is crucial for improving patients' quality of life. However, traditional rehabilitation has only focused on physical function; a systematic analysis of other dimensions of post TKA rehabilitation, such as social participation and environmental factors, is lacking. The aim of this study was to develop a core set from the International Classification of Functioning, Disability and Health (ICF) to create a comprehensive rehabilitation program for patients with osteoarthritis post TKA. Before the Delphi-based consensus process, a literature review was performed to select related ICF categories. We used a three-round Delphi-based consensus among 20 physical therapists with orthopedic rehabilitation expertise in a university-based hospital. A five-point Likert scale was used to rate the importance of each item. The consensus of ratings was analyzed using Spearman's rho and semi-interquartile range indices. The ICF core set for post TKA rehabilitation was determined based on a high level of consensus and a mean score of ≥4.0 in the third Delphi-based consensus round. The ICF core set comprised 32 categories, with 13 regarding body function, four regarding body structures, nine regarding activities and participation, four regarding environmental factors, and two regarding personal factors. Our ICF core set for post TKA rehabilitation can provide information on effective rehabilitation strategies and goal setting for patients post TKA. However, further validation and feasibility assessments are warranted. abstract_id: PUBMED:35533598 Knowledge and use of the International Classification of Functioning, Disability and Health (ICF) and ICF Core Sets for musculoskeletal conditions among Saudi physical therapists. Background: The International Classification of Functioning, Disability, and Health (ICF) provides a coherent biopsychosocial view of health states. The ICF Core Sets were developed to facilitate and encourage the use of the ICF in clinical practice and research. Objective: To examine the level of knowledge and use of the ICF and ICF Core Sets for musculoskeletal conditions among Saudi physical therapists.
Design: Cross-sectional study. Method: We approached Saudi physical therapists (PTs) involved in managing patients with musculoskeletal conditions. Data were collected through an electronic survey consisting of multiple-choice questions related to the knowledge of the ICF, general ICF Core Sets and ICF Core Sets for musculoskeletal conditions. Results: A total of 203 PTs participated in this study. Seventy percent of PTs knew about the meaning of ICF, but only 33% recognized aspects of the ICF. For the osteoarthritis (OA) ICF Core Set, 71.9% had poor knowledge, and for the low back pain (LBP) ICF Core Set, 61.8% had poor knowledge. Conclusions: The Saudi PTs surveyed in this study had a good level of knowledge about the meaning of the ICF; however, this knowledge appeared to be limited when they were asked about aspects of the ICF. They had poor knowledge of musculoskeletal ICF Core Sets but seemed to be more familiar with the LBP ICF Core Set compared to the OA ICF Core Set. abstract_id: PUBMED:20860582 Indications for hip and knee replacement in Sweden. Objectives: The aim of this paper was to compare selected indication parameters for patients scheduled for hip and knee replacement at orthopaedic units in Sweden. Methods: Swedish orthopaedic clinics performing joint replacement were invited to enroll in the study. The study time was set to 2 years (from June 2006 to June 2008). The study subjects were patients undergoing hip or knee replacement for osteoarthritis (OA). For data collection, we used a Swedish priority criteria tool based on a translation from a form used in Canada with minor changes. The reliability and validity of the Swedish tool were investigated, with good reproducibility. The questionnaires (one for the doctor and one for the patient) were completed during decision making for surgery. Results: Eleven hospitals enrolled in the study. In total, 2961 patients were included during the study period. Among these, 1662 were hip replacement patients and 1299 were knee replacement patients. The vast majority of patients undergoing hip or knee replacement had findings indicating severe OA, both clinically and radiologically according to the clinical priority tool. Statistically significant self-reported problems with pain at rest, walking and impaired activities of daily living were also observed. There were statistically significant differences in reported indications between the hospitals, both for hip OA patients and for knee OA patients. Conclusions: A clinical priority criteria tool is a useful means of following changes in indications for certain procedures. It could also contribute to explaining differences in case mix when evaluating clinical outcome and patient satisfaction. abstract_id: PUBMED:30647185 The OMERACT-OARSI Core Domain Set for Measurement in Clinical Trials of Hip and/or Knee Osteoarthritis. Objective: To update the 1997 OMERACT-OARSI (Outcome Measures in Rheumatology-Osteoarthritis Research Society International) core domain set for clinical trials in hip and/or knee osteoarthritis (OA). Methods: An initial review of the COMET database of core outcome sets (COS) was undertaken to identify all domains reported in previous COS including individuals with hip and/or knee OA. These were presented during 5 patient and health professionals/researcher meetings in 3 continents (Europe, Australasia, North America). 
A 3-round international Delphi survey was then undertaken among patients, healthcare professionals, researchers, and industry representatives to gain consensus on key domains to be included in a core domain set for hip and/or knee OA. Findings were presented and discussed in small groups at OMERACT 2018, where consensus was obtained in the final plenary. Results: Four previous COS were identified. Using these, and the patient and health professionals/researcher meetings, 50 potential domains formed the Delphi survey. There were 426 individuals from 25 different countries who contributed to the Delphi exercise. OMERACT 2018 delegates (n = 129) voted on candidate domains. Six domains gained agreement as mandatory to be measured and reported in all hip and/or knee OA clinical trials: pain, physical function, quality of life, and patient's global assessment of the target joint, in addition to the mandated core domain of adverse events including mortality. Joint structure was agreed as mandatory in specific circumstances, i.e., depending on the intervention. Conclusion: The updated core domain set for hip and/or knee OA has been agreed upon. Work will commence to determine which outcome measurement instrument should be recommended to cover each core domain. abstract_id: PUBMED:18469736 International Classification of Functioning, Disability and Health (ICF) core sets for osteoarthritis. A useful tool in the follow-up of patients after joint arthroplasty. Aim: The first aim of this study was to verify the applicability of the International Classification of Functioning, Disability and Health (ICF) core set for osteoarthritis (OA) as an outcome tool after total hip arthroplasty (THA) and total knee arthroplasty (TKA), in order to follow the changes in the profile of functioning after joint arthroplasty. Methods: Seventy-two OA inpatients were consecutively enrolled during the first three days of rehabilitation after THA (34.7%) or TKA (65.3%) and were evaluated with the ICF Core Set for OA. Patients were interviewed at the beginning of the study (one month prior to surgery), at the end of the rehabilitation treatment (lasting on average three weeks), and three and six months after their discharge. Results: The ICF core set data comparison showed many significant differences among the four evaluations, with a significant improvement in many of the categories assessed. At the six-month interview, activity limitations and participation restrictions were still reported, even if with a lower percentage as compared to the first evaluation. Conclusion: The ICF core set for OA proved to be a useful outcome tool after THA or TKA, even if more research is needed, mainly on data reliability and category definition. Even though problems in its administration were encountered, the ICF core set made it possible to focus on issues and aspects of patients' everyday life usually not taken into account or not codified in usual care assessment. abstract_id: PUBMED:21575238 Exploring the relationships between International Classification of Functioning, Disability and Health (ICF) constructs of Impairment, Activity Limitation and Participation Restriction in people with osteoarthritis prior to joint replacement. Background: The International Classification of Functioning, Disability and Health (ICF) proposes three main constructs: impairment (I), activity limitation (A) and participation restriction (P).
The ICF model allows for all paths between the constructs to be explored, with significant paths likely to vary for different conditions. The relationships between I, A and P have been explored in some conditions but not previously in people with osteoarthritis prior to joint replacement. The aim of this paper is to examine these relationships using separate measures of each construct and structural equation modelling. Methods: A geographical cohort of 413 patients with osteoarthritis about to undergo hip and knee joint replacement completed the Aberdeen measures of Impairment, Activity Limitation and Participation Restriction (Ab-IAP). Confirmatory factor analysis was used to test the three-factor (I, A, P) measurement model. Structural equation modelling was used to explore the I, A and P pathways in the ICF model. Results: There was support from confirmatory factor analysis for the three-factor I, A, P measurement model. The structural equation model had good fit [S-B Chi-square = 439.45, df = 149, CFI robust = 0.91, RMSEA robust = 0.07] and indicated significant pathways between I and A (standardised coefficient = 0.76, p < 0.0001) and between A and P (standardised coefficient = 0.75, p < 0.0001). However, the path between I and P was not significant (standardised coefficient = 0.01). Conclusion: The significant pathways suggest that treatments and interventions aimed at reducing impairment, such as joint replacement, may only affect P indirectly, through A; however, longitudinal data would be needed to establish this. abstract_id: PUBMED:11330253 A comparison of outcomes in osteoarthritis patients undergoing total hip and knee replacement surgery. Objective: The aims of this study were to assess changes in physical function and quality of life with the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) and the instrument of the Medical Outcomes Study SF-36 Health Survey (MOS SF-36), respectively, in patients undergoing hip and knee joint replacement surgery and to compare the responsiveness of these two outcome measures 1 year after surgery. Design: One hundred and ninety-four patients with osteoarthritis (OA knee 108, OA hip 86) admitted to four hospitals in Sydney were followed over a period of 1 year at 3-monthly intervals. Results: WOMAC measures improved significantly after 1 year for OA hip and OA knee: there was reduction in pain of 71% and 53%, reduction of stiffness of 55% and 43% and improvement in physical function of 68% and 43%, respectively. MOS SF-36 measures in those having hip surgery improved significantly for pain (222%), physical function (247%), physical role functioning (402%), general health (110%), vitality (143%), social functioning (169%) and mental health (114%). For those in the knee surgery group, significant improvement was seen for pain (175%), physical function (197%), physical role functioning (275%), vitality (125%) and social functioning (119%). The WOMAC was a more responsive measure than the MOS SF-36. Conclusion: WOMAC and MOS SF-36 detect significant and clinically meaningful changes in outcome after hip and knee replacement. WOMAC requires a smaller sample size and is more responsive in the short term. For a follow-up longer than 6 months, MOS SF-36 provides additional information. The improvements in outcomes following hip joint surgery were significantly greater than those following knee surgery.
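To make the I -> A -> P path logic of the Ab-IAP abstract above (PUBMED:21575238) concrete, here is a minimal sketch that approximates the two structural paths with standardized OLS regressions (A on I, then P on A and I). It uses simulated data and plain regression rather than the authors' structural equation model, so the coefficients are illustrative only.

```python
# Simplified illustration of the I -> A -> P mediation structure reported in
# PUBMED:21575238, using simulated data and OLS instead of full SEM.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 413
I = rng.standard_normal(n)                      # impairment
A = 0.76 * I + 0.65 * rng.standard_normal(n)    # activity limitation driven by I
P = 0.75 * A + 0.66 * rng.standard_normal(n)    # participation driven by A only

def std_ols(y, X):
    """OLS on z-scored variables, so coefficients are standardized."""
    z = lambda v: (v - v.mean()) / v.std()
    Xz = sm.add_constant(np.column_stack([z(x) for x in X]))
    return sm.OLS(z(y), Xz).fit()

print(std_ols(A, [I]).params[1])       # path I -> A, close to 0.76
res = std_ols(P, [A, I])
print(res.params[1], res.params[2])    # path A -> P near 0.75; direct I -> P near 0
```

Because P is generated from A alone, the direct I -> P coefficient comes out near zero while the indirect route through A stays strong, mirroring the pattern the study reports.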
abstract_id: PUBMED:36376028 Genome-wide association meta-analysis of knee and hip osteoarthritis uncovers genetic differences between patients treated with joint replacement and patients without joint replacement. Objectives: Osteoarthritis is a common and severe, multifactorial disease with a well-established genetic component. However, little is known about how genetics affect disease progression, and thereby the need for joint replacement. Therefore, we aimed to investigate whether the genetic associations of knee and hip osteoarthritis differ between patients treated with joint replacement and patients without joint replacement. Methods: We included knee and hip osteoarthritis cases along with healthy controls, altogether counting >700 000 individuals. The cases were divided into two groups based on joint replacement status (surgical vs non-surgical) and included in four genome-wide association meta-analyses: surgical knee osteoarthritis (N = 22 525), non-surgical knee osteoarthritis (N = 38 626), surgical hip osteoarthritis (N = 20 221) and non-surgical hip osteoarthritis (N = 17 847). In addition, we tested for genetic correlation between the osteoarthritis groups and the pain phenotypes intervertebral disc disorder, dorsalgia, fibromyalgia, migraine and joint pain. Results: We identified 52 sequence variants associated with knee osteoarthritis (surgical: 17, non-surgical: 3) or hip osteoarthritis (surgical: 34, non-surgical: 1). For the surgical phenotypes, we identified 10 novel variants, including genes involved in autophagy (rs2447606 in ATG7) and mechanotransduction (rs202127176 in PIEZO1). One variant, rs13107325 in SLC39A8, associated more strongly with non-surgical knee osteoarthritis than surgical knee osteoarthritis. For all other variants, significance and effect sizes were higher for the surgical phenotypes. In contrast, genetic correlations with pain phenotypes tended to be stronger in the non-surgical groups. Conclusions: Our results indicate differences in genetic associations between knee and hip osteoarthritis depending on joint replacement status. Answer: The ICF osteoarthritis core set has the potential to represent a future clinical tool in measuring functioning in persons with osteoarthritis undergoing hip and knee joint replacement. Studies have shown that the core set categories of body functions and activity and participation can be integrated to form a unidimensional construct to measure functioning in this population. However, some categories may need to be omitted, and qualifier levels reduced to achieve better targeting, particularly during the pre-operative and sub-acute care periods (PUBMED:22948262). The European validation of the Comprehensive ICF Core Set for Osteoarthritis supports its validity from the patient perspective, indicating its applicability in European patients with knee or hip osteoarthritis (PUBMED:28911242). Additionally, a Delphi-based consensus study in Taiwan developed an ICF core set for post-total knee replacement rehabilitation, comprising various categories that can inform effective rehabilitation strategies and goal setting for patients post-surgery, although further validation and feasibility assessments are warranted (PUBMED:33572081).
However, knowledge and use of the ICF, and of the ICF Core Sets for musculoskeletal conditions such as osteoarthritis, appear to be limited among physical therapists, suggesting a need for increased education and training to enhance the implementation of these tools in clinical practice (PUBMED:35533598). In conclusion, while the ICF osteoarthritis core set shows promise as a clinical tool for measuring functioning in individuals undergoing hip and knee joint replacement, further research, validation, and education are necessary to optimize its use and ensure it meets the needs of both clinicians and patients in different regions and healthcare settings.
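As a rough illustration of the Delphi consensus arithmetic described in the Taiwan core-set abstract cited above (PUBMED:33572081), the sketch below flags candidate items whose five-point Likert ratings have a mean of at least 4.0 and a small semi-interquartile range (SIQR ≤ 0.5 is a commonly used agreement threshold, assumed here rather than taken from the paper). The ratings matrix is invented for demonstration.

```python
# Toy Delphi consensus screen: keep items with mean rating >= 4.0 and a small
# semi-interquartile range (SIQR). Ratings are invented, not the Taiwan data.
import numpy as np

ratings = np.array([            # rows = candidate ICF categories, cols = 20 raters
    [5, 4, 5, 4, 4, 5, 4, 5, 4, 4, 5, 4, 4, 5, 4, 4, 5, 4, 4, 5],  # e.g. pain item
    [3, 4, 2, 3, 4, 3, 2, 3, 4, 3, 3, 2, 4, 3, 3, 2, 3, 4, 3, 3],  # e.g. services item
])

for row, label in zip(ratings, ["candidate item A", "candidate item B"]):
    q1, q3 = np.percentile(row, [25, 75])
    siqr = (q3 - q1) / 2                      # semi-interquartile range
    consensus = siqr <= 0.5 and row.mean() >= 4.0
    print(f"{label}: mean={row.mean():.2f} SIQR={siqr:.2f} keep={consensus}")
```

Under these assumed thresholds, the first item would enter the core set while the second would be dropped or sent to another Delphi round.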
Instruction: Breast arterial calcifications and carotid intima-media thickness and haemodynamics: Is there any association? Abstracts: abstract_id: PUBMED:31636768 Association of vascular calcification of breast with carotid intima media thickness. Purpose: To assess the relationship between breast arterial calcification (BAC) on mammography and carotid intima media thickness (C-IMT) by comparing these patients with healthy control subjects. Material and Methods: The study included 100 women divided into two groups based on the presence of BAC on mammography using the WIPRO GE - DMR PLUS mammography unit. The groups included 50 BAC(+) (case group) and 50 BAC(-) (control group). BAC is defined as two linear calcific deposits forming the rail-road track pattern, with a ring-like appearance in cross section. Relevant previous clinical and surgical history was taken, and C-IMT was measured using B-mode ultrasonography with an Aplio XG unit (Toshiba Medical Systems Corp., Japan) and an 8-13 MHz linear transducer within 1 cm from the proximal and distal portion of the common carotid artery; the mean value was taken. Results: All the participants included in this study were within the age group of 40-70 years. Mean age in the BAC(+) group was 59.18 ± 8.59 years and in the BAC(-) group was 50.70 ± 7.93 years. Mean C-IMT in the BAC(+) group was 0.86 ± 0.21 mm vs. 0.71 ± 0.12 mm in the BAC(-) group. The Chi-square test, independent t-test and Spearman's rho correlation were performed to assess the association of breast arterial calcification. The study showed a significant difference in mean C-IMT between the two groups (p value < 0.001) when correlated with age, diabetes, hypertension, presence of BAC, and history of surgery and drug intake. Conclusions: BAC on mammography is independently associated with C-IMT. Furthermore, apart from age and menopausal status, a statistically significant positive correlation with C-IMT was observed. abstract_id: PUBMED:37798228 Carotid intima layer thickness but not intima-media thickness is related to coronary artery calcification in type 2 diabetes individuals: Results from the Brazilian diabetes study. Background and Aims: Carotid intima-media thickness (cIMT) is inconsistent in predicting cardiovascular risk. This may stem from the variability of the media thickness (cM) outweighing the intimal thickness (cIT) as the sign of atherosclerosis. Thus, we evaluated, in type 2 diabetes (T2D) individuals, the association between carotid measures and coronary artery calcification (CAC). Methods and Results: Associations between the presence of CAC and cIT, cM, and cIMT were examined in 224 individuals. Binary logistic regression was used to assess CAC predictors. The Akaike information criterion (AIC) and log-likelihood test (LLT) were used to assess differences among univariate models. The cIT (0.335 mm vs 0.363 mm; p = 0.001) and cIMT (0.715 vs 0.730; p = 0.019), but not cM (0.386 mm vs 0.393 mm; p = 0.089), were higher among individuals with CAC. In unadjusted analysis, cIT (273; -134; p = 0.001) showed a stronger relationship with CAC when compared to cIMT (279; -137; p = 0.022) and cM (281; -139; p = 0.112), based on the AIC and LLT, respectively. In multivariate logistic regression, CAC was related to carotid plaque (OR: 1.91, 95% confidence interval (CI): 1.08, 3.38; p = 0.027) and high-cIT (OR: 2.70, 95% CI: 1.51, 4.84; p = 0.001), but not to high-cIMT (OR: 1.70, 95% CI: 0.96, 3.00; p = 0.067) nor high-cM (OR: 1.33, 95% CI: 0.76, 2.34; p = 0.322).
Conclusion: In T2D individuals, cIT is a better predictor of CAC than cIMT; cM is not associated with CAC. abstract_id: PUBMED:19037584 Association of breast arterial calcification and carotid intima-media thickness. Breast arterial calcification (BAC) on mammography has been identified as calcific medial sclerosis of medium-sized breast arteries, and has been reported to be associated with cardiovascular risk factors, coronary artery disease, and cardiovascular mortality. Carotid intima-media thickness (C-IMT) is a well-known surrogate marker of atherosclerosis and predictor of cardiovascular morbidity and mortality. Consequently, the present study was designed to investigate the association between the presence of BAC on mammography and C-IMT. Twenty-five postmenopausal cases with BAC and 29 subjects without BAC on mammography were included in the study. Cardiovascular risk factors, number of childbirths, postmenopausal duration, and age at menopause were all noted, in addition to detailed physical and laboratory examination. In the whole study population, C-IMT was measured with B-mode ultrasound. The women with BAC had a significantly increased number of childbirths, postmenopausal duration, frequency of diabetes mellitus, systolic blood pressure, fasting glucose, and C-IMT (0.87 +/- 0.17 mm versus 0.60 +/- 0.19 mm) in comparison with the women without BAC (P < 0.05 for all). The C-IMT was correlated with age, number of childbirths, postmenopausal duration, presence of BAC, and serum triglyceride level (P < 0.05 for all). Independent predictors of C-IMT were the presence of BAC on mammography (beta = 0.463, P < 0.001) and serum triglyceride level (beta = 0.222, P = 0.042), whereas the only independent predictor of BAC was C-IMT (chi(2) = 23.41, beta = 7.56, P = 0.004). Findings of the present study suggest that BAC on mammography is independently associated with C-IMT. Screening mammographies merit evaluation for the presence of BAC, which might benefit cardiovascular preventive medicine in women by predicting atherosclerosis. abstract_id: PUBMED:33551394 A Comparison of Segment-Specific and Composite Measures of Carotid Intima-Media Thickness and their Relationships with Coronary Calcium. Aims: The utility of carotid intima-media thickness (cIMT) as a marker for coronary heart disease is under heavy debate. This is predominantly due to the lack of a standard definition of cIMT, leading to inconsistent results. We investigated and compared the relationships of five different measures of cIMT with coronary calcium. Methods: Japanese men aged 40-79 years (n=869) from the Shiga Epidemiological Study of Subclinical Atherosclerosis were examined. Mean cIMT was measured in three segments of the carotid arteries: common carotid artery (CCAmean), internal carotid artery (ICAmean) and bifurcation (Bifmean). Mean cIMT of average values (Mean cIMT) and mean cIMT of maximum values (Mean-Max cIMT) of all segments combined were assessed. Coronary calcium was assessed as coronary artery calcification (CAC). Ordinal logistic regression was used to determine the odds ratio (OR) of higher CAC per 1 standard deviation higher cIMT measure. Analyses were adjusted for cardiovascular covariates and stratified by age quartiles. Results: All cIMT measures had positive associations with CAC (p<0.001) [OR, 95% Confidence Interval]: ICAmean [1.23, 1.07-1.42], CCAmean [1.27, 1.08-1.49], Bifmean [1.33, 1.15-1.53], Mean cIMT [1.42, 1.22-1.66], and Mean-Max [1.50, 1.28-1.75].
In age-stratified analyses, only Mean-Max cIMT maintained a significant relationship with CAC in every age quartile (p<0.05), while CCAmean had some of the weakest associations among age quartiles. Conclusions: Mean-Max cIMT had consistently stronger associations with coronary calcium, independent of important confounders, such as age. The most oft-used measure, CCAmean, was no longer associated with coronary calcium after age-adjustment and stratification. abstract_id: PUBMED:24954699 Carotid intima-media thickness and calcification in relation to bone mineral density in postmenopausal women-the OSTPRE-BBA study. Objectives: Atherosclerosis (AS) and osteoporosis are common diseases in elderly people and may be metabolically related. The aim of this cross-sectional population-based study was to explore the association between common carotid artery intima-media thickness (cIMT), carotid artery calcification (CAC), and BMD in postmenopausal women. In addition, the association of postmenopausal hormone therapy (HT) and selected diseases with cIMT and carotid calcification was studied. Study Design: The 290 women (mean age 73.6 years) included in this Bone Brain Atherosclerosis study (OSTPRE-BBA) were randomly selected from the population-based Kuopio Osteoporosis Risk Factor and Prevention (OSTPRE) study cohort, Finland. Main Outcome Measures: For this cross-sectional study, cIMT was measured with B-mode ultrasound; femoral neck and total body BMD were measured with dual-energy X-ray absorptiometry. Results: There were no statistically significant associations between mean cIMT and femoral neck T-score (p>0.05). However, an increased maximum cIMT was significantly associated with low femoral neck T-score. In the osteoporotic group (T-score <-2.5, n=20), the maximum cIMT was 2.51±0.88mm (mean±SD); in the normal BMD group (T-score >-1, n=122), it was 1.93±0.64mm (p=0.001). The odds of having CAC were approximately four-fold higher in the osteoporotic group compared with the group with a normal femoral neck T-score (odds ratio [OR]=4.2, p=0.038). The maximum cIMT was smaller in HT users (1.98±0.56mm, n=190) than in non-users (2.16±0.74mm, n=156, p=0.036). Conclusions: The results of our population-based study suggest that BMD is related to AS, at least in carotid arteries. They indirectly support the hypothesis of partially shared pathophysiological mechanisms between these two disorders. abstract_id: PUBMED:21951861 Coronary artery calcification score and carotid intima–media thickness in patients with hemophilia. Background/objectives: The traditional view that patients with hemophilia are protected against cardiovascular disease is under debate. The aim of the present study was to evaluate the presence and extent of atherosclerosis by coronary artery calcification score (CACS) and carotid intima media thickness (IMT) in patients with hemophilia, and to evaluate their cardiovascular risk profile. Methods: Sixty-nine patients (51 with hemophilia A; 18 with hemophilia B) were studied [median age: 52 years (interquartile range [IQR] 43–64)]. Cardiovascular risk factors and prior major adverse cardiovascular events (MACEs) were recorded. CACS was derived from electron-beam or dual-source computed tomography, and carotid IMT was assessed by ultrasound measurements and compared with age-specific reference values. Results: The median CACS in all patients was 35 (IQR 0–110) and the geometric mean IMT was 0.80 mm (95% confidence interval [CI] 0.76–0.84); neither was different from the reference values. 
Patients with a previous MACE (n = 9) had significantly higher CACS and IMT than patients without a previous MACE: CACS median 1013 (IQR 530–1306) vs. 0 (IQR 0–67), and IMT geometric mean 1.09 mm (95% CI 0.95–1.26) vs. 0.76 mm (95% CI 0.73–0.79), both P < 0.001. A higher calculated 10-year cardiovascular risk was related to higher IMT and CACS. Conclusion: Patients with hemophilia are not protected against the development of atherosclerosis as measured by CACS and IMT. The extent of atherosclerosis is related to the traditional cardiovascular risk factors. This suggests that traditional cardiovascular risk factors should be monitored and treated in patients with hemophilia. abstract_id: PUBMED:24818628 Breast arterial calcifications and carotid intima-media thickness and haemodynamics: Is there any association? Objective: In this study, we aimed to research the relation between breast arterial calcifications (BACs) detected on mammography and two well-known markers of cardiovascular disease: carotid artery intima-media thickness (C-IMT) and haemodynamic parameters such as carotid peak-systolic velocity (PSV), end-diastolic velocity (EDV) and resistive index (RI). Methods: The study group consisted of 50 consecutive BAC (+) women and the control group consisted of 55 BAC (-) women. In all participants, BAC was diagnosed using mammography and C-IMT was measured using B-mode and Doppler ultrasonography. BAC was defined as two linear calcification depositions in a conical periphery or as calcific rings at the mammographic evaluation. Doppler spectrum samples were obtained from 2 cm proximal to the main carotid artery bifurcation. Results: Postmenopausal female patients ranging in age from 40 to 86 years were included in this study. When the groups were adjusted for age, a statistically significant difference was found between the mean C-IMT of the BAC (+) and BAC (-) groups (0.81 ± 0.2 vs. 0.69 ± 0.2 mm; p<0.001). No significant differences were observed between the BAC (+) and BAC (-) groups in terms of PSV, EDV, or RI. Conclusion: The findings of the present study suggest that BAC, diagnosed by mammography, is independently associated with C-IMT. C-IMT measurement is suggested as a useful tool to detect early atherosclerotic changes. However, haemodynamic variables (PSV, EDV, RI) were not statistically different between the BAC (+) and BAC (-) groups. Prospective larger cohort studies are needed to further elucidate whether BAC is an independent risk factor for cardiovascular disease. abstract_id: PUBMED:17466442 Calcium distribution in the vessel wall and intima-media thickness of the human carotid arteries. Increased common carotid artery (CCA) intima-media thickness (IMT) measured by B-mode ultrasound is an early marker of the atherosclerotic process. Arterial calcification is not clearly understood. Using the particle-induced X-ray emission (PIXE) method, we looked for the location in the artery wall where calcium accumulates in the early phase of atherosclerosis. Twelve segments of CCAs of deceased stroke patients were investigated. In vivo, carotid duplex ultrasound was performed with bilateral CCA IMT measurement at plaque-free sections. During autopsy, segments of carotid arteries were collected and filled under pressure with a stained histologic embedding material. The frozen arteries were cut into 60-microm-thick slices. Calcium distribution maps from the segments of arteries were determined by the PIXE method. IMT measured by ultrasound and calcium distribution maps measured by PIXE were compared.
In our cross-sectional study, using PIXE analysis and ultrasound images, we could demonstrate early calcium accumulation in the media layer. Our results have also shown a significant relationship between the calcium content of the distribution maps measured by PIXE analysis and the corresponding IMT on B-mode ultrasound images of human CCAs. abstract_id: PUBMED:27678437 Associations of serum LDL particle concentration with carotid intima-media thickness and coronary artery calcification. Background: Low-density lipoprotein particle (LDL-P) has recently been found to be a stronger predictor of cardiovascular disease (CVD) than LDL-cholesterol (LDL-C). Objectives: Whether LDL-P is associated with subclinical atherosclerosis, independent of LDL-C as well as other lipid measures, has not been fully examined. We aimed to analyze LDL-P associations with measures of subclinical atherosclerosis. Methods: We examined 870 Japanese men randomly selected from Kusatsu City, Shiga, Japan, aged 40-79 years from 2006-2008, free of clinical CVD and not using lipid-lowering medication. Cross-sectional associations of lipid measures with carotid intima-media thickness (cIMT) and coronary artery calcification (CAC; >0 Agatston score) were examined. Results: LDL-P was significantly positively associated with cIMT and maintained this association after adjustments for LDL-C and other lipid measures. Although these lipid measures were positively associated with cIMT, model adjustment for LDL-P removed any significant relationships. Higher LDL-P was associated with a significantly higher odds ratio of CAC, and further adjustment for LDL-C did not affect this relationship. In contrast, the LDL-C association with CAC was no longer significant after adjustment for LDL-P. Other lipid measures attenuated associations of LDL-P with CAC. Likewise, associations of these measures with CAC were attenuated when model adjustments for LDL-P were made. Conclusions: In a community-based sample of Japanese men, free of clinical CVD, LDL-P was a robust marker for subclinical atherosclerosis, independent of LDL-C and other lipid measures. Associations of LDL-C and other lipid measures with either cIMT or CAC were generally not independent of LDL-P. abstract_id: PUBMED:36294964 Increased Carotid Intima-Media Thickness in Asymptomatic Individuals Is Associated with the PCSK9 (rs2149041) Gene Polymorphism in the Mexican Mestizo Population: Results of the GEA Cohort. Increased carotid intima-media thickness (CIMT) and coronary artery calcification (CAC) are features of subclinical atherosclerosis that might be determined by the genetic background of patients. Among the multiple risk factors, proprotein convertase subtilisin kexin type 9 (PCSK9) has a great impact on atheroma development. We therefore focused on the potential association of the PCSK9 gene polymorphism (rs2149041) with the risk of an increased CIMT. We included 881 unrelated, asymptomatic individuals (732 with normal CIMT and 149 with increased CIMT) who lacked coronary calcification (CAC score = 0). Under the recessive inheritance model and adjusted for several cardiovascular risk factors, the rs2149041 polymorphism, determined by TaqMan genotyping assay, was associated with a high risk of increased CIMT (OR = 2.10, 95% CI = 1.26-3.47, P recessive = 0.004). Our results suggest that the rs2149041 polymorphism could be a risk marker for increased CIMT in asymptomatic individuals without coronary artery disease, as determined by the absence of a CAC score.
Answer: Yes, there is an association between breast arterial calcifications (BAC) and carotid intima media thickness (C-IMT). Several studies have investigated this relationship and found a significant association between the presence of BAC on mammography and increased C-IMT, which is a well-known surrogate marker of atherosclerosis and predictor of cardiovascular morbidity and mortality. One study included 100 women divided into two groups based on the presence of BAC on mammography and found that the mean C-IMT in the BAC(+) group was significantly higher than in the BAC(-) group. The study concluded that BAC in mammography is independently associated with C-IMT, and apart from age and menopausal status, there is a statistically significant positive correlation with C-IMT (PUBMED:31636768). Another study involving postmenopausal women found that those with BAC had significantly increased C-IMT compared to women without BAC. The study suggested that BAC on mammography is independently associated with C-IMT and that screening mammographies should be evaluated for the presence of BAC, which might benefit cardiovascular preventive medicine in women by predicting atherosclerosis (PUBMED:19037584). Furthermore, a study that aimed to research the relation between BAC detected on mammography and C-IMT found that when groups were adjusted for age, there was a statistically significant difference in mean C-IMT between BAC(+) and BAC(-) groups. However, haemodynamic variables like carotid peak-systolic velocity (PSV), end-diastolic velocity (EDV), and resistive index (RI) were not statistically different between the two groups. This suggests that while BAC is independently associated with C-IMT, it may not be associated with changes in haemodynamics parameters (PUBMED:24818628). In summary, the evidence indicates that there is an association between breast arterial calcifications and carotid intima media thickness, which may serve as an indicator of cardiovascular risk and atherosclerotic disease.
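As a side note on method: the age-adjusted group comparison reported in several of these studies follows a standard ANCOVA-style regression pattern. The sketch below illustrates that pattern only; the variable names (bac, age, cimt) and the simulated data are assumptions for illustration, not code or data from the cited studies.

```python
# Minimal sketch of an age-adjusted comparison of mean C-IMT between
# BAC(+) and BAC(-) groups, assuming hypothetical variable names and
# synthetic data (the cited studies do not publish analysis code).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 105  # roughly the combined group sizes reported above (50 + 55)
df = pd.DataFrame({
    "bac": rng.integers(0, 2, n),      # 1 = BAC present on mammography
    "age": rng.uniform(40, 86, n),     # age range reported in the study
})
# Simulate C-IMT (mm) with both an age trend and a BAC effect.
df["cimt"] = 0.4 + 0.005 * df["age"] + 0.10 * df["bac"] + rng.normal(0, 0.15, n)

# 'cimt ~ bac + age' estimates the BAC group difference holding age fixed,
# which is what "adjusted for age" means in the abstracts above.
model = smf.ols("cimt ~ bac + age", data=df).fit()
print(model.params["bac"], model.pvalues["bac"])
```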
Instruction: High prevalence of hyperhomocysteinemia in patients with inflammatory bowel disease: a pathogenic link with thromboembolic complications? Abstracts: abstract_id: PUBMED:9798965 High prevalence of hyperhomocysteinemia in patients with inflammatory bowel disease: a pathogenic link with thromboembolic complications? Background And Aims: Why patients with inflammatory bowel disease are at increased risk for thrombosis is unknown. Since they may have impaired absorption of vitamins that regulate the metabolism of homocysteine, we tested the hypothesis that they have hyperhomocysteinemia, an established risk factor for arterial and venous thrombosis. Methods: The concentrations of total homocysteine (tHcy), folate and cobalamin were measured in blood samples from 61 consecutive patients with inflammatory bowel disease and 183 age- and sex-matched healthy controls. Results: The mean (+/- S.D.) concentration of plasma tHcy was higher in patients (12.2 +/- 7.7 micromol/l) than in controls (10.5 +/- 4.6, p = 0.045). Eight patients (13%) had concentrations of tHcy higher than the 95th percentile of the distribution among controls, as compared with 9 healthy controls (5%, p = 0.04). The prevalence of folate deficiency was higher in patients (15%) than in controls (5%, p = 0.02). Oral administration of folate, cobalamin and pyridoxine to 15 patients for 30 days decreased their mean tHcy levels from 20.3 +/- 9.9 to 9.5 +/- 3.4 (p < 0.001). Conclusions: In patients with inflammatory bowel disease there is an increased prevalence of hyperhomocysteinemia, which can be corrected by the administration of folate, cobalamin and pyridoxine. The high prevalence of hyperhomocysteinemia may account for the thrombotic risk of IBD patients; whether or not its correction will decrease the thrombotic risk should be tested in properly designed clinical trials. abstract_id: PUBMED:22198895 Prevalence and risk factors of thromboembolic complications in inflammatory bowel disease Background: Patients with inflammatory bowel disease (IBD) are at increased risk of thromboembolic complications (TEC), which represent an important cause of morbidity and mortality. Aim: To assess the prevalence and risk factors of TEC in patients with IBD. Methods: We conducted a retrospective study including all the IBD patients in the gastroenterological department of Charles Nicolle hospital between 2000 and 2010. Only thromboembolic events that had been diagnosed by an imaging procedure were counted. Results: A total of 266 patients with IBD were consecutively included. Thromboembolic events occurred in nine patients (3.4%): six men and three women. Their mean age was 31 years [15-64 years]. Five patients had Crohn's disease and four had ulcerative colitis. The types of TEC were deep venous thrombosis of the leg in five cases, with pulmonary embolism in one of them, cerebral venous thrombosis in two cases, portal thrombosis in one case and jugular vein thrombosis in one case. Active disease was present in all cases at the time the TEC occurred. Conclusions: In our study, the prevalence of TEC was 3.4% in patients with IBD. Deep venous thromboses of the leg were the most common TEC, and all cases occurred during the active phase of IBD. abstract_id: PUBMED:11569694 Hyperhomocysteinemia and prevalence of polymorphisms of homocysteine metabolism-related enzymes in patients with inflammatory bowel disease. Objectives: Patients with inflammatory bowel disease (IBD) have an increased risk of thrombotic complications.
Moreover, a hypercoagulable state has been hypothesized as a contributing factor in the pathogenesis of IBD. Recently, a growing amount of interest has focused on mild-to-moderate hyperhomocysteinemia as a risk factor for thromboembolic disease. We aimed to evaluate the prevalence of hyperhomocysteinemia in patients with IBD and to investigate the contribution of genetic defects in the enzymes involved in homocysteine (Hcy) metabolism and vitamin status in determining increased levels of plasma total Hcy (tHcy). Methods: The concentrations of tHcy, folate, and vitamin B12 as well as the prevalence of methylenetetrahydrofolate reductase (MTHFR) 677C to T mutation and the 68-bp insertion at exon 8 of cystathionine beta-synthase (CBS) were measured in patients with IBD and healthy controls. Results: In all, 17 out of 64 IBD patients (26.5%) and four out of 121 (3.3%) controls had hyperhomocysteinemia with a statistically significant difference (p < 0.0001). No significant difference was found between IBD patients and controls with regard to the prevalence of homozygotes for the C677T variant (TT) of MTHFR or the prevalence of heterozygotes for the CBS-gene mutation (IN). Among the IBD patients the only independent factor significantly associated with hyperhomocysteinemia was folate deficiency (p = 0.0002), regardless of the MTHFR or the CBS genotype. Conclusions: IBD patients have a higher prevalence of hyperhomocysteinemia than do healthy controls. Folate deficiency is the only independent risk factor in developing hyperhomocysteinemia. abstract_id: PUBMED:27211077 Prevalence and risk factors for venous thromboembolic complications in the Swiss Inflammatory Bowel Disease Cohort. Objective: Inflammatory bowel disease (IBD), including Crohn's disease (CD) and ulcerative colitis (UC), is associated with the occurrence of venous thromboembolism (VTE) such as deep vein thrombosis (DVT) and pulmonary embolism (PE). We aimed to assess the prevalence and associated risk factors for VTE in a large national cohort of IBD patients. Material And Methods: Data from patients of the Swiss IBD Cohort Study (SIBDCS) enrolled between 2006 and 2013 were analyzed. Results: A total of 2284 IBD patients were analyzed of which 1324 suffered from CD and 960 from UC. VTE prevalence was 3.9% (90/2284) overall and 3.4% (45/1324) in CD patients (whereof 2.4% suffered from DVT and 1.5% from PE) and 4.7% (45/960) in UC patients (whereof 3.2% suffered from DVT and 2.4% from PE). Median disease duration in CD patients with VTE was 12 years [IQR 8-23] compared to eight years [3-16] in CD patients without VTE (p = 0.001). Disease duration in UC patients with VTE was seven years [4-18] compared to six years [2-13] in UC patients without VTE (p = 0.051). Age at CD diagnosis ≥40 years (OR 1.851, p = 0.073) and disease duration >10 years (OR 1.771, p = 0.088) showed a trend to be associated with VTE. In UC patients, IBD-related surgery (OR 3.396, p = 0.004) and pancolitis (OR 1.927, p = 0.050) were significantly associated with VTE. Conclusions: VTE are prevalent in CD and UC patients. Pancolitis and UC-related surgery are significantly associated with VTE in UC patients. abstract_id: PUBMED:25866483 Thrombosis in inflammatory bowel diseases: what's the link? Inflammatory bowel disease affects more than 2 million people in Europe, with almost 20% of patients being diagnosed in pediatric age. 
Patients with inflammatory bowel disease are at increased risk of thromboembolic complications, which may affect patients' morbidity and mortality. The risk of the most common thromboembolic events, such as deep venous thrombosis and pulmonary embolism, is estimated to be three-fold higher than in controls, but many other vascular sites can be affected. Moreover, patients with ulcerative colitis and Crohn's disease experience thromboembolic events at a younger age compared with the general population. Many factors have been investigated as determinants of the pro-thrombotic tendency, such as acquired risk factors or genetic and immune abnormalities, but a unique cause has not been found. Many efforts have been focused on the study of abnormalities in the coagulation cascade, its natural inhibitors and the components of the fibrinolytic system, and both quantitative and qualitative alterations have been demonstrated. Recently, the role of platelets and the microvascular endothelium has been reviewed as the possible link between the inflammatory and hemostatic processes. abstract_id: PUBMED:33098885 Prevalence and Effect of Genetic Risk of Thromboembolic Disease in Inflammatory Bowel Disease. Background And Aims: The largest cause of mortality in patients with inflammatory bowel disease (IBD) remains thromboembolic disease (TED). Recent reports have demonstrated that both monogenic and polygenic factors contribute to TED and 10% of healthy subjects are genetically at high risk for TED. Our aim was to utilize whole-exome sequencing and genome-wide genotyping to determine the proportion of IBD patients genetically at risk for TED and investigate the effect of genetic risk of TED in IBD. Methods: The TED polygenic risk score was calculated from genome-wide genotyping. Thrombophilia pathogenic variants were extracted from whole-exome sequencing. In total, 792 IBD patients had both whole-exome sequencing and genotyping data. We defined patients as genetically at high risk for TED if they had a high TED polygenic risk score or carried at least 1 thrombophilia pathogenic variant. Results: We identified 122 of 792 IBD patients (15.4%) as genetically at high risk for TED. Among the 715 of 792 subjects whose documented TED status was available, 63 (8.8%) had TED events. Genetic TED risk was significantly associated with increased TED events (odds ratio, 2.5; P = .0036). In addition, we confirmed an additive effect of monogenic and polygenic risk on TED (P = .0048). Patients with high TED genetic risk more frequently had thrombosis at multiple sites (78% vs 42%, odds ratio, 3.96; P = .048). Conclusions: Genetic risk (both poly- and monogenic) was significantly associated with TED history. Our results suggest that genetic traits identify approximately 1 in 7 patients with IBD who will experience a 2.5-fold or greater risk for TED. abstract_id: PUBMED:11012640 Laboratory findings associated with thrombophilia are not more common in inflammatory bowel disease. Thromboembolic disease (TED) has been recognized as a complication of inflammatory bowel disease (IBD) since the 1930s (Bargen & Barker 1936). The relative contributions of inherited or acquired thrombophilia and the inflammatory response to the mechanism of this tendency are unclear. Thrombotic events are more common in active disease, although significant numbers also occur spontaneously, when the disease is in clinical remission (Talbot et al. 1986; Jackson et al. 1997).
Studies looking at the prevalence of specific thrombophilic states such as Antithrombin III deficiency (Jackson et al. 1997; Lake, Stauffer & Stuart 1978; Ciancio et al. 1996; Ghosh et al. 1983), Factor V Leiden mutation (APC Resistance) (Jackson et al. 1997; Probert et al. 1997; Ardizzone et al. 1998; Liebman et al. 1998), anticardiolipin antibodies (Ciancio et al. 1996), Protein C (Wyshock, Caldwell & Crowley 1988; Korsten & Reis 1992) and Protein S deficiencies (Jorens et al. 1990; Aadland et al. 1992) in IBD have been contradictory or equivocal. We had previously found that IBD patients with a history of TED are not more likely to have a laboratory thrombophilic abnormality than those with uncomplicated disease. We also demonstrated that the prevalence of heterogeneous laboratory thrombophilic abnormalities (usually minor) in all IBD patients may be as high as 60%, much higher than the recognized prevalence of TED (Lim, Jones & Gould 1996). We wondered how this would compare with the healthy non-IBD population. We have therefore explored the prevalence of such thrombophilic abnormalities in a group of IBD patients who had no history of TED and compared them with healthy age- and sex-matched controls. abstract_id: PUBMED:17924554 Oral challenge with a methionine load in patients with inflammatory bowel disease: a better test to identify hyperhomocysteinemia. Background: Patients with inflammatory bowel disease have an increased risk of thrombosis. Hyperhomocysteinemia is one of the factors that have been related to thromboembolic complications. Patients with hyperhomocysteinemia and normal fasting homocysteine levels can be identified with an oral methionine load. We studied homocysteine levels in patients with IBD during fasting and after a methionine load to determine the true prevalence of hyperhomocysteinemia and its relation with thrombotic events. Methods: Prospective analysis of homocysteine levels in consecutive patients with IBD during fasting and 6-8 hours after an oral methionine load. Levels of folate and vitamin B12 were also determined. A history of thrombotic events was recorded. Results: Eighty-two patients with IBD, 56 with UC and 26 with CD, were included. Eighteen patients (22%) had hyperhomocysteinemia during fasting. Mean levels of homocysteine after the methionine load were 20.4 +/- 18.1 micromol/l (range, 1-79.7 micromol/l), and 43 patients (52%) had hyperhomocysteinemia (> or =20 micromol/l) after the methionine load. Six patients (7.3%) had a history of thrombosis. The homocysteine levels during fasting and after the methionine load were significantly higher in patients with thrombotic events than in patients without thrombosis (15.5 +/- 3.7 micromol/l vs. 6.6 +/- 6.5 micromol/l; P = 0.002; 44.5 +/- 20.9 micromol/l vs. 18.4 +/- 16.5 micromol/l; P < 0.001, respectively). Conclusions: There is a higher prevalence of hyperhomocysteinemia in IBD patients than previously thought; this can be identified with an oral challenge with a methionine load. Hyperhomocysteinemia increases the risk of thromboembolic complications in patients with IBD. abstract_id: PUBMED:15947551 Prevalence of hyperhomocysteinaemia, activated protein C resistance and prothrombin gene mutation in inflammatory bowel disease. Background: Thromboembolic disease is a significant cause of morbidity and mortality in patients with inflammatory bowel disease (IBD). A hypercoagulable state exists in IBD that may involve many components of haemostasis and is closely linked to the disease pathogenesis.
It has been proposed that microvascular thrombosis and infarction may trigger the underlying inflammatory process. Aim: To determine the prevalence of prothrombotic factors including hyperhomocysteinaemia, activated protein C (APC) resistance and prothrombin gene mutations, as well as vitamin levels, in the local IBD population. Method: A total of 68 patients (37 men and 31 women) attending the IBD clinic were enrolled in the study. Citrated and ethylenediamine tetraacetic acid blood samples were collected from all patients as well as from 30 controls. Homocysteine levels were measured using the IMX immunoassay. APC resistance was measured using an unmodified activated partial thromboplastin time-based clotting assay. Prothrombin mutations were determined using polymerase chain reaction with the HB-gene factor II detection system. Results: Mean homocysteine levels were significantly higher and APC resistance ratios significantly lower in IBD patients compared with controls. No significant difference was detected between patients with ulcerative colitis or Crohn's disease. There was no significant increase in the incidence of prothrombin mutation in IBD patients. IBD patients had lower vitamin B12 and higher serum folate levels than controls. Conclusion: High homocysteine and high serum folate may be associated with low vitamin B12 levels in IBD patients. We did not find any association between a low APC ratio and the factor V Leiden mutation or high factor VIII levels. Both hyperhomocysteinaemia and a low APC ratio may contribute to an increased risk of thromboembolic disease in IBD patients. abstract_id: PUBMED:32265017 Thromboembolic events in inflammatory bowel disease. Goal Of The Study: Inflammatory bowel disease (IBD) is associated with an increased risk of thromboembolic events (TEE) during flares, increasing morbidity and mortality. The aim of our study was to specify the prevalence of TEE in IBD patients and to determine their epidemiological, clinical and outcome characteristics. Material And Methods: This is a retrospective study including all patients with IBD who had a thromboembolic complication confirmed by imaging between January 2012 and December 2018. Results: One hundred patients with IBD were diagnosed during the study period. A TEE occurred in 6 patients (5.9%). These patients (4 women and 2 men) had an average age of 41 years. Five patients had Crohn's disease and one patient had ulcerative colitis. The IBD was active in all patients. Five patients were already hospitalized and under preventive heparin therapy. Patients had deep venous thrombosis of the lower limbs in 3 cases, associated with pulmonary embolism in 1 case, cerebral venous thrombosis in 2 cases and isolated pulmonary embolism in 1 case. Thrombophilia investigations were negative in all patients. The clinical course under medical treatment was favorable in 4 patients and fatal in 2 patients. Conclusion: In our study, the prevalence of TEE in patients with IBD was 5.9%. Thrombosis occurred during the active phase of IBD in all cases. Answer: The available literature suggests that there is a high prevalence of hyperhomocysteinemia among patients with inflammatory bowel disease (IBD), which may be a pathogenic link to thromboembolic complications in this population.
Studies have found that patients with IBD have higher concentrations of plasma total homocysteine (tHcy) compared to healthy controls, and the prevalence of hyperhomocysteinemia is significantly higher in IBD patients (PUBMED:9798965; PUBMED:11569694). Hyperhomocysteinemia is an established risk factor for arterial and venous thrombosis, and its presence in IBD patients may account for their increased thrombotic risk (PUBMED:9798965). The prevalence of thromboembolic complications (TEC) in IBD patients has been reported to be around 3.4%, with deep venous thrombosis of the leg being the most common type of TEC. All cases of TEC in one study occurred during the active phase of IBD (PUBMED:22198895). Another study in the Swiss IBD Cohort found a VTE prevalence of 3.9% overall, with certain factors such as IBD-related surgery and pancolitis being significantly associated with VTE in UC patients (PUBMED:27211077). Furthermore, folate deficiency has been identified as an independent risk factor for developing hyperhomocysteinemia among IBD patients (PUBMED:11569694). Supplementation with folate, cobalamin, and pyridoxine has been shown to decrease mean tHcy levels in IBD patients (PUBMED:9798965). Additionally, an oral methionine load test has been suggested as a better method to identify hyperhomocysteinemia and its relation to thrombotic events in IBD patients (PUBMED:17924554). Genetic factors also play a role in the risk of thromboembolic disease in IBD. A study using whole-exome sequencing and genome-wide genotyping found that approximately 15.4% of IBD patients were at genetically high risk for thromboembolic disease, and this genetic risk was significantly associated with increased TED events (PUBMED:33098885). In summary, the evidence indicates that hyperhomocysteinemia is prevalent in IBD patients and may contribute to the increased risk of thromboembolic complications. This relationship underscores the importance of monitoring homocysteine levels and managing potential deficiencies in vitamins involved in homocysteine metabolism as part of the clinical care for patients with IBD.
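To make the prevalence comparison above concrete, the counts reported in PUBMED:11569694 (17 of 64 IBD patients vs. 4 of 121 controls with hyperhomocysteinemia) can be turned into an odds ratio with an exact test. This is a generic worked example, not code from any of the cited studies.

```python
# Odds ratio and Fisher exact test for the 2x2 table implied by
# PUBMED:11569694: 17/64 IBD patients vs. 4/121 controls.
from scipy.stats import fisher_exact

table = [[17, 64 - 17],    # IBD: hyperhomocysteinemic vs. not
         [4, 121 - 4]]     # controls: hyperhomocysteinemic vs. not
odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.1f}, p = {p_value:.2g}")
# OR = (17*117)/(47*4), roughly 10.6, consistent with the 26.5% vs. 3.3%
# prevalence difference described in the abstract (p < 0.0001 reported).
```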
Instruction: Does subjective social status predict health and change in health status better than objective status? Abstracts: abstract_id: PUBMED:16314589 Does subjective social status predict health and change in health status better than objective status? Objective: To examine, among middle-aged individuals, if subjective socioeconomic status (SES) predicts health status and change in health status over time better than objective SES. Methods: Data are from the Whitehall II study, a prospective study of British civil servants. SES data are drawn from Phase 5 (1997-1999) of the study and health data from Phases 5 and 6 (2000-2001). Physical and mental component scores from the Short Form 36, the General Health Questionnaire, and self-rated health were used to assess health status. Multiple linear regressions were used to examine the relationship between SES and health and change in health status. Results: Complete data were available on 5486 people. Both measures behaved as global measures of SES, and both were significantly associated with health outcomes and with decline in health status over time. However, when the objective and subjective measures of SES were entered simultaneously in the model to predict change in health status, only the latter continued to be significantly associated with health and changes in health. Conclusions: Subjective SES is a better predictor of health status and decline in health status over time in middle-aged adults. These results are discussed in terms of three possible explanations: subjective SES is a more precise measure of social position, the results provide support for the hierarchy-health hypothesis, and the results could be an artifact of common method variance. abstract_id: PUBMED:33146471 Subjective and objective social status: associations with psychosocial predictors and oral health. Objective: While social status and health have been investigated, there is less focus on the effects of objective and subjective social status and psychosocial factors. This study aimed to investigate oral health impacts by subjective social status (SSS) and psychosocial predictors stratified by subjective social status. Methods: A random cross-sectional sample of 45-54-year-old South Australians was surveyed in 2004-05. Oral health impact was assessed using the OHIP-14. Socio-economic status was determined using objective (income) and subjective (MacArthur scale) measures. Psychosocial variables comprised social support, health self-efficacy, coping and affectivity. Results: Responses were collected from 986 persons (response rate = 44.4%). Lower SSS was more frequently observed in the low-income (70.2%) than the high-income group (28.5%). Lower SSS was associated (p<0.05) with lower education, social support, health competence, and coping, but higher negative affect, within income groups. The interaction of SSS and income showed that OHIP was consistently lower at high SSS regardless of higher or lower income, but at low SSS, OHIP was higher (p<0.05) in the lower- than the higher-income group. Conclusions: SSS was associated with income. Their interaction indicated that low SSS in combination with low income was associated with higher oral health impacts. abstract_id: PUBMED:33494488 Associations between Objective and Subjective Housing Status with Individual Mental Health in Guangzhou, China. Housing is an important social determinant of mental health.
However, few studies simultaneously measure objective housing status (i.e., housing tenure, living space, housing conditions, and housing stability) and subjective housing status (i.e., housing satisfaction) and examine their effects on people's mental health (i.e., stress, anxiety, and depression). Thus, using a sample of 1003 participants from a two-stage random sampling survey in Guangzhou, China, this study applies multivariate ordinary least squares regression models to comprehensively explore and compare the associations of objective and subjective housing status with mental health, and then analyzes the moderating effects of subjective housing status on the relationships between objective housing status and mental health. The findings suggest that there are significant differences in people's mental health based on different housing statuses. Subjective housing status better explains the variance in mental health than objective housing status. Also, subjective housing status may partly mitigate the adverse impacts of objective housing disadvantages on some aspects of an individual's mental health. Therefore, housing improvement policies and public health initiatives should be designed based on a comprehensive account of objective and subjective housing characteristics as well as their influences on specific aspects of mental health. abstract_id: PUBMED:34034701 Examining the relation between the subjective and objective social status with health reported needs and health-seeking behaviour in Dande, Angola. Background: Assessing subjective social status (SSS) may be easily accommodated in the context of a Health and Demographic Surveillance System (HDSS). To our knowledge, no prior studies have examined the association of SSS and health in Angola. Subjective socioeconomic measures may provide a rapid assessment of a relevant social status construct, important for studying health inequalities. In this study, we addressed social determinants of health by examining the relationship between subjective and objective social status, reported health needs and healthcare-seeking behaviour. Methods: This research results from a cross-sectional study performed during 2015 in the Dande HDSS, in Angola. We tested the application of the MacArthur scale as a measure of SSS in a developing setting, in a sample of 12,246 households. First, we investigated its relation to objective socioeconomic indicators, and then we explored how subjective and objective social status associate with reported health needs and health-seeking behaviour in the surveyed population. Chi-square tests, ANOVA, and Receiver Operating Characteristic (ROC) curve analyses were computed to test relationships between subjective status ladder quartiles and sociodemographic and household characteristics. Logistic regression was used to examine the influence of subjective perception of status on self-reported health and health-seeking behaviour. Results: Our findings suggest that SSS follows the gradient obtained with more objective socioeconomic indicators. Additionally, we found that subjective perception of status influences health needs reporting and health-seeking behaviour, and its significant effect remained after controlling for the objective socioeconomic markers.
Individuals standing in the second quartile of the social ladder had higher odds of reporting illness, and those in the highest quartiles of the ladder were twice as likely (OR = 2.23, 95% CI = 1.52-3.26) to seek help from formal health services as those at the bottom of the ladder. Conclusions: The MacArthur Scale is a valuable tool to measure SSS in the Dande HDSS, relevant for studying socioeconomic disparities and health inequalities. It is also an easier alternative to traditional measures such as income, which are usually difficult to measure in developing settings. The social perception of status should be considered as a complement to objective indicators when exploring social determinants of health. abstract_id: PUBMED:36992716 Subjective social status and mental health among adolescents in Ethiopia: Evidence from a panel study. Numerous studies have found that a relationship between subjective status and measures of human health persists even after controlling for objective measures, including income, education, and assets. However, few studies have probed how status shapes health among adolescents, particularly those in low- and middle-income settings. This study examines the relative effects of subjective and objective status on mental health among Ethiopian adolescents. Using data from two waves of the Jimma Longitudinal Family Survey of Youth (N = 1,045), this study uses a combination of linear regression and linear mixed-effects models to examine the relationships between objective social status, subjective social status, and mental well-being among adolescents in Ethiopia. Three measures of objective status, including household income, adolescent education, and a multidimensional measure of material wealth, were assessed. Social network and support variables were constructed using factor analysis. A community version of the 10-rung MacArthur ladder was used to assess the subjective socioeconomic status of adolescents. The Self-Reporting Questionnaire was used to assess mental well-being during both waves of the study. The significant effect of higher subjective status on lower non-specific psychological distress (-0.28; 95% CI: -0.43 to -0.14) was not mediated by objective status, material deprivation, or social support covariates. The observed relationship between status and mental well-being was consistent across successive study waves. Among a cohort of adolescents in Jimma, Ethiopia, several measures of objective status are associated with subjective status. However, akin to research among adults, the findings of our study suggest that the relationship between adolescent subjective social status and mental health persists above and beyond the effects of objective status. Future research is needed on the factors, environments, and experiences that inform adolescent perceptions of status and well-being over time. abstract_id: PUBMED:36011511 Subjective Social Status, Area Deprivation, and Gender Differences in Health among Chinese Older People. This study examined the gender differences in the main and interactive effects of subjective social status and area deprivation on health among older adults in Hong Kong. Data for this study came from the baseline of the MrOS and MsOS studies, including 4000 Chinese men and women ≥ 65 in Hong Kong. Subjective social status was assessed using the MacArthur Scale of Subjective Social Status.
Our results reaffirm that subjective social status is an independent indicator of health after adjusting for objective SES measures (e.g., education and income). Perceived rank on the community ladder was more closely related to health among older people than was the society ladder, particularly for women. Although area-level social deprivation was not significantly associated with the health of older people, it may moderate the effect of subjective social status on health. Women with a lower perceived status in the community were more likely to experience depressive symptoms but showed better grip strength when living in more deprived neighborhoods. The findings suggest that subjective social status provides important information about the physical and mental health of the older population. Policymakers may implement interventions to enhance the subjective social status of older adults. Given the greater contribution of relative status in the community to the health of women, these policies and interventions should aim to improve women's perceived status in the community. abstract_id: PUBMED:30886566 It's "the Joneses": the influence of objective and subjective socioeconomic status on subjective perceptions of aging. Positive subjective perceptions of aging predict various well-being, physical health, mental health, and longevity outcomes. Thus, it is important to consider what factors contribute to their formation. Socioeconomic status (SES) has been hypothesized to be one such factor, but past research has been mixed. We propose that subjective assessments of SES may better predict subjective perceptions of aging than traditional objective measures (e.g., income and education). The purpose of this study was to examine the effects of objective and subjective indicators of SES on subjective perceptions of aging. Participants (n = 296) from the Mindfulness and Anticipatory Coping Everyday study were recruited with a Human Intelligence Task on Amazon's Mechanical Turk. Participants reported on their SES (i.e., income, education, and subjective social status) and subjective perceptions of aging (i.e., attitudes toward own aging, subjective age, and awareness of age-related gains and losses). Data were analyzed via hierarchical multiple regression with demographic and health variables entered first, followed by income and education, then subjective social status. Results demonstrated that although objective measures of SES did not contribute to predicting subjective perceptions of aging, those who rated themselves subjectively higher in their community social standing were more likely to possess positive aging attitudes, younger subjective ages, more awareness of age-related gains, and less awareness of age-related losses. These findings suggest that perceptions of doing better than one's neighbors (i.e., "the Joneses") may matter more for understanding subjective perceptions of aging than objective indicators of social status. abstract_id: PUBMED:34837775 Childhood maltreatment, subjective social status, and health disparities in bereavement. Background: Spousal bereavement can lead to adverse health outcomes; however, not all widow(er)s experience the same degree of health problems. Thus, it is important to understand the contribution of disparities (e.g., childhood maltreatment and subjective social status) that may underlie adverse health outcomes that arise following bereavement.
Methods: We collected data from 130 spousally bereaved individuals at three time points (3 months, 4 months, and 6 months post-loss). Using mixed models, we assessed the interaction of childhood maltreatment, subjective social status, and time to predict changes in proinflammatory cytokine production, depressive symptoms, and grief symptoms. Results: We found a significant interaction between childhood maltreatment, subjective social status, and time predicting proinflammatory cytokine production (beta > -0.01, p = 0.048), depressive symptoms (beta = 0.008, p = .010), and grief symptoms (beta = 0.001, p = .001). Conclusion: This study highlights the role of disparities related to childhood maltreatment and subjective social status in adverse health outcomes following spousal bereavement. abstract_id: PUBMED:34717282 Subjective social status and physical health: The role of negative affect and reappraisal. Rationale: Having low subjective social status is associated with an array of negative health outcomes. However, the mechanisms linking subjective social status to health are not yet clear. One candidate mechanism is negative affect. Researchers have proposed that having low subjective social status may be associated with higher levels of negative affect, and these higher levels of negative affect may be associated with poor health. However, research demonstrating that status-related negative affect is prospectively associated with health in humans is limited. Objective: We examined whether negative affect prospectively mediates the relationship between subjective social status and physical health. In addition, we tested whether reappraisal, an affect regulation strategy used to downregulate negative affect, moderates the links among subjective social status, negative affect, and physical health. Method: We used two-wave longitudinal data from the Midlife in the United States (MIDUS) project to conduct a series of regression, mediation, and moderated mediation analyses to examine the relationships among subjective social status, negative affect, reappraisal, and health (i.e., four outcomes: chronic conditions, somatic symptoms, self-reported health, and mortality). Results: Negative affect mediates the relationship between status and change in morbidity (n = 3289; i.e., change in number of chronic conditions, somatic symptoms, and self-reported health) and mortality (n = 4953), such that subjective social status is inversely associated with negative affect, and negative affect is positively associated with poor health. Reappraisal moderates each of these relationships, such that individuals who are low on subjective social status and have high scores on a novel measure of reappraisal experience lower levels of negative affect and better health than individuals who are low on subjective social status but who have low scores on this measure. Conclusion: These results have important implications for our understanding of subjective social status and how it relates to physical health. abstract_id: PUBMED:35295744 Trajectories of health-related quality of life by change pattern of objective and subjective social status. Long-term and cumulative social experiences influence an individual's objective and subjective social status. Social determinants of health are more effectively investigated by longitudinal rather than cross-sectional studies. The primary focus of this study was the prospective effect of socioeconomic transition on health-related quality of life trajectories.
The study population were adults over 18 years of age who responded in all nine waves of the Korea Health Panel (2009-2017). Data were analyzed by group-based trajectory modeling to identify health trajectories, and group-based multi-trajectory modeling to investigate combined change patterns of objective and subjective social status (i.e., multi-SES trajectories). To predict the effects of underlying socioeconomic measures on health trajectory group membership, we included these time-stable covariates in trajectory modeling and estimated the risk of belonging to each trajectory based on the measures. The health-related quality of life trajectories showed three patterns during the period 2013 to 2017; 13.7% of individuals had a low and declining health trajectory and the others had a higher stable health trajectory. Four types of multi-SES trajectory were derived during the period 2009 to 2013; the richer had a steeper income slope while there were slight changes in subjective social status among all groups. These combined longitudinal SES patterns in 2009-2013 were strong predictors of subsequent health trajectory group membership in 2013-2017. These findings indicate that rich countries, such as South Korea, may encounter growing income inequality, where individuals become entrenched in income disparity that pins down their perceptions of social position. Over time this rigid social structure will widen the gap in health-related quality of life. Answer: Yes, subjective social status (SSS) appears to predict health and change in health status better than objective status. Studies have shown that both subjective and objective measures of socioeconomic status (SES) are significantly associated with health outcomes and with decline in health status over time. However, when both measures are included in predictive models, subjective SES often continues to be significantly associated with health and changes in health, while objective SES may not (PUBMED:16314589). This suggests that subjective SES may be a more precise measure of social position and could provide support for the hierarchy-health hypothesis. Further research supports the notion that subjective social status is associated with various health outcomes. For instance, in a study on oral health impacts, subjective social status was found to be associated with income, and the interaction between SSS and income indicated that low SSS combined with low income was associated with higher oral health impacts (PUBMED:33146471). Similarly, subjective housing status was found to better explain variances in mental health than objective housing status, and it may partly mitigate the adverse impacts of objective housing disadvantages on mental health (PUBMED:33494488). In Angola, subjective social status was found to influence health needs reporting and health-seeking behavior, with its significant effect remaining after controlling for objective socioeconomic markers (PUBMED:34034701). Among adolescents in Ethiopia, higher subjective status was significantly associated with reports of fewer non-specific psychological distress, and this relationship persisted above and beyond the effects of objective status (PUBMED:36992716). Gender differences in the effects of subjective social status on health were also observed among older adults in Hong Kong, with subjective social status being an independent indicator of health after adjusting for objective SES measures (PUBMED:36011511). 
Additionally, subjective perceptions of aging were influenced more by subjective social status than by objective measures such as income and education (PUBMED:30886566). In the context of spousal bereavement, disparities related to childhood maltreatment and subjective social status were highlighted as factors contributing to adverse health outcomes (PUBMED:34837775). Negative affect was found to mediate the relationship between subjective social status and physical health, and reappraisal was identified as a moderating factor in this relationship (PUBMED:34717282). Finally, longitudinal studies have indicated that socioeconomic transitions, as reflected in changes in both objective and subjective social status, can predict health-related quality of life trajectories, with subjective status often being a strong predictor of subsequent health trajectory group membership (PUBMED:35295744). These findings collectively suggest that subjective social status, which reflects an individual's perceived social position relative to others, may have a more direct and persistent impact on health outcomes than objective measures of socioeconomic status, such as income and education. This could be due to the psychological and social implications of perceived status, which may influence stress levels, access to resources, and health behaviors, ultimately affecting an individual's health and well-being.
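For readers interested in the mechanics, the "simultaneous entry" analysis behind the PUBMED:16314589 finding has a simple regression skeleton: regress follow-up health on both SES measures at once, controlling for baseline health so the coefficients describe change. The sketch below uses hypothetical variable names and simulated data; it is not the Whitehall II analysis code.

```python
# Sketch of entering subjective and objective SES simultaneously in a
# model of change in health status. All names and data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "subjective_ses": rng.integers(1, 11, n),  # e.g., a 10-rung MacArthur ladder
    "objective_ses": rng.integers(1, 7, n),    # e.g., an employment-grade scale
    "baseline_health": rng.normal(50, 10, n),  # e.g., an SF-36 component score
})
df["followup_health"] = (0.6 * df["baseline_health"]
                         + 1.2 * df["subjective_ses"]
                         + rng.normal(0, 8, n))

# Adjusting for baseline health makes the SES coefficients describe change;
# with both SES terms present, each is estimated net of the other.
model = smf.ols(
    "followup_health ~ subjective_ses + objective_ses + baseline_health",
    data=df,
).fit()
print(model.summary().tables[1])
```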
Instruction: Does the ethnicity of teenage children influence oral self perception and prevalence of dental disease? Abstracts: abstract_id: PUBMED:9807920 Does the ethnicity of teenage children influence oral self perception and prevalence of dental disease? Aim: The primary aim was to evaluate the effect of ethnicity, social deprivation and oral health on oral self perceptions of 14-15-year-old Asians and Whites. A secondary aim was to assess the influence of ethnicity and social deprivation on oral treatment need in the same sample. Design: A cross-sectional epidemiological study. Data Source: A stratified, random sample of 408 14-15-year-old Asian and White children from schools in Manchester. Method: Information was collected on oral self perceptions using a questionnaire and on oral treatment need with a clinical examination. Results: Multivariate data analysis revealed that oral treatment need, but not ethnicity or social deprivation, was an important predictive variable with respect to oral self perceptions. Ethnicity was the only variable to influence periodontal treatment need. Social deprivation influenced the level of untreated caries. Conclusions: 1. Socially deprived children have higher caries levels than their more affluent counterparts and this is evident regardless of ethnic background. 2. Although Asian 14-15-year-old children have a higher periodontal treatment need than Whites, there was no ethnic influence on how they perceive their oral health. 3. Oral treatment need is an important factor with respect to oral self perceptions. abstract_id: PUBMED:33176779 Perception and belief in oral health among Karen ethnic group living along Thai-Myanmar border, Thailand. Background: Utilization of oral health services has been low among rural populations in Thailand, especially for minority ethnicity populations living along the borders. The aim of this study was to increase understanding of the determinants of the underutilization of oral health services among these populations. Methods: A qualitative study using in-depth interview and semi-structured interview was conducted among participants of Karen ethnicity living in Mae Tan Sub-district, Thasongyang District, Tak Province, Thailand. The interviews focused on exploring the participants' perceptions of oral health problems, oral health and hygiene, and oral health-seeking behaviors. The verbatim-transcribed interviews were analyzed using thematic analysis. Results: A total of 101 participants (50 adults and 51 children) with a Karen ethnic background took part in the interviews. Most participants could not identify oral health problems and did not perceive dental disease as a problem unless there was severe pain that could not be relieved by painkillers. The Karen ethnic community worked as subsistence farmers, and their busy daily activities consisted of farming, going to the forest to hunt and gather, performing housework, and taking care of their children. Dental health was given a lower priority compared with general health. The perceived value of primary teeth was low, which was identified as an underlying factor resulting in delayed oral health care seeking among this population. The participants had relied on self-care throughout their lifetimes, using either traditional medicines or modern painkillers to relieve toothaches. Fear of dental procedures among children was also described as a barrier to seeking dental health care. 
Conclusions: This study found that the lifestyle and traditions of Karen people living in this area influence their oral health care and hygiene activities as well as their health-seeking behaviors. Further research should emphasize how to improve oral health promotion by providing necessary services and health education appropriately to Karen ethnic populations living along the Thai-Myanmar border. abstract_id: PUBMED:17263880 Parental perceptions of children's oral health: the Early Childhood Oral Health Impact Scale (ECOHIS). Background: Dental disease and treatment experience can negatively affect the oral health related quality of life (OHRQL) of preschool aged children and their caregivers. Currently no valid and reliable instrument is available to measure these negative influences in very young children. The objective of this research was to develop the Early Childhood Oral Health Impact Scale (ECOHIS) to measure the OHRQL of preschool children and their families. Methods: Twenty-two health professionals evaluated a pool of 45 items that assess the impact of oral health problems on 6-14-year-old children and their families. The health professionals identified 36 items as relevant to preschool children. Thirty parents rated the importance of these 36 items to preschool children; 13 (9 child and 4 family) items were considered important. The 13-item ECOHIS was administered to 295 parents of 5-year-old children to assess construct validity and internal consistency reliability (using Cronbach's alpha). Test-retest reliability was evaluated among another sample of parents (N = 46) using the intraclass correlation coefficient (ICC). Results: ECOHIS scores on the child and parent sections indicating worse quality of life were significantly associated with fair or poor parental ratings of their child's general and oral health, and the presence of dental disease in the child. Cronbach's alphas for the child and family sections were 0.91 and 0.95 respectively, and the ICC for test-retest reliability was 0.84. Conclusion: The ECOHIS performed well in assessing OHRQL among children and their families. Studies in other populations are needed to further establish the instrument's technical properties. abstract_id: PUBMED:27392061 Oral health status, supply of oral and dental care among Senegalese children: a review of available data Introduction: Oral and dental diseases are a major public health problem due to their high prevalence and their impact on general health. Objective: This article reviews the oral health status and oral health care supply and uptake in children. Method: The available literature were reviewed. The Medline and Embase databases, and the Senegal Ministry of Health and Social Action and National Agency of Statistics and Demography (ANSD) websites were consulted. A total of 7 articles, 3 PhD theses and 8 reports were selected for analysis. The review was supplemented by interviews. Results: Overall, children have poor oral health status. Health care supply is insufficient (1 dentist/38,000 residents) and poorly distributed (mostly in the capital) with unequal oral health care uptake. Conclusion: This systematic review of the literature revealed a lack of reliable data on the oral health of children. It also concluded on a generally poor oral health status. abstract_id: PUBMED:32417855 Prevalence of Self-medication Practices for Oral Health Problems among Dental Patients in a dental college: A Descriptive Cross-sectional Study. 
Introduction: Self-medication means the use of medications to treat a disease on one's own, without consulting any healthcare professional. At times, self-medication can be useful if practiced correctly, saving time and money, whereas disadvantages often arise from the lack of evaluation by trained medical professionals and the delay in effective treatment, and it can result in unnecessary expenses and drug dependence. This study was conducted to assess self-medication behavior and its associated factors among patients visiting a dental hospital in Kathmandu. Methods: A hospital-based, cross-sectional study was conducted on 265 patients in Kantipur Dental College from December 2019 to January 2020 among the patients attending the dental Outpatient Department. Ethical clearance was obtained from the Institutional Review Committee of Kantipur Dental College. A convenience sampling technique was used. Proformas were prepared in English, translated to Nepali and re-translated to English by the back-translation method. Data entry was done in Microsoft Excel and analysis in SPSS 20. Descriptive statistics were computed. Results: The prevalence of self-medication practice was found to be 62.6% (166 participants). Of the total participants, 99 (59.6%) consumed medicines for a few days only, and the most common triggering factor was toothache, reported by 101 (60.8%) participants. The most common reason for self-medication was a previous experience of treating similar illnesses. Conclusions: The prevalence of self-medication was found to be low as compared to studies done in similar settings. Self-medication practice is a sensitive issue that has not been given the required consideration. abstract_id: PUBMED:36260498 Dental Caries, Traumatic Dental Injuries and Gingivitis among Street-Children in Kano, Nigeria. Background: Street-children (Almajirai) constitute a significant proportion of the adolescent population in northern Nigeria. They face health challenges, especially in oral health, from being inadequately protected, supervised or directed by responsible adults. Objectives: The objective of the study was to determine the prevalence of dental caries, dental trauma, gingivitis and oral hygiene scores in street-children. Methodology: Three hundred and sixty-six school-aged children were selected through a multi-stage sampling of 10 to 12-year-old children from four modern and traditional qur'anic schools, and examined for oral conditions using the WHO protocol. The DMFT index, the modified Ellis criteria, the Gingival Bleeding Index and the Oral Hygiene Index-Simplified (OHI-S) were used to assess dental caries, traumatic dental injuries, gingivitis and oral hygiene scores, respectively. Chi-squared tests and odds ratios were used to compare differences and test associations between Almajirai groups. Data analysis was carried out using the Statistical Package for the Social Sciences (SPSS version 20). Statistical significance was considered when p < 0.05. Results: One hundred and sixty-eight (45.9%) traditional Almajirai and 198 (59.4%) modern-type Almajirai participated in the study. Their mean age was 10.8 (±0.8) years. The prevalence of dental caries was 25.4%, with mean DMFT and SiC scores of 0.6 (±1.3) and 1.7, respectively. DMFT scores ranged from 0 to 12. The prevalence of traumatic dental injuries and gingivitis were 8.7% and 82.2%, respectively, while the OHI-S score was 3.0.
The odds for traumatic dental injuries were 41% less in the modern-type Almajirai (p=0.02); the modern-type Almajirai were twice as likely to develop gingivitis (aOR = 2.0, p = 0.01). OHI-S scores showed poorer oral hygiene levels in the modern Almajirai (p=0.02). Conclusion: Dental caries, traumatic dental injuries, gingivitis and poor oral hygiene were prevalent among street-children in Kano. There is a high level of untreated dental disease in the study population. abstract_id: PUBMED:24690235 Self-efficacy and self-rated oral health among pregnant aboriginal Australian women. Background: Self-efficacy plays an important role in oral health-related behaviours. Little is known about associations between self-efficacy and subjective oral health among populations at heightened risk of dental disease. This study aimed to determine if low self-efficacy was associated with poor self-rated oral health after adjusting for confounding among a convenience sample of pregnant women. Methods: We used self-reported data from 446 Australian women pregnant with an Aboriginal child (age range 14-43 years) to evaluate self-rated oral health, self-efficacy and socio-demographic, psychosocial, social cognitive and risk factors. Hierarchical entry of explanatory variables into logistic regression models estimated prevalence odds ratios (POR) and 95% confidence intervals (95% CI) for fair or poor self-rated oral health. Results: In an unadjusted model, those with low self-efficacy had 2.40 times the odds of rating their oral health as 'fair' or 'poor' (95% CI 1.54-3.74). Addition of socio-demographic factors attenuated the effect of low self-efficacy on poor self-rated oral health by 10 percent (POR 2.19, 95% CI 1.37-3.51). Addition of the psychosocial factors attenuated the odds by 17 percent (POR 2.07, 95% CI 1.28-3.36), while addition of the social cognitive variable fatalism increased the odds by 1 percent (POR 2.42, 95% CI 1.55-3.78). Inclusion of the behavioural risk factor 'not brushing previous day' attenuated the odds by 15 percent (POR 2.11, 95% CI 1.32-3.36). In the final model, which included all covariates, the odds were attenuated by 32 percent (POR 1.80, 95% CI 1.05, 3.08). Conclusions: Low self-efficacy persisted as a risk indicator for poor self-rated oral health after adjusting for confounding among this vulnerable population. abstract_id: PUBMED:30511465 Children's oral health in Australia: The past decade's research agenda. Objectives: Research into dental diseases in children is no longer primarily focussed on biological and dietary factors, with social determinants an important influence. The aim of this study was to systematically review the focus of research on influences on children's oral health in Australia. Methods: Relevant databases were searched for English language publications between 2008 and 2017. Included studies were original research articles with a focus on influences of oral health for Australian children. A thematic synthesis approach was adopted, and included studies were coded using the Fisher-Owens et al (2007) framework which included child-, family- and community-level influences. Results: A total of 252 studies were included in this review. The focus of the included studies was most frequently on child-level influences (n = 200, 79.4%), followed by community-level (n = 175, 69.4%) and family-level (n = 86, 38.1%) influences.
Overall, the highest proportion of included studies was on the child-level influences of physical attributes (n = 86, 34.1%) and child maturing (n = 83, 32.9%). The influences with the least number of studies were social environment (n = 2, 0.8%), social support (n = 3, 1.2%) and physical safety at both the family and community levels (n = 3, 1.2% and n = 1, 0.4%, respectively). Discussion: Monitoring child-level influences of oral health has consistently occurred in national surveys and epidemiological studies. Family-level influences are comparatively underinvestigated and should be a focus of future research in Australia. This study has helped to identify gaps in the literature and can guide future studies. abstract_id: PUBMED:32220180 Oral Health Management of Children during the Epidemic Period of Coronavirus Disease 2019 Coronavirus disease 2019 (COVID-19) is becoming a major public health event affecting China and even the whole world. During the epidemic period of coronavirus disease, appropriate oral health management and disease prevention of children is very important for children's oral and general health. In order to prevent the occurrence of cross-infection and epidemic spreading of COVID-19 during dental practice, the recommendations to parents include not only training children to maintain hand hygiene at home, exercising appropriately, and strengthening physical resistance, but also helping children develop good oral and dietary habits such as effective brushing and flossing to avoid oral diseases and emergencies. If non-emergency oral situations occur, parents could assist their child to take home-based care such as rinsing to relieve the symptoms. When oral emergencies such as acute pulpitis, periapical periodontitis, dental trauma, oral and maxillofacial infections happen, parents and children should visit a dental clinic promptly with correct personal protection. During the epidemic period, children's oral emergencies should be treated in accordance with current guidelines and control of COVID-19. abstract_id: PUBMED:29936657 Interdisciplinary Community-Based Oral Health Program for Women and Children at WIC. Objectives To evaluate the Women, Infants, and Children (WIC) Oral Health Program in a county in Florida. Methods The non-traditional interdisciplinary program of the current study was designed to reach at-risk populations with untreated dental diseases and limited access to care; it provides oral health education, dental screenings, preventive dental services, and referrals for women, children, and families at WIC offices. We evaluated the health status of patients enrolled in the program and the services provided. Results From 2013 to 2016, the program provided dental screenings for 576 children and 180 women. Caries prevalence for 3-5-year-olds was 46.0%. Only 6.6% (12/114) of pregnant women were eligible for comprehensive dental care under Medicaid (< 21 years). Further, 71.2% (47/66) of all pregnant women had unmet dental care needs. Conclusions for Practice Our results suggested that many children and women had untreated dental diseases and needed preventive services and dental care. Also, many pregnant women were not covered by Medicaid. This program demonstrates that collaboration with the WIC program can improve access to oral health services for underserved populations. Answer: The ethnicity of teenage children does influence certain aspects of oral health, but not necessarily their oral self-perception.
A study conducted on 14-15-year-old Asian and White children in Manchester found that while Asian children had a higher periodontal treatment need than Whites, ethnicity did not influence how they perceived their oral health. Social deprivation, rather than ethnicity, influenced the level of untreated caries. Oral treatment need was identified as an important factor with respect to oral self-perceptions, indicating that the actual need for dental care had a more significant impact on how teenagers perceived their oral health than their ethnicity (PUBMED:9807920). In contrast, other studies have shown that ethnicity and cultural background can significantly affect oral health behaviors and beliefs, which could indirectly influence oral health status and self-perception. For instance, among the Karen ethnic group living along the Thai-Myanmar border, oral health problems were not perceived as a significant issue unless severe pain was present, and dental health was given lower priority compared to general health. This suggests that cultural practices and lifestyle can influence oral health care and hygiene activities, as well as health-seeking behaviors, which may in turn affect the prevalence of dental disease (PUBMED:33176779). Overall, while ethnicity may not directly influence oral self-perception among teenagers, it can be an important factor in determining periodontal treatment needs and can influence oral health behaviors and beliefs, which may ultimately impact the prevalence of dental disease.
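As an aside on the psychometrics cited in this section: the reliability figures reported for instruments such as the ECOHIS (Cronbach's alphas of 0.91 and 0.95 and a test-retest ICC of 0.84, PUBMED:17263880) come from standard internal-consistency arithmetic. Below is a minimal sketch of how Cronbach's alpha is computed from an item-score matrix; the responses are hypothetical placeholders, not data from any cited study:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # sample variance per item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 13-item, 0-4 Likert responses for six parents (ECOHIS-style).
scores = np.array([
    [0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0],
    [2, 2, 1, 2, 2, 1, 2, 2, 1, 2, 2, 1, 2],
    [1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1],
    [3, 3, 2, 3, 3, 2, 3, 3, 2, 3, 3, 2, 3],
    [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0],
    [4, 3, 4, 4, 3, 4, 4, 3, 4, 4, 3, 4, 4],
])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```

Values above roughly 0.8-0.9, as in the ECOHIS report, are conventionally read as good internal consistency.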
Instruction: Does ketamine affect intraoperative electrophysiological monitoring in children undergoing selective posterior rhizotomy? Abstracts: abstract_id: PUBMED:18768043 Does ketamine affect intraoperative electrophysiological monitoring in children undergoing selective posterior rhizotomy? Objective: Direct dorsal rootlet stimulation with intraoperative electrophysiological monitoring is an adjunct to clinical evaluation during selective posterior rhizotomy. The purpose of this study was to evaluate the impact of ketamine on intraoperative electrophysiological monitoring during selective posterior rhizotomy. Specifically, we sought to determine if low dose ketamine given as part of the anesthesia was associated with changes in intraoperative electrophysiological monitoring in patients who underwent selective posterior rhizotomy. Methods: A retrospective cohort study was conducted using anesthetic records and electrophysiological records of 32 children who had intraoperative electrophysiological monitoring during selective posterior rhizotomy under general anesthesia. Administration and dosage of ketamine preceding the stimulation of dorsal roots was determined from the anesthetic record. A pediatric neurologist, blinded to patient and to ketamine exposure, evaluated different electrophysiological criteria. Results: Eight children received ketamine and 24 did not receive it. The mean dose of ketamine was 0.18 mg x kg(-1) (SD: 0.04). We did not find any statistically significant difference in intraoperative electrophysiological response between the ketamine and the control groups. However, we noted some trends: Administration of ketamine preceding the stimulation of dorsal roots was associated with a lower maximal threshold (2.7 mA vs 3.5 mA, P = 0.663) and lower root thresholds compared with children who did not receive ketamine. In addition, the train response following delivery of the suprastimulation tended to last longer with the presence of ketamine. Conclusions: Administration of low dose ketamine preceding the stimulation of dorsal roots during selective posterior rhizotomy might be associated with lower maximal thresholds and a more sustained train response following stimulation. Physicians should be aware of this finding in order to avoid misinterpreting intraoperative electrophysiological monitoring. abstract_id: PUBMED:33763733 Feasibility and diagnostic accuracy of intra-operative monitoring of motor evoked potentials in children <2 years of age undergoing tethered cord surgery: results in 100 children. Objective: This study documents the monitorability using different anesthesia regimes and accuracy of muscle motor evoked potentials (mMEPs) in children ≤2 years of age undergoing tethered cord surgery (TCS). Methods: Intraoperative mMEP monitoring was attempted in 100 consecutive children, ≤2 years of age, undergoing TCS. MEP monitoring was done under 4 different anesthetic regimes: total intravenous anesthesia (TIVA); balanced anesthesia with sevoflurane and ketamine; balanced anesthesia with isoflurane and ketamine; and balanced anesthesia with sevoflurane. Factors analyzed for their effect on monitorability were: age, neurological deficits, type of anesthesia, and the number of pulses used for stimulation. Results: Baseline mMEPs were obtained in 87% of children. Monitorability of mMEPs was similar in children ≤1 year and 1-2 years of age (85.7% and 87.5%).
In multivariate analysis, anesthesia regime was the only significant factor predicting the presence of baseline mMEPs. Children undergoing TIVA (p=0.02) or balanced anesthesia with a combination of propofol, sevoflurane, and ketamine (p=0.05) were most likely to have baseline mMEPs. mMEPs had a sensitivity of 97.4%, specificity of 96.4%, negative predictive value of 98.2% and accuracy of 96.8%. Conclusions: Baseline mMEPs were obtained in >85% of children ≤2 years of age, including those who had motor deficits. TIVA and balanced anesthesia with sevoflurane and ketamine are ideal for mMEP monitoring. mMEPs have a high accuracy, although false positive and false negative results can occasionally be experienced. abstract_id: PUBMED:17426638 Intraoperative monitoring of motor-evoked potentials in children undergoing spinal surgery. Study Design: Clinical case series. Objective: To study the combined use of modifications of stimulation methods and adjustments of anesthetic regimens on the reliability of motor-evoked potential (MEP) monitoring in a large group of children undergoing spinal surgery. Summary Of Background Data: Monitoring of MEPs is advocated during spinal surgery, but systematic data from children are sparse. Methods: A total of 134 consecutive procedures in 108 children <18 years of age were analyzed. MEPs were elicited by transcranial electrical stimulation (TES) and supplemented by temporal and spatial facilitation. The standard anesthesia regimen consisted of propofol, nitrous oxide, and remifentanil. Propofol was replaced with ketamine if no reliable MEPs could be recorded. In children <6 years of age, a ketamine-based anesthesia was used. Results: With temporal facilitation alone, reliable MEPs were obtained in 78% (105 of 134) of the procedures and, if combined with spatial facilitation, in 96% (129 of 134) of the procedures. Reliable MEPs were documented in 98% (111 of 113) of children >6 years and in 86% (18 of 21) of children <6 years of age. Conclusions: Combining spatial facilitation with a TES protocol improved monitoring of corticospinal motor pathways during spinal surgery in children. A ketamine-based anesthetic technique was preferred in children <6 years of age. abstract_id: PUBMED:38152535 Effect of Ketamine and Dexmedetomidine as Adjuvant to Total Intravenous Anesthesia on Intraoperative Cranial Nerve Monitoring in the Patients Undergoing Posterior Fossa Craniotomies-A Randomized Quadruple Blind Placebo-Controlled Study. Objectives Total intravenous anesthesia (TIVA) is used during surgery with intraoperative neurophysiological monitoring. Addition of an adjuvant may minimize suppression of potentials by reducing doses of propofol. We studied the effect of addition of ketamine or dexmedetomidine to propofol-fentanyl-based TIVA on corticobulbar motor evoked potential (CoMEP) in patients undergoing posterior fossa surgeries. Materials and Methods Forty-two patients were assigned to three groups (n = 14 each): Group S, saline; Group D, dexmedetomidine (0.25 μg/kg/h); and Group K, ketamine (0.25 mg/kg/h). Patients received propofol and fentanyl infusions along with study drugs. CoMEPs were recorded from muscles innervated by cranial nerves bilaterally at predefined intervals (Tbaseline, T2, T3, T4, and T5). Effect on amplitude and latency of CoMEPs was assessed. Results A significant fall in CoMEP amplitude was observed across all analyzed muscles at times T4 and T5 in the saline and dexmedetomidine groups as compared with the ketamine group, p-value less than 0.05.
A significant increase in latency was observed at T4 and T5 among groups (p-values: D vs. K = 0.239, D vs. S = 0.123, and K vs. S = 0.001). Conclusion Both ketamine and dexmedetomidine allow effective recording of CoMEPs. Ketamine emerges as the better agent, especially when a prolonged surgical duration is expected, as even propofol-fentanyl-based TIVA adversely affects CoMEPs when used for a long duration. abstract_id: PUBMED:1921209 Intraoperative computed EEG monitoring for evaluating the adequacy of general anesthesia The article analyses the possibilities of computed EEG monitoring for evaluation of the magnitude and adequacy of the modern methods of general anesthesia: NLA, NLA with halothane inhalation, NLA including Kalipsol bolus administration. An original coefficient, called the adequacy index (AI), was found, which reflects the percentage contribution of deep rhythms (delta and theta) to the total power of the EEG in the range of 0 to 20 Hz. It is shown that the dynamics of this index correlates with the degree of the blocking of entry of sensory nociceptive information into the CNS. A lower boundary of AI, 70%, was determined, below which the generally accepted signs of anesthesia inadequacy appear: hyperactivity develops, a shift of pH in the acid direction occurs, and the base deficiency increases. It is suggested that determination of AI be used as a highly sensitive and operative test for routine monitoring of the patient's intraoperative condition and express diagnosis of inefficacy of anesthesia. abstract_id: PUBMED:33642299 KetaDex: A Saviour for Intraoperative Multimodal Neurophysiological Monitoring in Complex Neurosurgeries. Background: The anaesthetic mandate for a combination of electrocorticography (ECoG) and subcortical motor evoked potential mapping (SCMEPM) substantially differs from that required for each of these monitors in isolation. There is no current consensus defining the anaesthetic management for intraoperative multimodal monitoring combining these two modalities. Case Description: We report our experience of anaesthetising a drug-resistant epileptic patient for craniotomy and resection of his frontal gliotic lesion. We propose a novel anaesthetic technique to cater to the multiple goals of this surgery, such as optimal neuromonitoring under adequate depth of anaesthesia. We used a balanced anaesthesia technique. Continuous intravenous (IV) infusions of Inj. Dexmedetomidine of 1.5 mcg/kg/hour and Inj. Ketamine at 0.5 mg/kg/hour were used to supplement inhalational anaesthetic titrated to a MAC up to 0.5. Neuromuscular blockade was avoided after the initial tracheal intubation dose. Conclusion: To the best of our knowledge, this is the first case reporting the safety and efficacy of a balanced anaesthetic technique for concurrent ECoG and SCMEPM. Successful intraoperative ECoG and SCMEPM monitoring with absent intraoperative awareness confirmed the safety of our anaesthetic technique. Through this, a composite of patient safety, surgeon satisfaction and adequate intraoperative monitoring could be achieved. abstract_id: PUBMED:10752314 Intraoperative motor evoked potential monitoring: a review of 115 cases We reviewed our experience of intraoperative motor evoked potential (MEP) monitoring for 115 operations on the spine or spinal cord.
We observed compound muscle action potentials from bilateral anterior tibial muscles by electrical transcranial stimulation of the motor cortex under general anesthesia induced and maintained with intravenous anesthetics (ketamine, propofol, or droperidol), fentanyl, and 50% nitrous oxide. Partial neuromuscular blockade was obtained with continuous infusion of vecuronium. MEPs were recorded bilaterally in 91 cases (79%) and unilaterally in 18 cases (16%). Postoperative deterioration of motor function was observed in 2 cases, and the amplitude of MEPs decreased by more than 50% of control values in both cases. Intraoperative monitoring of MEPs might be a reliable indicator of spinal cord motor function. abstract_id: PUBMED:25770644 The effect of KETODEX on the incidence and severity of emergence agitation in children undergoing adenotonsillectomy using sevoflurane-based anesthesia. Background: Postoperative emergence agitation (EA) is a common problem often observed in children undergoing general anesthesia. The purpose of this study was to evaluate whether a bolus of intraoperative low-dose ketamine followed by dexmedetomidine i.v. could reduce the incidence of EA in children undergoing adenotonsillectomy following sevoflurane-based anesthesia. Methods: A total of 92 children undergoing adenotonsillectomy, aged 3-7 years, were randomly allocated to receive either low-dose ketamine 0.15 mg/kg followed by dexmedetomidine 0.3 μg/kg i.v. (KETODEX, n=45) or volume-matched normal saline (Control, n=47), about 10 min before the end of surgery. Anesthesia was induced and maintained with sevoflurane. Postoperative pain and EA were assessed with objective pain score (OPS) and the Pediatric Anesthesia Emergence Delirium scale (PAED), respectively. EA was defined as a PAED≥10 points. Recovery profile and postoperative complications were recorded. Result: The incidence and severity of EA were lower in the KETODEX group than in controls (11% vs. 47% and 2% vs. 13%, respectively; P<0.05). The frequency of fentanyl rescue was lower in the KETODEX group than in controls (13.3 vs. 38.3%, P<0.05). Heart rate during extubation was significantly higher in the control group compared with children who received KETODEX (P<0.05). The incidence of postoperative pain was significantly less in the KETODEX group (15.5% vs. 63.8%, P<0.05). Times to interaction and extubation were significantly longer in the KETODEX group (P<0.05). Conclusion: KETODEX reduces the incidence and severity of EA in children undergoing adenotonsillectomy following sevoflurane-based anesthesia and provided smooth extubation. abstract_id: PUBMED:28666848 A randomized crossover study of the effects of lidocaine on motor- and sensory-evoked potentials during spinal surgery. Background Context: Lidocaine has emerged as a useful adjuvant anesthetic agent for cases requiring intraoperative monitoring of motor-evoked potentials (MEPs) and somatosensory-evoked potentials (SSEPs). A previous retrospective study suggested that lidocaine could be used as a component of propofol-based intravenous anesthesia without adversely affecting MEP or SSEP monitoring, but did not address the effect of the addition of lidocaine on the MEP and SSEP signals of individual patients. Purpose: The purpose of this study was to examine the intrapatient effects of the addition of lidocaine to balanced anesthesia on MEPs and SSEPs during multilevel posterior spinal fusion.
Study Design: This is a prospective, two-treatment, two-period crossover randomized controlled trial with a blinded primary outcome assessment. Patient Sample: Forty patients undergoing multilevel posterior spinal fusion were studied. Outcome Measures: The primary outcome measures were MEP voltage thresholds and SSEP amplitudes. Secondary outcome measures included isoflurane concentrations and hemodynamic parameters. Methods: Each participant received two anesthetic treatments (propofol 50 mcg/kg/h and propofol 25 mcg/kg/h + lidocaine 1 mg/kg/h) along with isoflurane, ketamine, and diazepam. In this manner, each patient served as his or her own control. The order of administration of the two treatments was determined randomly. Results: There were no significant within-patient differences between MEP threshold voltages or SSEP amplitudes during the two anesthetic treatments. Conclusions: Lidocaine may be used as a component of balanced anesthesia during multilevel spinal fusions without adversely affecting the monitoring of SSEPs or MEPs in individual patients. abstract_id: PUBMED:9408359 Intraoperative monitoring of the motor pathway using transtracheal stimulation of the cervical spine in dogs. Although SEP monitoring of the spinal cord has recently become a well-established method, no ultimate, perfectly developed technique for monitoring the motor system is known so far, particularly because of the disturbing effect of narcotic drugs and relaxants on motor evoked potentials. In this study the upper part of the spinal cord was stimulated in 14 anesthetized and relaxed dogs with a cathode attached to the intratracheal tube and an anode fixed to the cervical spinous processes. Single and serial stimuli were applied. Recordings were obtained from the exposed right femoral nerve and quadriceps muscle. Averaging was necessary when using serial stimulations. Responses were consistent and reproducible during regular anesthesia. The origin of the different responses in the spinal cord is discussed. The method seems to be appropriate for intraoperative monitoring of the thoracolumbar spine. Answer: According to the study with PUBMED:18768043, the administration of low dose ketamine preceding the stimulation of dorsal roots during selective posterior rhizotomy might be associated with lower maximal thresholds and a more sustained train response following stimulation. The study did not find any statistically significant difference in intraoperative electrophysiological response between the ketamine and the control groups, but it did note some trends that suggest ketamine's influence on the electrophysiological monitoring. Therefore, it is suggested that physicians should be aware of this finding to avoid misinterpreting intraoperative electrophysiological monitoring.
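A note on the accuracy figures in this section: the numbers quoted for mMEP monitoring in PUBMED:33763733 (sensitivity 97.4%, specificity 96.4%, NPV 98.2%, accuracy 96.8%) all derive from the same 2x2 confusion matrix of monitoring alarms against postoperative motor outcome. The sketch below shows those definitions; the counts are hypothetical, chosen only to illustrate the arithmetic, and are not reconstructed from the study:

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Screening-test metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),   # alarms among true deficits
        "specificity": tn / (tn + fp),   # quiet traces among intact cords
        "npv":         tn / (tn + fn),   # intact outcomes among quiet traces
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts for ~100 monitored tethered cord procedures.
for name, value in diagnostic_metrics(tp=18, fp=3, fn=1, tn=78).items():
    print(f"{name:12s} {value:.1%}")
```

The high NPV is what makes a stable trace reassuring intraoperatively; the occasional false positives and false negatives the authors mention correspond to the fp and fn cells.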
Instruction: Is the orthopedically correct lifting technique rational from a cardiopulmonary and metabolic viewpoint? Abstracts: abstract_id: PUBMED:9214175 Is the orthopedically correct lifting technique rational from a cardiopulmonary and metabolic viewpoint? Introduction: In spite of many years of educational work by orthopaedic surgeons, incorrect lifting techniques are still used in everyday life as well as during exercise. The reason could be that these techniques offer an advantage from the energetic point of view. Material And Methods: To clarify this issue we examined 30 healthy males between 17 and 30 years of age (weight: 74.0 +/- 9.2 kg, height: 182.5 +/- 6.7 cm). In a randomised sequence the volunteers carried out two different lifting techniques: 1. lifting with stretched legs and a bent back; 2. lifting with bent legs and a straight back (Brügger technique). The test scheme consisted of four different 3-minute exercise levels with increasing weights (0 kg, 5.2 kg, 12.1 kg, 19.2 kg); the repetition rate was 30 lifts per level. Results: With the Brügger technique, heart rate, VO2, VCO2 and VE were significantly higher (p < 0.001) than with the other technique at all exercise levels. On the RPE (rate of perceived exertion) scale the differences were statistically significant at the 0.01 level. Conclusion: The results give rise to the supposition that the energy consumption for the lifting technique with the bent back is lower and that this technique is therefore preferred in everyday life and during exercise. These findings should be taken into consideration when informing people about possible damage. abstract_id: PUBMED:28893423 Incisions and baseline operative technique of cervicofacial lifting Cervicofacial lifting is one of the most iconic procedures of plastic surgery and is about a hundred years old. In the following chapters of this report, numerous technical points will be specified. A reliable and reproducible baseline technique, appealing to the largest possible audience, is presented in order to begin this surgery in optimum conditions. Pre- and postoperative management is also presented. The aim of this chapter is to specify the incisions and baseline operative technique of cervicofacial lifting, with a description of SMAS and platysma suspensions as well as complementary procedures like liposuction and lipofilling. This chapter will lay the foundation for more complex elements that will be described in the various following chapters. abstract_id: PUBMED:12671202 A review of research on techniques for lifting low-lying objects: 2. Evidence for a correct technique. Teaching 'correct' lifting technique is common, with squat lifting generally recommended. However, the available evidence is less clear about which technique should be favoured. The purpose of this paper was to present an accessible synthesis of the evidence to assist professionals in their decision about whether to teach a specific technique for lifting objects lying on or near the ground. Squat, stoop and semi-squat techniques are described and the psychophysical, physiological, biomechanical, psychological, performance and clinical evidence for each technique summarised. Evidence for other lifting guidelines is also presented. It is concluded that no one technique has clear evidence and that a work design approach should be the prime focus of intervention. Recommendations for correct lifting technique guidelines are given for where technique training must be provided.
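The "energetic point of view" invoked in PUBMED:9214175 rests on indirect calorimetry: measured oxygen uptake converts to energy expenditure at roughly 20.9 kJ (about 5 kcal) per litre of O2 for mixed substrate oxidation. A minimal sketch of that conversion follows; the VO2 values are hypothetical placeholders for illustration, not the study's measurements:

```python
KJ_PER_LITRE_O2 = 20.9  # approximate energy equivalent of oxygen

def energy_expenditure_kj(vo2_l_per_min: float, minutes: float) -> float:
    """Gross energy expenditure estimated from oxygen uptake."""
    return vo2_l_per_min * minutes * KJ_PER_LITRE_O2

# Hypothetical steady-state VO2 over one 3-minute stage of 30 lifts.
for label, vo2 in [("stoop (bent back)", 1.10), ("squat (Brügger)", 1.30)]:
    e = energy_expenditure_kj(vo2, minutes=3.0)
    print(f"{label:18s} {e:5.1f} kJ per stage, {e / 30:.2f} kJ per lift")
```

On this accounting, a consistently higher VO2 for the Brügger technique translates directly into a higher energy cost per lift, which is the quantitative core of the abstract's argument.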
abstract_id: PUBMED:10416848 Prompting correct lifting posture using signs. The use of a symbol to prompt the adoption of correct lifting posture was examined in three studies. Study 1 used an Appropriateness Test to evaluate nine symbols designed to encourage the adoption of correct lifting posture. Four symbols met the appropriateness criteria and were tested for comprehension in Study 2. Study 3 examined the effect of the best performing symbol from Study 2 in a field setting which involved subjects lifting a small box. Results indicate significant increases in the adoption of correct lifting posture when the symbol was present compared to a control condition. The study also identified the placement of a lifting criterion symbol onto packaging as a useful technique for communicating safety information. abstract_id: PUBMED:28080938 Dynamic analysis of isoinertial lifting technique. The objectives of the study were threefold: (1) to quantify dynamic measures (displacement, velocity, force/acceleration, and power) of a 1.83 m isoinertial lift on an incremental lifting machine (ILM); (2) to identify any gender differences in ILM lifting technique; and (3) to assess the implications of these data for the use of the ILM as a screening device. One hundred and thirty-two military personnel (33 females and 99 males) completed a maximal isoinertial lifting test from a starting height of 0.34 m to a target height of 1.83 m on the ILM. A force transducer attached to the back of the armature provided continuous velocity and displacement data from which the displacement, velocity, acceleration/force, and power profiles were determined. These data were summarized into 37 lift parameters; 33 representing the dynamic components of the lift, and four representing averages taken across the entire lift. The results revealed that the 1.83 m isoinertial lift could be described in three phases: (1) a powerful pulling phase, which incorporated measures of maximal acceleration/force, velocity, and power; (2) a wrist changeover manoeuvre, wherein momentum was required to compensate for minimal force and acceleration values; and (3) a pushing phase, during which a second set of maximal force and acceleration measures was attained. Statistically significant differences were found between genders on various parameters of the technique profiles, suggesting that the testing protocol may have placed different demands on males and females. Females spent a greater proportion of the total lift time in the pushing phase, and had less opportunity to generate power during the pulling phase. The resulting ILM scores may have underestimated the lifting capacity of females. It was recommended that females and males be given independent consideration in the design of ILM lifting protocols. abstract_id: PUBMED:35024210 Lifting Techniques: Why Are We Not Using Evidence To Optimize Movement? Lifting something off the ground is an essential task and lifting is a documented risk factor for low back pain (LBP). The standard lifting techniques are stoop (lifting with your back), squat (lifting with your legs), and semi-squat (midway between stoop and squat). Most clinicians believe the squat technique is optimal; however, training on squat lifting does not prevent LBP and utilizing greater lumbar flexion (i.e. stoop) when lifting is not a risk factor for LBP. The disconnect between what occurs in clinical practice and what the evidence suggests has resulted in ongoing debate.
Clinicians must ask the right questions in order to apply the evidence appropriately. A proposed clinical framework of calm tissue down, build tissue up, improve work capacity can be used to determine which lifting technique is optimal for a patient at any given time. When applying this clinical framework, clinicians should consider metabolic, biomechanical, physical stress tolerance, and pain factors in order to address the movement system. For example, stoop lifting is more metabolically efficient and less challenging to the cardiopulmonary system. There may be few biomechanical differences in spinal postures and gross loads on the lumbar spine between stoop, squat, and semi-squat lifting; however, each lift has distinct kinematic patterns that affect muscle activation patterns, and ultimately the movement system. Clinicians must find the optimal dosage of physical stress to address all aspects of the movement system to minimize the risk of injury. There is no universal consensus on the optimal lifting technique which will satisfy every situation; however, there may be a lifting technique that optimizes movement to achieve a specific outcome. The calm tissue down, build tissue up, improve work capacity framework offers an approach to determine the best lifting technique for an individual patient at any given time. Level Of Evidence: 5. abstract_id: PUBMED:28627349 Clinical application of cardiopulmonary resuscitation with abdominal lifting and compression in emergency treatment Objective: To investigate the clinical value of cardiopulmonary resuscitation (CPR) with abdominal lifting and compression in patients with breathing and cardiac arrest induced by severe chest trauma. Methods: Sixty-six patients with breathing and cardiac arrest induced by severe chest trauma admitted to the General Hospital of Jingyuan Coal Industry Group Company from October 2011 to October 2016 were enrolled, and they were divided into an abdominal lifting and compression group (n = 32) and an unarmed abdominal compression group (n = 34) by random number table. The patients in both groups were given airway opening, respiratory support, defibrillation, venous access establishment, vasoactive drug application and other conventional treatments. On the basis of the routine treatment, the patients in the abdominal lifting and compression group were treated with an abdominal lifting and compression device at a frequency of 100 times/min, continuously alternating between pressing down and lifting the abdomen; the amplitude of pressing and pulling was 3-5 cm below or above the original level of the abdomen. Those in the unarmed abdominal compression group received manual abdominal compression CPR; the frequency of pressing and depth of compression were the same as in the abdominal lifting and compression group. Heart rate (HR) and arterial blood gas at 30 minutes after CPR as well as the success rate of resuscitation were compared between the two groups. The changes in HR, mean arterial pressure (MAP) and pulse oxygen saturation (SpO2) before and 30 minutes and 60 minutes after CPR were dynamically observed in patients with restoration of spontaneous circulation (ROSC) after abdominal lifting and compression CPR treatment. Results: Compared with the unarmed abdominal compression group, HR (bpm: 136.13±6.14 vs. 148.45±5.16) and arterial partial pressure of carbon dioxide [PaCO2 (mmHg, 1 mmHg = 0.133 kPa): 48.51±2.60 vs.
62.51±2.50] at 30 minutes after CPR in the abdominal lifting and compression group were significantly lower, and arterial partial pressure of oxygen (PaO2) was significantly increased (mmHg: 88.07±3.92 vs. 74.12±2.12, all P < 0.05). Four patients with ROSC were found in the abdominal lifting and compression group, and 2 in the unarmed abdominal compression group. The success rate of resuscitation in the abdominal lifting and compression group was significantly higher than that of the unarmed abdominal compression group (12.50% vs. 5.82%, P < 0.05). The 4 patients with ROSC after abdominal lifting and compression CPR showed a downward trend in HR and upward trends in MAP and SpO2 as time after CPR increased. Conclusions: The effect of abdominal lifting and compression CPR is better than that of unarmed abdominal compression CPR, which is of great value for saving the lives of patients with breathing and cardiac arrest induced by severe chest trauma. abstract_id: PUBMED:26406312 A case of mediastinal goiter treated surgically using a clavicle-lifting technique. Introduction: Mediastinal goiter is a benign disease, which is defined as a goiter with the greater portion of its mass lying below the thoracic inlet. It is controversial whether the cervical approach is the best approach for all mediastinal goiter surgeries. Case Presentation: A 71-year-old woman presented with respiratory discomfort during exertion. Computed tomography (CT) revealed a mediastinal goiter extending to the arch of the aorta. Surgical resection was performed using a clavicle-lifting technique. The excised specimen was 13×10×5 cm in size and weighed 220 g. The pathological diagnosis was nodular goiter. Discussion: The clavicle-lifting technique is a simple and safe technique that involves lifting the clavicles with a pediatric extension retractor (Kent Retractor Set, Takasago Medical Industry, Tokyo, Japan). This is a good choice for surgery on upper mediastinal lesions such as mediastinal goiters as it obviates the need for a median sternotomy. Conclusion: Although further study is necessary, it appears that a transcervical approach using the clavicle-lifting technique may be an acceptable treatment for mediastinal goiters that extend to the aortic arch. abstract_id: PUBMED:37905650 Utility of Novel Clavicle-lifting Technique in Mediastinal Thyroid Surgery: A Case Series Study. Background/aim: Symptomatic mediastinal goitre requires surgery and is usually resectable using the cervical approach alone; however, sternotomy is occasionally required. Sternotomy is a highly invasive procedure, and its complications, including mediastinitis and osteomyelitis, can be critical. To date, there have been no reports of non-invasive techniques to avoid sternotomy for mediastinal thyroid tumours. We investigated the safety and efficacy of thyroidectomy using the clavicle lifting technique with a paediatric Kent hook. Patients And Methods: This was a retrospective study of 8 patients who underwent thyroidectomy with a clavicle lifting technique between November 2014 and July 2021 at the Department of Surgery, International University of Health and Welfare Hospital. The primary endpoints were the sternotomy avoidance rate and the R0 resection rate. An extension retractor used in paediatric surgery was used for the clavicle lifting technique. Results: Sternotomy avoidance rate and R0 resection rate were 100%. The mean operative time was 161±53.5 min, and the mean blood loss was 125.6±125.8 ml.
There were no intraoperative or postoperative complications related to the clavicle lifting technique. Conclusion: Thyroidectomy with a clavicle lifting technique for mediastinal goitre and thyroid cancer is safe and useful because it avoids sternotomy without causing massive intraoperative bleeding or damage to other organs. abstract_id: PUBMED:31286735 Summary and core technique on the origin and evolution of reinforcing-reducing manipulation achieved by lifting and thrusting the needle By analyzing the evolution of reinforcing-reducing manipulation achieved by lifting and thrusting the needle recorded in ancient literature of traditional Chinese medicine, it is found that the main contents of reinforcing-reducing manipulation by lifting and thrusting the needle include manipulating speed change, manual amplitude, insertion layer, gender, the direction to the acupuncture receiver, forenoon and afternoon and relevant quantity. Among them, gender, the direction to the acupuncture receiver, forenoon and afternoon and relevant quantity are the unnecessary parameters, while the manipulating speed change, manual amplitude and insertion layer are the indispensable parameters. The manipulating speed change is the core of the necessary parameters for the reinforcing-reducing manipulation achieved by lifting and thrusting the needle. Combined with the manual amplitude, the manipulating speed of the needle determines the volume of needling stimulation. The insertion layer is decided on the basis of clinical demand. In the core technique of reinforcing-reducing manipulation by lifting and thrusting the needle, the reinforcing is achieved by thrusting the needle forcefully and quickly and then lifting the needle body slowly and evenly back to the original layer. The reducing is achieved by lifting the needle forcefully and quickly and then thrusting the needle body slowly and evenly back to the original layer. The manipulating speed and manual amplitude of needling are the parameters used to quantify acupuncture manipulation. In association with the acupuncture effects in the human body, these parameters contribute to the interpretation of the dose-effect relationship of acupuncture and the improvement of clinical effects. Answer: The orthopedically correct lifting technique, often referred to as the Brügger technique, which involves lifting with bent legs and a straight back, has been questioned from a cardiopulmonary and metabolic viewpoint. A study examining the energy consumption of two different lifting techniques found that using the Brügger technique resulted in higher heart rate, VO2, VCO2, and VE at all exercise levels compared to lifting with stretched legs and a bent back. Additionally, the rate of perceived exertion (RPE) was significantly higher when using the Brügger technique (PUBMED:9214175). This suggests that the energy consumption for the lifting technique with a bent back is lower, which could explain why it is often preferred in everyday life and during exercise. However, it is important to note that while the Brügger technique may be more demanding on the cardiopulmonary and metabolic systems, this does not necessarily mean it is irrational to use from these perspectives. The technique is often recommended to reduce the risk of musculoskeletal injuries, particularly to the lower back. Therefore, the rationale for using the orthopedically correct lifting technique may relate more to injury prevention than to metabolic efficiency.
In the context of lifting techniques and their impact on health and safety, it is also important to consider that there is no universal consensus on the optimal lifting technique for every situation. Different lifting techniques, such as stoop, squat, and semi-squat, have distinct kinematic patterns and muscle activation patterns, which can affect the movement system differently. Clinicians should consider various factors, including metabolic, biomechanical, physical stress tolerance, and pain, to determine the best lifting technique for an individual at any given time (PUBMED:35024210). In summary, while the orthopedically correct lifting technique may be more demanding metabolically and on the cardiopulmonary system, it is still rational to use it from an orthopedic perspective, particularly for injury prevention. The choice of lifting technique should be individualized based on a variety of factors to optimize movement and minimize the risk of injury.
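Part of the metabolic gap between techniques can be anticipated from simple mechanics: a squat lowers and raises much of the body's own mass together with the load, whereas a stoop mostly pivots the trunk. The sketch below compares positional work W = m·g·Δh per lift under crude, hypothetical segment masses and displacements; these are illustrative assumptions only, not measured values from the cited studies:

```python
G = 9.81  # gravitational acceleration, m/s^2

def positional_work_j(mass_kg: float, rise_m: float) -> float:
    """Work against gravity to raise a mass through a height change."""
    return mass_kg * G * rise_m

load_work = positional_work_j(mass_kg=12.1, rise_m=0.75)  # load, both styles

# Hypothetical body-segment contributions per lift:
squat_body = positional_work_j(mass_kg=50.0, rise_m=0.40)  # body rises with load
stoop_body = positional_work_j(mass_kg=35.0, rise_m=0.25)  # trunk pivots upright

print(f"squat: {load_work + squat_body:6.0f} J per lift")
print(f"stoop: {load_work + stoop_body:6.0f} J per lift")
```

Under these assumptions the squat does roughly 60% more positional work per lift, which is consistent with the higher VO2 and heart rate reported for the Brügger technique.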
Instruction: Can melatonin delay oxidative damage of human erythrocytes during prolonged incubation? Abstracts: abstract_id: PUBMED:23640945 Can melatonin delay oxidative damage of human erythrocytes during prolonged incubation? Purpose: Melatonin (MEL) is an effective antioxidant in numerous experimental models, both in vitro and in vivo. However, it should be stressed that there are also papers reporting limited antioxidative activity of MEL or even giving evidence for its pro-oxidative properties. In the present paper we investigated the influence of MEL on the oxidative damage of human erythrocytes during prolonged incubation. Material/methods: Human erythrocytes suspended in phosphate-buffered saline (PBS), pH 7.4, were incubated at 37°C either in the absence or presence of melatonin at a concentration range of 0.02-3 mM for up to 96 h. The influence of MEL on erythrocyte damage was assessed on the basis of the intensity of intracellular oxidation processes (the oxidation of HbO₂, GSH, fluorescent label DCFH₂) as well as damage to the plasma membrane (lipid peroxidation, potassium leakage) and the kinetics of hemolysis. Results: Prolonged incubation induced progressive destruction of the erythrocytes. Melatonin prevented lipid peroxidation and hemolysis whereas the oxidation of HbO₂ and DCFH₂ was enhanced by melatonin at concentrations higher than 0.6 mM. In the case of erythrocytes incubated with 3 mM of MEL, the hemolysis rate constant (0.0498±0.0039 H%•h⁻¹) was 50% lower than that of the control while the HbO₂ oxidation rate constants were about 1.4 and 1.5 times higher for 1.5 and 3 mM of MEL, respectively. Melatonin had no influence on the oxidation of GSH or on potassium leakage. Conclusions: Probably, MEL can stabilize the erythrocyte membrane due to interaction with lipids, thus prolonging the existence of cells. At the same time, in the presence of MEL, accelerated oxidation of HbO₂ and, generally, increased oxidative stress were observed in erythrocytes. Pro- and antioxidative properties of melatonin depend on the type of cells, redox state, as well as experimental conditions. abstract_id: PUBMED:30654084 Prolonged erythrocyte auto-incubation as an alternative model for oxidant generation system. This study investigated the effects of incubation period and melatonin treatment on red blood cell (RBC) metabolism in an auto-incubation model of H2O2-induced oxidative stress. The study was carried out on three healthy adult donors by incubating RBCs in their own plasma at 37 °C, or under the influence of 1 mM H2O2 with and without 100 μM melatonin at different times (0, 1, 3 and 6 h). We assessed incubation period, treatment, as well as any interaction effects between these predictors on erythrocyte osmoregulation, hemolytic rate, oxidative stress markers, and adenylate nucleotide levels. We did not find any relevant effects of either incubation period or treatment on osmotic, antioxidant and adenylate parameters. On the other hand, hemolysis degree and biomolecule oxidation levels in the plasma increased over time, 3-fold and about 25%, respectively, regardless of treatment. H2O2 treatment more than doubled protein carbonyl groups, regardless of time, in plasma, and in a time-dependent way in the erythrocyte membrane extract; these effects were neutralized by melatonin treatment. Through multivariate analyses, we could expand the understanding of energy and redox metabolisms in the maintenance of cellular integrity and metabolic homeostasis.
Another interesting observation was the 65-75% contribution of the oxidative lesion markers to hemolysis. Hence, these findings suggested a new and more intuitive RBC suspension model and reinforced the beneficial use of melatonin in human disorders. abstract_id: PUBMED:31357737 Melatonin Protects Band 3 Protein in Human Erythrocytes against H2O2-Induced Oxidative Stress. The beneficial effect of Melatonin (Mel), recognized as an anti-inflammatory and antioxidant compound, has already been proven to prevent oxidative stress-induced damage associated with lipid peroxidation. As previous studies modeled the impact of oxidative stress on Band 3 protein, an anion exchanger that is essential to erythrocyte homeostasis, by applying H2O2 at non-hemolytic concentrations not producing lipid peroxidation, the aim of the present work was to evaluate the possible antioxidant effect of pharmacological doses of Mel on Band 3 protein anion exchange capability. The experiments have been performed on human erythrocytes exposed to 300 μM H2O2-induced oxidative stress. To this end, oxidative damage has been verified by monitoring the rate constant for SO4= uptake through Band 3 protein. Expression levels of this protein were also determined. Mel doses lower than 100 µM were excluded because they provoked lipid peroxidation, altered Band 3 protein expression levels, and cell shape alterations, confirming a pro-oxidant action of Mel at certain doses. On the other hand, 100 µM Mel, not provoking lipid peroxidation, restored the rate constant for SO4= uptake, Band 3 protein expression levels, and H2O2-induced cell shape alterations. Such an effect was confirmed by abolishing the endogenous erythrocyte antioxidant system. Therefore, the present findings show the antioxidant power of Mel at pharmacological concentrations in an in vitro model of oxidative stress not associated with lipid peroxidation, thereby confirming Band 3 protein anion exchange capability measurement as a suitable model to prove the beneficial effect of Mel and supporting the use of this compound in oxidative stress-related diseases affecting Band 3 protein. abstract_id: PUBMED:23217563 Melatonin ameliorates oxidative damage in hyperglycemia-induced liver injury. Purpose: Melatonin (N-acetyl-5-methoxy-tryptamine) is synthesized mainly by the pineal gland and its antioxidant properties have been demonstrated in both short- and long-term studies. Our aim was to clarify the effects of hyperglycemia and melatonin administration on lipid peroxidation, protein oxidation and oxidative DNA damage in the rat. Methods: Malondialdehyde (MDA), protein carbonyl (PCO) and total thiol (T-SH) levels were determined in plasma and liver tissue, glutathione (GSH) levels in erythrocyte and liver tissue, and 8-hydroxy-2-deoxyguanosine (8-OHdG) levels in plasma and liver. Thirty-eight male Wistar rats were divided into four groups: 1--injected with saline (n = 8), 2--injected with melatonin (n = 10), 3--injected with STZ (65 mg/kg, i.p.) (diabetic group) (n = 10) and 4--injected with melatonin (10 mg/kg/day, i.p.) and STZ (65 mg/kg, i.p.) (n = 10) for 8 weeks (diabetic + melatonin group). Colorimetric methods were used to determine the level of the oxidative stress markers. 8-OHdG was measured using ELISA. Results: MDA, PCO and 8-OHdG levels in the plasma and the liver homogenates of diabetic rats were higher than controls and were significantly reduced after melatonin treatment.
T-SH and GSH levels in samples were markedly reduced in untreated diabetic rats compared with control rats; however, these parameters were increased in diabetic rats following melatonin treatment. Conclusion: Our findings showed that melatonin administration partially ameliorated oxidative damage in liver injury in STZ-induced diabetic rats. The present study suggests that melatonin functions as a potent antioxidant agent in diabetes. Melatonin, a nutritional supplement, may be a good therapeutic option for diabetic patients. abstract_id: PUBMED:10496145 Melatonin protects human red blood cells from oxidative hemolysis: new insights into the radical-scavenging activity. Antioxidant activity of melatonin in human erythrocytes, exposed to oxidative stress by cumene hydroperoxide (cumOOH), was investigated. CumOOH at 300 microM progressively oxidized a 1% suspension of red blood cells (RBCs), leading to 100% hemolysis in 180 min. Malondialdehyde and protein carbonyls in the membrane showed a progressive increase, as a result of the oxidative damage to membrane lipids and proteins, reaching peak values after 30 and 40 min, respectively. The membrane antioxidant vitamin E and the cytosolic reduced glutathione (GSH) were totally depleted in 20 min. As a consequence of the irreversible oxidative damage to hemoglobin (Hb), hemin accumulated in the RBC membrane within 40 min. Sodium dodecyl sulfate (SDS) gel electrophoresis of membrane proteins showed a progressive loss of the cytoskeleton proteins and formation of low molecular weight bands and protein aggregates, with an increase in the intensity of the Hb band. Melatonin at 50 microM strongly enhanced the RBC resistance to oxidative lysis, leading to 100% hemolysis in 330 min. Melatonin had no effect on membrane lipid peroxidation, nor did it prevent the consumption of glutathione (GSH) or vitamin E. However, it completely inhibited the formation of membrane protein carbonyls for 20 min and hemin precipitation for 10 min. The electrophoretic pattern provided further evidence that melatonin delayed modifications to the membrane proteins and to Hb. In addition, RBCs incubated for 15 min with 300 microM cumOOH in the presence of 50 microM melatonin were less susceptible, when submitted to osmotic lysis, than cells incubated in its absence. Extraction and high-performance liquid chromatography (HPLC) analysis showed a much more rapid consumption of melatonin during the first 10 min of incubation, then melatonin slowly decreased up to 30 min and remained stable thereafter. Equilibrium partition experiments showed that 15% of the melatonin in the incubation mixture was recovered in the RBC cytosol, and no melatonin was extracted from the RBC membrane. However, 35% of the added melatonin was consumed during RBC oxidation. Hydroxyl radical trapping agents, such as dimethylsulfoxide or mannitol, added into the assay in a 1,000-fold molar excess, did not alter melatonin consumption, suggesting that hydroxyl radicals were not involved in the indole consumption. Our results indicate that melatonin is actively taken up into erythrocytes under oxidative stress, and is consumed in the defence of the cell, delaying Hb denaturation and release of hemin. RBCs are highly exposed to oxygen and can be a site for radical formation, under pathological conditions, which results in their destruction. A protective role of melatonin should be explored in hemolytic diseases.
abstract_id: PUBMED:37261742 Elucidation of 7,8-dihydroxy flavone in complexing with the oxidative stress-inducing enzymes, its impact on radical quenching and DNA damage: an in silico and in vitro approach. Oxidative stress (OS) has been attributed to the progression of various disorders, including cancer, diabetes, and cardiovascular diseases. Several antioxidant compounds and free radical quenchers have been shown to mitigate oxidative stress. However, large-scale randomized controlled trials of such compounds on chronic disease prevention have yielded paradoxical and disappointing results due to the limited understanding of their oxidative mechanisms and therapeutic targets. The current study sought to identify the potential therapeutic targets of 7,8-Dihydroxyflavone (7,8-DHF) by analyzing its interactions with the enzymes implicated in oxidative stress and also to explore its radical quenching potential and prophylactic impact on H2O2-induced DNA damage. Through the in silico approach, we investigated the antioxidant potential of 7,8-DHF by evaluating its interactions with the human oxidative stress-inducing enzymes such as myeloperoxidase (MPO), NADPH oxidase (NOX), nitric oxide synthase (NOS), and xanthine oxidase (XO) and a comparative analysis of those interactions with known antioxidants (ascorbic acid, melatonin, tocopherol) used as controls. The best-scoring complex was adopted for the simulation analysis in investigating protein-ligand conformational dynamics. The in vitro radical quenching potential was evaluated by performing a spectrum of antioxidant assays, and radical quenching was observed in a dose-dependent fashion with IC50 values of < 60 µM/mL. Further, we probed its anti-hemolytic potential and prophylactic impact in avian erythrocytes subjected to H2O2-induced hemolysis and DNA damage by implementing hemolysis and comet assays. The protective effect was more pronounced at higher concentrations of the drug. abstract_id: PUBMED:15066572 Hypochlorous acid-induced oxidative stress in Chinese hamster B14 cells: viability, DNA and protein damage and the protective action of melatonin. This study provides further evidence for the toxicity of hypochlorous acid (HOCl) in mammalian cells. Using the Chinese hamster B14 cell line, a significant decrease in cell viability was demonstrated after exposure to 100-200 microM HOCl for 1 h. Loss of viability was accompanied by a slight increase in DNA damage as shown by the Comet assay and by oxidation of cellular thiols. Exposure of B14 cells, erythrocyte membranes and human serum albumin to HOCl resulted in an extensive protein carbonyl accumulation. Thus, the cytotoxicity of HOCl may be due to both protein damage (carbonyl formation and oxidation of protein thiol groups) and DNA damage. The well-known antioxidant melatonin interacted with the oxidant and significantly protected cells during HOCl exposure, diminishing its cytotoxic effects and reducing protein carbonyl generation. abstract_id: PUBMED:12859237 The effect of melatonin against oxidative damage during total-body irradiation in rats. Melatonin has been reported to participate in the regulation of a number of important physiological and pathological processes. Melatonin, which is a powerful endogenous antioxidant, may play a role in the prevention of oxidative damage.
The aim of this study was to investigate the effect of pretreatment with melatonin (5 mg kg(-1) and 10 mg kg(-1)) on gamma-radiation-induced oxidative damage in plasma and erythrocytes after total-body irradiation with a single dose of 5 Gy. Total-body irradiation resulted in a significant increase in plasma and erythrocyte MDA levels. Melatonin alone increased the levels of SOD and GSH-Px. Erythrocyte and plasma MDA levels in irradiated rats that were pretreated with melatonin (5 or 10 mg kg(-1)) were significantly lower than those in rats that were not pretreated. There was no significant difference between the effects of 5 and 10 mg kg(-1) on plasma MDA levels and CAT activities. However, erythrocyte MDA levels showed a dose-dependent decrease, while GSH-Px activities increased with dose. Our study suggests that melatonin administered prior to irradiation may protect against the damage produced by radiation by the up-regulation of antioxidant enzymes and by scavenging free radicals generated by ionizing radiation. abstract_id: PUBMED:21404018 Ameliorative action of melatonin on oxidative damage induced by atrazine toxicity in rat erythrocytes. Excessive generation of reactive oxygen species (ROS) can induce oxidative damage to vital cellular molecules and structures including DNA, lipids, proteins, and membranes. Recently, melatonin has attracted attention because of its free radical scavenging and antioxidant properties. The aim of this study was to evaluate the possible protective role of melatonin against atrazine-induced oxidative stress in rat erythrocytes in vivo. Adult male albino rats of Wistar strain were randomly divided into four groups. Control group received isotonic saline; melatonin (10 mg/kg bw/day) group; atrazine (300 mg/kg of bw/day) group; atrazine + melatonin group. Oral administration of atrazine and melatonin was given daily for 21 days. Oxidative stress was assessed by determining the glutathione (GSH) and malondialdehyde (MDA) level, and alteration in antioxidant enzymes such as superoxide dismutase (SOD), glutathione peroxidase (GPx), catalase (CAT), glutathione-S-transferase (GST), and glucose-6-phosphate dehydrogenase (G-6-PD) in the erythrocytes of normal and experimental animals. A significant increase in MDA levels and a decrease in GSH were observed in the atrazine treated animals (P < 0.05). Also, significant increases in the activities of SOD, CAT, GPx, and GST were observed in the atrazine treated group compared to controls (P < 0.05). Moreover, a significant decrease in protein, total lipids, cholesterol, and phospholipid content in the erythrocyte membrane was demonstrated in atrazine treated rats. Administration of atrazine significantly inhibited the activities of G-6-PD and membrane ATPases such as Na(+)/K(+)-ATPase, Mg(2+)-ATPase, and Ca(2+)-ATPase (P < 0.05). Scanning electron microscopic (SEM) examination of erythrocytes revealed morphological alterations in the erythrocytes of atrazine treated rats. Furthermore, supplementation of melatonin significantly modulated the atrazine-induced changes in LPO level, total lipids, total ATPases, GSH, and antioxidant enzymes in erythrocytes. In conclusion, the increase in oxidative stress markers and the concomitant alterations in the antioxidant defense system indicate a role of oxidative stress in atrazine-induced erythrocyte damage. Moreover, melatonin shows a protective role against atrazine-induced oxidative damage in rat erythrocytes.
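The 7,8-DHF abstract above summarizes its radical-quenching assays as dose-dependent, with IC50 values below 60 µM/mL. A standard way to extract such an IC50 is a four-parameter logistic fit; a minimal sketch follows, in which the dose-inhibition pairs are hypothetical stand-ins rather than data from the study, and SciPy's curve_fit is just one reasonable fitting route.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical scavenging data (percent inhibition vs. dose in uM);
# illustrative values only, not measurements from the cited study.
dose_uM = np.array([5, 10, 20, 40, 80, 160], dtype=float)
inhibition = np.array([9, 18, 33, 52, 71, 84], dtype=float)

def four_pl(x, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (ic50 / x) ** hill)

params, _ = curve_fit(four_pl, dose_uM, inhibition,
                      p0=(0.0, 100.0, 40.0, 1.0), maxfev=10_000)
bottom, top, ic50, hill = params
print(f"estimated IC50 ~ {ic50:.1f} uM (Hill slope {hill:.2f})")
```

By construction, the IC50 is the dose at which the fitted curve sits halfway between its bottom and top plateaus.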
abstract_id: PUBMED:15258361 In vitro effects of melatonin on the filtrability of erythrocytes in SNP-induced oxidative stress. Erythrocyte deformability is one of the most important characteristics of erythrocytes for effective microcirculatory function and is affected by a number of factors, including the oxidative damage induced by nitric oxide (NO). This study was performed to investigate the effects of in vitro melatonin incubation on the antioxidant status and deformability of erythrocytes in oxidative stress induced by sodium nitroprusside (SNP), a nitric oxide donor. Forty blood samples taken from healthy adults were randomly divided into 4 groups and incubated with saline, SNP (1 mM), melatonin (MEL, 1 mM), MEL + SNP and SNP + L-NAME (5 mM), respectively. Relative filtration rate (RFR), relative filtration time (RFT) and relative resistance (Rrel) were determined as the indexes of erythrocyte filterability. In addition, malondialdehyde (MDA, as an index of lipid peroxidation) and the antioxidant activities of glutathione peroxidase (GSH-Px), superoxide dismutase (SOD) and catalase (CAT) were also determined in the red blood cells of all groups, revealing the oxidant-antioxidant activity. RFT and the Rrel of the erythrocytes incubated with SNP increased significantly (p<0.05) whereas the RFR of the erythrocytes decreased (p<0.05) in comparison to all groups. This reduction in RFR was prevented by either L-NAME or MEL incubation. Furthermore, MEL was found to be significantly efficient in preventing lipid peroxidation of the erythrocytes in these groups. In addition, GSH-Px and SOD activities were elevated with SNP incubation, reflecting the oxidative stress in erythrocytes, whereas the CAT activity remained unchanged. Melatonin had no significant effect on GSH-Px and CAT activities but caused a significant decrease in SOD activity (p<0.05). These results reveal that melatonin can protect the erythrocytes from impaired deformability in SNP-induced oxidative stress due to antioxidant effects, as revealed by lipid peroxidation and antioxidant enzyme activities. Answer: Yes, melatonin can delay oxidative damage of human erythrocytes during prolonged incubation. Studies have shown that melatonin has antioxidant properties that can protect erythrocytes from oxidative stress. For instance, melatonin prevented lipid peroxidation and hemolysis in human erythrocytes during prolonged incubation, although it enhanced the oxidation of HbO₂ at concentrations higher than 0.6 mM (PUBMED:23640945). Additionally, melatonin treatment neutralized the effects of H₂O₂ treatment on protein carbonyl groups in erythrocyte membranes, suggesting its beneficial use in human disorders (PUBMED:30654084). Melatonin has also been shown to protect Band 3 protein in human erythrocytes against H₂O₂-induced oxidative stress, restoring the rate constant for SO₄²⁻ uptake and Band 3 protein expression levels, without provoking lipid peroxidation at pharmacological concentrations (PUBMED:31357737). Moreover, melatonin has been reported to ameliorate oxidative damage in hyperglycemia-induced liver injury, suggesting its function as a potent antioxidant agent in diabetes (PUBMED:23217563). In another study, melatonin was found to protect human red blood cells from oxidative hemolysis induced by cumene hydroperoxide, delaying modifications to membrane proteins and hemoglobin denaturation (PUBMED:10496145).
Furthermore, melatonin has been shown to diminish the cytotoxic effects of hypochlorous acid in mammalian cells and reduce protein carbonyl generation (PUBMED:15066572). Melatonin's effect on gamma-radiation-induced oxidative damage in plasma and erythrocytes after total-body irradiation in rats also supports its protective role against radiation damage by up-regulating antioxidant enzymes and scavenging free radicals (PUBMED:12859237). Additionally, melatonin has been demonstrated to have a protective role against atrazine-induced oxidative damage in rat erythrocytes, modulating changes in lipid peroxidation level, total lipids, total ATPases, glutathione, and antioxidant enzymes (PUBMED:21404018). Lastly, melatonin has been found to protect erythrocytes from impaired deformability in sodium nitroprusside-induced oxidative stress due to its antioxidant effects, as evidenced by lipid peroxidation and antioxidant enzyme activities (PUBMED:15258361).
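The answer above cites restoration of the rate constant for SO₄²⁻ uptake through Band 3 as one read-out of melatonin's protection. For readers unfamiliar with how such a constant is obtained, a first-order uptake curve, uptake(t) = 1 - exp(-k t), yields k from a log-linear fit; the sketch below uses hypothetical uptake fractions, not data from the cited study.

```python
import numpy as np

# Hypothetical sulfate-uptake time course for a Band 3-mediated anion
# exchange experiment; the values are illustrative only.
t_min = np.array([0, 5, 10, 20, 30, 45, 60], dtype=float)
uptake = np.array([0.0, 0.28, 0.47, 0.71, 0.85, 0.95, 0.98])  # fraction of plateau

# For first-order kinetics, uptake(t) = 1 - exp(-k t), so a straight-line
# fit of log(1 - uptake) against t recovers the rate constant k.
k = -np.polyfit(t_min, np.log(1.0 - uptake), 1)[0]
print(f"estimated rate constant k ~ {k:.3f} per min "
      f"(half-time {np.log(2) / k:.1f} min)")
```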
Instruction: Do very sick neonates born at term have antenatal risks? Abstracts: abstract_id: PUBMED:33564518 Frequency of Immediate Neonatal Complications (Hypoglycemia and Neonatal Jaundice) in Late Preterm and Term Neonates. Background Evidence suggests that neonates born at 34-36 weeks should not be considered full-term neonates, given the magnitude of morbidities they experience compared with term infants. Neonates born at 34 to 36 weeks are at increased risk for early illness such as hypoglycemia and hyperbilirubinemia compared to term infants. Objective This study's objective was to determine the frequency of immediate neonatal complications (hypoglycemia and neonatal jaundice) in late preterm and term neonates. Subjects and methods A serial descriptive case study was conducted at a private tertiary care hospital. Random sampling was used, and the sample size was calculated with Epi Info software (Centers for Disease Control and Prevention, Atlanta, GA). All eligible participants were enrolled with informed consent following approval by the College of Physicians and Surgeons Pakistan's institutional review board. A structured questionnaire was used to collect demographic information, and all neonates were closely observed for the early targeted morbidities (hypoglycemia, hyperbilirubinemia). Results A total of 215 neonates were born during the study period, of whom 108 (50.2%) were term babies and 107 (49.8%) late preterm babies. There were 122 (56.7%) male infants and 93 (43.3%) female infants. Jaundice was observed in 6.5% (n=7) of term neonates and 22.4% (n=24) of late preterm neonates (p<0.0). Similarly, hypoglycemia was observed in only 4.6% (n=5) of term neonates and 15.9% (n=17) of late preterm neonates (p<0.01). Conclusion There is a significant association between gestational age and the immediate neonatal complications of jaundice and hypoglycemia. Compared with term neonates, late preterm neonates are at a higher risk of neonatal jaundice and hypoglycemia. Gender and mode of delivery did not correlate with the complication rate. abstract_id: PUBMED:33292564 Impact of antenatal care on neonatal mortality among neonates in Ethiopia: a systematic review and meta-analysis. Background: As compared to other regions of the world, Sub-Saharan Africa (SSA) is the region with the highest neonatal mortality and the one showing the least progress in the reduction of newborn death. Despite better progress made in reducing neonatal mortality, Ethiopia contributes the highest rate of neonatal death in Africa. In Ethiopia, findings from the few existing studies were inconsistent, and there is a need to systematically pool existing data to determine the impact of antenatal care on neonatal mortality among mother-neonate pairs in Ethiopia. Methods: Published articles from various electronic databases such as Medline, Hinari, PubMed, the Cochrane Library, the Web of Science, and Google Scholar were accessed. Also, unpublished studies from library catalogs were identified. All observational studies that were conducted on the association between antenatal care follow-up and neonatal mortality among neonates in Ethiopia were included. Data were extracted into a Microsoft Excel spreadsheet and analyzed using STATA version 14.1. A random-effects model was used to derive the pooled estimate with a 95% confidence interval (CI). Forest plots were used to visualize the presence of heterogeneity and to estimate the pooled impact of antenatal care on neonatal mortality.
The presence of publication bias was assessed by funnel plots and Egger's statistical tests. Results: Initially, a total of 345 studies were accessed. Finally, 28 full-text studies were reviewed, and fourteen studies fulfilled the inclusion criteria and were included in the final meta-analysis. The overall pooled estimate indicates that the odds of neonatal death among neonates from women with antenatal care were 65% lower than those of neonates from women who had no antenatal care follow-up (OR: 0.35, 95% CI: 0.24, 0.51). Conclusions: In this systematic review and meta-analysis, lack of ANC follow-up increased the probability of neonatal mortality as compared to having ANC follow-up. Thus, we recommend broader coverage of appropriate antenatal care, through which risk groups can best be identified and managed. abstract_id: PUBMED:35657132 Hypothyroxinemia in sick term neonates and its risk factors in an extramural neonatal intensive care unit: a prospective cohort study. Objective: Thyroid function in sick newborns may be altered in the first week of life. Transient hypothyroxinemia has been reported in these babies, which could be due to the immaturity of the hypothalamic-pituitary-thyroid axis or to acute illness. We conducted this study to estimate the incidence of hypothyroxinemia and determine its risk factors in sick term newborns. Materials And Methods: We analyzed free T4 (FT4) and thyroid-stimulating hormone (TSH) levels in sick term neonates (≤7 days of life) admitted to the neonatal intensive care unit. FT4 and TSH levels were estimated in the first week of life in all the enrolled neonates (N = 98) and then repeated at 14-21 days of life in 46 babies. Risk analysis was conducted using univariate and multivariate logistic regression, and numerical data were compared using the Mann-Whitney U test and t-test. Results: Hypothyroxinemia was seen in 10 (10.2%) of the admitted term babies. Male gender, vaginal delivery, presence of hypoxic ischemic encephalopathy, and need for mechanical ventilation (>24 hours) were identified as risk factors. There was a significant negative linear correlation between FT4 level in the first week of life and duration of hospital stay. Conclusion: Hypothyroxinemia is common in sick term neonates. abstract_id: PUBMED:35872037 Long-term childhood outcomes for babies born at term who were exposed to antenatal corticosteroids. Background: Antenatal corticosteroids improve neonatal outcomes when administered in pregnancies at risk of preterm delivery. Many women who receive antenatal corticosteroids for threatened preterm labor proceed to deliver at term. Thus, long-term outcomes should be evaluated for term-born infants who were exposed to antenatal corticosteroids in utero. Objective: This study aimed to compare long-term outcomes between term-born children aged ≥5 years who were born to women who received antenatal corticosteroids for threatened preterm labor and children whose mothers were also evaluated for threatened preterm labor but did not receive antenatal corticosteroids. Study Design: We performed a retrospective cohort study of children born at ≥37 weeks' gestation, aged ≥5 years, and born to mothers diagnosed with threatened preterm labor during pregnancy. The primary exposure of interest was receiving antenatal corticosteroids. Among the collected childhood medical conditions, the primary outcome of interest was a diagnosis of asthma.
Results: Of the 3556 term-born children aged ≥5 years, 629 (17.6%) were exposed to antenatal corticosteroids (all betamethasone), and 2927 (82.3%) were controls whose mothers were evaluated for threatened preterm birth but did not receive antenatal corticosteroid injections. Women receiving antenatal corticosteroids had higher rates of maternal comorbidities (diabetes mellitus, hypertension; P≤.01). Antenatal corticosteroid-exposed children showed no difference in diagnosis of asthma (12.6% vs 11.6%), attention deficit disorder, or developmental delay (P=.47, .54, and .10, respectively). Controlling for maternal and neonatal characteristics, asthma was not different between those exposed to antenatal corticosteroids and controls (odds ratio, 1.05; 95% confidence interval, 0.79-1.39). The odds of the child's weight percentile being <10% were increased for antenatal corticosteroid-exposed children born at term (odds ratio, 2.00; 95% confidence interval, 1.22-3.25). Conclusion: Children born at term who were exposed to antenatal corticosteroids may have increased odds of being in a lower growth percentile than those not exposed. However, rates of diagnoses such as asthma, developmental delay, and attention deficit disorders were not different. abstract_id: PUBMED:37554349 Assessment of prenatal depression among U.S. pregnant women without access to paid sick leave and regular place of care: National Health Interview Survey of U.S.-born and non-U.S.-born. Prenatal depression is one of the most common risks during pregnancy. This study examined the prevalence of prenatal depression and its association with sociodemographic factors, paid sick leave, and place of care among U.S. pregnant women. We conducted bivariate Chi-square tests to assess statistical differences and fitted multivariable logistic regression models to assess associations with prenatal depression, using cross-sectional National Health Interview Survey data from 2010 to 2019 on pregnant women aged 18-44 years (N = 957). The prevalence of prenatal depression was 40.6%, 28.5%, and 27.2% among White, Black, and other racial pregnant women, respectively. Pregnant women with no regular/routine place of care had a prenatal depression prevalence rate of 58.1%, and those without access to paid sick leave had 46.9%. Also, pregnant women without access to paid sick leave were found to have an increased likelihood of reporting prenatal depression ([adjusted odds ratio] AOR = 2.50, 95% CI = 1.72-3.64), as did those without a regular place of care (AOR = 2.43, 95% CI = 1.32-4.47). The findings identify factors that need to be addressed to minimize depression among U.S. pregnant women and establish the need for tailored interventions to address prenatal depression. abstract_id: PUBMED:11580736 Do very sick neonates born at term have antenatal risks? 2. Infants ventilated primarily for lung disease. Aims: 1. Ascertain antenatal and intrapartum risk factors for term neonates ventilated primarily for respiratory problems. 2. Describe the neonatal morbidity and mortality. Methods: Population-based case control cohort study. Setting: Sydney and four large rural/urban Health Areas in New South Wales, 1996. Subjects: Singleton term infants, no major congenital anomaly: subset of 99 infants ventilated primarily for respiratory problems from 182 cases admitted to a tertiary neonatal intensive care unit (NICU) for mechanical ventilation, and 550 randomly selected controls.
Outcome: Risk factors for case status by maternal, antenatal, labor, delivery, and combined epochs, adjusted Odds Ratios (OR), 95 per cent Confidence Intervals (CI), p<0.05. Results: Predictors of case status by multivariate epochs: mother's age ≥35 years (1.9 (1.1, 3.2) p=0.03), primigravida (1.8 (1.1, 2.8) p=0.01), any antenatal complication (3.8 (2.4, 5.9) p=0.0001), birth weight < 3rd percentile (3.7 (1.5, 9.1) p=0.006), gestational diabetes (2.9 (1.3, 6.9) p=0.01), maternal pyrexia (6.5 (1.6, 27.2) p=0.01), birth weight >90th percentile (1.8 (1.01, 3.2) p=0.047), gestation 37-38 weeks (2.3 (1.5, 3.6) p=0.0004), forceps (4.4 (2.1, 9.1) p=0.0001), elective cesarean section (3.7 (2.0, 6.5) p=0.0001), emergency cesarean section (4.5 (2.4, 8.4) p=0.0001). Case mortality rate was 5 per cent. Conclusion: The pathways to neonatal respiratory morbidity in term infants are multifactorial. Several areas which warrant more in-depth study are: elective cesarean section at 37-38 weeks gestation, fetal growth restriction, macrosomia and the pattern of in-utero growth, maternal weight gain during pregnancy, gestational diabetes, pyrexia in labor and the role of chorioamnionitis. abstract_id: PUBMED:38002910 Early-Term Neonates Demonstrate a Higher Likelihood of Requiring Phototherapy Compared to Those Born Full-Term. Early-term neonates (with a gestational age (GA) of 37 and 0/7 weeks to 38 and 6/7 weeks) face higher morbidities, including respiratory and neurodevelopmental issues, than full-term (39 and 0/7 weeks to 40 and 6/7 weeks) infants. This study explores whether hyperbilirubinemia necessitating phototherapy also differs between these groups. A retrospective study was conducted on neonates born from January 2021-June 2022, excluding those with specific conditions. Evaluated factors included GA, birth weight, bilirubin levels, glucose-6-phosphate dehydrogenase (G6PD) deficiency, and feeding type, with phototherapy given as per AAP guidelines. Of 1085 neonates, 356 met the criteria. When stratifying the neonates based on the need for phototherapy, a higher proportion of early-term neonates required phototherapy compared with full-term neonates (p < 0.05). After factoring in various risks (GA; birth weight; gender; feeding type; G6PD deficiency; transcutaneous bilirubin levels at 24 h and 24-48 h postpartum; maternal diabetes; and the presence of caput succedaneum or cephalohematoma), early-term neonates were more likely to need phototherapy than full-term babies (OR: 2.15, 95% CI: 1.21 to 3.80). The optimal transcutaneous bilirubin cut-off at 24-48 h postpartum for predicting the need for phototherapy was 9.85 mg/dl. In conclusion, early-term neonates are at a greater risk for developing jaundice and requiring phototherapy than full-term neonates. Monitoring bilirubin 24-48 h postpartum enhances early prediction and intervention. abstract_id: PUBMED:31720310 Cerebral hemodynamic responses in preterm-born neonates to visual stimulation: classification according to subgroups and analysis of frontotemporal-occipital functional connectivity. How neurovascular coupling develops in preterm-born neonates has been largely neglected in scientific research. We measured visually evoked (flicker light) hemodynamic responses (HRs) in preterm-born neonates (n=25, gestational age: 31.71±3.37 weeks, postnatal age: 25.48±23.94 days) at the visual cortex (VC) and left frontotemporal lobe (FTL) using functional near-infrared spectroscopy (fNIRS) neuroimaging.
We found that the HR characteristics show a large intersubject variability but could be classified into three groups according to the changes of oxyhemoglobin concentration at the VC [(A) increase, (B) decrease, or (C) inconclusive]. In groups A and B, the HRs at the left FTL were correlated with those at the VC, indicating the presence of a frontotemporal-occipital functional connectivity. Neonates in group A had a higher weight at measurement compared to those in group B, and had the lowest baseline total hemoglobin concentration and hematocrit compared to group C. To the best of our knowledge, this is the first fNIRS study showing (1) that the HRs of preterm-born neonates need to be classified into subgroups, (2) that the subgroups differed in terms of weight at measurement, and (3) that HRs can be observed also at the FTL during visual stimulation. These findings add insights into how neurovascular coupling develops in preterm-born neonates. abstract_id: PUBMED:38341166 Survival without severe neonatal morbidity after antenatal betamethasone dose reduction: a post hoc analysis of a randomized non-inferiority trial. Background: Antenatal betamethasone is recommended before preterm delivery to accelerate fetal lung maturation. However, its optimal dose remains unknown. A 50% dose reduction was proposed to decrease the potential dose-related long-term neurodevelopmental side effects, including psychological development, sleep, and emotional disorders. Because noninferiority of the half dose in terms of the need for exogenous surfactant was not shown in the primary analysis, its impact on survival without major neonatal morbidity needs to be investigated. Objective: This study aimed to investigate the impact of antenatal betamethasone dose reduction on survival of very preterm infants without severe neonatal morbidity, a factor known to have a strong correlation with long-term outcomes. Study Design: We performed a post hoc secondary analysis of a randomized, multicenter, double-blind, placebo-controlled, noninferiority trial, testing half (11.4 mg once; n=1620) vs full (11.4 mg twice, 24 hours apart; n=1624) antenatal betamethasone doses in women at risk of preterm delivery. To measure survival without severe neonatal morbidity at hospital discharge among neonates born before 32 weeks of gestation, we used the definition of the French national prospective study on preterm children, EPIPAGE 2, comprising 1 of the following morbidities: grade 3 to 4 intraventricular hemorrhage, cystic periventricular leukomalacia, necrotizing enterocolitis stage ≥2, retinopathy of prematurity requiring anti-vascular endothelial growth factor therapy or laser, and moderate-to-severe bronchopulmonary dysplasia. Results: After exclusion of women who withdrew consent or had pregnancy termination and of participants lost to follow-up (8 in the half-dose and 10 in the full-dose group), the rate of survival without severe neonatal morbidity among neonates born before 32 weeks of gestation was 300 of 451 (66.5%) and 304 of 462 (65.8%) in the half-dose and full-dose group, respectively (risk difference, +0.7%; 95% confidence interval, -5.6 to +7.1). There were no significant between-group differences in the cumulative number of neonatal morbidities. Results were similar when using 2 other internationally recognized definitions of severe neonatal morbidity and when considering the overall population recruited in the trial. 
Conclusion: In the BETADOSE trial, severe morbidity at discharge of newborns delivered before 32 weeks of gestation was found to be similar among those exposed to 11.4-mg and 22.8-mg antenatal betamethasone. Additional studies are needed to confirm these findings. abstract_id: PUBMED:27002268 Significance of antenatal glucocorticoid exposure for pressure injury prevalence in neonates. Aims: Studies have highlighted that antenatal steroids could have an effect on neonatal skin maturation. This study examined if there was a relationship between the administration of antenatal glucocorticoids for mothers and the skin injuries in their neonates. Data from the skin injury audit were extracted from the neonatal database and analyzed to determine differences in the prevalence of neonates with pressure injuries [cases] whose mothers had received antenatal steroids, compared to those without pressure injuries [control]. Results: Of 247 neonates audited, 77 [31%] had documented pressure injuries and 170 [69%] had no documented injury. The median birth weight and gestation were 1400 g [IQR 893-2268 g] and 30.3 weeks [IQR 26.3-40.0 weeks] respectively. Of the neonates born less than 34 weeks, 80% were exposed to antenatal steroids and were equally distributed across patient genders. Within the 77 cases, 53 [66%] were exposed to antenatal steroids, compared with 88 [53%] of the controls. The difference between cases and controls was not statistically significant [χ2 = 2.81, P = 0.09]. However, a difference was noted between genders, as female neonates benefited from the exposure to steroids [OR = 0.317, 95% CI 0.105-0.956, P = 0.041]. Conclusion: Antenatal glucocorticoids appear to be beneficial in reducing pressure injury prevalence in female neonates. Answer: Yes, very sick neonates born at term do have antenatal risks. A study that focused on term neonates ventilated primarily for respiratory problems identified several antenatal and intrapartum risk factors. These included the mother's age being 35 years or older, being a primigravida, experiencing any antenatal complication, having a birth weight below the 3rd percentile or above the 90th percentile, gestational diabetes, maternal pyrexia, gestation between 37-38 weeks, and delivery by forceps, elective cesarean section, or emergency cesarean section (PUBMED:11580736). These findings suggest that the pathways to neonatal respiratory morbidity in term infants are multifactorial and that certain antenatal conditions and choices made during delivery can increase the risk of neonatal complications.
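The adjusted odds ratios cited in this answer come from multivariate logistic regression on a population-based case-control cohort. The sketch below reproduces that style of analysis on synthetic data whose generating coefficients are the logs of three of the reported ORs (1.9, 1.8, 3.8); the variable names, sample construction, and baseline risk are assumptions for illustration, not the study's dataset or code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 732  # roughly the 182 cases plus 550 controls described above

# Synthetic binary exposures named after three reported risk factors.
df = pd.DataFrame({
    "maternal_age_ge35": rng.integers(0, 2, n),
    "primigravida": rng.integers(0, 2, n),
    "antenatal_complication": rng.integers(0, 2, n),
})

# Simulate case status from a known logistic model so the fitted ORs
# have a target to recover (the assumed intercept of -1.5 is arbitrary).
logit = (-1.5 + np.log(1.9) * df["maternal_age_ge35"]
         + np.log(1.8) * df["primigravida"]
         + np.log(3.8) * df["antenatal_complication"])
df["case"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = sm.add_constant(df[["maternal_age_ge35", "primigravida",
                        "antenatal_complication"]])
fit = sm.Logit(df["case"], X).fit(disp=False)
ci = fit.conf_int()
print(pd.DataFrame({"OR": np.exp(fit.params),
                    "CI_low": np.exp(ci[0]),
                    "CI_high": np.exp(ci[1])}).round(2))
```

Exponentiating the fitted coefficients and their confidence bounds turns log-odds into the OR (95% CI) format quoted throughout the abstract.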
Instruction: Do organizational barriers to pneumococcal and influenza vaccine access exist? Abstracts: abstract_id: PUBMED:17441532 Do organizational barriers to pneumococcal and influenza vaccine access exist? Context: Despite the implementation of a Quebec immunization program against influenza and pneumococcal disease (PQIIP), vaccine coverage has remained low. There have been many studies on personal barriers to vaccination, but few have explored other kinds of barriers. Objectives: To explore the presence of barriers related to the organization of the health care system and to propose recommendations for increasing vaccine coverage. Methods: Within a mixed-methods protocol, a phone survey of 996 people in the target population was conducted, together with a case study following the implementation of the PQIIP across all site and actor categories, based on 43 semistructured interviews and 4 focus groups. Survey data underwent a descriptive statistical analysis. Qualitative analysis followed the Miles and Huberman approach. Results: The results indicate the presence of barriers with regard to information accessibility. These include access to physicians' recommendations, knowledge of vaccine efficacy and safety, and clients' eligibility for the PQIIP. Organizational barriers were also found to limit access to vaccination, especially in terms of restricted choices of time and location. Coordination and incentive mechanisms are not optimal. Removal of organizational barriers depends more on strategic than on structural factors. Conclusion: Addressing organizational barriers should be an important component of strategies aimed at improving vaccine coverage. Public health authorities should focus on strategic management of the information and inter-organizational environment. abstract_id: PUBMED:34086876 Barriers and facilitators to influenza and pneumococcal vaccine hesitancy in rheumatoid arthritis: a qualitative study. Objectives: Immunization is an essential component of RA care. Nevertheless, vaccine coverage in RA is suboptimal. Contextual, individual and vaccine-related factors influence vaccine acceptance. However, barriers and facilitators of vaccination in RA are not well defined. The aim of this study was to assess the perspectives of RA patients and of healthcare professionals (HCPs) involved in RA care on barriers and facilitators regarding influenza and pneumococcal vaccines. Methods: Eight focus groups (four with RA patients and four with HCPs) and eight semi-structured open-ended individual interviews with vaccine-hesitant RA patients were conducted. Data were audio recorded, transcribed verbatim and imported into MAXQDA software. Analysis using the framework of vaccine hesitancy proposed by the Strategic Advisory Group of Experts on Immunization was conducted. Results: RA patients and HCPs reported common and specific barriers and facilitators to influenza vaccination that included contextual, individual and/or group and vaccine- and/or vaccination-specific factors. A key contextual influence on vaccination was patients' perception of the media, pharmaceutical industry, authorities, scientists and the medical community at large. Among the individual-related influences, experiences with vaccination, knowledge/awareness and beliefs about health and disease prevention were considered to impact vaccine acceptance.
Vaccine-related factors including concerns about vaccine side effects such as RA flares, the safety of new formulations, the mechanism of action, access to vaccines and costs associated with vaccination were identified as actionable barriers. Conclusion: Acknowledging RA patients' perceived barriers to influenza and pneumococcal vaccination and implementing specific strategies to address them might increase vaccination coverage in this population. abstract_id: PUBMED:35078659 Predictors of pneumococcal vaccination among Australian adults at high risk of pneumococcal disease. Background: Although nearly all Australian children are vaccinated against pneumococcal disease, pneumococcal vaccine uptake is low among high-risk adults. This study aimed to identify perceived barriers to pneumococcal vaccination among high-risk adults. Methods: This paper reports combined data on pneumococcal vaccination collected from three different online, cross-sectional surveys that were administered in Australia between August 2019 and September 2020. Using Poisson regression, we identified characteristics and beliefs associated with self-reported pneumococcal vaccination among adults aged 65 and over or with chronic health conditions. Results: The weighted estimate for pneumococcal vaccine coverage was 24% for high-risk adults under 65 and 53% for adults aged 65 and over. Nearly half of those under 65 reported they had never heard of the pneumococcal vaccine, while 26% of those aged 65 and over had never heard of the vaccine. Among those under 65, pneumococcal vaccination was associated with high perceived disease susceptibility (PR = 1.97, 95% CI: 1.23, 3.18), not having heard of the pneumococcal vaccine (PR = 0.44, 95% CI: 0.28-0.69), awareness that their chronic health condition puts them at increased risk of pneumonia (PR = 2.44, 95% CI: 1.51-3.98), and having a doctor recommend the vaccine (PR = 3.02, 95% CI: 2.05-4.44). Among adults aged 65 and over, self-reported pneumococcal vaccination was associated with influenza vaccination in the previous 12 months (PR = 4.28, 95% CI: 2.85-6.44) and awareness that they are eligible for free pneumococcal vaccination (PR = 5.02, 95% CI: 2.34-10.77). Conclusion: Awareness of pneumococcal vaccines was low among adults at high risk of pneumococcal disease, which appears to be contributing to low vaccine uptake. A doctor's recommendation was associated with increased uptake of pneumococcal vaccine, so interventions should be developed to promote pneumococcal vaccine uptake in GP practices. abstract_id: PUBMED:28579735 Barriers and Facilitators to Provision of Influenza and Pneumococcal Vaccines in Home Health Care Agencies. The recommendations for influenza and pneumococcal vaccines are widely known yet the rates of both immunizations for home health care patients are 70 percent or less. The purpose of this study was to identify the facilitators and barriers to immunizations ultimately to improve vaccination rates among home health care patients. Using a multi-case study approach with five agencies and one group of administrators, facilitators included providing patients with a vaccine "card" and using the agency electronic medical record (EMR) for decision support/reminders. 
We determined that there were patient barriers (misperceptions about vaccines in general, difficulty in recalling vaccine status) and provider barriers (misperceptions about vaccines among health care workers, home health care agencies not receiving accurate information from other providers or difficulty in determining vaccine status). abstract_id: PUBMED:28685134 Vaccinations against respiratory infections in Arabian Gulf countries: Barriers and motivators. Aim: To study the uptake, barriers and motivators of influenza, pneumococcal, meningococcal and pertussis vaccines among members of the public in Arabian Gulf countries. Methods: A cross-sectional survey was conducted among residents of the Gulf Cooperation Council (GCC) countries. Data were collected electronically through a smartphone app. The survey variables aimed to investigate the respondents' awareness of vaccines against influenza, pneumococcal, meningococcal and pertussis infections. Data were collected concerning the respondents' socio-demographic characteristics, their perceptions of vaccine uptake, and the factors that motivated or demotivated them from taking the influenza vaccine. The data were analysed statistically using SPSS v.23.0. Differences in the characteristics of users from different countries were quantified through bivariate analysis. Other important variables and controlling factors were studied using logistic regression. Results: A total of 1812 respondents participated in the study. Their mean age was 27 years, 82% were male and 24% had ≥ 1 chronic diseases. The overall uptake of influenza vaccine was 17% (21% among "at risk" people) and ranged from 15% in Saudi Arabia to 24% in Qatar. Doctor's advice (23%) and a perception of having low body immunity (21%) were the main cited reasons for being vaccinated, whereas unawareness about the vaccine (43%) was the main barrier. The overall uptake of pneumococcal vaccine in the preceding three years was 22% (25% among "at risk" individuals) and ranged from 0% in Bahrain to 79% in Kuwait. The overall uptake of pertussis vaccine was 16% (31% among "vulnerable" people), and ranged from 7% in Saudi Arabia to 75% in Oman. The overall uptake of meningococcal vaccine was 20% (29% among the "at risk" people) and ranged from 3% in Oman to 50% in Bahrain. Conclusion: The vaccination uptake across GCC countries is suboptimal and varies widely across the countries. Further research is needed to unearth the reasons and formulate an action plan. abstract_id: PUBMED:31334189 Prevalence of influenza and pneumococcal vaccine uptake in Saudi type 2 diabetic individuals. Introduction: Individuals with diabetes are at particularly high risk for many of the negative health consequences associated with influenza and pneumococcal infections. This study aimed to determine the prevalence of influenza and pneumococcal vaccination among a population of type 2 diabetic patients in Saudi Arabia and to determine the factors associated with vaccine uptake. Methods: A cross-sectional survey was conducted among patients with type 2 diabetes at Security Forces Hospital, Riyadh in Saudi Arabia. The survey asked basic demographic questions as well as questions about awareness, vaccination status, and beliefs about the influenza and pneumococcal vaccines. Results: From a total number of 422 responses, 360 participants were ultimately included in the final sample. The overall prevalences of influenza and pneumococcal vaccination in this population were 47.8% and 2.8%, respectively.
In general, there was a very low awareness of the pneumococcal vaccine. Older individuals, unmarried individuals, those with less education, and those living with certain chronic conditions were less likely to have received the influenza vaccine. Beliefs in the importance of vaccination for people with diabetes and in the efficacy of the influenza vaccine, along with not being worried about the side effects of the vaccine, were strongly associated with having received the vaccine. Conclusions: Attention should be given to increasing awareness of the pneumococcal vaccine among people living with diabetes. Particular consideration should also be paid to increasing access to and awareness of both vaccines among those groups that have the lowest prevalence of vaccination and may be at the highest risk for the negative consequences associated with these infections. Finally, educational interventions should be used to increase understanding of the safety and efficacy of the influenza vaccine. abstract_id: PUBMED:36625658 Facilitating access to pneumococcal vaccine for people living with HIV: an experience report. Objective: The article describes a strategy to facilitate access to pneumococcal conjugate vaccine 13 (PCV-13) for people living with HIV/AIDS (PLHIV) during the COVID-19 pandemic. Method: report on the experience regarding the organization of a care service for PLHIV in the city of São Paulo to facilitate access to PCV-13 in the framework of the 2020 influenza vaccination campaign during the COVID-19 pandemic. Results: through the integration between a PLHIV care service and an Immunization Center (CRIE in Portuguese), it was possible to offer PCV-13 to PLHIV at the point of care, reducing physical barriers to immunization access. Thus, of the 1,906 PLHIV who passed through the service during the period March 23-July 31, 2020, 84.4% (1,609) received the influenza vaccine, PCV-13 or both. Of the 1,609 vaccinated, 50.6% (814) were eligible for and received PCV-13. Conclusion: offering the vaccine at the point of care, orienting PLHIV on the importance of vaccination as a disease prevention strategy, and identifying those eligible to receive it were important actions carried out by the institution together with the nursing team to facilitate access to vaccination. abstract_id: PUBMED:36184405 Effectiveness of the 23-valent pneumococcal polysaccharide vaccine against community-acquired pneumonia in older individuals after the introduction of childhood 13-valent pneumococcal conjugate vaccine: A multicenter hospital-based case-control study in Japan. Background: In the era of childhood pneumococcal conjugate vaccine (PCV) immunization, especially 13-valent pneumococcal conjugate vaccine (PCV13) immunization, serotype replacement of Streptococcus pneumoniae and herd immunity in adults have been reported worldwide. Therefore, continuous evaluation of the effectiveness of the pneumococcal vaccine in adults is crucial because vaccine effectiveness may change owing to these factors. The purpose of this study was to evaluate the effectiveness of the 23-valent pneumococcal polysaccharide vaccine (PPSV23) against all-cause pneumonia and pneumococcal pneumonia in older individuals with community-acquired pneumonia (CAP) after the introduction of childhood PCV13 in Japan, a topic that has remained largely unexplored. Methods: We evaluated pneumococcal vaccine effectiveness in this multicenter, matched case-control study conducted in hospitals and clinics.
Cases included patients (aged ≥ 65 years) newly diagnosed with CAP between October 2016 and September 2019. A maximum of five non-pneumonia control patients matched for sex, school grade, date of outpatient visit, and medical institution were selected for each case. Conditional logistic regression models were used to calculate the odds ratios (ORs) and 95% confidence intervals (CIs) of pneumococcal vaccines for the occurrence of all-cause CAP and pneumococcal CAP. Results: The analysis included 740 individuals (142 patients and 598 controls). The median age of participants was 75 years (men: 54%). The adjusted OR for pneumococcal vaccination against all-cause CAP was 1.31 (95% CI: 0.84-2.06), while that for PPSV23 vaccination in the previous 5 years was 1.33 (95% CI: 0.85-2.09). The adjusted OR for PPSV23 vaccination in the previous 5 years against pneumococcal CAP was 0.93 (95% CI: 0.35-2.50). Conclusions: This study was unable to demonstrate the effectiveness of PPSV23 against all-cause and pneumococcal pneumonia after the introduction of childhood PCV13 in Japan. Nonetheless, additional studies are needed to validate these results. abstract_id: PUBMED:25877774 Pneumococcal vaccination for prevention of pneumonia Aging of the immune system, so-called immunosenescence, is well documented as the cause of increased infection rates and severe, often complicated courses of infections in older adults. This is particularly true for pneumococcal pneumonia in older adults; therefore, the standing committee on vaccination of the Robert Koch Institute (STIKO) recommends a once only vaccination with 23-valent pneumococcal polysaccharide vaccine for all persons aged 60 years and over. Furthermore, the 13-valent pneumococcal conjugate vaccine is also available for administration in adults and is recommended by the STIKO for particular indications. The advantage of the pneumococcal conjugate vaccine is the additional induction of a T-cell dependent immune response that leads to good immunogenicity despite immunosenescence. Initial data from a recent randomized controlled trial, so far only presented at conferences, confirm that the conjugate vaccine also provides protection against non-bacteremic pneumococcal pneumonia, which is not provided by the polysaccharide vaccine. Thus, there are two vaccines for prevention of pneumococcal diseases: one with a broader range of serotype coverage but with an uncertain protection against non-bacteremic pneumococcal pneumonia and another one with less serotype coverage but more effective protection. Vaccination of children with the conjugate vaccine also leads to a rapid decrease of infections by the 13 vaccine serotypes even in adults because of herd protection effects. For prevention of pneumonia in older adults the additional benefit of a concurrent application of influenza vaccine and pneumococcal vaccine should be considered. abstract_id: PUBMED:12057607 Influenza and pneumococcal vaccination in Scottish nursing homes: coverage, policies and reasons for receipt and non-receipt of vaccine. A national survey was carried out to determine the coverage of influenza and pneumococcal vaccines, policies, reasons for receipt, non-receipt of vaccine and strategies to improve vaccine coverage in Scottish nursing homes. Of the 550 nursing homes, 72% (394) participated in the study. Overall coverage was 85% for influenza vaccine in 2001-2002 season and 11% for pneumococcal vaccine in the last 5-year period. 
Only 6% (23/394) of homes were reported to have a systematic immunization record. The most frequently stated reasons for improved coverage of both vaccines were clear immunization policies (76%), awareness and education for staff and residents (68%), and consent on behalf of the incompetent residents (66%). The presence of vaccination policies was higher for the influenza vaccine than for the pneumococcal vaccine, whether expressed as verbal agreement (27% versus 3%), written policies with a set target (24% versus 5%), or written policies without a set target (17% versus 2%). Advice from members of the community health care team was the principal reason for the receipt of both vaccines. The predominant reasons for non-receipt of vaccine were refusal by residents and family members (both vaccines) and lack of advice from general practitioners (pneumococcal vaccine). The substantial disparity in coverage of influenza and pneumococcal vaccine reflects the lack of national recommendations and policies for reimbursements for pneumococcal vaccination. These data suggest that greater efforts are needed to improve prevention behaviors of health care professionals and the public, organized vaccine delivery strategies, and systematic vaccination documentation to increase influenza and pneumococcal vaccination rates in nursing homes and other long-term care facilities. Answer: Yes, organizational barriers to pneumococcal and influenza vaccine access do exist. Studies have identified several organizational barriers that limit access to vaccination, particularly in terms of restricted choices of time and location for receiving vaccines. Coordination and incentive mechanisms within the healthcare system are not optimal, and the removal of these barriers depends more on strategic than structural factors (PUBMED:17441532). Additionally, home health care agencies face barriers such as misperceptions about vaccines among healthcare workers, difficulty in determining vaccine status, and not receiving accurate information from other providers (PUBMED:28579735). In the context of rheumatoid arthritis patients, barriers include concerns about vaccine side effects, safety of new formulations, access to vaccines, and costs associated with vaccination (PUBMED:34086876). In Australia, nearly half of high-risk adults under 65 reported never having heard of the pneumococcal vaccine, indicating a lack of awareness as a significant barrier (PUBMED:35078659). In the Arabian Gulf countries, unawareness about the vaccine was the main barrier to vaccination uptake (PUBMED:28685134). In Saudi Arabia, there was very low awareness of the pneumococcal vaccine among people living with diabetes, and beliefs about the importance and efficacy of vaccination were strongly associated with vaccine uptake (PUBMED:31334189). Furthermore, during the COVID-19 pandemic, a strategy to facilitate access to pneumococcal conjugate vaccine for people living with HIV/AIDS was implemented, highlighting the need to address physical barriers to accessing immunization (PUBMED:36625658). Overall, these findings suggest that addressing organizational barriers should be a component of strategies aimed at improving vaccine coverage, with a focus on strategic management of information and the inter-organizational environment.
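One of the abstracts above (PUBMED:35078659) derives prevalence ratios from Poisson regression, a common choice when odds ratios would overstate associations for a frequent binary outcome. A minimal sketch of that approach on simulated survey data follows; the predictor names, effect sizes, and baseline prevalence are assumptions, not the published estimates, and the robust (sandwich) variance is requested via cov_type="HC0".

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000

# Simulated respondents with two binary predictors and assumed effects:
# baseline vaccination prevalence 10%, PR 3.0 for a doctor's
# recommendation, PR 2.0 for vaccine awareness.
df = pd.DataFrame({
    "doctor_recommended": rng.integers(0, 2, n),
    "aware_of_vaccine": rng.integers(0, 2, n),
})
p = 0.10 * 3.0 ** df["doctor_recommended"] * 2.0 ** df["aware_of_vaccine"]
df["vaccinated"] = rng.binomial(1, p.clip(upper=1.0))

# Poisson regression with robust standard errors yields prevalence
# ratios for the binary outcome.
X = sm.add_constant(df[["doctor_recommended", "aware_of_vaccine"]])
fit = sm.GLM(df["vaccinated"], X,
             family=sm.families.Poisson()).fit(cov_type="HC0")
print(np.exp(fit.params).round(2))      # baseline prevalence and PRs
print(np.exp(fit.conf_int()).round(2))  # 95% confidence intervals
```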
Instruction: Infundibular Recess Angle Reduction After Endoscopic Third Ventriculostomy: Does It Reflect Clinical Success? Abstracts: abstract_id: PUBMED:25871782 Infundibular Recess Angle Reduction After Endoscopic Third Ventriculostomy: Does It Reflect Clinical Success? Background: Although the ventricular size is significantly reduced after endoscopic third ventriculostomy (ETV) in most successfully treated patients, ventricular size reduction is not always seen after a successful ETV. Practical and reliable radiologic parameters are still needed to assess the clinical success of an ETV. Methods: We retrieved the clinical and radiologic data of patients who underwent an ETV. Patients with the following criteria were included: (1) preoperative magnetic resonance imaging studies available, (2) postoperative magnetic resonance imaging studies done within the first 2 postoperative weeks, and (3) the infundibular recess clearly visible on preoperative and postoperative sagittal magnetic resonance imaging. Preoperative and postoperative measurements of the angle of the infundibular recess of the third ventricle were performed on midsagittal T1-weighted, T2-weighted, fast imaging employing steady-state acquisition, or constructive interference in steady state images. Results: The extent of reduction of the infundibular recess angle predicted the clinical outcome of ETV during the early postoperative period with a high degree of accuracy. The average reduction was about 48% in successful procedures versus only 15% in failed procedures. Conclusions: The degree of reduction of the angle of the infundibular recess of the third ventricle correlated with the amount of third ventricular decompression after ETV. Most importantly, such a reduction was noted to occur during the early postoperative period when radiologic changes are less pronounced. Assessment of change in infundibular recess angle measurement is easy to perform and may prove helpful in cases with no clear-cut clinical evidence of success of ETV. abstract_id: PUBMED:35192025 Persisting embryonal infundibular recess (PEIR) and transsphenoidal-transsellar encephaloceles: distinct entities or constituents of one continuum? Persisting embryonal infundibular recess (PEIR) is a very rare anomaly of the floor of the third ventricle in which the embryonic morphology of the infundibular recess (IR) persists. The exact underlying mechanism of development of PEIR is unknown, and the anomaly has been reported as an isolated finding or in association with other conditions. On the other hand, trans-sphenoidal encephaloceles are the rarest form of basal encephaloceles. The trans-sphenoidal trans-sellar encephalocele (TSE) is the least common variant in which the pituitary gland, pituitary stalk, optic pathways, parts of the third ventricle and IR may be present within the encephalocele. We recently treated one patient with TSE. Based on the observed morphological similarity of the IR in our patient and in the published cases of PEIR, we reviewed the literature in order to validate the hypothesis that PEIR and TSE may belong to one spectrum of malformations. Across the published reports, the morphology of the IR in TSE closely resembles that in PEIR. Moreover, radiological, patho-anatomical, and embryological evidence supports our hypothesis that PEIR and TSE are most likely the two extremes of the same continuum of malformations. abstract_id: PUBMED:30459899 Persisting Embryonal Infundibular Recess Masquerading as a Nasal Mass.
Persisting embryonal infundibular recess (PEIR) is a rare anomaly of the development of the posterior pituitary wherein there is a defect in the third ventricular floor. Earlier reports have found PEIR descending only up to the sella. However, this is the first case of PEIR presenting as a pulsatile nasal mass. A 35-year-old female presented to the otorhinolaryngologist with hyposmia. Diagnostic nasal endoscopy revealed a pulsatile nasal mass. Magnetic resonance imaging revealed a cystic lesion extending from the third ventricular floor to the nasal cavity. There was no associated hydrocephalus or empty sella. The case was successfully managed by the endoscopic endonasal transsphenoidal approach. abstract_id: PUBMED:33980787 Contrast Enhancement of the Normal Infundibular Recess Using Heavily T2-weighted 3D FLAIR. Purpose: The purpose of the present study was to evaluate contrast enhancement of the infundibular recess in the normal state using heavily T2-weighted 3D fluid-attenuated inversion recovery (FLAIR) (HT2-FLAIR). Methods: Twenty-six patients were retrospectively recruited. We subjectively assessed overall contrast enhancement of the infundibular recess between postcontrast, 4-hour (4-h) delayed postcontrast, and precontrast HT2-FLAIR images. We also objectively conducted chronological and spatial comparisons by measuring the signal intensity (SI) ratio (SIR). Chronological comparisons were performed using the ratio of the SI of the infundibular recess to the SI of the midbrain (SIR_IR-MB). Spatial comparisons were conducted by comparing the ratio of the SI on postcontrast HT2-FLAIR to that on precontrast HT2-FLAIR (SIR_Post-Pre) in the infundibular recess with the same ratio in other cerebrospinal fluid (CSF) spaces, including the superior part of the third ventricle, lateral ventricles, fourth ventricle, and interpeduncular cistern. Results: In the subjective analysis, all cases showed contrast enhancement of the infundibular recess on both postcontrast and 4-h delayed postcontrast HT2-FLAIR, and showed weaker contrast enhancement of the infundibular recess on 4-h delayed postcontrast HT2-FLAIR than on postcontrast HT2-FLAIR. In the objective analysis, SIR_IR-MB was highest on postcontrast images, followed by 4-h delayed postcontrast images. SIR_Post-Pre was significantly higher in the infundibular recess than in the other CSF spaces. Conclusion: The present results demonstrated that the infundibular recess was enhanced on HT2-FLAIR after an intravenous gadolinium injection. The infundibular recess may be a potential source of the leakage of intravenously administered gadolinium into the CSF. abstract_id: PUBMED:31075772 Anterior third ventricular height and infundibulochiasmatic angle: two novel measurements to predict clinical success of endoscopic third ventriculostomy in the early postoperative period. Objective: The authors sought to develop a set of parameters that reliably predict the clinical success of endoscopic third ventriculostomy (ETV) when assessed before and after the operation, and to establish a plan for MRI follow-up after this procedure. Methods: This retrospective study involved 77 patients who had undergone 78 ETV procedures for obstructive hydrocephalus between 2010 and 2015. Constructive interference in steady-state (CISS) MRI evaluations before and after ETV were reviewed, and 4 parameters were measured.
Two well-known standard parameters, fronto-occipital horn ratio (FOHR) and third ventricular index (TVI), and 2 newly defined parameters, infundibulochiasmatic (IC) angle and anterior third ventricular height (TVH), were measured in this study. Associations between preoperative measurements of and postoperative changes in the 4 variables and the clinical success of ETV were analyzed. Results: Of the 78 ETV procedures, 70 (89.7%) were successful and 8 (10.3%) failed. On the preoperative MR images, the mean IC angle and anterior TVH were significantly larger in the successful procedures. On the 24-hour postoperative MR images of the successful procedures, the mean IC angle declined significantly from 114.2° to 94.6° (p < 0.05) and the mean anterior TVH declined significantly from 15 to 11.2 mm (p < 0.05). The mean percentage reduction of the IC angle was 17.1%, and that of the anterior TVH was 25.5% (both p < 0.05). On the 1-month MR images of the successful procedures, the mean IC angle declined significantly from 94.6° to 84.2° (p < 0.05) and the mean anterior TVH declined significantly from 11.2 to 9.3 mm (p < 0.05). The mean percentage reductions in IC angle (11%) and anterior TVH (16.9%) remained significant at this time point but were smaller than those observed at 24 hours. The 6-month and 1-year postoperative MR images of the successful group showed no significant changes in mean IC angle or mean anterior TVH. Regarding the unsuccessful procedures, there were no significant changes observed in IC angle or anterior TVH at any of the time points studied. Reduction of IC angle and reduction of anterior TVH on 24-hour postoperative MR images were significantly associated with successful ETV. However, no clinically significant association was found between FOHR, TVI, and ETV success. Conclusions: Assessing the IC angle and anterior TVH on preoperative and 24-hour postoperative MR images is useful for predicting the clinical success of ETV. These 2 measurements could also be valuable as radiological follow-up parameters. abstract_id: PUBMED:35199208 Persisting embryonal infundibular recess in a case of TITF-1 gene mutation. The thyroid transcription factor 1 (TITF-1) gene plays an important role in the development of the ventral forebrain, thyroid and lungs. Mutations of this gene are known to cause benign hereditary chorea (BHC) and can cause the full spectrum of abnormalities seen in the brain-thyroid-lung syndrome. Abnormalities of the ventral forebrain on imaging have been variably documented in the literature. Multiple previous reports describe a cystic pituitary mass, as well as duplication of the pituitary stalk and communication between an intrasellar cyst and the third ventricle. The initial MRI performed in our case was interpreted as an intrasellar cyst, but the high-resolution MRI performed later was able to resolve this as a persisting embryonal infundibular recess (PEIR), rather than the cystic pituitary mass which has previously been described. This case illustrates the role of the TITF-1 gene in the development of the pituitary and hypothalamus. abstract_id: PUBMED:29886298 Persisting Embryonal Infundibular Recess: Case Report and Imaging Findings. Background: Persisting embryonal infundibular recess (PEIR) is a rare anomaly of the third ventricular floor, with only 10 cases having been reported to date. Accurate imaging diagnosis of this entity is essential to avoid unnecessary surgery and its associated morbidity. 
The following case is presented to emphasize its recognition in the differential diagnosis of pituitary cystic lesions. Case Description: A 38-year-old woman was referred for evaluation of a pituitary cystic mass. Magnetic resonance imaging (MRI) revealed a 10.2-mm cystic mass in an enlarged sella turcica. Three-dimensional high-resolution MRI using a constructive interference in steady-state sequence clearly delineated a communicating tract between the third ventricle and the sellar cyst through the pituitary stalk. A final diagnosis of PEIR was made. The patient was managed successfully with conservative treatment, and the lesion remained stationary over a 5-year follow-up period. Conclusions: This is the first case report demonstrating imaging features of PEIR on 3-dimensional high-resolution MRI. Although PEIR is a rare anomaly, imaging diagnosis of this entity is important because confusion with other pituitary cystic lesions and attempted surgical removal may lead to serious complications.

abstract_id: PUBMED:28744491 Interlaminar endoscopic lateral recess decompression: surgical technique and early clinical results. Background: Lateral recess stenosis is a common pathology causing de-novo or residual radicular pain following lumbar spine surgery. Diagnostic criteria and treatment strategies for symptomatic lateral recess stenosis are not well established. Methods: We identified ten patients in our prospective patient database (n=146) who underwent endoscopic interlaminar decompression for unilateral symptomatic lateral recess stenosis. Lateral recess height and angle were measured on axial T2-weighted MRI. Values from the symptomatic side were compared to the contralateral side, which served as the asymptomatic control. Oswestry Disability Index (ODI) and Visual Analogue Scale (VAS) for back and leg pain were collected preoperatively, postoperatively and at last follow-up. Results: Preoperative MRI revealed that both lateral recess angle and height were significantly smaller on the symptomatic compared to the asymptomatic side (angle: 19.3° vs. 35.7°; height: 2.9 vs. 5.7 mm; P<0.01). All patients tolerated endoscopic interlaminar decompression well and half of the patients were discharged on the day of surgery. At last follow-up (12.6±1.7 months), 8 out of 10 patients experienced a minimal clinically important improvement in their VAS for ipsilateral leg pain, which improved from 7.2±0.5 preoperatively to 2.5±0.8 postoperatively (P=0.001). The back pain VAS also improved (preoperatively 5.1±1.1 vs. postoperatively 1.7±0.9, P<0.05). The ODI improved from 50±5.8 preoperatively to 22.2±5.1 at last follow-up (P=0.001). One patient experienced persistent leg pain. Conclusions: Lateral recess height and angle correlate with symptomatic lateral recess stenosis, which is effectively treated using interlaminar endoscopic lateral recess decompression.

abstract_id: PUBMED:35472085 Basal Recess in Third Ventricle Tumors: A Pathological Feature Defining a Clinical-Topographical Subpopulation of Papillary Craniopharyngiomas. This study investigates the presence of a hollow recess at the midline undersurface of tumors primarily localized within the third ventricle (3V). This structure was originally identified by magnetic resonance imaging (MRI) of 6 3V craniopharyngiomas (CPs) from our series and was then methodically scrutinized in autopsy studies (n = 1091) and MRI scans (n = 5558) of CPs and in 1251 3V tumors reported in the medical literature from 1839 to 2021.
A recess at the tumor base was identified in 110 CPs, 95 with a verified papillary histology (papillary craniopharyngioma [PCP]) and 15 with the typical gross appearance of PCP. Topographically, 90 tumors were strictly within the 3V (82%); 20 developed at the infundibulo-tuberal region of the 3V floor (18%). Morphologically, 2 main types of recess were identified: (i) a long, narrow recess with either a duct-like or a tubular shape that reached the central region of the CP (n = 47, 42.5%); and (ii) a short recess extending only a few millimeters into the lesion, either with a duct-like or a shallow cleft-like morphology (n = 63, 57.5%). Thus, the presence of a basal recess represents a pathological hallmark of a subpopulation of 3V PCPs. The presumed nature and diagnostic significance of this novel finding are comprehensively addressed.

abstract_id: PUBMED:27863276 Simplest radiological measurement related to clinical success in endoscopic third ventriculostomy. Objective: Radiologic criteria for a successful endoscopic third ventriculostomy are not clearly defined and there is an ongoing need for determining the simplest and strongest radiological criteria for this purpose. This paper aims to determine the easiest radiological parameter related to surgical outcome. Methods: Between January 2012 and December 2015, all patients receiving endoscopic third ventriculostomy with various indications were reviewed, and 29 patients whose preoperative and early postoperative 3D-CISS images were available were studied. There were 13 males and 16 females, and there were 11 pediatric cases (mean age: 9.90±5.2; range: 2-18). The mean age of the entire population was 26.58±18.32 (range: 2-68 years). Measurements were performed using the ruler tool of freely distributed medical imaging software. Simple ruler measurements of ventricular floor depression, lamina terminalis bowing, anterior commissure to tuber cinereum distance, mamillary body to lamina terminalis distance, third ventricular width, frontal horn width and occipital horn width were recorded and compared between successful and failed interventions. Results: Of the ventriculostomies, 22 (75.9%) were considered successful and 7 (24.1%) failed at the last follow-up visit. Of the measurements performed, only those related to the third ventricle itself were significantly higher in the failed group. There was no association with lateral ventricular measurements. Conclusion: Simple ruler measurements of the suggested distances significantly correlate with clinical success. After validating our results with a higher number of patients, complex measurements and calculations to determine the link between clinical success and radiological success of ventriculostomy procedures may not be needed.

Answer: Yes, the reduction of the infundibular recess angle after endoscopic third ventriculostomy (ETV) does reflect clinical success. A study found that the extent of reduction of the infundibular recess angle predicted the clinical outcome of ETV during the early postoperative period with a high degree of accuracy. The average reduction was about 48% in successful procedures versus only 15% in failed procedures. This suggests that the degree of reduction of the angle of the infundibular recess of the third ventricle correlated with the amount of third ventricular decompression after ETV. Most importantly, such a reduction was noted to occur during the early postoperative period when radiologic changes are less pronounced.
Therefore, assessment of the change in the infundibular recess angle is easy to perform and may prove helpful in cases with no clear-cut clinical evidence of the success of ETV (PUBMED:25871782).
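As a quick check on how these percentage reductions are derived (the formula is the standard relative-change calculation; the worked numbers are ours, taken from the 24-hour means reported in PUBMED:31075772, so the result matches the published 17.1% only up to rounding of the means):

\[
\text{reduction} = \frac{\theta_{\text{pre}} - \theta_{\text{post}}}{\theta_{\text{pre}}} \times 100\% = \frac{114.2^{\circ} - 94.6^{\circ}}{114.2^{\circ}} \times 100\% \approx 17.2\%
\]

The roughly 48% versus 15% figures quoted in the answer are the same calculation applied to successful and failed ETV procedures, respectively.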
Instruction: The first general practitioner hospital in The Netherlands: towards a new form of integrated care?

Abstracts:

abstract_id: PUBMED:15119519 The first general practitioner hospital in The Netherlands: towards a new form of integrated care? Objective: To describe the types of patients admitted to the first Dutch general practitioner (GP) hospital, their health-related quality of life and its substitute function. Design: A prospective observational study. Setting: The remaining 20-bed ward of a former district general hospital west of Amsterdam; a region with 62,000 inhabitants and 26 GPs. Subjects: All patients admitted during the 12 months between 1 June 1999 and 1 June 2000. Main Outcome Measures: Patients' health-related quality of life (Medical Outcome Study 36-item Short Form Health Survey, Groningen Activities Restriction Scale), GPs' assessments of severity of illness (DUSOI/WONCA Severity of Illness Checklist) and alternative modes of care. Results: In total, 218 admissions were recorded, divided into 3 bed categories: GP beds (n = 131), rehabilitation beds (n = 62) and nursing home beds (n = 25). The mean age of all patients was 76 years. Main reasons for admission were immobilization due to trauma at home (GP beds), rehabilitation from surgery (rehabilitation beds) and stroke (nursing home beds). Overall, patients showed a poor health-related quality of life on admission. If the GP beds had not been available, the GPs estimated that the admissions would have been almost equally divided among home care, nursing home and hospital care. The severity of the diagnosis on admission of the 'hospital-care group' appeared to be significantly higher than in the other care groups. Conclusion: The GP hospital appears to provide a valuable alternative to home care, nursing home care and hospital care, especially for elderly patients with a poor health-related quality of life who are in need of short-term medical and nursing care.

abstract_id: PUBMED:28442341 Impact of a general practitioner-led integrated model of care on the cost of potentially preventable diabetes-related hospitalisations. Aim: To estimate potential savings for Australia's health care system through the implementation of an innovative Beacon model of care for patients with complex diabetes. Methods: A prospective controlled trial was conducted comparing a multidisciplinary, community-based, integrated primary-secondary care diabetes service with usual care at a hospital diabetes outpatient clinic. We extracted patient hospitalisation data from the Queensland Hospital Admitted Patient Data Collection and used Australian Refined Diagnosis Related Groups to assign costs to potentially preventable hospitalisations for diabetes. Results: 327 patients with complex diabetes referred by their general practitioner for specialist outpatient care were included in the analysis. The integrated model of care had potential for national cost savings of $132.5 million per year. Conclusions: The differences in hospitalisations attributable to better integrated primary/secondary care can yield large cost savings. Models such as the Beacon are highly relevant to current national health care reform initiatives to improve the continuity and efficiency of care for those with complex chronic disease in primary care.

abstract_id: PUBMED:32592995 Proactive Integrated Consultation-Liaison Psychiatry: A new service model for the psychiatric care of general hospital inpatients.
Objective: To describe a new service model for the psychiatric care of general hospital inpatients, called Proactive Integrated Consultation-Liaison Psychiatry ('Proactive Integrated Psychological Medicine' in the UK). Method: The new service model was developed especially for general hospital inpatient populations with multimorbidity, such as older medical inpatients. Its design was informed by the published literature and the clinical experience of C-L psychiatrists. It was operationalized by a process of iterative piloting. Results: The rationale for the new model and the principles underpinning it are outlined. Details of how to implement it, including a service manual and associated workbook, are provided. The training of clinicians to deliver it is described. The effectiveness and cost-effectiveness of this new service model are being evaluated. Whilst we have found it feasible to deliver and well-accepted by ward teams, potential challenges to its wider implementation are discussed. Conclusion: Proactive Integrated Consultation-Liaison Psychiatry (PICLP) is a fusion of proactive consultation and integrated care, operationalized in a field-tested service manual. Initial experience indicates that it is feasible to deliver. Its effectiveness and cost-effectiveness for older patients on acute medical wards are currently being evaluated in a large multicentre randomized controlled trial (The HOME Study).

abstract_id: PUBMED:32595094 New integrated care models in England associated with small reduction in hospital admissions in longer-term: A difference-in-differences analysis. Closer integration of health and social care services has become a cornerstone policy in many developed countries, but there is still debate over what population and service level is best to target. In England, the 2019 Long Term Plan for the National Health Service included a commitment to spread the integration prototypes piloted under the Vanguard 'New Care Models' programme. The programme, running from 2015 to 2018, was one of the largest pilots in English history, covering around 9% of the population. It was largely intended to design prototypes aimed at reducing hospital utilisation by moving specialist care out of hospital into the community and by fostering coordination of health, care and rehabilitation services for (i) the whole population ('population-based sites'), or (ii) care home residents ('care home sites'). We evaluate and compare the efficacy of the population-based and care home site integrated care models in reducing hospital utilisation. We use area-level monthly counts of emergency admissions and bed-days obtained from administrative data using a quasi-experimental difference-in-differences design. We found that Vanguard sites had higher hospital utilisation than non-participants in the pre-intervention period. In the post-intervention period, there is clear evidence of a substantial increase in emergency admissions among non-Vanguard sites. The Vanguard integrated care programme slowed the rise in emergency admissions, especially in care home sites and in the third and final year. There was no significant reduction in bed-days. In conclusion, integrated care policies should not be relied upon to make large reductions in hospital activity in the short-run, especially for population-based models.

abstract_id: PUBMED:978642 A general-practitioner ward in a new district general hospital.
So far there are relatively few general-practitioner wards in district general hospitals in the National Health Service. The work of one such general-practitioner ward at Queen Mary's Hospital, Sidcup, is described, and the advantages of this system of care for patients and doctors are discussed.

abstract_id: PUBMED:34216352 Clinical pharmacists in Dutch general practice: an integrated care model to provide optimal pharmaceutical care. Background: Medication-related harm is a major problem in healthcare. New models of integrated care are required to guarantee safe and efficient use of medication. Aim: To prevent medication-related harm by integrating a clinical pharmacist in the general practice team. This best practice paper provides an overview of 1. the development of this function and the integration process and 2. its impact, measured with quantitative and qualitative analyses. Setting: Ten general practices in the Netherlands. Development and implementation of the (pragmatic) experiment: We designed a 15-month workplace-based post-graduate learning program to train pharmacists to become clinical pharmacists integrated in general practice teams. In close collaboration with general practitioners, clinical pharmacists conduct clinical medication reviews (CMRs), hold patient consultations for medication-related problems, carry out quality improvement projects and educate the practice staff. As part of the Pharmacotherapy Optimisation through Integration of a Non-dispensing pharmacist in a primary care Team (POINT) intervention study, ten pharmacists worked full-time in general practices for 15 months and concurrently participated in the training program. Evaluation of this integrated care model included both quantitative and qualitative analyses of the training program, professional identity formation and effectiveness on medication safety. Evaluation: The integrated care model improved medication safety: fewer medication-related hospitalisations occurred compared to usual care (rate ratio 0.68 (95% CI: 0.57-0.82)). Essential to this were the workplace-based training program and full integration in the GP practices: this supported the development of a new professional identity as clinical pharmacist. This new caregiver proved to align well with the general practitioner. Conclusion: A clinical pharmacist in general practice proves to be a feasible integrated care model for improving the quality of drug therapy.

abstract_id: PUBMED:28196828 Systematic review of the effectiveness, barriers and facilitators to general practitioner engagement with specialist secondary services in integrated palliative care. The general practitioner (GP) has a critical role in an integrated model of palliative care as they often know the patient and carer well, are experts in generalist care and have knowledge of health and social services in the community. Specialist palliative services have insufficient capacity to meet demand, and those with non-cancer terminal conditions and those from rural and remote areas are underserved. Research has focused on improving access to palliative care by engaging the GP with specialist secondary services in integrated palliative care. Objectives: (1) Evaluate the effectiveness of interventions designed to engage GPs and specialist secondary services in integrated palliative care; and (2) identify the personal, system and structural barriers and facilitators to integrated palliative care. Method: MEDLINE, EMBASE and CINAHL were searched.
Any study of a service that engaged the GP with specialist secondary services in the provision of palliative care was included. GP engagement was defined as any organised cooperation between the GP and specialist secondary services in the care of the patient, including shared consultations, case conferences that involved at least both the GP and the specialist clinician and/or other secondary services, and/or any formal shared care arrangements between the GP and specialist services. The specialist secondary service is either a specialist palliative service or a service providing specialist care to a palliative population. A narrative framework was used to describe the findings. Results: 17 studies were included. There is some evidence that integrated palliative care can reduce hospitalisations and maintain functional status. There are substantial barriers to providing integrated care. Principles and facilitators of the provision of integrated palliative care are discussed. Conclusions: This is an emerging field and further research is required to assess the effectiveness of different models of integrated palliative care.

abstract_id: PUBMED:30326535 Psychosomatic Specialist Consultation in the General Practitioner's Office: A New Model of Integrated Care. A high number of patients with mental health disorders currently do not receive minimally adequate treatment, but remain solely in general practice. This is often due to long waiting times for psychotherapy, patient-related barriers or diagnostic uncertainty. For this reason, we have developed a care model that is applied directly in the general practitioner's (GP) office and is open to any form of psychosomatic disorder. The cornerstone of the model consists of a psychosomatic specialist consultation in the GP's office. Preliminary results concerning utilization and the target population show a high level of acceptance among patients, GPs, and psychosomatic specialists.

abstract_id: PUBMED:32596973 Association between potential primary care emergency service and general practitioner care utilisation in New South Wales. Objective: To examine patterns of potential primary care (PPC) ED presentations and any association between PPC ED presentations and frequency of general practitioner (GP) care utilisation in New South Wales, Australia. Methods: A retrospective cross-sectional study of 6,221,762 New South Wales patients who had at least one service in public hospitals, EDs or Medical Benefit Schedule-claimable GP services between 2013/2014 and 2014/2015 was conducted to examine the association between PPC ED presentations and GP care utilisation, using logistic regression adjusted for comorbidity index and a number of other covariates. Data over 2010/2011 to 2014/2015 are included for analyses of trends and patterns in PPC ED and GP care utilisation. Results: Forty-one percent of the ED presentations were PPC ED presentations over the 5 years 2010/2011 to 2014/2015. Population rates of PPC ED presentations and GP care both increased over the period, with higher PPC ED presentation rates in regional areas, and higher GP care rates in major cities. GP care utilisation was associated with reduced odds of PPC ED presentations, with the adjusted odds ratios ranging from 0.28 for patients with one GP care service to 0.48 for patients with five or more GP care services, compared with patients with none. An increased comorbidity index was also associated with an increased risk of PPC ED presentations.
Conclusion: GP care utilisation was associated with a reduced risk of any PPC ED presentation after adjusting for comorbidity index and the other factors.

abstract_id: PUBMED:32893290 General practitioners' perspectives of the integrated health care system: a cross-sectional study in Wuhan, China. Background: In recent years, the Chinese government has introduced a new system called the integrated health care system to deepen health care reform and enhance the capacity of primary health care services. Objective: We aimed to investigate the perceptions and opinions of general practitioners (GPs) on the integrated health care system. Methods: A cross-sectional investigation involving 764 GPs in Hubei province of China was conducted. We used a self-designed questionnaire to collect information on demographics and on GPs' knowledge of and attitudes towards the integrated health care system. Data on 749 GPs with completed questionnaires were analysed descriptively. We conducted Spearman rank correlation analysis to test correlations in the ordinal data. Results: 60.92% of the 749 GPs were familiar with the integrated health care system. 88.25% of the GPs were inclined to support its development and 70.09% thought it could promote downward referral of patients. The GPs' familiarity with the integrated health care system was significantly related to professional guidance from doctors in higher hospitals (P < 0.05), but not to GPs' job titles (P > 0.05). 78.0% of GPs thought that the main benefit of the integrated health care system was the reduction of patients' health care costs; 70.1% believed that the greatest difficulty was insufficient publicity. Conclusions: GPs' overall awareness of the integrated health care system was moderate. Most of the GPs supported the integrated health care system and believed that it contributed to decreasing patients' health care costs. Insufficient publicity and the incomplete two-way referral mechanism were regarded as the main obstacles to its development.

Answer: The first general practitioner (GP) hospital in The Netherlands, described in PUBMED:15119519, represents a novel approach to integrated care. This hospital provided a valuable alternative to home care, nursing home care, and hospital care, particularly for elderly patients with a poor health-related quality of life who required short-term medical and nursing care. The study observed that if the GP beds had not been available, the admissions would have been almost equally divided among home care, nursing home, and hospital care. The GP hospital catered to patients with various needs, including immobilization due to trauma at home, rehabilitation from surgery, and stroke. The severity of the diagnosis on admission of the 'hospital-care group' was significantly higher than in the other care groups, indicating that the GP hospital played a crucial role in managing more severe cases that might otherwise have required traditional hospital admission. This model of integrated care aligns with the broader trend of integrating primary and secondary care services to improve patient outcomes and reduce healthcare costs. For instance, the general practitioner-led integrated model of care for patients with complex diabetes, as studied in PUBMED:28442341, demonstrated potential for significant national cost savings by reducing potentially preventable diabetes-related hospitalizations.
Similarly, the Proactive Integrated Consultation-Liaison Psychiatry model, described in PUBMED:32592995, was developed for general hospital inpatient populations with multimorbidity and is currently being evaluated for its effectiveness and cost-effectiveness. The integration of clinical pharmacists into general practice teams in the Netherlands, as per PUBMED:34216352, is another example of an integrated care model that improved medication safety and aligned well with general practitioners. Additionally, the integrated health care system in England, analyzed in PUBMED:32595094, showed a small reduction in hospital admissions in the longer term, particularly in care home sites. Overall, the first GP hospital in The Netherlands is part of a growing movement towards integrated care models that aim to provide more coordinated, efficient, and patient-centered care, as evidenced by various studies and initiatives across different healthcare systems (PUBMED:15119519, PUBMED:28442341, PUBMED:32592995, PUBMED:34216352, PUBMED:32595094).
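As a brief note on effect size: the rate ratio reported for the clinical-pharmacist model in PUBMED:34216352 can be restated as a relative reduction (this arithmetic is ours, not stated in the abstract):

\[
1 - 0.68 = 0.32,
\]

that is, roughly 32\% fewer medication-related hospitalisations than usual care, with the 95\% CI of 0.57-0.82 translating to a reduction of about 18\% to 43\%.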
Instruction: Holmium laser lithotripsy with semi-rigid ureteroscopy: a first-choice treatment for impacted ureteral stones in children?

Abstracts:

abstract_id: PUBMED:25415256 Holmium laser lithotripsy with semi-rigid ureteroscopy: a first-choice treatment for impacted ureteral stones in children? Background: We aimed to assess the effectiveness of semi-rigid ureteroscopy and holmium laser lithotripsy in the treatment of impacted ureteral stones in children. Material/methods: We evaluated a total of 32 children under the age of 18 years treated with ureteroscopic holmium laser lithotripsy for impacted ureteral stones between January 2005 and July 2013. The stone-free state was defined as the absence of any residual stone on radiologic evaluation performed 4 weeks postoperatively. Complications were evaluated according to the modified Clavien classification. Results: The mean patient age was 9.5±5.1 years (range 1-18 years). Seven (21.8%) of the stones were located in the proximal ureter, 9 (28.2%) were in the mid-ureter, and 16 (50%) were in the distal ureter. The mean stone size was calculated as 10.46±3.8 mm² (range 5-20). The stone-free rate was 93.75% (30/32 patients) following primary URS. Additional treatment was required for only 2 (6.25%) of the patients. After the procedure, a D-J stent was placed in all the patients. The total complication rate was 15.6% (5 patients). The 10 total complications in these 5 patients were 5 (15.6%) Grade I, 1 (3.1%) Grade II, 2 (6.25%) Grade IIIa, and 2 (6.25%) Grade IIIb. The mean follow-up period was 16.5 months (range 3-55). Conclusions: For the treatment of impacted ureteral stones in children, holmium laser lithotripsy with semi-rigid ureteroscopy, with its low retreatment requirement and acceptable complication rates, is an effective and reliable method in experienced and skilled hands as a first-choice treatment approach.

abstract_id: PUBMED:36914419 Application of Flexible Holmium Laser Sheath in Rigid Ureteroscopy for the Treatment of Impacted Upper Ureteral Stones. Objective: We explored the value of a flexible holmium laser sheath combined with rigid ureteroscopy (r-URS) in treating impacted upper ureteral stones, compared with r-URS alone. We also assessed its effectiveness, safety and cost, and investigated its prospects for use in community or primary hospitals. Methods: From December 2018 to November 2021, 158 patients with impacted upper ureteral stones were selected from Yongchuan Hospital of Chongqing Medical University. Seventy-five patients in the control group were treated with r-URS, and 83 patients in the experimental group were treated with r-URS combined with a flexible holmium laser sheath if necessary. The operation time, postoperative hospital stay, hospitalization expenses, stone clearance rate after r-URS, the rates of auxiliary extracorporeal shock wave lithotripsy (ESWL) and auxiliary flexible ureteroscopy, the incidence of postoperative complications and the stone clearance rate after one month were recorded. Results: The postoperative hospital stay, stone clearance rate after r-URS, the rate of auxiliary ESWL, the rate of auxiliary flexible ureteroscopy, and total hospitalization expenses in the experimental group were significantly lower than those in the control group (p < 0.05). There was no significant difference in operation time, postoperative complications, or stone clearance rate after one month between the two groups (p > 0.05).
Conclusions: r-URS combined with a flexible holmium laser sheath in treating impacted upper ureteral stones can improve the stone clearance rate and reduce hospitalization expenses. It therefore has clear application value in community or primary hospitals.

abstract_id: PUBMED:37334262 Combined treatment of impacted ureteral stones: Holmium laser and pneumatic ballistic. Impacted stones remain a challenging issue for urologists and are usually treated clinically with a single surgical procedure. In this paper, we report a case of combined holmium laser and pneumatic ballistic lithotripsy for the treatment of an impacted ureteral stone. The postoperative examination showed that the stone was cleared and no complications occurred.

abstract_id: PUBMED:24581216 Comparison of Ho:YAG laser and pneumatic lithotripsy in the treatment of impacted ureteral stones: an analysis of risk factors. The aim was to compare pneumatic and holmium:yttrium-aluminum-garnet laser lithotripsy in the treatment of impacted ureteral stones in different locations and to identify the risk factors for complications. Between March 2005 and November 2012, a total of 230 patients underwent ureteroscopic lithotripsy for impacted stones. Of the patients, 117 had pneumatic and 113 had laser lithotripsy for the fragmentation of the stones. Treatment outcomes, based on evidence of stone-free status, were evaluated. Preoperative, operative, and postoperative follow-up findings were analyzed and compared. There was a difference between the two groups in overall stone clearance rate (93.8% vs. 80.3%, p = 0.002). There was no statistically significant difference for distal location between the laser and pneumatic groups (96.8% vs. 91.7%, p = 0.288). For 10 patients with intrarenally migrated stones who were managed with flexible ureterorenoscopy in the same session, laser lithotripsy was more successful than pneumatic lithotripsy for proximal ureteral stones (94.4% vs. 67.9%, p = 0.007). The overall complication rate was 26.1%. There was no statistically significant difference between the two groups (29% vs. 23%, p = 0.296). Multivariate logistic regression analysis revealed that proximal location was a statistically significant parameter for the occurrence of complications in both groups (p = 0.001 for PL, p = 0.004 for laser). Both pneumatic and holmium:yttrium-aluminum-garnet laser lithotripsy are effective in the treatment of distal impacted stones. Both treatments with semirigid ureteroscopy are acceptable for proximal impacted ureteral stones, but holmium laser lithotripsy has the advantage that it can be used with a flexible ureteroscope for intrarenally migrated stones.

abstract_id: PUBMED:23798864 Management of impacted proximal ureteral stone: Extracorporeal shock wave lithotripsy versus ureteroscopy with holmium:YAG laser lithotripsy. Objective: Prospective evaluation of the efficacy and safety of extracorporeal shock wave lithotripsy (SWL) and ureteroscopy with holmium:YAG laser lithotripsy (URSL) as a primary treatment for impacted stones in the proximal ureter. Patients and Methods: A total of 82 patients with a single impacted stone in the proximal ureter were included in the study. Patients were allocated into two groups according to patient preference for either procedure. The first group included 37 patients who were treated by SWL and the second group included 45 patients treated by URSL. The preoperative data and treatment outcomes of both procedures were compared and analyzed.
Results: There was no difference in patient and stone characteristics between the two groups. There was a significantly higher mean session number and re-treatment rate in the SWL group in comparison to the URSL group (1.5 ± 0.8 vs. 1.02 ± 0.15 sessions, and 43.2% vs. 2.2%, respectively). At one month, the stone-free rate of the URSL group was statistically significantly higher than that of the SWL group (80% vs. 67.6%, respectively). The stone-free rate at three months was still higher in the URSL group, but without a statistically significant difference (80.2% vs. 78.4%, respectively). There was no statistically significant difference in the rate of complications between SWL and URSL (24.3% vs. 15.6%, respectively). Conclusion: Both procedures can be used effectively and safely as a primary treatment for impacted stones in the proximal ureter; however, URSL has a significantly higher initial stone-free rate and a lower re-treatment rate.

abstract_id: PUBMED:11025714 Use of the holmium:YAG laser for the impacted stone basket. Purpose: During ureteroscopic basketing of ureteral calculi, a stone may become engaged in the basket and the basket impacted in the ureter. We describe an endoscopic technique for managing the impacted basket and stone. Materials and Methods: The ureteroscope is back-loaded off the basket and passed into the ureter beside the basket and stone. The holmium:YAG laser is used to irradiate a basket wire, fracturing the wire and releasing the stone. The basket is removed and the stone managed by holmium:YAG lithotripsy. Results: We successfully treated 3 consecutive patients with this technique. No injuries were observed. Conclusions: The holmium:YAG laser may be used to free an impacted stone basket containing a ureteral calculus.

abstract_id: PUBMED:21308677 RIRS through a semi-rigid ureteroscope and holmium laser in the treatment of ureteral stone retropulsion. Retrograde displacement of ureteral stones into the renal cavities during ureteroscopic lithotripsy is a frequent adverse event that leads to additional procedures (ESWL, PCNL, retrograde intrarenal lithotripsy with flexible instruments, or DJ stent placement with subsequent ESWL) to obtain full clearance of calculi. All these procedures require additional treatment time. Between 1/2008 and 3/2009, a total of 48 patients harbouring proximal (21 cases) and distal (27 cases) ureteral stones underwent holmium laser lithotripsy. In 3 patients a percutaneous nephrostomy had previously been placed to drain the excretory system. In 12 cases (25%) stone retropulsion occurred: in 3 patients into the upper calyx and in 5 into the renal pelvis; only in 4 cases did the stone migrate into the lower or middle calyx. In 8 cases we attempted immediate treatment of the intrarenally displaced stones by advancing the semi-rigid instrument into the renal cavities. In 2 cases the treatment was abandoned because the ureteroscope was too short. In 3 cases the instillation of lubricating lidocaine jelly prevented further displacement of the stone. Washing with saline solution through the nephrostomy catheter allowed effective mobilization of the stone and straightforward laser lithotripsy. RIRS was successful in 4 cases. When flexible devices or immediate ESWL are not available, rigid or semi-rigid retrograde lithotripsy with the holmium laser, performed immediately after ureteral stone displacement, is a safe and effective method to treat displaced stones. Several tricks are required to obtain a good stone-free rate.
abstract_id: PUBMED:18653216 Ureteroscopy and holmium:YAG laser lithotripsy as emergency treatment for acute renal failure caused by impacted ureteral calculi. Objectives: To present our clinical outcomes in managing acute renal failure caused by impacted ureteral calculi with ureteroscopy and holmium:yttrium-aluminum-garnet laser lithotripsy as emergency treatment. Methods: A series of 27 patients with acute postrenal failure was treated from November 2002 to December 2005. Impacted calculi were located in bilateral ureters in 22 patients and in unilateral ureters in 5 patients with a solitary or single functioning kidney. Acute renal failure was demonstrated by oliguria or anuria and a significant increase in serum creatinine and blood urea nitrogen. Patients were treated emergently with ureteroscopy and holmium:yttrium-aluminum-garnet laser lithotripsy in the hospital within 24 hours. Ureteral stenting was performed in all cases. Postoperative renal function, electrolytes, and urine volume were monitored daily for 7 days. Radiography and/or ultrasonography were performed at a follow-up visit at 4 weeks. Results: Ureteroscopy and laser lithotripsy were successfully performed in all patients. The mean operative time was 29.2 minutes (range 15-60). The successful fragmentation rate in the ureteroscopic procedure was 93.9% (46 of 49), and the overall stone-free rate was 88.9% (24 of 27). Shock wave lithotripsy was used in 3 patients after recovery of renal function. Of the 27 patients, 26 (96.3%) returned to normal renal function within 7 days. One patient (3.7%) had significant improvement of renal function, but it had not returned to normal at 12 weeks of follow-up. No intraoperative complications or postoperative ureteral strictures occurred. Conclusions: Emergency ureteroscopy and holmium:yttrium-aluminum-garnet laser lithotripsy can be safely and successfully performed by skilled endourologists for acute renal failure caused by impacted ureteral calculi.

abstract_id: PUBMED:8837001 Clinical experience with holmium:YAG laser lithotripsy of ureteral calculi. Background and Objective: We recently acquired a holmium:YAG laser with a wavelength of 2.1 microns. We would like to report our success in using this machine for laser lithotripsy. Study Design/Patients and Methods: Forty-seven ureteroscopic laser lithotripsies were performed in 44 patients (three bilateral stones) over a 6-month period. Four stones were in the upper ureter, six in the middle ureter, and 37 in the lower ureter. The holmium laser beam was delivered via a 400 micron flexible quartz fiber through a 7 Fr rigid ureteroscope during the procedure. The stone size varied from 4 to 21 mm. Results: The success rate was 91%. Most of the stones were fragmented to less than 2 mm in size. Larger fragments were removed with a Dormia basket (six cases) for stone clearance. Endoscopic laser ureterotomy was performed in two patients with a stricture below the stone. One patient required conversion to open ureterolithotomy, and three required subsequent extracorporeal shock wave lithotripsy. A low complication rate was observed. Conclusion: The pulsed holmium:YAG laser is safe and effective for use in ureteroscopic laser lithotripsy.

abstract_id: PUBMED:32539828 Thermal effect of holmium laser during ureteroscopic lithotripsy. Background: Holmium laser lithotripsy is the most common technique for the management of ureteral stones. Studies have found that holmium laser firing can produce heat that may cause thermal injury to the ureter.
The aim of our current study was to explore factors affecting the thermal effect of the holmium laser during ureteroscopic lithotripsy. Methods: An in vitro experimental model was designed to simulate the ureteroscopic lithotripsy procedure. Different laser power settings (10 W: 0.5 J × 20 Hz or 1.0 J × 10 Hz; 20 W: 1.0 J × 20 Hz or 2.0 J × 10 Hz; 30 W: 1.5 J × 20 Hz or 3.0 J × 10 Hz) with various firing times (3 s, 5 s, 10 s) and irrigation flow rates (10, 15, 20 and 30 ml/min) were employed in the experiment. The temperature around the laser tip was recorded by thermometer. Results: The temperature in the "ureter" rose significantly with increasing laser power, longer firing time and reduced irrigation flow. The highest regional temperature was 78.0 °C in the experimental set-up, and the lowest was 23.5 °C. A higher frequency setting produced more heat at the same power. Laser power ≤ 10 W, irrigation flow ≥ 30 ml/min and a "high-energy, low-frequency" setting permit a safe working temperature. Conclusion: The thermal effect of the holmium laser is related to both the laser working parameters and the irrigation flow. Proper settings are the key to ensuring safety during ureteroscopic holmium laser lithotripsy.

Answer: Based on the available abstracts, holmium laser lithotripsy with semi-rigid ureteroscopy appears to be an effective and reliable first-choice treatment for impacted ureteral stones in children. The study described in abstract PUBMED:25415256 reports a high stone-free rate of 93.75% following primary ureteroscopy (URS) with low retreatment requirements and acceptable complication rates. The mean follow-up period was 16.5 months, indicating sustained effectiveness over time. The total complication rate was 15.6%, with the majority being low-grade complications according to the modified Clavien classification. This suggests that holmium laser lithotripsy with semi-rigid ureteroscopy is a safe and effective method for treating impacted ureteral stones in pediatric patients when performed by experienced and skilled practitioners. Other abstracts support the use of holmium laser lithotripsy for impacted ureteral stones in various contexts, such as in combination with pneumatic ballistic lithotripsy (PUBMED:37334262), as an emergency treatment for acute renal failure caused by impacted ureteral calculi (PUBMED:18653216), and in comparison with other treatment modalities like extracorporeal shock wave lithotripsy (SWL) and pneumatic lithotripsy (PUBMED:24581216, PUBMED:23798864). These studies highlight the versatility and effectiveness of holmium laser lithotripsy in managing impacted ureteral stones. In summary, the evidence from the provided abstracts supports the use of holmium laser lithotripsy with semi-rigid ureteroscopy as a first-choice treatment for impacted ureteral stones in children, given its high success rate, low retreatment rate, and acceptable safety profile.
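Since the answer quotes both the 93.75% and the 15.6% figures from PUBMED:25415256, it is worth making the denominators explicit: both rates are per patient, while the Clavien grades count individual events (the arithmetic below is ours, shown only for clarity):

\[
\text{stone-free rate} = \frac{30}{32} = 93.75\%, \qquad \text{complication rate} = \frac{5}{32} \approx 15.6\%,
\]

with 10 graded complication events occurring among those 5 patients.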
Instruction: Can antidepressants prevent pegylated interferon-α/ribavirin-associated depression in patients with chronic hepatitis C: meta-analysis of randomized, double-blind, placebo-controlled trials?

Abstracts:

abstract_id: PUBMED:24204676 Can antidepressants prevent pegylated interferon-α/ribavirin-associated depression in patients with chronic hepatitis C: meta-analysis of randomized, double-blind, placebo-controlled trials? Background: Antidepressants are effective in treating interferon-α/ribavirin (IFN-α/RBV)-associated depression during or after treatment of chronic hepatitis C (CHC). Whether antidepressant prophylaxis is necessary in this population remains under debate. Methods: Comprehensive searches were performed in Medline, Embase, the Cochrane Controlled Trials Register and PubMed. Reference lists were searched manually. The methodology was in accordance with the 2009 PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analysis) Statement. Results: We identified six randomized, double-blind, placebo-controlled trials involving 522 CHC patients treated with pegylated (PEG)-IFN-α plus RBV. The antidepressants used were escitalopram, citalopram, and paroxetine, which are selective serotonin reuptake inhibitors (SSRIs). The rates of depression (17.9% vs. 31.0%, P = 0.0005) and rescue therapy (27.4% vs. 42.7%, P<0.0001) in the SSRI group were significantly lower than those in the placebo group. The rates of sustained virological response (SVR) (56.8% vs. 50.0%, P = 0.60) and drug discontinuation (18.7% vs. 21.1%, P = 0.63) in the SSRI group did not differ significantly from those in the placebo group. In terms of safety, the incidences of muscle and joint pain (40.8% vs. 52.4%, P = 0.03) and respiratory problems (29.3% vs. 40.1%, P = 0.03) were lower, but the incidence of dizziness was significantly higher (22.3% vs. 10.2%, P = 0.001), in the SSRI group. Conclusion: Prophylactic SSRI antidepressants can significantly reduce the incidence of PEG-IFN-α/RBV-associated depression in patients with CHC, with good safety and tolerability, without reduction of SVR.

abstract_id: PUBMED:21034680 Prophylactic treatment with escitalopram of pegylated interferon alfa-2a-induced depression in hepatitis C: a 12-week, randomized, double-blind, placebo-controlled trial. Background: Depression is one of the main reasons for treatment withdrawal and failure in chronic hepatitis C patients treated with interferon. Antidepressants are useful for its treatment, but whether they can also be used for prevention has yet to be established. Method: To evaluate the efficacy and safety of escitalopram for preventing interferon alfa-2a-induced depression, we conducted an investigator-initiated multicenter, randomized, double-blind, placebo-controlled trial in 133 chronic hepatitis C patients without baseline mental disorders who were randomly assigned to receive escitalopram or placebo during the first 12 weeks of treatment. Primary efficacy outcomes were the development of DSM-IV major depression and scores on the Montgomery-Asberg Depression Rating Scale (MADRS) and the Hospital Anxiety and Depression Scale (HADS). Primary safety end points were biochemical and virological responses. Patients were recruited between March 2005 and July 2006. Results: Rates of major depression were low (5.4%) and did not differ between placebo (3.2%) and escitalopram (7.6%).
MADRS and HADS scores significantly increased during treatment (P < .001 and P = .028, respectively), but there were no differences between treatment groups. Sustained virological response was achieved by 69.2% of patients: 70.4% in the placebo group and 67.9% in the escitalopram group. Conclusions: Findings do not support the use of an antidepressant to prevent interferon-induced depression during the first 12 weeks of treatment in chronic hepatitis C patients at low psychiatric risk. Future studies should be directed to subpopulations of patients at high psychiatric risk. Trial Registration: clinicaltrials.gov Identifier: NCT00166296.

abstract_id: PUBMED:16772812 Effect of amantadine on depressive symptoms in chronic hepatitis C patients treated with pegylated interferon: a randomized, controlled pilot study. Objective: Assess the effect of amantadine on depressive symptoms during interferon alfa therapy for hepatitis C virus infection. Methods: We performed a randomized, controlled trial with 14 hepatitis C virus-infected patients, treated with pegylated interferon alfa-2a 180 microg/wk plus ribavirin 1,200 mg/d. Eight patients were randomized to receive amantadine 200 mg/d, and 6 other individuals were randomized to a control group without amantadine. Severity of depression and anxiety was measured using the Hospital Anxiety and Depression Scale before starting re-treatment, and 4, 12, and 24 weeks after starting immunotherapy with pegylated interferon alfa-2a. Evaluations were performed by psychiatrists blinded to amantadine treatment status. The Friedman χ² test for repeated measures and the Mann-Whitney test for nonparametric data were used to assess significant differences. Results: From baseline to follow-up, no significant increase in mean Hospital Anxiety and Depression Scale scores of depressive symptoms was seen in the amantadine group (P = 0.142), whereas there was a statistically significant increase in depression scores in the control group (P = 0.001). Conclusions: Our randomized pilot study, though small, clearly indicates that interferon alfa-induced depressive symptoms can be prevented by the use of amantadine. However, double-blind placebo-controlled trials with a higher sample size are required to confirm these preliminary findings.

abstract_id: PUBMED:11826422 Interferon and amantadine in naive chronic hepatitis C: a double-blind, randomized, placebo-controlled trial. Recent controlled trials on the efficacy of an amantadine/interferon combination in treatment-naive patients with chronic hepatitis C yielded contradictory results. We therefore conducted a large, double-blind, placebo-controlled, multicenter trial in naive patients with chronic hepatitis C: 246 patients were randomized to receive interferon alfa-2a (6 MIU sc thrice weekly for 20 weeks, then 3 MIU sc thrice weekly) and either amantadine sulphate (2 x 100 mg p.o. QD) or placebo. Treatment continued for a total of 52 weeks if serum HCV-RNA by polymerase chain reaction (PCR) had fallen below the detection limit (1,000 copies/mL) at treatment week 10, and stopped otherwise. All patients were followed for 24 weeks off therapy. After 10 weeks of treatment, 66/121 patients treated with amantadine (55%) and 78/125 treated with placebo (62%) had lost HCV-RNA (n.s.). After 24 weeks of follow-up, 25 patients in the amantadine group (21%) and 17 (14%) in the placebo group remained HCV-RNA negative (n.s.). During therapy, virologic breakthroughs occurred less often in the amantadine than in the placebo group [14 (12%) vs.
27 (22%) patients; P = .04]. Multivariate logistic regression analysis revealed genotype, viremia level, age, and amantadine therapy [risk ratio 0.4 (95% CI 0.2-1.0), P = .05] as predictors of sustained virologic response. Adverse events and the impact of therapy on quality of life were similar in amantadine- and placebo-treated patients. Compared with current standard treatment (interferon/ribavirin), the interferon/amantadine combination was not cost-effective. In conclusion, amantadine does not add to a clinically relevant extent to the treatment of naive patients with chronic hepatitis C.

abstract_id: PUBMED:18482285 Pegylated interferons for chronic hepatitis C virus infection: an indirect analysis of randomized trials. Dual therapy with pegylated interferon and ribavirin is recommended for patients with chronic hepatitis C virus infection who meet criteria for treatment, but it is unclear whether pegylated interferon alfa-2a or pegylated interferon alfa-2b is more effective or associated with fewer adverse events. Because data from head-to-head trials of pegylated interferon regimens are sparse, we performed an adjusted indirect analysis using trials comparing dual therapy with pegylated interferon alfa-2a or pegylated interferon alfa-2b vs dual therapy with non-pegylated interferon. We searched for potentially relevant randomized controlled trials using electronic databases and reference lists. A total of 16 trials met inclusion criteria. Adjusted indirect comparisons found no statistically significant differences between dual therapy with pegylated interferon alfa-2a and dual therapy with pegylated interferon alfa-2b on the outcomes sustained virologic response [relative risk (RR) = 1.59, 95% CI: 0.56-4.46], withdrawal due to adverse events (RR = 0.86, 95% CI: 0.29-2.55), anaemia (RR = 1.67, 95% CI: 0.32-8.84), depression (RR = 1.09, 95% CI: 0.41-2.90) or flu-like symptoms (RR = 1.10, 95% CI: 0.53-2.29). Adjusting for potential publication bias and stratifying analyses by indicators of methodological quality, human immunodeficiency virus infection status, hepatitis C virus genotype, dose of ribavirin or dose of pegylated interferon did not change conclusions. There is insufficient evidence to support conclusions that dual therapy with one pegylated interferon is superior to the other. However, because estimates are imprecise, our results also do not rule out a clinically significant difference. Head-to-head trials are needed to verify the results of indirect analyses and provide additional guidance on optimal treatment choices.

abstract_id: PUBMED:18079286 Therapy of interferon-induced depression in chronic hepatitis C with citalopram: a randomised, double-blind, placebo-controlled study. Background: Interferon-induced depression represents a major complication in antiviral treatment of chronic hepatitis C virus (HCV) infection. Aim: To evaluate in a placebo-controlled study the efficacy of a selective serotonin reuptake inhibitor (SSRI) in HCV patients on antiviral therapy with interferon-associated depression. Methods: 100 HCV outpatients were included in a randomised, double-blind, placebo-controlled study. During interferon therapy (peginterferon alpha-2b plus ribavirin), depression was monitored using the Hospital Anxiety and Depression Scale (HADS). Patients with clinically relevant interferon-induced depression (HADS ≥ 9) were randomly assigned to placebo or citalopram (SSRI, 20 mg/day). Results: In 28 patients (28%), HADS scores increased to >8 during interferon therapy.
They were treated with placebo (n = 14) or SSRI (n = 14). HADS scores declined significantly in SSRI patients within four weeks of therapy (p<0.001) but not in placebo patients. This difference between subgroups was statistically significant (p = 0.032). Unblinding became necessary in five placebo patients as a result of intolerable depression. Rescue medication (20 mg citalopram) led to a significant decrease in HADS scores (p = 0.008). All citalopram patients were able to complete interferon therapy as planned. As an interim analysis showed a significant superiority of SSRI over placebo, the study was terminated prematurely. Three patients, who became depressed afterwards, were treated in an unblinded fashion with citalopram. Conclusions: The findings demonstrate clearly that citalopram treatment is highly effective in HCV patients on interferon therapy when initiated after the onset of clinically relevant depressive symptoms. This suggests that general SSRI prophylaxis is not necessary in these patients.

abstract_id: PUBMED:15791697 Etanercept as an adjuvant to interferon and ribavirin in treatment-naive patients with chronic hepatitis C virus infection: a phase 2 randomized, double-blind, placebo-controlled study. Background/aims: Current therapies for patients with chronic hepatitis C virus (HCV) do not achieve sustained viral clearance in most patients, and are associated with severe toxic effects. Our aim was to investigate the efficacy and safety of etanercept as an adjuvant to interferon and ribavirin in treatment-naive patients with HCV. Methods: Double-blind, randomized, placebo-controlled trial. Fifty patients with chronic HCV were randomly assigned to receive interferon alfa-2b and ribavirin with either etanercept or placebo for 24 weeks. The main outcome measure was the absence of HCV RNA at 24 weeks, the on-treatment response at the end of the etanercept randomization period. Results: At 24 weeks, HCV RNA was absent in 63% (12/19) of etanercept patients compared with 32% (8/25) of placebo patients (P=0.04). In addition, patients receiving etanercept had a lower frequency of most adverse event categories compared with placebo. Conclusions: Etanercept given for 24 weeks as adjuvant therapy to interferon and ribavirin significantly improved virologic response at the end of the etanercept randomization period among patients with HCV, and was associated with a decreased incidence of most adverse effects associated with interferon and ribavirin.

abstract_id: PUBMED:27842532 Silybin supplementation during HCV therapy with pegylated interferon-α plus ribavirin reduces depression and anxiety and increases work ability. Background: Hepatitis C virus infection and interferon treatment are often associated with anxiety, depressive symptoms and poor health-related quality of life. The aim was to evaluate the effect of a silybin-vitamin E-phospholipid complex on work ability, and whether health-related factors (anxiety and depression) were associated with work ability, in subjects with chronic hepatitis C treated with pegylated interferon-α2b (Peg-IFN) and ribavirin (RBV). Methods: Thirty-one patients with chronic hepatitis (Group A) and another 31 subjects (Group B) were recruited into a randomized, prospective, placebo-controlled, double-blind clinical trial. Group A received 1.5 mg/kg per week of Peg-IFN plus RBV and placebo, while Group B received the same dosage of Peg-IFN plus RBV plus a combination of silybin 94 mg + vitamin E 30 mg + phospholipids 194 mg in pills for 12 months.
All subjects underwent laboratory testing and completed questionnaires evaluating depression (Beck Depression Inventory, BDI), anxiety (State-Trait Anxiety Inventory, STAI) and work ability (Work Ability Index, WAI). Results: The comparison between Group A and Group B showed significant differences after 6 months in ALT (P < 0.001) and viremia (P < 0.05), after 12 months in ALT (P < 0.001) and AST (P < 0.001), and at follow-up in AST (P < 0.05) and ALT (P < 0.001). Significant differences were observed after 1 month in WAI (p < 0.001) and BDI (P < 0.05), after 6 months in WAI (P < 0.05) and STAI (P < 0.05), and after 12 months and at follow-up in WAI, STAI and BDI (p < 0.01). Conclusions: Supplementation with the silybin-vitamin E-phospholipid complex increased work ability and reduced depression and anxiety in patients treated with Peg-IFN and RBV. Trial Registration: NCT01957319; first received: September 25, 2013; last updated: September 30, 2013 (retrospectively registered).

abstract_id: PUBMED:16152757 A randomized, double-blind, placebo-controlled dose-escalation trial of merimepodib (VX-497) and interferon-alpha in previously untreated patients with chronic hepatitis C. Inhibition of inosine monophosphate dehydrogenase (IMPDH) is one of several proposed mechanisms of action for ribavirin (RBV), a critical component of the current treatment for chronic hepatitis C (CHC). This study was a double-blind, placebo-controlled dose-escalation study of a novel, selective, orally active small molecule inhibitor of IMPDH, merimepodib (VX-497 or MMPD), in combination with standard interferon-alpha (IFN-alpha). Fifty-four treatment-naive patients with genotype-1 CHC were randomized to receive IFN-alpha 3 MIU subcutaneously three times a week, alone or in combination with 100 mg or 300 mg (every 8 h) of MMPD for 4 weeks. At the end of 4 weeks, all patients were offered 48 weeks of treatment with IFN-alpha/RBV. The objectives of the study were to evaluate the tolerability of the IFN-alpha/MMPD combination and to evaluate whether MMPD had an on-treatment effect on HCV-RNA similar to RBV when added to IFN-alpha. The drug combination was generally well tolerated; one patient at the higher dose discontinued because of elevated alanine aminotransferase levels. No pharmacokinetic interactions were evident between the two drugs. Analysis of covariance that adjusted for a baseline imbalance in HCV-RNA in the intent-to-treat population did not show any significant differences between the treatment groups, or between MMPD plus IFN-alpha compared with IFN-alpha alone. However, the per-protocol primary efficacy analysis based on treatment-compliant patients demonstrated a greater reduction in mean HCV-RNA with the combination of 100 mg MMPD plus IFN-alpha compared with IFN-alpha alone (-1.78 log vs -0.86 log, P=0.037). In conclusion, the addition of a selective IMPDH inhibitor to IFN-alpha was well tolerated. In a low-dose range, the addition of MMPD may have the potential to add to the antiviral efficacy of IFN-alpha. Larger, longer-duration trials incorporating pegylated IFN would be required to determine whether this combination, alone or with RBV, would increase either early or sustained virological response rates.

abstract_id: PUBMED:15131791 Epoetin alfa maintains ribavirin dose in HCV-infected patients: a prospective, double-blind, randomized controlled study.
Background & Aims: Combination therapy with interferon alpha (IFN-alpha) and ribavirin (RBV) or pegylated IFN-alpha (PEG-IFN-alpha)/RBV for chronic hepatitis C virus (HCV) infection often causes anemia, prompting RBV dose reduction/discontinuation. This study assessed whether epoetin alfa could maintain RBV dose, improve quality of life (QOL), and increase hemoglobin (Hb) in anemic HCV-infected patients. Methods: HCV-infected patients (n = 185) on combination therapy who developed anemia (Hb ≤ 12 g/dL) were randomized into a U.S. multicenter, placebo-controlled clinical trial of epoetin alfa, 40,000 U subcutaneously, once weekly vs. matching placebo. The study design used an 8-week, double-blind phase (DBP) followed by an 8-week, open-label phase (OLP), in which placebo patients were crossed over to epoetin alfa. Results: At the end of the DBP, RBV doses were maintained in 88% of patients receiving epoetin alfa vs. 60% of patients receiving placebo (P < 0.001). Mean QOL scores at the end of the DBP improved significantly on all domains of the Linear Analog Scale Assessment (LASA) and on 7 of the 8 domains of the Short Form-36, version 2 (SF-36v2). Mean Hb increased by 2.2 ± 1.3 g/dL (epoetin alfa) and by 0.1 ± 1.0 g/dL (placebo) in the DBP (P < 0.001). Similar results were demonstrated in patients who switched from placebo to epoetin alfa in the OLP. Epoetin alfa was well tolerated; the most common adverse effects were headache and nausea. Conclusions: Epoetin alfa maintained RBV dose and improved QOL and Hb in anemic HCV-infected patients receiving combination therapy. Answer: Yes, antidepressants can prevent pegylated interferon-α/ribavirin (PEG-IFN-α/RBV)-associated depression in patients with chronic hepatitis C (CHC), according to a meta-analysis of randomized, double-blind, placebo-controlled trials. The meta-analysis found that the rates of depression and rescue therapy were significantly lower in the group treated with selective serotonin reuptake inhibitors (SSRIs) compared to the placebo group. The antidepressants used in the trials included escitalopram, citalopram, and paroxetine. Additionally, the rate of sustained virological response (SVR) and drug discontinuation did not differ significantly between the SSRI group and the placebo group, indicating that the use of SSRIs did not negatively impact the effectiveness of the hepatitis C treatment. The study also noted that the incidence of muscle and joint pain and respiratory problems were lower in the SSRI group, although the incidence of dizziness was higher (PUBMED:24204676). However, it is important to note that another study included in the analysis did not support the use of antidepressants for the prevention of interferon-induced depression during the first 12 weeks of treatment in chronic hepatitis C patients at low psychiatric risk (PUBMED:21034680). This suggests that the benefits of antidepressant prophylaxis may vary depending on the psychiatric risk profile of the patients. In conclusion, the meta-analysis supports the use of prophylactic SSRI antidepressants to significantly reduce the incidence of PEG-IFN-α/RBV-associated depression in patients with CHC, with good safety and tolerability, and without reduction of SVR (PUBMED:24204676).
Instruction: Evaluation of the selective use of abdomino-pelvic drains at laparoscopic myomectomy: in enhanced recovery, do drains delay discharge home? Abstracts: abstract_id: PUBMED:25522116 Evaluation of the selective use of abdomino-pelvic drains at laparoscopic myomectomy: in enhanced recovery, do drains delay discharge home? Objective: To assess whether the use of abdominal drains at laparoscopic myomectomy (LM) influences length of hospital stay. The primary outcome was to determine whether the use of intra-abdominal drains after LM was associated with prolonged hospital stay after surgery. Secondary outcomes were to identify factors that influence the use of abdomino-pelvic drains during LM. Study Design: Retrospective cohort study of 217 consecutive single-surgeon LMs in a London university teaching hospital. Abdominal drains were used selectively after LM. Of the 217 patients, 123 (57%) had a drain left in situ at the end of the operation. Results: The two cohorts of patients were not significantly different in their demographics. The use of a drain was significantly associated with an increased number of fibroids (4.6±3.8 vs. 2.8±2.1, p<0.0001), increased weight of fibroids (277±211 g vs. 133±153 g, p<0.0001), increased surgical time (133±40 min vs. 90±35 min, p<0.0001) and increased estimated blood loss (406±265 ml vs. 199±98 ml, p<0.0001). There was no statistically significant difference in length of hospital stay (mean duration of admission 2.1 ± 0.98 days with a drain vs. 2.1 ± 0.97 days without a drain, p=0.98). Conclusion: We conclude that although the use of a drain may be associated with a more complex operation, this does not delay the patient's discharge. abstract_id: PUBMED:24436670 Use of intra-abdominal drains. The use of drains in colorectal surgery has been a subject of debate for several decades. Prophylactic drainage of the peritoneal cavity has become less popular in recent years. This change is due to several studies demonstrating that intraperitoneal drains do not adequately drain the peritoneal cavity and do not prevent or contain anastomotic leaks. Percutaneous drain placement has become the standard of care for patients with intra-abdominal abscesses. Selected anastomotic leaks in the stable patient can also be managed with percutaneous drains. In this article, the authors review in detail the use of drains and the literature to support their use in our everyday practice. abstract_id: PUBMED:32395498 Timing for removing prophylactic drains after liver resection: an evaluation of drain removal on the third and first postoperative days. Background: Conventionally, drains are removed from postoperative day (POD) 7 to POD 14 at our institute after hepatectomy (control group). This study was conducted to evaluate the outcomes of drain removal in the early postoperative period. Methods: Recently, we defined criteria for the early removal of drains: (I) a drain-fluid bilirubin level of below 3 mg/dL; (II) a drain discharge volume of less than 500 mL/day; and (III) no macroscopic signs of bleeding or infection. For patients meeting these criteria, drains were removed on POD 3 between January 2012 and February 2013 (POD 3 group) and on POD 1 between February and December 2013 (POD 1 group). The outcomes of these groups were then retrospectively compared. Results: The median duration of the postoperative hospital stay was shorter in the POD 3 group (11 days) than in the control group (14 days) (P<0.0001).
The incidence of drain infection was lower in the POD 3 group (1.2%) than in the control group (5.7%). Meanwhile, the incidences of bile leakage and complications were higher in the POD 1 group than in the POD 3 group. However, the incidences were almost the same when patients whose drains were actually removed on the predefined POD were compared. The intraoperative findings were also considered when removing the drains. Conclusions: Drain removal on POD 3 may reduce the length of the postoperative hospital stay and the incidence of drain infection without impairing safety. To remove drains safely on POD 1, however, the intraoperative findings should also be considered. abstract_id: PUBMED:29188155 Abdominal Drains Retrieved Laparoscopically 15 Years Post Laparotomy. A retained intra-abdominal foreign body is a common occurrence that is frequently underreported due to its medicolegal implications. Sponges, gauzes, surgical instruments, abdominal drains, etc. have been reported in the literature. The most common presentation of a retained intra-abdominal foreign body is postoperative abdominal infection or bleeding, frequently seen in the immediate postoperative period. Most of these foreign bodies are removed by exploratory laparotomy owing to recent abdominal surgeries or presentation as complicated abdominal masses. Here, we report a case with retained intra-abdominal drains for 15 years with minimal symptoms presenting as intermittent abdominal pain; the drains were removed using laparoscopic intervention. abstract_id: PUBMED:25610277 Is a drain required after laparoscopic cholecystectomy? Objective: Whether drains should be routinely used after laparoscopic cholecystectomy is still debated. We aimed to retrospectively evaluate the benefits of drain use after laparoscopic cholecystectomy for non-acute and non-inflamed gallbladders. Materials And Methods: Two hundred and fifty patients (mean age, 47±13.8 years; 200 females and 50 males) who underwent laparoscopic cholecystectomy for cholestasis were included in the study. The medical files of the patients were examined retrospectively to obtain data on patient demographics, cholecystitis attacks, complications during the operation, whether a drain was placed in the biliary tract during the operation, etc. The volume of the fluid collection detected in the subhepatic area by ultrasonography on the first postoperative day was recorded. Results: Drains were placed in 51 patients (20.4%). The mean duration of drain placement was 3.1±1.9 (range 1-16) days. Fluid collection was detected in the gallbladder area in 67 patients (26.8%). The mean volume of collected fluid was 8.8±5.2 mL. There were no significant effects of age, gender, or previous cholecystitis attacks on the presence or volume of the fluid collection (P>0.05 for all). With regard to the relationship between fluid collection and drains, 52 of 199 (26.1%) patients without drains had postoperative fluid collection, compared to 15 of 51 (29.4%) patients with drains (P>0.05). Conclusion: There is no relationship between the presence of a drain after laparoscopic cholecystectomy and the presence of postoperative fluid collection. Thus, in patients without complications, it is not necessary to place a drain to prevent fluid collection.
Conventionally, drains have been placed to prevent seroma; however, there are problems with drains, namely, pain, compromised mobilisation, breakage, tissue irritation and infection. Strategies to prevent seroma, such as the use of glue or quilting sutures, have all been attempted with variable and unpredictable benefit. Aims: The author extrapolated the experience with their own liposuction protocols to lipoabdominoplasty and began doing away with drains from the year 2006 onwards unless absolutely indicated. Incidence of complications is evaluated here. Settings And Design: This retrospective study is the report of our experience with 204 lipoabdominoplasties performed from 2006 up to December 2016. Subjects And Methods: Consecutive abdominoplasty patients were evaluated for seroma formation and wound dehiscence until suture removal. Nuances of operative details that helped avoid drains were examined. Results: Only one patient who had drainless abdominoplasty had seroma formation and wound dehiscence. Conclusions: We found that (a) 'toilet liposuction' (a term the author would like to introduce), (b) multilayer wound closure without tension, (c) avoidance of gliding surfaces, (d) continuous use of a pressure garment for 4 days postoperatively and (e) reducing the amount of infiltration are the key factors in avoidance of collections and thus seroma formation. If such a technique is diligently followed, desired results are obtained; yet there is no need to insert drains in most patients. Reducing the need for drains permits early ambulation and better patient comfort without seroma and other complications. abstract_id: PUBMED:32221677 A Simple Method for Quantitative Assessment of Suction Drains. Suction drains are widely used in surgical practice, but a consensus is yet to be found around their use in plastic surgery. Nowadays, patients are frequently discharged from hospitals with drains still in place. Soft drains are easier to manage at home because of the reduced weight and size. The content can be disposed of when the container is full, but volume assessment is only possible when the reservoir is inflated. Evaluating the total drained volume alone is a flawed assessment method, as it might lead to an erroneous decision about whether a drain should be kept or removed. What we should use as a reference instead is the output quantity from the last 24 h. We can precisely determine the amount of collected material on a daily basis by closing the clip of the tubing, opening the exit valve to inflate the container, measuring and then emptying the container. However, this whole process can be complicated and put the sterile environment at risk of contamination, which is why it cannot be performed by the patient at home. We ask our patients to weigh the container daily using a kitchen scale and to write down the obtained values. When the patient returns for a postoperative checkup, they can report their measurements, thus making it easier for the surgeon to decide whether to remove the drain or not. We believe that this simple method can be safely implemented to track drains in the postoperative period after the patient is discharged. Level of Evidence: V.
abstract_id: PUBMED:26682098 The Outcome of Using Closed Suction Wound Drains in Patients Undergoing Lumbar Spine Surgery: A Systematic Review. Study Design Systematic review. Objective Determine whether closed suction wound drains decrease the incidence of postoperative complications compared with no drain use in patients undergoing spine surgery for lumbar degenerative conditions. Methods Electronic databases and reference lists of key articles were searched up through January 22, 2015, to identify studies comparing the use of closed suction wound drains with no drains in spine surgery for lumbar degenerative conditions. Outcomes assessed included the cumulative incidence of epidural hematoma, superficial and deep wound infection, and postoperative blood transfusion. The overall strength of evidence across studies was based on precepts outlined by the Grades of Recommendation Assessment, Development and Evaluation Working Group. Results Five heterogeneous studies (three randomized controlled trials and two cohort studies) form the evidence basis for this report. There was no difference in the incidence of hematoma, superficial wound infection, or deep infection in patients with closed suction wound drains compared with patients without them after lumbar surgery. The upper bounds of the 95% confidence interval for hematoma ranged from 1.1 to 16.7%; for superficial infection, 1.0 to 7.3%; and for deep infection, 1.0 to 7.1%. One observational study reported a 3.5-fold increase in the risk of blood transfusion in patients with a drain. The overall strength of evidence for these findings is considered low or insufficient. Conclusions Conclusions from this systematic review are limited by the quality of included studies that assessed the use of closed suction wound drains in lumbar spine surgeries for degenerative conditions. We believe that spine surgeons should not routinely rely on closed suction wound drains in lumbar spine surgery until a higher level of evidence becomes available to support their use. abstract_id: PUBMED:28190206 The current state of the evidence for the use of drains in spinal surgery: systematic review. Purpose: To search for evidence pertaining to the effectiveness of drains used in spinal surgeries. Method: PubMed and EMBASE databases were searched for articles pertaining to the use of drains in all types of spinal surgery. The bibliographies of relevant studies were searched for additional papers that met the initial inclusion criteria. Level I and II studies were scored according to guidelines in the Cochrane Collaboration Back Review Group. We utilised the Population, Intervention, Comparison, Outcomes and Study design (PICOS) method to define our study eligibility criteria. Results: Nineteen papers were identified: four level I studies, eight level III studies and seven level IV studies. The four level I studies, involving the randomization of patients into 'drain' and 'non-drain' groups, identified a total of 363 patients. Seven of the eight level III retrospective studies concluded that the use of drains did not reduce complications. Two of the seven level IV studies agreed with this conclusion. The remaining five level IV studies reported the benefits of lumbar drainage following dural tears. Conclusions: There is a paucity of published literature on the use of drains following spinal surgery. This is the first study to assess the evidence for the benefits of drains post-operatively in spinal surgery.
The identified studies have shown that drains do not reduce the incidence of complications in anterior cervical discectomy and fusion, one- and two-level posterior cervical fusions, lumbar laminectomies, lumbar decompressions or discectomies and posterior spinal fusion for adolescent scoliosis. Further level I and II studies are needed. abstract_id: PUBMED:29581697 Drain Tags: A Reliable Method of Securing Drains in the Neck. Drains are essential in most surgeries. The primary need is elimination of dead space and related consequences, but drains per se do have their share of complications, one of them being dislodgment. Complications related to dislodgment can range from inactive drains to damage to critical areas like anastomosis sites. Securing drains using this technique was described by Jayaraj et al. in 1988. Since then, no study has been done to determine the efficacy and outcomes of this method. An audit was conducted from 2009 to 2014 at SDM Craniofacial unit to determine the utility of drain fixation. Data were collected for a total of 143 patients from the medical records. This useful technique has the advantage of preventing drain displacement and its consequent complications. Answer: The evaluation of the selective use of abdomino-pelvic drains at laparoscopic myomectomy (LM) in the context of enhanced recovery and its impact on the length of hospital stay was assessed in a retrospective cohort study of 217 consecutive single-surgeon LMs at a London university teaching hospital. The study found that the use of a drain was significantly associated with a more complex operation, indicated by an increased number of fibroids, increased weight of fibroids, increased surgical time, and increased estimated blood loss. However, there was no statistically significant difference in the length of hospital stay between patients with a drain (mean duration of admission 2.1 ± 0.98 days) and those without a drain (2.1 ± 0.97 days, p=0.98). Therefore, the study concluded that the use of a drain may be associated with a more complex operation, but it does not delay the patient's discharge home (PUBMED:25522116).
Instruction: Are stage IV vestibular schwannomas preoperatively different from other stages? Abstracts: abstract_id: PUBMED:18046259 Are stage IV vestibular schwannomas preoperatively different from other stages? Objective: The aim of this study was to focus on the clinical and paraclinical symptoms of patients suffering from Stage IV vestibular schwannomas (VSs). Patients: In this prospective study, we included 734 patients who had VS and were candidates for operation. Main Outcome Measures: Patients were classified as having Stage I, II, III, or IV tumors according to Tos criteria as evaluated by magnetic resonance imaging. PREOPERATIVE CLINICAL EVALUATION: We recorded the occurrence of complaints (%) and duration (yr) of hearing loss, tinnitus, and balance disorder. Preoperative paraclinical evaluation included pure-tone (PTA) and speech audiometry, auditory brainstem response (ABR) patterns, and vestibular deficit at videonystagmography (VNG). Continuous variables were compared between Stage IV and other stages using analysis of variance. Qualitative variables expressed as a percentage of presence were compared between Stage IV and other stages using percentage comparison. Results: Quantitative Parameters. Patients with Stage IV VS were significantly younger as compared with patients with other stages. Stage IV hearing loss was greater compared with other stages at 250 and 500 Hz but smaller at 2,000 and 8,000 Hz. We found no difference in the loss of PTA between Stage IV and the other stages. Speech discriminancy score was smaller in Stage IV. The durations of hearing loss, tinnitus, and balance disorders were similar whatever the tumor stage. Auditory brainstem response patterns showed no difference in Wave III latency between Stage IV VS and other stages, whereas Wave V latency and V-I interval were higher in Stage IV. Both ABR threshold and VNG caloric deficit were higher in Stage IV VS compared with other stages. Qualitative Parameters. The percentage of patients with Stage IV was lower than that with Stages II and III. The percentage of men and women was similar in all stages. The occurrence of hearing loss was similar in all stages, whereas that of tinnitus was lower in Stage IV compared with Stages I and II. In contrast, the occurrence of balance disorder was higher in Stage IV compared with all other stages. Conclusion: In clinical and paraclinical manifestation, Stage IV VS is different from the other stages. The PTA differences may be attributed to the younger age. Occurrence of clinical symptoms, ABR, and VNG pattern can be explained by the fact that Stage IV develops rapidly in the vestibular rather than the cochlear nerve and by the fact that larger tumors can cause cerebellar compression. This has been confirmed by the higher occurrence of balance disorders in Stage IV and the lower occurrence of tinnitus with similar hearing loss in all stages. abstract_id: PUBMED:20185256 Quality of life of patients following stages III-IV vestibular schwannoma surgery using the retrosigmoid and translabyrinthine approaches. Objective: Surgery is an unquestionable treatment for stages III and IV vestibular schwannomas. The postoperative quality of life (QOL) remains the main issue of concern. In this study, we have evaluated the postoperative QOL of these patients operated by two surgical approaches, the retrosigmoid approach (RSA) and the translabyrinthine approach (TLA).
Materials And Methods: This is a retrospective review of 101 stages III and IV vestibular schwannoma cases operated between 2000 and 2006 at our center by the senior author (J.M.). The Short Form SF-36 questionnaire and additional questions were sent to the patients. Comparison was made between the patient group and a control group to evaluate the postoperative QOL. Results: The response rate was 67.3%. There were 44 males and 57 females. The average follow-up was 5.9 years. Fifty-nine patients were operated on using the TLA and 42 using the RSA. Both patient groups had significantly lower scores on the questionnaire when compared to the normal population, and thus a less satisfactory QOL. Pain was the symptom that correlated most with poorer scores on the SF-36 questionnaire, although it was the least frequent symptom reported by the patients. Unsteadiness and facial weakness were the least bothersome complaints. Facial weakness did not correlate with a poorer QOL. Conclusion: Operated stages III-IV vestibular schwannoma patients suffer from a certain degree of QOL deterioration compared to normal controls. Hearing deficit was the most prevalent symptom. Pain was the least frequent but the most bothersome and with the highest correlation with a poorer QOL. Interestingly, unsteadiness and facial weakness were the least bothersome complaints. Facial weakness did not correlate with a poorer QOL. abstract_id: PUBMED:20822759 Surgical treatment of large vestibular schwannomas (stages III and IV). Objective: To study the results of surgical treatment of large vestibular schwannomas. Material And Methods: Between January 1995 and December 2005, 87 stage III and IV unilateral vestibular schwannomas (Koos classification) were operated. Results: The approach used was for the most part translabyrinthine. Nine patients were operated in two phases. Tumor exeresis was total in 79% of the cases, nearly total in 17%, and subtotal in 3.6%. Good facial function was preserved in 63% of the patients. Mortality was zero. Tumor control after a mean follow-up of 45 months was 86%. Conclusions: Surgery for large vestibular schwannomas should have a 0% mortality rate and low morbidity. Otoneurosurgical collaboration, with a preference for the translabyrinthine approach, with surgery undertaken in several phases if need be, provides maximum safety and good functional results as well as an acceptable residual tumor rate. abstract_id: PUBMED:38039537 Long-term tumor control in Koos grade IV vestibular schwannomas without the need for gross-total resection. Objective: The modern management of patients with Koos grade IV vestibular schwannomas (VSs) aims at functional preservation and long-term tumor control. Gross-total resection (GTR) leads to optimal tumor control but frequently also results in permanent facial nerve (FN) palsy. Subtotal resection (STR) or near-total resection (NTR) followed by a wait-and-scan protocol and second-line radiation therapy (RT) in case of progressive residuals yields excellent tumor control rates with less permanent morbidity. Methods: The authors present the results of their prospective cohort of Koos grade IV VS patients who underwent less-than-total resection followed by a wait-and-scan protocol between January 2009 and December 2019 and discuss the latest evidence on this controversial subject. The cohort was followed up with annual clinical and volumetric outcome analyses after standardized MRI. Results: Forty-eight patients were included in the analysis.
The mean extent of resection was 87% (median 91%, range 45%-100%), best fitting into the definition of STR rather than NTR. In 2 cases, the proximal portion of the FN at the brainstem could not be reliably identified and monitored during the initial operation, and a second-stage resection was necessary. At 4.4 years after surgery, 81% (39/48) of the tumor residuals regressed or were stable in size. The percentage of regressive tumor residuals increased over time. Nineteen percent (9/48) of the tumor residuals displayed volumetric progression within a mean time of 35 months (median 36 months, range 14-72 months), resulting in a Kaplan-Meier estimate for progression-free survival of 79% after 4 years; higher postoperative volume showed a linear correlation with higher volumetric progression (factor 1.96, 95% CI 1.67-2.30; p < 0.001). Thirty-four of the 48 (71%) patients continue to undergo a wait-and-scan protocol. Second-line RT was performed in 14 patients (29%) within a mean time of 25 months (median 23 months, range 5-54 months), 12 (86%) of whom responded with post-RT pseudoprogression, resulting in an overall tumor control rate of 96%. At the 4.4-year follow-up from the initial resection, 92% of the patients had a good facial outcome (House-Brackmann [HB] grade I or II), 6% had a fair facial outcome (HB grade III), and 2% had a poor facial outcome (HB grades IV-VI). So far, there has been no need for salvage surgery after RT. Conclusions: STR followed by observation and second-line RT in cases of progression leads to good facial outcome and an excellent tumor control rate in the longer term. abstract_id: PUBMED:38393397 Stereotactic radiosurgery for Koos grade IV vestibular schwannoma: a systematic review and meta-analysis. Background: Stereotactic radiosurgery (SRS) is a well-established treatment option for Koos stage I-III vestibular schwannomas (VS), often used as the first line of treatment or after subtotal resection. However, the optimal treatment for Koos-IV VS remains unclear. Therefore, our study aimed to evaluate the effectiveness of SRS as a primary treatment for large VS classified as Koos-IV. Methods: A systematic search was performed on December 28th, 2022, based on PubMed, Web of Science, and Scopus according to the PRISMA statement. The review was updated on September 7th, 2023. The risk of bias was assessed using the NIH Quality Assessment Tool. The R software (ver. 4.3.2) was used for all quantitative analyses and preparation of the forest plots. Publication bias and sensitivity analysis were performed to evaluate the reliability of the obtained results. Results: Among 2941 screened records, ten studies (1398 patients) have been included in quantitative synthesis. The overall tumor control rate was 90.7% (95%CI 86.3-94.4). Kaplan-Meier estimates of tumor control at 2, 6, and 10 years were 96.0% (95% CI 92.9-97.6%), 88.8% (95% CI 86.9-89.8%), and 84.5% (95% CI, 81.2-85.8%), respectively. The overall hearing preservation rate was 56.5% (95%CI 37-75.1). Kaplan-Meier estimates of hearing preservation rate at 2, 6, and 10 years were 77.1% (95% CI 67.9-82.5%), 53.5% (95% CI 44.2-58.5%), and 38.1% (95% CI 23.4-40.7%), respectively. The overall facial nerve preservation rate was 100% (95%CI 99.9-100.0). The overall trigeminal neuropathy rate reached 5.7% (95%CI 2.9-9.2). The overall rate of new-onset hydrocephalus was 5.6% (95%CI 3-9). The overall rates of worsening or new-onset tinnitus and vertigo were 6.8% (95%CI 4.2-10.0) and 9.1% (95%CI 2.1-19.6) respectively. 
No publication bias was detected by the methods used. Conclusions: Our systematic review and meta-analysis demonstrated a high overall tumor control rate, excellent facial nerve preservation, and low incidence of new-onset or worsened tinnitus and vertigo. However, several drawbacks associated with SRS should be noted, such as the presence of post-SRS hydrocephalus risk, mediocre long-term hearing preservation, and the lack of immediate tumor decompression. Nevertheless, the use of SRS may be beneficial in appropriately selected cases of Koos-IV VS. Moreover, further prospective studies directly comparing SRS with surgery are necessary to determine the optimal treatment for large VS and verify our results on a higher level of evidence. Registration and protocol: CRD42023389856. abstract_id: PUBMED:33330107 Microsurgical Treatment and Follow-Up of KOOS Grade IV Vestibular Schwannoma: Therapeutic Concept and Future Perspective. Purpose: Surgery of KOOS IV vestibular schwannoma remains challenging regarding the balance of extent of tumor resection (EoR) and functional outcome. Our aim was to evaluate the outcome of surgical resection and define a cut-off value for safe resection with low risk for tumor regrowth of KOOS IV vestibular schwannoma. Methods: All patients presenting at the authors' institution between 2000 and 2019 with surgically treated KOOS IV vestibular schwannoma were included. Outcome measures included EoR, facial/hearing nerve function, surgical complications and progression of residual tumor during the median follow-up period of 28 months. Results: In 58 patients, mean tumor volume was 17.1 ± 9.2 cm³, and mean EoR of 81.6 ± 16.8% could be achieved. Fifty-one patients were available for the follow-up analysis. Growth of residual tumor was observed in 11 patients (21.6%), followed by adjuvant treatment with stereotactic radiosurgery or repeat surgery in 15 patients (29.4%). Overall serviceable hearing preservation was achieved in 38 patients (74.5%) and good facial outcome at discharge was observed in 66.7% of patients, significantly increasing to 82.4% at follow-up. The independent predictor of residual tumor growth was an EoR ≤ 87% (OR 11.1), with a higher EoR being associated with a very low rate of residual tumor progression, amounting to 7.1% at follow-up (p=0.008). Conclusions: Subtotal tumor resection is a good therapeutic concept in patients with KOOS IV vestibular schwannoma resulting in a high rate of good hearing and facial nerve function and a very low rate of subsequent tumor progression. The goal of surgery should be to achieve more than 87% of tumor resection to keep residual tumor progression low. abstract_id: PUBMED:35475061 Schwannomatosis Presenting With a Grade IV Glioblastoma: A Case Report and Literature Review. Schwannomatosis is a rare subset of neurofibromatosis. It is a disease process with a predisposition to schwannomas in the absence of bilateral vestibular schwannomas, which differentiates it from neurofibromatosis 2 (NF2). It is occasionally associated with certain tumors such as malignant peripheral nerve sheath tumors or rhabdoid tumors. Currently, there is limited literature to suggest an association between schwannomatosis and glioblastoma (GB). We present a case of a 55-year-old female with a history significant for schwannomatosis who presented after a witnessed first-time seizure with left facial weakness and slurred speech.
She was found to have a 3 cm right-sided ring-enhancing lesion that was excised and found to be a grade IV isocitrate dehydrogenase (IDH) wild-type GB. abstract_id: PUBMED:1572811 Fractionated radiation therapy in the treatment of stage III and IV cerebello-pontine angle neurinomas: preliminary results in 20 cases. From January 1986 to March 1989, 20 patients with stage III and IV cerebello-pontine angle neurinomas were treated with external fractionated radiation therapy; seven patients had phacomatosis. Indications for radiation therapy were as follows: (a) poor general condition or old age contraindicating surgery, 10 patients; (b) hearing preservation in bilateral neurinomas after contralateral tumor removal, 5 patients; (c) partial resection or high risk of recurrence after subsequent surgery for relapse, 4 patients; (d) non-surgical relapse, 1 patient. Most patients were irradiated with a 9 MV linear accelerator. A 3- to 4-field technique with 5 × 5 cm portals was used. Doses were calculated on a 95% isodose and were given 5 days a week for a mean total dose of 5140 cGy (180 cGy/fraction). Median follow-up from radiation therapy was 30 months (7 to 46); 4 patients died, 2 with progressive disease. Two patients underwent total tumor removal after radiation therapy (1 stable and 1 growing tumor). On the whole, 14 tumors remained stable, 3 decreased in size, and 3 progressed. CT scan or NMR tumor changes consistent with partial tumor necrosis appeared in four cases. Hearing preservation was obtained in 3/5 hearing patients with phacomatosis. When surgery is not indicated or incomplete, fractionated radiation therapy appears to be an effective and well-tolerated treatment for stage III and IV neurinomas. Hearing can be preserved. abstract_id: PUBMED:32629521 Management of patients with vestibular schwannoma type IV. Background: Vestibular schwannomas (VSs) are benign, slow-growing tumors that are classified by the Koos and Hannover grading scales. Despite extensive study, the management of large VS remains controversial. Patients And Methods: Between 2003 and 2018, 61 patients with VS grade IV were treated in our institution. Patient data and radiological images were retrospectively analyzed. Additionally, we performed a subgroup analysis of patients with and without fourth ventricle distortions. Results: Most patients presented with multiple symptoms, like hearing loss in 55 patients (90 %), trigeminal nerve affection in 16 (26 %), facial nerve affection in 7 (12 %), ataxia in 27 (45 %), and symptoms associated with increased intracranial pressure caused by hydrocephalus in 4 patients (7 %). Patients with type IVb VS presented significantly more often with ataxia, tonsillar herniation, and hydrocephalus. Complete tumor resection was achieved in 48 patients (79 %) and near-total resection was achieved in 12 patients (20 %). During long-term follow-up, 90 % of the patients had favorable outcomes regarding facial nerve function (House-Brackmann grade I-III). In six patients (10 %), a ventriculoperitoneal shunt was inserted. More than 90 % of the patients had a functional status above 70 % on the Karnofsky index. Discussion: Large VS is often associated with hydrocephalus, ataxia, multiple cranial nerve impairments, and clinical signs of increased intracranial pressure. Primary microsurgical resection, as the primary treatment option, is associated with good outcomes in patients with large VS.
abstract_id: PUBMED:35729368 Stereotactic radiosurgery ensures an effective and safe long-term control of Koos grade IV vestibular schwannomas: a single-center, retrospective, cohort study. Purpose: Stereotactic radiosurgery (SRS) is a standard treatment modality for vestibular schwannomas (VSs). However, there is a paucity of data on tumor control and neurological preservation for larger VSs. We aimed to investigate the long-term effectiveness of SRS for Koos grade IV compared with I-III VSs. Methods: We included 452 patients with VSs (50 Koos grade IV and 402 Koos grade I‒III) who were treated with SRS at our institution from 1990 to 2021. Tumor control and functional preservation were calculated using the Kaplan-Meier method and compared between groups with the log-rank test. Results: The median post-SRS follow-up period was 68 months. Progression-free survival rates were 91% at 5 and 10 years for Koos grade IV VSs, and 95% and 92%, respectively, for Koos grade I‒III VSs (p = 0.278). In Koos grade IV VSs, functional preservation rates of the facial and trigeminal nerves were both 96% at 5 years (both 98% for Koos grade I‒III VSs; facial, p = 0.410; trigeminal, p = 0.107). Hearing preservation rates were 61% at 5 years for Koos grade IV VSs and 78% for Koos grade I-III VSs (p = 0.645). Symptomatic transient tumor expansion was more common with Koos grade IV VSs (8.0% vs. 2.5%, p = 0.034), although all related symptoms diminished in accordance with tumor shrinkage. Conclusion: SRS may contribute to long-term tumor control and adequate neurological preservation in the treatment of Koos grade IV VSs, comparable to those in the treatment of Koos grade I‒III VSs. Answer: Yes, Stage IV vestibular schwannomas are preoperatively different from other stages. According to a study that included 734 patients with vestibular schwannomas (VSs), patients with Stage IV VS were significantly younger compared to patients with other stages. The hearing loss in Stage IV was greater at lower frequencies (250 and 500 Hz) but smaller at higher frequencies (2,000 and 8,000 Hz). There was no difference in the loss of pure-tone audiometry (PTA) between Stage IV and the other stages. However, the speech discriminancy score was smaller in Stage IV. The durations of hearing loss, tinnitus, and balance disorders were similar regardless of the tumor stage. Auditory brainstem response (ABR) patterns showed no difference in Wave III latency between Stage IV VS and other stages, but Wave V latency and V-I interval were higher in Stage IV. Both the ABR threshold and the caloric deficit at videonystagmography (VNG) were higher in Stage IV VS compared with other stages. The occurrence of balance disorder was higher in Stage IV compared with all other stages, while the occurrence of tinnitus was lower in Stage IV compared with Stages I and II, despite similar hearing loss across all stages (PUBMED:18046259). These findings suggest that Stage IV VS develops more rapidly in the vestibular nerve rather than the cochlear nerve and that larger tumors can cause cerebellar compression, which is confirmed by the higher occurrence of balance disorders in Stage IV and the lower occurrence of tinnitus with similar hearing loss in all stages (PUBMED:18046259).
Instruction: Ventral hernia repair after bowel surgery: does gastrointestinal contamination matter in the era of biologic mesh? Abstracts: abstract_id: PUBMED:28526657 Risk-adjusted adverse outcomes in complex abdominal wall hernia repair with biologic mesh: A case series of 140 patients. Introduction: Biologic mesh is preferred for repair of complex abdominal wall hernias (CAWHs) in patients at high risk of wound infection. We aimed to identify predictors of adverse outcomes after complex abdominal wall hernia repair (CAWR) using biologic mesh with different placement techniques and under different surgical settings. Methods: A retrospective case series study was conducted on all patients who underwent CAWR with biologic mesh between 2010 and 2015 at a tertiary medical center. Results: The study population included 140 patients with a mean age of 54 ± 14 years and a median follow-up period of 8.8 months. Mesh size ranged from 50 to 1225 cm². Ninety percent of patients had undergone previous surgery. Type of surgery was classified as elective in 50.7%, urgent in 24.3%, and emergent in 25.0%; a porcine mesh was implanted in 82.9%. The most common mesh placement technique was underlay (70.7%), followed by onlay (16.4%) and bridge (12.9%). Complications included wound complications (30.7%), reoperation (25.9%), hernia recurrence (20.7%), and mesh removal (10.0%). Thirty-two patients (23.0%) were admitted to the ICU and the mean hospital length of stay was 10.8 ± 17.5 days. Age- and sex-adjusted predictors of recurrence were COPD (OR 4.2; 95%CI 1.003-17.867) and urgent surgery (OR 10.5; 95%CI 1.856-59.469), whereas for reoperation, mesh size (OR 6.8; 95%CI 1.344-34.495) and urgent surgery (OR 5.2; 95%CI 1.353-19.723) were the predictors. Conclusions: Using biologic mesh, one-quarter and one-fifth of CAWR patients are complicated by reoperation or recurrence, respectively. The operation settings and comorbidity may play a role in these outcomes regardless of the mesh placement techniques. abstract_id: PUBMED:24691329 Ventral hernia repair after bowel surgery: does gastrointestinal contamination matter in the era of biologic mesh? Background: Ventral hernias are often repaired after planned or unplanned bowel procedures. Biologic materials are thought to better tolerate gastrointestinal contamination than synthetic alternatives. The purpose of this review was to evaluate the impact of intestinal contamination on ventral hernia repair with biologic material. Methods: This is a retrospective review of all patients who underwent ventral hernia repair with biologic material between 2003 and 2012. Groups were defined based on performance of concomitant bowel surgery. Data were collected on patient demographics, risk factors, concomitant procedures, mesh type, and outcomes. Results: Of 82 patients included in this series, 32 (39%) had concomitant bowel surgery. Ventral hernia repair was performed predominantly with Alloderm and Strattice. There was no difference in hernia recurrence (contaminated group, 28% vs. non-contaminated group, 34%; P = 0.58), surgical site infections (contaminated, 28% vs. non-contaminated, 20%; P = 0.40), or other complications when patients with and without concomitant bowel surgery were compared. Conclusions: Biologic materials have made it such that ventral hernias can be safely repaired after bowel surgery without increased risk of complications.
abstract_id: PUBMED:36223064 Semiresorbable biologic hybrid meshes for ventral abdominal hernia repair in potentially contaminated settings: lower risk of recurrence. In case of potential contamination, implantation of synthetic meshes in hernia and abdominal wall surgery is problematic due to a higher risk of mesh infection. As an alternative, a variety of different biologic meshes have been used. However, relevant data comparing outcomes after implantation of these meshes are lacking. Between January 2012 and October 2021, biologic meshes were used for reconstruction of the abdominal wall in 71 patients with preoperative or intraoperative abdominal contamination. In this retrospective study, semiresorbable biologic hybrid meshes (BHM) and completely resorbable meshes (CRM) were compared and analyzed using a Castor EDC database. In 28 patients, semiresorbable biologic hybrid meshes were used; in 43 patients, completely resorbable meshes were used. Both groups showed no difference in age, gender, BMI, operation duration, hernia size and Charlson comorbidity index. The risk degree of surgical-site occurrences was graded according to the Ventral Hernia Working Group (VHWG) classification, and the median value was 3 (range 2-4) in the BHM group and 3 (range 2-4) in the CRM group. Hernia recurrence within 24 months after hernia repair was significantly lower in the BHM group (3.6% vs. 28.9%; p = 0.03), while the postoperative complication rate, with respect to seromas in need of therapy (61.4% vs. 55.5%, p = 0.43) and operative revision (28.6% vs. 16.3%, p = 0.22), did not differ between the groups. Biologic hybrid meshes can be used safely in case of possible contamination. BHM seems to reduce the risk of hernia recurrence compared to completely resorbable biologic meshes, but this has to be investigated further. abstract_id: PUBMED:25305799 Assessing complications and cost-utilization in ventral hernia repair utilizing biologic mesh in a bridged underlay technique. Background: The inability to reapproximate fascia in complex ventral hernia (CVH) repair remains challenging. Single-stage bridging reconstructions have been reported, however, with high rates of recurrence and wound complications. We describe a single-surgeon experience with bridging biologic CVH repair. Methods: We reviewed 37 patients undergoing CVH repair with bridging biologic mesh by the senior author from January 1, 2007 to January 1, 2013. Surgical history and operative characteristics were analyzed for predictors of hernia recurrence and wound complications. Results: Average age was 53 ± 15 years, body mass index was 31.1 ± 8.1 kg/m², and 18 patients had a history of prior repair. Common indications were trauma, intra-abdominal infection, and prior intra-abdominal surgery. Incidence of wound complications was 51.4%, most commonly wound breakdown and infection. With average follow-up of 13 months, recurrence rate was 18.9% at an average of 8.2 months postoperatively. Analysis demonstrated postoperative wound infection as the only predictor of recurrence (odds ratio = 22.1, P = .017). Conclusions: Hernia recurrence rate was 18.9% with bridged biologic CVH repairs, strongly associated with postoperative wound infection. This suggests that patients with postoperative infections may benefit from closer surveillance and more aggressive wound management.
abstract_id: PUBMED:32100211 Laparoscopic intraperitoneal onlay mesh (IPOM) repair using n-butyl-2-cyanoacrylate (Liquiband Fix8™) for mesh fixation: learning experience and short-medium term results. Aim: The purpose of this study was to evaluate a novel hernia repair glue fixation device (Liquiband Fix8™) in laparoscopic IPOM repair for incisional hernias. Methods: All ventral incisional hernia patients requiring laparoscopic IPOM repair were included in the study. A retrospective review of the data was performed. Results: One hundred and thirty-seven patients underwent 138 laparoscopic IPOM repairs for incisional hernias using n-butyl-2-cyanoacrylate (Liquiband Fix8™) for mesh fixation over a 40-month period by a single experienced laparoscopic surgeon. There were 70 males and 67 females, median age 54 years (range 23-80 years). Defects were predominantly midline (n = 123/138, 89%) and were closed by transfascial sutures in 126/138 (91%) prior to IPOM repair (IPOM plus). Mesh fixation was successful in all 138 repairs. Synthetic meshes (Symbotex™ [polyester/collagen composite, Medtronic] n = 80, and Dynamesh® [polypropylene/PVDF composite, FEG Textiltechnik mbH] n = 5) and biologic meshes (Surgimend® [bovine collagen dermal matrix, Integra] n = 53) were used. Median follow-up was 32 months (range 12-48 months). There were 16 adverse events in 12 patients (9%): conversion to open surgery (inadvertent enterotomy) 2 (1.5%), bladder perforation 1, intraperitoneal bleeding 2, port-site haematoma 2, port-site wound infection 1, post-op ileus 2, chest infection 1, seroma 1 (1%), hernia recurrence 3 (2%) and chronic pain 2 (1.5%). Conclusion: This retrospective study shows that mesh fixation in laparoscopic IPOM using cyanoacrylate glue with the Liquiband Fix8™ device is feasible, safe, easy to learn, and is associated with a low risk of seroma, hernia recurrence and chronic pain with short-medium term follow-up. abstract_id: PUBMED:28547932 From intraperitoneal onlay mesh repair to preperitoneal onlay mesh repair. Laparoscopic repair for ventral and incisional hernias was first reported in the early 1990s. It uses intraperitoneal only mesh placement to achieve a tension-free repair of the hernia. However, in recent years, there has been greater concern about long-term complications involving intraperitoneal mesh placement. Many case reports and case series have found evidence of mesh adhesion, mesh fistulation, and mesh migration into hollow organs including the esophagus, small bowel, and large bowel, resulting in various major acute abdominal events. Subsequent management of these complications may require major surgery that is technically demanding and difficult; in such cases, laparotomy and bowel resection have often been performed. Because of these significant, but not common, adverse events, many surgeons favor open sublay repair for ventral and incisional hernias. Investigators are therefore searching for a laparoscopic approach for ventral and incisional hernias that might overcome the mesh-induced visceral complications seen after intraperitoneal only mesh placement repair. Laparoscopic preperitoneal onlay mesh is one such approach. This article will explore the fundamentals of intraperitoneal only mesh placement and its problems, the currently available peritoneal visceral-compatible meshes, and upcoming developments in laparoscopic ventral and incisional hernia repair. The technical details of preperitoneal onlay mesh, as well as its potential advantages and disadvantages, will also be discussed.
abstract_id: PUBMED:24229031 A systematic review of outcomes following repair of complex ventral incisional hernias with biologic mesh. Repair of contaminated/infected ventral incisional hernias is marked by high rates of recurrence, complications, and/or explantation of synthetic mesh. Biologic mesh products are recommended for repair to permit reconstruction and reduce complications. A systematic review of PubMed, EMBASE, and Cochrane databases identified English-language articles reporting postoperative outcomes (e.g., hernia recurrence, infection, mesh explantation) in patients undergoing contaminated/infected ventral incisional herniorrhaphy. Eleven studies met inclusion criteria and contained quantitative outcome data. All were retrospective chart reviews of biologic mesh use (mostly human acellular dermal matrix). Hernia recurrence and wound infection rates were highly variable and inconsistently reported across studies. Mesh explantation was rarely reported. Outcome variability is likely owing to heterogeneous patient populations, surgical technique variations, and follow-up duration. Biologic mesh use in contaminated/infected herniorrhaphy was marked by low reported rates of secondary surgeries for infected mesh removal. Data from large, well-controlled, prospective trials with biologic mesh products are needed. abstract_id: PUBMED:29061321 Robotic Parastomal Hernia Repair With Biologic Mesh. Introduction: Parastomal hernia is a common and vexing problem after ileal conduit urinary diversion that can cause pain, bowel symptoms, and problems with the stoma appliance, resulting in leakage, odor, and impairment of quality of life. Historically, these hernias have been managed with open surgery, which requires considerable dissection of the abdominal wall for mesh placement, and may also require relocating the stoma to the contralateral side. Further, open parastomal hernia repair can be a morbid procedure with recurrence rates of approximately 30%. Materials And Methods: We present our technique of robotic parastomal hernia repair with a biologic mesh. We used a biologic mesh with the keyhole technique to avoid risk of erosion into the bowel segment. We present a case series of 3 patients undergoing robotic parastomal hernia repair. The illustrative case in the video is that of a 63-year-old obese woman with a parastomal hernia resulting in poor stoma appliance adhesion and bothersome urinary leakage. Results: Robotic repair was completed successfully without intraoperative complications in all 3 patients. The mean age was 72 years. Mean body mass index was 30 kg/m². Mean operating time was 3.5 hours. Mean estimated blood loss was 7.5 mL. Mean length of hospital stay was 3 days. On follow-up, all 3 patients were asymptomatic, had no stoma appliance issues, and had no recurrence. Conclusion: Our initial series suggests that robotic repair of parastomal hernia is safe and feasible with minimal morbidity and good short-term outcomes. Further studies are required to confirm the efficacy of our technique. abstract_id: PUBMED:27351653 Development of a novel murine model for treatment of infected mesh scenarios. Background: Indications regarding hernia repair after removal of previously infected prostheses remain unclear. Patients may receive staged primary repair or single-stage reconstructions, neither of which may be ideal.
Although animal models have simulated contamination by direct inoculation of implants with bacteria, there remains a paucity of literature simulating the field encountered after mesh infection and removal. We aimed to develop a murine model to mimic this complex scenario to allow for further testing of various implants. Methods: Thirty-six female CL57BL/6J mice underwent implantation of a 0.7 × 0.7 cm polyester mesh in the dorsal subcutaneous position. Wounds were closed and inoculated with 100 µL containing 1 × 10⁴ CFU of GFP-labeled MSSA. After 2 weeks, the infected mesh was removed and the cavity was copiously irrigated with saline. Mice were split into four groups, with three groups receiving new polyester, polypropylene, or porcine mesh and one group remaining as non-mesh controls. Mice were followed for another 2 weeks and then underwent necropsy. Gross infection was evaluated at 2 and 4 weeks. Tissue homogenization and direct plating to recover GFP MSSA were completed at 4 weeks. Results: At 2 weeks, all mice were noted to have gross mesh infection. One animal died due to overwhelming infection and wound breakdown. At 4 weeks, 5/6 (83 %) control mice who did not have a second mesh implantation had full clearance of their wounds. In contrast, 9/10 (90 %) mice with re-implantation of polypropylene were noted to have pus and recovery of GFP MSSA on plating. This was also observed in 100 % of mice with polyester and porcine mesh. Conclusion: Our novel murine model demonstrates that mesh re-implantation after infected mesh removal results in infection of the newly placed prosthesis, regardless of the material characteristic or type. This model lays the foundation for development and investigation of implants for treatment strategies following infected mesh removal. abstract_id: PUBMED:19967621 Transgastric biologic mesh delivery and abdominal wall hernia repair in a porcine model. Background And Study Aims: Incisional abdominal wall hernias currently require repair with open or laparoscopic surgery, which is associated with wound complications and recurrent hernia formation. Natural orifice transluminal endoscopic surgery (NOTES) techniques may have the potential to decrease the morbidity associated with hernia repair. The aim of this study was to repair a chronic ventral hernia with a biologic mesh placed transgastrically in a porcine model. Materials And Methods: Six pigs underwent creation of an incisional abdominal wall hernia. At least 4 weeks later, transgastric repair was done using an underlay biologic mesh with at least 5 cm of overlap from the hernia fascial edge. The mesh was secured with transfascial sutures and the stomach was closed with a sutured gastropexy. Pigs were evaluated 2 weeks later by laparoscopy. Pigs were sacrificed and necropsy was performed 4 weeks after the repair. Results: Six pigs underwent hernia repair lasting a mean (± SD) of 204 ± 123 minutes, with one perioperative death. At 2 weeks after hernia repair, laparoscopy showed significant adhesions in all pigs; one pig had extensive mesh infection and was sacrificed. Necropsy on one pig at 2 weeks and four pigs at 4 weeks showed complete coverage of the hernia defect in all pigs. All pigs had mesh abscesses or a positive mesh culture. Conclusion: Transgastric repair of a chronic ventral hernia is technically feasible. Difficulties with mesh delivery and infection need to be overcome before this approach can be used in humans.
Answer: In the era of biologic mesh, gastrointestinal contamination does not appear to significantly impact the outcomes of ventral hernia repair after bowel surgery. A retrospective review of patients who underwent ventral hernia repair with biologic material, including those with concomitant bowel surgery, found no difference in hernia recurrence or surgical site infections between the contaminated and non-contaminated groups. This suggests that biologic materials can be safely used for ventral hernia repair after bowel surgery without an increased risk of complications (PUBMED:24691329). Moreover, semiresorbable biologic hybrid meshes have been shown to be safely used in potentially contaminated settings, with a significantly lower risk of hernia recurrence compared to completely resorbable biologic meshes. This indicates that biologic hybrid meshes might be a preferable option in cases of potential contamination (PUBMED:36223064). However, it is important to note that while biologic meshes have been associated with lower rates of secondary surgeries for infected mesh removal, the outcomes can be highly variable and are influenced by factors such as patient populations, surgical techniques, and follow-up duration (PUBMED:24229031). In summary, the use of biologic mesh in ventral hernia repair after bowel surgery does not seem to increase the risk of complications, even in the presence of gastrointestinal contamination, making it a viable option in these scenarios.
Instruction: Can syndromic thresholds provide early warning of national influenza outbreaks? Abstracts: abstract_id: PUBMED:18032426 Can syndromic thresholds provide early warning of national influenza outbreaks? Background: Influenza incidence thresholds are used to help predict the likely impact of influenza and inform health professionals and the public of current activity. We evaluate the potential of syndromic data (calls to a UK health helpline NHS Direct) to provide early warning of national influenza outbreaks. Methods: Time series of NHS Direct calls concerning 'cold/flu' and fever syndromes for England and Wales were compared against influenza-like-illness clinical incidence data and laboratory reports of influenza. Poisson regression models were used to derive NHS Direct thresholds. The early warning potential of thresholds was evaluated retrospectively for 2002-06 and prospectively for winter 2006-07. Results: NHS Direct 'cold/flu' and fever calls generally rose and peaked at the same time as clinical and laboratory influenza data. We derived a national 'cold/flu' threshold of 1.2% of total calls and a fever (5-14 years) threshold of 9%. An initial lower fever threshold of 7.7% was discarded as it produced false alarms. Thresholds provided 2 weeks' advance warning of seasonal influenza activity during three of the four winters studied retrospectively, and 6 days' advance warning during prospective evaluation. Conclusion: Syndromic thresholds based on NHS Direct data provide advance warning of influenza circulating in the community. We recommend that age-group-specific thresholds be developed for other clinical influenza surveillance systems in the UK and elsewhere. abstract_id: PUBMED:31063101 Developing influenza and respiratory syncytial virus activity thresholds for syndromic surveillance in England. Influenza and respiratory syncytial virus (RSV) are common causes of respiratory tract infections and place a burden on health services each winter. Systems to describe the timing and intensity of such activity will improve the public health response and deployment of interventions to these pressures. Here we develop early warning and activity intensity thresholds for monitoring influenza and RSV using two novel data sources: general practitioner out-of-hours consultations (GP OOH) and telehealth calls (NHS 111). Moving Epidemic Method (MEM) thresholds were developed for winter 2017-2018. The NHS 111 cold/flu threshold was breached several weeks in advance of other systems. The NHS 111 RSV epidemic threshold was breached in week 41, in advance of RSV laboratory reporting. Combining the use of MEM thresholds with daily monitoring of NHS 111 and GP OOH syndromic surveillance systems provides the potential to alert to threshold breaches in real time. An advantage of using thresholds across different health systems is the ability to capture a range of healthcare-seeking behaviour, which may reflect differences in disease severity. This study also provides a quantifiable measure of seasonal RSV activity, which contributes to our understanding of RSV activity in advance of the potential introduction of new RSV vaccines. abstract_id: PUBMED:33321199 An aberration detection-based approach for sentinel syndromic surveillance of COVID-19 and other novel influenza-like illnesses. Coronavirus Disease 2019 has emerged as a significant global concern, triggering harsh public health restrictions in a successful bid to curb its exponential growth.
As discussion shifts towards relaxation of these restrictions, there is significant concern of second-wave resurgence. The key to managing these outbreaks is early detection and intervention, and yet there is a significant lag time associated with usage of laboratory confirmed cases for surveillance purposes. To address this, syndromic surveillance can be considered to provide a timelier alternative for first-line screening. Existing syndromic surveillance solutions are, however, typically focused around a known disease and have limited capability to distinguish between outbreaks of individual diseases sharing similar syndromes. This poses a challenge for surveillance of COVID-19 as its active periods tend to overlap temporally with other influenza-like illnesses. In this study we explore performing sentinel syndromic surveillance for COVID-19 and other influenza-like illnesses using a deep learning-based approach. Our methods are based on aberration detection utilizing autoencoders that leverage symptom prevalence distributions to distinguish outbreaks of two ongoing diseases that share similar syndromes, even if they occur concurrently. We first demonstrate that this approach works for detection of outbreaks of influenza, which has known temporal boundaries. We then demonstrate that the autoencoder can be trained to not alert on known and well-managed influenza-like illnesses such as the common cold and influenza. Finally, we applied our approach to 2019-2020 data in the context of a COVID-19 syndromic surveillance task to demonstrate how implementation of such a system could have provided early warning of an outbreak of a novel influenza-like illness that did not match the symptom prevalence profile of influenza and other known influenza-like illnesses. abstract_id: PUBMED:29609253 Study on early warning threshold values for 7 common communicable diseases in Gansu province, 2016. Objective: To optimize the warning threshold values of common communicable diseases in Gansu province, and improve the early warning effect. Methods: An early warning model was set up for influenza, scarlet fever, other infectious diarrheal diseases, dysentery, typhoid and paratyphoid, viral hepatitis type E and hand foot and mouth disease (HFMD) respectively in Gansu by using the moving percentile method and cumulative sum method. By calculating the sensitivity, specificity, predictive value of positive test, predictive value of negative test, Youden's index and receiver-operating characteristic curve, the optimum early warning threshold values for communicable diseases in Gansu were selected. Results: The optimum early warning boundary values of influenza, scarlet fever, other infectious diarrheal diseases, dysentery, typhoid and paratyphoid, and viral hepatitis type E were P(90), P(80), P(95), P(90), P(80) and P(90) respectively. The optimum early warning parameters of HFMD were k=1.2, H=5σ. Under the optimum early warning boundary values/parameters, the early warning sensitivities of influenza, scarlet fever, other infectious diarrheal diseases, dysentery, typhoid and paratyphoid, viral hepatitis type E and HFMD were 86.67%, 100.00%, 91.67%, 100.00%, 100.00%, 100.00% and 100.00%, and the specificities were 86.49%, 62.22%, 75.00%, 100.00%, 97.92%, 89.13% and 74.47%.
The predictive values of positive test were 72.22%, 29.17%, 52.38%, 100.00%, 80.00%, 54.55% and 29.41%, and the predictive values of negative test were 94.12%, 100.00%, 96.77%, 100.00%, 100.00%, 100.00% and 100.00%, and the Youden's indexes were 0.73, 0.62, 0.67, 1.00, 0.98, 0.89 and 0.74. The receiver-operating characteristic curves showed that these warning boundary values/parameters were the points closest to the upper left of the coordinate diagram. Conclusion: The early warning thresholds of influenza, other infectious diarrheal diseases, dysentery and hepatitis E in Gansu may be raised appropriately and the early warning parameters of HFMD need to be adjusted to improve the effectiveness of early warning. abstract_id: PUBMED:20028535 Early detection of influenza outbreaks using the DC Department of Health's syndromic surveillance system. Background: Since 2001, the District of Columbia Department of Health has been using an emergency room syndromic surveillance system to identify possible disease outbreaks. Data are received from a number of local hospital emergency rooms and analyzed daily using a variety of statistical detection algorithms. The aims of this paper are to characterize the performance of these statistical detection algorithms in rigorous yet practical terms in order to identify the optimal parameters for each and to compare the ability of two syndrome definition criteria and data from a children's hospital versus other hospitals to determine the onset of seasonal influenza. Methods: We first used a fine-tuning approach to improve the sensitivity of each algorithm to detecting simulated outbreaks and to identifying previously known outbreaks. Subsequently, using the fine-tuned algorithms, we examined (i) the ability of unspecified infection and respiratory syndrome categories to detect the start of the flu season and (ii) how well data from Children's National Medical Center (CNMC) did versus all the other hospitals when using unspecified infection, respiratory, and both categories together. Results: Simulation studies using the data showed that over a range of situations, the multivariate CUSUM algorithm performed more effectively than the other algorithms tested. In addition, the parameters that yielded optimal performance varied for each algorithm, especially with the number of cases in the data stream. In terms of detecting the onset of seasonal influenza, only "unspecified infection," especially the counts from CNMC, clearly delineated influenza outbreaks out of the eight available syndromic classifications. In three of five years, CNMC consistently flagged earlier (from 2 days up to 2 weeks earlier) than a multivariate analysis of all other DC hospitals. Conclusions: When practitioners apply statistical detection algorithms to their own data, fine-tuning of parameters is necessary to improve overall sensitivity. With fine-tuned algorithms, our results suggest that emergency room-based syndromic surveillance focusing on unspecified infection cases in children is an effective way to determine the beginning of an influenza outbreak and could serve as a trigger for more intensive surveillance efforts and for initiating infection control measures in the community. abstract_id: PUBMED:30338927 Detecting early-warning signals of influenza outbreak based on dynamic network marker. The seasonal outbreaks of influenza infection cause respiratory illness, or even death, in all age groups globally.
Given early-warning signals preceding the influenza outbreak, timely interventions such as vaccination and isolation management can effectively decrease morbidity. However, real-time prediction of influenza outbreaks is usually difficult because their complexity intertwines both biological and social systems. By exploring rich dynamical and high-dimensional information, our dynamic network marker/biomarker (DNM/DNB) method opens a new way to identify the tipping point prior to the catastrophic transition into an influenza pandemic. To detect the early-warning signals before the influenza outbreak by applying the DNM method, historical information on clinical hospitalizations caused by influenza infection between 2009 and 2016 was extracted and assembled from public records of Tokyo and Hokkaido, Japan. The early-warning signal, with an average lead of a 4-week window prior to each seasonal outbreak of influenza, was provided by the DNM based on the hospitalization records, providing an opportunity to apply proactive strategies to prevent or delay the onset of influenza outbreaks. Moreover, the study of the dynamical changes in hospitalization in local district networks unveils the influenza transmission dynamics, or landscape, at the network level. abstract_id: PUBMED:16177702 Evaluation of syndromic surveillance based on National Health Service Direct derived data--England and Wales. Introduction: Syndromic surveillance systems might serve as an early warning to detect outbreaks of infectious diseases and chemical poisoning, including those caused by deliberate release. In England and Wales, data from National Health Service (NHS) Direct, a national telephone health advice service, were used for surveillance of 10 syndromes commonly occurring in the community. Objectives: The objective of this study was to evaluate NHS Direct syndromic surveillance using the "Framework for Evaluating Public Health Surveillance Systems for Early Detection of Outbreaks", published by CDC. Methods: Quantitative and qualitative assessments were performed. Examination of daily data flow was used to determine the timeliness and data quality. Validity was determined by comparing NHS Direct surveillance with a well-established clinical-based surveillance system using a time series analysis. Semistructured interviews of main stakeholders were conducted to determine usefulness, flexibility, acceptability, portability, stability, and system costs. Results: NHS Direct syndromic surveillance has representative national coverage, provides near real-time recording and data analysis, and can potentially detect high-risk, large-scale events. Direct costs are low and variable costs are unpredictable. Flexibility depends on urgency of the need for change, and portability relies on the existence of infrastructure similar to NHS Direct. Statistically significant correlation exists between NHS Direct surveillance and a surveillance system based on the Royal College of General Practitioners data for influenza-like illness. Conclusion: The CDC framework is a useful tool to standardize the evaluation of syndromic surveillance. NHS Direct syndromic surveillance is timely, representative, useful, and acceptable with low marginal costs and borderline flexibility and portability. Cross-correlation time series modeling might represent an appropriate method in the evaluation of syndromic surveillance validity.
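Several of the abstracts above (the Gansu threshold study and the DC Department of Health evaluation) rely on cumulative sum (CUSUM) control charts for aberration detection. For orientation only, the following is a minimal one-sided CUSUM sketch in Python. The baseline estimation, the reset-after-alarm rule, and the toy data are simplifying assumptions of ours, not the published implementations; the parameters k=1.2 and h=5 merely echo the HFMD settings reported for Gansu and would need local tuning.

```python
import numpy as np

def cusum_alerts(counts, k=1.2, h=5.0):
    """One-sided upper CUSUM over standardized syndromic counts.

    counts : 1-D array of daily counts (e.g., ILI calls or visits).
    k, h   : reference value and decision interval, in SD units.
    """
    x = np.asarray(counts, dtype=float)
    mu, sigma = x.mean(), x.std(ddof=1)   # in practice, use an outbreak-free baseline
    z = (x - mu) / sigma
    s, alerts = 0.0, []
    for t, zt in enumerate(z):
        s = max(0.0, s + zt - k)          # accumulate excess above the reference value
        if s > h:
            alerts.append(t)              # decision interval breached -> warning signal
            s = 0.0                       # reset after alarm (one common convention)
    return alerts

# Toy check: a flat Poisson baseline with an injected outbreak from day 40.
rng = np.random.default_rng(0)
series = rng.poisson(20, 60).astype(float)
series[40:50] += np.linspace(5, 30, 10)
print(cusum_alerts(series))               # expect alerts shortly after day 40
```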
abstract_id: PUBMED:32304150 Performance of Bayesian outbreak detection algorithm in the syndromic surveillance of influenza-like illness in small region. Early warning of infectious disease outbreaks is an important public health policy concern, and finding a reliable method for early warning remains an active field of research. The purpose of this study was to evaluate the performance of the Bayesian outbreak detection algorithm in the surveillance of influenza-like illness in small regions. The Bayesian outbreak detection algorithm (BODA) and modified cumulative sum control chart algorithm (CUSUM) were applied to daily counts of influenza-like illness in Tehran, Iran. We used data from September 2016 through August 2017 to provide background counts for the algorithms, and data from September 2017 through August 2018 were used for testing the algorithms. The performances of the BODA and modified CUSUM algorithms were compared with the results coming from experts' signal inspections. The syndromic surveillance data for influenza-like illness in Tehran had a median daily count of 7 (IQR = 3-14). The data showed significant seasonal trends and holiday and day-of-the-week effects. The utility of the BODA algorithm in real-time detection of the influenza outbreak was better than that of the modified CUSUM algorithm. Moreover, the best performance was achieved when a trend was included in the analysis. The BODA algorithm was able to detect the influenza outbreaks with a 4-5 day delay and the fewest false-positive alarms. Applying the BODA algorithm as an outbreak detection method in influenza-like syndromic surveillance might be useful in early detection of outbreaks in small regions. abstract_id: PUBMED:28606078 How to select a proper early warning threshold to detect infectious disease outbreaks based on the China infectious disease automated alert and response system (CIDARS). Background: China Centre for Diseases Control and Prevention (CDC) developed the China Infectious Disease Automated Alert and Response System (CIDARS) in 2005. The CIDARS was used to strengthen infectious disease surveillance and aid in the early warning of outbreaks. The CIDARS has been integrated into the routine outbreak monitoring efforts of the CDC at all levels in China. The early warning threshold is crucial for outbreak detection in the CIDARS, but CDCs at all levels are currently using thresholds recommended by the China CDC, and these recommended thresholds have recognized limitations. Our study therefore seeks to explore an operational method to select the proper early warning threshold according to the epidemic features of local infectious diseases. Methods: The data used in this study were extracted from the web-based Nationwide Notifiable Infectious Diseases Reporting Information System (NIDRIS), and data for infectious disease cases were organized by calendar week (1-52) and year (2009-2015) in Excel format; Px was calculated using a percentile-based moving window (moving window [5 week*5 year], x), where x represents one of 12 centiles (0.40, 0.45, 0.50, …, 0.95). Outbreak signals for the 12 Px were calculated using the moving percentile method (MPM) based on data from the CIDARS. When the outbreak signals generated by the 'mean + 2SD' gold standard were in line with a Px generated outbreak signal for each week during the year of 2014, this Px was then defined as the proper threshold for the infectious disease.
Finally, the performance of the newly selected thresholds for each infectious disease was evaluated by simulated outbreak signals based on 2015 data. Results: Six infectious diseases were selected in this study (chickenpox, mumps, hand, foot and mouth disease (HFMD), scarlet fever, influenza and rubella). Proper thresholds for chickenpox (P75), mumps (P80), influenza (P75), rubella (P45), HFMD (P75), and scarlet fever (P80) were identified. The selected proper thresholds for these 6 infectious diseases could detect almost all simulated outbreaks within a shorter time period compared to thresholds recommended by the China CDC. Conclusions: It is beneficial to select the proper early warning threshold to detect infectious disease aberrations based on characteristics and epidemic features of local diseases in the CIDARS. abstract_id: PUBMED:29699520 A methodological framework for the evaluation of syndromic surveillance systems: a case study of England. Background: Syndromic surveillance complements traditional public health surveillance by collecting and analysing health indicators in near real time. The rationale of syndromic surveillance is that it may detect health threats faster than traditional surveillance systems, permitting more timely, and hence potentially more effective, public health action. The effectiveness of syndromic surveillance largely relies on the methods used to detect aberrations. Very few studies have evaluated the performance of syndromic surveillance systems and consequently little is known about the types of events that such systems can and cannot detect. Methods: We introduce a framework for the evaluation of syndromic surveillance systems that can be used in any setting based upon the use of simulated scenarios. For a range of scenarios this allows the time and probability of detection to be determined and uncertainty is fully incorporated. In addition, we demonstrate how such a framework can model the benefits of increases in the number of centres reporting syndromic data and also determine the minimum size of outbreaks that can or cannot be detected. Here, we demonstrate its utility using simulations of national influenza outbreaks and localised outbreaks of cryptosporidiosis. Results: Influenza outbreaks are consistently detected, with larger outbreaks being detected in a more timely manner. Small cryptosporidiosis outbreaks (<1000 symptomatic individuals) are unlikely to be detected. We also demonstrate the advantages of having multiple syndromic data streams (e.g. emergency attendance data, telephone helpline data, general practice consultation data) as different streams are able to detect different outbreak types with different efficacy (e.g. emergency attendance data are useful for the detection of pandemic influenza but not for outbreaks of cryptosporidiosis). We also highlight that for any one disease, the utility of data streams may vary geographically, and that the detection ability of syndromic surveillance varies seasonally (e.g. an influenza outbreak starting in July is detected sooner than one starting later in the year). We argue that our framework constitutes a useful tool for public health emergency preparedness in multiple settings. Conclusions: The proposed framework allows the exhaustive evaluation of any syndromic surveillance system and constitutes a useful tool for emergency preparedness and response. Answer: Yes, syndromic thresholds can provide early warning of national influenza outbreaks.
Studies have shown that syndromic data, such as calls to health helplines like the UK's NHS Direct, can be used to derive thresholds that give advance warning of influenza activity in the community. For instance, a study found that NHS Direct 'cold/flu' and fever calls generally rose and peaked at the same time as clinical and laboratory influenza data, and thresholds derived from these data provided 2 weeks' advance warning during three of the four winters studied retrospectively and 6 days' advance warning during prospective evaluation (PUBMED:18032426). Another study developed early warning and activity intensity thresholds for monitoring influenza using general practitioner out-of-hours consultations (GP OOH) and telehealth calls (NHS 111). The NHS 111 cold/flu threshold was breached several weeks in advance of other systems, indicating its potential to alert to threshold breaches in real time (PUBMED:31063101). Moreover, an aberration detection-based approach using deep learning has been explored for sentinel syndromic surveillance of COVID-19 and other influenza-like illnesses. This method leverages symptom prevalence distributions to distinguish outbreaks of two ongoing diseases that share similar syndromes, even if they occur concurrently, demonstrating the potential for early warning of novel influenza-like illness outbreaks (PUBMED:33321199). Additionally, studies have evaluated the performance of syndromic surveillance systems and developed methodological frameworks to determine the time and probability of detection for various scenarios, including national influenza outbreaks. These evaluations help in understanding the types of events that syndromic surveillance systems can detect and the effectiveness of different data streams in providing early warnings (PUBMED:29699520). In conclusion, syndromic thresholds, when properly developed and evaluated, can serve as an effective tool for providing early warning of national influenza outbreaks, allowing for timely public health responses.
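To make the threshold-selection logic described above concrete, here is a minimal sketch of the percentile-based moving window reported by the CIDARS study (PUBMED:28606078): the threshold for a given calendar week is the x-th centile of the counts in the same week ±2 weeks across the preceding five years. The data layout and the toy numbers are our own simplifying assumptions, and the sketch omits the study's signal-comparison step for choosing x.

```python
import numpy as np

def moving_percentile_threshold(history, week, x=75):
    """Early-warning threshold from a 5-week x 5-year moving window.

    history : array of shape (5, 52), weekly counts for five prior years.
    week    : current calendar week, 0-based (0..51).
    x       : centile; the CIDARS study reports, e.g., P75 for influenza
              and HFMD, and P80 for mumps and scarlet fever.
    """
    cols = [(week + offset) % 52 for offset in (-2, -1, 0, 1, 2)]
    window = history[:, cols].ravel()      # 25 historical values
    return np.percentile(window, x)

# Toy usage: does this week's count breach the P75 threshold?
rng = np.random.default_rng(1)
history = rng.poisson(30, size=(5, 52))
threshold = moving_percentile_threshold(history, week=10, x=75)
print(threshold, 55 > threshold)           # True would generate an outbreak signal
```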
Instruction: Immediate and long-term survival after intra-aortic balloon pumping: is advanced age an independent, unfavorable prognostic factor? Abstracts: abstract_id: PUBMED:7706629 Immediate and long-term survival after intra-aortic balloon pumping: is advanced age an independent, unfavorable prognostic factor? Objective: To determine whether advanced age is an independent prognostic factor that may increase the risk of complications and reduce the immediate and long-term survival after treatment with intra-aortic balloon pumping (IABP) for acute ischemic heart disease. Design: Retrospective analysis of a clinical series and long-term follow-up study. Patients: A total of 142 patients (age range 23-86 years) in whom IABP was indicated following a standardized protocol. Measurements: Univariate analysis of the association between age and anamnestic, clinical, and hemodynamic data significantly affecting prognosis; multivariate logistic regression analysis of variables showing significant associations. Results: IABP-related complications were independent of age. At univariate analysis, pre-existing, noncardiac, associated conditions (chronic renal failure, stroke, bronchopulmonary disease) (P < .001), an indication for IABP for hemodynamic complications of acute myocardial infarction rather than for medically refractory myocardial ischemia (P < .001), a lack of hemodynamic response to IABP (P < .002), a prolonged delay to IABP initiation (P < .003), and advanced age (P < .025) were all predictors of in-hospital mortality. However, older patients were also more frequently affected by chronic comorbid conditions and by acute left ventricular failure as an indication for IABP. When these variables were entered into logistic regression models, the association between age and mortality was no longer statistically significant. Similarly, advancing age was associated (P < .012) with an increased mortality rate during an average follow-up of 94 +/- 9 months, but, again, such a significant association disappeared when simultaneously adjusting for New York Heart Association functional class, which was the single most important predictor of long-term mortality. Conclusions: Several anamnestic and clinical variables, rather than advanced age itself, are independent, prognostic factors negatively affecting the outcome of treatment with IABP. Technique-related complications are independent of age. Thus, old age should not be considered as a criterion to exclude any patient, a priori, from the potential benefits of this type of cardiocirculatory assistance. abstract_id: PUBMED:3695583 Intra-aortic balloon pumping support in surgically treated valvular heart disease: prognostic factors and survival. Prognostic factors of Low Output Syndrome (LOS) requiring operative circulatory support by intra-aortic balloon-pumping (IABP) counterpulsation were analysed in a population of 841 patients who underwent heart valve replacement surgery between June 1977 and May 1985. The incidence of IABP circulatory support was 6.8%. Mean survival time of patients who needed IABP was 2.35 years +/- 0.425 vs 6.30 years +/- 0.103 for patients who did not have this complication. A multivariate analysis using the logistic model was done to pinpoint factors predictive of IABP support. The prognostic factors were pre-operative functional class, presence of prior valve replacement, presence of concomitant surgery and presence of endocarditic etiology.
The subgroup of 58 patients undergoing balloon counterpulsation was analysed for factors predictive of survival using a multivariate analysis based on the Cox model. Presence of aortic or mitral regurgitation was found to be the independent risk factor for mortality. Using a combination of prognostic factors, we pinpointed groups of patients at high risk of needing post-operative balloon pump counterpulsation support. The necessity of an alternative procedure for the mechanical support of the failing circulation is underlined. abstract_id: PUBMED:19515719 Intra-aortic balloon counterpulsation as a bridge to heart transplantation does not impair long-term survival. Aims: There are few studies of the use of intra-aortic balloon pump (IABP) treatment as a bridge to heart transplantation (HTx). This is the first study to compare long-term clinical and haemodynamic outcomes in IABP-treated HTx patients and electively transplanted patients. Methods And Results: This was a retrospective study of all adult HTx recipients between 2001 and 2007. Thirty-two patients (aged 50 +/- 13 years) treated with IABP, as a bridge to HTx due to severe hypo-perfusion, were compared with 135 electively transplanted patients (aged 54 +/- 11 years). The mean time from onset of IABP to HTx was 21 +/- 16 days. Clinical condition improved during IABP treatment. Serum creatinine decreased from 128 +/- 56 to 102 +/- 29 micromol/L (P < 0.01), aspartate transaminase from 682 +/- 1299 to 63 +/- 89 U/L (P = 0.01), and ALAT from 483 +/- 867 to 126 +/- 284 U/L (P = 0.02). Intra-aortic balloon pump treatment-related complications were few. Mortality was similar in the IABP and control groups at 30 days post-HTx (6.2 vs. 3.7%, P = 0.54), at 1 year (9.4 vs. 11.1%, P = 0.80), and beyond. Long-term clinical and haemodynamic indices were similar in the two groups. Conclusion: Intra-aortic balloon pump treatment stabilizes patients in end-stage heart failure, is safe, well tolerated, and is successful in bridging acutely decompensated patients to transplantation. Complications are few and manageable. Following IABP and HTx, short- and long-term survival, biochemical and invasive and non-invasive haemodynamic outcomes were similar to those in electively transplanted patients. abstract_id: PUBMED:12918526 Short- and long-term hemodynamic effects of intra-aortic balloon support in ventricular septal defect complicating acute myocardial infarction. In patients with an infarct-related ventricular septal defect, an intra-aortic balloon pump provides immediate and long-term hemodynamic improvement, resulting in an enhanced effective cardiac output and a reduced left-to-right shunt and shunt flow ratio. In patients who can be stabilized or remain stable, there is no habituation to the effects of the intra-aortic balloon pump; thus, later surgical closure of the ventricular septal defect might be possible in some patients. abstract_id: PUBMED:31476881 Intra-aortic balloon pump in acute chest pain and cardiogenic shock - a long-term follow-up. Objectives. Coronary revascularisation and intra-aortic balloon pump (IABP) have been considered the gold standard treatment of acute coronary syndrome with cardiogenic shock, recently challenged by the SHOCK II study. The aim of this non-randomised study was to investigate the long-term prognosis after immediate IABP-supported angiography in patients with acute chest pain and cardiogenic shock, treated with percutaneous coronary intervention (PCI), cardiac surgery or optimal medical treatment.
We assessed data from 281 consecutive patients admitted to our department from 2004 to 2010. Results. Mean (±SD) age was 63.8 ± 11.5 (range 30-84) years with a follow-up of 5.6 ± 4.4 (0-12.7) years. Acute myocardial infarction was the primary diagnosis in 93% of the patients, 4% presented with unstable angina pectoris and 3% with cardiomyopathy or arrhythmias of non-ischemic aetiology. Systolic blood pressure at admittance was 85 ± 18 mmHg and diastolic 55 ± 18 mmHg. Thirty-day, one- and five-year survival was 71.2%, 67.3% and 57.7%, respectively. PCI was performed immediately in 70%, surgery was done in 17%, and 13% were not eligible for any revascularisation. Independent variables predicting mortality were medical treatment vs revascularisation, out-of-hospital cardiac arrest, and advanced age. Three serious non-fatal complications occurred due to IABP treatment, i.e. 0.001 per treatment day. Conclusions. We report the use of IABP in patients with acute chest pain admitted for angiography. Long-term survival is acceptable and discriminating factors were no revascularisation, out-of-hospital cardiac arrest and age. IABP was safe and feasible and the complication rate was low. abstract_id: PUBMED:26159743 A 6-Year Single-Center Experience of Intra-aortic Balloon Pump Treatment--Retrospective Analysis of 223 Patients. Objectives: This study describes the long-term survival (1 year) of patients after intra-aortic balloon pump (IABP) treatment. Design: A single-center, retrospective registry study. Setting: Single university hospital intensive care unit. Participants: Participants included 223 consecutive patients who received intra-aortic balloon pump (IABP) treatment between January 1, 2005, and December 31, 2010 (203 cardiac surgical patients [7.8% of all cardiac surgical patients] and 20 conservative medical patients). Interventions: IABP treatment was used as an adjunct therapy for severe hemodynamic compromise perioperatively in cardiac surgery and in hemodynamic shock of other etiology. Measurements And Main Results: Mean age of patients was 66±10 years. The 30-day mortality was 22.1% for surgical patients and 37.8% for conservative medical patients. The 1-year mortality was 24.5% for surgical patients and 55% for conservative medical patients. There were 14 (6.3%) major complications related to IABP treatment that required operative treatment. The 1-year survival for patients surviving the initial 30 days after coronary artery bypass grafting was 95.2%. Independent predictors of 1-year mortality (Cox regression) were age, previous cardiac surgery, critical preoperative state, and conservative medical treatment. Timing (preoperative, intraoperative, postoperative) of IABP treatment did not predict survival. Conclusions: IABP treatment was safe with few serious complications. Timing of IABP placement was not related to survival. Survival at 1-year follow-up was excellent after an initially high mortality for cardiac surgical patients treated for hemodynamic compromise with aortic counterpulsation. abstract_id: PUBMED:7594080 Balloon aortic valvuloplasty in adults: failure of procedure to improve long-term survival. Objectives: This study sought to determine the long-term outcome of adult patients undergoing percutaneous balloon aortic valvuloplasty. Background: Percutaneous balloon aortic valvuloplasty has been offered as an alternative to aortic valve replacement for selected patients with valvular aortic stenosis.
Although balloon aortic valvuloplasty produces an immediate reduction in the transvalvular aortic gradient, a high incidence of restenosis frequently leads to recurrent symptoms. Therefore, it is unclear whether balloon aortic valvuloplasty impacts on the long-term outcome of these patients. Methods: Clinical, hemodynamic and echocardiographic data were collected at baseline in 165 patients undergoing balloon aortic valvuloplasty and examined for their ability to predict long-term outcome. Results: The median duration of follow-up was 3.9 years (range 1 to 6). Ninety-nine percent follow-up was achieved. During this 6-year period, 152 patients (93%) died or underwent aortic valve replacement, and 99 (60%) died of cardiac-related causes. The probability of event-free survival (freedom from death, aortic valve replacement or repeat balloon aortic valvuloplasty) 1, 2 and 3 years after valvuloplasty was 40%, 19% and 6%, respectively. In contrast, the probability of survival 3 years after balloon aortic valvuloplasty in a subset of 42 patients who underwent subsequent aortic valve replacement was 84%. Survival after aortic valvuloplasty was poor regardless of the presenting symptom, but patients with New York Heart Association functional class IV congestive heart failure had events earliest. Univariable predictors of decreased event-free survival were younger age, advanced congestive heart failure symptoms, lower ejection fraction, elevated left ventricular end-diastolic pressure, presence of coronary artery disease and increased left ventricular internal diastolic diameter. Stepwise multivariable logistic regression analysis found that only younger age and a lower left ventricular ejection fraction contributed independent adverse prognostic information (chi-square 14.89, p = 0.0006). Conclusions: Long-term event-free and actuarial survival after balloon aortic valvuloplasty is dismal and resembles the natural history of untreated aortic stenosis. Aortic valve replacement may be performed in selected subjects with good results. However, the prognosis for the remainder of patients who are not candidates for aortic valve replacement is particularly poor. abstract_id: PUBMED:10769995 Acute aortic thrombosis after intra-aortic balloon pumping. Acute aortic occlusion is a rare but devastating complication of intra-aortic balloon pumping. Prompt diagnosis of aortic occlusion and immediate therapeutic decision making are key to saving patients in such cases. We describe a 72-year-old man who underwent emergency surgical removal of a thrombus and aortoiliac bifurcated grafting for acute aortic occlusion after withdrawal of the intra-aortic balloon pumping catheter. The presence of a small unrecognized abdominal aortic aneurysm was considered to be the cause of thrombosis in this patient. Ultrasonographic screening for an abdominal aortic aneurysm is recommended for patients who require intra-aortic balloon pumping support. abstract_id: PUBMED:10503635 Aortic balloon entrapment complicating intra-aortic balloon counterpulsation. Short-term balloon counterpulsation may be complicated by aortic balloon entrapment. We report a patient in whom video-coronary surgery was performed using beating heart anastomosis. This patient developed intra-aortic balloon (IAB) entrapment 4 days after initiation of counterpulsation, which had been instituted preoperatively for a low ejection fraction.
No limb ischaemia was detected, but urgent removal of the balloon catheter is mandatory if blood is detected in the tubing connecting the balloon to the console, even in the presence of adequate function of the IAB assistance. abstract_id: PUBMED:27904447 Long-term survival in glioblastoma: methyl guanine methyl transferase (MGMT) promoter methylation as independent favourable prognostic factor. Background: In spite of significant improvement after multi-modality treatment, the prognosis of most patients with glioblastoma remains poor. Standard clinical prognostic factors (age, gender, extent of surgery and performance status) do not clearly predict long-term survival. The aim of this case-control study was to evaluate immuno-histochemical and genetic characteristics of the tumour as additional prognostic factors in glioblastoma. Patients And Methods: The long-term survivor group comprised 40 patients with glioblastoma who survived longer than 30 months. The control group comprised 40 patients with shorter survival, matched to the long-term survivor group according to the clinical prognostic factors. All patients underwent multimodality treatment with surgery, postoperative conformal radiotherapy and temozolomide during and after radiotherapy. Biopsy samples were tested for the methylation of MGMT promoter (with methylation-specific polymerase chain reaction), IDH1 (with immunohistochemistry), IDH2, CDKN2A and CDKN2B (with multiplex ligation-dependent probe amplification), and 1p and 19q mutations (with fluorescent in situ hybridization). Results: Methylation of MGMT promoter was found in 95% and in 36% in the long-term survivor and control groups, respectively (p < 0.001). IDH1 R132H mutated patients had a non-significant lower risk of dying from glioblastoma (p = 0.437), in comparison to patients without this mutation. Other mutations were rare, with no significant difference between the two groups. Conclusions: Molecular and genetic testing offers additional prognostic and predictive information for patients with glioblastoma. The most important finding of our analysis is that in the absence of MGMT promoter methylation, long-term survival is very rare. For patients without this mutation, alternative treatments should be explored. Answer: The retrospective analysis of a clinical series and long-term follow-up study described in abstract PUBMED:7706629 suggests that advanced age is not an independent prognostic factor that increases the risk of complications or reduces immediate and long-term survival after treatment with intra-aortic balloon pumping (IABP) for acute ischemic heart disease. Although advanced age was initially associated with increased in-hospital mortality and increased mortality rate during an average follow-up of 94 +/- 9 months, these associations disappeared when adjusting for other variables such as chronic comorbid conditions, acute left ventricular failure, and New York Heart Association functional class. The study concludes that several anamnestic and clinical variables, rather than advanced age itself, are independent prognostic factors negatively affecting the outcome of treatment with IABP. Technique-related complications were found to be independent of age, indicating that old age should not be considered a criterion to exclude patients from the potential benefits of IABP cardiocirculatory assistance.
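The reasoning in this answer rests on a standard epidemiological manoeuvre: a univariate association between age and mortality can vanish once comorbidity enters a multivariate logistic model. As a purely illustrative sketch (the data, variable names, and effect sizes below are invented, not taken from the study), this is what that adjustment looks like with statsmodels:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500
age = rng.normal(65, 10, n)
# Comorbidity is more common in older patients ...
comorbidity = rng.binomial(1, 1.0 / (1.0 + np.exp(-(age - 65) / 5)))
# ... and mortality is driven by comorbidity alone, not by age itself.
death = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-2.0 + 1.5 * comorbidity))))

# Univariate model: age appears "significant" because it proxies comorbidity.
uni = sm.Logit(death, sm.add_constant(age)).fit(disp=0)
# Multivariate model: the age coefficient collapses towards zero.
multi = sm.Logit(death, sm.add_constant(np.column_stack([age, comorbidity]))).fit(disp=0)
print(uni.pvalues)    # age typically significant
print(multi.pvalues)  # age typically no longer significant once adjusted
```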
Instruction: An observational near-infrared spectroscopy study on cerebral autoregulation in post-cardiac arrest patients: time to drop 'one-size-fits-all' hemodynamic targets? Abstracts: abstract_id: PUBMED:25769511 An observational near-infrared spectroscopy study on cerebral autoregulation in post-cardiac arrest patients: time to drop 'one-size-fits-all' hemodynamic targets? Aims: A subgroup of patients with ROSC after cardiac arrest (CA) with disturbed cerebral autoregulation might benefit from higher mean arterial pressures (MAP). We aimed to (1) phenotype patients with disturbed autoregulation, (2) investigate whether these patients have a worse prognosis, (3) define an individual optimal MAP per patient and (4) investigate whether time under this individual optimal MAP is associated with outcome. Methods: Prospective observational study in 51 post-CA patients monitored with near-infrared spectroscopy. Results: (1) 18/51 patients (35%) had disturbed autoregulation. Phenotypically, a higher proportion of patients with disturbed autoregulation had pre-CA hypertension (31±47 vs. 65±49%, p=0.02), suggesting that right shifting of autoregulation is caused by chronic adaptation of cerebral blood flow to higher blood pressures. (2) In multivariate analysis, patients with preserved autoregulation (n=33, 65%) had a significantly higher 180-day survival rate (OR 4.62, 95% CI [1.06:20.06], p=0.04). Based on an index of autoregulation (COX), the average COX-predicted optimal MAP was 85 mmHg in patients with preserved and 100 mmHg in patients with disturbed autoregulation. (3) An individual optimal MAP could be determined in 33/51 patients. (4) The time under the individual optimal MAP was negatively associated with survival (OR 0.97, 95% CI [0.96:0.99], p=0.02). The time under previously proposed fixed targets (65, 70, 75, 80 mmHg) was not associated with a differential survival rate. Conclusion: Cerebral autoregulation was shown to be disturbed in 35% of post-CA patients, a majority of whom had pre-CA hypertension. Disturbed cerebral autoregulation within the first 24h after CA is associated with a worse outcome. In contrast to uniform MAP goals, the time spent under a patient-tailored optimal MAP, based on an index of autoregulation, was negatively associated with survival. abstract_id: PUBMED:33063026 Near-Infrared Spectroscopy to Assess Cerebral Autoregulation and Optimal Mean Arterial Pressure in Patients With Hypoxic-Ischemic Brain Injury: A Prospective Multicenter Feasibility Study. We provide preliminary multicenter data to suggest that recruitment and collection of physiologic data necessary to quantify cerebral autoregulation and individualized blood pressure targets are feasible in post-cardiac arrest patients. We evaluated the feasibility of a multicenter protocol to enroll patients across centers, as well as collect continuous recording (≥ 80% of monitoring time) of regional cerebral oxygenation and mean arterial pressure, which is required to quantify cerebral autoregulation, using the cerebral oximetry index, and individualized optimal mean arterial pressure thresholds.
Additionally, we conducted an exploratory analysis to assess whether an increased percentage of monitoring time where mean arterial pressure was greater than or equal to 5 mm Hg below optimal mean arterial pressure, percentage of monitoring time with dysfunctional cerebral autoregulation (i.e., cerebral oximetry index ≥ 0.3), and time to return of spontaneous circulation were associated with an unfavorable neurologic outcome (i.e., 6-mo Cerebral Performance Category score ≥ 3). Design, Setting, And Patients: A prospective multicenter cohort study was conducted in ICUs in three teaching hospitals across Canada. Patients (≥ 16 yr old) were included if their cardiac arrest occurred within the previous 36 hours, they had greater than or equal to 20 consecutive minutes of spontaneous circulation following resuscitation, and they had a post-resuscitation Glasgow Coma Scale of less than or equal to 8. Measurements And Main Results: Recruitment rates were calculated across sites, and patients underwent continuous regional cerebral oxygenation monitoring using near-infrared spectroscopy, as well as invasive blood pressure monitoring. Exploratory multivariable logistic regression was performed. Although it was feasible to recruit patients across multiple centers, there was variability in the recruitment rates. Physiologic data were captured in 86.2% of the total monitoring time and the median monitoring time was 47.5 hours (interquartile interval, 29.4-65.0 hr) across 59 patients. Specifically, 88% of mean arterial pressure and 96% of bilateral frontal regional cerebral oxygenation data were acquired, and 90% of cerebral oximetry index and 70% of optimal mean arterial pressure values were quantified. However, there was substantial variation in the amount of data captured among individuals. Time to return of spontaneous circulation was associated with increased odds of an unfavorable neurologic outcome. Conclusions And Relevance: We demonstrated the feasibility of recruiting and collecting high-frequency physiologic data in patients after cardiac arrest. Future investigations will need to systematically document the reasons for data attrition, as well as how these methodological complications were resolved. Due to underpowered analyses and the inability to control for potential confounds, further studies are needed to explore the association of cerebral autoregulatory capacity and individualized mean arterial pressure thresholds with neurologic outcomes. abstract_id: PUBMED:26490778 Non-invasive Monitoring of Dynamic Cerebrovascular Autoregulation Using Near Infrared Spectroscopy and the Finometer Photoplethysmograph. Background: Near infrared spectroscopy (NIRS) enables continuous monitoring of dynamic cerebrovascular autoregulation, but this methodology relies on invasive blood pressure monitoring (iABP). We evaluated the agreement between a NIRS-based autoregulation index calculated from invasive blood pressure monitoring, and an entirely non-invasively derived autoregulation index from continuous non-invasive blood pressure monitoring (nABP) using the Finometer photoplethysmograph. Methods: Autoregulation was calculated as the moving correlation coefficient between iABP and rSO2 (iTOx) or nABP and rSO2 (nTOx). The blood pressure range where autoregulation is optimal was also determined for invasive (iABPOPT) and non-invasive blood pressure measurements (nABPOPT).
Results: 102 simultaneous bilateral measurements of iTOx and nTOx were performed in 19 patients (median 2 per patient, range 1-9) with different acute pathologies (sepsis, cardiac arrest, head injury, stroke). Average iTOx was 0.01 ± 0.13 and nTOx was 0.01 ± 0.11. The correlation between iTOx and nTOx was r = 0.87, p < 0.001, 95 % agreement ± 0.12, bias = 0.005. The interhemispheric asymmetry of autoregulation was similarly assessed with iTOx and nTOx (r = 0.81, p < 0.001). Correlation between iABPOPT and nABPOPT was r = 0.47, p = 0.003, 95 % agreement ± 32.1 mmHg, bias = 5.8 mmHg. Coherence in the low frequency spectrum between iABP and nABP was 0.86 ± 0.08 and gain was 1.32 ± 0.77. Conclusions: The results suggest that dynamic cerebrovascular autoregulation can be continuously assessed entirely non-invasively using nTOx. This allows for autoregulation assessment using spontaneous blood pressure fluctuations in conditions where iABP is not routinely monitored. The nABPOPT might deviate from iABPOPT, likely because of discordance between absolute nABP and iABP readings. abstract_id: PUBMED:25828921 Hemodynamic targets during therapeutic hypothermia after cardiac arrest: A prospective observational study. Aim: In analogy with sepsis, current post-cardiac arrest (CA) guidelines recommend targeting a mean arterial pressure (MAP) above 65 mmHg and SVO2 above 70%. This is unsupported by mortality or cerebral perfusion data. The aim of this study was to explore the associations between MAP, SVO2, cerebral oxygenation and survival. Methods: Prospective, observational study during therapeutic hypothermia (24 h at 33 °C) in 82 post-CA patients monitored with near-infrared spectroscopy. Results: Forty-three patients (52%) survived in CPC 1-2 until 180 days post-CA. The mean MAP range associated with maximal survival was 76-86 mmHg (OR 2.63, 95%CI [1.01; 6.88], p = 0.04). The mean SVO2 range associated with maximal survival was 67-72% (OR 8.23, 95%CI [2.07; 32.68], p = 0.001). In two separate multivariate models, a mean MAP (OR 3.72, 95% CI [1.11; 12.50], p=0.03) and a mean SVO2 (OR 10.32, 95% CI [2.03; 52.60], p = 0.001) in the optimal range persisted as independently associated with increased survival. Based on more than 1,625,000 data points, we found a strong linear relation between SVO2 (range 40-90%) and average cerebral saturation (R(2) 0.86) and between MAP and average cerebral saturation for MAPs between 45 and 101 mmHg (R(2) 0.83). Based on our hemodynamic model, the MAP and SVO2 ranges associated with optimal cerebral oxygenation were determined to be 87-101 mmHg and 70-75%. Conclusion: We showed that a MAP range between 76-86 mmHg and SVO2 range between 67% and 72% were associated with maximal survival. Optimal cerebral saturation was achieved with a MAP between 87-101 mmHg and a SVO2 between 70% and 75%. Prospective interventional studies are needed to investigate whether forcing MAP and SVO2 in the suggested range with additional pharmacological support would improve outcome. abstract_id: PUBMED:31274498 Inadequacy of near-infrared spectroscopy cerebral oximetry monitoring for detecting neurological complication. Near-infrared spectroscopy (NIRS) cerebral oximetry is an established and standard monitoring modality for surgery under extracorporeal circulation with circulatory arrest. It helps to reduce neurological complications, but in many instances it is not only technically challenging but also difficult to interpret the NIRS values and to take corrective action based on them.
In this case study, we aimed to present the inadequacy of cerebral oximetry for detecting neurological complications. abstract_id: PUBMED:35314211 Near-infrared spectroscopy during cardiopulmonary resuscitation for pediatric cardiac arrest: A prospective, observational study. Aim: Cerebral oxygenation (rSO2) is not routinely measured during pediatric cardiopulmonary resuscitation (CPR). We aimed to determine whether higher intra-arrest rSO2 was associated with return of spontaneous circulation (ROSC) and survival to hospital discharge. Methods: Prospective, single-center observational study of cerebral oximetry using near-infrared spectroscopy (NIRS) during pediatric cardiac arrest from 2016 to 2020. Eligible patients had ≥30 s of rSO2 data recorded during CPR. We compared median rSO2 and percentage of rSO2 measurements above a priori thresholds for the entire event and the final five minutes of the CPR event between patients with and without ROSC and survival to discharge. Results: Twenty-one patients with 23 CPR events were analyzed. ROSC was achieved in 17/23 (73.9%) events and 5/21 (23.8%) patients survived to discharge. The median rSO2 was higher for events with ROSC vs. no ROSC for the overall event (62% [56%, 70%] vs. 45% [35%, 51%], p = 0.025) and for the final 5 minutes of the event (66% [55%, 72%] vs. 43% [35%, 44%], p = 0.01). Patients with ROSC had a higher percentage of measurements above 50% during the final five minutes of CPR (100% [100%, 100%] vs. 0% [0%, 29%], p = 0.01). There was no association between rSO2 and survival to discharge. Conclusions: Higher cerebral rSO2 during CPR for pediatric cardiac arrest was associated with higher rates of ROSC but not with survival to discharge. abstract_id: PUBMED:15531074 Near-infrared spectroscopy: a tool to monitor cerebral hemodynamic and metabolic changes after cardiac arrest in rats. Introduction: Cardiac arrest (CA) is associated with poor neurological outcome, and the accompanying cerebral hemodynamic and metabolic changes are poorly understood. The objective of this study was to determine the applicability of near-infrared spectroscopy (NIRS) to observe the changes in cerebral total hemoglobin (T-Hb) reflecting cerebral blood volume, oxygenation state of Hb, oxidized cytochrome oxidase (Cyto-C), and brain water content following CA. Methods: Fourteen rats were subjected to normothermic (37.5 degrees C) or hypothermic (34 degrees C) CA induced by 8 min of asphyxiation. Animals were resuscitated with ventilation, cardiopulmonary resuscitation (CPR), and epinephrine (adrenaline). Hypothermia was induced before CA. NIRS was applied to the animal head to measure T-Hb with a wavelength of 808 nm (n = 10) and oxygenated/deoxygenated Hb, Cyto-C, and brain water content with wavelengths of 620-1120 nm (n = 4). Results: There were no technical difficulties in applying NIRS to the animal, and the signals were strong and consistent. Normothermic CA caused post-resuscitation hyperemia followed by hypoperfusion determined by the level of T-Hb. Hypothermic CA blunted post-resuscitation hyperemia and resulted in more prominent post-resuscitation hypoperfusion. Both normothermic and hypothermic CA resulted in a sharp decrease in oxygenated Hb and Cyto-C, and the level of oxygenated Hb was higher in hypothermic CA after resuscitation. There was a rapid increase in brain water signals following CA. Hypothermic CA attenuated the increase in water signals seen in normothermic CA following resuscitation.
Conclusion: NIRS can be applied to monitor cerebral blood volume, oxygenation state of Hb, Cyto-C, and water content following CA in rats. abstract_id: PUBMED:37195622 Are NIRS-derived cerebral autoregulation and ABPopt values different between hemispheres in hypoxic-ischemic brain injury patients following cardiac arrest? Purpose: Near-infrared spectroscopy (NIRS) has been suggested as a non-invasive monitoring technique to set cerebral autoregulation (CA)-guided ABP targets (ABPopt) in comatose patients with hypoxic-ischemic brain injury (HIBI) following cardiac arrest. We aimed to determine whether NIRS-derived CA and ABPopt values differ between left and right-sided recordings in these patients. Methods: Bifrontal regional oxygen saturation (rSO2) was measured using INVOS or Fore-Sight devices. The Cerebral Oximetry index (COx) was determined as a CA measure. ABPopt was calculated using a published algorithm with a multi-window weighted approach. A paired Wilcoxon signed rank test and intraclass correlation coefficients (ICC) were used to compare (1) systematic differences and (2) degree of agreement between left and right-sided measurements. Results: Eleven patients were monitored. In one patient the right-sided optode malfunctioned, and in one patient no ABPopt value could be calculated. Comparison of rSO2 and COx was possible in ten patients and ABPopt in nine patients. The average recording time was 26 (IQR, 22-42) hours. The ABPopt values were not significantly different between the bifrontal recordings (80 (95% CI 76-84) and 82 (95% CI 75-84) mmHg for the left and right recordings, p = 1.0). The ICC for ABPopt was high (0.95, 0.78-0.98, p < 0.001). Similar results were obtained for rSO2 and COx. Conclusion: We found no differences between left and right-sided NIRS recordings or CA estimation in comatose and ventilated HIBI patients. This suggests that, in these patients without signs of localized pathology, unilateral recordings might be sufficient to estimate CA status or provide ABPopt targets. abstract_id: PUBMED:21705904 Cerebral blood flow and cerebrovascular autoregulation in a swine model of pediatric cardiac arrest and hypothermia. Objective: Knowledge remains limited regarding cerebral blood flow autoregulation after cardiac arrest and during postresuscitation hypothermia. We determined the relationship of cerebral blood flow to cerebral perfusion pressure in a swine model of pediatric hypoxic-asphyxic cardiac arrest during normothermia and hypothermia and tested novel measures of autoregulation derived from near-infrared spectroscopy. Design: Prospective, balanced animal study. Setting: Basic physiology laboratory at an academic institution. Subjects: Eighty-four neonatal swine. Interventions: Piglets underwent hypoxic-asphyxic cardiac arrest or sham surgery and recovered for 2 hrs with normothermia followed by 4 hrs of either moderate hypothermia or normothermia. In half of the groups, blood pressure was slowly decreased through inflation of a balloon catheter in the inferior vena cava to identify the lower limit of cerebral autoregulation at 6 hrs postresuscitation. In the remaining groups, blood pressure was gradually increased by inflation of a balloon catheter in the aorta to determine the autoregulatory response to hypertension. Measures of autoregulation obtained from standard laser-Doppler flowmetry and indices derived from near-infrared spectroscopy were compared.
Measurements And Main Results: Laser-Doppler flux was lower in postarrest animals compared to sham-operated controls during the 2-hr normothermic period after resuscitation. During the subsequent 4-hr recovery, hypothermia decreased laser-Doppler flux in both the sham surgery and postarrest groups. Autoregulation was intact during hypertension in all groups. With arterial hypotension, postarrest, hypothermic piglets had a significant decrease in the perfusion pressure lower limit of autoregulation compared to postarrest, normothermic piglets. The near-infrared spectroscopy-derived measures of autoregulation accurately detected loss of autoregulation during hypotension. Conclusions: In a pediatric model of cardiac arrest and resuscitation, delayed induction of hypothermia decreased cerebral perfusion and decreased the lower limit of autoregulation. Metrics derived from noninvasive near-infrared spectroscopy accurately identified the lower limit of autoregulation during normothermia and hypothermia in piglets resuscitated from arrest. abstract_id: PUBMED:34318236 The role of diffuse correlation spectroscopy and frequency-domain near-infrared spectroscopy in monitoring cerebral hemodynamics during hypothermic circulatory arrests. Objectives: Real-time noninvasive monitoring of cerebral blood flow (CBF) during surgery is key to reducing mortality rates associated with adult cardiac surgeries requiring hypothermic circulatory arrest (HCA). We explored a method to monitor cerebral blood flow during different brain protection techniques using diffuse correlation spectroscopy (DCS), a noninvasive optical technique which, combined with frequency-domain near-infrared spectroscopy (FDNIRS), also provides a measure of oxygen metabolism. Methods: We used DCS in combination with FDNIRS to simultaneously measure hemoglobin oxygen saturation (SO2), an index of cerebral blood flow (CBFi), and an index of cerebral metabolic rate of oxygen (CMRO2i) in 12 patients undergoing cardiac surgery with HCA. Results: Our measurements revealed that a negligible amount of blood is delivered to the cerebral cortex during HCA with retrograde cerebral perfusion, indistinguishable from HCA-only cases (median CBFi drops of 93% and 95%, respectively) with consequent similar decreases in SO2 (mean decrease of 0.6 ± 0.1% and 0.9 ± 0.2% per minute, respectively); CBFi and SO2 are mostly maintained with antegrade cerebral perfusion; the relationship of CMRO2i to temperature is given by CMRO2i = 0.052·e^(0.079T). Conclusions: FDNIRS-DCS is able to detect changes in CBFi, SO2, and CMRO2i with intervention and can become a valuable tool for optimizing cerebral protection during HCA. Answer: The observational study using near-infrared spectroscopy (NIRS) on cerebral autoregulation in post-cardiac arrest patients suggests that a 'one-size-fits-all' approach to hemodynamic targets may not be appropriate for all patients. The study found that 35% of post-cardiac arrest patients had disturbed cerebral autoregulation, and a higher proportion of these patients had pre-cardiac arrest hypertension, indicating that chronic adaptation to higher blood pressures might cause a rightward shift in autoregulation (PUBMED:25769511). Patients with preserved autoregulation had a significantly higher 180-day survival rate, and an individual optimal mean arterial pressure (MAP) could be determined for 33 out of 51 patients.
The time spent under this individual optimal MAP was negatively associated with survival, in contrast to fixed MAP targets, which were not associated with differential survival rates (PUBMED:25769511). The study's findings suggest that disturbed cerebral autoregulation within the first 24 hours after cardiac arrest is associated with worse outcomes and that a patient-tailored optimal MAP, based on an index of autoregulation, may be more beneficial than uniform MAP goals (PUBMED:25769511). This supports the notion that hemodynamic targets should be individualized rather than generalized across all post-cardiac arrest patients.
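To make the autoregulation indices discussed above concrete, the sketch below shows one simple way a Cerebral Oximetry index (COx) and an optimal MAP could be derived from synchronized MAP and rSO2 recordings. It is a minimal illustration assuming fixed-rate, artifact-free signals and plain unweighted binning; it is not the multi-window weighted algorithm cited in PUBMED:37195622, and the function names, window length, and bin width are hypothetical choices.

```python
import numpy as np

def cox_pairs(map_mmhg, rso2, window=300):
    """COx per window: correlation of MAP with rSO2 (higher COx = more impaired CA)."""
    pairs = []
    for start in range(0, len(map_mmhg) - window + 1, window):
        m = np.asarray(map_mmhg[start:start + window], dtype=float)
        s = np.asarray(rso2[start:start + window], dtype=float)
        if m.std() > 0 and s.std() > 0:  # correlation is undefined on flat segments
            pairs.append((m.mean(), np.corrcoef(m, s)[0, 1]))
    return pairs

def abp_opt(pairs, bin_width=5.0):
    """ABPopt: the MAP bin whose mean COx is lowest, i.e. most intact autoregulation."""
    bins = {}
    for mean_map, cox in pairs:
        key = bin_width * round(mean_map / bin_width)
        bins.setdefault(key, []).append(cox)
    return min(bins, key=lambda k: np.mean(bins[k]))
```

In practice, published implementations weight multiple window lengths and fit a U-shaped curve to the COx-versus-MAP profile rather than taking a raw minimum; the sketch only conveys the shape of the computation.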
Instruction: Postoperative radiographs after pinning of supracondylar humerus fractures: are they necessary? Abstracts: abstract_id: PUBMED:28441278 The Utility of Routine Postoperative Radiographs After Pinning of Pediatric Supracondylar Humerus Fractures. Background: The purpose of this study was to determine the frequency with which postoperative radiographs resulted in a change in management following closed reduction and percutaneous pinning of displaced pediatric supracondylar humerus fractures. We hypothesize that only the initial postoperative radiograph will lead to changes in management of operative supracondylar humerus fractures. Methods: A retrospective review was performed at 2 level I pediatric trauma centers. Inclusion criteria were patients below 18 years of age who sustained supracondylar humerus fractures (Gartland type II, III, IV) who were operatively treated from 2008 to 2013 with adequate radiographic follow-up. Patients with flexion type, intra-articular, transphyseal, and open fractures were excluded from the study. Routine radiographs were taken at initial follow-up (1 wk postoperatively) and at pin removal (3 to 4 wk postoperatively). Results: The final analysis included 572 patients. Initial postoperative radiographs changed treatment in 9 patients (1.6%), including revision surgeries, 2 pin adjustments, and 2 early pin removals. At the time of pin removal, 20 (3.5%) patients required further immobilization. There were no changes to the initial plan for continued nonoperative treatment at final follow-up (6 to 8 wk postoperatively). Conclusions: In this large retrospective series of patients treated with closed reduction and percutaneous pinning of displaced supracondylar humerus fractures, radiographs at 3 weeks do not reveal a need to return to the operating room or other significant pathology. These findings suggest that radiographs should be obtained within 7 to 10 days postoperatively for type III fractures and may only need to be repeated if the clinical situation warrants it, such as severe fracture pattern, persistent pain, or clinical deformity. Level Of Evidence: Level IV-case series. abstract_id: PUBMED:22955529 Postoperative radiographs after pinning of supracondylar humerus fractures: are they necessary? Background: The purpose of this study was to evaluate the necessity of early postoperative radiographs after pinning of supracondylar humerus fractures by determining both the percentage of patients who displayed change in fracture fixation and whether these changes affected their outcome. Methods: A series of 643 consecutive patients who underwent operative management of Gartland type II and III fractures at our institution between January 2002 and December 2010 were reviewed. Demographic data were obtained through chart review, including age, sex, extremity, fracture type, and mechanism. Intraoperative fluoroscopic images were compared with postoperative radiographs to identify changes in fracture alignment and pin placement. Results: A total of 643 patients (320 females, 323 males) with a mean age of 6.1 years (range, 1.1 to 16.0) were reviewed. Fifty-seven percent of fractures were classified as type II and 43% were type III. The overall complication rate was 8.8% (57/643). Pin backout or fracture translation was seen in 32 patients (4.9%) at the first postoperative visit. All of these patients sustained type III fractures. One of these patients required further operative management. 
Patients with changes in pin or fracture alignment did not demonstrate a statistically significant difference in time to first postoperative visit (P=0.23), days to pin removal (P=0.07), or average follow-up time (P=0.10). Fracture severity did not correlate with change in alignment (P=0.952). No postoperative neurological complications were observed in patients with alignment changes. Conclusions: Mild alignment changes and pin migration observed in postoperative radiographs after pinning of supracondylar humerus fractures have little effect on clinical management parameters or long-term sequelae. Radiographs can therefore be deferred until the time of pin removal provided adequate intraoperative stability was obtained. Level Of Evidence: Level IV. abstract_id: PUBMED:31368922 The Utility of the Early Postoperative Follow-up and Radiographs After Operative Treatment of Supracondylar Humerus Fractures in Children. Background: Supracondylar humerus (SCH) fractures are common elbow injuries in pediatric patients. The American Academy of Orthopedic Surgeons published guidelines for the standard of care in the treatment of displaced SCH fractures; however, no recommendations for follow-up care were made. With the recent push to eliminate unnecessary radiographs and decrease health care costs, many are questioning postoperative protocols. The purpose of our study was to evaluate the utility of the 1-week follow-up appointment after closed reduction and percutaneous pinning (CRPP) of displaced SCH fractures. Methods: A retrospective review performed at a single institution from 2014 to 2016 included patients under 14 years of age with a closed, displaced SCH fracture treated with CRPP. Exclusion criteria included patients without complete clinical or radiographic follow-up. Variables examined included time to initial follow-up, change in treatment plan after 1-week x-rays, complications, demographics, fracture type, pin number and configuration, reduction parameters, immobilization, time to pin removal, duration of casting, and clinical outcome. Results: A total of 412 patients were divided into 2 groups based on time to initial follow-up. Overall, 368 had an initial follow-up at 7 to 10 days (group 1) and 44 at 21 to 28 days (group 2). There was no difference in age, sex, fracture type, pin configuration, or number of pins between groups. Statistically significant findings included time to initial follow-up and days to pin removal (group 1 at 26.2 d vs. group 2 at 23.8 d), type of immobilization (group 1 with 5% circumferential casts and group 2 with 70%), and time to surgery (26.2 vs. 62.9 h, respectively). There was no significant difference in complication rates and only a 0.5% rate of change in management in group 1. Conclusions: Early postoperative follow-up and radiographs did not change the patient outcome and might be eliminated in children with displaced SCH fractures treated with CRPP. Given the current focus on efficiency and cost-effective care, eliminating the 1-week postoperative appointment would improve appointment availability and decrease medical cost. Level Of Evidence: Level III-Therapeutic. abstract_id: PUBMED:27270906 Routine radiographs at time of pin removal after closed reduction and percutaneous pinning for type 2 supracondylar humerus fractures do not change management: a retrospective cohort study.
Purpose: Radiographs are usually taken on day of pin removal for children treated with closed reduction and percutaneous pinning (CRPP) of type 2 supracondylar humerus fractures. The purpose of this study was to determine whether radiographs taken at time of pin removal for patients recovering uneventfully alter management. Methods: After IRB approval, billing records identified 1213 patients aged 1-10 years who underwent elbow surgery between 2007 and 2013 at our institution for a supracondylar humerus fracture. Of these patients, 389 met inclusion criteria. Clinical charts were reviewed for demographics, operative details, and clinical follow-up, focusing on clinical symptoms present at pin removal. Radiographs taken at time of pin removal and subsequent visits were assessed for healing and fracture alignment. Results: In no case was pin removal delayed based on radiographs. One hundred and nineteen (31 %) patients had radiographs taken following pin removal; in no case was loss of reduction found among these patients. No cases of neurologic or vascular injury, re-fracture, or loss of reduction occurred. Infection occurred in 12 patients (3 %). Pins were kept in place for 23.8 ± 4.4 days. Eighty-six patients (22 %) had additional intervention after pin removal (cast application in all cases). Of 389 patients, 75 (19 %) had no documented reason for extended casting, four (1 %) were extended based on physician evaluation of radiographs, and seven (2 %) were extended for other reasons. Conclusions: Elimination of radiographs at time of pin removal should be considered. If continuing to obtain radiographs at pin removal, we recommend removing pins before taking radiographs to reduce patient fear and anxiety from visualizing percutaneous pins. abstract_id: PUBMED:32501904 A Systematic Review of the Utility of Postoperative Imaging in the Management of Pediatric Supracondylar Humerus Fractures. Background: Recently published studies call into question the clinical utility of postoperative radiographs in the management of pediatric supracondylar humerus (SCH) fractures. This topic has been addressed as part of the American Academy of Orthopaedic Surgeons Appropriate Use Criteria, although recommendations regarding serial radiographs were not included as part of the discussion. The purpose of this systematic review is to summarize the recent literature regarding the utility of postoperative radiographs as part of the management of SCH fractures. Methods: A systematic review of the literature published between January 1, 2000 and December 31, 2017 was conducted using PubMed/MEDLINE and SCOPUS databases to identify studies relevant to postoperative management of SCH. Eight studies met the inclusion criteria, from which pooled estimates and an analysis of heterogeneity were calculated. Results: The pooled estimate of changes in fracture management on the basis of postoperative radiographs was 1% (0.98±0.33). Significant interstudy heterogeneity was observed with an I² statistic of 76%. Changes in fracture management included prolonged immobilization following pin removal and return to the operating room. Conclusions: There is a paucity of articles focusing on the utility or appropriate use of postoperative radiographs in changing management of SCH fractures, one of the most common upper extremity fractures in children.
There is a very low rate of change in management on the basis of imaging, and frequently authors commented that the management change could have been prompted without routine serial radiographs. The cumulative findings of these studies suggest routine postoperative radiographs after SCH fractures are infrequently associated with changes in management. Practitioners should consider postoperative protocols with the intention of identifying early postoperative alignment loss or when a complication is suspected, to prevent excessive routine radiography in the management of pediatric SCH fractures. Level Of Evidence: Level III-systematic review of level III studies. abstract_id: PUBMED:27774833 Outcome after pin fixation of supracondylar humerus fractures in children: postoperative radiographic examinations are unnecessary. Background and purpose - The quality of pin fixation of displaced supracondylar humerus fractures in children has not been assessed, and the clinical value of radiographic examinations after pin fixation is unclear. We evaluated pin configuration, quality of osteosynthesis, and outcome in 264 supracondylar fractures. The clinical significance of postoperative radiographs was analyzed. Patients and methods - 252 Gartland-III and 12 flexion-type supracondylar humerus fractures were pin-fixed in the periods 2002-2006 and 2012-2014. During 2012-2014, staff were instructed that postoperative radiographs should not be taken. Quality of reduction was assessed by measuring Baumann and lateral capitellohumeral angles (LCHA) and also by recording the crossing point of the anterior humeral line (AHL) with bony capitellum. Rotatory alignment was registered as normal or abnormal. Pin configuration and quality of osteosynthesis were evaluated. The clinical significance of postoperative radiographs was analyzed. Results - Postoperatively, Baumann angle was normal in 66% of the fractures, AHL crossed the capitellum in 84%, and no malrotation was evident in 85% of the fractures. Crossed pins were used in 89% of the cases. 2 or more pins fixed both fracture fragments in 66%. Radiographic examinations were inadequate for assessment of LCHA in 13%, of Baumann angle in 8%, of AHL in 2%, of rotation in 1%, and of pin fixation in 2% of the cases. Postoperative radiographs did not give useful information except in 1 patient who had corrective osteotomy. All 94 patients with follow-up (97%) who were treated during 2012-2014 were satisfied with the outcome. Interpretation - Despite pin fixation being deemed unsatisfactory in one-third of the cases, significant malunion was rare. Postoperative radiography did not alter management or outcome. abstract_id: PUBMED:26223213 The utility of radiographs prior to pin removal after operative treatment of supracondylar humerus fractures in children. Purpose: To determine the safety, utility, and efficacy of pin removal prior to radiographs during the postoperative care of surgically treated displaced pediatric supracondylar humerus fractures. Methods: Retrospective review of 532 children with supracondylar humerus fractures treated with closed reduction and pinning from 2007 to 2012. Group 1: children who had their splint/cast removed and radiographs prior to pin removal. Group 2: children with radiographs taken after removal.
Data recorded and analyzed included: demographic and radiographic data at the time of surgery and at final follow-up, including the number of radiographs taken prior to pin removal and if pins were ever retained after radiographs were performed on the date of intended removal. Results: There was no difference between the groups' demographics. The number of postoperative radiographs taken prior to pin removal ranged from zero to two. No statistically significant change in Baumann's (p = 0.79) or lateral humeral capitellar angles (p = 0.19) was noted between the groups. No children in group 1 (0/438) had their pins retained after radiographs were taken on the date of intended removal. Conclusion: Obtaining postoperative radiographs prior to pin removal, although commonplace, is not necessary. If fracture stability is noted intraoperatively, and there is an uneventful postoperative course, it is safe and effective to discontinue immobilization and remove pins prior to X-ray. This is safely done without change in alignment or clinical sequelae. Doing so can aid in clinic flow, may decrease child anxiety, and limit multiple cast room visits. Level Of Evidence: Level III therapeutic study. abstract_id: PUBMED:30542232 Intra-Operative Bone Stability Test. Fractures of the supracondylar humerus are the most common elbow fracture in pediatrics. Management of this injury would be aided if surgeons could reliably test fracture stability intraoperatively after pinning. A transverse supracondylar humerus fracture model was created using 3 adult cadaver upper-extremity specimens with an intact soft tissue envelope. Three 2.0 mm pins were then drilled using the lateral entry technique to create an "A" pinning. Pins were checked in anteroposterior and lateral views with the C-arm to confirm accurate placement in both planes. The pinning configuration was then tested by holding the proximal fragment steady with one hand and applying stress to the distal fragment with the other hand. The amount of movement of the distal fragment relative to the proximal fragment was recorded for each specimen. Distraction did not produce any substantial displacement of the osteotomy. The most valuable maneuvers were (in order of effectiveness): external rotation, lateral translation, posterior translation, valgus, and apex posterior. The results of this study indicate that external rotation, lateral translation, posterior translation, and valgus stresses created the most temporary deformity to the construct. A combination of these maneuvers should help the surgeon to decide if the fixation is stable. Our study demonstrates a possible technique for determining intraoperatively the stability of fixation of supracondylar humerus fractures, which could prevent the need for postoperative radiographs to assess stability. abstract_id: PUBMED:31149973 Utility of Follow-up Radiographs After Pin Removal in Supracondylar Humerus Fractures: A Retrospective Cohort Study. Introduction: Displaced supracondylar humerus fractures (SCFs) are common pediatric injuries, typically treated by closed reduction and percutaneous pinning (CRPP). Radiographs are obtained at pin removal and subsequently to evaluate fracture healing. We evaluated the utility of radiographs obtained after pin removal in pediatric SCF management. Methods: A retrospective cohort study of children aged 2 to 11 years with SCF requiring CRPP at a single institution from January 2007 to July 2017 was conducted.
Radiographs were taken at pin removal and minimum 3 weeks later. Demographic and treatment data were collected via chart review. Radiographic measures were Baumann and lateral humeral-capitellar angles, anterior humeral line alignment, and number of cortices with callus. The McNemar-Bowker test analyzed anterior humeral line alignment and cortices with callus. The paired t-test analyzed Baumann and lateral humeral-capitellar angles. Results: One hundred patients were included (47 males and 53 females, mean 5.7 years). The mean time to pin removal was 23.8 days, and the median clinical and radiographic follow-up periods were 109.0 and 52.2 days, respectively. Fracture patterns were extension type II (21%), III (73%), IV (1%), flexion (4%), and varus (1%). No patients' fracture management changed in the acute or long-term postoperative period because of findings on post-pin removal (PPR) radiographs. Fewer cortices with callus were seen at pin removal versus PPR (P < 0.001). At pin removal, no differences were found in the Baumann angle (75.8 ± 5.0 versus 74.6 ± 5.9; P = 0.053), lateral humeral-capitellar angles (30.7 ± 12.5 versus 31.6 ± 1.3; P = 0.165), or anterior humeral line alignment (P = 0.261). Discussion: No patients' fracture management was modified because of findings evident on PPR radiographs. The number of cortices with callus was the only radiographic measure to differ over time, as is anticipated with routine healing. Elbow radiographic alignment measures did not differ in the interval between radiographs. Therefore, PPR radiographs may not provide clinical utility in the absence of other clinical findings. Level Of Evidence: Level III, retrospective comparative study. abstract_id: PUBMED:35415488 Local Hematoma Block as Postoperative Analgesia in Pediatric Supracondylar Humerus Fractures. Purpose: Supracondylar humerus fracture (SHF) is the most common type of fracture in children. The aim of this study was to evaluate the efficacy of local hematoma block with 0.25% bupivacaine as postoperative pain control in patients with pediatric SHF who underwent closed reduction pin fixation. Methods: We performed an institutional review board-approved, prospective cohort study of 65 patients with SHF treated with closed reduction percutaneous pin fixation. For 6 months, all patients were randomly divided into 2 groups. The treatment group (35 patients) received an intraoperative local hematoma block using 0.25% bupivacaine whereas the control group (30 patients) did not receive a local hematoma block as postoperative pain management adjuvant. After surgery, all patients were prescribed opioid pain medication. To evaluate the efficacy of the hematoma block, postoperative morphine equivalent consumption and the Faces Pain Scale-Revised (FPS-R) survey were blindly recorded during postoperative day 1. Demographic data, surgical details, clinical neurovascular examination during the hospital stay, and complications were also evaluated. Results: Comparison of the control group with the treatment group showed similar morphine equivalent consumption and Faces Pain Scale-Revised survey results. No hematoma block-associated complications were reported. Conclusions: The results of this study do not favor the use of local hematoma block to improve pain control and decrease the need for opioid use on postoperative day 1 in pediatric SHF after patients undergo closed reduction percutaneous pin fixation.
These results can lay the foundation for future studies while suggesting novel opioid-free pain control strategies in patients with SHF. Type Of Study/level Of Evidence: Therapeutic II. Answer: The necessity of postoperative radiographs after pinning of pediatric supracondylar humerus fractures has been questioned in recent studies. A retrospective review of 643 patients who underwent operative management of Gartland type II and III fractures found that mild alignment changes and pin migration observed in postoperative radiographs had little effect on clinical management parameters or long-term sequelae, suggesting that radiographs could be deferred until the time of pin removal provided adequate intraoperative stability was obtained (PUBMED:22955529). Another study with 412 patients treated with closed reduction and percutaneous pinning (CRPP) of displaced supracondylar humerus fractures showed that early postoperative follow-up and radiographs did not change the patient outcome and might be eliminated, which would improve appointment availability and decrease medical costs (PUBMED:31368922). A systematic review of the literature also indicated that there is a very low rate of change in management based on postoperative radiographs, and often the management change could have been prompted without routine serial radiographs (PUBMED:32501904). Furthermore, a retrospective cohort study of 264 supracondylar fractures found that postoperative radiography did not alter management or outcome, despite pin fixation being deemed unsatisfactory in one-third of the cases (PUBMED:27774833). Another study involving 532 children with supracondylar humerus fractures treated with CRPP found that obtaining postoperative radiographs prior to pin removal is not necessary if fracture stability is noted intraoperatively and there is an uneventful postoperative course (PUBMED:26223213). Additionally, a retrospective cohort study of 100 patients with supracondylar humerus fractures treated with CRPP showed that no patients' fracture management changed because of findings on post-pin removal radiographs, suggesting that these radiographs may not provide clinical utility in the absence of other clinical findings (PUBMED:31149973). In conclusion, the evidence suggests that routine postoperative radiographs after pinning of pediatric supracondylar humerus fractures are not necessary and do not frequently lead to changes in management. They may be deferred or eliminated, especially in cases with adequate intraoperative stability and an uneventful postoperative course, to improve healthcare efficiency and reduce costs.
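The pooled 1% change-in-management estimate and 76% I² reported in PUBMED:32501904 can be illustrated with a small inverse-variance pooling sketch. This is a generic fixed-effect computation with a simple continuity correction, not the review's actual method, and the per-study counts below are only loosely modeled on the abstracts above for illustration.

```python
import numpy as np

def pool_change_rates(events, totals):
    """Pool per-study change-in-management proportions; return pooled % and I^2 (%)."""
    events = np.asarray(events, dtype=float)
    totals = np.asarray(totals, dtype=float)
    p = (events + 0.5) / (totals + 1.0)  # continuity correction avoids zero variance
    var = p * (1.0 - p) / totals         # binomial variance of each proportion
    w = 1.0 / var                        # inverse-variance weights
    pooled = (w * p).sum() / w.sum()
    q = (w * (p - pooled) ** 2).sum()    # Cochran's Q
    dof = len(p) - 1
    i2 = 100.0 * max(0.0, (q - dof) / q) if q > 0 else 0.0
    return 100.0 * pooled, i2

# Hypothetical per-study data: changes in management / patients imaged.
print(pool_change_rates(events=[9, 1, 4, 2], totals=[572, 643, 389, 412]))
```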
Instruction: Do general practitioners change how they use the computer during consultations with a significant psychological component? Abstracts: abstract_id: PUBMED:18036885 Do general practitioners change how they use the computer during consultations with a significant psychological component? Objectives: To describe the patterns of computer use during patient visits to family doctors and to determine whether doctors alter their pattern of computer use in consultations which have significant psychological content. Design: Observational, non-randomised cluster trial with data being collected from videotaped consultations. Setting: Three inner-city Family Practice offices involved in physician training in Belfast, Northern Ireland. Participants: Ten family doctors who declared using computers during their consultations, and consecutive consenting adult patients attending these doctors. Results: One hundred consultations were videotaped (59% patient participation rate). The average consultation time was 9min 48s, and the number of problems per consultation was 1.9. Three broad styles of computer use were defined: (1) "end users", who only used the computer at the end of a consultation to summarise the consultation, (2) "continuous users", who interacted with the computer throughout the consultation, and (3) "minimal users", who only ever used the computer at the end of the consultation mostly to issue prescriptions. Of the 100 consultations videoed, 37% were of a psychological nature. Consultations with psychological content were on average longer (11min 47s vs. 8min 39s) and the average percentage time doctors spent on the computer was about half that of non-psychological consultations (11% vs. 23%, p<0.001). Conclusion: The doctors were found to adopt one of three broad styles of computer use during their consultations. In consultations with observable significant psychological content, doctors significantly reduce the proportion of time at the computer, suggesting an ability to appropriately tailor their use of the computer during consultations. abstract_id: PUBMED:37690154 Interpreting technology: Use and non-use of doctor-patient video consultations in Danish general practice. This study uses socio-cognitive theory on technological frames to understand how and why general practitioners in Denmark use or choose not to use video consultations. Video consultations play a vital role in the digitalisation of the Danish healthcare system. Whilst political decision-makers continuously push for increased use of video consultations, uptake accounts for less than 2% of all consultations. Research is needed that explores the actual circumstances and conditions of video consultation use. Our data corpus consists of 30 semi-structured interviews conducted from August 2021 to August 2022 with 27 Danish general practitioners. Interviews were analysed following reflexive thematic analysis. Our findings show that video consultations are interpreted as 1) compromising occupational values, 2) a crisis tool, 3) the future, and 4) a tool to improve work conditions. Video consultations are differently adopted across clinics due to different interpretations of the technology and its relative advantage in specific clinical contexts. We argue that the concept of technological frames offers a useful analytic perspective for elucidating and anticipating attitudes and actions towards a technology. It increases our understanding of the uptake and rejection of video consultations.
This knowledge is valuable for clinicians and politicians working with technological innovation in general practice. abstract_id: PUBMED:32594819 Prevalence and distribution of psychological diagnoses and related frequency of consultations in Norwegian urban general practice. Objective: To investigate the prevalence and distribution of psychological diagnoses made by general practitioners (GPs) in urban general practice and the related frequency of consultations during 12 consecutive months in Norwegian general practice. Design: A cross-sectional study with data extracted from 16,845 electronic patient records in 35 urban GP practices. Setting: Six GP group practices in Groruddalen, Norway. Subjects: All patients aged 16-65 with a registered contact with a GP during 12 months in 2015. Main outcome measures: Frequency and distribution of psychological diagnoses made by GPs, and the number of patients' consultations. Results: GPs made a psychological diagnosis in 18.8% of the patients. The main diagnostic categories were depression symptoms or disorder, acute stress reaction, anxiety symptoms or disorder and sleep disorder, accounting for 67.1% of all psychological diagnoses given. The mean number of consultations for all patients was 4.09 (95% CI: 4.03, 4.14). The mean number of consultations for patients with a psychological diagnosis was 6.40 (95% CI: 6.22, 6.58) compared to 3.55 (95% CI 3.50, 3.51) (p<0.01) for patients without such a diagnosis. Seven percent of the diagnostic variation was due to differences among GPs. Conclusions: Psychological diagnoses are frequent in urban general practice, but they are covered using rather few diagnostic categories. Patients with psychological diagnoses had a significantly higher mean number of GP consultations regardless of age and sex. Implications: The knowledge of the burden of psychological health problems in general practice must be strengthened to define evidence-based approaches for detecting, diagnosing and treating mental disorders in the general practice population. Key Points: Eighteen percent of patients aged 16-65 in our study of patients in urban general practice received one or more psychological diagnoses in 12 months. Depression was the most common diagnosis, followed by acute stress reaction, anxiety and sleep disturbance. Patients with psychological diagnoses had a significantly higher mean number of consultations compared to patients without such diagnoses regardless of age and sex. abstract_id: PUBMED:36040765 The Technology Acceptance of Video Consultations for Type 2 Diabetes Care in General Practice: Cross-sectional Survey of Danish General Practitioners. Background: During the COVID-19 pandemic, video consultations became a common method of delivering care in general practice. To date, research has mostly studied acute or subacute care, thereby leaving a knowledge gap regarding the potential of using video consultations to manage chronic diseases. Objective: This study aimed to examine general practitioners' technology acceptance of video consultations for the purpose of managing type 2 diabetes in general practice. Methods: A web-based survey based on the technology acceptance model measuring 4 dimensions-perceived usefulness, perceived ease of use, attitude, and behavioral intention to use-was sent to all general practices (N=1678) in Denmark to elicit user perspectives. The data were analyzed using structural equation modeling.
Results: The survey sample comprised 425 general practitioners who were representative of the population. Structural equation modeling showed that 4 of the 5 hypotheses in the final research model were statistically significant (P<.001). Perceived ease of use had a positive influence on perceived usefulness and attitude. Attitude was positively influenced by perceived usefulness. Attitude had a positive influence on behavioral intention to use, although perceived usefulness did not. Goodness-of-fit indices showed acceptable fits for the structural equation modeling estimation. Conclusions: Perceived usefulness was the primary driver of general practitioners' positive attitude toward video consultations for type 2 diabetes care. The study suggests that to improve attitude and technology use, decision-makers should focus on improving usefulness, that is, how it can improve treatment and make it more effective and easier. abstract_id: PUBMED:32586629 Burnout and psychological distress among trainee General Practitioners. Objective: To estimate the prevalence of burnout and psychological distress among trainee General Practitioners of two training placements and its relationship with the year of residence, lifestyle and working conditions. Design: Cross-sectional study. Location: Primary Care. Participants: On-site surveys for all trainees of two training placements (n = 278), including all trainees of the sample. The response percentage is 76.9%. Main Measurements: Prevalence of burnout using the Maslach Burnout Inventory (MBI) scale and prevalence of psychological distress using the General Health Questionnaire-12 scale (GHQ-12). Also variables on lifestyles, labor organization and drug use. Outcomes: 72% of participants are women. The prevalence of burnout is 33.8% (CI 95% 27.0-40.6), being 25.6% (CI 95% 19.5-31.7) emotional exhaustion, 37.9% (CI 95% 31.0-44.7) depersonalization and 52.9% (CI 95% 45.9-59.9) low personal accomplishment. Prevalence increases significantly in all dimensions after the second year of training, except in personal accomplishment. The prevalence of emotional distress is 40.4% (CI 95% 33.5-47.3), without significant differences according to year of training. In the group of trainees with psychological distress, 50.0% also have burnout, while this percentage is 22.1% among those who do not present psychological distress. Conclusions: The prevalence of emotional distress and burnout (especially in low personal accomplishment) is high in trainee General Practitioners. The prevalence is higher than that described in previous studies for other groups of health professionals and should be taken into account to develop preventive measures. abstract_id: PUBMED:25391284 General practitioners' perceptions of and involvement in health behaviour change: can computer-tailored interventions help? Aim: To explore GPs' perceptions of their role in primary prevention, barriers experienced and willingness to accommodate an automated, computer-tailored intervention. Background: General practice is an attractive setting for primary prevention of chronic disease. Due to constraints in time and knowledge it is underutilised. Methods: Telephone interviews of 13 GPs in Brisbane, Australia, whose patients were previously involved in a lifestyle change research project. Qualitative responses were grouped into themes. Findings: GPs perceived their role in lifestyle change as 'educators', 'supporters' and 'prompters'.
Smoking and physical activity were addressed more often than alcohol and salt intake. Longer lifestyle-focussed consultations and computer-generated reminders were suggested to overcome barriers. A computer-tailored approach was appreciated due to its minimal impact on practice routine. GPs understand their role in primary prevention but need help to overcome barriers. GP initiated consultations focusing on lifestyle and prevention along with computer support systems could improve capability for prevention in general practice. abstract_id: PUBMED:36939814 General Practitioners' Experiences With Potentials and Pitfalls of Video Consultations in Norway During the COVID-19 Lockdown: Qualitative Analysis of Free-Text Survey Answers. Background: The use of video consultations (VCs) in Norwegian general practice rapidly increased during the COVID-19 pandemic. During societal lockdowns, VCs were used for nearly all types of clinical problems, as in-person consultations were kept to a minimum. Objective: This study aimed to explore general practitioners' (GPs') experiences of potentials and pitfalls associated with the use of VCs during the first pandemic lockdown. Methods: Between April 14 and May 3, 2020, all regular Norwegian GPs (N=4858) were invited to answer a web-based survey, which included open-ended questions about their experiences with the advantages and pitfalls of VCs. A total of 2558 free-text answers were provided by 657 of the 1237 GPs who participated in the survey. The material was subjected to reflexive thematic analysis. Results: Four main themes were identified. First, VCs are described as being particularly convenient, informative, and effective for consultations with previously known patients. Second, strategically planned VCs may facilitate effective tailoring of clinical trajectories that optimize clinical workflow. VCs allow for an initial overview of the problem (triage), follow-up evaluation after an in-person consultation, provision of advice and information concerning test results and discharge notes, extension of sick leaves, and delivery of other medical certificates. VCs may, in certain situations, enhance the GPs' insight in their patients' relational and socioeconomical resources and vulnerabilities, and even facilitate relationship-building with patients in need of care who might otherwise be reluctant to seek help. Third, VCs are characterized by a demarcated communication style and the "one problem approach," which may entail effectiveness in the short run. However, the web-based communication climate implies degradation of valuable nonverbal signals that are more evidently present in in-person consultations. Finally, overreliance on VCs may, in a longer perspective, undermine the establishment and maintenance of relational trust, with a negative impact on the quality of care and patient safety. Compensatory mechanisms include clarifying with the patient what the next step is, answering any questions and giving further advice on treatment if conditions do not improve or there is a need for follow-up. Participation of family members can also be helpful to improve reciprocal understanding and safety. Conclusions: The findings have relevance for future implementation of VCs and deserve further exploration under less stressful circumstances. abstract_id: PUBMED:6512748 Consultation use of a computer by general practitioners. 
Reporting on an experimental field study of computer use by general practitioners during consultations, this paper focuses particularly on the experiences of the doctors in their efforts to communicate concurrently with a patient and a computer. Through the analysis of logs of computer use, video recordings of consultations and interviews with doctors, the authors report on the uptake of the computer facilities, the impact of such use on information handling during the consultation, the doctors' views about using the system, and finally the relationship between doctors' 'natural' consulting styles and their uptake and views of the system. abstract_id: PUBMED:32186815 General practitioners' use of internal medicine e-consultations. Objective: Consultations between general practitioners (GPs) and hospital specialists are traditionally conducted by phone. A conversation by phone where a GP seeks advice regarding a patient, who is unknown to the specialist, can have limitations. An e-consultation service for internal medicine allows GPs to consult an internal medicine specialist for clinical queries. In this study we evaluated GP use of internal medicine e-consultations. Design: Observational study. Method: Eligible patients were all those for whom the GP used an internal medicine e-consultation, provided by Zuyderland Medical Centre, location Sittard-Geleen, the Netherlands, in the period 2017-2018. Data on patient demographics, content of the GP clinical query, and any preceding diagnostics were collated. Furthermore, the internal medicine specialist response was categorised, including the presence of any patient referral advice. Results: 136 (88.6%) of all 154 GPs in the region used an e-consultation at least once, and the clinical queries covered all areas of internal medicine. A total of 1047 eligible patients were discussed by GPs in an e-consultation; 19.6% of them attended the internal medicine out-patient clinic within three months of the index episode. The use of e-consultations by GPs led to a 70% reduction in referrals when compared to the situation where e-consultations were not available. Conclusion: GPs used internal medicine e-consultations for a wide range of questions covering all areas of internal medicine. In more than 80% of cases, the GP continued to manage the patient, resulting in a reduction of the actual number of patient referrals to the internal medicine out-patient clinic. abstract_id: PUBMED:10024704 Computer use by general practitioners in Scotland. Background: Despite the widespread adoption by general practitioners (GPs) of desktop computers, there has been very little evaluation of the way in which the computer is actually used during consultations and the way in which it affects patient satisfaction. Aim: To ascertain the extent to which the computer is used in the consultation and to investigate the possible relationship between computer use and patient satisfaction. Method: Six GPs completed a short questionnaire about the extent to which they use the computer during surgeries. Eighty-four consultations from the surgeries of these GPs were video recorded. Patient satisfaction data on these 84 patients were collected at the time of the surgery using the previously validated Consultation Satisfaction Questionnaire. Results: All six GPs stated that they usually used the computer during consultations. However, video observation revealed that the computer was used in just 51% of surgeries.
The proportion of time for which the computer was used varied from 0.03 to 0.4, with a mean value of 0.12. The commonest function for which the computer was used was prescribing. The consultations in which the computer was used (CU) were on average 148 seconds longer than the non-computerized consultations (NCU). There was no difference in patient satisfaction between the two groups. Conclusion: Despite this group of GPs having a self-declared interest in the use of computers, the extent to which the computer was used was much lower than expected from the GPs' self-reported use. This may be partly explained by the fact that using the computer takes up valuable time within the consultation and does not appear to contribute to patient satisfaction. If desktop computers are to be used to their full potential in general practice, more work is required to evaluate their impact on the consultation process itself. Answer: Yes, general practitioners do change how they use the computer during consultations with a significant psychological component. A study described in PUBMED:18036885 observed that in consultations with observable significant psychological content, doctors significantly reduced the proportion of time spent on the computer. The average percentage of time doctors spent on the computer during psychological consultations was about half that of non-psychological consultations (11% vs. 23%, p<0.001). This suggests that doctors are able to appropriately tailor their use of the computer during consultations depending on the nature of the consultation, potentially to maintain a more personal and engaged interaction with the patient when discussing psychological issues.
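As a toy illustration of the kind of comparison behind the 11% versus 23% figure above, the following sketch contrasts the fraction of consultation time spent at the computer in psychological versus other consultations. The per-consultation values are invented for the example, and the choice of a Mann-Whitney U test is an assumption; PUBMED:18036885 does not state which test produced its p-value.

```python
from scipy import stats

# Hypothetical per-consultation fractions of time spent on the computer.
psychological = [0.08, 0.12, 0.10, 0.15, 0.09, 0.11, 0.13]
other = [0.20, 0.25, 0.22, 0.18, 0.27, 0.24, 0.21]

u_stat, p_value = stats.mannwhitneyu(psychological, other, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")
```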
Instruction: Does executive function explain the IQ-mortality association? Abstracts: abstract_id: PUBMED:19073749 Does executive function explain the IQ-mortality association? Evidence from the Canadian study on health and aging. Objective: To assess the robustness of the association between intelligence quotient (IQ) and mortality in older adults and to examine whether or not the association can be explained by more specific cognitive processes, including individual differences in executive functioning. Methods: We examined the associations among Full Scale IQ, individual IQ subtest scores, and 10-year mortality among older community-dwelling, adult participants in the Canadian Study of Health and Aging, who were verified as disease and cognitive-impairment free at baseline via comprehensive medical and neurological evaluation (n = 516). Survival analyses, including Cox proportional hazards regression models, were used to examine mortality risk as a function of Full Scale IQ and its specific subcomponents. Results: An inverse association was found between IQ and mortality, but this did not survive adjustment for demographics and education. The association between IQ and mortality seemed to be predominantly accounted for by performance on one specific IQ subtest that taps executive processes (i.e., Digit Symbol (DS)). Performance on this subtest uniquely and robustly predicted mortality in both unadjusted and adjusted models, such that a 1-standard deviation difference in performance was associated with a 28% change in risk of mortality over the 10-year follow-up interval in adjusted models. Conclusions: The association between IQ and mortality in older adults may be predominantly attributable to individual differences in DS performance. abstract_id: PUBMED:25151423 Executive function in probands with autism with average IQ and their unaffected first-degree relatives. Objective: This study aimed to characterize executive function (EF) in pedigrees of children with autism spectrum disorder (ASD) and average IQ. The authors examined the hypothesis that deficits in EF relate to lower levels of adaptive functioning, and they assessed evidence for a cognitive extended phenotype in unaffected relatives in a large, well-characterized sample. Method: Proband EF was assessed by parent-report questionnaires (Behavior Rating Inventory of Executive Functioning [BRIEF], n = 109) and child neuropsychological tests (Delis-Kaplan Executive Functioning System [D-KEFS], n = 35). EF also was examined in parents (D-KEFS, n = 335) and unaffected siblings (BRIEF, n = 114; D-KEFS, n = 57). Adaptive functioning was assessed by the Vineland Adaptive Behavior Scales-II (n = 155). All data were obtained from the Autism Consortium Clinical Genetics Database. Results: Individuals with ASD showed important EF weaknesses. Multiple regression analyses showed that parent-reported EF deficits were related to profound decreases in adaptive functioning even after controlling for age, IQ, and severity of ASD symptoms. Parent-reported EF also was related to adaptive skills in preschoolers. First-degree unaffected relatives did not demonstrate difficulties with EF compared with normative data. Conclusion: In this study, EF impairments do not appear to relate to broad familial risk factors for ASD but may be associated with factors relevant to the expression of ASD in probands.
Results support the benefits of EF assessment as a way to identify potential therapeutic targets that could lead to improved adaptive behavior in children with ASD and average IQ. abstract_id: PUBMED:31026201 More Than IQ: Executive Function Explains Adaptive Behavior Above and Beyond Nonverbal IQ in Youth With Autism and Lower IQ. Adaptive behavior is a critical metric for measuring outcomes in those with autism spectrum disorder (ASD). Executive function skills predict adaptive behavior in youth with ASD with average or higher IQ; however, no study has examined this relationship in ASD with lower IQ (IQ ≤ 75). The current study evaluated whether executive function predicted adaptive behavior in school-age youth with ASD with lower IQ, above and beyond nonverbal IQ. We examined adaptive behavior and executive function through informant report on 100 youth with ASD with lower IQ. Executive function skills explained variance in adaptive social and communication domains, beyond nonverbal IQ; monitoring skills played a significant role. This research suggests that malleable skills like executive function may contribute to functional outcomes in this population. abstract_id: PUBMED:36749962 The Benefits of Breastfeeding on Child Intelligence, Behavior, and Executive Function: A Review of Recent Evidence. Background: The effects of breastfeeding on neurodevelopmental outcomes are unclear. Inconsistent findings have been reported and confounding factors make interpretation of studies difficult. The World Health Organization published a systematic review on breastfeeding and intelligence in 2013, demonstrating a positive association with improved performance on intelligence tests. The objective of this review is to explore published literature since 2013 to examine the association between breastfeeding, cognition, executive function, and behavior. Methods: Duplicate searches were carried out using Web of Science and OVID for publications between January 2012 and March 2022. Non-English articles and those not correcting for maternal IQ or home environment were excluded. Results: Twenty-three studies were included, examining the effects of breastfeeding on cognition (21), executive function (3), and behavior (6). Most studies showed a modest dose-dependent increase in cognitive scores in children who were breastfed, test score differences ranging from 0.19 to 0.96 points per month of breastfeeding comparing any breastfeeding, predominant and exclusive breastfeeding. Four out of six studies showed a positive correlation between breastfeeding and behavior. One out of three studies assessing breastfeeding and executive function showed a positive dose-dependent correlation. Discussion: Recent evidence demonstrates that breastfeeding has a small positive effect on IQ in later childhood. Evidence suggesting that breastfeeding is a protective factor in developing conduct disorders and achieving higher executive function is limited. Further research is required. Limitations include potential confounders and recall bias of breastfeeding. abstract_id: PUBMED:31663399 Dopaminergic gene analysis indicates influence of inattention but not IQ in executive dysfunction of Indian ADHD probands. Organizational inefficiency and inattention are speculated to be the reason for executive deficit (ED) of ADHD probands. Even with average IQ, probands often perform poorly due to higher inattention. Pharmacotherapy, cognitive behavioural therapy, and counselling provide only symptomatic relief. 
Several candidate genes showed involvement with ADHD; the most consistent are dopamine receptor 4 (DRD4) and solute carrier family 6 member 3 (SLC6A3). We analyzed association of rarely investigated DRD4 and SLC6A3 variants with ADHD core traits in Indo-Caucasoid probands. ED, inattention, organizational efficiency, and IQ were measured by Barkley Deficit in Executive Functioning-Child & Adolescent scale, DSM-IV-TR, Conners' Parent Rating Scale-revised, and WISC respectively. Target sites were analyzed by PCR, RFLP, and/or Sanger sequencing of genomic DNA. DRD4 variants mostly affected inattention while SLC6A3 variants showed association with IQ. Few DRD4 and SLC6A3 variants showed dichotomous association with IQ and inattention. DRD4 Exon3 VNTR >4R showed negative impact on all traits excepting IQ. Inattention showed correlation with attention span, organizational efficiency, and ED, while IQ failed to do so. We infer that IQ and attention could be differentially regulated by dopaminergic gene variants affecting functional efficiency in ADHD and the two traits should be considered together for providing better rehabilitation. abstract_id: PUBMED:24853458 Association of COMT and PRODH gene variants with intelligence quotient (IQ) and executive functions in 22q11.2DS subjects. The 22q11.2 deletion syndrome (22q11.2DS) carries the highest genetic risk factor for the development of schizophrenia. We investigated the association of genetic variants in two schizophrenia candidate genes with executive function (EF) and IQ in 22q11.2DS individuals. Ninety-two individuals with 22q11.2 deletion were studied for the genetic association between COMT and PRODH variants and EF and IQ. Subjects were divided into children (under 12 years old), adolescents (between 12 and 18 years old) and adults (older than 18 years), and genotyped for the COMT Val158Met (rs4680) and PRODH Arg185Trp (rs4819756) polymorphisms. The participants underwent psychiatric evaluation and EF assessment. Our main finding is a significant influence of the COMT Val158Met polymorphism on both IQ and EF performance. Specifically, 22q11.2DS subjects with Met allele displayed higher IQ scores in all age groups compared to Val carriers, reaching significance in both adolescents and adults. The Met allele carriers performed better than Val carriers in EF tasks, being statistically significant in the adult group. PRODH Arg185Trp variant did not affect IQ or EF in our 22q11.2DS cohort. In conclusion, functional COMT variant, but not PRODH, affects IQ and EF in 22q11.2DS subjects during neurodevelopment with a maximal effect at adulthood. Future studies should monitor the cognitive performance of the same individuals from childhood to old age. abstract_id: PUBMED:25359760 High IQ May "Mask" the Diagnosis of ADHD by Compensating for Deficits in Executive Functions in Treatment-Naïve Adults With ADHD. Objective: To evaluate and compare the performance of adults with ADHD with high and standard IQ in executive functions (EF) tasks. Method: We investigated the neuropsychological performance of 51 adults with ADHD, compared with 33 healthy controls (HC) while performing a wide battery of neuropsychological tests that measure executive functioning. Adults with clinical diagnosis of ADHD were divided into two groups according to their IQ level (IQ ≥ 110-ADHD group with more elevated IQ, and IQ < 110-ADHD group with standard IQ).
Results: The ADHD group with standard IQ presented worse executive functioning compared with the HC group in the following measures: Stroop 2 (p = .000) and 3 (p = .000), Trail Making Test (TMT) B (p = .005), Wisconsin Card-Sorting Test (WCST)-perseverative errors (p = .022) and failures to maintain set (p = .020), Continuous Performance Test (CPT)-omission errors (p = .005) and commission errors (p = .000), and Frontal Assessment Battery (FAB)-conceptualization (p = .016). The ADHD group with more elevated IQ presented only impairments in the CPT-commission errors (p = .019) when compared with the control group. Conclusion: Adults with ADHD and more elevated IQ show less evidence of executive functioning deficits compared with those with ADHD and standard IQ, suggesting that a higher degree of intellectual efficiency may compensate for deficits in executive functions, leading to problems in establishing a precise clinical diagnosis. abstract_id: PUBMED:24479144 Executive function and mortality in homebound elderly adults. Objectives: To determine the association between executive function and mortality in homebound elderly adults. Design: Longitudinal study. Setting: Four home care agencies in the Boston area. Participants: Homebound adults aged 60 and older with 8-year follow-up for mortality (N=1,172). Measurements: Cognitive domains including executive, memory, and language functions were evaluated at baseline. Executive function was measured using the Trail-Making Test Part B (TMT B), and subjects were divided into four subgroups from lowest to highest TMT B score. The second cross-group analyses were used to compare those who were alive and those who had died. Multivariate logistic regression analysis was used to determine whether there was an association between TMT B scores and mortality. Results: At baseline, 436 (37.2%) homebound elderly adults had the highest TMT B scores (≥300), which indicated the slowest performance. At 8-year follow-up, 381 (32.5%) participants had died. Participants with the highest TMT B scores were more than twice as likely to have died as those with the lowest scores (0–99) (odds ratio=2.39, 95% confidence interval=1.27–4.52, P=.003) after adjusting for confounders including medical comorbidities related to death. The other cognitive domains, including memory and language, were not associated with mortality in the same model. Conclusion: Many homebound elderly adults have multiple medical conditions, and executive function may be critical in their ability to manage their medical conditions and may affect the outcome of death. abstract_id: PUBMED:29874102 Executive and Reward-Related Function in Pediatric Obesity: A Meta-Analysis. Objective: This study examined the effect of pediatric obesity on executive function and reward-related decision-making, cognitive processes that are relevant to obesogenic behaviors, and evaluated their association with sample (e.g., age, gender, intelligence, and socioeconomic status, SES) and study/task (e.g., categorical/continuous variable, food stimuli) characteristics. Methods: A random-effects meta-analysis was conducted using Hedges' g effect sizes of published studies from 1960 to 2016, limited to children younger than the age of 21 years without medical comorbidities. Analysis included estimation of heterogeneity (τ²), publication bias (funnel-plot symmetry and fail-safe N), and sensitivity analyses for sample and study/task characteristics.
Results: Across 68 studies with 70 samples, obesity was associated with worse functioning overall (-0.24; 95% CI: -0.30 to -0.19; p < 0.001) and for each component process (attention, switching, inhibition, interference, working memory, reward, delay of gratification: -0.19 to -0.38; p's < 0.017), except trait impulsivity (-0.06; 95% CI: -0.18 to 0.07). Deficits increased with age and female composition of the sample for inhibition (p = 0.002). No other characteristics moderated the effect of obesity. Conclusions: Small-to-moderate negative associations with obesity were observed for executive and reward-related performance, but not for reported impulsivity, in studies with children younger than the age of 21 years. These results were not moderated by IQ, SES, and study/task characteristics. Age and gender moderated the association with inhibition, with a larger obesity-related deficit in older and predominantly female samples. These results suggest cognitive and demographic intervention targets for prevention and mitigation of obesogenic behavior. abstract_id: PUBMED:24568837 Variability in phenylalanine control predicts IQ and executive abilities in children with phenylketonuria. A number of studies have revealed significant relationships between cognitive performance and average phenylalanine (Phe) levels in children with phenylketonuria (PKU), but only a few studies have been conducted to examine the relationships between cognitive performance and variability (fluctuations) in Phe levels. In the current study, we examined a variety of indices of Phe control to determine which index best predicted IQ and executive abilities in 47 school-age children with early- and continuously-treated PKU. Indices of Phe control were mean Phe, the index of dietary control, change in Phe with age, and several indices of variability in Phe (standard deviation, standard error of estimate, and percentage of spikes). These indices were computed over the lifetime and during 3 developmental epochs (<5, 5.0-9.9, and ≥10 years of age). Results indicated that variability in Phe was generally a stronger predictor of cognitive performance than other indices of Phe control. In addition, executive performance was better predicted by variability in Phe during older than younger developmental epochs. These results indicate that variability in Phe should be carefully controlled to maximize cognitive outcomes and that Phe control should not be liberalized as children with PKU age. Answer: Yes, executive function appears to play a significant role in explaining the IQ-mortality association. The study conducted on older community-dwelling adults in the Canadian Study of Health and Aging found that while there was an inverse association between IQ and mortality, this association did not hold after adjusting for demographics and education. Instead, performance on a specific IQ subtest that taps executive processes, namely the Digit Symbol (DS) subtest, uniquely and robustly predicted mortality. A 1-standard deviation difference in DS performance was associated with a 28% change in the risk of mortality over a 10-year follow-up interval in adjusted models, suggesting that executive function, as measured by the DS subtest, may be a critical factor in the IQ-mortality link (PUBMED:19073749). This finding is consistent with other research indicating the importance of executive function in various aspects of life outcomes.
For instance, executive function deficits in individuals with autism spectrum disorder (ASD) and average IQ are related to profound decreases in adaptive functioning (PUBMED:25151423), and executive function skills explain adaptive behavior above and beyond nonverbal IQ in youth with ASD and lower IQ (PUBMED:31026201). Moreover, executive function has been found to be critical for homebound elderly adults in managing their medical conditions, which may affect mortality outcomes (PUBMED:24479144). In summary, executive function, which encompasses a range of cognitive processes including planning, working memory, attention, and problem-solving, seems to be a significant factor in explaining the relationship between IQ and mortality, potentially more so than IQ itself when demographic and educational factors are accounted for.
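Note on the meta-analytic model: the pediatric obesity meta-analysis above (PUBMED:29874102) pools Hedges' g effect sizes under a random-effects model with heterogeneity τ². As a notational sketch only (the small-sample-corrected g and inverse-variance random-effects weighting below are the standard forms, not details confirmed by the abstract):

g_i = \left(1 - \frac{3}{4(n_{1i} + n_{2i} - 2) - 1}\right) \frac{\bar{X}_{1i} - \bar{X}_{2i}}{s_{p,i}}, \qquad s_{p,i} = \sqrt{\frac{(n_{1i} - 1) s_{1i}^2 + (n_{2i} - 1) s_{2i}^2}{n_{1i} + n_{2i} - 2}}

\hat{\mu} = \frac{\sum_i w_i g_i}{\sum_i w_i}, \qquad w_i = \frac{1}{v_i + \hat{\tau}^2}

where v_i is each study's sampling variance and \hat{\tau}^2 estimates the between-study heterogeneity reported as τ² above. On this scale, the pooled estimate of -0.24 reads as roughly a quarter of a standard deviation worse executive and reward-related performance in the obesity samples.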
Instruction: Are we overpathologising motherhood? Abstracts: abstract_id: PUBMED:19628282 Are we overpathologising motherhood? Background: The rate of psychiatric disorders in motherhood is often estimated on the percentage of women scoring high on validated self-report mood measures such as the Edinburgh Depression Scale. Screening for possible current or likely future distress also uses self-report mood measures, as well as additional psychosocial questions. Methods: This paper critically questions whether such prevalence rates, and percentage of women with high risk status following screening, are being overestimated. The properties of the Edinburgh Scale are examined, along with the validity of diagnostic criteria. In addition a consideration as to the percentage of women classified as 'at-risk' is considered. Results: The properties of the Edinburgh Scale show that around 50% of women scoring high are not in fact depressed. Revised estimates of prevalence rates are therefore given that take the properties of the scale into account which are more conservative than current estimates. Repeat testing of the scale after just two weeks to help differentiate transient from enduring distress will also lower the possibility of overpathologising motherhood, as will the use of correct cut-off scores. The DSM IV diagnostic criteria for depression are also questioned in relation to perinatal women and men. Finally, classifying women to be 'at-risk' based upon the presence of a single risk factor is questionable given that the majority of women with risks do not become depressed, and also the rate of women reported to have at least one risk (up to 88%) is so high as to negate the usefulness of this concept. Conclusions: Current estimates of the prevalence of perinatal distress, and of women with risks, are an overestimation of the true rates. The clinical practice of using the presence of a single risk factor, or a single high score on a self-report mood scale, to form part of the assessment to determine whether or not to actively intervene may also overpathologise the situation. A more thorough understanding of these issues will improve our assessment procedures so that resources can be appropriately targeted to those women, and their families, who really need specialist mental health intervention. abstract_id: PUBMED:25641993 Motherhood after the age of 35 in Poland. Postponing motherhood is a widespread phenomenon across developed countries, however only few studies look into very late motherhood in post-socialist countries using individual level data. In this study, I look at the context of the first childbirth in Poland in the midst of the political transformation of 1989. Employing sequence analysis I reconstructed life trajectories of women who experienced the transition to adulthood during the late 1980's and the early 1990's and have just completed their fertility histories. Individual data from the 2011 GGS-PL and the 2011 FAMWELL Survey were used. Comparing paths of mothers' lives, I searched for differences in educational, professional and conjugal careers between women who gave birth before the age of 30 and after the age of 35. The results show how various life careers crisscross over the life course leading women to late motherhood. abstract_id: PUBMED:35433925 The Motherhood Penalty of Immigrants in France: Comparing the Motherhood Wage Penalty of Immigrants From Europe, the Maghreb, and Sub-Sahara With Native-Born French Women. 
To date, relatively few studies analyzed the motherhood penalty as experienced by immigrant women. The principal objective of this research is to establish whether the negative effect of motherhood on wages is higher for immigrants than it is for the native population; and how this effect may vary across different immigrant regions of origin. Using data from the Enquête Revenus Fiscaux et Sociaux from 2009 to 2012 (INSEE, 2009-2012), a series of linear regression models were calculated to examine whether the effect of motherhood on wages differs for immigrant women and native women; and to what extent this effect varies across different immigrant origin groups. Specifically, this study explores the effect of motherhood on immigrant labor market integration in France from three regions of origin, as compared to native French women: immigrants from sub-Saharan Africa, the Maghreb (Algeria, Tunisia, and Morocco), and from European countries (each and every region of origin is considered separately in comparison to native French women). The results revealed substantial differences in the motherhood penalty between the different regions of origin and assert the existence of an especially pronounced motherhood penalty for mothers from the Maghreb. Given the gap in the research with regards to the cost of motherhood for immigrants in the labor market of the host country, this research sheds light on specific mechanisms influencing the integration patterns of immigrant women. Moreover, by choosing France, which is one of the main immigration destinations in Europe, and a country where the motherhood penalty for the native population is almost non-existent, this study provides a new perspective on the intersection of motherhood, immigration, and region of origin in the immigrants' labor-market integration process. abstract_id: PUBMED:35980020 ON GESTATION AND MOTHERHOOD. In English law, legal motherhood is allocated to the person who gestated. However, we argue that gestation-legally denoted as the "natural" source of parenting obligations-is often constructed as mothering, rather than the precursor to it. This means that women and pregnant people are treated as mothers prior to birth in legal and medical contexts. Since legal motherhood is an important status, defining the role an individual plays in a child's life, the conflation of gestation and motherhood does not reflect that, legally, a fetus does not have personhood. This blurring between gestation and motherhood is metaphysically incoherent, as a fetus is not an entity that can be parented. This conflation poses a real harm to pregnant people's autonomy, specifically those who do not intend to parent or who do not identify as women. More broadly, the medico-legal conflation of gestation and mothering is autonomy-limiting for all pregnant people as, resultantly, they may be coerced into obstetric intervention through legal processes. We argue for a better recognition of the differences between gestation and mothering, to promote autonomy and reflect the very different ways families may be formed. abstract_id: PUBMED:33340880 The journey to solo motherhood - An explorative study. Objectives: In high-income countries, the number of women who choose to become single mothers is rising. This trend is also reflected in Denmark. The aim of this study was to explore the decision- making process of becoming a solo mother. Study Design: This study was designed as an explorative, qualitative study. 
Semi-structured interviews were conducted face to face or by telephone, to explore women's thoughts and experiences of becoming solo mothers. Systematic text condensation and interpretation were used to analyze the interview material. Results: Twenty solo mothers-to-be participated in the study and 38 interviews were conducted, before and after pregnancy. Four themes related to the journey of solo motherhood emerged from the data analysis: the dream of motherhood in a nuclear family (a shattered dream); achieving socio-emotional acceptance; approaching motherhood with a comprehensive plan; and the fatherless child. Conclusion: The nuclear family was considered the ideal family among our participants. The decision to become a solo mother was characterized by a long process of reorganizing ideas about a family. In the effort to feel more secure about being a solo parent, to control the process and reduce vulnerability, most women drew up a detailed motherhood plan, even before embarking on the project. abstract_id: PUBMED:31406896 Adolescent mothers' experiences of the transition to motherhood: An integrative review. A woman experiences heightened vulnerability and faces tremendous challenges when transitioning to motherhood. This is exacerbated for young mothers and studies have shown that adolescent mothers experience an increased burden of responsibility during the transition to motherhood. Recent research addressing the experiences of adolescent mothers has increased. However, the current literature on this topic is still fragmented. The aim of this study was to conduct an integrative review of the literature to explore adolescent mothers' experiences of transition to motherhood and identify associated factors. The literature was searched using electronic databases: Medline, Cumulative Index to Nursing and Allied Health Literature (CINAHL), ProQuest, Scopus and PubMed. Relevant articles published in English from February 2005 to 2018 were included. Eighteen articles were included in the analysis. Based on this review, factors influencing a successful transition to motherhood for adolescents included physical problems related to birth and breastfeeding, psychological well-being, ability to care for their baby, social support, education and economic strain and the provision of healthcare. The literature indicated a relationship between social supports and the development of positive maternal identity in the transition period for adolescent motherhood. Future healthcare interventions for adolescent mothers during the transitional period should aim to provide social support, increase the ability of adolescent mothers to manage the physical and psychological challenges of young motherhood, and enhance new mothers' knowledge about caring for babies. abstract_id: PUBMED:30581323 Motherhood Penalties and Living Arrangements in China. Past research on the "motherhood wage penalty" has all been based on data from nuclear families, leaving open the possibility that the motherhood wage penalty may be lower or even absent in multi-generational families. In this paper, the wage gap between mothers and non-mothers is examined in both nuclear and multi-generational families in the context of contemporary China, which has a long tradition of patriarchal families. Using 1993-2006 China Health and Nutrition Survey data, the magnitude and variation of the motherhood penalty is explored with fixed effects models among 1,058 women.
It is found that each additional child lowers hourly wages by about 12 percent. In addition, the results show that the motherhood penalty is largest for women living with husband's parents, smaller for women not living with parents, and nil for women living with their own parents. abstract_id: PUBMED:38009748 "It was not an accident": Women's experiences of renewing motherhood at 40. In recent decades, there has been an increase in motherhood at an advanced age that has raised several medical and social concerns. We conducted a qualitative interview study, guided by Interpretative Phenomenological Analysis, to focus on the motivations and experiences of Belgian women who 'renewed' their motherhood later in life, meaning they had one or several children and then (at least 10 years later) had another child at the age of 40 or older. We focused on ten women's experiences of motherhood later in life, as well as on the way they managed social norms and expectations regarding family building and the appropriate life course. We identified two main themes. The first theme describes the participants' encounters with social norms that challenged their decision to reproduce and parent later in life, and how they managed and anticipated criticism, surprise, disbelief and incomprehension about their renewed motherhood in various ways. The second theme shows how these women talked about taking responsibility as a (renewed) mother of advanced age. For them, responsible motherhood involved making thoughtful reproductive choices, attending to the range of needs of their children, and making extra efforts to safeguard the social and emotional wellbeing of their youngest children, thereby seeking to reduce potential harm resulting from these reproductive choices. This study provides insight into these women's self-conception and their interactions with prejudiced social views of motherhood and family building. abstract_id: PUBMED:28056404 Becoming an 'Amai': Meanings and experiences of motherhood amongst Zimbabwean women living in Melbourne, Australia. Background: little is known about the meanings and experiences of motherhood among Zimbabwean migrant women. This paper discusses the meanings and experiences of motherhood from the perspectives of Zimbabwean migrant women living in Melbourne, Australia. Methods: qualitative methods (in-depth interviewing, photo elicitation and drawing) were conducted with 15 Zimbabwean women who had children in Zimbabwe and in Australia. Data were analysed using thematic analysis method. Findings: Zimbabwean women defined motherhood in varied ways. Common to all women was that becoming a mother had a significant meaning. Motherhood came with a sense of responsibility for children which resulted from their compromise and sacrifice. The dedication was exhibited by participants who demonstrated commitment to motherhood when striving to be a good mother. While motherhood provided pleasure and joy, some women found the role of motherhood burdensome in their new homeland. Due to cultural expectations of motherhood, women kept their difficulties silent for fear of being judged a 'bad mother'. The unfamiliarity with the health and social care systems in Australia presented challenges to these women. Often, they were treated without respect and felt discriminated against. Conclusions: our findings reveal the paradox of motherhood. Although motherhood can be burdensome, there are positive changes brought about by the process of motherhood. 
Due to a lack of knowledge about the health and social care system and the negative experiences with health care in Australia, the women felt overwhelmed about becoming a mother in Australia. Implications: healthcare providers, including midwives, need to understand how migrant women perceive and experience motherhood and their mothering role as this will help to improve the health and social care for these women and their children. Findings from this study provide a basis for further investigation into the formation and strengthening of support networks for Zimbabwean mothers in particular, and to other migrant women in general. abstract_id: PUBMED:29645167 Midwives and the achievement of safer motherhood. The root problem of unsafe motherhood in developing countries is lack of access to a competent care-giver throughout pregnancy, childbirth and the postpartum period. These resources, trained midwives, with or without prior nursing training, should be recruited from the communities they are to serve. Community midwives must be involved in any realistic strategic plan to provide this needed midwifery care. Answer: The question of whether we are overpathologizing motherhood is addressed in the abstract from PUBMED:19628282. The paper critically examines the prevalence rates of psychiatric disorders in motherhood, which are often estimated based on the percentage of women scoring high on self-report mood measures such as the Edinburgh Depression Scale. The authors argue that these prevalence rates and the percentage of women classified as 'at-risk' following screening may be overestimated. They point out that about 50% of women scoring high on the Edinburgh Scale are not actually depressed, suggesting that current estimates of perinatal distress are an overestimation of the true rates. The paper also questions the validity of using the presence of a single risk factor, or a single high score on a self-report mood scale, as part of the assessment to determine whether to intervene, which may lead to overpathologizing motherhood. The authors advocate for a more thorough understanding of these issues to improve assessment procedures and ensure that resources are appropriately targeted to those who truly need specialist mental health intervention. In summary, the provided abstract suggests that there may be a tendency to overpathologize motherhood by relying too heavily on self-report mood measures and the presence of single risk factors, which can lead to overestimations of perinatal distress and the number of women at risk. This overpathologization could result in unnecessary interventions and misallocation of mental health resources.
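Note on the fixed-effects estimate: the China study above (PUBMED:30581323) reports that each additional child lowers hourly wages by about 12 percent, based on fixed effects models. A minimal sketch of that specification, assuming the conventional woman fixed-effects log-wage form (the covariate vector X_{it} is an assumption, not spelled out in the abstract):

\ln(w_{it}) = \beta \, children_{it} + \gamma' X_{it} + \alpha_i + \varepsilon_{it}

where \alpha_i is a woman-specific fixed effect absorbing time-invariant characteristics, so \beta is identified from wage changes around births within the same woman. Under this form, a penalty of about 12 percent per child corresponds to \beta \approx -0.12 to -0.13, since 100(e^{\beta} - 1) \approx -12.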
Instruction: Can pre-placement health assessments predict subsequent sickness absence? Abstracts: abstract_id: PUBMED:18346955 Can pre-placement health assessments predict subsequent sickness absence? Background: Sickness absence is a growing economic problem, due largely to the financial losses it incurs. The ability to identify employees likely to take greater than average sickness absence may provide managers with useful information at the pre-placement stage. Aim: To confirm whether specific risk factors identified at the pre-placement health assessment could predict subsequent sickness absence. Methods: A total of 400 National Health Service pre-placement health questionnaires were analysed to allocate employees to low-, medium- or high-risk categories for subsequent sickness absence, using the risk table developed by C. J. M. Poole (Can sickness absence be predicted at the pre-placement health assessment? Occup Med (Lond) 1999; 49:337-339) [1]. Subsequent sickness absence was analysed to assess if there was an association between the allocated category and sickness absence taken. Results: Mean sickness absence hours per 1000 h worked were 22.5 (95% CI 18.2-27.2) in the low-risk group, 33.6 (27.2-40.7) in the medium-risk group and 44.7 (25.1-69.9) in the high-risk group (analysis of variance, P ≤ 0.002), demonstrating a statistically significant difference in sickness absence taken in subsequent years. Conclusions: The results confirmed Poole's hypothesis that future sickness absence can be predicted at the pre-placement health assessment. Certain risk factors, namely female sex, age, smoking, history of at least two previous episodes of low-back pain and previous days' sickness absence identified at pre-placement assessment, predict a greater than average subsequent sickness absence. However, the best model using identified risk factors only predicted 10-12% of the variation in sickness absence. abstract_id: PUBMED:10628062 Can sickness absence be predicted at the pre-placement health assessment? Pre-placement health assessments may be used not only to assess fitness for work but also to predict sickness absence. A risk assessment based on available evidence is proposed for making such judgements. abstract_id: PUBMED:26075194 Job Maintenance by Supported Employment: An Overview of the "Supported Employment Plus" Trial. The number of days of absence from work associated with mental illness has risen dramatically in the past 10 years in Germany. Companies are challenged by this issue and seek help for the physical and mental health of their employees. Supported Employment concepts such as the Individual Placement and Support (IPS) model have been designed to bring jobless persons with mental disorders back to work. In the randomized, controlled SEplus trial, a modified IPS approach is tested concerning its ability to shorten times of sick leave of persons with mental distress or a mental disorder and to prevent them from losing their job. The trial is outlined in this study protocol. abstract_id: PUBMED:25743874 Changes in alcohol drinking and subsequent sickness absence. Aims: The aim was to examine whether changes in alcohol drinking are associated with sickness absence. Methods: Repeated postal questionnaires on alcohol drinking were conducted among employees of the City of Helsinki in 2000-2 and 2007 to assess changes in drinking habits between these two time points.
Data on the number of self-certified and medically confirmed sickness absences were derived from the employer's register. Sickness absences were followed from 2007 until the end of 2010 among employees participating in both questionnaire surveys. The study includes 3252 female and 682 male employees 40-60 years old at baseline. Poisson regression was used in the data analysis and population attributable fractions (PAFs) were calculated. Results: Alcohol drinking was associated especially with self-certified sickness absence. Rate ratios (RRs) and 95% confidence intervals (CIs) for increasing weekly average drinking were 1.38, 1.18-1.62 among women and 1.58, 1.18-2.12 among men. Also stable problem drinking (for women 1.39, 1.26-1.54, for men 1.44, 1.10-1.87) and among women stable heavy drinking (1.53, 1.20-1.94) increased self-certified sickness absence. There were associations between alcohol drinking and medically confirmed sickness absence but these were mainly explained by health and health behaviours. Also, a decrease in weekly average drinking was associated with sickness absence among women whereas among men former problem drinking increased sickness absence. According to the PAF values, problem drinking had a stronger contribution to sickness absence than weekly average drinking. Conclusions: Alcohol drinking is particularly associated with self-certified sickness absence. Reducing adverse drinking habits is likely to prevent sickness absence. abstract_id: PUBMED:24372407 Mental and physical health-related functioning mediates between psychological job demands and sickness absence among nurses. Aims: To investigate whether health-related functioning mediates the effect of psychological job demands on sickness absence in nurses. Background: Nurses face high job demands that can have adverse health effects resulting in sickness absence. Design: Prospective cohort study with 1-year follow-up. Methods: Data for 2964 Norwegian nurses were collected in the period 2008-2010. At baseline, psychological job demands were measured with the Demand-Control-Support Questionnaire. Health-related functioning was assessed by the Mental Composite Score and the Physical Composite Score of the SF-12 Health Survey (2nd version). Sickness absence (no = 0, yes = 1) was self-reported at 1-year follow-up. Interaction and mediation analyses were conducted stratified by tenure (<1 year, 1-2 years, 3-6 years, >6 years) as a registered nurse. Results: A total of 2180 nurses (74%) with complete data were eligible for analysis. A significant three-way interaction between job demands, control and support was found in newly licensed nurses (tenure <1 year). Baseline psychological job demands were positively associated with sickness absence at 1-year follow-up. This association was substantially weakened when Mental Composite Score and Physical Composite Score were introduced as mediator variables, indicating a partial mediation effect that was particularly pronounced in newly licensed nurses. Psychological job demands did not modify the effect of health-related functioning on sickness absence. Conclusion: Both mental and physical health-related functioning mediated between psychological job demands and sickness absence. Nurse managers should pay attention to health-related functioning, because poor health-related functioning may predict sickness absence, especially in newly licensed nurses. abstract_id: PUBMED:27515971 Reporting and prediction of work-related sickness absence by general practitioners.
Background: Information on sickness absence (SA) duration in general practice is difficult to record. The duration of absence certified by general practitioners (GPs) can be viewed as a prognosis for return to work. The Health and Occupation Research network in General Practice (THOR-GP) collects SA information from GPs associated with cases of work-related ill-health. A sample of these cases is followed up 1 year retrospectively to gather information on the duration of absence. Aims: To examine the extent of the underestimation of SA in routinely reported data and to investigate how well GPs predict patients' return to work. Methods: THOR-GPs submit case and SA information using a web-based form. GPs who submitted selected cases were asked about the total number of days of SA and whether the patient had returned to work. Results: THOR-GPs' routine SA data collection underestimated absence duration by 61%. According to the retrospective data, a much larger proportion of periods of absence due to work-related mental ill-health developed into long-term SA (60%) than episodes attributed to musculoskeletal disorders (32%). In over half the reported cases, the return to work was longer than the GP initially predicted. Conclusions: THOR-GP prospectively reported SA data underestimated the total length of absence; however, these data can examine the episodic rates of absence within different groups. More accurate longitudinal data can be collected retrospectively. GPs' ability to predict the length of time a patient will be away from work is important to enable treatment and rehabilitation planning in order to decrease the likelihood of a patient falling into long-term SA. abstract_id: PUBMED:30688122 Sick leave assessments of workers with subjective health complaints: a cross-sectional study on differences among physicians working in occupational health care. Aims: To obtain more insight into differences in sick leave assessments of workers with subjective health complaints, we studied sick leave assessments among Dutch occupational and insurance physicians, and explored possible determinants for these differences. Methods: A cross-sectional study was conducted among 50 occupational and 43 insurance physicians in the Netherlands. They all assessed sick leave (complete, partial or no) of nine video case vignettes of workers with subjective health complaints and gave their opinion on the complaints, sick leave and health status. Data were analyzed via a multinomial regression approach, using generalized estimating equations in SPSS. Results: Compared to occupational physicians, complete sick leave was less likely to be assessed by insurance physicians (odds ratio 0.74, 95% confidence interval 0.56-0.97). For occupational physicians, psychological diagnoses, private issues and reduced work ability had more influence on the outcome of the sick leave assessment than for insurance physicians. Conclusion: There are differences in sick leave assessments for workers with subjective health complaints between physicians working in the same occupational health system; insurance physicians are stricter in assessing complete sick leave than occupational physicians. These differences may be explained by differences in roles, tasks and perspectives of the physician in occupational health care. Implications for Rehabilitation: Sick leave assessments of workers with subjective health complaints.
• The current study showed that there are differences between occupational physicians and insurance physicians in sick leave assessments for workers with subjective health complaints. • These differences may be based on different perspectives, roles and tasks of physicians. • Physicians working in the rehabilitation and occupational health system should be more aware of the impact of these differences on their assessments and advice. • Better communication and collaboration between physicians, and more insight into and clarification of the perspectives, may result in more agreement between physicians' sick leave assessment and advice towards workers. abstract_id: PUBMED:28118837 Physical and mental health functioning after all-cause and diagnosis-specific sickness absence: a register-linkage follow-up study among ageing employees. Background: Sickness absence has been shown to be a risk marker for severe future health outcomes, such as disability retirement and premature death. However, it is poorly understood how all-cause and diagnosis-specific sickness absence is reflected in subsequent physical and mental health functioning over time. The aim of this study was to examine the association of all-cause and diagnosis-specific sickness absence with subsequent changes in physical and mental health functioning among ageing municipal employees. Methods: Prospective survey and register data from the Finnish Helsinki Health Study and the Social Insurance Institution of Finland were used. Register-based records for medically certified all-cause and diagnosis-specific sickness absence spells (>14 consecutive calendar days) in 2004-2007 were examined in relation to subsequent physical and mental health functioning measured by the Short-Form 36 questionnaire in 2007 and 2012. In total, 3079 respondents who were continuously employed over the sickness absence follow-up were included in the analyses. Repeated-measures analysis was used to examine the associations. Results: During the 3-year follow-up, 30% of the participants had at least one spell of medically certified sickness absence. All-cause sickness absence was associated with lower subsequent physical and mental health functioning in a stepwise manner: the more absence days, the poorer the subsequent physical and mental health functioning. These differences remained but narrowed slightly during the follow-up. Furthermore, the adverse association for physical health functioning was strongest among those with sickness absence due to diseases of musculoskeletal or respiratory systems, and on mental functioning among those with sickness absence due to mental disorders. Conclusions: Sickness absence showed a persistent adverse stepwise association with subsequent physical and mental health functioning. Evidence on health-related outcomes after long-term sickness absence may provide useful information for targeted interventions to promote health and workability. abstract_id: PUBMED:32012684 Diagnosis-Specific Sickness Absence and Subsequent Common Mental Disorders: A Register-Linkage Cohort Study among Finnish Public Sector Employees. Sickness absence is associated with poor health outcomes, but little is known about its consequences for general mental health. This study examined the associations between diagnosis-specific sickness absence and subsequent common mental disorders (CMD).
Register data on medically certified all-cause sickness absence and sickness absence due to mental disorders and musculoskeletal diseases from 2004-2007 were linked to the Helsinki Health Study 2007 and 2012 survey data on City of Helsinki employees in Finland (N = 3560). Using logistic regression and multinomial logistic regression, we analysed the associations between the total number of reimbursed sickness absence days in 2004-2007 and CMD (General Health Questionnaire 12) in 2007 and 2012 and CMD changes. Sickness absence due to mental disorders (age- and sex-adjusted odds ratio (OR) range: 2.16 to 2.93), musculoskeletal diseases (OR range: 2.79 to 2.93) and all-cause sickness absence (OR range: 1.48 to 3.20) were associated with CMD in 2007. In 2012, associations with lower ORs were observed. Associations were also found with changing and especially repeated (OR range: 1.49 to 3.40) CMD. The associations remained after adjusting for work-related covariates and health behaviours. Diagnosis-specific sickness absence showed persistent associations with subsequent CMD and their changes. Attention should be paid to both the short- and long-term consequences of sickness absence for employee mental health. abstract_id: PUBMED:24004774 Workers' opinions on the effect of contact with health care providers on sickness absence duration. Background: Because of the aging working population and the increasing age of retirement the number of workers with chronic illnesses and disabilities is growing. It is important that workers with health complaints receive efficient health care in order to remain fully or at least partly productive. Objective: To explore workers' opinions about the effectiveness of contact with health care providers in shortening sickness absence duration. Methods: Data come from a four-wave study from 2005 to 2008 among Dutch workers (n=1,424). Data were obtained on visits to health care providers, sickness absence and workers' opinions on whether and how their absence could have been shortened. Results: A third of the workers were of the opinion that the health care provider (most often the general practitioner, GP) had played a role in preventing sickness absence and 35% were of the opinion that the health care provider had limited their absence. Most often the physical therapist (71%) and mental health therapist (61%) shortened sickness absence duration, in contrast to the occupational physician (OP, 25%) and GP (32%). The effectiveness of the health care providers' treatment was associated with the cause of sickness absence. Approximately 15% of the workers reported that their sickness absence could have been shortened if health care providers had provided the proper treatment and if waiting times had been reduced. Conclusions: Health care providers differ in their potential to shorten sickness absence duration. Health care providers can further reduce sickness absence and health care costs by providing the proper treatment and by reducing waiting times. Answer: Yes, pre-placement health assessments can predict subsequent sickness absence. A study analyzing 400 National Health Service pre-placement health questionnaires found that certain risk factors identified at the pre-placement health assessment could predict greater than average subsequent sickness absence. These risk factors included female sex, age, smoking, history of at least two previous episodes of low-back pain, and previous days of sickness absence.
The study confirmed the hypothesis that future sickness absence can be predicted at the pre-placement health assessment, although the best model using identified risk factors only predicted 10-12% of the variation in sickness absence (PUBMED:18346955). This finding supports the notion proposed in an earlier study that pre-placement health assessments may be used to predict sickness absence based on a risk assessment (PUBMED:10628062).
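Note on the rate and attributable-fraction estimates: the alcohol study above (PUBMED:25743874) analyzes sickness-absence counts with Poisson regression and summarizes the contribution of drinking with population attributable fractions (PAFs). A minimal sketch, assuming a log-linear rate model with follow-up time t_i as an offset and Levin's formula for the PAF (the abstract does not state which PAF estimator was used):

\log E[Y_i] = \log t_i + \beta_0 + \beta_1 \, drinking_i + \gamma' X_i, \qquad RR = e^{\beta_1}

PAF = \frac{p_e (RR - 1)}{1 + p_e (RR - 1)}

where p_e is the prevalence of the drinking exposure. The formula makes the reported pattern interpretable: an attributable fraction depends on both the rate ratio and how common the exposure is, so PAFs can rank exposures differently than rate ratios alone.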
Instruction: Does parental physical violence reduce children's standardized test score performance? Abstracts: abstract_id: PUBMED:17719240 Does parental physical violence reduce children's standardized test score performance? Purpose: Many negative cognitive and behavioral outcomes have been identified among children living in households with parental violence, but few studies have examined academic performance. In a rural population-based cohort, we examine the role of parental violence on standardized test score performance. Methods: The cohort included 306 children ages 6 through 17. Parents responded to a health interview that included questions about physical violence. Children's standardized test scores were collected prospectively for 5 years after the parent interview. Hierarchical multivariate models clustering on school, household, and repeated individual test scores and controlling for children's and parent's characteristics were run to predict test score performance. Results: One in five children lived in a household in which parents reported at least one act of physical violence. Children whose parents reported intimate partner violence (IPV) performed an average of 12.2 percentile points lower than children whose parents reported no IPV (95% CI, -19.2 to -5.2; p < 0.001). Parent-reported IPV led to larger test score reductions for girls than for boys and for children less than 12 years old than for older children. Conclusions: Parental physical violence was common, and children in homes with violence had significantly poorer performance on standardized test scores. abstract_id: PUBMED:36804027 Parental stress and physical violence against children during the second year of the COVID-19 pandemic: results of a population-based survey in Germany. Background: Parents and caregivers belonged to those with the highest burdens during the COVID-19 pandemic. Considering the close link between parental stress and child maltreatment, identifying families with high parental stress is of utmost importance to prevent violence against children. Within this study, we thus aimed to investigate the interplay of parental stress, changes in parental stress, and physical violence against children during the second year of the COVID-19 pandemic on an exploratory level. Methods: We conducted a cross-sectional, observational study in Germany from July to October 2021. By using different sampling steps, a representative probability sample of the German population was generated. A subsample of these participants with children under the age of 18 was included for analysis within this study (N = 453, 60.3% females, mean age = 40.08 years; SD = 8.53). Results: Higher parental stress levels were associated with more physical violence against children, higher levels of own experiences of child maltreatment, and mental health symptoms. An increase in parental stress during the pandemic was associated with female sex, the use of physical violence against children, and parental experience of child maltreatment. Parents who have ever used physical violence against their children have been characterized by higher parental stress levels, a stronger increase in parental stress during the pandemic, own experience of child maltreatment, mental health symptoms and sociodemographic characteristics.
Higher parental stress levels, a stronger increase in parental stress during the pandemic, having pre-existing psychiatric disorders, and parental experience of child maltreatment predicted an increased use of physical violence against children during the pandemic. Conclusions: Our results underscore the importance of parental stress for the risk of physical violence against children, more so in times of overall increased stress due to the pandemic, and underline the need for low-threshold support for families at risk in times of crises. abstract_id: PUBMED:29422877 Association between Children's Physical Activity and Parental Practices Enhancing Children's Physical Activity: The Moderating Effects of Children's BMI z-Score. Objective: Parental practices that aim at increasing children's physical activity were found to be related to children's physical activity. So far, however, the association between these two sets of variables was studied without considering the moderating role of children's BMI z-score, which may determine the effectiveness of parental practices. The present study aims at filling this void. Design: Longitudinal data were collected among 879 dyads of children (6-11 years old) and their parents. Seven parental physical activity practices were assessed at baseline. Physical activity, body mass, and height (measured among children) were assessed twice (at baseline and 7-month follow-up). Body mass and height were measured objectively. Seven moderation analyses were conducted. Results: Six parental practices emerged to predict physical activity of children: collaborative social control, overall support, stimulation to be active, general encouragement for physical activity, positive social control, and modeling. Children's BMI z-score moderated three associations. The relationships between parental positive social control, overall parental support, and general parental encouragement for physical activity (at baseline), and children's physical activity (at follow-up) were significant only among children with low and medium BMI z-score. In turn, collaborative social control and modeling predicted children's physical activity at the follow-up regardless of the child's BMI z-score. Conclusions: Parental positive social control or overall parental support may be ineffective in children with higher body mass who need to increase their physical activity. abstract_id: PUBMED:31184531 The Toxic Triad: Childhood Exposure to Parental Domestic Violence, Parental Addictions, and Parental Mental Illness as Factors Associated With Childhood Physical Abuse. Childhood physical abuse can result in serious behavioral, mental health, and physical health conditions. There is a need for improved strategies to identify households in which childhood physical abuse occurs. This article explores three potential correlates of childhood physical abuse: childhood exposure to parental domestic violence, parental addictions, and parental mental illness. Secondary analyses were conducted using the regionally representative 2010 Brief Risk Factor Surveillance Survey (BRFSS) of adults (n = 9,241 men, n = 13,627 women) and the analyses were replicated in the 2012 BRFSS (n = 11,656 men, n = 18,145 women). Bivariate and logistic regression analyses were conducted. Approximately one-quarter of the sample was Hispanic and/or Non-White.
In 2010, 78.3% of men who had been exposed to all three of these early adversities reported that they had experienced childhood physical abuse compared with 7.5% of males who did not experience these adversities. Women reported similar levels of childhood physical abuse (66.9% for those reporting all three factors, 5.9% for those with zero risk factors). The 2012 BRFSS analyses resulted in comparable findings. Domestic violence, even in the absence of parental addictions and mental illness, was associated with a high prevalence of childhood physical abuse (between 34% and 38%). Currently, the World Health Organization cautions against routine screening for child abuse due to the high rate of false positives. We propose a two-step strategy to improve targeting: first, identifying households in which two or more adversities exist, and subsequently screening children in these households. Our findings will help improve the targeting of screening and outreach efforts to children most at risk, thereby minimizing the risk of false positives. Our data provide support for universal screening for childhood physical abuse in cases of domestic violence, particularly for those families where parental addictions and/or parental mental illness also exist. abstract_id: PUBMED:31261833 The Relationship between Parental Perception of Neighborhood Collective Efficacy and Physical Violence by Parents against Preschool Children: A Cross-Sectional Study in a County of China. Children exposed to negative neighborhood environments are at high risk of experiencing violence. This study aimed to explore the effects of parental perception of neighborhood collective efficacy on parental physical violence (PV) to their preschool children in a county of China. A total of 1337 parents from nine kindergartens were recruited by the stratified random cluster sampling method. Data about parental PV behavior toward children during the past three months, parental perception of neighborhood collective efficacy, together with their attitudes towards the use of corporal punishment to discipline children, and demographic characteristics were collected. Their relationships were investigated by applying multivariable logistic regression models. Overall, 67.5% of the parents reported at least one form of PV during the past three months. The rates of minor PV (MPV) and severe PV (SPV) were 67% and 22.8%, respectively. The results of multivariate logistic regression showed that only social cohesion was associated with lower odds of parental PV and MPV behavior after controlling for covariates. The results suggest that neighborhood collective efficacy is associated with parental PV behavior against their children to some extent, but the effects differ according to the severity level of PV. Neighborhood social cohesion may have a positive role in reducing parental PV behavior in the county surveyed in the present study. abstract_id: PUBMED:32111051 Child-to-Parent Violence as an Intervening Variable in the Relationship between Inter-Parental Violence Exposure and Dating Violence. The exposure of adult children to inter-parental violence is an indirect form of victimization which has not been widely investigated in relation to its consequences in adulthood.
The main goal of this study was to analyze predictors of dating violence based on an integrated model of intergenerational transmission of violence with the assessment of potential indirect effects of inter-parental violence exposure on dating violence through child-to-parent violence and sexism. A total of 847 college students participated in this study, ranging from 18 to 25 years of age. Inter-parental violence exposure plays a relevant role in dating violence, with indirect effects through child-to-parent violence and sexism. These results support social learning theory in explaining the intergenerational transmission of violence and indicate that further attention should be paid to children exposed to inter-parental violence. Intervention models to prevent the perpetration of dating violence should include the prevention of inter-parental violence exposure and child-to-parent violence. abstract_id: PUBMED:16452364 Parental expectations, physical punishment, and violence among adolescents who score positive on a psychosocial screening test in primary care. Objective: We sought to examine the relationship between perceived and stated parental expectations regarding adolescents' use of violence, parental use of physical punishment as discipline, and young adolescents' violence-related attitudes and involvement. Methods: Surveys were completed by 134 youth and their parents attending 8 pediatric practices. All youth were 10 to 15 years of age and had scored positive on a psychosocial screening test. Results: Multivariate analyses revealed that perceived parental disapproval of the use of violence was associated with a more prosocial attitude toward interpersonal peer violence and a decreased likelihood of physical fighting by the youth. Parental report of whether they would advise their child to use violence in a conflict situation (stated parental expectations) was not associated with the adolescents' attitudes toward interpersonal peer violence, intentions to fight, physical fighting, bullying, or violence victimization. Parental use of corporal punishment as a disciplining method was inversely associated with a prosocial attitude toward interpersonal peer violence among the youth and positively correlated with youths' intentions to fight and fighting, bullying, and violence victimization. Conclusions: Perceived parental disapproval of the use of violence may be an important protective factor against youth involvement in violence, and parental use of physical punishment is associated with both violence perpetration and victimization among youth. Parents should be encouraged to clearly communicate to their children how to resolve conflicts without resorting to violence and to model these skills themselves by avoiding the use of physical punishment. abstract_id: PUBMED:29096162 Violence against children in Afghanistan: Concerns and opportunities for positive change. Violence against children (VAC) in Afghanistan is a serious issue in the context of many decades of conflict and poverty. To date, limited studies have explored the extent of VAC in Afghanistan and the settings where VAC takes place. To understand (i) the extent of VAC, (ii) settings where VAC takes place, (iii) parental forms of VAC and (iv) regional differences, an interview-administered cross-sectional survey was employed among a community sample of 145 children and 104 parents living within Kabul, Torkham, and Jalalabad.
Demographic information was collected as well as items from the International Child Abuse Screening Tool (ICAST-CH). In this study, 71% of children reported experiencing physical violence in some form in the past year. Home was the most likely location of violence. The overwhelming majority of parents reported using physical violence as a discipline method. Parents who attained higher levels of education and had more skilled occupations used violence less as a discipline method. However, consistent with international research, children cited their parents as their preferred source of support in situations of violence. Interestingly, parents did not see violent forms of discipline as more effective than non-violent strategies. The results offer a disturbing yet 'on the ground' insight into VAC in Afghanistan from the experience of children and parents. The results have important implications for programming design and provide a focus for stopping and preventing VAC in Afghanistan and similar contexts. abstract_id: PUBMED:29528800 Do Multiple Forms of Social Capital Mediate the Relationship Between Parental Violence and Children's Maladaptive Behaviors? Many things can harm children's well-being. Among them, exposure to parental violence makes children vulnerable and often leads to aggression and/or depression. However, not all children who have suffered parental violence show aggressive behavior or depressive mood. Social capital, defined as resources accruing from interpersonal relationships, was proposed to significantly mediate the relationships among adverse experiences and their negative impacts. In previous studies, social capital accrued from parents played a positive role for children in violent situations, but children exposed to parental violence need alternative sources of social capital. This study targeted fourth-grade Korean children and aimed to identify and test the role of various forms of social capital to help children overcome negative consequences from parental violence. Siblings, friends, teachers, neighbors, and online acquaintances were sources of social capital, and the results showed that social capital from siblings, teachers, neighbors, or online acquaintances mediated the relationships between parental violence and aggressive behavior. In addition, social capital from siblings and online acquaintances mediated the relationships between parental violence and depressive mood. The findings have implications in terms of intervention. It is suggested that multiple forms of social capital from children's immediate environments are helpful in their adaptation from exposure to parental violence, and thus, relationship-based interventions are recommended. abstract_id: PUBMED:37352648 Fear of COVID-19 and parental violence: The mediating role of parental burnout and child perceived as difficult. Background: Since the beginning of the COVID-19 pandemic, there has been a worldwide increase in the incidence of child abuse. Studies show that the pandemic context contributes to exacerbating several risk factors usually associated with the use of violent disciplinary practices. Objective: This study aims to better understand the role of parental burnout and child perceived as difficult as a parental stressor in the link between fear of COVID-19 and the use of parental violence (minor and severe physical violence and repeated psychological aggression). Participants: The sample includes 467 mothers living in Québec (Canada) with a child aged 5 or less.
Method: An online questionnaire, administered one year after the beginning of the COVID-19 pandemic in Québec (March to May 2021), was used to measure parental violence, parental burnout, parental stress related to the perception of the child as difficult and fear of COVID-19. Serial mediation analyses were performed. Results: The main analyses confirmed the indirect association between fear of COVID-19 and the three forms of parental violence studied, through parental burnout and the child perceived as difficult. Unlike physical violence (minor and severe), the association between fear of COVID-19 and repeated psychological aggression is explained only by parental burnout. Conclusions: This study has identified new mechanisms that allow a better understanding of processes underlying parental violence during the pandemic. It also shows that parental violence can also occur in low-risk families. It is crucial to develop strategies to prevent the use of violent disciplinary practices in future socio-health crises such as the COVID-19 pandemic. Answer: Yes, parental physical violence does reduce children's standardized test score performance. A study conducted on a rural population-based cohort found that children whose parents reported intimate partner violence (IPV) performed an average of 12.2 percentile points lower on standardized test scores than children whose parents reported no IPV. The negative impact on test scores was more pronounced for girls than for boys and for children under 12 years old compared to older children (PUBMED:17719240).
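Note on the mediation model: the fear-of-COVID study above (PUBMED:37352648) reports serial mediation analyses with parental burnout and the child perceived as difficult as sequential mediators. A minimal sketch in standard serial two-mediator notation (the path labels follow the usual convention for such models and are not taken from the source):

M_1 = i_1 + a_1 X + e_1 \quad \text{(parental burnout)}

M_2 = i_2 + a_2 X + d_{21} M_1 + e_2 \quad \text{(child perceived as difficult)}

Y = i_3 + c' X + b_1 M_1 + b_2 M_2 + e_3 \quad \text{(parental violence)}

with X as fear of COVID-19. The serial indirect effect is the product a_1 d_{21} b_2; the finding that repeated psychological aggression is explained only by parental burnout corresponds, in this notation, to the a_1 b_1 path carrying that indirect effect on its own.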
Instruction: Do respiratory therapists receive training and education in smoking cessation? Abstracts: abstract_id: PUBMED:21106321 Do respiratory therapists receive training and education in smoking cessation? A national study of post-secondary training programs. Objective: To assess the tobacco-related education provided by post-secondary respiratory therapy training programs in the United States. Methods: A cross-sectional research design was used to survey the entire population of program directors of post-secondary respiratory therapy training programs in the United States. A valid and reliable questionnaire was developed and mailed using a 2-wave mailing technique (73% return rate). Internal reliability coefficients (Cronbach alpha) for the various components of the questionnaire ranged from 0.78 to 0.91. Results: More than half of programs (56%) offered no teaching on the 5R's. Nearly half (47%) offered no teaching on the 5A's. Of the 13 tobacco-related topics listed in the basic science and clinical science sections of the questionnaire, only one topic (i.e., diseases linked to tobacco use) received 3 h or more of instruction by approximately a third of programs (35.8%). The majority of programs (>90%) spent no time teaching students about the socio-political aspects of tobacco use cessation. Moreover, 41% of programs did not formally evaluate students' competence in providing smoking cessation counseling to patients. Conclusions: Tobacco-related education is a very minor component of the education and training received by respiratory therapy students in the United States. Practice Implications: Respiratory therapy training programs in the United States have great potential to strengthen the tobacco-related education that they provide to students. Practicing respiratory therapists would likely benefit from continuing medical education focused on how to use evidence-based smoking cessation counseling techniques with patients. abstract_id: PUBMED:22704749 Smoking cessation and counseling: practices of Canadian physical therapists. Background: Although engaging or supporting smoking cessation with patients is a health priority for health professionals, the degree to which physical therapists do so is unknown. They have a particular responsibility given their professional commitment to patient education, and typical practice pattern (i.e., long, multiple visits). Purpose: This study examined the smoking-cessation practices of Canadian physical therapists, including the frequency of such counseling; use of the established 5A's approach (ask, advise, assist, assess, and arrange follow-up); and smoking-cessation training received. Methods: A cross-sectional postal survey of licensed practicing physical therapists in Canada was conducted. Surveys were mailed between April and June 2009 and data analyzed in November 2009. Descriptive statistics characterized their sociodemographics and counseling practices; chi-square assessed differences between those trained in smoking-cessation counseling and those untrained for the 5A's, and regional differences in smoking-cessation counseling frequency and training. Results: Completed surveys (n=738) yielded a 78.1% response rate. Most physical therapists (54.0%) counsel rarely or not at all. Regional differences for smoking-cessation counseling were observed. In all, 76.3% asked their patients if they smoke, but few (21.6%) reported assisting their patients to quit smoking.
Few reported receiving smoking-cessation counseling training; proportionally, those trained in smoking-cessation counseling assisted, assessed, and arranged follow-ups more than those who were untrained. Conclusions: Few Canadian physical therapists (25.4%) counsel for smoking cessation all or most times, or adhere to the established 5A's approach. Smoking-cessation counseling training including the 5A's needs to be included in physical therapy continuing education and the curricula in entry-level programs, consistent with 21st-century health priorities. abstract_id: PUBMED:23232737 Respiratory therapists' smoking cessation counseling practices: a comparison between 2005 and 2010. Objective: We assessed whether smoking cessation counseling practices and related psychosocial characteristics among respiratory therapists (RTs) improved between 2005 and 2010. Methods: Data were collected in mailed self-report questionnaires in 2005 and in 2010, in random independent samples of active licensed RTs in Québec, Canada. Results: The response proportion was 67.6% in 2005 and 59.9% in 2010. There were no substantial differences in mean cessation counseling scores according to year of survey. RTs who reported that they had received cessation counseling training during their studies or after their studies (when they were in practice) had statistically significantly better counseling practices for both patients ready and patients not ready to quit than untrained RTs. In addition, their self-efficacy to provide effective counseling was higher and they perceived fewer knowledge-related barriers to cessation. Further, RTs trained after their studies perceived fewer patient-related and time barriers to cessation counseling, and had better knowledge of community resources. Conclusions: Although the proportion of RTs trained in smoking cessation counseling during and after studies increased between 2005 and 2010 (from 3% to 14%, and from 17% to 29%, respectively), sustained efforts are needed to increase the number of trained RTs, so that this translates into positive observable changes in counseling practices. abstract_id: PUBMED:29944038 Predictors of physical therapists' intentions to counsel for smoking cessation: Implications for practice and professional education. Objective: This study explored factors predicting intention (the predominant construct in social cognitive behavioral models) for smoking cessation (SC) counseling that may provide salient information for designing/targeting SC counseling training for physical therapists (PTs). Design: Cross-sectional questionnaire survey of PTs licensed to practice in Canada. Methods: Self-efficacy items and PT views were assessed for internal consistency (Cronbach's α) and data reduced using principal axis factor analysis. Hierarchical linear regression modeling assessed predictors of intention to counsel for SC. Results: Internal consistency: self-efficacy and PT views, α = 0.937 and 0.821, respectively. Factor structures from self-efficacy: "skills and knowledge" and "clinic incidentals" (57% total variance); from PT views: "professional role" and "role modeling" (63.8% total variance). Significant predictors of intent to counsel for SC were "professional role" (β = 0.54, p ≤ 0.001) and "skills and knowledge" (β = 0.23, p ≤ 0.001). Conclusions: Physical therapists' intent to engage in SC counseling increases when they consider it their "professional role."
Encouraging PTs to view SC counseling as a professional role, and building their self-efficacy in the skills and knowledge needed to deliver it, should be incorporated into entry-level academic physical therapy programs and continuing professional PT education. abstract_id: PUBMED:31864933 Catalyst for Change: Measuring the Effectiveness of Training of All Health Care Professionals to Provide Brief Intervention for Smoking Cessation to Cancer Patients. Background: Smoking cessation is recommended to prevent individuals from developing cancer, with additional benefits after a cancer diagnosis. Tobacco use during cancer treatments increases the risk of complications and may reduce the effectiveness of treatment; patients who quit smoking are less likely to experience recurrence. Health care professionals play a vital role in assessing and educating cancer patients on how to quit. We report on the effectiveness of peer-to-peer education of a brief smoking cessation intervention to patients. Methods: An interprofessional team was created to implement and integrate smoking cessation best practices into daily clinical care. Health care providers were provided with peer-to-peer training to provide brief interventions of smoking cessation to patients. After training, participants were invited to complete an electronic survey to assess the effectiveness of the peer-to-peer training sessions. The survey consisted of 3 domains: confidence, comfort, and knowledge. Participants were asked to rate a series of statement questions using a Likert scale as well as to self-assess knowledge. The survey also included open-ended questions to invite respondents to share further comments and feedback. Results: Approximately 90% of staff across the oncology program participated in a training session. This included nurses, radiation therapists, and patient and family support professionals. Sixty-one surveys were returned (30% response rate). Most respondents had >10 years of clinical experience (70%), 91% agreed it was important to support patients in their efforts to quit smoking, 88% agreed they have an impact on their patients' smoking cessation attempts, and 67% had opportunities in daily practice to support smoking cessation. Sixty-one percent would benefit from additional education. Fifty-three percent indicated they experienced challenges in providing interventions and with patients' responsiveness to the intervention. Conclusions: Peer-to-peer training for smoking cessation can increase confidence, comfort, and knowledge. Challenges include the clinician's comfort level, determining the best time to ask patients, and having confidence in knowing their scope of practice. abstract_id: PUBMED:26920077 A survey of UK optometry trainees' smoking cessation training. Background: Smoking is a risk factor for a number of eye conditions, including age-related macular degeneration, cataracts and thyroid eye disease. Smoking cessation interventions have been shown to be highly cost-effective when delivered by a range of healthcare professionals. Optometrists are well placed to deliver smoking cessation advice to a wide population of otherwise healthy smokers. Yet optometrists remain a relatively neglected healthcare professional group in smoking cessation research and policy. Surveys of UK medical/nursing schools and of optometrists' training internationally demonstrate significant deficits in current curricular coverage regarding smoking cessation.
This study aimed to identify the extent of smoking cessation training in UK optometry trainees' undergraduate and pre-registration training. Methods: All undergraduate optometry schools in the UK (n = 9) were invited to participate in a web-based survey of their curricular coverage and assessment related to smoking cessation, and of perceived barriers to delivering smoking cessation training. A content analysis of the College of Optometrists Scheme for Registration Trainee Handbook 2014 was conducted to identify competence indicators related to smoking cessation. Results: Nine undergraduate optometry schools (100%) responded to the survey. The majority reported dedicating limited hours (0-3) to teaching smoking cessation, and predominantly focused on teaching the harmful effects of smoking (89%). Only one school provides practical skills training for delivering evidence-based smoking cessation interventions, including very brief advice. The majority of schools (78%) reported that they did not formally examine students on their knowledge or skills for supporting smoking cessation, and rated confidence in their graduates' abilities to deliver smoking cessation interventions as 'poor' (78%). Lack of knowledge amongst staff was identified as the key barrier to teaching about smoking cessation support. The pre-registration competency framework does not include any competence indicators related to providing support for quitting smoking. Conclusions: There are substantial gaps in the current curricula of UK optometry training, particularly regarding practical skills for supporting smoking cessation. Increased curricular coverage of these issues is essential to ensure trainee optometrists are adequately trained and competent in supporting patients to quit smoking. abstract_id: PUBMED:28745954 Assessing the impact of a respiratory diagnosis on smoking cessation. The aim of this study was to assess the impact of respiratory diagnoses on smoking cessation. A total of 229 current and former smokers, with and without respiratory diagnoses, completed an anonymous online questionnaire assessing how their smoking habit changed when diagnosed with various respiratory conditions. Among all participants, the most common reason for quitting smoking was to reduce the risk of health problems in general. In those with a chronic respiratory diagnosis, this was their most common reason for quitting. Motivation to quit smoking, scored by participants on a scale of 0-10, increased at the time of diagnosis and then further increased after diagnosis of a chronic respiratory condition, but declined after diagnosis of an acute respiratory condition. The research had a small sample size, so further research is required. However, important themes are highlighted with the potential to influence clinical practice. All clinicians should receive training to promote cessation at the time of diagnosing respiratory conditions.
Patient-nurse role-playing was used to demonstrate cessation counseling for smokers in the contemplation and preparation stages. Motivational interviewing techniques were emphasized throughout the video. Knowledge and self-efficacy in smoking cessation counseling were assessed pre and post training by a questionnaire. Results: Comparing pre and post training, the mean scores for knowledge (10.75 ± 2.39 vs 13.01 ± 2.86, t=7.716, p<0.001) and self-efficacy in smoking cessation counseling (3.70 ± 0.83 vs 4.36 ± 0.58, t=11.187, p<0.001) increased significantly. These positive learning outcomes were found in nurses both with and without experience in cessation counseling (p<0.001). Conclusions: This study shows that video training can improve nurses' knowledge and confidence in smoking cessation counseling. It could therefore be included in nursing continuing education to improve nurses' knowledge and confidence in smoking cessation services. abstract_id: PUBMED:19737205 Postgraduate education for doctors in smoking cessation. Introduction And Aims: Smoking cessation advice from doctors helps improve quit rates but the opportunity to provide this advice is often missed. Postgraduate education is one strategy to improve the amount and quality of cessation support provided. This paper describes a sample of postgraduate education programs for doctors in smoking cessation and suggests future directions to improve reach and quality. Design And Methods: Survey of key informants identified through tobacco control listserves, supplemented by a review of the published literature on education programs since 2000. Programs and publications from Europe were not included as these are covered in another paper in this Special Issue. Results: Responses were received from only 21 key informants from eight countries. Two further training programs were identified from the literature review. The following components were present in the majority of programs: the 5 As (Ask, Advise, Assess, Assist and Arrange) approach (72%), stage of change (64%), motivational interviewing (72%), and pharmacotherapies (84%). Reference to clinical practice guidelines was very common (84%). The most common model of delivery of training was face to face. Lack of interest from doctors and lack of funding were identified as the main barriers to uptake and sustainability of training programs. Discussion And Conclusions: Identifying programs proved difficult and only a limited number were identified by the methods used. There was a high level of consistency in program content and a strong link to clinical practice guidelines. Key informants identified limited reach into the medical profession as an important issue. New approaches are needed to expand the availability and uptake of postgraduate education in smoking cessation. abstract_id: PUBMED:17264326 Smoking cessation in patients with respiratory diseases: a high priority, integral component of therapy. Smoking cessation is one of the most important ways to improve the prognosis of patients with respiratory disease. The Task Force on guidelines for smoking cessation in patients with respiratory diseases was convened to provide evidence-based recommendations on smoking cessation interventions in respiratory patients. Based on the currently available evidence and the consensus of an expert panel, the following key recommendations were made.
1) Patients with respiratory disease have a greater and more urgent need to stop smoking than the average smoker, so respiratory physicians must take a proactive and continuing role with all smokers in motivating them to stop and in providing treatment to aid smoking cessation. 2) Smoking cessation treatment should be integrated into the management of the patient's respiratory condition. 3) Therapies should include pharmacological treatment (i.e. nicotine replacement therapy, bupropion or varenicline) combined with behavioural support. 4) Respiratory physicians should receive training to ensure that they have the knowledge, attitudes and skills necessary to deliver these interventions or to refer to an appropriate specialist. 5) Although the cost of implementing these recommendations will partly be offset by a reduction in attendance for exacerbations, etc., a budget should be established to enable implementation. Research is needed to establish optimum treatment strategies specifically for respiratory patients. Answer: Respiratory therapists in the United States receive minimal tobacco-related education as part of their training. More than half of the respiratory therapy programs surveyed did not offer teaching on the 5R's, and nearly half did not offer teaching on the 5A's. The majority of programs spent little to no time teaching students about the socio-political aspects of tobacco use cessation, and 41% of programs did not formally evaluate students' competence in providing smoking cessation counseling to patients (PUBMED:21106321). Additionally, a study comparing smoking cessation counseling practices among respiratory therapists in Québec, Canada, between 2005 and 2010 found that although the proportion of therapists trained in smoking cessation counseling increased during this period, sustained efforts are needed to increase the number of trained therapists to translate into positive changes in counseling practices (PUBMED:23232737). These findings suggest that while some respiratory therapists may receive some training in smoking cessation, it is not a significant component of their education, and there is a need for improved and more comprehensive training in this area.
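Several of the instruments above are characterized by internal-consistency coefficients (Cronbach's alpha of 0.78-0.91 for the training-program questionnaire, and 0.937/0.821 for the physical-therapist scales). For reference, alpha for k items is k/(k-1) · (1 - Σ item variances / variance of the scale total). A minimal sketch in Python, using a made-up response matrix as a stand-in for real survey data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Hypothetical 5-item Likert-type responses (1-5) from 20 respondents
rng = np.random.default_rng(42)
latent = rng.normal(size=(20, 1))  # shared trait driving all items
responses = np.clip(np.round(3 + latent + rng.normal(scale=0.7, size=(20, 5))), 1, 5)
print(f"alpha = {cronbach_alpha(responses):.2f}")
```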
Instruction: Is contrast-enhanced US alternative to spiral CT in the assessment of treatment outcome of radiofrequency ablation in hepatocellular carcinoma? Abstracts: abstract_id: PUBMED:19280552 Is contrast-enhanced US alternative to spiral CT in the assessment of treatment outcome of radiofrequency ablation in hepatocellular carcinoma? Purpose: The present study was conducted to assess the efficacy of contrast-enhanced ultrasound with low mechanical index in evaluating the response of hepatocellular carcinoma to percutaneous radiofrequency ablation, by comparing it with 4-row spiral computed tomography. Materials And Methods: 100 consecutive patients (65 men and 35 women; age range: 62-76 years) with solitary hepatocellular carcinomas (mean lesion diameter: 3.7 cm ± 1.1 cm SD) underwent internally cooled radiofrequency ablation. Therapeutic response was evaluated at one month after the treatment with triple-phasic contrast-enhanced spiral CT and low-mechanical-index contrast-enhanced ultrasound following bolus injection of 2.4 ml of Sonovue (Bracco, Milan). 60 out of 100 patients were followed up for another 3 months. Contrast-enhanced sonographic studies were reviewed by two blinded radiologists in consensus. Sensitivity, specificity, NPV and PPV of contrast-enhanced ultrasound examination were determined. Results: After treatment, contrast-enhanced ultrasound identified persistent signal enhancement in 24 patients (24%), whereas no intratumoral enhancement was detected in the remaining 76 patients (76%). Using CT imaging as gold standard, the sensitivity, specificity, NPV, and PPV of contrast-enhanced ultrasound were 92.3% (95% CI = 75.9-97.9%), 100% (95% CI = 95.2-100%), 97.4% (95% CI = 91.1-99.3%), and 100% (95% CI = 86.2-100%). Conclusion: Contrast-enhanced ultrasound with low mechanical index using Sonovue is a feasible tool in evaluating the response of hepatocellular carcinoma to radiofrequency ablation. Accuracy is comparable to 4-row spiral CT. abstract_id: PUBMED:21799595 Technical note: Radiofrequency ablation of hepatocellular carcinoma with contrast-enhanced ultrasound guidance: First Indian experience. Radiofrequency ablation (RFA) of hepatocellular carcinoma (HCC) is an established alternative treatment to surgery and intra-arterial chemotherapy, usually performed under contrast-enhanced CT scan guidance. We describe our experience with the use of contrast-enhanced ultrasound and contrast dynamics analysis for planning and monitoring RFA in a patient with HCC. abstract_id: PUBMED:24917381 Usefulness of contrast-enhanced ultrasonography using Sonazoid for the assessment of therapeutic response to percutaneous radiofrequency ablation for hepatocellular carcinoma. Aim: Accurate assessment of the coagulated area is imperative to achieve an excellent outcome from percutaneous radiofrequency ablation (PRFA) for the treatment of hepatocellular carcinoma (HCC). We evaluated the efficacy of contrast-enhanced ultrasonography (CEUS) with the contrast-enhancing agent Sonazoid for precisely assessing the therapeutic effect of PRFA for HCC. Methods: We enrolled 87 consecutive patients with solitary naïve HCC of less than 3 cm in diameter. PRFA treatment was performed with a 17-G cool-tip needle, and CEUS was performed to assess the ablative margin 3 h after the procedure, when the coagulated tumor outline was easiest to discern. The treatment was repeated until an ablative margin greater than 5 mm was confirmed.
After CEUS assessment of the therapeutic response, the patients were followed to investigate local tumor recurrence. Results: In 78 patients (89.7%), the outline of the coagulated tumors could be recognized by ultrasonography, and CEUS assessment of the ablative margin was successful. The remaining nine patients were assessed by computed tomography. The 5-year cumulative survival rate after the assessment of the treatment response with CEUS was 58.4%, and the 4-year cumulative total recurrence rate was 72.3%. The 5-year cumulative local tumor recurrence rate was very low (2.3%). Conclusion: The assessment with CEUS at 3 h after the PRFA procedure was successful in the majority of the patients, and it yielded a very low rate of local recurrence. abstract_id: PUBMED:23796857 Contrast-enhanced US-guided radiofrequency ablation of hepatocellular carcinoma. Purpose: Contrast-enhanced ultrasound (US) has been shown to be an efficient imaging modality in guiding radiofrequency (RF) ablation of hepatocellular carcinomas (HCC). The purpose of the present study was to assess the usefulness of contrast-enhanced US in guiding RF ablation in patients with early-stage HCC that was not clearly visible on grayscale US or noncontrast computed tomography (CT). Materials And Methods: During a 17-month period, contrast-enhanced US-guided RF ablation was performed in 14 patients with 19 early-stage lesions that were poorly defined on grayscale US and noncontrast CT. Contrast-enhanced US was repeated after 30 minutes, and complete ablation was defined as absence of any arterial-phase enhancement within the ablated lesion. Patients were followed periodically with clinical evaluation, liver function tests, α-fetoprotein measurement, and multiphasic CT or magnetic resonance (MR) imaging for a minimum of 1 year after ablation to look for local recurrence or disease progression. Survival probability was estimated with the Kaplan-Meier method. Results: Complete tumor ablation was achieved in all 19 lesions, with no evidence of residual or recurrent tumor in the ablated areas after a mean follow-up of 16 months. No major complications were observed in any patient. However, new lesions developed in other parts of the liver on follow-up scans in three patients, and were accordingly treated with RF ablation. Two patients died of disease progression or liver failure within the 1-year follow-up. Conclusions: For early-stage HCCs not well visualized on unenhanced US or CT, contrast-enhanced US provides an additional tool to guide RF ablation. abstract_id: PUBMED:15467416 Radiofrequency ablation of liver cancer: early evaluation of therapeutic response with contrast-enhanced ultrasonography. The early assessment of the therapeutic response after percutaneous radiofrequency (RF) ablation is important, in order to correctly decide whether further treatment is necessary. The residual unablated tumor is usually depicted on contrast-enhanced multiphase helical computed tomography (CT) as a focal enhancing structure during the arterial and portal venous phases. Contrast-enhanced color Doppler and power Doppler ultrasonography (US) have also been used to detect residual tumors. Contrast-enhanced gray-scale US, using a harmonic technology which has recently been introduced, allows for the detection of residual tumors after ablation, without any of the blooming or motion artifacts usually seen on contrast-enhanced color or power Doppler US. 
Based on our experience and reports in the literature, we consider that contrast-enhanced gray-scale harmonic US constitutes a reliable alternative to contrast-enhanced multiphase CT for the early evaluation of the therapeutic response to RF ablation for liver cancer. This technique was also useful in targeting any residual unablated tumors encountered during additional ablation. abstract_id: PUBMED:31809215 The Application Comparison of Contrast-Enhanced Ultrasound and Contrast-Enhanced Computed Tomography in Radiofrequency Ablation Treatment for Hepatocellular Carcinoma. Objective: To compare the application value of contrast-enhanced ultrasound (CEUS) and contrast-enhanced computed tomography (CECT) in radiofrequency ablation (RFA) for treatment of hepatocellular carcinoma (HCC). Methods: A total of 112 patients with primary HCC were enrolled for treatment with RFA and were randomly divided equally into a CEUS group and a CECT group. The gender, age, and number and maximum diameter of tumors were compared between the two groups. The median duration of follow-up was 27.0 months and the clinical outcomes were compared. Results: The average times for pre-ablation examination, guiding-needle insertion, and ablation in the CEUS group were significantly shorter than those in the CECT group (p < 0.05). The complete ablation rate of the CEUS group was significantly higher than that of the CECT group (86.36% vs. 73.17%, χ² = 4.618, p = 0.032). There was no significant difference in the incidence of complications (including fever, infection, pain, and liver injury) between the two groups (p > 0.05). The relapse rate of the CEUS group was lower and the survival rate was higher than in the CECT group (p < 0.05). Conclusions: Compared with CECT guidance, CEUS-guided RFA for HCC shortens the time of examination and treatment. Immediate postoperative assessment can improve the overall complete ablation rate, reduce the relapse rate, and increase the survival rate, which provides insights for clinical application. abstract_id: PUBMED:26697413 Imaging Modalities for Assessment of Treatment Response to Nonsurgical Hepatocellular Carcinoma Therapy: Contrast-Enhanced US, CT, and MRI. Tumor response and time to progression have been considered pivotal for surrogate assessment of treatment efficacy for patients with hepatocellular carcinoma (HCC). Recent advancements in imaging modalities such as contrast-enhanced ultrasound (US), computed tomography (CT), and magnetic resonance imaging (MRI) are playing an important role in assessing the therapeutic effects of HCC treatments. According to some HCC clinical guidelines, post-therapeutic evaluation of HCC patients is based exclusively on contrast-enhanced dynamic imaging criteria. The recommended techniques are contrast-enhanced CT or contrast-enhanced MRI. Contrast-enhanced US is employed more in the positive diagnosis of HCC than in post-therapeutic monitoring. Although contrast enhancement is an important finding on imaging, enhancement does not necessarily depict the same phenomenon across modalities. We need to become well acquainted with the characteristics of each modality, including not only contrast-enhanced CT and MRI but also contrast-enhanced US. Many nonsurgical treatment options are now available for unresectable HCC, and accurate assessment of tumor response is essential to achieve favorable outcomes.
For the assessment of successful radiofrequency ablation (RFA), the achievement of a sufficient ablation margin as well as the absence of tumor vascular enhancement is essential. To evaluate the response to transcatheter arterial chemoembolization (TACE), shrinkage of the enhancing tumor is relied on as a measure of antitumor activity. Here, we give an overview of the current status of imaging assessment of HCC response to nonsurgical treatments including RFA and TACE. abstract_id: PUBMED:30317772 The occurrence and clinical significance of contrast agent spillover on immediate enhanced CT reexamination after radiofrequency ablation of liver cancer. Objective: To investigate the relationship between contrast agent spillover on immediate enhanced CT reexamination after radiofrequency ablation of liver cancer and severe postoperative bleeding, and to analyze the risk factors in patients with hepatocellular carcinoma (HCC). Methods: A retrospective analysis was performed of 199 patients with hepatocellular carcinoma who underwent radiofrequency ablation of liver cancer in our hospital from January 2016 to January 2017. A total of 232 cases were treated with ablation. Cases were divided into two groups: a contrast agent spill group and a non-spill group. Basic clinical data, laboratory data related to bleeding risk, and imaging data were analyzed to explore clinical treatment effects and the risk factors for spillover. Depending on the data type, the t-test, χ² test, or logistic regression was used for statistical analysis. Results: In 199 patients (138 males and 61 females), a total of 232 cases were treated with ablation. The average age was (57.56 ± 12.09) years and the average diameter of lesions was (3.42 ± 2.30) cm. Contrast agent spillover occurred in 30 cases (12.93%) on immediate post-ablation CT scanning. There was no severe bleeding in these cases and no special clinical intervention was required after the operation. The other 202 cases formed the non-spill group, in which 2 cases had severe bleeding and received corresponding clinical interventions. Univariate analysis showed statistically significant differences in transthoracic puncture (P = 0.019), comorbid cirrhosis (P < 0.001), and histological type of liver cancer (P = 0.013); there were no significant differences in other clinical data (age, lesion size, Child-Pugh classification, preoperative hemoglobin (HGB), platelet count, prothrombin time, APTT, international normalized ratio, number of puncture needles, and puncture length). There was also no significant difference in postoperative HGB (P = 0.160), preoperative-postoperative HGB change (P = 0.999), or length of hospital stay (P = 0.730) between the two groups. Logistic regression analysis showed that cirrhosis (OR = 5.59, P < 0.001) and transthoracic puncture (OR = 2.67, P = 0.021) may be risk factors for contrast agent spillover on immediate enhanced CT after ablation in patients with hepatocellular carcinoma. Conclusion: Contrast agent spillover is not uncommon on immediate post-ablation CT evaluation of treatment effect (12.93%). It is not predictive of severe postoperative bleeding and does not require excessive clinical intervention. Liver cirrhosis and transthoracic puncture may be risk factors for its occurrence. The risk of severe postoperative bleeding should not be ignored in patients who have no contrast agent spillover after surgery.
abstract_id: PUBMED:23359761 Role of contrast enhanced ultrasound in radiofrequency ablation of metastatic liver carcinoma. Objective: To investigate the application of contrast enhanced ultrasound (CEUS) in planning and guiding radiofrequency ablation (RFA) for metastatic liver carcinoma (MLC). Methods: One hundred and thirty-five patients with clinically and pathologically diagnosed MLC (from gastrointestinal tumors) were included in the present study, and 104 of them had received CEUS prior to RFA to assess the number, size, shape, infiltration, location and enhancing features of the lesions. Among the 104 patients, 21 (20.1%) were excluded from RFA treatment due to too many lesions or large infiltrative range based on CEUS. The remaining 83 patients with 147 lesions underwent RFA (group A). During the same period, another 31 patients with 102 lesions serving as the control group were treated based on findings of conventional ultrasound without contrast (group B). The patients underwent follow-up enhanced CT at the 1st month, and then every 3-6 months after RFA. The tumor was considered early necrosis if no contrast enhancement was detected in the treated area on the CT scan at the 1st month. Results: In group A, 72 of 147 MLC lesions (48.9%) showed increased sizes on CEUS. Among them, 48 lesions (66.6%) appeared enlarged in the arterial phase, and 24 (33.3%) showed an enlarged hypoechoic area in the parenchymal phase. CEUS showed a total of 61 additional lesions in 35 patients (42.1%) (ranging from 8 to 15 mm) compared with conventional ultrasound (US), and 42 (68.8%) of them were visualized in the parenchymal phase only. In total, 208 lesions in group A underwent RFA with CEUS planning, and the tumor necrosis rate was 94.2% (196/208). In this group, local recurrence was found in 16 lesions (7.7%) during 3-42 months of follow-up, and new metastases were seen in 30 cases (36.1%). For group B, the tumor necrosis rate was 86.3% (88/102), local recurrence occurred in 17 lesions (16.7%), and new metastases in 13 cases (41.9%). Early tumor necrosis and recurrence rates were significantly different between the two groups (P=0.018, P=0.016, respectively). Conclusion: CEUS played an important role in RFA for liver metastases through candidate selection and therapy planning, which helped to improve the outcome of the treatment. abstract_id: PUBMED:22174544 Review of dynamic contrast-enhanced ultrasound guidance in ablation therapy for hepatocellular carcinoma. Local ablative techniques-percutaneous ethanol injection, microwave coagulation therapy and radiofrequency ablation (RFA)-have been developed to treat unresectable hepatocellular carcinoma (HCC). The success rate of percutaneous ablation therapy for HCC depends on correct targeting of the tumor via an imaging technique. However, probe insertion often is not completely accurate for small HCC nodules, which are poorly defined on conventional B-mode ultrasound (US) alone. Thus, multiple sessions of ablation therapy are frequently required in difficult cases. By means of two breakthroughs in US technology, harmonic imaging and the development of second-generation contrast agents, dynamic contrast-enhanced harmonic US imaging with an intravenous contrast agent can depict tumor vascularity sensitively and accurately, and is able to evaluate small hypervascular HCCs even when B-mode US cannot adequately characterize the tumors. Therefore, dynamic contrast-enhanced US can facilitate RFA electrode placement in hypervascular HCC, which is poorly depicted by B-mode US.
The use of dynamic contrast-enhanced US guidance in ablation therapy for liver cancer is an efficient approach. Here, we present an overview of the current status of dynamic contrast-enhanced US-guided ablation therapy, and summarize the current indications and outcomes of reported clinical use in comparison with that of other modalities. Answer: Yes, contrast-enhanced ultrasound (CEUS) is considered an alternative to spiral CT in the assessment of treatment outcomes of radiofrequency ablation (RFA) in hepatocellular carcinoma (HCC). Studies have shown that CEUS is a feasible tool with accuracy comparable to 4-row spiral CT in evaluating the response of HCC to RFA. In one study, CEUS identified persistent signal enhancement in 24% of patients post-treatment, and using CT imaging as the gold standard, CEUS demonstrated high sensitivity (92.3%), specificity (100%), negative predictive value (NPV, 97.4%), and positive predictive value (PPV, 100%) (PUBMED:19280552). Additional research supports the use of CEUS in guiding RFA for HCC, indicating that it can be effective for planning and monitoring the procedure (PUBMED:21799595). CEUS with the contrast agent Sonazoid has also been shown to be effective in assessing the therapeutic effect of PRFA for HCC, achieving a very low rate of local recurrence (PUBMED:24917381). For early-stage HCCs not well visualized on unenhanced US or CT, CEUS provides an additional tool to guide RF ablation (PUBMED:23796857). Furthermore, CEUS has been reported to be reliable for the early evaluation of the therapeutic response to RF ablation for liver cancer, offering advantages such as the absence of blooming or motion artifacts typically seen on contrast-enhanced color or power Doppler US (PUBMED:15467416). A comparison study found that CEUS-guided RFA for HCC shortened the examination and treatment time, improved the overall complete ablation rate, reduced the relapse rate, and increased the survival rate compared to CECT-guided RFA (PUBMED:31809215). In summary, CEUS is a valuable alternative to spiral CT for assessing the treatment outcome of RFA in HCC, with several studies demonstrating its efficacy and potential advantages in various aspects of the treatment process.
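The accuracy figures in the answer above come straight from a 2×2 comparison against the CT gold standard, and the underlying counts can be back-calculated from the reported proportions (TP = 24, FN = 2, FP = 0, TN = 74, so sensitivity 24/26 ≈ 92.3% and NPV 74/76 ≈ 97.4%). A minimal sketch of that arithmetic in Python; the counts are inferred for illustration, not taken verbatim from the paper:

```python
def diagnostic_accuracy(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Point estimates for a test-vs-gold-standard 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),  # positives detected among all truly positive
        "specificity": tn / (tn + fp),  # negatives detected among all truly negative
        "PPV": tp / (tp + fp),          # how trustworthy a positive test is
        "NPV": tn / (tn + fn),          # how trustworthy a negative test is
    }

# Counts inferred from PUBMED:19280552 (CEUS vs spiral CT, n = 100)
for name, value in diagnostic_accuracy(tp=24, fp=0, fn=2, tn=74).items():
    print(f"{name}: {value:.1%}")
```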
Instruction: Titanium Elastic Nailing for Pediatric Tibia Fractures: Do Older, Heavier Kids Do Worse? Abstracts: abstract_id: PUBMED:25887826 Titanium Elastic Nailing for Pediatric Tibia Fractures: Do Older, Heavier Kids Do Worse? Background: Elastic nailing is a common method of fixation for tibial shaft fractures in skeletally immature individuals. Poor outcomes of titanium elastic nails for femoral shaft fractures have been associated with increasing patient age and weight, especially patients weighing >50 kg. Our objective is to determine if there is an upper weight or age limit to the safe and effective use of titanium elastic nails for tibial shaft fractures in the pediatric population. Methods: This is a retrospective cohort study of patients who underwent stabilization of a tibial shaft fracture with titanium elastic nails at a large tertiary-care pediatric trauma center. Data collected included patient demographics, injury characteristics, and radiographic data. Weight groups were stratified as ≥ or <50 kg, and age groups as 14 years or older or less than 14 years old. Malunion was defined as 10 degrees of angulation in either the sagittal or coronal plane. Union was defined as bridging of ≥3 cortices on orthogonal radiographs. A significant difference in time to union was considered to be 3 weeks. Results: Ninety-five patients were included with a mean age of 12.1 years (range, 6 to 16 y) and a mean weight of 50.2 kg (range, 21 to 122 kg). Malunion rate was similar between weight cohorts: 13.3% (6/45) in the ≥50-kg group and 10% (5/50) in the <50-kg group (P=0.61). Malunion rate was similarly comparable between age groups: 17.6% (6/34) in the 14 years and older group and 8.2% (5/61) in the less than 14-year-old group (P=0.17). There was no statistically significant difference in time to union between weight or age cohorts. In sum, we did not find a significant difference in the rate of malunion or time to healing between younger and older patients or between lighter and heavier patients. Conclusion: The use of titanium elastic nails for tibial shaft fractures, unlike for other long bone fractures, seems not to be precluded in older and heavier patients. Level Of Evidence: Level III. abstract_id: PUBMED:30415674 Elastic stable intramedullary nailing (ESIN) in paediatric femur and tibia shaft fractures: Comparison between titanium and stainless steel nails. Introduction: Elastic stable intramedullary nailing (ESIN) is currently considered the gold standard in surgical treatment of femur and tibial shaft fractures in school age paediatric patients. Although elastic intramedullary nails are available in both titanium (Ti) and stainless steel (SS) alloy, titanium nails are most commonly used. Nevertheless, there is still contrasting evidence as to whether the use of Ti nails can offer better outcomes in terms of fracture healing and stability over SS nails. The aim of this study was to compare outcomes and complications of Ti and SS ESIN for femur and tibia shaft fractures in a population of school age paediatric patients. Materials And Methods: All consecutive patients who underwent ESIN for femoral or tibial shaft fracture from June 2012 to May 2015 at our centre were retrospectively reviewed. Standard demographic data were collected. Pre-operative and post-operative X-rays were reviewed, and complications were collected from patients' charts.
Patients were divided into two groups, titanium nails (Ti group) and stainless steel nails (SS group), and outcomes compared between the two. Results: A total of 34 patients were included (17 patients in the Ti group, 17 patients in the SS group) with a total of 14 femur and 21 tibia fractures. Average age at surgery was 9.4 ± 2.5 years in the Ti group and 10.4 ± 2.4 years in the SS group (p = 0.21). The average time to bone healing was 3 months in the Ti group, and 2.8 months in the SS group (p = 0.63). At final follow-up (12 months), no patient showed a coronal plane or sagittal plane deformity >10° and >15°, respectively. Complication rate was similar between the two groups (24% Ti group, 22% SS group). Conclusions: We did not observe any significant difference in terms of bone healing, fracture mechanical stability, return to full activity, and complication rate between Ti and SS ESIN for paediatric femoral and tibial shaft fractures. While Ti nails remain a better choice for patients with metal allergy, SS nails may offer a safe, effective, and cheaper alternative to Ti nails in school age femur and tibial shaft fractures. abstract_id: PUBMED:26606988 Elastic nailing of tibia shaft fractures in young children up to 10 years of age. Introduction: Although tibia shaft fractures in children usually have satisfactory results after closed reduction and casting, there are several surgical indications, including associated fractures and soft tissue injuries such as open fractures. Titanium elastic nails (TENs) are often used for pediatric tibia fractures, and have the advantage of preserving the open physis. However, complications such as delayed union or nonunion are not uncommon in older children or open fractures. In the present study, we evaluated children up to 10 years of age with closed or open tibial shaft fractures treated with the elastic nailing technique.
abstract_id: PUBMED:24857493 Unlocked and locked elastic stable intramedullary nailing in an ovine tibia fracture model: a biomechanical study. In the present study, four different systems of elastic stable intramedullary nails (unlocked, Ender stainless steel nails locked with 3-mm screws, titanium nails locked with end caps, titanium nails locked with plugs and 3-mm screws) were implanted in cadaveric ovine tibiae. Fractures were simulated by a transverse diaphyseal osteotomy. The specimens were subjected to simultaneous axial and torsional fatigue loading of 5000 and 1000 cycles, respectively. The unlocked systems failed at an axial load of 200 N peak amplitude. End caps systems withstood axial loads up to 800 N for 1000 cycles, and ender nails and plugs lasted up to 1000 N for 1000 cycles. All systems showed a decrease of axial stiffness with higher loads and endured cycles. Ender nails and nails locked with plugs failed by penetration of the distal epiphysis rather than by loosening of the interlocking system. Overall, the titanium nails locked with plugs and 3-mm screws exhibited superior test results. abstract_id: PUBMED:34833065 Biomechanical Performance of BoneHelix® Compared with Elastic Stable Intramedullary Nailing (ESIN) in a Pediatric Tibia Fracture Model. Tibial shaft fractures are common injuries in the pediatric and adolescent populations. Elastic stable intramedullary nailing (ESIN) is the treatment of choice for cases that require surgical stabilization. A new intramedullary device, BoneHelix® (BH), may be an alternative for use with fractures that cannot be satisfactorily stabilized with ESIN. This study aimed to assess the biomechanical performance of BH compared with ESIN in a porcine tibia fracture model, observing cyclic fatigue and load to failure. Computed tomography was used to monitor the implant position and to rule out unintended damage. No implant or bone failure occurred during the fatigue testing. An increase in the cumulative plastic displacement was observed in both test groups over the loading cycles applied. Both implant-bone constructs displayed a trend toward closure of the osteotomy gap. During the load-to-failure test, the average loads at failure in specimens instrumented with ESIN and BH were 5364 N (±723) and 4350 N (±893), respectively, which were not statistically significant (p = 0.11). The values of both groups were two to three times higher than the estimated maximal load (2000 N) during physiological weight bearing. The biomechanical results thus indicate equivalent performance and stability by the implants tested. abstract_id: PUBMED:17513967 Complications after titanium elastic nailing of pediatric tibial fractures. A retrospective review of 60 diaphyseal tibia fractures (31 closed and 29 open fractures) treated with flexible intramedullary fixation was conducted. All charts and radiographs were reviewed. Children ranged in age from 5.1 to 17 years. Fifty patients with 51 fractures were followed up until union and comprised the study group. The mean follow-up period for these 50 patients was 79 weeks. Forty-five fractures achieved bony union within 18 weeks (mean, 8 weeks). Five patients (11%) had delayed healing (3 had delayed unions that ultimately healed with casting or observation, and 2 had nonunions that required secondary procedures to achieve union [1 patient underwent a fibular osteotomy, and 1 underwent exchange nailing with a reamed tibial nail]). These 5 fractures ultimately healed, with a mean time to union of 41 weeks. 
Patients with delayed healing tended to be older (mean age, 14.1 years) versus the study population as a whole (mean age, 11.7 years). In addition to delayed union, other complications were observed in the study population. One patient healed with malunion (13-degree valgus), requiring corrective osteotomy. One patient with a grade II open fracture was diagnosed with osteomyelitis at the fracture site after attaining bony union. Two patients developed nail migration through the skin, requiring modification or nail removal. The fixation of pediatric diaphyseal tibia fractures with titanium elastic nails is effective but has a substantial rate of delayed healing, particularly in older patients. abstract_id: PUBMED:27424533 Computational comparison of tibial diaphyseal fractures fixed with various degrees of prebending of titanium elastic nails and with and without end caps. Introduction: Elastic stable intramedullary nailing (ESIN) is a treatment strategy for the management of diaphyseal long-bone fractures in adolescents and children, but few studies have investigated the mechanical stability of tibial diaphyseal fractures treated with various degrees of prebending of the elastic nails. Therefore, the aim of this study was to compare the mechanical stability, including the gap deformation and nail dropping, of a tibia fracture at various fracture sites fixed with various degrees of prebending of the elastic nails, using the finite element method. Furthermore, the contribution of end caps to stability was taken into consideration in the simulation. Methods: A tibia model was developed with a transverse fracture at the proximal, middle and distal parts of the diaphysis, and fixed with three degrees of prebending of elastic nails, including those equal to, two times and three times the diameter of the intramedullary canal. The outer diameter of the nail used in the computation was 3.5 mm, and the fractured tibia was fixed with two elastic double C-type nails. Furthermore, the proximal end of each nail was either set free or tied to the surrounding bone by a constraint equation, to simulate fixation without or with end caps. Results: The results indicated that using end caps can prevent the fracture gap from collapsing by stopping the ends of the nails from dropping back in all prebending conditions and fracture patterns. Increasing the prebending of the nails to a degree three times the diameter of the canal reduced the gap shortening and the dropping distance of the nail end in those without end caps under axial compression and bending. Insufficient prebending of the nails and not using end caps caused the gap to collapse and the nail to drop back at the entry point under loading. Conclusions: Using end caps or increasing the prebending of the nails to three times the diameter of the canal is suggested to stop the nail from dropping back and thus produce a more stable structure, with less gap deformation, in the management of a simulated tibial diaphyseal fracture by using titanium elastic nails with a double C-shape. abstract_id: PUBMED:28029668 Titanium Elastic Nailing System (TENS) for Tibia Fractures in Children: Functional Outcomes and Complications. Introduction: Closed reduction and cast application is still regarded as first-line treatment for pediatric tibial fractures.
Over the past few decades, management of pediatric tibial fractures has shifted more towards operative intervention because of quicker recovery, shorter rehabilitation period, less immobilization, lack of stiffness of adjoining joints, and less psychological impact on the children. Flexible intramedullary nails not only fulfill the above advantages but also maintain alignment and rotation. Methods: This was a retrospective study of pediatric tibial fractures fixed with two titanium elastic nails through the proximal ends of the bones. Alignment of fracture, any infection, delayed union, nonunion, limb length discrepancy, motion of knee joint, and fracture union time were measured during follow-up examination. Results: Forty-five patients were enrolled into the study, of which 28 (62.2%) were male and 17 (37.8%) were female. Average patient age was 9.48±2.17 years and average time taken to heal the fractures (both clinical and radiological) was 11.17±2.81 weeks. There were 2 cases (4.4%) of malunion, 4 (8.8%) of delayed union, 3 (6.6%) of limb shortening, 2 (4.4%) of limb lengthening, 6 (13.6%) of nail prominence and skin irritation, 2 (4.4%) of superficial infection at the nail entry site, and one case of re-fracture. Conclusions: Titanium elastic nail fixation is a simple, easy, rapid, reliable and effective method for management of pediatric tibial fractures in patients with operative indications. Complications may occur following TENS in the tibia, but these are avoidable and manageable with careful precautions. abstract_id: PUBMED:18388707 Elastic stable intramedullary nailing of tibial shaft fractures in children. Most pediatric tibia shaft fractures are amenable to nonoperative treatment with satisfying results, yet surgical stabilization is necessary in certain cases. The purpose of our study was to determine the effectiveness and the complications associated with elastic stable intramedullary nailing in severe pediatric tibial fractures. We retrospectively reviewed 24 tibia shaft fractures in 24 patients that were treated operatively by elastic stable intramedullary nailing between 1997 and 2005 at our institution. Extensive review of all charts and radiographic data was completed at Cincinnati Children's Hospital Medical Center. There were 8 closed and 16 open fractures. The average union time for all tibia fractures was 20.4 weeks. The average union time for closed and open fractures was 21.5 and 20.2 weeks, respectively. Complications included 2 (8%) neurovascular injuries, 2 (8%) infections, 2 (8%) malunions, and 1 (4%) leg-length discrepancy. Although complications do exist, elastic stable intramedullary nailing of pediatric tibia shaft fractures using Nancy nails is an effective treatment option. abstract_id: PUBMED:26814925 Reamed intramedullary nailing of diaphyseal tibial fractures: comparison of compression and non-compression nailing. Background: Modern intramedullary implants provide the option to perform compression at the fracture gap in long bone fractures via a compression screw mechanism. The aim of this study was to assess if the application of interfragmentary compression in the intramedullary nailing of tibia fractures could increase the union rate and speed of fracture healing. Methods: Sixty-three patients who suffered from an AO-type 42-A3 or 42-B2 fracture that was treated by reamed intramedullary nailing between 2003 and 2008 were included in this retrospective study.
Twenty-five patients were treated with dynamic interlocking without compression, while 38 were treated with compression nailing. The compression load of the dynamic proximal screw was calculated from the postoperative X-ray and radiographs taken four weeks after the operation. Healing was assessed by radiological evaluation until the completion of bony healing or the disappearance of clinical symptoms. Nonunion was defined as the absence of radiological union and the persistence of clinical symptoms after six months. Results: Postoperative compression was applied at a mean load of 1,852 N, and 980 N remained after four weeks. In the compression group, 19 open and 19 closed fractures occurred. In the non-compression group, 25 patients were included (14 closed and 11 open cases). Active compression decreased healing time significantly. Nonunion occurred in one compression patient and three non-compression patients. Conclusion: The results show that additional compression of the fracture gap can improve healing outcome in simple transverse tibial shaft fractures treated with reamed nailing. Answer: Based on the available abstracts, it appears that older and heavier children do not necessarily have worse outcomes when treated with titanium elastic nailing for pediatric tibia fractures. A retrospective cohort study specifically aimed at determining if there is an upper weight or age limit for the safe and effective use of titanium elastic nails for tibial shaft fractures in the pediatric population found no significant difference in the rate of malunion or time to healing between younger and older patients or between lighter and heavier patients (PUBMED:25887826). This suggests that titanium elastic nailing can be used effectively for tibial shaft fractures in older and heavier pediatric patients, unlike for other long bone fractures where poor outcomes have been associated with increasing patient age and weight. Additionally, another study comparing titanium and stainless steel elastic stable intramedullary nailing (ESIN) for femur and tibia shaft fractures in school-age pediatric patients did not observe any significant difference in terms of bone healing, fracture mechanical stability, return to full activity, and complication rate between the two materials (PUBMED:30415674). This further supports the notion that titanium nails can be used effectively in pediatric patients regardless of age and weight. However, it is important to note that while titanium elastic nailing is effective, there can still be complications such as delayed healing, particularly in older patients (PUBMED:17513967). Additionally, using end caps or increasing the prebending of the nails is suggested to produce a more stable structure with less gap deformation (PUBMED:27424533). In summary, the evidence from the provided abstracts indicates that titanium elastic nailing for pediatric tibia fractures does not result in worse outcomes for older and heavier children, and can be considered a safe and effective treatment option for this demographic.
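The weight- and age-group comparisons in the first abstract (malunion in 6/45 vs 5/50 patients by weight, and 6/34 vs 5/61 by age) are ordinary 2×2 contingency tests; an uncorrected chi-square test reproduces the reported P-values of 0.61 and 0.17. A minimal sketch, assuming scipy is available and that the authors used an uncorrected test:

```python
from scipy.stats import chi2_contingency

def two_group_rate_test(events_a, n_a, events_b, n_b):
    """Uncorrected chi-square test on a 2x2 event/no-event table."""
    table = [[events_a, n_a - events_a],
             [events_b, n_b - events_b]]
    chi2, p, _, _ = chi2_contingency(table, correction=False)
    return chi2, p

# Malunion by weight cohort, >=50 kg vs <50 kg (PUBMED:25887826)
print("weight: chi2 = %.2f, p = %.2f" % two_group_rate_test(6, 45, 5, 50))  # p ~ 0.61
# Malunion by age cohort, >=14 y vs <14 y
print("age:    chi2 = %.2f, p = %.2f" % two_group_rate_test(6, 34, 5, 61))  # p ~ 0.17
```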
Instruction: Pediatric traumatic brain injury: do racial/ethnic disparities exist in brain injury severity, mortality, or medical disposition? Abstracts: abstract_id: PUBMED:16317806 Pediatric traumatic brain injury: do racial/ethnic disparities exist in brain injury severity, mortality, or medical disposition? Introduction: Little is known regarding pediatric racial/ethnic disparities. We sought to determine if racial/ethnic disparities exist in the severity, mortality, or medical disposition of pediatric traumatic brain injury (TBI). Methods: We analyzed data from a comprehensive trauma database assembled at a large independent children's hospital. Among all patients evaluated by the trauma service in the emergency department (ED), cases of TBI were identified (N=1035). Analyses contrasted non-Hispanic White children with all others (minority children). The relationship of race to patient characteristics, brain injury severity, mortality, and medical disposition (hospital admission, intensive care unit admission) was analyzed by using bivariable approaches and multivariate logistic regression. The latter controlled for age, overall injury severity, and insurance status. Results: Although sociodemographic characteristics did not differ, the mechanism of injury was significantly different (P<.001); minority children were more likely to have been a pedestrian or cyclist struck by a vehicle. Minority children were less likely to require transfer to the ED for treatment and were more likely to be publicly insured (P<.001). No differences in brain injury severity, mortality, or medical disposition were observed with either bivariable or multivariable approaches. Conclusions: This study is one of the first to examine potential disparities in trauma and contributes to the small but growing literature in pediatric health disparities. Multiple explanations are explored, several with potential implications for reducing disparities in other health conditions. Identifying conditions in which evaluation and treatment appear to be free of disparities may provide insights for subsequent investigations and interventions. abstract_id: PUBMED:17993963 Ethnic disparities exist in trauma care. Background: An estimated 5.3 million people in the United States live with permanent disability related to traumatic brain injury (TBI). Access to rehabilitation after TBI is important in minimizing these disabilities. Ethnic disparities in access to health care have been documented in other diseases, but have not been studied in trauma care. We hypothesized that access to rehabilitation after TBI is influenced by race or ethnicity. Methods: A retrospective analysis of National Trauma Data Bank patients with severe blunt TBI (head abbreviated injury score 3-5, n = 58,729) who survived the initial hospitalization was performed. Placement into rehabilitation after discharge was studied in three groups: non-Hispanic white (NHW 77%), African American (14%), and Hispanic (9%). The two minority groups were compared with NHW patients using logistic regression to control for differences in age, gender, overall injury severity (injury severity score), TBI severity (head abbreviated injury score and Glasgow Coma Scale score), associated injuries, and insurance status. Results: The three groups were similar in injury severity score, TBI severity, and associated injuries.
After accounting for differences in potential confounders, including injury severity and insurance status, minority patients were 15% less likely to be placed in rehabilitation (odds ratio 0.85, 95% confidence interval 0.8-0.9, p < 0.0001). Conclusions: Ethnic minority patients are less likely to be placed in rehabilitation than NHW patients are, even after accounting for insurance status, suggesting the existence of systematic inequalities in access. Such inequalities may have a disproportionate impact on long-term functional outcomes of African American and Hispanic TBI patients, and suggest the need for an in-depth analysis of this disparity at a health policy level. abstract_id: PUBMED:19015463 Ethnic disparities in initial management of trauma patients in a nationwide sample of emergency department visits. Hypothesis: Ethnic disparities in functional outcomes after traumatic brain injuries have been demonstrated previously. However, it is not clear if these disparities are due to differential access to initial diagnostic and treatment modalities or disproportionate care at poorly funded inner-city emergency departments (EDs). We hypothesized that initial assessment of injured patients in EDs is affected by patient ethnicity. Design: Retrospective database analysis. Setting: Data were obtained from the National Hospital Ambulatory Medical Care Survey ED component for 2003, which includes a national probability sample survey of ED visits. Patients: All injury-related initial ED visits of patients 15 years and older were included. Patients were divided into 3 groups: non-Hispanic white (n = 6106), African American (n = 1406), and Hispanic (n = 1051). Main Outcome Measures: The intensity of ED assessment and management and patient disposition from EDs were compared in the 3 groups. Results: Compared with non-Hispanic white patients, minority patients were slightly younger and less likely to be insured but were similar in sex, mechanism of injury, and injury severity. There were no clinically significant differences between non-Hispanic white patients and the 2 minority groups in ED assessment, diagnostic and treatment modalities, and ED disposition. There were no systematic differences by region of the country, ownership of the hospitals, or insurance status of the patients. Conclusion: The initial assessment and management of injured patients from ethnic/racial minorities was similar to that of non-Hispanic white patients in a nationwide representative sample of ED visits. Other causes of ethnic disparities in outcomes after injuries should be sought. abstract_id: PUBMED:24590153 Racial/ethnic and insurance status disparities in discharge to posthospitalization care for patients with traumatic brain injury. Objectives: Professional, posthospitalization care (PHC) can improve outcomes among patients with traumatic brain injury. We examined disparities in discharge to PHC by patients' race/ethnicity and insurance type. Participants: A total of 6061 adults hospitalized for unintentional traumatic brain injury in Oregon, 2008 to 2011. Main Outcome Measure: Posthospitalization care was assessed on the basis of discharge disposition. Multivariable logistic regression was used to estimate effects of race/ethnicity and insurance on referral to PHC while controlling for potential confounders. Generalized estimating equations were used to calculate odds ratios (ORs) and 95% confidence intervals (CIs), accounting for clustering of data by hospital. Results: 28% of patients were discharged to PHC.
While controlling for potential confounders, Hispanics were less likely to be discharged to PHC (OR, 0.62; CI, 0.40-0.96) than non-Hispanic whites. Compared with patients with private insurance, uninsured patients were less likely to be discharged to PHC (OR, 0.19; CI, 0.11-0.32) whereas patients with public insurance (OR, 1.65; CI, 1.33-2.05) and worker's compensation (OR, 1.66; CI, 1.09-2.52) were more likely to be discharged to PHC. Conclusions: Results suggest that racial/ethnic and insurance disparities exist in discharge to postacute care after hospitalization for traumatic brain injury. Future research should examine factors that might contribute to and reduce these inequities in care. abstract_id: PUBMED:28688653 Ethnic disparities in traumatic brain injury care referral in a Hispanic-majority population. Background: Functional outcomes after traumatic brain injury (TBI) can be significantly improved by discharge to posthospitalization care facilities. Many variables influence the discharge disposition of the TBI patient, including insurance status, patient condition, and patient prognosis. The literature has demonstrated an ethnic disparity in posthospitalization care referral, with Hispanics being discharged to rehabilitation and nursing facilities less often than non-Hispanics. However, this relationship has not been studied in a Hispanic-majority population, and thus, this study seeks to determine if differences in neurorehabilitation referrals exist among ethnic groups in a predominantly Hispanic region. Methods: This study is a retrospective cohort that includes 1128 TBI patients who presented to University Medical Center El Paso, Texas, between the years 2005 and 2015. The patients' age, sex, race, residence, admission Glasgow Coma Scale (GCS), GCS motor, Injury Severity Score (ISS), hospital and intensive care unit length of stay (LOS), mechanism of injury, and discharge disposition were analyzed in univariate and multivariate models. Results: Our study population had an insurance rate of 55.5%. Insurance status and markers of injury severity (hospital LOS, intensive care unit LOS, ISS, GCS, and GCS motor) were predictive of discharge disposition to rehabilitation facilities. The study population was 70% Hispanic, yet Hispanics were discharged to rehabilitation facilities (relative risk: 0.56, P = 0.001) and to long-term acute care/nursing facilities (relative risk: 0.35, P < 0.0001) less often than non-Hispanics even after LOS, ISS, ethnicity, insurance status, and residence were adjusted for in multivariate analysis. Conclusions: This study suggests that patients of different ethnicities but comparable traumatic severity and insurance status receive different discharge dispositions post-TBI even in regions in which Hispanics are the demographic majority.
Significant racial differences in mortality were also identified when national data were compared with local institutional data for children with traumatic brain injury. Studies examining the effect of insurance status on care of the injured child have similarly uncovered unsettling inequities. Disparities in delivery of pediatric trauma care have been identified based on access to pediatric trauma centers. Other studies have sought to reduce disparities by use of guidelines. Finally, prevention studies have demonstrated racial disparities in the use of motor vehicle restraints, with improved restraint use in minority populations after implementation of culturally tailored prevention programs. Summary: The cause of disparities in childhood trauma appears to be multifactorial and may include race, socioeconomic factors, insurance status, access, and healthcare provider biases. Multiple studies have confirmed that disparities exist, but it is difficult to tease out the reasons why they exist. Further work is necessary to identify causes of such disparities and formulate strategies to eliminate them. abstract_id: PUBMED:18212662 Ethnic disparities in long-term functional outcomes after traumatic brain injury. Objectives: Ethnic disparities in access to acute rehabilitation and in long-term global neurologic outcomes after traumatic brain injury (TBI) have been previously documented. The current study was undertaken to determine whether there are specific types of functional deficits that disproportionately affect ethnic minorities after TBI. Methods: The TBI Clinical Trials Network is a National Institutes of Health-funded multicenter prospective study. Local data from trauma centers in a single ethnically diverse major metropolitan study site were analyzed. Functional outcomes were measured in 211 patients with blunt TBI (head Abbreviated Injury Scale score 3-5) who were alive ≥6 months after discharge using the Functional Status Examination (FSE), which measures outcome in 10 functional domains and compares current functional status to preinjury status. For each domain, patients were classified as functionally independent (FSE score 1, 2) or dependent upon others (FSE score 3, 4). Ethnic minorities (n = 66) were compared with non-Hispanic whites (n = 145), with p < 0.05 considered significant. Results: The two groups had similar injury severity (head Abbreviated Injury Scale score, initial Glasgow Coma Scale score, Injury Severity Score) and were equally likely to be placed in rehabilitation after trauma center discharge (minorities 51%, whites 46%, p = 0.28). Minority patients experienced worse long-term functional outcomes in all domains, which reached statistical significance in post-TBI standard of living, engagement in leisure activities, and return to work or school. Conclusions: Ethnic minorities with TBI suffer worse long-term deficits in three specific functional domains. TBI rehabilitation programs should target these specific areas to reduce disparities in functional outcomes in ethnic minorities. abstract_id: PUBMED:33380167 Regional, Racial, and Mortality Disparities Associated With Neurosurgeon Staffing at Level I Trauma Centers. Background: Traumatic brain injury (TBI) occurs in approximately 30% of trauma patients. Because neurosurgeons hold expertise in treating TBI, increased neurosurgical staffing may improve patient outcomes. We hypothesized that TBI patients treated at level I trauma centers (L1TCs) with ≥3 neurosurgeons have a decreased risk of mortality vs.
those treated at L1TCs with <3 neurosurgeons. Methods: The Trauma Quality Improvement Program database (2010-2016) was queried for patients ≥18 years with TBI. Patient characteristics and mortality were compared between ≥3 and <3 neurosurgeon-staffed L1TCs. A multivariable logistic regression analysis was used to identify risk factors associated with mortality. Results: Traumatic brain injury occurred in 243,438 patients with 5188 (2%) presenting to L1TCs with <3 neurosurgeons and 238,250 (98%) to L1TCs with ≥3 neurosurgeons. Median injury severity score (ISS) was similar between both groups (17, P = .09). There were more Black (37% vs. 12%, P < .001) and Hispanic (18% vs. 12%, P < .001) patients in the <3 neurosurgeon group. Nearly 60% of L1TCs with <3 neurosurgeons are found in the South. Mortality was higher in the <3 vs. the ≥3 group (12% vs. 10%, P < .001). Patients treated in the <3 neurosurgeon group had a higher risk for mortality than those treated in the ≥3 neurosurgeon group (odds ratio (OR) 1.13, 95% confidence interval (CI) 1.01-1.26, P = .028). Discussion: There exists a significant racial disparity in access to neurosurgeon staffing with additional disparities in outcomes based on staffing. Future efforts are needed to improve this chasm of care that exists for trauma patients of color. abstract_id: PUBMED:37638408 Association Between Emergency Medical Service Agency Volume and Mortality in Trauma Patients. Objective: The aim of this study was to evaluate the association of annual trauma patient volume with outcomes for emergency medical services (EMS) agencies. Background: Regionalization of trauma care saves lives. The underlying concept driving this is a volume-outcome relationship. EMS are the entry point to the trauma system, yet it is unknown if a volume-outcome relationship exists for EMS. Methods: A retrospective analysis of a prospective cohort including 8 trauma centers and 20 EMS air medical and metropolitan ground transport agencies. Patients 18 to 90 years old with injury severity scores ≥9 transported from the scene were included. Patient and agency-level risk-adjusted regression determined the association between EMS agency trauma patient volume and early mortality. Results: A total of 33,511 patients were included with a median EMS agency volume of 374 patients annually (interquartile range: 90-580). Each 50-patient increase in EMS agency volume was associated with 5% decreased odds of 6-hour mortality (adjusted odds ratio = 0.95; 95% CI: 0.92-0.99, P = 0.03) and 3% decreased odds of 24-hour mortality (adjusted odds ratio = 0.97; 95% CI: 0.95-0.99, P = 0.04). Prespecified subgroup analysis showed EMS agency volume was associated with reduced odds of mortality for patients with prehospital shock, requiring prehospital airway placement, undergoing air medical transport, and those with traumatic brain injury. Agency-level analysis demonstrated that high-volume (>374 patients/year) EMS agencies had a significantly lower risk-standardized 6-hour mortality rate than low-volume (<374 patients/year) EMS agencies (1.9% vs 4.8%, P < 0.01). Conclusions: A higher volume of trauma patients transported at the EMS agency level is associated with lower early mortality. Further investigation of this volume-outcome relationship is necessary to leverage quality improvement, benchmarking, and educational initiatives. abstract_id: PUBMED:36527498 Comorbidities, injury severity and complications predict mortality in thoracic trauma.
Purpose: Thoracic trauma accounts for 25-50% of posttraumatic mortality. Data on the epidemiology of thoracic trauma in Scandinavia and risk factors for mortality are scarce. This study aims to provide an overview of epidemiology, clinical events and risk factors for mortality of patients with severe thoracic injuries. Methods: A retrospective study including adult thoracic trauma patients with an abbreviated injury scale ≥ 3, treated between 2009 and 2018 at Haukeland University Hospital, was performed. Subgroup analyses were performed for specific patient groups: (1) isolated thoracic trauma, (2) polytrauma without Traumatic Brain Injury (TBI) and (3) polytrauma with TBI. Logistic regression analyses were applied to find risk factors for 30-day mortality. Age, sex, comorbidity polypharmacy score (CPS), trauma and injury severity score (TRISS) and comprehensive complication index (CI) were included in the final model. Results: Data from 514 patients were analyzed, of which 60 (12%) patients died. Median (IQR) injury severity score (ISS) was 17 (13-27). Data from 463 patients, of which 39 patients died (8%), were included in multivariate analyses. Female sex (odds ratio (OR) 2.7, p = 0.04), CPS > 9 (OR 4.8; p = 0.01), TRISS ≤ 50% (OR 44; p < 0.001) and CI ≥ 30 (OR 12.5, p < 0.001) were significant risk factors for mortality. Subgroup analyses did not demonstrate other risk factors. Conclusion: Comorbidities and associated pharmacotherapies, TRISS, female sex, and complications during admission predict in-hospital mortality after thoracic trauma. Current findings might help to recognize patients at risk of an adverse outcome, and thereby prevent complications. Trial Registration: RETROSPECTIVELY REGISTERED: The regional committees for medical and health research ethics file number is 2017/293. Answer: The question of whether racial/ethnic disparities exist in pediatric traumatic brain injury (TBI) in terms of brain injury severity, mortality, or medical disposition has been explored in several studies. According to the data analyzed from a comprehensive trauma database at a large independent children's hospital, no differences in brain injury severity, mortality, or medical disposition were observed when comparing non-Hispanic White children with minority children, even after controlling for age, overall injury severity, and insurance status (PUBMED:16317806). This suggests that in the context of this study, evaluation and treatment of pediatric TBI appeared to be free of disparities related to race/ethnicity. However, it is important to note that other studies have identified disparities in trauma care and outcomes for different ethnic groups in the broader context of TBI. For instance, ethnic minority patients were found to be less likely to be placed in rehabilitation after TBI compared to non-Hispanic White patients, even after accounting for insurance status and injury severity (PUBMED:17993963). Additionally, Hispanics were less likely to be discharged to posthospitalization care (PHC) compared to non-Hispanic Whites, and insurance status also played a role in the likelihood of being discharged to PHC (PUBMED:24590153). In a Hispanic-majority population, Hispanics were discharged to rehabilitation and long-term acute care/nursing facilities less often than non-Hispanics, even after adjusting for injury severity and insurance status (PUBMED:28688653).
These findings indicate that while one study did not find disparities in the immediate medical disposition of pediatric TBI patients (PUBMED:16317806), other studies have documented disparities in access to rehabilitation and posthospitalization care for TBI patients, which could potentially impact long-term outcomes (PUBMED:17993963, PUBMED:24590153, PUBMED:28688653). It is possible that disparities may become more apparent in the post-acute care phase rather than the initial treatment phase. Therefore, while the specific study on pediatric TBI did not find disparities in the emergency department setting, disparities in the broader TBI patient population and in post-acute care settings do exist and warrant further investigation and intervention to ensure equitable care for all patients.
Instruction: Does degree of obesity influence the validity of reported energy and protein intake? Abstracts: abstract_id: PUBMED:10369492 Does degree of obesity influence the validity of reported energy and protein intake? Results from the SOS Dietary Questionnaire. Swedish Obese Subjects. Objective: To test the validity of a dietary questionnaire which was developed with the particular goal of measuring dietary intake in obese subjects. Design: Reported energy intake was compared with 24 h energy expenditure measured in a chamber for indirect calorimetry (24 EE) and reported nitrogen intake with nitrogen in urine collected during the 24 h in the chamber. Subjects: Twenty-nine overweight men and women, body mass index (BMI) ranging from 25.5 to 49.5 kg/m2. Results: Reported energy intake correlated significantly with 24 EE (r = 0.50, P = 0.006) and reported nitrogen intake correlated significantly with urinary nitrogen excretion (r=0.56, P=0.0015). Mean reported energy intake +/- s.d. was 10.2 +/- 3.6 MJ and mean 24 EE +/- s.d. was 10.3 +/- 1.9 MJ. Although this difference was small and nonsignificant, it indicates some underreporting if one can assume that these overweight subjects are less physically active in the chamber than in free-living conditions. Reported nitrogen intake also suggested underreporting at the group level. However, when the data were analysed at the individual level it was clear that the underreporting errors did not increase with increasing degree of obesity. Conclusions: Previous studies with the SOS dietary questionnaire have demonstrated that it is possible to obtain plausible energy intakes from both obese and nonobese subjects. The present analysis further demonstrates that the questionnaire discriminates overweight subjects with high and low intakes of energy and protein, using unbiased biomarkers to judge validity. These data provide additional support for the usefulness of the SOS dietary questionnaire. abstract_id: PUBMED:9094910 Validity of reported energy intake in preadolescent girls. Energy intake and energy expenditure were assessed in 109 girls aged 8-12 y. Intake was estimated from a 7-d dietary record based on household measures. Expenditure was measured with the doubly labeled water technique during a 2-wk period. Overall, the mean (+/-SD) energy intake was 7.0 +/- 1.67 MJ/d and the mean energy expenditure was 8.03 +/- 1.28 MJ/d. The mean difference between intake and expenditure was 1.03 +/- 1.77 MJ/d (P < 0.0001). The mean proportion of actual intake reported was 88.3 +/- 21.0%. Multivariate regression analysis showed that age and total daily energy expenditure were significantly and independently related to the reporting error. Coefficients for age and total daily energy expenditure were both positive, indicating that as age and daily energy expenditure increased, the magnitude of the error of reporting increased. Income, ethnicity, parental obesity, and body fat were not significantly related to accuracy of reporting. The use of food records to determine energy intake appears to provide more accurate results in younger than in older girls, and the accuracy of the method apparently decreases as energy expenditure increases. abstract_id: PUBMED:11348556 The validity of self-reported energy intake as determined using the doubly labelled water technique.
In the 1980s the development of the doubly labelled water (DLW) technique made it possible to determine the validity of dietary assessment methods using external, independent markers of intake in free-living populations. Since then, the accuracy of self-reported energy intake (EI) has been questioned on a number of occasions as under-reporting has been found to be prevalent in many different populations. This paper is a review of investigations using the DLW technique in conjunction with self-reported EI measures in groups including adults, children and adolescents, obese persons, athletes, military personnel and trekking explorers. In studies where a person other than the subject is responsible for recording dietary intake, such as parents of young children, EI generally corresponds to DLW-determined energy expenditure. However, in instances where the subjects themselves report their intake, EI is generally under-reported when compared with energy expenditure. It was originally believed that this phenomenon of under-reporting was linked to increased adiposity and body size; however, it is now apparent that other factors, such as dietary restraint and socio-economic status, are also involved. This paper therefore aims to present a more comprehensive picture of under-reporting by tying in the findings of many DLW studies with other studies focusing particularly on the characteristics and mechanisms for under-reporting. Awareness of these characteristics and mechanisms will enable researchers to obtain more accurate self-reports of EI using all dietary recording techniques. abstract_id: PUBMED:2393004 Validity of reported energy intake in obese and nonobese adolescents. Daily metabolizable energy intake (ME) and total daily energy expenditure (TEE) were measured in 28 nonobese and 27 obese adolescents over a 2-wk period. Reported ME was significantly (p less than 0.001) lower than measured TEE in both the nonobese and the obese groups (2193 +/- 618 vs 2755 +/- 600 kcal/d and 1935 +/- 722 vs 3390 +/- 612 kcal/d, respectively). Reported ME as a percentage of TEE was significantly lower in the obese than the nonobese group (58.7 +/- 23.6% vs 80.6 +/- 18.7%, respectively). When reported ME was adjusted to account for changes in body energy stores, reported ME still remained significantly lower than TEE in both groups. ME was highly reproducible over the 2-wk period. Intraclass correlation coefficients among days for subjects with complete 14-d diaries were 0.87 and 0.89 for nonobese and obese groups, respectively. In both groups, interindividual variability in ME was significantly greater than intraindividual variability. Our data suggest that reported ME in nonobese and obese adolescents is not representative of TEE or energy requirements. abstract_id: PUBMED:7869932 Limitations in the assessment of dietary energy intake by self-report. Development of the doubly-labeled water method has made it possible to test the validity of dietary intake instruments for the measurement of energy intake. Comparisons of energy intake from either weighed or estimated dietary records against measured energy expenditure have indicated that obese subjects, female endurance athletes, and adolescents underestimate habitual and actual energy intake. Individual underestimates of 50% are not uncommon. Even in non-obese adults, where bias is minimal, the standard deviation for individual errors in energy intake approaches 20%.
Two investigations of the validity of self-reported dietary records for measuring change in dietary intake also indicate large underestimates of the actual change. Because of bias and imprecision, self-reported energy intakes should be interpreted with caution unless independent methods of assessing their validity are included in the experimental design. abstract_id: PUBMED:24588967 Protein leverage and energy intake. Increased energy intakes are contributing to overweight and obesity. Growing evidence supports the role of protein appetite in driving excess intake when dietary protein is diluted (the protein leverage hypothesis). Understanding the interactions between dietary macronutrient balance and nutrient-specific appetite systems will be required for designing dietary interventions that work with, rather than against, basic regulatory physiology. Data were collected from 38 published experimental trials measuring ad libitum intake in subjects confined to menus differing in macronutrient composition. Collectively, these trials encompassed considerable variation in percent protein (spanning 8-54% of total energy), carbohydrate (1.6-72%) and fat (11-66%). The data provide an opportunity to describe the individual and interactive effects of dietary protein, carbohydrate and fat on the control of total energy intake. Percent dietary protein was negatively associated with total energy intake (F = 6.9, P < 0.0001) irrespective of whether carbohydrate (F = 0, P = 0.7) or fat (F = 0, P = 0.5) were the diluents of protein. The analysis strongly supports a role for protein leverage in lean, overweight and obese humans. A better appreciation of the targets and regulatory priorities for protein, carbohydrate and fat intake will inform the design of effective and health-promoting weight loss diets, food labelling policies, food production systems and regulatory frameworks. abstract_id: PUBMED:34055853 Low Protein Diets and Energy Balance: Mechanisms of Action on Energy Intake and Expenditure. Low protein diets are associated with increased lifespan and improved cardiometabolic health primarily in rodents, and likely improve human health. There is strong evidence that moderate to severe reduction in dietary protein content markedly influences caloric intake and energy expenditure, which is often followed by a decrease in body weight and adiposity in animal models. While the neuroendocrine signals that trigger hyperphagic responses to protein restriction are better understood, there is accumulating evidence that increased sympathetic flux to brown adipose tissue, fibroblast growth factor-21 and serotonergic signaling are important for the thermogenic effects of low protein diets. This mini-review specifically focuses on the effect of low protein diets with variable carbohydrate and lipid content on energy intake and expenditure, and the underlying mechanisms of action of these diets. Understanding the mechanisms by which protein restriction influences energy balance may unveil novel approaches for treating metabolic disorders in humans and improve production efficiency in domestic animals. abstract_id: PUBMED:16222394 Factors associated with low reported energy intake in the elderly. Background: Underreporting of dietary intake can be estimated by the Energy Ratio (ER) between reported energy intake (EI) and calculated total energy expenditure (TEE). The gap between EI and TEE is usually attributed to underreporting.
In elderly populations, compromised dietary intake and health status may offer alternative explanations to this gap. Objective: This study aimed to characterize "underreporting" of dietary intake and low energy reporters (LER) among the elderly. Design: Participants aged 65 years and over were recruited using random population sampling of the Negev population. Data were collected using the 24-h recall method with additional demographic and age-specific health questionnaires. ER was calculated using the ratio between reported energy intake and calculated TEE computed by the Schofield formula. LER were defined as those with a reported energy intake of less than 0.8 times the calculated BMR using the Schofield formula. Results: We restricted our analysis to 191 elderly aged 65-74 y and 177 aged 75 y and older who reported their diet to be "as usual." In univariate analyses, BMI < 22, better health status, use of fewer than four medications and good reported appetite were significantly related to higher ER. No difference was shown in ER by gender, level of education, and family status. Weight loss greater than 5 kg was associated with low ER (p = 0.049). In a linear regression model, low ER (indicating "underreporting") was significantly associated with higher activity level and use of over four medications. Using a dichotomous approach, Low Energy Reporters (LER) used a higher number of medications, ate fewer food items per day and suffered from poorer appetite. Conclusions: In the elderly, ER < 1 may indicate underreporting in dietary intake, and indeed, is frequent among the obese. Nonetheless, among the elderly, ER < 1 may reflect truly low caloric intake. Our results suggest that, to some degree, health status variables are associated with decreased ER, supporting a true caloric deprivation state. abstract_id: PUBMED:18448177 Protein intake and energy balance. Maintaining energy balance in the context of body-weight regulation requires a multifactorial approach. Recent findings suggest that an elevated protein intake plays a key role herein, through (i) increased satiety related to increased diet-induced thermogenesis, (ii) its effect on thermogenesis, (iii) body composition, and (iv) decreased energy-efficiency, all of which are related to protein metabolism. Supported by these mechanisms, relatively larger weight loss and subsequent stronger body-weight maintenance have been observed. Elevated thermogenesis and GLP-1 appear to play a role in high-protein-induced satiety. Moreover, a negative fat-balance and positive protein-balance are shown in the short-term, whereby fat-oxidation is increased. Furthermore, a high protein diet shows a reduced energy efficiency related to the body-composition of the body-weight regained, i.e. in favor of fat-free mass. Since protein intake is studied under various energy balances, absolute and relative protein intake need to be distinguished. In absolute grams, a normal protein diet becomes a relatively high protein diet in negative energy balance and at weight maintenance. Therefore 'high protein negative energy balance diets' aim to keep the grams of proteins ingested at the same level as consumed at energy balance, despite lower energy intakes. abstract_id: PUBMED:1877506 Energy requirements of a postobese man reporting a low energy intake at weight maintenance. Three experiments were performed to test the validity of the low reported energy intake at weight maintenance in a postobese man.
In the first experiment the subject reported a mean energy intake of 8008 kJ/d during 16 d and he maintained a stable body weight. This finding was not reproduced in the second experiment, which consisted of a 6-d inpatient study during which the subject was confined to a whole-body calorimeter for 5 d. Indeed, he lost weight when fed a controlled energy intake of 7950 kJ/d. Moreover, this experiment showed that direct and indirect calorimetry provided comparable energy-expenditure measurements during this period. Finally, when the subject was refed a controlled energy intake of 7950 kJ/d for 21 d, body weight and fat losses were observed. Therefore, these observations do not support the validity of the low energy intake that may be reported by people predisposed to obesity. Answer: The degree of obesity does not necessarily influence the validity of reported energy and protein intake in a straightforward manner. Studies have shown mixed results regarding the relationship between obesity and the accuracy of self-reported dietary intake. One study found that the degree of obesity did not increase the underreporting errors of energy and protein intake. In this study, reported energy intake correlated significantly with measured energy expenditure, and reported nitrogen intake correlated significantly with urinary nitrogen excretion among overweight subjects. The SOS dietary questionnaire used in the study was able to discriminate overweight subjects with high and low intakes of energy and protein, suggesting that the questionnaire is useful for both obese and nonobese subjects (PUBMED:10369492). However, other studies have indicated that underreporting of energy intake is prevalent in many different populations, including obese individuals. For example, in obese and nonobese adolescents, reported metabolizable energy intake was significantly lower than measured total daily energy expenditure, with obese adolescents reporting a lower percentage of their total energy expenditure compared to nonobese adolescents (PUBMED:2393004). Similarly, obese subjects, female endurance athletes, and adolescents have been found to underestimate habitual and actual energy intake, with individual underestimates of up to 50% (PUBMED:7869932). It has been suggested that factors other than obesity, such as dietary restraint and socio-economic status, may also play a role in underreporting (PUBMED:11348556). Additionally, the accuracy of self-reported energy intake may decrease as energy expenditure increases (PUBMED:9094910). In summary, while obesity may be associated with underreporting of dietary intake, it is not the sole factor influencing the validity of self-reported energy and protein intake. Other factors, including the methods used to collect dietary information and individual characteristics, also affect the accuracy of dietary reports.
Instruction: Completion pneumonectomy for lung metastases: is it justified? Abstracts: abstract_id: PUBMED:9458137 Completion pneumonectomy for lung metastases: is it justified? Objective: To evaluate the postoperative outcome and long-term results of patients who underwent iterative and extended pulmonary resection leading to completion pneumonectomy for pulmonary metastases. Methods: From January 1985 to December 1995, 12 patients (mean age 45 years) underwent completion pneumonectomy for pulmonary metastases. These patients represent 1.5% of all pulmonary metastases operated on. There were 5 sarcoma and 7 carcinoma patients. Before completion pneumonectomy, 8 patients had only one pulmonary resection (wedge resection, 2; segmentectomy, 2; lobectomy, 4), 3 patients had two operations and finally, 1 patient had multiple bilateral wedge resections and 1 lobectomy. The median interval between the last pulmonary resection and completion pneumonectomy was 13.5 months (range 1-24 months). Results: There were 10 left and two right completion pneumonectomies. Three patients had an extended resection (1 carina; 1 chest wall; 1 pleuropneumonectomy). Intrapericardial dissection was used in 3 patients. Two patients died within 30 days of the operation: 1 died of postoperative complications (8.3%) whereas the other died of rapidly evolving metastatic disease. The remaining 10 patients had an uneventful postoperative course. Only 1 patient is still alive and free of disease 69 months after completion pneumonectomy. One patient is alive with disease, another was lost to follow-up; 9 patients died of metastatic disease. The median survival time after completion pneumonectomy was 6 months (range 0-69 months). The estimated 5-year probability of survival was 10% (95% CI: 2-40%). Conclusions: Indications for both iterative and extended pulmonary resection for PM may be discussed only in highly selected young patients; the extremely poor outcome of our subgroup of patients should lead to even more restrictive indications for CP for pulmonary metastatic disease. abstract_id: PUBMED:7540330 Completion pneumonectomy: experience with 47 cases. From 1982 through March 1994, forty-seven patients underwent completion pneumonectomy for a reappearing lung cancer, lung metastases, late complications, benign lung diseases, and early complications of bronchial or pulmonary artery sleeve resections. Intraoperative bleeding was higher than in standard pneumonectomy; there was one intraoperative mortality (2.3%). Operative mortality was 14.9% overall but was 3.6% in completion pneumonectomy for lung cancer, 20% for late complications and benign disease, and 57% for the treatment of early complications of sleeve resections. Three- and five-year survival in patients with lung cancer who survived the operation was 43.8% and 28.7% according to the Kaplan-Meier method; no significant difference in long-term survival was present between patients with a second primary lung cancer or recurrence. Completion pneumonectomy is indicated in reappearing lung cancer and should be considered in benign disease when a less invasive procedure is not available. Completion pneumonectomy for the treatment of early complications of bronchial or pulmonary artery sleeve resections has a very high mortality but no alternative is available. abstract_id: PUBMED:15522560 Completion pneumonectomy for lung metastases. Objective: Completion pneumonectomy (CP) for malignant disease is generally accepted but controversial for lung metastases.
The data available show a high perioperative morbidity and mortality with a poor long-term prognosis. We analysed the postoperative outcome and long-term results of our patients undergoing CP. Patients And Methods: Between January 1986 and May 2003, nine patients underwent completion pneumonectomy for lung metastases. This represents 10% (9/86) of all CPs performed and 1.7% (9/525) of all pneumonectomies. Results: One to three metastasectomies in the form of wedge resections (16), segment resections (5) and lobectomies (3) were performed prior to CP. The mean time interval between the operation for the primary tumour and the first metastasectomy was 38 months; between the first and second metastasectomies, 12 months; between the second and third, 14 months; and between the third metastasectomy and CP, 25 months. Six patients had an extended completion pneumonectomy. Operative morbidity and mortality were 0%. One patient is still alive and recurrence-free 9 months after CP. Two patients have recurrent pulmonary contralateral metastases under chemotherapy and six patients died of metastatic disease. Actual survival is 33% and recurrence-free survival (RFS) is 11%. The 3-year survival is 34%. Conclusion: Since there was no morbidity and mortality in our series, CP for lung metastases seems to be justified but the long-term survival is limited by the occurrence of contralateral or extrapulmonary metastatic disease. Multiple resections of metastases have a positive influence on survival, but the last step of resection in the form of CP does not seem to improve long-term survival. abstract_id: PUBMED:15019677 Completion pneumonectomy in cancer patients: experience with 55 cases. Objective: Analysis of a single institution experience with completion pneumonectomy. Methods: From 1989 to 2002, 55 consecutive cancer patients received completion pneumonectomy (mean age 62 years; range 25-79). Indications were bronchogenic carcinoma in 38 patients (4 first cancers, 8 recurrent cancers, 26 second cancers), lung metastases in three (one each from breast cancer, colorectal neoplasm and lung cancer), lung sarcoma in one, and miscellaneous non-malignant conditions in 13 patients who had previously been surgically treated for non-small cell lung cancer (bronchopleural fistula in 4, radionecrosis in 3, aspergilloma in 2, pachypleura in 1, massive hemoptysis in 1 and pneumonia in 2). Before completion pneumonectomy, 50 patients had had a lobectomy, three a bilobectomy, and two lesser resections. The mean interval between the two procedures was 51 months for the whole group (1-469), 60 months for lung cancer (12-469), 43 months for pulmonary metastases (21-59) and 29 months for non-malignant disorders (1-126). Results: There were 35 right (64%) and 20 left (36%) resections. The surgical approaches were a posterolateral thoracotomy in 50 cases (91%) and a lateral thoracotomy in five cases (9%). An intrapericardial route was used in 49 patients (89%). Five patients had an extended resection (2 chest wall, 1 diaphragm, 1 subclavian artery and 1 superior vena cava). Operative mortality was 16.4% (n=9): 11.9% for malignant disease (n=5) and 30.8% for benign disease (n=4). Operative mortality was 20% for right completion pneumonectomies (n=7) and 10% for left-sided procedures (n=2). Twenty-three patients (42%) experienced non-fatal major complications. Actuarial 3- and 5-year survival rates from the time of completion pneumonectomy were 48.4 and 35.2% for the entire group.
Three- and five-year survival rates for patients with bronchogenic carcinoma were 56.9 and 43.4%, respectively. Conclusions: These results suggest that completion pneumonectomy in the setting of lung malignancies can be done with an operative risk similar to the one reported for standard pneumonectomy. In contrast, in cancer patients, completion pneumonectomy for inflammatory disorders is a very high-risk procedure. abstract_id: PUBMED:9438766 Completion total thyroidectomy in children with thyroid cancer secondary to the Chernobyl accident. Objective: To evaluate the usefulness of submitting children with thyroid cancer secondary to nuclear accidents to a completion total thyroidectomy. Design: A case series consisting of patients living and operated on in Belarus whose parents had asked for a clinical evaluation in a western European center. Setting: A tertiary care referral center. Patients: The conditions of 47 children from Gomel, Belarus, with differentiated thyroid carcinoma following the nuclear accident at Chernobyl, Ukraine, were evaluated at the University of Pisa, Pisa, Italy. In approximately half of the cases, the treatment in Belarus consisted of a hemithyroidectomy. After a complete evaluation, the decision was made to reoperate on 19 of them by performing a completion total thyroidectomy. The preoperative evaluation revealed that 5 (26%) of the 19 patients who had undergone a hemithyroidectomy had unilateral recurrent nerve palsy and that 2 (10.5%) had hypoparathyroidism. Interventions: Neck ultrasonography was used for the preoperative localization of thyroid residuals, thyroid nodules, and suspicious lymph nodes, and for obtaining a guided fine-needle aspiration biopsy specimen. The circulating thyroglobulin measurement was obtained before reoperation. An iodine 131 whole-body scan (WBS) was performed and circulating thyroglobulin levels were obtained after completion of the thyroidectomy during withdrawal of levothyroxine sodium therapy. Main Outcome Measure: The number of patients with a recurrence of thyroid cancer and lung or lymph node metastases after the completion total thyroidectomy. Results: The results of the histologic examination were positive for papillary thyroid cancer in 6 (28.6%) of 21 patients, 3 with residual cancer in the remaining thyroid lobe and 3 with metastatic lymph node disease. A posttherapy WBS demonstrated lung metastases in 5 (28%) of 18 patients and lymph node metastases in 6 (33%) of 18 patients; the results of a posttherapy WBS were negative for metastases in 7 (39%) of 18 patients. Hypoparathyroidism developed in 4 (21%) of 19 patients who underwent a completion total thyroidectomy; unilateral laryngeal nerve palsy developed in 1 (5.2%) of these 19 patients. Among 22 children who previously underwent total thyroidectomy in Belarus, a diagnostic WBS showed lung metastases in 10 (45%) of the children and lymph node metastases alone in 3 (14%) of the children; the results of a diagnostic WBS were negative for metastases in 9 (41%) of the children. Statistical analysis showed a nonsignificant (P>.05) difference in the prevalence of lung and lymph node metastases in patients who previously underwent total thyroidectomy compared with patients who underwent completion total thyroidectomy. Conclusion: Completion total thyroidectomy allowed for the diagnosis and treatment of recurrent thyroid cancer and lung or lymph node metastases in 61% (11/18) of the patients in whom residual differentiated thyroid carcinoma was not previously recognized.
abstract_id: PUBMED:23046489 Completion pneumonectomy for lung cancer treatment: early and long-term outcomes. Background: To analyze the experience of completion pneumonectomy for lung cancer at a single institution in China. Methods: From January 1988 to December 2007, 92 patients underwent completion pneumonectomy for the treatment of lung cancer. The indications were second primary lung cancer (n=51), local metastasis (n=37) and lung metastasis (n=4). The median interval between the primary operation and CP was 24.4 months (range 1.5-145 months). Results: There were no intraoperative deaths. The CP procedure lasted 4.3 h (range 1.5-8 h). Blood loss during CP was 1854.5 ml (range 200-9100 ml). Nine (9.78%) patients died in the postoperative period: pulmonary embolism (n = 2), disseminated intravascular coagulation (DIC) after multisystem failure (n = 1), respiratory failure after contralateral pneumonia (n = 5), and bronchopleural fistula (BPF) with acute respiratory distress syndrome (ARDS) (n = 1). Thirty-one (33.7%) patients had at least one major nonfatal complication. The 1-, 3- and 5-year survival rates were 81%, 26% and 14%, respectively. Conclusions: Completion pneumonectomy for lung cancer is a safe surgical procedure for the skilled surgeon; though it has a relatively high complication rate, the long-term survival is acceptable. abstract_id: PUBMED:16404359 Residual lung lesions after completion of chemotherapy for gestational trophoblastic neoplasia: should we operate? The significance of residual lung metastasis from malignant gestational trophoblastic neoplasm (GTN) after the completion of chemotherapy is unknown. We currently do not advocate resection of these masses. Here, we investigate the outcome of these patients. Patients with residual lung abnormalities after the completion of treatment for GTN were compared to those who had a complete radiological resolution of the disease. None of the residual masses post-treatment were surgically removed. In all, 76 patients were identified. Overall, 53 (70%) patients had no radiological abnormality on CXR or CT after completion of treatment. Eight (11%) patients had residual disease on CXR alone; 15 (19%) patients had residual disease on CT. During follow-up, two patients (2.6%) relapsed. One of these had had a complete radiological response post-treatment whereas the other had residual disease on CT. Patients with residual lung lesions after completing treatment for GTN do not appear to have an increased chance of relapse compared to those with no residual abnormality. We continue to recommend that these patients do not require pulmonary surgery for these lesions. abstract_id: PUBMED:33550322 Completion Pneumonectomy for the Recurrence of Lung Cancer Introduction: The morbidity and mortality after completion pneumonectomy (CP) are reportedly high. We, herein, report the outcomes of CP at our institution. Subjects: Nine consecutive patients [7 men and 2 women, average age of 72 years (range 44-84 years)] who underwent CP for recurrence of lung cancer during 2012-2018 were retrospectively reviewed. Results: Right-sided sleeve CP was performed in two cases and left-sided CP in seven cases. The indications for surgery were lymph node metastasis of the cancer, pulmonary metastasis, and bronchial stump recurrence in 4, 3, and 2 cases, respectively. Postoperative complications occurred in six patients. One of the patients who underwent right sleeve pneumonectomy developed bronchopleural fistula and died 68 days after the surgery.
The mean follow-up period was 33 months, and four patients died during follow-up. Of the 5 patients still alive, 4 had no recurrence and 1 had recurrence in the stump of the main bronchus. The five-year overall survival rate was 78%. Conclusions: Although only a few cases were assessed, the prognosis after CP at our institution was relatively good. abstract_id: PUBMED:2329307 Completion pneumonectomy--a review of 29 cases. From 1962 through 1988, a total of 29 consecutive patients had completion pneumonectomy (CP). Indications for initial pulmonary resection were primary lung cancer in 27 patients, metastatic lung tumor in 1, and mediastinal tumor with pulmonary invasion in 1. Indications for CP were lung cancer (including local recurrence, pulmonary metastasis from the first lung cancer, and second primary lung cancer) in 21 patients, complications after initial operations in 7, and pulmonary arterial injury during a second operation in 1. Severe adhesion of the residual lung and the hilar structures made operative procedures extremely difficult. Injury of pulmonary arteries occurred in 6 patients. Especially in cases in which the left upper lobe had been resected previously, deviation of the lower lobe and hilar adhesion led to operative difficulty. Post-CP bronchial fistula occurred more frequently when the bronchus had been dissected at a more peripheral level than the main bronchus because of severe hilar adhesions. Operative mortality was 13.8% (9.5% for second lung cancer, 28.5% for postoperative complications). Five-year survival for patients with lung cancer was 32.9% according to the Kaplan-Meier method. We conclude that the indications for CP are clinically resectable lung cancer and bronchial stenosis with residual pulmonary organic changes following a bronchoplastic procedure. Postoperative bronchial fistulae should be managed by another operative procedure. abstract_id: PUBMED:17331417 Outcome of gestational trophoblastic neoplasia patients with residual lung tumor after completion of treatment. Objective: To analyze retrospectively the prognosis of gestational trophoblastic neoplasia (GTN) patients who achieved normal human chorionic gonadotropin-beta subunit (beta-hCG) titer after completing treatment but remained with residual lung tumor. Methods: A total of 1130 GTN patients were hospitalized at Peking Union Medical College Hospital from Jan 1985 to Jan 2004. Among these patients, 901 achieved complete remission (CR); 152 achieved normal blood beta-hCG titer after the completion of treatment but remained with residual lung tumor (defined as partial remission). Retrospective analyses were carried out on the 152 patients. Statistical analysis was used to compare the recurrence rate of the CR patients with the progression rate of the 152 patients. Results: The blood beta-hCG level of all the 152 patients returned to normal after they received 1-30 courses of standard multiple-drug combined chemotherapy treatment. Another 0-8 courses of chemotherapy were applied to these patients before they left the hospital. Seventeen of the 152 patients were lost to follow-up. Of the remaining 135 patients, who were followed up for 14 to 110 months, 83 showed no significant changes in terms of their residual tumors; the residual tumors in 46 patients diminished or disappeared; and the other 6 patients had progression of disease (PD), with beta-hCG levels rising 6-8 months after completing treatment (increased lung metastases were observed in four of the six patients).
There was no significant difference (P > 0.05) between the recurrence rate [3.4% (31/901)] of the 901 CR patients and the progression rate [3.9% (6/152)] of the 152 patients. There was also no significant difference (P > 0.05) between the recurrence rate [2.2% (10/463)] of the CR patients with lung metastasis and the progression rate of the 152 patients. Conclusions: After normalization of beta-hCG titer, patients whose lung tumors remained unchanged even after several additional courses of chemotherapy should be considered as CR patients. Follow-ups should be strictly carried out on these patients, especially at around 6 months after the completion of treatment, and particularly for high-risk and drug-resistant choriocarcinoma patients. Answer: The justification for completion pneumonectomy for lung metastases is a subject of debate, as evidenced by the varying outcomes and opinions presented in the abstracts. One study suggests that completion pneumonectomy for pulmonary metastases may only be discussed in highly selected young patients due to the extremely poor outcomes observed in their subgroup, indicating that indications for such procedures should be even more restrictive (PUBMED:9458137). Another study reports no operative morbidity and mortality, suggesting that completion pneumonectomy for lung metastases could be justified, although long-term survival is limited by the occurrence of contralateral or extrapulmonary metastatic disease (PUBMED:15522560). However, the same study notes that multiple resections of metastases have a positive influence on survival, but the last step of resection in the form of completion pneumonectomy does not seem to improve long-term survival. In contrast, another study reports an operative mortality of 16.4% and suggests that completion pneumonectomy in the setting of lung malignancies can be done with an operative risk similar to that reported for standard pneumonectomy, but it is a very high-risk procedure for inflammatory disorders in cancer patients (PUBMED:15019677). Similarly, another study reports a relatively high complication rate and acceptable long-term survival, indicating that completion pneumonectomy for lung cancer is a safe surgical procedure for skilled surgeons (PUBMED:23046489). A study on gestational trophoblastic neoplasia patients with residual lung tumor after treatment concludes that patients whose lung tumors remained unchanged even after several additional courses of chemotherapy should be considered as complete remission patients, suggesting that surgical intervention may not be necessary in such cases (PUBMED:17331417). In summary, the justification for completion pneumonectomy for lung metastases appears to be contingent on patient selection, the skill of the surgeon, and the balance between operative risks and potential long-term survival benefits. The decision to proceed with such surgery should be made on a case-by-case basis, considering the individual patient's condition and prognosis.
Instruction: Is the association between socioeconomic status and nonreplaced extracted teeth mediated by dental care behaviours in adults? Abstracts: abstract_id: PUBMED:26087774 Is the association between socioeconomic status and nonreplaced extracted teeth mediated by dental care behaviours in adults? Objective: To investigate whether the association between socioeconomic status (SES) and number of nonreplaced extracted teeth (NRET) is mediated by dental care behaviours (DCBs) in adults. Methods: A random sample of adults, who participated in the 2010 dental telephone survey (n = 1100) in Tehran, Iran, was investigated. The outcome was self-reported NRET due to dental caries or periodontal diseases. The main exposures were wealth index and education; and the potential mediators were the frequency of tooth brushing and flossing, and dental visiting in the previous year of the study. Multivariable regression analysis was applied to test mediation using the count ratios (CR) and 95% confidence intervals (CI). Results: The response rate was 73%. The mean age was 38.99 (SD=13.83), and 50.8% were female. The mean NRET was 1.28 (95% CI 1.14; 1.41). NRET were greater among participants who had less than 12 years of schooling [CR = 4.26 (95% CI 3.52; 5.52)] and those in the poorest quintile [CR = 1.89 (95% CI 1.36; 2.61)] compared with those in the most educated and wealthiest groups, respectively. People who brushed their teeth less than twice a day [CR = 1.38 (95% CI 1.17; 1.62)] and did not use dental floss daily [CR = 1.47 (95% CI 1.24; 1.75)] were more likely to present with NRET. After controlling for DCBs, a slight decrease in the CRs was observed for both the poorest (13.3%) and the less than 12 years of schooling (7.6%) groups compared to the reference groups. Conclusions: Although DCBs were associated with both SES and NRET, they could only partially account for the observed socioeconomic inequalities in NRET. abstract_id: PUBMED:28540102 Awareness and Knowledge of Undergraduate Dental Students about Sterilization/Disinfection Methods of Extracted Human Teeth. Background: Dental undergraduate students work on extracted human teeth in preclinical practicals to learn technical skills before entering the clinics and delivering dental care to the patients. Aim: The aim of the present investigation was to assess the awareness and knowledge of sterilization/disinfection methods of extracted human teeth in a selected group of Indian dental students. Materials And Methods: In this descriptive cross-sectional study, the participants consisted of 2nd-, 3rd-, 4th-, and 5th-year dental students. Data were collected by questionnaires and analyzed by Mann-Whitney U-test and Kruskal-Wallis test using SPSS software version 16 for Windows (SPSS Inc., Chicago, IL, USA). Results: In total, 235 dental students participated in the study. The average awareness and knowledge score was 7.27 (1.92). Based on the opinion of 57% (134/235) of students, hydrogen peroxide was selected as a suitable material for sterilization, and 24.6% (58/235) of students believed that autoclave sterilization is a good method for the purpose. Conclusion: The results of this investigation indicated that awareness and knowledge of undergraduate dental students in relation to sterilization/disinfection methods of extracted human teeth were good. However, deficiencies were observed in relation to teaching the materials and methods suitable for sterilization.
abstract_id: PUBMED:21029146 The association between low-socioeconomic status mother's Sense of Coherence and their child's utilization of dental care. Objectives: The objective of this study was to investigate the relationship between low-socioeconomic status mothers' Sense of Coherence (SOC) and their children's utilization of dental care services in a city of Southeast Brazil. Methods: A cross-sectional study was conducted on a sample of 190 schoolchildren aged 11-12 and their mothers in São João de Meriti-RJ, Southeast Brazil. The outcome variables were children's use of dental care services and visiting dentists mainly for check-ups. Demographic and socioeconomic characteristics and data regarding children's dental care use were collected through interviews with mothers. Children's oral health-related behaviours as well as dental status (DMFS index), dental pain, Visible Dental Plaque and Bleeding on Probing Index were registered. Mother's SOC was assessed through the validated short version (13-item) of Antonovsky's scale. Multiple logistic regression was used in the data analysis. Results: Of the mothers, 81.1% reported on their child's utilization of dental care services. Of them, 42.9% considered check-ups as the main reason for taking their children to dental services. Children whose mothers had higher levels of SOC were more likely to utilize dental care services (OR=2.08 95%CI=1.17-3.64) and visit a dentist mainly for check-ups (except for dental treatment) (OR=2.02 95%CI=1.06-3.81) than those whose mothers had lower levels of SOC. These findings were adjusted for socioeconomic status, children's oral health-related behaviours and oral health measures. Conclusions: Mother's SOC was a psychosocial factor associated with their child's pattern of use of dental care services in low-socioeconomic status families. abstract_id: PUBMED:26074207 Factors related to number of present teeth in Korean elderly adults aged 55-84 years. Objectives: The aim of this study was to determine the associations between the number of present teeth (NT) and socio-economic, demographic and oral health behavioural factors among Korean adults aged 55-84 years. Methods: The total subjects comprised 3767 individuals who were examined and who answered the questions on socio-economic status and oral health behaviour from the fourth Korean National Health and Nutrition Examination Survey conducted from 2007 to 2009. The dependent variable was NT, with binary status divided by the median. Socio-economic and demographic factors included gender, educational level, parent's educational levels, region of residence, household income, type of health insurance and mother's economic activity. Oral health behaviours were as follows: daily toothbrushing frequency, smoking status, recent dental visit and illegal dental treatment. Multivariate logistic regression models were applied to explain the associations between NT and other variables. Results: In a model adjusted for socio-economic, demographic and oral health behavioural variables, subjects who lived in urban areas were more likely to have larger NT compared to those in suburban areas (OR: 1.22, P = 0.025). Males were more likely to have larger NT (OR: 1.90, P < 0.001), and daily toothbrushing frequency was associated with NT (OR = 1.25, P = 0.023). Non-smokers (OR: 2.44, P < 0.001) and past smokers (OR: 1.70, P < 0.001) were more likely to have larger NT compared to current smokers.
Subjects without illegal dental treatments were more likely to have larger NT compared to those with illegal dental treatments (OR = 2.21, P < 0.001). Conclusions: Interventions aiming to preserve present teeth in elderly adults should consider socio-economic, demographic and oral health behavioural factors. abstract_id: PUBMED:35084131 Is the inability to afford dental care associated with untreated dental caries in adults? Objective: To assess whether inability to afford dental care is associated with the number of teeth with untreated dental caries and whether this association is independent of socioeconomic factors and ethnicity. Basic Research Design: Data were from the National Health and Nutrition Examination Survey (NHANES) (2015-2018), a cross-sectional national survey of non-institutionalised Americans. Clinical Setting: The survey included clinical assessment of tooth condition, data on sociodemographic factors, use of dental services, health insurance, number of teeth and affordability of dental care when needed. Participants: The analysis included 9,440 participants aged 18 years and over. Main Outcome Measures: The association between number of teeth with untreated caries and affordability of dental care was assessed adjusting for age, gender, ethnicity, income, education, dental visits, and health insurance. Results: The mean number of teeth with untreated caries was 0.51, and 13% reported inability to afford care. Mean numbers of teeth with untreated caries among those unable and able to afford dental care were 1.46 and 0.36, respectively. In the fully adjusted model, the rate ratio for teeth with caries among those who could not afford dental care was 2.45 (95% confidence interval [CI] 2.04, 2.95). Income and education inequalities were slightly attenuated after accounting for inability to afford care. Other statistically significant predictors included education and irregular dental visits. Conclusion: Inability to afford dental care may exacerbate inequalities in dental caries. The findings highlight the need for affordable access to dental services.
abstract_id: PUBMED:27878823 Owning the tooth: exploring the ethical and legal issues relating to the use of extracted human teeth in dental education in Australia. Extracted human teeth have been used to practice operative techniques for a very long time. As a natural surrogate for a live tooth in vivo, their use has traditionally been very important for the development of skills in trainee dentists, as well as their qualified colleagues who wish to practise existing or new skills. As synthetic alternatives develop greater authenticity, alongside a society in which many retain their natural dentition well into old age, the current paradigm relating to how extracted teeth in dental education are used needs to be revisited. An ethical and legal dilemma that must be addressed within dental education relates to where and how teeth may be sourced. This article will seek to question whether there is a legal or ethical requirement to gain consent for the use of extracted teeth from patients, as well as exploring the status of whether extracted dental tissue can be considered to be the property of either patient or surgeon. Whilst synthetic alternatives are being utilized more frequently in education, it is unlikely that they will completely replace extracted natural teeth in the immediate future. It is therefore imperative that their use complies with legal doctrine and contemporary ethical thought. abstract_id: PUBMED:35448991 Socioeconomic inequalities in the utilization of dental services among adults in Saudi Arabia. Background: This study used the Anderson Behavioral Model to assess the socioeconomic inequalities in dental services utilization among adults in Saudi Arabia, along with other predictors of utilization, to inform future planning of dental care services. Methods: This cross-sectional study was a secondary analysis using national data from the 2019 Kingdom of Saudi Arabia World Health Survey (KSAWHS). The survey consisted of two interviewer-administered questionnaires: one household and one individual interview. The questions covered predisposing factors (age, gender, marital status, nationality, education, employment), enabling factors (income, household wealth, area-based socioeconomic class, health insurance, eligibility for free governmental health care, transportation and region of residence) and self-reported need for dental treatment. The main outcome was dental utilization in the past year; predisposing, enabling and need factors were independent variables. Hierarchical logistic regression analyses identified significant predictors of dental utilization, applying survey weights to adjust for the complex survey design. Adjusted odds ratios with 95% confidence intervals and p values were reported in the final model. Results: The final dataset included 8535 adults (response rate = 95.4%). Twenty percent of adults had visited the dentist at least once in the past year (95% CI 18-21%). There were socioeconomic inequalities in dental utilization.
High household income (OR 1.43, p = 0.043), second and middle household wealth status (OR 1.51, p = 0.003 and OR 1.57, p = 0.006) and access to free governmental health care (OR 2.05, p = 0.004) were significant predictors in the final regression model, along with perceived need for dental treatment (OR 52.09, p < 0.001). Conclusion: Socioeconomic inequalities in the utilization of dental services exist in Saudi Arabia. The need for treatment was the strongest predictor, suggesting predominantly symptomatic attendance. Increasing awareness about the importance of preventive dental visits rather than symptomatic attendance could be an important policy implication to improve oral health and optimize dental care expenditure. Further research should explore the drivers for adults to seek preventive care in the absence of any recognized dental problems. abstract_id: PUBMED:28446178 Inequality in unmet dental care needs among South Korean adults. Background: The current public health research agenda was to identify the means to reduce oral health inequalities internationally. The objectives of this study were to provide evidence of inequality in unmet dental needs and to find influencing factors attributable to those among South Korean adults. Methods: Pooled cross-sectional data from the fourth Korean National Health and Nutrition Examination Survey (2007-2009) on 17,141 Korean adults were used. Demographic factors (sex, age, and marital status), socioeconomic factors (education level, employment status, and income level), need factors (normative dental needs and self-perceived oral health status), and oral health-related factors (the number of decayed teeth, the presence of periodontitis, and the number of missing teeth) were included. Multiple logistic regression analysis was performed. Results: Of South Korean adults, 43.9% had perceived unmet dental needs, with the most common reason being financial difficulties. The disparities in unmet dental care needs were strongly associated with income level, normative treatment needs, and self-perceived oral health status. The low-income group, people with normative dental treatment needs, and those with perceived poor oral health status were more likely to have unmet dental needs. There was considerable inequality in unmet dental care needs due to economic reasons according to such socioeconomic factors as income and education level. Conclusions: Public health policies with the expansion of dental insurance coverage are needed to reduce inequalities in unmet dental care needs and improve the accessibility of dental care services to vulnerable groups who are experiencing unmet dental care needs due to socioeconomic factors despite having normative and self-perceived needs for dental treatment. abstract_id: PUBMED:34726790 Access to dental care and blood pressure profiles in adults with high socioeconomic status. Background: Reduced access to dental care may increase cardiovascular risk; however, socioeconomic factors are believed to confound the associations. We hypothesized that the relation persists despite economic wellness and high education, with reduced access to dental care affecting cardiovascular risk at least in part through its effect on blood pressure (BP), possibly mediated by systemic inflammation. Methods: We first assessed the sociodemographic and clinical characteristics related to last dental visit timing (≤ or >6 months; self-reported) using nationally representative cross-sectional data.
Then, the association of last dental visit timing with clinic BP was selectively investigated in highly educated, high-income participants, further matched for residual demographic and clinical confounders using propensity score matching (PSM). The mediating effect of systemic inflammation was formally tested. Machine learning was implemented to investigate the added value of dental visits in predicting high BP over the variables included in the Framingham Hypertension Risk Score among individuals without an established diagnosis of hypertension. Results: Of 27,725 participants included in the population analysis, 46% had attended a dental visit ≤6 months previously. In the PSM cohort (n = 2350), last dental visit attendance >6 months was consistently associated with 2 mmHg higher systolic BP (P = 0.001) and with 23 to 35% higher odds of high/uncontrolled BP compared with attendance ≤6 months. Inflammation mildly mediated the association. Access to dental care improved the prediction of high BP by 2%. Conclusions: Dental care use impacts BP profiles independently of socioeconomic confounders, possibly through systemic inflammation. Regular dental visits may contribute to preventive medicine. Answer: Yes, the association between socioeconomic status (SES) and nonreplaced extracted teeth (NRET) is partially mediated by dental care behaviours (DCBs) in adults. A study conducted on a random sample of adults in Tehran, Iran, found that NRET were greater among participants with lower SES, such as those with less than 12 years of schooling and those in the poorest quintile. The study also observed that people who brushed their teeth less than twice a day and did not use dental floss daily were more likely to present with NRET. After controlling for DCBs, there was a slight decrease in the count ratios for both the poorest and the less educated groups, indicating that while DCBs are associated with both SES and NRET, they only partially account for the observed socioeconomic inequalities in NRET (PUBMED:26087774).
Instruction: Physical activity decreased by a quarter in the 11- to 12-year-old Swedish boys between 2000 and 2013 but was stable in girls: a smartphone effect? Abstracts: abstract_id: PUBMED:25892659 Physical activity decreased by a quarter in the 11- to 12-year-old Swedish boys between 2000 and 2013 but was stable in girls: a smartphone effect? Aim: This study explored physical activity, body mass index (BMI) and overweight and obesity from 2000 to 2013 using a convenience sample of second- and fifth-grade Swedish schoolchildren aged 8-9 years and 11-12 years, respectively. Methods: We examined cross-sectional cohorts of 126 second-grade children in 2000, 84 in 2006 and 44 in 2013, and 105 fifth-grade children in 2000 and 38 in 2013. No fifth graders were available in 2006. Physical activity data were collected based on pedometer readings over four consecutive weekdays, and height and weight were measured. Identical instruments and procedures were used in all three years. Results: There was an increase in physical activity in second-grade girls from 2000 to 2006 (p < 0.01), which then stabilised between 2006 and 2013, but second-grade boys and fifth-grade girls were stable throughout the study period. Fifth-grade boys decreased significantly (24%) from 16 670 to 12 704 steps per day (p < 0.01) from 2000 to 2013. Mean BMI scores remained stable over time. Conclusion: Time trends in physical activity differed between boys and girls. Second-grade boys and fifth-grade girls were stable throughout, whereas second-grade girls increased from 2000 to 2006 before stabilising. Fifth-grade boys showed a significant 24% reduction from 2000 to 2013. Changes in recess and leisure time habits, such as smartphone use, may have influenced the result. abstract_id: PUBMED:22942996 Physical fitness, physical activity and sedentary activities of 7 to 11 year old boys with different body mass indexes. Purpose: The purpose of this study was to examine the differences in physical fitness, physical activity, and sedentary activities among 7 to 11 year old boys with varying body mass index in Ardabil, Iran. Methods: The sample consisted of 766 boys who participated in eight weight-bearing (1 mile walk/run, sit ups, standing long jump, vertical jump, shuttle run 4 × 10 m, 30-meter sprint, pull ups, pushups) and four non-weight-bearing physical fitness tests (flamingo balance, handgrip, sit and reach, and finger reaction time). Physical activity and sedentary activities were assessed by questionnaires. Body mass index (BMI) was computed to classify participants into normal weight, underweight, overweight and obese groups. Results: The prevalence of underweight, overweight and obesity was 10.7%, 14.1% and 4.2%, respectively. Cardiorespiratory fitness (VO2max) decreased progressively as the BMI increased, and differences among all the body mass index categories were significant (P < 0.01). Obese subjects performed worse in all weight-bearing fitness tests than normal weight subjects (P < 0.01). Moreover, overweight subjects performed worse in the 1 mile walk/run, pushups and pull ups than normal weight subjects (P < 0.01). Underweight subjects performed significantly worse in pushups than normal weight subjects (P < 0.01). Obese subjects had significantly higher sedentary activities and lower physical activity than their counterparts (P < 0.01).
Conclusions: This study showed that weight-bearing physical fitness, cardiorespiratory fitness and physical activity progressively decreased as the BMI increased and, conversely, sedentary activities increased as the BMI increased. abstract_id: PUBMED:30523665 Comparisons of pedometer-determined weekday physical activity among Swedish school children and adolescents in 2000 and 2017 showed the highest reductions in adolescents. Aim: This study compared differences in weekday physical activity among school children and adolescents in middle-class communities in south-eastern Sweden in 2000 and 2017. Methods: Data were drawn from two independent cross-sectional cohorts of children and adolescents aged 8, 11 and 14. The same six schools, protocols, procedures and instruments were used in both years, with 332 subjects in 2000 and 562 in 2017. Mean steps per day (SPD) were measured with a Yamax SW-200 pedometer. Results: The mean weekday SPD was similar among eight-year-old girls in both years, but 8% (p = 0.040) higher among eight-year-old boys in 2017. However, among 11-year-old children, the mean SPD was 12% lower among both girls (p = 0.033) and boys (p = 0.001). The SPD among 14-year-old adolescents were 24% (p = 0.012) and 30% (p < 0.001) lower among girls and boys, respectively. Conclusion: This school-based study from south-eastern Sweden showed that weekday physical activity differed between age groups in 2000 and 2017. The highest reductions were seen among older children and adolescents, particularly 14-year-old girls and boys. The only group that increased was eight-year-old boys, with no significant change in eight-year-old girls. A possible explanation might be the dramatically increased use of the Internet on phones and tablets. abstract_id: PUBMED:37463382 Development of coordination abilities in 6-10 years old boys with postural disorders. Objective: To develop, substantiate and experimentally test the methodology for the development of coordination abilities in 6-10 years old boys with postural disorders during their physical education training sessions. Patients and Methods: The experimental (EG) and the control groups (CG) were formed to organize the pedagogical experiment. The EG included 17 boys and the CG consisted of 19 boys. The EG was engaged in sports activities according to the developed methodology and the CG followed the current school curricula on physical education. Results: The methodology for the development of coordination abilities in 6-10 years old boys with postural disorders in the process of their physical education has been developed. Its peculiarity lies in the accentuated influence on the vestibular, motor and visual analyzers with the help of a system of exercises aimed at controlling movements in space in static positions and during movement. Conclusions: The obtained results testify to the effectiveness of the developed methodology. The EG boys had significantly higher indicators of static balance, coordination abilities when changing body position, and general coordination of movements (p ≤ 0.05) at the end of the academic year than those of the CG. In addition, the number of 6-10 years old boys with various postural disorders decreased by 7.4% (from 26.8% to 19.4%). abstract_id: PUBMED:25611939 The impact of Playworks on boys' and girls' physical activity during recess.
Background: School-based programs, such as Playworks, that guide students in organized activities during recess and make improvements to the recess play yard may lead to significant increases in physical activity, especially for girls. This study builds on past research by investigating the impact of Playworks separately for girls and boys. Methods: Twenty-nine schools were randomly assigned to receive Playworks for 1 school year or serve as a control group. Postintervention physical activity data were collected via accelerometers and recess observations. Impacts were estimated separately for girls and boys using regression models. Results: Girls in Playworks schools had significantly higher accelerometer intensity counts and spent more time in vigorous physical activity than girls in control schools. No significant differences based on accelerometer data were found for boys. A significant impact was also found on the types of activities in which girls engaged during recess; girls in the treatment group were less likely than those in the control group to be sedentary and more likely to engage in jumping, tag, and playground games. Conclusions: The current findings suggest that Playworks had a significant impact on some measures of girls' physical activity, but no significant impact on measures of boys' physical activity. abstract_id: PUBMED:3220069 Isometric and isokinetic muscle strength, anthropometry and physical activity in 8 and 13 year old Swedish children. Isometric muscle strength of the hand-grip and of trunk flexion and extension, and isokinetic torque of elbow and knee flexion and knee extension were assessed in a random sample of 8 and 13 year old Swedish children. The results were compared with respect to sex and age in absolute terms and relative to weight, height² and estimates of lean body mass and cross-sectional muscle area. Daily physical activity was also estimated. The muscle strength variables were in general found to be very similar in the 8 year old boys and girls. In the 13 year old group the boys were generally stronger than the girls, in both absolute and relative terms, except for similar torque values during knee extension. The absolute and relative muscle strength and torque values were higher in the older than in the younger children, with the exception of trunk strength per unit of body weight and of lean body mass, which were similar in boys of both ages and significantly lower in the older than in the younger girls. No significant correlation was found between the estimates of physical activity and isometric and isokinetic muscle strength and torque. abstract_id: PUBMED:28764730 Objectively measured physical activity patterns, sedentary time and parent-reported screen-time across the day in four-year-old Swedish children. Background: Physical activity (PA) improves health outcomes, and accumulating evidence suggests that sedentary time (ST), especially parent-reported screen-time, is associated with negative health outcomes in children. The aim of the present study is to describe levels and patterns of PA and ST across the day and week and activity pattern differences between the sexes, across all weekdays and time spent in and outside the preschool in four-year-old children.
Methods: In total, 899 four-year-old Swedish children who had both complete questionnaire data on screen-time behaviors and objective activity variables, and at least 4 days (including one weekend day) with more than 10 h of GT3X+ Actigraph accelerometer wear time data, were included in the study. Patterns of PA and ST across the day and week and differences between sexes, weekdays vs. weekend days and time in preschool vs. time spent outside preschool were assessed. Results: Children engaged in 150 min (SD 73) and 102 min (SD 60) of screen-time on weekend days and weekdays, with 97% and 86% of children exceeding the 1 h guideline for screen-time on weekend days and weekdays, respectively. Accelerometer data showed that boys are more active and less sedentary compared with girls, and both sexes were more active and less sedentary on weekdays compared with weekend days, while parent-reported data showed that boys engage in more screen-time compared with girls. Children accumulated 24.8 min (SD 19) MVPA during preschool time and 26.6 min (SD 16) outside preschool hours on weekdays, compared with 22.4 min (SD 18) MVPA during preschool time and 25.3 min (SD 22) outside preschool hours on weekend days. Conclusions: Four-year-old Swedish children display different activity patterns across the day on weekdays compared to weekend days, with preschool hours during weekdays being the most active segments and preschool hours during weekend days being the least active segments of the day. abstract_id: PUBMED:24150625 Daily Physical Activity and Physical Fitness in 11- to 15-year-old Trained and Untrained Turkish Boys. The aims of this study were to assess levels and patterns of physical activity (PA) in relation to age and regular sport activity, and to examine its relationship to physical fitness in trained and untrained boys. One hundred forty-seven 11- to 15-year-old boys (73 trained and 74 untrained) participated in this study. Trained boys, comprising 26 soccer, 25 handball and 22 volleyball players, had been training regularly for at least one year. The intensity, duration and frequency of PA were assessed from four complete days of heart rate monitoring with 15-second sampling intervals. Aerobic fitness was assessed by determining peak VO2 with a portable breath-by-breath gas analyzer (Cosmed K4b²) and the running speeds at fixed lactate concentrations during an incremental running test. Anaerobic fitness was evaluated with the Wingate Anaerobic Test. Skinfold thicknesses from eight sites and Tanner stages of pubic hair were also obtained. Based on 15-s heart rate data, instead of continuous activity, multiple short bouts of moderate and vigorous PA, lasting up to one minute, were characteristic of daily PA patterns of both trained and untrained boys. PA levels of trained boys were higher than untrained boys (p < 0.01) and the levels of PA decreased with age and maturation in both groups (p < 0.05). Daily PA variables were related to body fatness in both groups (p < 0.05), but the relationships were not consistent in the trained group. Daily PA variables were also related to aerobic fitness in the untrained group (p < 0.05) and these relationships were somewhat better with vigorous PA, whereas in the trained group, none of the PA variables were related to any of the aerobic fitness indices (p > 0.05). No relationship was observed between PA variables and anaerobic fitness in either group (p > 0.05).
It seems that such relationships may somewhat depend on the fitness level of the subjects. Key points: PA levels of trained boys were higher than those of untrained boys, and the levels of PA decreased with age and maturation in both groups. Based on the 15-s HR data, instead of continuous activity, multiple short bouts of moderate and vigorous PA, lasting up to one minute, were characteristic of daily PA patterns of both trained and untrained boys. Daily PA variables were related to aerobic fitness in the untrained group and these relationships were somewhat better with vigorous PA (>70% HRR), whereas in the trained group, none of the PA variables were related to any of the aerobic fitness indices. Neither peak nor mean power values were related to any of the daily PA variables in either the trained or the untrained group. abstract_id: PUBMED:20551494 Eight-year secular trends of pedometer-determined physical activity in young Swedish adolescents. Background: To explore the secular trends (time change) of pedometer-determined physical activity (steps per day) in Swedish young adolescents 13 to 14 years of age from 2000 to 2008. Methods: The study compared 2 cross-sectional cohorts carried out in October 2000 (235 subjects, 111 girls) and October 2008 (186 subjects, 107 girls) in the same school, using identical procedures. Data on mean steps per day were collected during 4 consecutive weekdays (sealed Yamax SW-200 pedometer, Tokyo, Japan), and in addition height and weight were measured. Results: When comparing cohort 2000 with cohort 2008, no significant difference in physical activity was found among girls (12,989 vs 13,338 [t = -0.98, P = .325]) or boys (15,623 vs 15,174 [t = 0.78, P = .436]). The share of girls and boys meeting weight control recommendations was non-significantly higher in 2008, both among girls (68% versus 62%) and among boys (69% versus 65%). Conclusion: There was no significant difference in young adolescents' physical activity during school weekdays in 2008 compared with 2000. This stabilized physical activity level, in an international comparison regarded as high, is promising. Enhanced focus on physical activity in society and at school might have influenced the result. abstract_id: PUBMED:2152760 Patterns of physical activity among 11 to 16 year old British children. Objective: To examine the patterns of physical activity among British schoolchildren aged 11 to 16 and to assess whether the children experience the intensity and duration of physical activity that are believed to stress the cardiopulmonary system appropriately. Design: Cross sectional study of a sample of children drawn from a larger survey of coronary risk factors in children. Continuous monitoring of heart rate for 12 hour periods on three school days and one Saturday. Setting: Two communities in Devon. Subjects: 266 children (163 girls, 103 boys) aged 11 to 16 randomly selected from a sample of 707 children. Main Outcome Measures: Percentage of time and number of sustained periods in which heart rate was greater than 139 beats/min. Anthropometric measures and external assessment of sexual maturity with Tanner's indices. Results: The boys had heart rates greater than 139 beats/min for a significantly higher percentage of time than the girls (p less than 0.01) during the weekday (6.2% v 4.3%) and the Saturday (5.6% v 2.6%).
The boys had significantly more 5- and 10-minute periods with heart rates greater than 139 beats/min than the girls during the Saturday and weekdays, and more 20-minute periods during the weekdays. 84 girls and 37 boys had no 10-minute period with a heart rate greater than 139 beats/min during the three weekdays, and 112 girls and 65 boys had no such 10-minute period during the Saturday. No significant relation was detected in either sex between the amount of habitual physical activity (heart rate) and skinfold thickness or maturity group. Conclusions: British children have surprisingly low levels of habitual physical activity, and many children seldom undertake the volume of physical activity believed to benefit the cardiopulmonary system. Boys are more active than girls. The pubertal stage of development or body fatness, or both, do not seem to be sensitive indicators of physical activity in either girls or boys. Answer: The study conducted on Swedish schoolchildren aged 11-12 years observed a significant decrease in physical activity among boys, with a 24% reduction in the number of steps per day from 2000 to 2013. This decline was not observed in girls, whose physical activity levels remained stable during the same period (PUBMED:25892659). The researchers suggest that changes in recess and leisure time habits, such as the increased use of smartphones, may have influenced these results. However, the study does not provide direct evidence linking smartphone use to the observed decrease in physical activity among boys; it merely proposes it as a possible explanation. Therefore, while the data indicates a decrease in physical activity among 11- to 12-year-old Swedish boys, attributing this change solely to the effect of smartphones would require further investigation to establish a causal relationship.
Instruction: Nail dystrophies, scalp and intergluteal/perianal psoriatic lesions: risk factors for psoriatic arthritis in mild skin psoriasis? Abstracts: abstract_id: PUBMED:24819637 Nail dystrophies, scalp and intergluteal/perianal psoriatic lesions: risk factors for psoriatic arthritis in mild skin psoriasis? Aim: In the literature, the clinical features of psoriasis associated with psoriatic arthritis (PsA) onset have not been clarified, even though a high Psoriasis Area and Severity Index (PASI) was found to be directly related to PsA. Methods: The study was conducted between November 2008 and April 2009 on consecutive psoriatic outpatients referred to a service of dermatological consultations on the indication of other hospital specialists. Participants were affected by mild skin psoriasis, in particular with nail dystrophies and/or scalp and/or intergluteal/perianal lesions. The questionnaire provided by the Classification of Psoriatic Arthritis (CASPAR) study group was administered to all patients, and we added a question about the time of psoriasis onset. Results: Eighty-seven subjects (34 males and 53 females) aged between 18 and 91 years (mean age 48 years) with such psoriatic features were recruited. Another 30 refused to take part in the study. Forty-seven per cent of these participants were found to suffer from PsA, in particular 83% of those with nail and scalp psoriasis, 40% of subjects with intergluteal/perianal involvement and 37% of patients with just scalp lesions. We present the first exploratory and observational study focused on a population with mild skin psoriasis, in order to evaluate if nail dystrophies, scalp or intergluteal/perianal psoriatic lesions might be indicative of a higher risk of PsA even in cases with mild skin psoriasis. Conclusion: Nail psoriasis is probably a suggestive feature of joint involvement. Scalp psoriasis, as well as intergluteal/perianal psoriatic lesions, is likely less associated with PsA, but may be an important comorbidity factor for the development of PsA. Further investigations in a larger population are encouraged to assess a potential link between PsA, site-specific and mild skin psoriasis. abstract_id: PUBMED:27498654 Nail psoriasis in an adult successfully treated with a series of herbal skin care products family – a case report. Psoriasis is a common chronic inflammatory dermatosis that causes significant distress and morbidity. Approximately 50% of patients with cutaneous psoriasis and 90% of patients with psoriatic arthritis demonstrate nail involvement of their psoriasis. Left untreated, nail psoriasis may progress to debilitating nail disease that leads not only to impairment of function but also to reduced quality of life. We report the case of a 50-year-old male patient with recalcitrant nail dystrophies on the fingers since the age of 40, who responded successfully to the Dr. Michaels® product family. The patient had a 35-year history of plaque psoriasis localised on the scalp, ears, groin, limbs, and trunk, and with psoriatic arthritis. The nail symptoms consisted of onycholysis, onychomycosis, leukonychia, transverse grooves, nail plate crumbling and paronychia of the periungual skin. This case demonstrates the efficacy and safety of the Dr. Michaels® (Soratinex® and Nailinex®) product family, with successful resolution of nail dystrophies and surrounding paronychia and no reported adverse events. abstract_id: PUBMED:28664284 Differential diagnostic spectrum of psoriasis vs.
psoriatic arthritis in skin and joints Psoriasis and psoriatic arthritis are common disease entities in the daily routine of dermatologists and rheumatologists. The clinical management of the diseases benefits tremendously from interdisciplinary contacts and shared decision-making processes. For dermatologists, recognition of early clinical symptoms of inflammatory arthropathies is pivotal. Rheumatologists, on the other hand, are required to make an early assessment of skin and nail changes in patients, since they are important parameters in the clinical classification of inflammatory joint diseases, in particular for the diagnosis of psoriatic arthritis. In patients with psoriasis, lesions on the scalp, in the anal cleft, and nail changes as well as severe generalized skin disease are all regarded as risk factors for psoriatic arthritis. The symptoms of the associated joint and spinal afflictions can vary considerably, and confirmation of the diagnosis, in particular in early stages of the disease, often represents a considerable challenge. The most common clinical manifestations of psoriasis and psoriatic arthritis and their differential diagnostics are discussed. abstract_id: PUBMED:19177544 Incidence and clinical predictors of psoriatic arthritis in patients with psoriasis: a population-based study. Objective: To determine the incidence and disease-specific predictors of clinically recognized psoriatic arthritis (PsA) in patients with psoriasis. Methods: We identified an incidence cohort of psoriasis subjects aged ≥18 years diagnosed between January 1, 1970 and December 31, 1999 in a population-based setting. Psoriasis diagnoses were validated by confirmatory diagnosis in the medical record. Incident and clinically recognized PsA subjects were classified according to the Classification of Psoriatic Arthritis (CASPAR) criteria. Cox proportional hazards models were used to identify predictors of PsA within the psoriasis cohort. Results: The psoriasis incidence cohort comprised 1,633 subjects. Of these, 40 were diagnosed with PsA concurrently with psoriasis and were excluded from analysis. The remaining 1,593 psoriasis subjects had a mean age of 43 years and 50% were men. Over 20,936 person-years of follow-up, 57 subjects were clinically recognized with new-onset PsA, with a cumulative incidence of 1.7% (95% confidence interval [95% CI] 1.0-2.3%), 3.1% (95% CI 2.2-4.1%), and 5.1% (95% CI 3.7-6.6%) at 5, 10, and 20 years following psoriasis incidence, respectively. Psoriasis features associated with higher risk of PsA were scalp lesions (hazard ratio [HR] 3.89, 95% CI 2.18-6.94), nail dystrophy (HR 2.93, 95% CI 1.68-5.12), and intergluteal/perianal lesions (HR 2.35, 95% CI 1.32-4.19). Calendar year was not associated with risk of PsA (P = 0.15), indicating that the likelihood of PsA in psoriasis subjects did not change over time. Conclusion: In this population-based study, <10% of patients with psoriasis developed clinically recognized PsA during a 30-year period. Psoriasis features associated with a higher likelihood of PsA were nail dystrophy, scalp lesions, and intergluteal/perianal psoriasis. abstract_id: PUBMED:22294201 Psoriatic arthritis: a critical review. Psoriatic arthritis is a chronic inflammatory arthritis that affects about 5–25% of patients with psoriasis. The prevalence varies from 20–420 per 100,000 population across the world except in Japan, where it is 1 per 100,000.
Psoriatic arthritis affects both genders equally, and in more than half it follows long-standing psoriasis. Psoriatic arthritis has been grouped into five subtypes: distal interphalangeal (DIP) predominant, symmetrical polyarthritis, asymmetrical oligoarthritis and monoarthritis, predominant spondylitis, and arthritis mutilans. Oligoarthritis occurs in nearly 60% during early disease, but later polyarticular disease predominates, mainly due to evolution of oligoarthritis to polyarthritis. In 50–60% polyarthritis is symmetrical. Dactylitis and enthesopathy are other major features seen in nearly one third of patients. The diagnosis of psoriatic arthritis is easy in the presence of typical skin lesions; however, it can also be made in the absence of skin lesions using the Classification of Psoriatic Arthritis criteria. Though 30–40% of patients develop joint deformities at a follow-up of 5–10 years, most retain good functional status. Clinical damage has a strong relationship with the number of swollen joints, erythrocyte sedimentation rate, and duration of arthritis. Radiological damage occurs early, and erosions are present in nearly 50% at 10 years of disease. Spinal disease also has a good outcome, with maintained spinal mobility in the majority of patients. Screening of patients with psoriasis using a questionnaire can help in early diagnosis. Nail dystrophy, scalp lesions, and intergluteal/perianal psoriasis are associated with a higher chance of development of psoriatic arthritis. Early diagnosis will lead to early treatment and better outcome, especially with the advent of new drugs. abstract_id: PUBMED:26666215 Detection of asymptomatic enthesitis in psoriasis patients: An onset of psoriatic arthritis? Presence of asymptomatic joint involvement is recognized in patients with psoriasis. However, it remains elusive whether such patients develop psoriatic arthritis (PsA). The aim of the present study was to examine the incidence of asymptomatic joint lesions, in particular enthesitis, in patients with psoriasis vulgaris (PsV) and to further assess the clinical features. Eighteen PsV and 28 PsA patients were enrolled for examination by positron emission tomography/computed tomography (PET/CT) using (18)F-fluorodeoxyglucose (FDG). Any nail, scalp and intergluteal involvements were reported. Levels of serum C-reactive protein (CRP), white blood cell (WBC) counts and erythrocyte sedimentation rate (ESR) were examined. All of the PsA patients showed FDG accumulation in the affected joints. Notably, asymptomatic enthesitis was detected in six out of 18 PsV patients (33%), and they were diagnosed as having subclinical PsA. Incidences of scalp, intergluteal and nail psoriasis in subclinical PsA patients were 100%, 83% and 64%, respectively, which were higher than those in PsV patients (67%, 25% and 40%, respectively). CRP, WBC counts and ESR did not differ between the PsV and subclinical PsA groups. PET/CT imaging could discover asymptomatic enthesitis. Our data suggested that the subpopulation of subclinical PsA was much higher than expected. The higher prevalence of nail, scalp and intergluteal psoriasis confirmed the risk of PsA as previously described. abstract_id: PUBMED:27932277 Detection of subclinical ultrasound enthesopathy and nail disease in patients at risk of psoriatic arthritis. Objectives: To detect subclinical entheseal and nail abnormalities using gray-scale (GS) and power Doppler ultrasonography (PDUS) in patients with nail psoriasis and those with inverse and scalp psoriasis.
Methods: In this prospective monocentric study, patients with nail, inverse and scalp psoriasis, without psoriatic arthritis or systemic treatment, were included. Clinical evaluation and ultrasonographic assessment of 14 entheses and 12 nails were done by a dermatologist (clinical assessment) and a rheumatologist (ultrasonographic assessment). Results: 518 entheses were analyzed, with no statistical difference between the two groups when considering GS enthesopathy (P=0.66). PDUS signals of the entheses were low (<1%) in both groups. Matrix thickness was significantly higher in patients with nail disease (1.94 mm vs. 1.77 mm; P=0.007). PDUS of the skin thickness at the level of the distal joint and the loss of the trilaminar appearance were also significantly associated with nail psoriasis (P=0.037 and P<0.0001, respectively). Conclusion: Detection of subclinical US enthesopathy was not rare in either group, unlike PD signal, but with no statistical difference between groups. US is a good tool to evaluate the different components of psoriatic nails (loss of trilaminar appearance, nail thickening and inflammation of the skin thickness), which are significantly associated with nail psoriasis. abstract_id: PUBMED:24985559 Is there a psoriasis skin phenotype associated with psoriatic arthritis? Systematic literature review. Psoriatic arthritis (PsA) is associated with psoriasis with a prevalence varying from 5.94% to 23.9%. The aim of this study was to assess if some psoriatic skin features are associated with a higher risk of PsA. A systematic literature search was carried out from 1980 to January 2013, in the Embase and Pubmed databases, using a combination of keywords including (Psoriasis) AND (PsA). Of the 2746 articles retrieved, 25 references were selected. Meta-analysis was performed when possible. Mean age at psoriasis onset appeared to be similar among patients with skin disease alone and in those with PsA. There was no clinical type of psoriasis specifically associated with PsA, including pustular psoriasis of palms and soles. Nonetheless, specific psoriasis localizations were significantly associated with an increased risk of developing PsA in one cohort study: scalp lesions [Hazard Ratio (HR) 3.89 (95% confidence interval (CI): 2.18-6.94)] and intergluteal/perianal lesions [HR 2.35 (95% CI: 1.32-4.19)]. A similar association was found in two cross-sectional studies. Nail involvement was significantly associated with PsA in the meta-analysis [Odds Ratio (OR) 2.92 (95% CI 2.34-3.64)], particularly onycholysis [OR 2.38 (95% CI 1.74-3.26)]. Moreover, nail psoriasis was also associated with distal interphalangeal joint arthritis. The extent of psoriasis appeared to be associated with PsA in one cohort study [≥3 sites: HR 2.24 (95% CI 1.23-4.08)], one case-control study [body surface area >75%: OR 2.52 (95% CI 1.33-4.75)] and three cross-sectional studies. The meta-analysis suggested a trend for an association between high PASI and PsA risk [mean difference 3.39 (95% CI 0.94-5.83)]. Therefore, psoriasis patients with such clinical features may require particular attention for early and close detection of PsA during the course of the cutaneous disease. abstract_id: PUBMED:28769136 Nail involvement in psoriatic arthritis. Nail psoriasis is considered a significant psychological and social problem causing functional impairment in affected patients. Nail changes hamper their daily and occupational activities and contribute to a worse quality of life.
Almost 50% of patients with psoriasis vulgaris and up to 80% of patients with psoriatic arthritis are afflicted with nail lesions. The important correlation between psoriatic arthritis and nail changes is well established: the presence of the latter is a strong predictor of the development of arthritis. There is a broad spectrum of nail dystrophies associated with psoriasis, ranging from the common pitting, subungual hyperkeratosis and loosening of the nail plate to less frequent discolouration and splinter haemorrhages. Some of these symptoms are also observed in other nail diseases, and further diagnostics should be performed. The assessment tools NAPSI (Nail Psoriasis Severity Index), mNAPSI (Modified Nail Psoriasis Severity Index), and PNSS (Psoriasis Nail Severity Score) are most commonly used to grade the severity of nail involvement in psoriasis and enable the evaluation of therapy effectiveness. The treatment of nail psoriasis is a major clinical challenge. It should be adjusted to the extent of dermal, articular and ungual lesions. Systemic therapies for psoriasis, especially biological agents, are most likely to be effective in treating nail psoriasis. However, as their use is limited in scope and safety, topical therapy remains a mainstay, and the combination of corticosteroids and vitamin D3 analogues is considered to be most helpful. abstract_id: PUBMED:24932889 Psoriatic arthritis and nail changes: exploring the relationship. Objective: Psoriatic arthritis (PsA) has a diverse range of clinical manifestations, both articular and extra-articular. Although the association of PsA with skin changes is well established, the relationship of PsA with psoriatic nail changes remains relatively unexplored. Methods: This report reviews the current literature surrounding the association of PsA with nail changes. A review of the literature was completed using PubMed, MEDLINE, and EMBASE in September 2013, encompassing the years 1964-2012. Results: A total of 21 articles were reviewed. On average, 66% [standard deviation (SD) 17.7] of PsA patients had nail changes. The type of nail changes and their associations varied widely between studies. Conclusions: Studies of nail changes in PsA are highly variable, with a wide range of results. Given the variability of results observed in this review, we recommend that further large studies on nail changes in patients with PsA be conducted. Answer: Nail dystrophies, scalp, and intergluteal/perianal psoriatic lesions have been identified as potential risk factors for the development of psoriatic arthritis (PsA) in patients with mild skin psoriasis. Studies have shown that these specific clinical features of psoriasis may be associated with a higher risk of developing PsA. In a study conducted on patients with mild skin psoriasis, 47% of participants were found to suffer from PsA, with a particularly high incidence (83%) among those with both nail and scalp psoriasis. The study suggested that nail psoriasis might be indicative of a higher risk of joint involvement, while scalp and intergluteal/perianal psoriatic lesions may be less associated with PsA but could still be important comorbidity factors for its development (PUBMED:24819637). Another population-based study found that psoriasis features associated with a higher risk of PsA included scalp lesions, nail dystrophy, and intergluteal/perianal lesions (PUBMED:19177544).
Similarly, a systematic literature review confirmed that specific psoriasis localizations such as scalp and intergluteal/perianal lesions, as well as nail involvement, were significantly associated with an increased risk of PsA (PUBMED:24985559). Furthermore, the presence of nail psoriasis has been recognized as a strong predictor of the development of PsA, with up to 80% of patients with psoriatic arthritis experiencing nail lesions (PUBMED:28769136). The detection of subclinical enthesopathy and nail disease in patients at risk of PsA has also been explored, with ultrasound being a useful tool for evaluating psoriatic nails and associated inflammation (PUBMED:27932277). Overall, the evidence suggests that nail dystrophies, scalp, and intergluteal/perianal psoriatic lesions are indeed risk factors for the development of psoriatic arthritis in patients with mild skin psoriasis. Clinicians should be vigilant in monitoring patients with these psoriatic features for early signs of PsA to facilitate timely diagnosis and treatment (PUBMED:22294201, PUBMED:26666215, PUBMED:24932889).
Instruction: Can transcutaneous carbon dioxide pressure be a surrogate of blood gas samples for spontaneously breathing emergency patients? Abstracts: abstract_id: PUBMED:26718224 Can transcutaneous carbon dioxide pressure be a surrogate of blood gas samples for spontaneously breathing emergency patients? The ERNESTO experience. Background: It is known that the arterial carbon dioxide pressure (PaCO2) is useful for emergency physicians to assess the severity of dyspnoeic spontaneously breathing patients. Transcutaneous carbon dioxide pressure (PtcCO2) measurements could be a non-invasive alternative to PaCO2 measurements obtained by blood gas samples, as suggested in previous studies. This study evaluates the reliability of a new device in the emergency department (ED). Methods: We prospectively included patients presenting to the ED with respiratory distress who were breathing spontaneously or under non-invasive ventilation. We simultaneously performed arterial blood gas measurements and measurement of PtcCO2 using a sensor placed either on the forearm or the side of the chest and connected to the TCM4 CombiM device. The agreement between PaCO2 and PtcCO2 was assessed using the Bland-Altman method. Results: Sixty-seven spontaneously breathing patients were prospectively included (mean age 70 years, 52% men), and 64 of the 67 first PtcCO2 measurements were analysed, out of 97 measurements performed in total. Nineteen patients (28%) had pneumonia, 19 (28%) had acute heart failure and 19 (28%) had an exacerbation of chronic obstructive pulmonary disease. Mean PaCO2 was 49 mm Hg (range 22-103). The mean difference between PaCO2 and PtcCO2 was 9 mm Hg (range -47 to +54) with 95% limits of agreement of -21.8 mm Hg and 39.7 mm Hg. Only 36.3% of the measurement differences were within 5 mm Hg. Conclusions: Our results show that PtcCO2 measured by the TCM4 device could not replace PaCO2 obtained by arterial blood gas analysis. abstract_id: PUBMED:22795407 Concordance between transcutaneous and arterial measurements of carbon dioxide in an ED. Background: Transcutaneous carbon dioxide pressure (PtcCO2) has been suggested as a noninvasive surrogate of arterial carbon dioxide pressure (PaCO2). Our study evaluates the reliability of this method in spontaneously breathing patients in an emergency department. Patients And Methods: A prospective, observational study was performed in nonintubated dyspneic patients who required measurement of arterial blood gases. Simultaneously, and blinded to the physicians in charge, PtcCO2 was measured using a TOSCA 500 monitor (Radiometer, Villeurbanne, France). Agreement between PaCO2 and PtcCO2 was assessed using the Bland-Altman method. Results: Forty-eight patients (mean age, 65 years) were included, and 50 measurements were done. Eleven (23%) had acute heart failure; 10 (21%), pneumonia; 7 (15%), acute asthma; and 7 (15%), exacerbation of chronic obstructive pulmonary disease. Median PaCO2 was 42 mm Hg (range, 17-109). Mean difference between PaCO2 and PtcCO2 was 1 mm Hg with 95% limits of agreement of -3.4 to +5.6 mm Hg. All measurement differences were within 5 mm Hg, and 32 (64%) were within 2 mm Hg. Conclusion: Transcutaneous carbon dioxide pressure accurately predicts PaCO2 in spontaneously breathing patients. abstract_id: PUBMED:37812542 Implementation Techniques for Transcutaneous Carbon Dioxide Monitoring: Approaches for Wearable Smart Health Applications.
Wearable smart health applications aim to continuously monitor critical physiological parameters without disrupting patients' daily activities, such as giving a blood sample for lab analysis. For example, the partial pressure of arterial carbon dioxide, the critical indicator of ventilation efficacy reflecting the respiratory and acid-base status of the human body, is measured invasively from the arteries. As a result, it can be monitored in a clinical setting only at the moments when arterial blood samples are drawn. Although a noninvasive surrogate method for estimating the partial pressure of arterial carbon dioxide exists (i.e., transcutaneous carbon dioxide monitoring), it is primarily limited to intensive care units and comes in the form of a large bedside device. Nevertheless, recent advancements in the luminescence sensing field have enabled a promising technology that can be incorporated into a wearable device for the continuous and remote monitoring of ventilation efficacy. In this review, we examine existing and nascent techniques for sensing transcutaneous carbon dioxide and highlight novel wearable transcutaneous carbon dioxide monitors by comparing their performance with the traditional bedside counterparts. We also discuss future directions of transcutaneous carbon dioxide monitoring in next-generation smart health applications. abstract_id: PUBMED:34892745 An Infra-Red-Based Prototype for a Miniaturized Transcutaneous Carbon Dioxide Monitor. New types of miniaturized biomedical devices transform contemporary diagnostic and therapeutic techniques in medicine. This evolution has demonstrated exceptional promise in providing infrastructures for enabling precision health by creating diverse sensing modalities. To this end, this paper presents a prototype for transcutaneous carbon dioxide monitoring to diversify the measurable critical parameters for human health. Transcutaneous carbon dioxide monitoring is a noninvasive, surrogate method of assessing the partial pressure of carbon dioxide in the blood. The partial pressure of carbon dioxide is a vital index that can help clinicians follow moment-to-moment ventilation trends. Therefore, it needs to be reported continuously to monitor the ventilatory status of critically ill patients. The proposed prototype employs an infrared LED as the excitation source. The infrared emission, which decreases in response to an increasing carbon dioxide concentration, is applied to a thermopile sensor that can detect the infrared intensity variations precisely. We have measured the changes in the partial pressure of carbon dioxide in the range of 0-120 mmHg, which covers the typical human range of 35-45 mmHg. The prototype occupies an area of 25 cm2 (50 mm × 50 mm) and consumes 85 mW of power. abstract_id: PUBMED:19386868 Validation study of a transcutaneous carbon dioxide monitor in patients in the emergency department. Background: Transcutaneous carbon dioxide tension (TcPco(2)) monitors offer a non-invasive method of continuously measuring arterial carbon dioxide tensions. The concordance between the TcPco(2) measurement and the value obtained from arterial blood gas sampling (Paco(2)) was measured in patients attending the emergency department. Methods: A prospective observational cohort study was performed in 49 adult patients who were undergoing arterial blood gas sampling as part of their assessment. Blood gas sampling and recording of the TcPco(2) level from the monitor were done simultaneously.
Concordance between the two values (Paco(2) and TcPco(2)) was demonstrated using the method described by Bland and Altman. Results: The mean difference was 0.02 kPa (95% CI -0.11 to 0.15). Pearson's correlation coefficient was 0.94 (p < 0.001) and the Bland-Altman limits of agreement were ±0.9 kPa. Conclusions: In adult patients, concordance between carbon dioxide measurements by transcutaneous monitor and arterial blood sampling is good. Using the transcutaneous method, patients may be monitored non-invasively, which may reduce the need for repeated blood gas sampling. abstract_id: PUBMED:27878795 Transcutaneous versus blood carbon dioxide monitoring during acute noninvasive ventilation in the emergency department - a retrospective analysis. Questions Under Study: Transcutaneous measurement of carbon dioxide (PtCO2) has been suggested as an alternative to invasively obtained PaCO2 for the monitoring of patients with hypercapnic respiratory failure during noninvasive ventilation (NIV). Current data on monitoring in hypoxaemic respiratory failure are scarce and show conflicting results in hypercapnic patients in the emergency department. Methods And Setting: We performed a retrospective comparison of real-time PtCO2 (SenTec Digital Monitor) and arterial/venous carbon dioxide tension (PaCO2/PvCO2) measurements in patients with severe hypoxaemic and/or hypercapnic respiratory failure during NIV. Agreement between PtCO2 and PaCO2/PvCO2 was the primary endpoint. Bland-Altman analysis and linear regression were used. Results: 102 patients had at least one matched measurement of PtCO2 and PaCO2/PvCO2. For patients with arterial blood gas analysis, the mean difference was 0.46 kPa at baseline (95% confidence interval [CI] 0.23 to 0.60, limits of agreement 95% CI -0.54 to 1.45) and 0.12 kPa after NIV (95% CI -0.04 to 0.29, limits of agreement 95% CI -0.61 to 0.86). The linear regression analysis found a correlation R2 of 0.88 (p < 0.001) at baseline and an R2 of 0.99 (p < 0.001) after initiating NIV. For patients with venous blood gas analysis, the mean difference was 0.64 kPa at baseline (95% CI 0.04 to 1.24, limits of agreement 95% CI -0.72 to 2) and 0.80 kPa after NIV (95% CI 0.51 to 1.10, limits of agreement 95% CI 0.29 to 1.32), R2 0.78 (p < 0.001) at baseline and R2 0.91 (p < 0.001) after initiating NIV. A PaCO2/PvCO2 >8 kPa was associated with a lesser degree of agreement between the levels of PtCO2 and PaCO2/PvCO2 (p < 0.001). Conclusion: Transcutaneous PCO2 monitoring shows a good concordance with PaCO2 and is a reliable, feasible, patient-friendly and safe alternative to repeated blood gas analysis for patients with severe hypoxaemic and/or hypercapnic respiratory failure receiving emergency NIV in the emergency department. An initial blood gas analysis to evaluate the respiratory and metabolic state and to rule out a significant discrepancy compared with the transcutaneous measurement is recommended. abstract_id: PUBMED:18612267 Transcutaneous carbon dioxide monitoring in spontaneously breathing, nonintubated patients in the early postoperative period. Background: The authors investigated the accuracy of transcutaneous capnometry (TcPCO(2)) in estimating arterial blood carbon dioxide partial pressure (PaCO(2)) during spontaneous breathing in patients admitted to our surgical intensive care unit (ICU). Methods: Serial TcPCO(2) and PaCO(2) measurements were taken in stable patients undergoing postoperative monitoring after major abdominal, vascular, or thoracic surgery.
Patients were enrolled 12 hours after extubation. Exclusion criteria were pulmonary dysfunction, hemodynamic instability, or anemia. Linear regression, mixed models, and Bland-Altman analyses were used to compare accuracy and correlation between the two variables. Data are presented as means (95% confidence intervals). Results: PaCO(2) values ranged between 26 mmHg and 52 mmHg. Mean values for TcPCO(2) and PaCO(2) were 35.3 (33.8-36.8) mmHg and 39.2 (37.6-40.7) mmHg (P < 0.001). Bland-Altman analysis showed a bias of -3.8 (-5.1 to -2.5) mmHg with upper and lower limits of agreement of 5.6 (3.4-7.9) mmHg and -13.3 (-15.6 to -11.1) mmHg, respectively. Thirty-one (55.4%) measurements disagreed by 3 mmHg or more. Linear mixed model analysis with adjustment for repeated measurements showed low correlation (r = 0.63; P < 0.0005). There were small but significant differences between patients in the correlation coefficient (P = 0.04). Conclusion: TcPCO(2) showed moderate bias with wide limits of agreement when compared to PaCO(2) in our patients. The regression model showed low correlation of the measurements when adjusting for repeated measurements and between-subject variances. TcPCO(2) may not provide an accurate quantitative estimation of PaCO(2) in stable, spontaneously breathing, nonintubated patients in the early postoperative period. abstract_id: PUBMED:19236597 Transcutaneous carbon dioxide monitoring in infants and children. Objective: To review the technology required for and the applications of transcutaneous carbon dioxide (TC-CO2) monitoring in infants and children. Data Source: A computerized, bibliographic search regarding the applications of transcutaneous carbon dioxide (TC-CO2) monitoring in infants and children. Results: Although the direct measurement of P(a)CO2 remains the gold standard, it provides only a single measurement of what is often a rapidly changing and evolving clinical picture. Given these concerns, there remains a clinical need for a means to continuously monitor P(a)CO2 without the need for repeated blood gas analysis. Although initially introduced into the neonatal intensive care unit, with improvements in the technology TC-CO2 monitoring can now be used in infants, children and even adults. When compared with end-tidal carbon dioxide (ET-CO2) monitoring techniques, TC-CO2 monitoring has been shown to be equally accurate in patients with normal respiratory function and more accurate in patients with shunt or ventilation-perfusion inequalities. TC-CO2 monitoring can be applied in situations that generally preclude ET-CO2 monitoring such as high frequency ventilation, apnea testing, and noninvasive ventilation. TC-CO2 monitoring has also been used in spontaneously breathing children with airway and respiratory issues such as croup and status asthmaticus as well as to monitor metabolic status during treatment of acidosis related to diabetic ketoacidosis. Conclusions: Transcutaneous carbon dioxide monitoring may be a useful adjunct in various clinical scenarios in infants and children. It should be viewed as a complementary technology and may be used in combination with ET-CO2 monitoring. abstract_id: PUBMED:37195846 A Transcutaneous Carbon Dioxide Monitor Based on Time-Domain Dual Lifetime Referencing. The partial pressure of arterial carbon dioxide plays a critical role in assessing the acid-base and respiratory status of the human body. Typically, this measurement is invasive and can only be taken at the moment an arterial blood sample is drawn.
Transcutaneous monitoring is a noninvasive surrogate method that provides a continuous measure of arterial carbon dioxide. Unfortunately, current technology is limited to bedside instruments mainly used in intensive care units. We developed a first-of-its-kind miniaturized transcutaneous carbon dioxide monitor that utilizes a luminescence sensing film and a time-domain dual lifetime referencing method. Gas cell experiments confirmed the monitor's ability to accurately identify changes in the partial pressure of carbon dioxide within the clinically significant range. Compared to the luminescence intensity-based technique, the time-domain dual lifetime referencing method is less prone to measurement errors caused by changes in excitation strength, reducing the maximum error from ∼ 40% to ∼ 3% and resulting in more reliable readings. Additionally, we analyzed the sensing film by investigating its behavior under various confounding factors and its susceptibility to measurement drift. Finally, a human subject test demonstrated the effectiveness of the applied method in detecting even slight changes in transcutaneous carbon dioxide, as small as ∼ 0.7%, during hyperventilation. The prototype, which consumes 30.1 mW of power, is a wearable wristband with compact dimensions of 37 mm × 32 mm. abstract_id: PUBMED:36639787 Noninvasive carbon dioxide monitoring in pediatric patients undergoing laparoscopic surgery: transcutaneous vs. end-tidal techniques. Purpose: The present study aimed to investigate the correlation between transcutaneous carbon dioxide partial pressure (PtcCO2) and arterial carbon dioxide pressure (PaCO2) and the accuracy of PtcCO2 in predicting PaCO2 during laparoscopic surgery in pediatric patients. Methods: Children aged 2-8 years with American Society of Anesthesiologists (ASA) class I or II who underwent laparoscopic surgery under general anesthesia were selected. After anesthesia induction and tracheal intubation, PtcCO2 was monitored, and radial arterial catheterization was performed for continuous pressure measurement. PaCO2, PtcCO2, and end-tidal carbon dioxide partial pressure (PetCO2) were measured before pneumoperitoneum, and 30, 60, and 90 min after pneumoperitoneum, respectively. The correlation and agreement between PtcCO2 and PaCO2, and between PetCO2 and PaCO2, were evaluated. Results: A total of 32 patients were eventually enrolled in this study, resulting in 128 datasets. The linear regression equations were: PtcCO2 = 7.89 + 0.82 × PaCO2 (r2 = 0.70, P < 0.01); PetCO2 = 9.87 + 0.64 × PaCO2 (r2 = 0.69, P < 0.01). The mean PtcCO2 - PaCO2 difference with its 95% limits of agreement (LOA) was 0.66 ± 4.92 mmHg, and the mean PetCO2 - PaCO2 difference was -4.4 ± 4.86 mmHg. A difference of ≤ 5 mmHg was noted between PtcCO2 and PaCO2 in 122/128 samples and between PetCO2 and PaCO2 in 81/128 samples (P < 0.01). Conclusion: In pediatric laparoscopic surgery, a close correlation was established between PtcCO2 and PaCO2. Compared to PetCO2, PtcCO2 can estimate PaCO2 accurately and could be used as an auxiliary monitoring indicator to optimize anesthesia management for laparoscopic surgery in children; however, it is not a substitute for PetCO2. Registration Number Of Chinese Clinical Trial Registry: ChiCTR2100043636. Answer: The use of transcutaneous carbon dioxide pressure (PtcCO2) as a surrogate for arterial carbon dioxide pressure (PaCO2) in spontaneously breathing emergency patients has been explored in various studies, with mixed results.
Some studies have found that PtcCO2 can accurately predict PaCO2 in spontaneously breathing patients. For instance, one study reported that all measurement differences between PaCO2 and PtcCO2 were within 5 mm Hg, and 64% were within 2 mm Hg, suggesting that PtcCO2 accurately predicts PaCO2 in this patient population (PUBMED:22795407). Another study found good concordance between transcutaneous PCO2 monitoring and PaCO2, indicating that it is a reliable, feasible, patient-friendly, and safe alternative to repeated blood gas analysis for patients with severe hypoxaemic and/or hypercapnic respiratory failure receiving emergency noninvasive ventilation (NIV) in the emergency department (PUBMED:27878795). Additionally, a study in pediatric patients undergoing laparoscopic surgery showed a close correlation between PtcCO2 and PaCO2, suggesting that PtcCO2 can estimate PaCO2 accurately and could be used as an auxiliary monitoring indicator (PUBMED:36639787). However, other studies have reported less favorable results. One study found that the mean difference between PaCO2 and PtcCO2 was 9 mm Hg, with only 36.3% of the measurement differences within 5 mm Hg, leading to the conclusion that PtcCO2 measured by the TCM4 device could not replace PaCO2 obtained by arterial blood gas analysis (PUBMED:26718224). Another study showed moderate bias with wide limits of agreement when comparing TcPCO2 to PaCO2 in stable, spontaneously breathing, nonintubated patients in the early postoperative period, indicating that TcPCO2 may not provide an accurate quantitative estimation of PaCO2 in these patients (PUBMED:18612267). In summary, while some studies support the use of PtcCO2 as a surrogate for PaCO2 in spontaneously breathing emergency patients, others suggest that it may not always provide an accurate estimation. The variability in results may be due to differences in patient populations, underlying conditions, and the devices used for measurement.
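A note on the method shared by these agreement studies: Bland-Altman analysis summarizes paired measurements as a bias (the mean of the PaCO2 - PtcCO2 differences) and 95% limits of agreement (bias ± 1.96 × the standard deviation of the differences), often alongside the proportion of pairs falling within a clinical threshold such as 5 mm Hg. The following minimal Python sketch illustrates the computation; the paired readings are hypothetical and serve only to show the arithmetic.

```python
import numpy as np

def bland_altman(pa, ptc):
    """Bias, 95% limits of agreement, and fraction within 5 mm Hg."""
    diff = np.asarray(pa, float) - np.asarray(ptc, float)  # PaCO2 - PtcCO2
    bias = diff.mean()                     # systematic offset between methods
    half_width = 1.96 * diff.std(ddof=1)   # half-width of the 95% limits
    within_5 = np.mean(np.abs(diff) <= 5)  # agreement within 5 mm Hg
    return bias, bias - half_width, bias + half_width, within_5

# Hypothetical paired CO2 readings in mm Hg (illustration only).
pa_co2 = [42, 55, 38, 61, 47, 70, 52, 44]
ptc_co2 = [40, 57, 39, 58, 49, 66, 54, 43]

bias, lower, upper, frac = bland_altman(pa_co2, ptc_co2)
print(f"bias = {bias:.1f} mm Hg, 95% LoA = ({lower:.1f}, {upper:.1f}), "
      f"within 5 mm Hg: {frac:.0%}")
```

Viewed through this summary, the opposing conclusions above follow from the numbers rather than from the method: the TOSCA study reported a bias of 1 mm Hg with limits spanning about 9 mm Hg (PUBMED:22795407), whereas the TCM4 study reported a bias of 9 mm Hg with limits spanning roughly 60 mm Hg (PUBMED:26718224).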
Instruction: Do North Atlantic eels show parallel patterns of spatially varying selection? Abstracts: abstract_id: PUBMED:36858982 Far-field analytical solutions of the non-homogeneous Helmholtz and wave equations for spatially non-localized sources. Non-localized impulsive sources are ubiquitous in underwater acoustic applications. However, analytical expressions of their acoustic field are usually not available. In this work, far-field analytical solutions of the non-homogeneous scalar Helmholtz and wave equations are developed for a class of spatially extended impulsive sources. The derived expressions can serve as benchmarks to verify the accuracy of numerical solvers. abstract_id: PUBMED:27783621 Spatially Explicit Analysis of Genome-Wide SNPs Detects Subtle Population Structure in a Mobile Marine Mammal, the Harbor Porpoise. The population structure of the highly mobile marine mammal, the harbor porpoise (Phocoena phocoena), in the Atlantic shelf waters follows a pattern of significant isolation-by-distance. The population structure of harbor porpoises from the Baltic Sea, which is connected with the North Sea through a series of basins separated by shallow underwater ridges, however, is more complex. Here, we investigated the population differentiation of harbor porpoises in European Seas with a special focus on the Baltic Sea and adjacent waters, using a population genomics approach. We used 2872 single nucleotide polymorphisms (SNPs), derived from double digest restriction-site associated DNA sequencing (ddRAD-seq), as well as 13 microsatellite loci and mitochondrial haplotypes for the same set of individuals. Spatial principal components analysis (sPCA) and Bayesian clustering on a subset of SNPs suggest three main groupings at the level of all studied regions: the Black Sea, the North Atlantic, and the Baltic Sea. Furthermore, we observed a distinct separation of the North Sea harbor porpoises from the Baltic Sea populations, and identified splits between porpoise populations within the Baltic Sea. We observed a notable distinction between the Belt Sea and the Inner Baltic Sea sub-regions. Improved delineation of harbor porpoise population assignments for the Baltic based on genomic evidence is important for conservation management of this endangered cetacean in threatened habitats, particularly in the Baltic Sea proper. In addition, we show that SNPs outperform microsatellite markers and demonstrate the utility of RAD-tags from a relatively small, opportunistically sampled cetacean sample set for population diversity and divergence analysis. abstract_id: PUBMED:19388831 Survival and growth of Salmonella Enteritidis in liquid egg products varying by temperature, product composition, and carbon dioxide concentration. Cryogenic cooling of shell eggs with carbon dioxide (CO(2)) is known to improve egg content quality through rapid cooling as well as by increasing internal CO(2) levels. A study was undertaken to determine the effects of variations in atmospheric CO(2) concentrations (aerobically stored, flushed with CO(2) and sealed, or bubbled with CO(2)) on the survival and growth of Salmonella Enteritidis in liquid egg products including whole egg, albumen, yolk, and albumen + 1% yolk. Egg products were inoculated with a three-strain composite of Salmonella Enteritidis at ca. 4 log colony-forming units (CFU)/mL and stored at 7 degrees C or 10 degrees C for 8 or 4 days, respectively, or at ca.
2 log CFU/mL and stored at 23 degrees C and 37 degrees C for 48 or 24 hours, respectively. Salmonella populations differed based on variations in liquid egg composition (p < 0.05). Manipulating the atmospheric concentrations of CO(2) in which liquid egg products were stored did not significantly inhibit the growth of Salmonella Enteritidis (p > 0.05) in yolk-containing egg products or affect the inhibitory activity of albumen-containing products. Populations of Salmonella were static at 7 degrees C over the entire storage period and significant growth occurred in whole egg and yolk stored at 10 degrees C. Populations in egg stored at 23 degrees C and 37 degrees C were greater in yolk than in whole egg, although whole egg had populations greater than in albumen or albumen + 1% yolk (p < 0.05). Results of this investigation suggest that increasing atmospheric CO(2) to enhance egg quality should not promote the growth of Salmonella Enteritidis in eggs. abstract_id: PUBMED:35028495 Which factor explains the life-history of Xanthium strumarium L., an aggressive alien invasive plant species, along its altitudinal gradient? Invasive biology acknowledges the concept of better performance by invasive plants in the introduced range. Xanthium strumarium L. is one of the successful invasive species in Khyber Pakhtunkhwa, Pakistan. The phenological pattern of the species and the plasticity of its vegetative and reproductive traits were analysed to explain its invasive success across the altitudinal gradient in the currently invaded habitats. Phenological patterns and timing (seedling, vegetative growth, flowering and fruiting, drying, and seed bank) were observed during a full year for two seasons. We also examine plant functional traits at altitudes of 500, 1000, and 1500 m a.s.l. to assess trait and biomass variations. X. strumarium exhibits late vegetative and reproductive phenology at higher altitudes, enabling it to occupy an empty niche and benefit from decreased competition for resource acquisition. Lower-altitude plants show a higher growth rate (stem size increase, number of leaves, and leaf area) due to the higher nutrient availability. Higher-altitude plants have the highest reproductive biomass and biomass ratio, revealing that the plant is reproductively adapted to higher altitudes. Among climatic variables, mean yearly temperature, mean annual humidity, and mean day length in hours, and among soil variables, organic matter and nitrogen percentage, significantly affect the phenological and morphological stages. Therefore, we conclude that X. strumarium can invade higher altitudes through a shift in its phenological and morphological traits, making the invasion process successful.
Analyses of these viruses, and hybrids created by exchanging portions of the capsid gene, identified codon 56 in VP3 (3056) as the critical determinant of both cell tropism and plaque phenotype. Specifically, the CHO growth/clear-plaque phenotype is dependent on the presence of the highly charged Arg residue at 3056, and viruses with this phenotype and genotype were selected during propagation in tissue culture. The genetically engineered Arg 3056 virus was highly attenuated in bovines, but viruses recovered from animals inoculated with high doses of this virus had lost the ability to grow in CHO cells and contained either an uncharged residue at 3056 or a negatively charged Glu substituted for a Lys at a spatially and antigenically related position on VP2 (2134). Comparison of these animal-derived viruses to other natural and engineered viruses demonstrated that positively charged residues are required at both 2134 and 3056 for binding to heparin. Taken together, these results indicate that in vitro cultivation of FMDV type O selects viruses that bind to heparin and that viruses with the heparin-binding phenotype are attenuated in the natural host. abstract_id: PUBMED:34070575 Local Seal or Imported Meat? Sustainability Evaluation of Food Choices in Greenland, Based on Life Cycle Assessment. The challenge of achieving a sustainable global food chain is becoming particularly acute as modern Western diets are adopted in a growing number of countries and cultures around the world. Understanding the consequences that this shift has on health and sustainability is important. This exploratory study is the first to apply the life cycle assessment (LCA) methodology to analyze the sustainability implications of ongoing dietary shifts in Greenland, where locally hunted seal meat is increasingly being replaced by imported livestock products, primarily pig and poultry produced in Denmark. This dietary shift, indirectly driven by international trade bans such as the EU seal product ban, has sustainability implications. To inform and support more comprehensive analyses and policy discussions, this paper explores the sustainability of these parallel Greenlandic food supply chains. A quantitative comparison of the greenhouse gas emissions of Greenlandic hunted seal and Danish pig and poultry is complemented by a qualitative discussion of nutrition, cultural food preferences, animal welfare, and the use of land, pesticides and antibiotics. Although the variability in the life cycle inventory data collected from Greenlandic hunters was considerable, greenhouse gas emissions of seal meat were consistently lower than those of imported livestock products. Emissions of the latter are dominated by biogenic emissions from feed production and manure management, while these are absent for seal meat, whose emissions instead are dominated by fossil fuel use. The implications of these results for sustainable national food policies in a modern global context as well as important areas for additional research are discussed. abstract_id: PUBMED:26860200 Genome-wide prediction models that incorporate de novo GWAS are a powerful new tool for tropical rice improvement. To address the multiple challenges to food security posed by global climate change, population growth and rising incomes, plant breeders are developing new crop varieties that can enhance both agricultural productivity and environmental sustainability. Current breeding practices, however, are unable to keep pace with demand.
Genomic selection (GS) is a new technique that helps accelerate the rate of genetic gain in breeding by using whole-genome data to predict the breeding value of offspring. Here, we describe a new GS model that combines RR-BLUP with markers fit as fixed effects selected from the results of a genome-wide association study (GWAS) on the RR-BLUP training data. We term this model GS + de novo GWAS. In a breeding population of tropical rice, GS + de novo GWAS outperformed six other models for a variety of traits and in multiple environments. On the basis of these results, we propose an extended, two-part breeding design that can be used to efficiently integrate novel variation into elite breeding populations, thus expanding genetic diversity and enhancing the potential for sustainable productivity gains. abstract_id: PUBMED:19435646 Students in the BN program--do their perceptions change? Perceptions of nursing have been linked to students' decisions to enter the nursing profession and their decision to continue in or withdraw from nursing programs. As part of a longitudinal study, the Nursing Attitude Questionnaire and Nursing Orientation Tool were used to explore the perceptions of nursing of students in a baccalaureate nursing program. The study began with 213 students, of whom 140 completed the study. The Nursing Attitude Questionnaire (NAQ) measures attitudes towards nursing using statements that reflect nursing roles, values, responsibilities, characteristics of nurses/nursing, professionalism, and stereotypes of society. The Nursing Orientation Tool measures students' orientation to nursing using three sub-scales: caring, nursing expertise, and life orientation. Overall, the students held a positive image of nursing and total scores for the NAQ became significantly more positive as students progressed in the program. Life orientation continued to be the most dominant orientation to nursing over the four years. abstract_id: PUBMED:38428179 Satellite imagery as a management tool for monitoring water clarity across freshwater ponds on Cape Cod, Massachusetts. Water clarity serves as both an indicator and a regulator of biological function in aquatic systems. Large-scale, consistent water clarity monitoring is needed for informed decision-making. Inland freshwater ponds and lakes across Cape Cod, a 100-km peninsula in Massachusetts, are of particular interest for water clarity monitoring. Secchi disk depth (SDD), a common measure of water clarity, has been measured intermittently for over 200 Cape Cod ponds since 2001. Field-measured SDD data were used to estimate SDD from satellite data, leveraging the NASA/USGS Landsat Program and Copernicus Sentinel-2 mission, spanning 1984 to 2022. Random forest machine learning models were generated to estimate SDD from satellite reflectance data and maximum pond depth. Spearman rank correlations (rs) were "strong" for Landsat 5 and 7 (rs = 0.78 and 0.79), and "very strong" for Landsat 8, 9, and Sentinel-2 (rs = 0.83, 0.86, and 0.80). Mean absolute error also indicated strong predictive capacity, ranging from 0.65 to 1.05 m, while average bias ranged from -0.20 to 0.06 m. Long- and recent short-term changes in satellite-estimated SDD were assessed for 193 ponds, selected based on surface area and the availability of maximum pond depth data. Long-term changes between 1984 and 2022 established a retrospective baseline using the Mann-Kendall test for trend and Theil-Sen slope.
Generally, long-term water clarity improved across the Cape; 149 ponds indicated increasing water clarity, and 8 indicated deteriorating water clarity. Recent short-term changes between 2021 and 2022, assessed using the Mann-Whitney U test, identified ponds that may benefit from targeted management efforts. Between 2021 and 2022, 96 ponds indicated deteriorations in water clarity, and no ponds improved in water clarity. While the 193 ponds analyzed here constitute only one quarter of Cape Cod ponds, they represent 85% of its freshwater surface area, providing the most spatially and temporally comprehensive assessment of Cape Cod ponds to date. Efforts are focused on Cape Cod, but the approach can be applied to other areas given the availability of local field data. This study defines a framework for monitoring and assessing change in satellite-estimated SDD, which is important for both local and regional management and resource prioritization. abstract_id: PUBMED:22417595 Evaluating the effect of temperature on microbial growth rate--the Ratkowsky and a Bělehrádek-type models. The objective of this paper was to conduct a parallel comparison of a new Bělehrádek-type growth rate model (with an exponent of 1.5, i.e., the Huang model), the Ratkowsky square-root equation, and the Ratkowsky square equation as secondary models for evaluating the effect of temperature on the growth of microorganisms. Growth rates of psychrotrophs and mesophiles were selected from the literature, and independently analyzed with the 3 models using nonlinear regression. Analysis of variance (ANOVA) was used to compare the means of growth rate (μ), estimated minimum temperature (T(min)), approximate standard errors (SE) of T(min), model mean square errors (MSE), accuracy factor (A(f)), bias factor (B(f)), relative residual errors (δ), Akaike information criterion (AICc), and Bayesian information criterion (BIC). Based on the estimated T(min) values, the Huang model distinctively classified the bacteria into 2 groups (psychrotrophs and mesophiles). No significant difference (P > 0.05) was observed among the means of the μ values reported in the literature or estimated by the 3 models, suggesting that all 3 models were suitable for curve fitting. Nor was there any significant difference in MSE, SE, δ, A(f), B(f), AICc, and BIC. The T(min) values estimated by the Huang model were significantly higher than those estimated by the Ratkowsky models, but were in closer agreement with the biological minimum temperatures for both psychrotrophs and mesophiles. The T(min) values estimated by the Ratkowsky models systematically underestimated the minimum growth temperatures. In addition, statistical estimation showed that the mean exponent for the new Bělehrádek-type growth rate model may indeed be 1.5, further supporting the validity of the Huang model. Answer: The provided abstracts do not contain information regarding North Atlantic eels or patterns of spatially varying selection in this species. The abstracts cover a range of topics including underwater acoustics, population genetics of harbor porpoises, the survival and growth of Salmonella in egg products, the life-history traits of an invasive plant species, tissue culture adaptation of foot-and-mouth disease virus, sustainability of food choices in Greenland, perceptions of nursing students, monitoring water clarity using satellite imagery, and microbial growth rate models. None of these topics are directly related to North Atlantic eels or spatially varying selection in these organisms.
Therefore, based on the provided abstracts, it is not possible to answer the question about North Atlantic eels showing parallel patterns of spatially varying selection.
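As a methodological aside, the growth-model comparison in the final abstract (PUBMED:22417595) rests on fitting secondary models to growth-rate-versus-temperature data by nonlinear regression and comparing the estimated minimum growth temperatures (T(min)). The sketch below shows such a fit in Python under common parameterizations of the two model families (Ratkowsky square-root: sqrt(μ) = b(T - Tmin); Bělehrádek-type with exponent 1.5, i.e., the Huang form: μ = a(T - Tmin)^1.5); the exact forms used in the paper may differ, and the temperature-growth data here are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def ratkowsky_sqrt(T, b, Tmin):
    """Ratkowsky square-root model: sqrt(mu) = b * (T - Tmin)."""
    return (b * np.clip(T - Tmin, 0.0, None)) ** 2

def belehradek_huang(T, a, Tmin):
    """Belehradek-type model with exponent 1.5: mu = a * (T - Tmin)**1.5."""
    return a * np.clip(T - Tmin, 0.0, None) ** 1.5

# Hypothetical growth rates (1/h) at several temperatures (deg C).
T = np.array([10.0, 15.0, 20.0, 25.0, 30.0, 35.0, 40.0])
mu = np.array([0.02, 0.08, 0.18, 0.32, 0.52, 0.75, 1.02])

for model, p0, name in [(ratkowsky_sqrt, (0.03, 5.0), "Ratkowsky sqrt"),
                        (belehradek_huang, (0.005, 5.0), "Huang (d = 1.5)")]:
    params, _ = curve_fit(model, T, mu, p0=p0)
    print(f"{name}: coefficient = {params[0]:.4f}, Tmin = {params[1]:.1f} C")
```

The abstract's central finding, that the Huang model's estimated T(min) values sit closer to observed biological minimum growth temperatures than the Ratkowsky estimates, is read directly off such fitted parameters.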
Instruction: Do pressure ulcers influence length of hospital stay in surgical cardiothoracic patients? Abstracts: abstract_id: PUBMED:19220621 Do pressure ulcers influence length of hospital stay in surgical cardiothoracic patients? A prospective evaluation. Aim And Objective: The aim and objective of this study was to determine whether the occurrence of pressure ulcers following cardiothoracic surgery increases the length of hospitalisation. Background: Literature suggests that a pressure ulcer extends the length of hospital stay. The impact of pressure ulcers on length of hospital stay after cardiothoracic surgery is yet undetermined. Design: Prospective follow-up study. Methods: Two hundred and four patients admitted for elective cardiothoracic surgery and with an intensive care unit stay of >48 hours were included in a prospective cohort study. The course of the skin condition in relation to pressure ulcers was monitored during their stay in a teaching hospital. Length of hospital stay was compared between the group with and without pressure ulcers. Results: Hospital stay for cardiothoracic patients with and without pressure ulcers did not differ significantly (p = 0.23). Patients who developed pressure ulcers had a median hospital stay of 13 days (interquartile range: 9-19) vs. 12 days (interquartile range: 7-15) for patients without pressure ulcers. However, we observed that length of stay in the intensive care unit was significantly (p = 0.005) longer for patients with pressure ulcers. This was not associated with the occurrence of complications. However, the occurrence of pressure ulcers was significantly correlated with length of intensive care support in postoperative care. Conclusion: Non-complicated pressure ulcers in postsurgery patients do not significantly increase the length of total hospital stay. Relevance To Clinical Practice: Health professionals need to recognise that patients who have undergone major surgery (such as cardiothoracic surgery) are prone to develop pressure ulcers. Despite the fact that pressure ulcers do not necessarily extend hospital stay, monitoring the skin condition remains of crucial importance for prevention and early detection as well as treatment of pressure ulcers and to minimise patient discomfort. Moreover, as pressure ulcers often originate in the operating room, prevention in the operating room should receive more attention. abstract_id: PUBMED:27487753 Evaluation of a Progressive Mobility Protocol in Postoperative Cardiothoracic Surgical Patients. Cardiothoracic surgical patients are at high risk for complications related to immobility, such as increased intensive care and hospital length of stay, intensive care unit readmission, pressure ulcer development, and deep vein thrombosis/pulmonary embolus. A progressive mobility protocol was started in the thoracic cardiovascular intensive care unit in a rural academic medical center. The purpose of the progressive mobility protocol was to increase mobilization of postoperative patients and decrease complications related to immobility in this unique patient population. A matched-pairs design was used to compare a randomly selected sample of the preintervention group (n = 30) to a matched postintervention group (n = 30). The analysis compared outcomes including intensive care unit and hospital length of stay, intensive care unit readmission occurrence, pressure ulcer prevalence, and deep vein thrombosis/pulmonary embolism prevalence between the 2 groups.
Although this comparison does not achieve statistical significance at the P < .05 level for any of the outcomes measured, it does show clinical significance: a reduction in hospital length of stay, intensive care unit days, and intensive care unit readmission rate, and a decline in pressure ulcer prevalence, which is the overall goal of progressive mobility. This study has implications for nursing, hospital administration, and therapy services with regard to staffing and cost savings related to fewer complications of immobility. Future studies with a larger sample size and other populations are warranted. abstract_id: PUBMED:10326353 Pressure ulcers, hospital complications, and disease severity: impact on hospital costs and length of stay. Objective: To determine whether or not the development of a Stage II or greater pressure ulcer in-hospital is associated with increased hospital costs and length of stay after adjusting for admission severity of illness, comorbidities, nosocomial infections, and other hospital complications. Design: Prospective, inception cohort study. Setting: Tertiary care, urban, university teaching hospital. Participants: 286 patients identified within 3 days of admission to a tertiary care, urban teaching hospital were enrolled in a prospective, inception cohort study. Patients were age 55 or greater; expected to be confined to bed or chair or with a hip fracture; and expected to remain in hospital at least 5 days. Measurements: Baseline data were collected within 3 days of admission. Weekly skin assessments were performed by study nurses to document the development of pressure ulcers. Medical record reviews, patient exams, and physician and nurse interviews were used to obtain baseline demographic, medical, functional, nutritional, and global measures of disease severity. The incidence of nosocomial infections and the number of other hospital complications were monitored by medical record reviews. Hospital costs were estimated using category-specific cost-to-charge ratios. Diagnostic-related group (DRG) adjusted length of stay was calculated by subtracting the mean length of stay for assigned DRGs from actual stays. Results: Incident pressure ulcers were associated with significantly higher mean unadjusted hospital costs ($37,288 vs $13,924, P = 0.0001) and length of stay (30.4 vs 12.8 days, P = 0.0001). In addition to pressure ulcers, other independent predictors of hospital costs and length of stay after multivariable analyses included admission to an intensive care unit or surgical service, younger age, nosocomial infection, the physician assessment of disease severity, and the number of other hospital complications. Compared with those who did not develop pressure ulcers, patients who developed pressure ulcers also were more likely to develop nosocomial infections (45.9% [17/37] vs 20.1% [50/249], P = 0.001) and other hospital complications (86.5% [32/37] vs 43.0% [107/249], P < 0.001). After adjusting for only the admission predictors of costs and length of stay by multivariable analyses, hospital costs and length of stay for those who developed pressure ulcers remained significantly greater than for those who did not develop pressure ulcers ($14,260 vs $12,382, P = 0.03, and 16.9 vs 12.9 days, P = 0.02, respectively).
The differences in costs and length of stay for those with and without incident pressure ulcers were even greater when adjusted for admission predictors and also the occurrence of nosocomial infections and other complications ($29,048 vs $13,819, P = 0.002, and 20.9 vs 12.7 days, P = 0.0001, respectively). Conclusion: Incident pressure ulcers are associated with substantial and significant increases in hospital costs and length of stay. Nosocomial infections and other hospital complications are additional significant independent predictors of health care utilization among patients at risk for pressure ulcers. abstract_id: PUBMED:22498229 The implications of hospital acquired adverse events on mortality, length of stay and costs for patients undergoing radical cystectomy for bladder cancer. Purpose: The incidence of hospital acquired adverse events in radical cystectomy and their implications for hospital outcomes and costs remain poorly described. We describe the incidence of hospital acquired adverse events in radical cystectomy, and characterize their relationship with in-hospital mortality, length of stay and hospitalization costs. Materials And Methods: We identified 10,856 patients who underwent radical cystectomy for bladder cancer at 1,175 hospitals in the Nationwide Inpatient Sample from 2001 to 2008. We used hospital claims to identify adverse events for accidental puncture, decubitus ulcer, deep vein thrombosis/pulmonary embolus, methicillin-resistant Staphylococcus aureus, Clostridium difficile, surgical site infection and sepsis. Logistic regression and generalized estimating equation models were used to test the associations of hospital acquired adverse events with mortality, predicted prolonged length of stay and total hospitalization costs. Results: Hospital acquired adverse events occurred in 11.3% of all patients undergoing radical cystectomy (1,228). Adverse events were associated with a higher odds of in-hospital death (OR 8.07, p < 0.001), adjusted prolonged length of stay (41.3%) and total costs ($54,242 vs $26,306; p < 0.001) compared to no adverse events on multivariate analysis. The incremental total costs attributable to hospital acquired adverse events were $43.8 million. Postoperative sepsis was associated with the highest risk of mortality (OR 17.56, p < 0.001), predicted prolonged length of stay (62.22%) and adjusted total cost ($79,613). Conclusions: With hospital acquired adverse events occurring in approximately 11% of radical cystectomy cases, they pose a significant risk of in-hospital mortality and higher hospitalization costs. Therefore, increased attention is needed to reduce adverse events by improving patient safety, while understanding the economic implications for tertiary referral centers with possible policy changes such as denial of payment for hospital acquired adverse events. abstract_id: PUBMED:38463526 Pressure ulcers in German hospitals: Analysis of reimbursement and length of stay. Objective: Hospital-acquired pressure ulcers are an important indicator of the quality of care. Most pressure ulcers are avoidable with a robust protocol for prevention, but prevention activities often have a low priority for senior management because the true costs to the hospital are not visible. Our aim was to raise awareness of the value of pressure ulcer prevention by estimating the excess length of inpatient stay associated with hospital-acquired pressure ulcers, and by assessing whether additional costs are covered by increased reimbursement.
Methods: National activity data for hospitals in Germany are available through the InEK Data Browser. Data were extracted covering discharges from German hospitals between January 1 and December 31, 2021. Cases were selected according to the presence of a pressure ulcer diagnosis using ICD-10-GM codes L89.0-L89.3. Information on mean length of stay and average reimbursement was extracted for the ten most common German Diagnosis-Related Group (G-DRG) codes in patients with a secondary pressure ulcer diagnosis. Ulcer-associated excess length of stay was estimated by comparing cases within the same G-DRG with and without a pressure ulcer diagnosis. Results: Mean length of stay was higher in patients with a pressure ulcer than in patients with no ulcer by between 1.9 (all ages) and 2.4 days (patients aged ≥65) per case. In patients aged ≥65 years, 22.1% of cases with a pressure ulcer had a length of stay above the norm for the DRG. In the German system, length of stay above the norm is not normally reimbursed. Excess length of stay between 1.9 and 2.4 days leads to a potential cost to a hospital of between 1,633€ and 2,074€ per case. Conclusion: Hospital-acquired pressure ulcers represent an important source of cost for a hospital, which highlights the potential value of effective prevention. abstract_id: PUBMED:15796283 Effect of pressure ulcers on length of hospital stay. Objective: To identify the independent effect of pressure ulcers on excess length of stay and control for all observable factors that may also contribute to excess length of stay. Hospitalized patients who develop a pressure ulcer during their hospital stay are at a greater risk for increased length of stay as compared with patients who do not. Design: Cross-sectional, observational study. Setting: Tertiary-care referral and teaching hospital in Australia. Patients: Two thousand hospitalized patients 18 years and older who had a minimum stay in the hospital of 1 night and admission to selected clinical units. Methods: Two thousand participants were randomly selected from 4,500 patients enrolled in a prospective survey conducted between October 2002 and January 2003. Quantile median robust regression was used to assess risk factors for excess length of hospital stay. Results: Having a pressure ulcer resulted in a median excess length of stay of 4.31 days. Twenty other variables were statistically significant at the 5% level in the final model. Conclusions: Pressure ulcers make a significant independent contribution to excess length of hospitalization beyond what might be expected based on admission diagnosis. However, our estimates were substantially lower than those currently used to make predictions of the economic costs of pressure ulcers; existing estimates may overstate the true economic cost. abstract_id: PUBMED:26204450 The Determinants of Costs and Length of Stay for Hip Fracture Patients. Background And Purpose: An ageing population at greater risk of proximal femoral fracture places an additional clinical and financial burden on hospital and community medical services. We analyse the variation in i) length of stay (LoS) in hospital and ii) costs across the acute care pathway for hip fracture from emergency admission to hospital stay and follow-up outpatient appointments.
Patients And Methods: We analyse patient-level data from England for 2009/10 for around 60,000 hip fracture cases in 152 hospitals using a random effects generalized linear multi-level model where the dependent variable is given by the patient's cost or length of stay (LoS). We control for socio-economic characteristics, type of fracture and intervention, co-morbidities, discharge destination of patients, and quality indicators. We also control for provider and social care characteristics. Results: Older patients and those from more deprived areas have higher costs and LoS, as do those with specific co-morbidities or who develop pressure ulcers, and those transferred between hospitals or readmitted within 28 days. Costs are also higher for those having a computed tomography (CT) scan or cemented arthroplasty. Costs and LoS are lower for those admitted via a 24h emergency department, receiving surgery on the same day of admission, and discharged to their own homes. Interpretation: Patient and treatment characteristics are more important as determinants of cost and LoS than provider or social care factors. A better understanding of the impact of these characteristics can support providers in developing treatment strategies and pathways to better manage this patient population. abstract_id: PUBMED:33098399 Association of patient safety indicator 03 and clinical outcome in a surgery hospital. Purpose: Patient safety indicators (PSIs) were developed as a tool for hospitals to identify potentially preventable complications and improve patient safety performance. The study aimed to measure the incidence of the Agency for Healthcare Research and Quality (AHRQ) PSI03 (pressure ulcer [PU] rate) and to identify the association between PSI03 and clinical outcomes including death, readmission within 30 days and length of stay (LOS) at the cardiothoracic surgery hospital at Ain Shams University, Cairo, Egypt. Design/methodology/approach: An exploratory prospective cohort study was conducted to follow up patients who fulfilled the inclusion criteria from admission until one month after discharge at the cardiothoracic surgery hospital. Data were collected through basic information and follow-up sheets. The total number of included participants in the study was 330. Findings: PSI03 incidence rate was 67.7 per 1,000 discharges. Patients aged 60 years and above had the highest risk among all age groups. In patients who developed PSI03, the risk ratio (RR) of death was 8.8 [95% CI (3.79-20.24)], the RR of staying more than 30 days at the hospital was 1.5 [95% CI (1.249-1.872)], and the RR of readmission within 30 days was 1.5 [95% CI (0.38-6.15)]. In the study's hospital, the patients who developed PSI03 were at higher risk of death and stayed longer at the hospital than patients without PSI03. This study demonstrated a clear association between PSI03 and patient outcomes such as LOS and mortality. Early detection, prevention and proper management of PSI03 are recommended to decrease unfavorable clinical outcomes. Originality/value: The importance of PSIs lies in the fact that they facilitate the recognition of the adverse events and complications which occurred during hospitalization and give the hospitals a chance to improve the possible clinical outcomes.
Therefore, the current study aimed to measure the association between AHRQ PSI03 (PU rate) and the clinical outcomes including death, readmission within 30 days and the LOS at the cardiothoracic surgery hospital at Ain Shams University. This study will provide the hospital management with baseline data for this type of adverse event and guide them to develop a system for identifying the high-risk group of patients and to upgrade relevant hospital policies and guidelines that lead to improved patient outcomes. abstract_id: PUBMED:34996571 Prevalence of Prolonged Length of Stay in an Emergency Department in Urban Denmark: A Retrospective Health Records Repository Review. Introduction: Prolonged length of stay in emergency departments is associated with increased hospitalization, hospital-acquired pressure ulcers, medication errors, and mortality. In acute admissions in Denmark in 2018, 67% of patients experienced waiting time from arrival to examination. This study aimed to estimate the prevalence of prolonged length of stay (≥6 hours) and identify risk factors related to input, throughput, and output components. Methods: A retrospective health records repository review included 4743 patients admitted to a single urban emergency department in Denmark in January 2019. Data collected from the electronic health record system repository included demographic and organizational characteristics and were analyzed using descriptive statistics and logistic regression. Results: Among patients admitted in the study period, 31% had a prolonged length of stay of ≥6 hours. Prolonged length of emergency department stay was associated with being female (male odds ratio [OR], 0.86; 95% confidence interval [CI], 0.75-0.98), treatment by medical service (OR, 4.25, 95% CI, 3.63-4.98) vs surgical or injury, triage acuity of 2-Orange (OR, 1.45; 95% CI, 1.18-1.78) or 3-Yellow (OR, 1.47; 95% CI, 1.23-1.75) on a 5-level scale, evening (OR, 1.44; 95% CI, 1.24-1.66) or night (OR, 2.36; 95% CI, 1.91-2.91) arrival, ages 56 to 80 (OR, 1.79; 95% CI, 1.52-2.11) and >81 (OR, 2.40; 95% CI, 1.99-2.88) years, and hospital admission (OR, 1.19; 95% CI, 1.04-1.38) vs discharge from the emergency department to home. Discussion: Female sex, older age, and treatment by a medical service were each identified as at-risk characteristics for ≥6-hour length of stay in the emergency department. Acute care patient pathways in the emergency department, particularly for evening and night arrivals, with guideline-based care and system-level improvements in patient flow, are warranted. Further research with larger populations is needed to identify and support interventions to decrease prolonged length of stay.
We use administrative hospital episode data for 206,489 medical inpatients in all public hospitals in the state of Victoria, Australia, for the year 2005/2006. Results: A hospital stay carries a 5.5% risk of an adverse drug reaction, 17.6% risk of infection, and 3.1% risk of ulcer for an average episode, and each additional night in hospital increases the risk by 0.5% for adverse drug reactions, 1.6% for infections, and 0.5% for ulcers. Length of stay is endogenous in models of adverse events, and risks would be underestimated if length of stay was treated as exogenous. Conclusions: The results of our research contribute to assessing the benefits and costs of hospital stays (and their alternatives) in a quantitative manner. Instead of discharging patients early to alternative care, it would be more desirable to address underlying causes of adverse events. However, this may prove costly, difficult, or impossible, at least in the short run. In such situations, our research supports hospital managers in making informed treatment and discharge decisions. Answer: The influence of pressure ulcers on the length of hospital stay in surgical cardiothoracic patients appears to be variable based on the studies provided. One study that specifically focused on cardiothoracic surgery patients found that non-complicated pressure ulcers did not significantly increase the total length of hospital stay, although they were associated with a longer stay in the intensive care unit (ICU) (PUBMED:19220621). Another study on postoperative cardiothoracic surgical patients implementing a progressive mobility protocol suggested clinically meaningful reductions in hospital length of stay and pressure ulcer prevalence, although the results did not achieve statistical significance (PUBMED:27487753). In contrast, a study examining the impact of hospital complications, including pressure ulcers, on hospital costs and length of stay found that incident pressure ulcers were associated with significantly higher unadjusted hospital costs and length of stay (PUBMED:10326353). Similarly, another study on patients undergoing radical cystectomy for bladder cancer indicated that hospital-acquired adverse events, including pressure ulcers, were associated with higher odds of in-hospital death, adjusted prolonged length of stay, and increased hospitalization costs (PUBMED:22498229). An analysis of reimbursement and length of stay in German hospitals also highlighted that hospital-acquired pressure ulcers led to an excess length of stay, which may not be fully covered by increased reimbursement, thus representing an important source of cost for hospitals (PUBMED:38463526). Additionally, a study in an Australian hospital found that having a pressure ulcer resulted in a median excess length of stay of 4.31 days (PUBMED:15796283). In the context of hip fracture patients, specific co-morbidities, including the development of pressure ulcers, were associated with higher costs and longer length of stay (PUBMED:26204450). Furthermore, a study in a cardiothoracic surgery hospital in Egypt found that patients who developed pressure ulcers were at higher risk of death and stayed longer in the hospital than patients without pressure ulcers (PUBMED:33098399).
Overall, while one study suggests that pressure ulcers may not significantly extend the total hospital stay for cardiothoracic surgery patients (PUBMED:19220621), the majority of the studies indicate that pressure ulcers are associated with increased length of stay and costs in various patient populations, including surgical cardiothoracic patients (PUBMED:10326353, PUBMED:22498229, PUBMED:38463526, PUBMED:15796283, PUBMED:26204450, PUBMED:33098399).
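To make the per-night risk figures from PUBMED:21945976 concrete, the sketch below projects the reported baseline risks forward under a simple additive assumption. This is illustrative arithmetic only: the study itself used instrumental-variable regression, and treating the per-night increments as linear additions to the baseline is an assumption made here, not a claim from the paper.

# Illustrative arithmetic only: projects the per-episode adverse-event
# risks reported in PUBMED:21945976 (baseline risk for an average episode
# plus the reported per-additional-night increment). The additive form is
# an assumption for illustration.

BASELINE = {"adverse drug reaction": 0.055, "infection": 0.176, "ulcer": 0.031}
PER_NIGHT = {"adverse drug reaction": 0.005, "infection": 0.016, "ulcer": 0.005}

def projected_risk(event: str, extra_nights: int) -> float:
    """Baseline risk plus the reported per-night increment."""
    return BASELINE[event] + PER_NIGHT[event] * extra_nights

for event in BASELINE:
    print(f"{event}: {projected_risk(event, 3):.1%} after 3 extra nights")

Run as-is, this prints the projected risk of each adverse event after three additional nights for an otherwise average episode.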
Instruction: Surgeon-Therapist Communication: Do All Members See Eye-to-Eye? Abstracts: abstract_id: PUBMED:36970500 Binocular see-through configuration and eye movement attenuate visual rivalry in peripheral wearable displays. Visual confusion occurs when two dissimilar images are superimposed onto the same retinal location. In the context of wearable displays, it can be used to provide multiple sources of information to users on top of the real-world scene. While useful, visual confusion may cause visual rivalry that can suppress one of the sources. If two different images are projected to each eye (i.e., monocular displays), it provokes binocular rivalry wherein visual perception intermittently switches between the two images. When a semi-transparent image is superimposed (i.e., see-through displays), monocular rivalry results, causing perceptual alternations between the foreground and the background images. Here, we investigated how these rivalries influence the visibility of the peripheral target using three configurations of wearable displays (i.e., monocular opaque, monocular see-through, and binocular see-through) with three eye movement conditions (i.e., saccades, smooth pursuit, and central fixation). Using the HTC VIVE Eye Pro headset, subjects viewed a forward vection of a 3D corridor with a horizontally moving vertical grating at 10° above the center fixation. During each trial (~1 min), subjects followed a fixation cross that varied in location to induce eye movements and simultaneously reported whether the peripheral target was visible. Results showed that the binocular display had significantly higher target visibility than both monocular displays, and the monocular see-through display had the lowest target visibility. Target visibility was also higher when eye movements were executed, suggesting that the effects of rivalry are attenuated by eye movements and binocular see-through displays. abstract_id: PUBMED:10328388 Eye care utilization by older Americans: the SEE Project. Salisbury Eye Evaluation. Objective: To determine eye care utilization patterns among older Americans, particularly characterizing those who sought different types of providers, and the predictive factors for seeking eye care services in general and among those with diabetes and those with visual loss. Design And Participants: The SEE Project, a population-based survey of 2520 persons aged 65 to 84 in Salisbury, Maryland, provided cross-sectional data on eye care use. Questions on eye care use, demographics, medical history, and other factors were asked on the home interview. Main Outcome Measures: Use of an eye care provider in the previous year, with additional outcomes of use of different types of eye care providers. Results: Blacks were significantly less likely to see any type of eye care provider over 1 year: 50% versus 69% among whites. Those who reported having a vision problem, those with more education, and those in the older age groups were significantly more likely to see either an ophthalmologist or an optometrist. Diabetes and driving a car were predictive factors for seeing an ophthalmologist but not for seeing an optometrist. Self-report of diabetes and eye care problems, and being a current driver, were predictive of seeing an eye care professional among those with visual impairment. Conclusions: Although blacks are known to be at greater risk for several age-related eye diseases, they are much less likely to see an eye care provider. 
Interventions that remove barriers to eye care services should be considered. abstract_id: PUBMED:35743531 Clients in Simulated Teletherapy via Videoconference Compensate for Altered Eye Contact When Evaluating Therapist Empathy. Eye contact is frequently associated with an increased perception of empathy and telepresence, but the currently used videoconferencing (VC) technologies diminish the possibility of naturally conveying eye contact. This study compared the empathy, telepresence, and eye gaze patterns of clients in simulated VC teletherapy sessions where eye contact was altered or facilitated. Forty-two would-be clients met with one of four therapists in training for one 20-min simulated teletherapy session taking place via VC. The session either altered or facilitated eye contact perception by manipulating the positioning of the webcams and of the clients in their chair. Eye-tracking data focusing on the eyes, face, and general body regions of interest were obtained for 25 clients. The results show that facilitating eye contact in VC did not increase the clients' perceptions of empathy or telepresence. However, empathy was associated with greater time spent looking at the eyes and faces of the therapists, but only in the sessions facilitating eye contact. We suggest that clients successfully rely on other verbal and nonverbal cues to detect therapist empathy when eye contact is altered in teletherapy sessions. abstract_id: PUBMED:26060324 The dual nature of eye contact: to see and to be seen. Previous research has shown that physiological arousal and attentional responses to eye contact are modulated by one's knowledge of whether they are seen by another person. Recently it was shown that this 'eye contact effect' can be elicited without seeing another person's eyes at all. We aimed to investigate whether the eye contact effect is actually triggered by the mere knowledge of being seen by another individual, i.e. even in a condition when the perceiver does not see the other person at all. We measured experienced self-awareness and both autonomic and brain activity responses while participants were facing another person (a model) sitting behind a window. We manipulated the visibility of the model and the participants' belief of whether or not the model could see them. When participants did not see the model but believed they were seen by the model, physiological responses were attenuated in comparison to when both parties saw each other. However, self-assessed public self-awareness was not attenuated in this condition. Thus, two requirements must be met for physiological responses to occur in response to eye contact: an experience of being seen by another individual and an experience of seeing the other individual. abstract_id: PUBMED:22696024 What do surgeons see: capturing and synchronizing eye gaze for surgery applications. Recording eye motions in surgical environments is challenging. This study describes the authors' experiences with performing eye-tracking for improving surgery training, both in the laboratory and in the operating room (OR). Three different eye-trackers were used, each with different capabilities and requirements. For monitoring eye gaze shifts over the room scene in a simulated OR, a head-mounted system was used. The number of surgeons' eye glances on the monitor displaying patient vital signs was successfully captured by this system. 
The resolution of the head-mounted eye-tracker was not sufficient to obtain the gaze coordinates in detail on the surgical display monitor. The authors then selected a high-resolution eye-tracker built in to a 17-inch computer monitor that is capable of recording gaze differences with resolution of 1° of visual angle. This system enables one to investigate surgeons' eye-hand coordination on the surgical monitor in the laboratory environment. However, the limited effective tracking distance restricts the use of this system in the dynamic environment in the real OR. Another eye-tracker system was found with equally high level of resolution but with more flexibility on the tracking distance, as the eye-tracker camera was detached from the monitor. With this system, the surgeon's gaze during 11 laparoscopic procedures in the OR was recorded successfully. There were many logistical challenges with unobtrusively integrating the eye-tracking equipment into the regular OR workflow and data processing issues in the form of image compatibility and data validation. The experiences and solutions to these challenges are discussed. abstract_id: PUBMED:17890048 Effects of eye movement versus therapist instructions on the processing of distressing memories. The effectiveness of components of eye movement desensitization and reprocessing (EMDR) was tested by randomly assigning 48 participants to either an eye movement or an eye stationary condition and to one of two types of therapist instructions (reliving or distancing). Participants were university students (mean age 23) who were asked to recall a personal distressing memory with measures of distress and vividness taken before and after treatment, and at follow-up. There was no significant effect of therapist's instruction on the outcome measures. There was a significant reduction in distress for eye movement at post-treatment and at follow-up but overall no significant reduction in vividness. Post hoc analysis revealed a significant reduction in vividness only for the eye movement and distancing instruction condition. The results were consistent with other evidence that the mechanism of change in EMDR is not the same as traditional exposure. abstract_id: PUBMED:35503112 German reference network for rare eye diseases (DRN-EYE). The way to national networking The German reference network for rare eye diseases (DRN-EYE) is a national network of ophthalmologists, who treat patients with rare eye diseases in Germany. The aim of DRN-EYE is to create a transparent structure that facilitates and improves the medical care of rare diseases of the eye for patients and for ophthalmologists. The competence of DRN-EYE lies in the area of anterior eye segment diseases as well as of the retina, neuro-ophthalmology and the orbit. The network also offers open virtual meetings every 3 months to present unsolved clinical cases as well as further education and overview of the currently running clinical trials. abstract_id: PUBMED:8659981 One in the eye for an orthopaedic surgeon. Despite many reports of injuries to surgeons during operative procedures, there is no record of an eye injury caused by a foreign body. Orthopaedic surgeons are particularly vulnerable to such injury. An instance in which a penetrating eye injury occurred while hammering a rasp into the femur during a hip replacement is described. There is a potential oblique trajectory for a foreign body to reach the eye from the operative field despite the use of a visor for eye protection. 
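Several of the eye-tracking findings above (time spent on the therapist's eyes and face in PUBMED:35743531, glances at a vital-signs monitor in PUBMED:22696024) rest on the same basic computation: aggregating gaze samples into dwell time per region of interest (ROI). The sketch below shows that aggregation on made-up data; the sample format, ROI labels, and 50 Hz rate are assumptions for illustration, not details taken from either study.

# A minimal sketch of dwell-time aggregation from eye-tracker samples.
# Input format and ROI labels are assumed, not taken from the studies above.
from collections import defaultdict

def dwell_time(samples, sampling_rate_hz):
    """samples: sequence of ROI labels, one per eye-tracker sample."""
    counts = defaultdict(int)
    for roi in samples:
        counts[roi] += 1
    return {roi: n / sampling_rate_hz for roi, n in counts.items()}

# 50 Hz recording, 1 s of data: 30 samples on the face, 20 on the monitor.
gaze = ["face"] * 30 + ["monitor"] * 20
print(dwell_time(gaze, sampling_rate_hz=50))  # {'face': 0.6, 'monitor': 0.4}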
abstract_id: PUBMED:99602 Case report on "congenital cystic eye" (author's transl) The "congenital cystic eye" is a rare ocular malformation due to an arrest of normal development with complete or partial failure in the invagination of the primary optic vesicle between the third and fifth week of embryonic life. A coincidence of maternal varicella infection and congenital cystic eye is described and the possibility of a causal relationship is discussed. abstract_id: PUBMED:16412916 Surgeon offsets and dynamic eye movements in laser refractive surgery. Purpose: To determine the amount of static and dynamic pupil decentrations that occur during laser refractive surgery. Setting: The Center of Visual Science and the Department of Ophthalmology, University of Rochester, Rochester, New York, USA. Methods: The surgeon's accuracy in aligning the pupil center with the laser center axis was measured when engaging the eye-tracker in 17 eyes receiving conventional laser in situ keratomileusis (LASIK) procedures (Technolas 217z; Bausch &amp; Lomb). Eye movements were measured subsequently during the treatment in 10 eyes using a pupil camera operating at 50 Hz. Temporal power spectra were calculated from the eye movement measurements. Results: The mean pupil misalignment by the surgeon at the beginning of the procedure was 206.1 microm +/- 80.99 (SD) (with respect to the laser center). The laser center was typically misaligned below (inferiorly) and to the left (nasally and temporally in left and right eyes, respectively) of the pupil [corrected] center. Small amounts of cyclotorsion were observed during the ablation (&lt;2 degrees). The mean magnitude of dynamic pupil decentration from the laser center during treatment was 227.0 +/- 44.07 microm. The mean standard deviation of eye movements was 65.7 +/- 25.64 microm. Temporal power spectra calculated from the horizontal and vertical changes in eye position during the ablation were similar. Ninety-five percent of the total power of the eye movements was contained in temporal frequencies up to 1 Hz, on average, in both directions. Conclusions: Most eye movements during LASIK are slow drifts in fixation. An eye-tracker with a 1.4 Hz closed-loop bandwidth could compensate for most eye movements in conventional or customized ablations. Answer: The abstracts provided do not directly address the topic of communication between surgeons and therapists or whether they have a unified perspective ("see eye to eye"). However, several abstracts touch on aspects of vision, eye movement, and eye care that could be tangentially related to the broader topic of visual perception and attention in professional settings. For instance, the study on binocular see-through configuration and eye movement in wearable displays (PUBMED:36970500) suggests that binocular displays can improve target visibility and that eye movements can attenuate the effects of visual rivalry, which could be relevant in contexts where surgeons or therapists use visual aids or monitors during procedures or consultations. The research on eye care utilization by older Americans (PUBMED:10328388) highlights disparities in eye care services, which could indirectly affect communication if visual impairment is not adequately addressed in patient populations. 
The study on simulated teletherapy via videoconference (PUBMED:35743531) discusses how clients compensate for altered eye contact when evaluating therapist empathy, indicating the importance of nonverbal cues in communication, which could be relevant to surgeon-therapist interactions, especially in telehealth settings. The investigation into the dual nature of eye contact (PUBMED:26060324) emphasizes the physiological and attentional responses to being seen, which could influence interpersonal dynamics and communication in professional settings. The experiences with eye-tracking in surgery applications (PUBMED:22696024) could provide insights into how surgeons' visual attention and gaze patterns affect their performance and potentially their communication with other team members. The study on the processing of distressing memories (PUBMED:17890048) and the role of eye movement in EMDR could be tangentially related to the focus and attention aspects of communication between professionals. Lastly, the German reference network for rare eye diseases (PUBMED:35503112) and the case of an orthopaedic surgeon's eye injury (PUBMED:8659981) are more focused on clinical and safety aspects of eye care rather than communication between surgeons and therapists. In summary, while the abstracts provided do not directly address the communication between surgeons and therapists, they do offer insights into various aspects of visual perception and attention that could indirectly influence professional interactions and the ability to "see eye to eye" in a figurative sense.
Instruction: On-field management and return-to-play in sports-related concussion in children: Are children managed appropriately? Abstracts: abstract_id: PUBMED:25772997 On-field management and return-to-play in sports-related concussion in children: Are children managed appropriately? Objectives: On-field management and return-to-play guidelines aim to ensure the identification and appropriate management of the concussed athlete. Compliance with current guidelines in many settings is unknown. We assessed whether key components of current concussion guidelines are being followed in child athletes. Design: Prospective observational study. Methods: Data were collected from children (5-18 years) presenting to a paediatric emergency department with sport-related concussion via researcher-administered surveys in the emergency department and during a follow up phone call. On hospital discharge all patients received a return to sports fact sheet based on the International Concussion in Sports Group. Results: Ninety-three had sustained a concussion (mean age 12.7 (±0.27) years, 83% male). Sports played included Australian Football (47%), soccer (12%), rugby (9%) basketball (8%), other (25%). 82% participated in organised sports. Concussive signs or symptoms included loss of consciousness (41%), disorientation (36%), vomiting (23%), amnesia (30%), headache (60%). For concussive injury in organised sports (n=76), overall 42% were not managed according to recommended guidelines: 19% were not immediately removed from play, 29% were allowed to return to play on the same day and 27% were not assessed by qualified personnel. 93% of parents and 96% of patients were unaware of concussion or return-to-play guidelines from their organisations. Overall, 72% were compliant with provided return-to-play guidelines. Conclusions: Many children with sports related-concussion are not formally assessed on-field and continue to play. On-field concussion management and return to play practices are often suboptimal. Awareness and education of coaches, teachers, parents and children need to be improved. abstract_id: PUBMED:24790785 Pediatric sports specific return to play guidelines following concussion. Purpose/background: In 2010, the American Academy of Pediatrics officially adopted the recommended return to play guidelines proposed by the International Conference on Concussion in Sport. The guidelines include a six-step process that provides structure to guide an athlete who is recovering from a concussion in a gradual return to play (RTP) by allowing participation in increasingly difficult physical activities. Unfortunately, the guidelines fail to take into account the variability that occurs within different sports and the resulting challenges medical professionals face in making sure each athlete is able to withstand the rigors of their specific sport, without return of symptoms. Therefore, the purpose of this clinical commentary is to expand upon the current general consensus guidelines for treatment of concussed pediatric athletes and provide sport specific RTP guidelines. Description Of Topic: The intention of the sport specific guidelines is to maintain the integrity of the current six-step model, add a moderate activity phase highlighted by resistance training, and to provide contact and limited contact drills specific to the athlete's sport and/or position. 
The drills and activities in the proposed seven-step programs are designed to simulate sport specific movements; the sports include: football, gymnastics, cheerleading, wrestling, soccer, basketball, lacrosse, baseball, softball, and ice hockey. These activities will provide sports specific challenges to each athlete while simultaneously accomplishing the objectives of each stage of the RTP progression. The final RTP determination should occur with documented medical clearance from a licensed healthcare provider who has been trained in the evaluation and management of concussions. Discussion/relation To Clinical Practice: There have been significant strides in the management and care of concussed athletes. However, there continues to be a lot of confusion among athletes, parents, and coaches regarding the proper management of an athlete with a concussion, particularly in the pediatric population. In an effort to eliminate ambiguity and help further promote adherence to the RTP guidelines, the authors developed several sports-specific RTP guidelines. Level Of Evidence: 5. abstract_id: PUBMED:30887284 Treatment of Acute Sports-Related Concussion. Purpose Of Review: Here, we summarize the current scientific literature on the management of sports-related concussion (SRC) in the acute period (< 6 weeks post-injury) with a focus on rest, return to learn, return to play, and emerging treatments. Recent Findings: While relative rest is recommended for the first 24-48 h following acute SRC, the most recent guidelines highlight the lack of evidence for complete rest and in fact show that prolonged cognitive and physical rest can be detrimental. Gradual return to learn and play is recommended. Return to sport should only occur once the patient is symptom free. While there are no FDA-approved medications for acute treatment of concussion, there is preclinical data for the benefit of omega-3 fatty acids. Evidence is limited around the benefits of treating sleep disorders, vestibular-ocular dysfunction, and neck pain in the acute period. After 24-48 h of rest, SRC patients may gradually resume cognitive and physical activity. More research is needed to determine if any supplements, medications, and/or physical therapy are indicated in the management of acute SRC. abstract_id: PUBMED:28152320 Fighting to keep a sport safe: toward a structured and sport-specific return to play protocol. Combat sports are growing in popularity, viewership, and participation. The nature of these sports involves repetitive head contact, yet unlike most other professional contact sports, there are no endorsed guidelines or mandates for graduated and systematic return to play following concussion. Here, we review the literature related to concussion and fighting sports, and propose guidelines for concussion management and safe return to play following concussion. abstract_id: PUBMED:33831982 Evidence based management of sports related concussion. Sports related concussion (SRC) is a common condition evaluated by healthcare professionals. In an article entitled "Return to Play After Concussion: Clinical Guidelines for Young Athletes" published in the December 2019 issue of the Journal of the American Osteopathic Association, guidelines for the management of SRC were presented to assist healthcare professionals in the management of patients with SRC. However, much of the information presented in that article is contradicted by current expert recommendations and evidence based practice guidelines.
The management of SRC has evolved to a nuanced, domain driven diagnosis requiring a multidisciplinary treatment team and a customized management plan for each patient to ensure competent treatment of patients with SRC. As such, this Commentary summarizes current recommendations for diagnosis and management of SRC. abstract_id: PUBMED:37413953 Existence ≠ adherence. Exploring barriers to best practice in sports-related concussion return to play (SRC-RTP) in Irish amateur female sport. Objectives: Sports-Related Concussion (SRCs) are of relevant risk and burden for female athletes. The following study aimed to explore potential barriers to return to play (RTP) best practice in coaches of amateur female athletes, and medical professionals who may treat and manage SRCs in amateur female athletes. Design: Qualitative, semi-structured, virtual interviews following a critical analysis framework. Methods: Twenty-five coaches, allied healthcare professionals (AHPs), and general practitioners (GPs) were recruited and interviewed using a convenience snowball sample. Data were transcribed verbatim and thematically analysed. Results: Three themes were identified following reflexive thematic analysis; biopsychosocial norms, stakeholder inaction, and practitioner effectiveness. The findings identify numerous factors which limit the uptake of best practice guidelines as endorsed by Irish national governing bodies (NGBs). Education, training, and implementation of these guidelines are lacking, with sub-standard or non-existent medical support and poor general injury and/or SRC attitudes providing further resistance to these measures. Conclusion: Existence of SRC-RTP protocols does not equal adherence to such. Greater efforts are required to translate knowledge provided by the 6th Concussion Consensus statement. NGBs, league and club administrators, and educators need to better support coaches, practitioners and athletes in the implementation of these protocols in amateur female sport. abstract_id: PUBMED:30675103 Unifying Pathophysiological Explanations for Sports-Related Concussion and Concussion Protocol Management: Literature Review. Objective: There is a plethora of theories about the pathophysiology behind a sport-related concussion. In this review of the literature, the authors evaluated studies on the pathophysiology of sport-related concussion and professional athlete return-to-play guidelines. The goal of this article is to summarize the most common hypotheses for sport-related concussion, evaluate if there are common underlying mechanisms, and determine if correlations are seen between published mechanisms and the most current return-to-play recommendations. Methods: Two authors selected papers from the past 5 years for literature review involving discussion of sport-related concussion and pathophysiology, pathology, or physiology of concussion using mutually agreed-upon search criteria. After the articles were filtered based on search criteria, pathophysiological explanations for concussion were organized into tables. Following analysis of pathophysiology, concussion protocols and return-to-play guidelines were obtained via a Google search for the major professional sports leagues and synthesized into a summary table. Results: Out of 1112 initially identified publications, 53 met our criteria for qualitative analysis. The 53 studies revealed 5 primary neuropathological explanations for sport-related concussion, regardless of the many theories talked about in the different papers. 
These 5 explanations, in order of predominance in the articles analyzed, were (1) tauopathy, (2) white matter changes, (3) neural connectivity alterations, (4) reduction in cerebral perfusion, and (5) gray matter atrophy. Pathology may be sport specific: white matter changes are seen in 47% of football reports, tauopathy is seen in 50% of hockey reports, and soccer reports 50% tauopathy as well as 50% neural connectivity alterations. Analysis of the return-to-play guidelines across professional sports indicated commonalities in concussion management despite individual policies. Conclusions: Current evidence on pathophysiology for sport-related concussion does not yet support one unifying mechanism, but published hypotheses may potentially be simplified into 5 primary groups. The unification of the complex, likely multifactorial mechanisms for sport-related concussion to a few common explanations, combined with unique findings within individual sports presented in this report, may help filter and link concussion pathophysiology in sport. By doing so, the authors hope that this review will help guide future concussion research, treatment, and management. abstract_id: PUBMED:29125444 Online postconcussion return-to-play instructions. OBJECTIVE The authors of recent concussion guidelines have sought to form a consensus on injury management, but it is unclear if they have been effective in conveying this information to the public. Many parents and athletes obtain medical recommendations via the Internet. This review is aimed at evaluating consistency between online resources and published guideline statements in postconcussion return-to-play (RTP) decisions. METHODS Five websites were selected through a Google search for RTP after concussion, including a federal government institution (Centers for Disease Control and Prevention) website, a national high school association (National Federation of State High School Associations) website, a popular nationally recognized medical website for patients (WebMD), a popular parent-driven website for parents of children who participate in sports (MomsTeam), and the website of a private concussion clinic (Sports Concussion Institute), along with a university hospital website (University of Michigan Medicine). Eight specific items from the Zurich Sport Concussion Consensus Statement 2012 were used as the gold standard for RTP recommendations. Three independent reviewers graded each website for each of the 8 recommendations (A = states guideline recommendations appropriately; B = mentions guideline recommendation; C = does not mention guideline recommendation; F = makes inappropriate recommendation). RESULTS A grade of A was assigned for 45.8% of the recommendations, B for 25.0%, C for 25.0%, and F for 4.2%. All the websites were assigned an A grade for the recommendation of no RTP on the day of injury. Only 1 website (WebMD) mentioned medication usage in conjunction with the Zurich statement, and only 2 websites (Sports Concussion Institute and University of Michigan Medicine) mentioned appropriate management of persistent symptoms. None of these websites commented correctly on all 8 guideline recommendations. CONCLUSIONS Online resources are inconsistent in relaying guideline recommendations for RTP and provide a potential source of confusion in the management of concussion for athletes and their parents, which can result in inappropriate RTP decisions. 
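As a quick plausibility check on the grade distribution in PUBMED:29125444: six websites each graded on eight recommendations gives 48 gradings, and the published percentages back-solve to the integer counts used below. The counts are therefore inferred for illustration, not taken from the paper's raw data.

# Back-of-the-envelope check of the grade distribution in PUBMED:29125444.
# Counts are back-solved from the published percentages (illustrative).
from collections import Counter

grades = Counter({"A": 22, "B": 12, "C": 12, "F": 2})  # 22/48 = 45.8%, etc.
total = sum(grades.values())
for grade, n in grades.items():
    print(f"{grade}: {n}/{total} = {n / total:.1%}")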
abstract_id: PUBMED:33604476 Prevalence of concussion and adherence to return-to-play guidelines amongst male secondary school rugby and hockey players. Background: Concussion injuries are common in contact sports. Young players can suffer life-threatening complications if concussion is not recognised and managed. Objectives: To determine the prevalence of concussion amongst secondary school rugby and hockey players and describe players' knowledge and adherence to return-to-play guidelines. Method: A mixed-method approach included Phase A, which utilised a questionnaire completed by 221 players (n = 139 rugby; n = 82 hockey) between 13 and 18 years of age, and Phase B, which utilised three focus group discussions of 15 participants who had suffered a concussion. Results: The prevalence of concussion (n = 221) was 31.2% (n = 69). Of those, 71% (n = 49) were rugby players. Those who had suffered a concussion were more confident in identifying symptoms of concussion in themselves and others compared with those who did not suffer a concussion (p = 0.001), were more aware of return-to-play guidelines and more confident in their knowledge of concussion (p = 0.001). There were no differences between groups when identifying concussion symptoms. Of those who had a concussion, 30.4% (n = 21) adhered to return-to-play guidelines and followed graded return-to-play after their concussion. Explanations for non-adherence to return-to-play protocols included peer pressure, intrinsic motivation and ignorance. Conclusion: Nearly a third had suffered a concussion injury; having suffered a concussion, and awareness of return-to-play guidelines, did not guarantee adherence to return-to-play protocols. Peer pressure and intrinsic factors explained this lack of adherence. Clinical Implications: Physiotherapists are often involved with the diagnosis and management of concussion injuries among rugby and hockey players. Understanding the prevalence and the reasons why young players do not adhere to the 'return to play' guidelines may inform preventative strategies. abstract_id: PUBMED:30482346 Return to play following sports-related concussion. Returning a concussed athlete to practice and competition is an ever-evolving concept that has received considerable scientific and public attention during the past 20 years. Whereas there is now more international consensus regarding return to play, the decision to do so is clinically based with scant objective measures to guide decision making. To encapsulate the return-to-play process, this chapter will give a historic overview of the graded progression process, outline how rest can be both beneficial and detrimental in recovery, and how exercise is emerging as a management tool. Answer: The on-field management and return-to-play (RTP) practices for sports-related concussion in children appear to be suboptimal and not always in line with recommended guidelines. A study assessing compliance with current concussion guidelines in child athletes found that many children with sports-related concussion were not formally assessed on-field and continued to play despite their injuries. Specifically, in organized sports, 42% were not managed according to recommended guidelines: 19% were not immediately removed from play, 29% were allowed to return to play on the same day, and 27% were not assessed by qualified personnel. Additionally, there was a lack of awareness of concussion or RTP guidelines among 93% of parents and 96% of patients (PUBMED:25772997). 
Despite the existence of RTP protocols, adherence to these guidelines is not guaranteed. Barriers to best practice in RTP for sports-related concussion (SRC-RTP) have been identified, including lack of education, training, and implementation of guidelines, as well as sub-standard medical support and poor general injury attitudes (PUBMED:37413953). Furthermore, online resources that many parents and athletes rely on for medical recommendations are inconsistent in relaying guideline recommendations for RTP, which can result in inappropriate RTP decisions (PUBMED:29125444). The American Academy of Pediatrics adopted the recommended RTP guidelines proposed by the International Conference on Concussion in Sport, which include a six-step process. However, these guidelines do not account for the variability within different sports, leading to challenges for medical professionals in ensuring each athlete can withstand the rigors of their specific sport without return of symptoms (PUBMED:24790785). Moreover, the prevalence of concussion amongst secondary school rugby and hockey players was found to be 31.2%, and of those who had a concussion, only 30.4% adhered to RTP guidelines. Reasons for non-adherence included peer pressure, intrinsic motivation, and ignorance (PUBMED:33604476). In conclusion, while guidelines and protocols for on-field management and RTP after sports-related concussion in children exist, there is evidence that these are not consistently followed or understood, indicating that children are not always managed appropriately following a concussion. This suggests a need for improved awareness, education, and adherence to concussion management guidelines among coaches, teachers, parents, children, and medical professionals involved in youth sports.
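The graduated six-step progression that runs through the abstracts and answer above is, at its core, gated step logic: advance at most one stage per 24-hour period, and step back when symptoms recur. The sketch below encodes only that control logic. It is schematic, not medical guidance; the stage names paraphrase the consensus model, and real protocols additionally require documented medical clearance before the contact stages.

# Schematic only, not medical guidance: the stepwise logic of a graduated
# return-to-play progression (advance one stage per symptom-free 24 h,
# drop back a stage if symptoms recur). Stage names paraphrase the
# six-step consensus model discussed above.
STAGES = [
    "symptom-limited activity",
    "light aerobic exercise",
    "sport-specific exercise",
    "non-contact training drills",
    "full-contact practice (after medical clearance)",
    "return to sport",
]

def next_stage(current: int, symptom_free_24h: bool) -> int:
    if not symptom_free_24h:
        return max(current - 1, 0)   # symptoms recurred: step back
    return min(current + 1, len(STAGES) - 1)

stage = 0
for day, ok in enumerate([True, True, False, True, True, True, True], start=1):
    stage = next_stage(stage, ok)
    print(f"day {day}: {STAGES[stage]}")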
Instruction: Pineal Parenchymal Tumor with Marked Cytologic Pleomorphism: Is there a Correlation with the Malignancy Grade? Abstracts: abstract_id: PUBMED:27270146 Pineal Parenchymal Tumor with Marked Cytologic Pleomorphism: Is there a Correlation with the Malignancy Grade? Introduction: In benign pineal parenchymal tumors (PPTs), namely, pineocytoma (PC) and PPT of intermediate differentiation (PPTID), cytologic pleomorphism has occasionally been found; however, it is controversial as to whether the presence of pleomorphic cells leads to upgrading of tumors. We experienced a rare case of pleomorphic PPT in an elderly woman and compared it with a retrospective series of 12 PPTs (PC: 3, PPTID: 6, pineoblastoma [PB]: 3) to evaluate the correlation between pleomorphism and the malignancy grade. CASE AND MATERIALS: A 76-year-old woman presented with gradual cognitive deterioration and gait disturbance. Gadolinium-enhanced magnetic resonance imaging (Gd-MRI) revealed a small, enhanced tumor in the pineal gland with marked hydrocephalus. Endoscopic tumor biopsy and third ventriculostomy were performed simultaneously. The tumor was soft, pinkish, and slightly hemorrhagic. After the biopsy, the patient underwent gamma knife radiosurgery. Pathological Findings: The PPT presented with areas of tumor cells forming pineocytomatous rosettes and areas of giant and multinucleated cells with hyperchromatic nuclei. Neither mitosis nor necrosis was observed. The tumor cells were positive for synaptophysin (SYN) and neurofilament (NF), but negative for glial fibrillary acidic protein (GFAP) and oligodendrocyte lineage transcription factor 2 (Olig2). The MIB-1 labeling index (LI) was 8.1%. There was no difference in the MIB-1 LI between pleomorphic and non-pleomorphic areas. All the 12 PPTs were immunopositive for the neuronal markers SYN and NF. The MIB-1 LI was 0% in PC, 3.5% in PPTID, and 10.5% in PB. The proliferative potential was correlated with the WHO grade. From these findings, the final diagnosis of this pleomorphic case was PPTID grade II, not PC, because the MIB-1 LI was relatively high, even though some tumor cells were forming pineocytomatous rosettes. Conclusion: Although cytologic pleomorphism in PPTs is generally considered not to be correlated with the malignancy grade, the final pathological diagnosis should be determined while considering the proliferative potential. abstract_id: PUBMED:34606044 Prognosticating survival of pineal parenchymal tumors of intermediate differentiation (PPTID) by grade. Background: Pineal parenchymal tumors of intermediate differentiation (PPTID) are a rare group of pineal parenchymal tumors classified by histology as either World Health Organization (WHO) Grades 2 or 3. The rarity of these tumors in adults has left a number of clinical management questions open. Correspondingly, the aim of this study was to aggregate a large PPTID cohort with sufficient statistical power from a large national cancer database to analyze prognostic parameters. Methods: All PPTID patients aged over 18 years in the U.S. National Cancer Database (NCDB) between 2005 and 2016 were retrospectively reviewed. Data were summarized and survival was modeled using Kaplan-Meier and Cox regression analyses. Results: A total of 103 adult PPTID patients were identified in the NCDB with 63 (61%) WHO Grade 2 and 40 (39%) WHO Grade 3 tumors. Overall, mean age was 53 ± 18 years with even gender distribution.
A total of 75 (73%) patients underwent surgical resection for diagnosis, with gross total resection (GTR) being the most common resection outcome in 50/75 (67%). Chemotherapy was utilized in 18 (17%) patients, and radiation therapy in 37 (36%) patients. Overall, the 5-year survival rate was estimated to be 54% (95% CI 42-64%), with a mean survival of 84 (95% CI 69-99) months. Patients with Grade 2 tumors survived statistically longer than Grade 3 tumor counterparts (P < 0.01). Overall, older age (HR 1.09, P < 0.01) was associated with shorter survival, whereas GTR (HR 0.43, P = 0.02) was associated with longer survival. Both these parameters were significant within Grade 2 and Grade 3 subgroup analyses as well. Conclusions: PPTID are rare tumors with an expected mean survival of more than 5 years, although Grade 2 tumors are expected to survive longer than Grade 3 tumors. Age and gross total resection are significant independent predictors of survival in PPTID overall, as well as within Grade 2 and Grade 3 subgroups separately. The prognostic role and benefit of adjuvant therapy is yet to be elucidated, mandating that more molecular and biologic research be done to further optimize clinical management in the future. abstract_id: PUBMED:7543235 Cytologic diagnosis of intracranial germinomas in smear preparations. Although cytologic smear preparations have been used in the intraoperative diagnosis of central nervous system tumors for a long time, detailed descriptions of the rarer tumors, including germinomas, have not been provided in the literature. We reviewed the cytologic features of five intracranial germinomas diagnosed with the smear technique. The smears showed sheets of large tumor cells intimately mixed with mature lymphocytes. The cells had delicate, vacuolated cytoplasm with distinct borders, while the nucleoli were prominent and elongated. There often was an amorphous, eosinophilic background of extracellular matrix. The main differential diagnoses were true pineal gland tumors, pituitary adenomas, craniopharyngiomas, pilocytic astrocytomas and parasellar meningiomas. abstract_id: PUBMED:16771177 Pineal parenchymal tumor of intermediate differentiation with cytologic pleomorphism. We report a case of pineal parenchymal tumor in a 33-year-old man incidentally detected by radiological examination. The MRI showed an unhomogeneously enhanced, small tumor (approximately 1 cm in size) in the pineal region. A tumor specimen was obtained at endoscopic biopsy. Routine histology showed a highly cellular tumor characterized by a predominance of small cells showing high nuclear : cytoplasmic ratio and moderate nuclear atypia, pleomorphism including giant cells and an absence of pineocytomatous rosettes. Mitotic figures were rare (approximately 1 per 10 high-power fields). Tumor necrosis was not evident. Immunohistochemically, the neoplastic cells showed positivity for neural markers (neurofilament protein, synaptophysin) and pinealocyte-associated antibodies (PP1, PP5, PP6), but not for glial fibrillary acidic protein or S-100. The MIB-1 labeling index was relatively high (6.3%). Ultrastructurally, there was some evidence of pinealocytic differentiation, such as vesicle-crowned rodlets (synaptic ribbons) and paired twisted filaments in neoplastic cells. Thus, the tumor was confirmed as a pineal parenchymal tumor of intermediate differentiation by histology, immunohistochemistry and electron microscopy.
This case indicates that marked cytologic pleomorphism can occur in pineal parenchymal tumors of intermediate differentiation. abstract_id: PUBMED:294776 Cytologic diagnosis of pineal germinoma in cerebrospinal fluid and sputum. The cytologic findings in an unusual case of primary germinoma of the pineal region which metastasized to the lungs are presented and compared with those in a case of typical testicular seminoma metastatic to the central nervous system (CNS). Tumor cells in Papanicolaou- or Wright's-stained cytocentrifuge preparations and Papanicolaou-stained sputum smears could all be readily compared to biopsies of the primary tumors. Large round nuclei with dispersed chromatin and multiple, prominent nucleoli were important identifying features. The cytoplasm was usually scanty and often vacuolated. Human chorionic gonadotropin (HCG) (8 ng/ml of the beta subunit) appeared in the cerebrospinal fluid (CSF) of the patient with pineal germinoma, indicating that trophoblasts were present in the tumor even though none were seen in the biopsy or cytologic preparations. CSF polyamine levels, a test with 81% sensitivity and 66% specificity for brain tumors, were normal in the same patient. A comparison of tumor cells from both cases illustrates the similarity of germinoma cells from pineal primary tumors and testicular tumors metastatic to the CNS. Although the identification of malignant germ cells in body fluids remains a grave prognostic sign, treatment with vincristine, bleomycin and cis-platinum is now inducing progressively longer remissions. Cytology should play an increasingly greater role in monitoring disease activity in patients receiving long-term treatment for malignant germ cell tumors in all locations. abstract_id: PUBMED:2992748 Tumor markers and cytologic features of cerebrospinal fluid. Tumor markers are useful in establishing the diagnosis of certain central nervous system tumors, especially germinal tumors of the pineal region. They are not sufficiently specific to be able to replace biopsy for exact diagnosis. They may also be useful for monitoring of therapy, as an indicator of recurrence of the tumor. Cerebrospinal fluid cytology is not generally useful in establishing a specific histologic diagnosis, especially in children, but can help to monitor therapy and predict tumor recurrence. More extensive studies are needed in both areas to define more precisely the role of markers and cytologic studies. abstract_id: PUBMED:33477138 Cytologic Features of Ventricular Tumors of the Central Nervous System: A Review with Emphasis on Diff-Quik Stained Smears. Background: Neoplasms from the ventricular system share a common location but have highly variable histogenesis. Many are slowly growing tumors that behave in a benign fashion. They can be classified as primary and secondary tumors. The most common primary tumors are ependymomas, subependymomas, subependymal giant cell astrocytomas, central neurocytomas, choroid plexus tumors, meningiomas, germinomas, pineal parenchymal tumors, papillary tumors of the pineal region, chordoid gliomas, rosette-forming glioneuronal tumors of the fourth ventricle, and craniopharyngiomas. Pilocytic astrocytomas, medulloblastomas, and atypical teratoid/rhabdoid tumors often show secondary involvement of the ventricular system. Summary: Advances in neurosurgery have facilitated access to the ventricular system, increasing the number of cases in which such tumors can be biopsied.
In this context, cytology has been proven to be an extremely useful diagnostic tool during intraoperative pathologic consultations. Many ventricular tumors are infrequent, and the cytologic information available is limited. In this review, we describe the cytologic features of the uncommon ventricular tumors and report on unusual findings of the more common ones. For the cytologic evaluation of brain tumors, many neuropathologists prefer formalin fixation and hematoxylin and eosin staining. In this review, we highlight the cytologic findings as seen with Diff-Quik, a very popular staining method among cytopathologists. In fact, when pathologists are unfamiliar with cytology, it is common to request the assistance of cytopathologists during the evaluation of intraoperative procedures. Key Message: Ventricular tumors of the central nervous system comprise a group of heterogeneous tumors with very different cytologic features. The cytomorphology of these tumors, including rare entities, is often very characteristic, allowing a precise recognition during intraoperative pathologic consultations. Diff-Quik is a valuable staining method that can be used alone or as a complement to hematoxylin and eosin staining. Diff-Quik allows for clear visualization of the overall architecture, cytoplasmic details, and extracellular material. abstract_id: PUBMED:10561221 Comparison of lumbar and shunt cerebrospinal fluid specimens for cytologic detection of leptomeningeal disease in pediatric patients with brain tumors. Purpose: Leptomeningeal disease (LMD) significantly affects the prognosis and treatment of pediatric patients with primary CNS tumors. Cytologic examination of lumbar CSF is routinely used to detect LMD. To determine whether examination of CSF obtained from ventricular shunt taps is a more sensitive method of detecting LMD in these patients, we designed a prospective study to compare the findings of cytologic examinations of CSF obtained from concurrent lumbar and ventriculoperitoneal (VP) shunt taps. Patients And Methods: As a part of diagnostic staging, follow-up testing, or both, 52 consecutive patients underwent concurrent lumbar and shunt taps on 90 separate occasions, ranging from the time of diagnosis to treatment follow-up. CSF from both sites was examined cytologically for malignant cells. Results: The median age of the 28 males and 24 females was 7.5 years (range, 0.6 to 21.4 years). The primary CNS tumors included medulloblastoma (n = 29), astrocytoma (n = 10), ependymoma (n = 5), germinoma (n = 3), atypical teratoid rhabdoid tumor (n = 2), choroid plexus carcinoma (n = 2), and pineoblastoma (n = 1). Each site yielded a median CSF volume of 1.0 mL. Fourteen of 90 paired CSF test results were discordant: in 12, the cytologic findings from shunt CSF were negative for malignant cells, but those from lumbar CSF were positive; in two, the reverse was true. Malignant cells were detected at a higher rate in lumbar CSF than in shunt CSF (P =.0018). When repeat analyses were excluded, examination of lumbar CSF remained significantly more sensitive in detecting malignant cells (P =.011). Analysis of the subset of patients with embryonal tumors showed similar results (P =.0008). Conclusion: Cytologic examination of lumbar CSF is clearly superior to cytologic examination of VP shunt CSF for detecting leptomeningeal metastases in pediatric patients with primary CNS tumors. 
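The lumbar-versus-shunt comparison in PUBMED:10561221 is a paired design, so the natural analysis is McNemar-style: of the 14 discordant pairs, 12 favored lumbar CSF, and the question is whether that departs from the 50/50 split expected if neither site were more sensitive. The sketch below runs the exact binomial version on those counts; the paper's own P values (e.g., P = .0018) may come from a different test over all 90 pairs, so this is illustrative rather than a reproduction.

# Exact McNemar-style test on the discordant-pair counts quoted above.
# Illustrative only; the published test statistic may differ.
from scipy.stats import binomtest

discordant, favored_lumbar = 14, 12
result = binomtest(favored_lumbar, discordant, p=0.5, alternative="two-sided")
print(f"exact McNemar p = {result.pvalue:.4f}")  # ~0.013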
abstract_id: PUBMED:18371183 Pineocytoma and pineal parenchymal tumors of intermediate differentiation presenting cytologic pleomorphism: a multicenter study. Cytologic pleomorphism has been described in a limited number of benign pineal tumors, namely pineocytoma (PC) and pineal parenchymal tumors (PPTs) of intermediate differentiation (PPTID). We examined the clinicopathologic features in a retrospective series of 14 cases (seven females and seven males aged from 10 to 65 years) of pleomorphic PPT. Seven cases were PC, with no mitoses and with areas of tumoral cells forming large pineocytomatous rosettes and other areas with giant cells containing hyperchromatic nuclei. The other seven were PPTID, presenting few mitoses (≤2), a Ki67 proliferation index between 3% and 7%, and predominantly composed of small neoplastic cells and scattered giant cells, sometimes multinucleated. In the 14 tumors, the proportion of pleomorphic areas was variable. Most tumoral cells showed extensive neuronal differentiation with strong expression of neuron-specific enolase, synaptophysin and neurofilaments. Some of the neoplastic cells expressed S100 protein. The follow-up period ranged from 1.2 to 13 years and only one PC and one PPTID progressed after stereotactic biopsy or incomplete resection. The lack of invasiveness and the low proliferation index of these tumors suggest a benign clinical course despite the marked pleomorphism, the latter of which can lead to upgrading. abstract_id: PUBMED:27535020 Cytologic features of papillary tumor of the pineal region: A case report showing tigroid background. Papillary tumor of the pineal region (PTPR) is a rare neuroepithelial tumor of the pineal region of adults. We herein describe the cytologic features of a PTPR in a 34-year-old female, studied using Diff-Quik stain. The patient presented with hydrocephalus secondary to a pineal tumor. During surgery an intraoperative pathologic consultation was requested. Smears were hypercellular with numerous papillary tissue fragments and single cells. Papillary fragments had an evident inner vascular core. Neoplastic cells displayed moderate pleomorphism, plasmocytoid morphology, and cytoplasmic fragility. The background had a foamy, lace-like, tigroid appearance. Pathologic analysis revealed morphologic and immunohistochemical features of PTPR. When evaluating smears of a pineal tumor in an adult, the presence of true papillary structures and numerous single cells should raise the possibility of PTPR. This is particularly useful during intraoperative pathologic evaluation since, on frozen sections, the epithelial morphology of the tumor may be misleading. The papillary morphology which gives name to this neoplasm is best appreciated on squash smears. An interesting feature seen in the present case was a prominent tigroid background. This is a characteristic cytologic feature observed in Romanowsky-type stained smears of germinoma and related tumors. It can be seen in other clear cell, glycogen-rich tumors. PTPR shares with these neoplasms its clear cell morphology and PAS-positive cytoplasmic granules. As seen from the present case, germinoma is not the only tumor of the pineal region that may show a tigroid background. Cytologic features of PTPR differ from those of germinoma, allowing their distinction during intraoperative consultations.
Answer: The presence of cytologic pleomorphism in pineal parenchymal tumors (PPTs) has been a subject of controversy regarding its correlation with malignancy grade. In benign PPTs, such as pineocytoma (PC) and PPT of intermediate differentiation (PPTID), cytologic pleomorphism has occasionally been found, but it is debated whether this feature should lead to an upgrade in the tumor's malignancy grade (PUBMED:27270146). A study involving a rare case of pleomorphic PPT in an elderly woman, compared with a retrospective series of 12 PPTs, aimed to evaluate the correlation between pleomorphism and malignancy grade. The study found no difference in the MIB-1 labeling index (LI) between pleomorphic and non-pleomorphic areas. The MIB-1 LI was correlated with the WHO grade, with PCs showing 0%, PPTIDs 3.5%, and pineoblastomas (PB) 10.5%. The final diagnosis for the pleomorphic case was PPTID grade II, not PC, because the MIB-1 LI was relatively high, despite some tumor cells forming pineocytomatous rosettes. The study concluded that while cytologic pleomorphism in PPTs is generally not correlated with malignancy grade, the final pathological diagnosis should consider the proliferative potential (PUBMED:27270146). Another study on PPTIDs found that patients with Grade 2 tumors survived statistically longer than those with Grade 3 tumors, indicating a correlation between grade and prognosis. Age and gross total resection were significant independent predictors of survival in PPTIDs, suggesting that the grade does have prognostic implications (PUBMED:34606044). A multicenter study examined the clinicopathologic features of 14 cases of pleomorphic PPT, including both PCs and PPTIDs. The study observed that despite marked pleomorphism, which could lead to upgrading, the lack of invasiveness and low proliferation index suggested a benign clinical course. This indicates that cytologic pleomorphism alone should not necessarily result in a higher malignancy grade (PUBMED:18371183). In summary, while cytologic pleomorphism is observed in PPTs, it does not consistently correlate with a higher malignancy grade. The final diagnosis and grading should take into account the proliferative potential and other factors, such as invasiveness and the presence of mitotic figures or necrosis. The presence of pleomorphism can occur in PPTs of intermediate differentiation without necessarily indicating a more aggressive or malignant behavior (PUBMED:18371183). However, the grade of the tumor, as determined by other histological and clinical features, does have prognostic significance, as evidenced by the difference in survival rates between Grade 2 and Grade 3 PPTID tumors (PUBMED:34606044). Therefore, while cytologic pleomorphism may be a feature of PPTs, it should be interpreted in the context of the overall histopathological assessment and not used in isolation to determine malignancy grade.
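The survival claims above from PUBMED:34606044 (Kaplan-Meier curves plus Cox regression for age and gross total resection) follow a standard workflow, which the sketch below runs on a small synthetic cohort using the third-party lifelines package. The column names and all values are invented for illustration and are not the study's data, so expect wide confidence intervals on such a tiny sample.

# A minimal sketch of the Kaplan-Meier + Cox workflow behind
# PUBMED:34606044, on synthetic data. Requires 'lifelines'.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.DataFrame({
    "months": [12, 30, 45, 60, 84, 90, 24, 50, 70, 96],  # follow-up time
    "died":   [1, 1, 1, 0, 0, 1, 1, 0, 1, 0],            # 1 = death observed
    "age":    [72, 48, 70, 66, 52, 63, 68, 58, 45, 40],
    "gtr":    [0, 1, 0, 1, 1, 1, 0, 0, 1, 1],            # gross total resection
})

km = KaplanMeierFitter().fit(df["months"], df["died"])
print(km.survival_function_)  # step-function estimate of S(t)

cox = CoxPHFitter().fit(df, duration_col="months", event_col="died")
cox.print_summary()  # hazard ratios (exp(coef)) for age and gtr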
Instruction: The 'functional muscle-cartilage unit': a reasonable approach to describe a putative relationship between muscle force and longitudinal growth at the forearm in children and adolescents? Abstracts: abstract_id: PUBMED:18824867 The 'functional muscle-cartilage unit': a reasonable approach to describe a putative relationship between muscle force and longitudinal growth at the forearm in children and adolescents? Background: Several reports give evidence that the perichondral ossification of bone tubes (modeling) strongly depends on muscular forces in children and adolescents. The present analyses intend to investigate the hypothesis that muscular forces also partly determine enchondral ossification and, therefore, longitudinal growth of bone tubes. Subjects And Methods: Analyses were based on a single cross-sectional investigation with peripheral quantitative computed tomography in 296 individuals (age 5-19 years) participating in the Dortmund Nutritional and Anthropometric Longitudinally Designed (DONALD) study. Results: Forearm length (FL) was correlated with body height in gender-related subgroups of prepubertal and pubertal individuals (rs between 0.76 and 0.86). Cross-sectional muscle area (MA) increased faster than FL and faster than cross-sectional bone area (BA) close to the distal growth plate in puberty. Close to the growth plate, longitudinal growth was faster than perichondral ossification in females. The ratio MA/BA (surrogate of pressure on the distal growth plate) was correlated with FL in prepubertal boys (r = -0.249, p = 0.043) and pubertal individuals (r = 0.153, p = 0.051). Conclusions: Results support the hypothesis that longitudinal growth precedes modeling at the distal forearm. Confounding variables such as puberty may modify the relationship between muscle forces and longitudinal growth at the forearm in boys. abstract_id: PUBMED:37402166 Longitudinal changes of grip strength and forearm muscle thickness in young children. Background: Grip strength is a marker of future health conditions and is mainly generated by the extrinsic flexor muscles of the fingers. Therefore, whether or not there is a relationship between grip strength and forearm muscle size is vital in considering strategies for grip strength development during growth. Thus, this study aimed to examine the association between changes in grip strength and forearm muscle thickness in young children. Methods: Two hundred eighteen young children (104 boys and 114 girls) performed maximum voluntary grip strength and ultrasound-measured muscle thickness measurements in the right hand. Two muscle thicknesses were measured as the perpendicular distance between the adipose tissue-muscle interface and muscle-bone interface of the radius (MT-radius) and ulna (MT-ulna). All participants completed the first measurement and underwent a second measurement one year after the first one. Results: There were significant (P < 0.001) within-subject correlations between MT-ulna and grip strength [r = 0.50 (0.40, 0.60)] and MT-radius and grip strength [r = 0.59 (0.49, 0.67)]. There was no significant between-subject correlation between MT-ulna and grip strength [r = 0.07 (-0.05, 0.20)], but there was a statistically significant (P < 0.001) between-subject relationship between MT-radius and grip strength [r = 0.27 (0.14, 0.39)]. Conclusion: Although we cannot infer causation from the present study, our findings suggest that as muscle size increases within a child, so does muscle strength.
Our between-subject analysis, however, suggests that those who showed the greatest change in muscle size did not necessarily become the strongest. abstract_id: PUBMED:33401578 Optimum Angle of Force Production Temporarily Changes Due to Growth in Male Adolescence. The peak increase in lean mass in adolescents is delayed relative to peak height velocity (PHV), and muscle flexibility temporarily decreases as bones grow. If the decrease in muscle flexibility is caused by muscle elongation, the relationship between the exerted torque and the joint angle could change in adolescents. The purpose of this study was to investigate the change in the optimum angle of force production due to growth. Eighty-eight healthy boys were recruited for this study. Isokinetic knee extension muscle strength of the dominant leg was recorded. The outcome variable was the knee flexion angle when maximal knee extension torque was produced (optimum angle). The age at which PHV occurred was estimated from subjects' height history. We calculated the difference between the age at measurement and the expected age of PHV (growth age). A regression analysis was performed with the optimal angle of force exertion as the dependent variable and the growth age as the independent variable. Then, a polynomial formula with the lowest p-value was obtained. A significant cubic regression was obtained between optimum angle and growth age. The results suggest that the optimum angle of force production temporarily changes in male adolescence. abstract_id: PUBMED:35608202 The influence of longitudinal muscle fascicle growth on mechanical function. Skeletal muscle has the remarkable ability to remodel and adapt, such as the increase in serial sarcomere number (SSN) or fascicle length (FL) observed after overstretching a muscle. This type of remodeling is termed longitudinal muscle fascicle growth, and its impact on biomechanical function has been of interest since the 1960s due to its clinical applications in muscle strain injury, muscle spasticity, and sarcopenia. Despite simplified hypotheses on how longitudinal muscle fascicle growth might influence mechanical function, existing literature presents conflicting results partly due to a breadth of methodologies. The purpose of this review is to outline what is currently known about the influence of longitudinal muscle fascicle growth on mechanical function and suggest future directions to address current knowledge gaps and methodological limitations. Various interventions indicate longitudinal muscle fascicle growth can increase the optimal muscle length for active force, but whether the whole force-length relationship widens has been less investigated. Future research should also explore the ability for longitudinal fascicle growth to broaden the torque-angle relationship's plateau region, and the relation to increased force during shortening. Without a concurrent increase in intramuscular collagen, longitudinal muscle fascicle growth also reduces passive tension at long muscle lengths; further research is required to understand whether this translates to increased joint range of motion. Finally, some evidence suggests longitudinal fascicle growth can increase maximum shortening velocity and peak isotonic power; however, there has yet to be direct assessment of these measures in a neurologically intact model of longitudinal muscle fascicle growth. abstract_id: PUBMED:12067849 Influence of puberty on muscle development at the forearm.
Despite its fundamental importance for physical development, the growth of the muscle system has received relatively little consideration. In this study, we analyzed the relationship between cross-sectional area (CSA) of forearm muscles and maximal isometric grip force with age and pubertal stage. The study population comprised 366 children, adolescents, and young adults from 6 to 23 yr of age (185 female) and 107 adults (88 female) aged 29 to 40 yr. By use of peripheral quantitative computed tomography, muscle CSA was determined at the site of the forearm, whose distance to the ulnar styloid process corresponded to 65% of forearm length. Both muscle CSA and grip force were higher in prepubertal boys than in girls. The gender differences decreased until pubertal stage 3 and increased again thereafter. In girls at pubertal stage 5, muscle CSA no longer increased with age (P > 0.4), whereas there was still some age-related increase in grip force (P = 0.02). In boys at pubertal stage 5, both muscle CSA and grip force continued to increase significantly with age (P < 0.005 each). Specific grip force (grip force per muscle CSA) adjusted for forearm length increased by almost one-half between 6 and 20 yr of age, with no difference between the genders. In conclusion, forearm muscle growth takes a gender-specific course during puberty, indicating that it is influenced by hormonal changes. However, the increase in specific grip force is similar in both genders and thus appears to be independent of sex hormones. abstract_id: PUBMED:17194607 The 'Functional Muscle-Bone Unit': probing the relevance of mechanical signals for bone development in children and adolescents. The present text deals with the relationship of muscle force and mass to bone mass and geometry in the developing skeleton of children and adolescents. Recent results of the last ten years are discussed with reflection on Harold Frost's 'mechanostat hypothesis'. Bone mass and geometry follow the development of body size and muscle force in children and adolescents. Thereby, bone is adapted to the tissue strain due to biomechanical forces. This process is modified by hormonal signals (i.e., estrogens and androgens). Therefore, the quantified relationship of muscle force to bone stability is a reasonable approach to distinguish between primary and secondary bone diseases. Primary bone diseases are characterized by a disturbed adaptation of bone to biomechanical forces. In contrast, secondary bone diseases show a correct adaptation of bone to loaded forces in combination with a decline of muscle force. Therefore, the 'Functional Muscle-Bone Unit' was introduced into the diagnostics of pediatric bone diseases. The ratio of two parameters--one referring to bone strength and the other to biomechanical forces--is a reasonable diagnostic approach to distinguish between primary and secondary bone diseases. abstract_id: PUBMED:25130178 In vivo vastus lateralis force-velocity relationship at the fascicle and muscle tendon unit level. The force-velocity relationship of in vivo human muscle fibers has often been derived from the torque-angular speed relationship during maximal voluntary isokinetic contractions. However, the assumption of a close association between joint performance and muscle mechanics is questionable.
We aimed to determine the relationship between knee extension angular speeds, vastus lateralis fascicle and muscle tendon unit (MTU) shortening speeds, and maximal knee extensor force for the entire range of knee joint movement, for the isokinetic range, and for the ranges before, after and at peak torque occurrence, with different commonly used pre-loading conditions. Higher peak forces were observed when knee extensions were preceded by a pre-load, despite the similarity in fascicle shortening speeds. For the entire and the isokinetic range, MTU always shortened faster than fascicles, and this difference increased as joint speed increased. Interestingly, fascicle shortening velocities were greater before compared to after peak torque occurrence while the opposite happened at the MTU level. Assuming a close relationship between joint and fascicle dynamics results in an overestimation of muscle contractile component shortening velocity or force production at peak torque. The force-velocity relationships obtained in vivo depend crucially on the test conditions, and the movement range used for analysis. abstract_id: PUBMED:32624008 The longitudinal relationship between dietary intake, physical activity and muscle strength among adolescents. The longitudinal relationship between muscle strength, dietary intake and physical activity among adolescents is not well understood. We investigated the trend and longitudinal effects of dietary intakes and physical activity scores on muscle strength in adolescents. This prospective cohort study consisted of 436 adolescents (134 males; 302 females) aged 13 years at baseline (2012) who were followed up at the ages of 15 (2014) and 17 (2016) years, respectively. We measured muscle strength using a calibrated hand dynamometer, estimated dietary intake with a 7-d dietary history and physical activity scores with a validated physical activity questionnaire for older children. A generalised estimating equation was used to examine the effect of dietary intakes and physical activity on muscle strength changes. The analysis was performed separately by sex. The muscle strength for males and females had increased within the 5-year period. The dietary intakes (energy and macronutrients) also increased initially but plateaued after the age of 15 years for both sexes. Females recorded a significant declining trend in physical activity scores compared with males as they grew older. A significant positive longitudinal relationship was found between protein intake (β = 0.035; P = 0.016), carbohydrate intake (β = 0.002; P = 0.013) and muscle strength among males. However, no longitudinal relationship was found between dietary intake, physical activity and muscle strength among females. Higher protein and carbohydrate intake among males was associated with higher muscle strength, but this association was not observed in females. Nutrition and physical activity focusing on strength building are required in early adolescence and need to be tailored to males and females accordingly. abstract_id: PUBMED:29222596 The effects of local forearm muscle cooling on motor unit properties. Purpose: Muscle cooling impairs maximal force. Evidence from needle electromyography (EMG) assessments of motor unit properties during muscle cooling is limited and equivocal. Therefore, we aimed to determine the impact of local muscle cooling on motor unit firing properties using surface EMG decomposition.
Methods: Twenty participants (12 M, 8 F) completed maximal, evoked, and trapezoidal contractions during thermoneutral and cold muscle conditions. Forearm muscle temperature was manipulated using 10-min neutral (~ 32 °C) or 20-min cold (~ 3 °C) water baths. Twitches and maximal voluntary contractions were performed prior to, and after, forearm immersion in neutral or cold water. Motor unit properties were assessed during trapezoidal contractions to 50% baseline force using surface EMG decomposition. Results: Impaired contractile properties from muscle cooling were evident in the twitch amplitude, duration, and rate of force development indicating that the muscle was successfully cooled by the cold water bath (all d ≥ 0.5, P < 0.05). Surface EMG decomposition showed muscle cooling increased the number of motor units (d = 0.7, P = 0.01) and motor unit action potential (MUAP) duration (d = 0.6, P < 0.001), but decreased MUAP amplitude (d = 0.2, P = 0.012). Individually, neither motor unit firing rates (d = 0.1, P = 0.843) nor recruitment threshold (d = 0.1, P = 0.746) changed; however, the relationship between the recruitment threshold and motor unit firing rate was steeper (d = 1.0, P < 0.001) and had an increased y-intercept (d = 0.9, P = 0.007) with muscle cooling. Conclusions: Since muscle contractility is impaired with muscle cooling, these findings suggest a compensatory increase in the number of active motor units, and small but coupled changes in motor unit firing rates and recruitment threshold to produce the same force. abstract_id: PUBMED:15458468 Analysis of the functional muscle-bone unit of the forearm in pediatric renal transplant recipients. Background: Renal transplantation in children and adolescents is associated with various skeletal complications. The incidence of spontaneous fractures appears to be increased, but the reasons for this are not entirely clear. Our objective was therefore to evaluate macroscopic bone architecture, mass, and strength by peripheral quantitative computed tomography (pQCT), a method that is not influenced by size-related artifacts. In addition, we investigated the muscle-bone relationship in these patients because under physiologic conditions bone strength continually adapts to increasing mechanical loads, that is, muscle force. Methods: In 55 patients (41 males) aged 15.8 ± 4.1 years, we evaluated in a cross-sectional study 4.9 ± 3.6 years after renal grafting bone mass, density, geometry, and strength of the radius, as well as forearm muscle size and strength, using pQCT at the proximal and distal radius, radiography of the second metacarpal shaft and hand dynamometry. Data were compared to a large cohort (N = 350) of healthy children. Results: Muscle mass and force were adequate for body size in pediatric renal transplant recipients. However, the radial bone was characterized by an inadequately thin cortex in relation to muscular force, as shown by a reduced height-adjusted cortical thickness both at the proximal (-0.83 ± 1.12 SDS) and distal radius (-0.52 ± 1.69 SDS), the metacarpal shaft (-0.54 ± 1.35 SDS), and by a reduced relative cortical area (-0.90 ± 1.13 SDS), while the mineralization of trabecular bone was unaltered. As a consequence of cortical thinning, the Strength-Strain Index that reflects the combined strength of trabecular and cortical bone was reduced in these patients.
Conclusion: While bone mineral density of the forearm is not decreased in pediatric renal transplant recipients, bone strength in relation to muscular force is reduced. This alteration may contribute to the increased propensity for fractures in these patients. Answer: The concept of the 'functional muscle-cartilage unit' is a reasonable approach to describe the putative relationship between muscle force and longitudinal growth at the forearm in children and adolescents. This relationship is supported by evidence suggesting that muscular forces partly determine enchondral ossification and, therefore, the longitudinal growth of bone tubes (PUBMED:18824867). The study found that forearm length was correlated with body height in prepubertal and pubertal individuals, and that cross-sectional muscle area increased faster than forearm length and bone area close to the distal growth plate during puberty. Additionally, the ratio of muscle area to bone area, which can be seen as a surrogate of pressure on the distal growth plate, was correlated with forearm length in prepubertal boys and pubertal individuals. These results support the hypothesis that longitudinal growth precedes modeling at the distal forearm, although confounding variables such as puberty may modify the relationship between muscle forces and longitudinal growth in boys (PUBMED:18824867). Furthermore, the 'Functional Muscle-Bone Unit' has been introduced into the diagnostics of pediatric bone diseases, emphasizing the relevance of mechanical signals for bone development in children and adolescents. This concept is based on the relationship of muscle force and mass to bone mass and geometry, where bone is adapted to the tissue strain due to biomechanical forces, and this process is modified by hormonal signals (PUBMED:17194607). In addition, the influence of puberty on muscle development at the forearm has been shown to take a gender-specific course, indicating that it is influenced by hormonal changes. However, the increase in specific grip force is similar in both genders and thus appears to be independent of sex hormones (PUBMED:12067849). Overall, the 'functional muscle-cartilage unit' approach is supported by the correlation between muscle forces and longitudinal growth, the adaptation of bone to biomechanical forces, and the gender-specific patterns of muscle development during puberty. These findings collectively suggest that muscle force is an important factor in the growth and development of the forearm in children and adolescents.
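The correlational analysis at the heart of the DONALD findings summarized above (the MA/BA ratio as a surrogate of growth-plate pressure, tested against forearm length) can be sketched in a few lines. The snippet below is a hypothetical illustration with simulated pQCT-style values; it is not the study's code or data, and the magnitudes are assumptions chosen only for demonstration.

```python
# Minimal sketch (simulated values, not DONALD study data): Spearman
# correlation between the muscle-area/bone-area ratio and forearm length.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 60                                          # hypothetical prepubertal subgroup
muscle_area = rng.normal(2000.0, 300.0, n)      # cross-sectional muscle area, mm^2
bone_area = rng.normal(90.0, 12.0, n)           # bone area near the growth plate, mm^2
forearm_length = 180.0 + 0.01 * muscle_area + rng.normal(0.0, 15.0, n)  # mm

ma_ba_ratio = muscle_area / bone_area           # surrogate of pressure on the growth plate
rho, p_value = spearmanr(ma_ba_ratio, forearm_length)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.4f}")
```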
Instruction: Locoregional disease patterns in well-differentiated and dedifferentiated retroperitoneal liposarcoma: implications for the extent of resection? Abstracts: abstract_id: PUBMED:24705628 Locoregional disease patterns in well-differentiated and dedifferentiated retroperitoneal liposarcoma: implications for the extent of resection? Background: Well-differentiated (WD)/dedifferentiated (DD) liposarcoma is the most common soft tissue sarcoma of the retroperitoneum. The frequency of distant metastasis is low and the major burden of disease is locoregional. We sought to define the patterns of locoregional disease to help guide surgical decision making. Methods: Data were collected from 247 patients with de novo or recurrent tumors treated at our institution from 1993 to early 2012. The number and location of tumors at both initial presentation and subsequent locoregional recurrence were determined by combined analysis of operative dictations and radiologic imaging. Results: Thirty-four percent of patients had multifocal locoregional disease (two or more tumors) at initial presentation to our institution, including 9% who had tumors at synchronous remote retroperitoneal sites. The impact of multifocal disease on overall survival was dependent on histologic subtype (WD vs. DD) and disease presentation (de novo vs. recurrence) at the time of resection. Among patients with initial unifocal disease, 57% progressed to multifocal locoregional disease with subsequent recurrence, including 11% with new tumors outside of the original resection field. No clinicopathologic or treatment-related variable, including the type or extent of resection, was predictive of either multifocal or 'outside field' progression. Conclusions: Multifocal disease is common in patients with WD/DD retroperitoneal liposarcoma, and tumors can also develop at remote, locoregional sites. Surgical resection remains the primary method of locoregional control in this disease; however, the aggressiveness of resection should be individualized, with consideration of both tumor and patient-related factors. abstract_id: PUBMED:38230390 A case of primary orbital liposarcoma with dedifferentiated transformation from a well-differentiated form. Purpose: Primary orbital liposarcomas are rare. To the best of our knowledge, only four cases of primary dedifferentiated liposarcomas of the orbit have been reported. Furthermore, there have been no reports of primary orbital liposarcomas transitioning from a highly differentiated to a dedifferentiated form. Here, we report a case of primary orbital liposarcoma that was well-differentiated at the time of initial resection at our hospital but had dedifferentiated on recurrence 10 years after the initial resection. Observations: The patient was diagnosed with an inflammatory mass after an initial tumor resection by a previous physician at age 52. Thereafter, there were four recurrences (first to fourth recurrences), and the patient underwent five surgeries and radiotherapy. For the fifth recurrence, he first visited our hospital at age 64 and was diagnosed with a well-differentiated liposarcoma after undergoing tumor resection. When the tumor recurred 9 years later (the sixth recurrence), it was well-differentiated. When the tumor recurred (the seventh recurrence) six months after surgery at the age of 73 years, the patient underwent orbital exenteration because of rapid tumor growth, and pathological examination showed that the tissue had changed to a dedifferentiated liposarcoma.
Conclusions And Importance: Primary well-differentiated orbital liposarcoma may transform to a dedifferentiated form over time. The risk of dedifferentiation at recurrence should be considered in developing a treatment plan, even if the initial pathology is a well-differentiated liposarcoma. abstract_id: PUBMED:28856628 Primary colonic well-differentiated / dedifferentiated liposarcoma of the ascending colon: a case report. Background: Primary colonic and dedifferentiated liposarcomas are both remarkably rare. This work describes a case of primary colonic well-differentiated/dedifferentiated liposarcoma and reviews the clinical characteristics and current therapies for liposarcoma tumors. Case Presentation: A 52-year-old woman was referred to our hospital with a submucosal tumor of the ascending colon. Clinical analysis by ultrasound colonoscopy and computed tomography revealed a partially ossified tumor with irregular edges continuous with the muscular layer. High F-18 deoxyglucose uptake was detected by positron emission tomography. Radical resection with lymph node dissection was performed, yielding a tumor specimen approximately 6.5 × 4.0 × 3.2 cm. Neoplastic spindle cell proliferation was found from submucosa to subserosa. Well-differentiated adipose tissue surrounded the tumor, but contained atypical nuclei with condensed chromosomes. Immunohistochemical staining was positive for p16, CDK4, and MDM2 expression. Based on these findings, a diagnosis of well-differentiated/dedifferentiated liposarcoma was given. Dedifferentiated liposarcomas are more aggressive than their well-differentiated, low-grade counterparts. While local recurrence can occur with both tumor types, dedifferentiated liposarcomas are more prone to developing distant metastases. The patient received no postoperative therapy, and no recurrences have been observed 12 months after surgery. Conclusions: Here we report an extremely rare case of well-differentiated/dedifferentiated liposarcoma of the ascending colon. The dedifferentiated component was high-grade liposarcoma and well-differentiated liposarcoma was detected around the main tumor. abstract_id: PUBMED:34366104 Prognosis and survival of patients diagnosed with well-differentiated and dedifferentiated retroperitoneal liposarcoma. Objectives: The main objective is to establish the overall survival and disease-free survival profiles regarding the patients with retroperitoneal liposarcoma, making a comparison based on the well-differentiated and dedifferentiated histological subtypes. The secondary objectives are to descriptively analyze the clinical characteristics of said patients and to identify and analyze other independent variables that might modify these survival profiles significantly. Methods: An observational and analytical study was performed using a retrospective historical cohort that was followed prospectively. The inclusion criteria consisted of: the procedure of liposarcoma located in the retroperitoneum, the well-differentiated and dedifferentiated histological subtypes, between January 2002 and May 2019. As a result, 32 patients took part in this study's sample. Kaplan-Meier estimator was used to summarize the results and log-rank test was used in the comparative analysis. Results: The overall survival at 5 years was around 59%. No differences were found between the patients with a well-differentiated subtype compared to the dedifferentiated ones (p=0.834). 
The disease-free survival at 2 years was 59% regarding the well-differentiated and 26% regarding the dedifferentiated, with these differences being statistically significant (p=0.005). None of the other studied variables modified these survival profiles significantly. Conclusions: Dedifferentiated retroperitoneal liposarcomas show less disease-free survival than well-differentiated liposarcomas. However, regarding overall survival no differences can be claimed. abstract_id: PUBMED:35753575 Prognosis and survival of patients diagnosed with well-differentiated and dedifferentiated retroperitoneal liposarcoma. Objectives: The main objective is to establish the overall survival and disease-free survival profiles regarding the patients with retroperitoneal liposarcoma, making a comparison based on the well-differentiated and dedifferentiated histological subtypes. The secondary objectives are to descriptively analyze the clinical characteristics of said patients and to identify and analyze other independent variables that might modify these survival profiles significantly. Methods: An observational and analytical study was performed using a retrospective historical cohort that was followed prospectively. The inclusion criteria consisted of: the procedure of liposarcoma located in the retroperitoneum, the well-differentiated and dedifferentiated histological subtypes, between January of 2002 and May of 2019. As a result, 32 patients took part in this study's sample. Kaplan-Meier estimator was used to summarise the results and log-rank test was used in the comparative analysis. Results: The overall survival at 5 years was around 59%. No differences were found between the patients with a well-differentiated subtype compared to the dedifferentiated ones (p = 0.834). The disease-free survival at 2 years was 59% regarding the well-differentiated and 26% regarding the dedifferentiated, with these differences being statistically significant (p = 0.005). None of the other studied variables modified these survival profiles significantly. Conclusions: Dedifferentiated retroperitoneal liposarcomas show less disease-free survival than well-differentiated liposarcomas. However, regarding overall survival no differences can be claimed. abstract_id: PUBMED:28293370 An exceptional association between retroperitoneal dedifferentiated liposarcoma and well differentiated pericolonic liposarcoma: about a case. Liposarcoma is an extremely rare malignant adipose mesenchymal tumor. The World Health Organization classifies liposarcomas into five subtypes: well differentiated, myxoid, pleomorphic, dedifferentiated and mixed. The association between multiple and synchronous location of several subtypes of liposarcomas is exceptional. Only 34 cases have been reported in the literature. We report a case of synchronous association between retroperitoneal dedifferentiated liposarcoma and two small well-differentiated lipoma-like pericolic liposarcomas. Anatomopathological aspects, therapeutic options and prognostic factors of liposarcomas have been reviewed in this study. abstract_id: PUBMED:33754090 Limb Salvage Resection of Massive Dedifferentiated Thigh Liposarcoma in a Patient Lost to Follow-Up. Dedifferentiated liposarcoma (DDL) is a rare soft tissue tumor that represents a non-lipogenic progression of well-differentiated liposarcoma (WDL). Unlike WDL, DDL has the propensity for metastasis and is associated with an increased incidence of local recurrence.
For DDL of the extremities that is resectable with acceptable functional outcomes, treatment includes primary surgical resection with negative margins. Although rare due to advances in reconstructive techniques, amputation for DDL of the extremities should be considered in cases in which appropriate tumor resection cannot be performed with adequate preservation of limb function. We present the clinical progression of a patient with a large DDL of the right thigh who was initially lost to follow-up, but ultimately underwent delayed definitive therapy with the intent for limb salvage. This case illustrates the importance of assessing neurovascular, osseous, and soft tissue involvement to properly predict and preserve limb function while achieving adequate tumor resection. abstract_id: PUBMED:28903316 MDM2 antagonists synergize with PI3K/mTOR inhibition in well-differentiated/dedifferentiated liposarcomas. Background: Well-differentiated/dedifferentiated liposarcomas (WDLPS/DDLPS) are characterized by a consistent amplification of the MDM2 gene. The PI3K/AKT/mTOR pathway has also been suggested to play an important role in their tumorigenesis. Our goal was to determine whether combined MDM2 and PI3K/AKT/mTOR targeting is associated with higher anti-tumor activity than single agent alone in preclinical models of WDLPS/DDLPS. Methods: WDLPS/DDLPS cells were exposed to RG7388 (MDM2 antagonist) and BEZ235 (PI3K/mTOR dual inhibitor) after which apoptosis and signaling/survival pathway perturbations were monitored by flow cytometry and Western blot analysis. Xenograft mouse models were used to assess tumor growth and animal survival. Western blotting, histopathology, and tumor volume evolution were used for the assessment of treatment efficacy. Results: The PI3K/AKT/mTOR pathway was upregulated in up to 81% of the human WDLPS/DDLPS samples analysed. Treatment with RG7388 and BEZ235 resulted in greater anti-tumor activity than either drug alone, with a significant difference in cell viability after 72 h of treatment with RG7388 alone, BEZ235 alone, and the combination of both agents. Consistent with these observations, we found a significant increase in apoptosis with the combination versus the single agent treatment alone. We then analysed the in vivo antitumor activity of RG7388 and BEZ235 in a xenograft model of DDLPS. The combination regimen significantly reduced tumor growth rate in comparison with either single agent alone. Conclusions: Our results represent the first in vivo evidence of synergy between MDM2 and PI3K/AKT/mTOR antagonists and represent a strong rationale to evaluate the therapeutic potential of such a combination in WDLPS/DDLPS.
Postoperative histology showed mature adipose tissue with scattered "adipoblast"-like cells, together with moderately-to-severely heterogeneous spindle cells, pleomorphic cells, or tumor giant cells. Immunohistochemistry (IHC) demonstrated positive staining for MDM2 and CDK4, confirming that the abdominal tumor was a WDLPS that gradually progressed to DDLPS. Post-operative targeted sequencing and IHC confirmed the POC1B::ROS1 fusion gene in DDLPS. Whole-exome sequencing (WES) revealed that WDLPS and DDLPS shared similar somatic mutations and copy number variations (CNVs), whereas DDLPS had more mutated genes and a higher and more concentrated amplification of the chromosome 12q region. Furthermore, somatic mutations in DDLPS were significantly reduced after treatment with CDK4 inhibitors, while CNVs remained elevated. Conclusion: Due to the high likelihood of recurrence of liposarcoma, various effective treatments should be taken into consideration even if surgery is the primary treatment for recurrent liposarcoma. To effectively control the course of the disease following surgery, combination targeted therapy may be a viable alternative to chemotherapy and radiotherapy in the treatment of liposarcoma. abstract_id: PUBMED:37760460 IGF2BP3 as a Prognostic Biomarker in Well-Differentiated/Dedifferentiated Liposarcoma. Background: Although IGF2BP3 has been implicated in tumorigenesis and poor outcomes in multiple cancers, its role in soft-tissue sarcoma (STS) remains unknown. Preliminary data have suggested an association with IGF2BP3 expression among patients with well-differentiated/dedifferentiated liposarcoma (WD/DD LPS), a disease where molecular risk stratification is lacking. Methods: We examined the survival associations of IGF2BP3 via univariate and multivariate Cox regression in three unique datasets: (1) the Cancer Genome Atlas (TCGA), (2) an in-house gene microarray, and (3) an in-house tissue microarray (TMA). A fourth dataset, representing an independent in-house TMA, was used for validation. Results: Within the TCGA dataset, IGF2BP3 expression was a poor prognostic factor uniquely in DD LPS (OS 1.6 vs. 5.0 years, p = 0.009). Within the microarray dataset, IGF2BP3 expression in WD/DD LPS was associated with worse survival (OS 7.7 vs. 21.5 years, p = 0.02). IGF2BP3 protein expression also portended worse survival in WD/DD LPS (OS 3.7 vs. 13.8 years, p < 0.001), which was confirmed in our validation cohort (OS 2.7 vs. 14.9 years, p < 0.001). In the multivariate model, IGF2BP3 was an independent risk factor for OS (HR 2.55, p = 0.034). Conclusion: IGF2BP3 is highly expressed in a subset of WD/DD LPS. Across independent datasets, IGF2BP3 is also a biomarker of disease progression and worse survival.
No clinicopathologic or treatment-related variable, including the type or extent of resection, was predictive of either multifocal or 'outside field' progression. This suggests that surgical resection remains the primary method of locoregional control in WD/DD retroperitoneal liposarcoma, but the aggressiveness of resection should be individualized, considering both tumor and patient-related factors (PUBMED:24705628). In terms of prognosis and survival, dedifferentiated retroperitoneal liposarcomas show less disease-free survival than well-differentiated liposarcomas. However, no differences in overall survival between the two subtypes were found in a study of 32 patients (PUBMED:34366104, PUBMED:35753575). This indicates that while dedifferentiated liposarcomas may recur more frequently, the overall survival rate is not significantly different from well-differentiated liposarcomas, which may influence decisions regarding the extent of resection. In conclusion, the patterns of locoregional disease in WD/DD retroperitoneal liposarcoma and the implications for the extent of resection suggest that while multifocal and remote site disease is common, the decision for the extent of resection should be tailored to the individual patient, taking into account the histologic subtype and the potential for disease progression and recurrence.
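Several of the survival figures cited above come from Kaplan-Meier estimates compared with a log-rank test. As a rough illustration of that workflow, the sketch below fits a Kaplan-Meier curve and runs a log-rank comparison for two hypothetical subtype cohorts using the lifelines package; all follow-up times and event indicators are fabricated and are not the studies' records.

```python
# Minimal sketch (fabricated follow-up data, not the cited studies' records):
# Kaplan-Meier estimation and a log-rank test comparing WD vs DD cohorts.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
t_wd = rng.exponential(60.0, 16)      # months to recurrence, well-differentiated
t_dd = rng.exponential(20.0, 16)      # months to recurrence, dedifferentiated
e_wd = rng.random(16) < 0.7           # True = recurrence observed, False = censored
e_dd = rng.random(16) < 0.8

kmf = KaplanMeierFitter()
kmf.fit(t_wd, event_observed=e_wd, label="WD")
print(kmf.median_survival_time_)      # median disease-free interval, WD cohort

result = logrank_test(t_wd, t_dd, event_observed_A=e_wd, event_observed_B=e_dd)
print(f"log-rank p = {result.p_value:.4f}")
```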
Instruction: Are there income differences in the impact of a national reimbursement policy for smoking cessation treatment and accompanying media attention? Abstracts: abstract_id: PUBMED:24814565 Are there income differences in the impact of a national reimbursement policy for smoking cessation treatment and accompanying media attention? Findings from the International Tobacco Control (ITC) Netherlands Survey. Background: This study examined possible income differences in the impact of a national reimbursement policy for smoking cessation treatment and accompanying media attention in the Netherlands in 2011. Methods: We used three waves of the International Tobacco Control (ITC) Netherlands Survey, a nationally representative longitudinal sample of smokers aged 15 years and older (n=1912). The main analyses tested trends and income differences in outcome measures (smokers' quit attempt rates, use of behavioral counseling, use of cessation medications, and quit success) and awareness variables (awareness of reimbursement possibilities, the media campaign, medication advertisements and other media attention) with generalized estimating equations analyses. Results: In the first half of 2011, there was a significant increase in quit attempts (odds ratio (OR)=2.02, p<0.001) and quit success (OR=1.47, p<0.001). Use of counseling and medications remained stable at 3% of all smokers in this period. Awareness of reimbursement possibilities increased from 11% to 42% (OR=6.38, p<0.001). Only awareness of the media campaign was associated with more quit attempts at the follow-up survey (OR=1.95, p<0.001). Results were not different according to smokers' income level. Conclusions: The Dutch reimbursement policy with accompanying media attention was followed by an increase in quit attempts and quit success, but use of cessation treatment remained stable. The impact of the policy and media attention did not seem to have decreased or increased socioeconomic inequalities in quit attempts, use of cessation treatment, or quit success. abstract_id: PUBMED:24842854 Effectiveness of a national reimbursement policy and accompanying media attention on use of cessation treatment and on smoking cessation: a real-world study in the Netherlands. Background: In 2011, the Netherlands implemented a national policy that ensured that health insurance companies reimbursed behavioural counselling for smoking cessation or the combination of behavioural counselling with pharmacological therapy. Objective: To examine the real-world impact of a national reimbursement policy and accompanying media attention on use of cessation treatment and on smoking cessation. Methods: We used a four-wave longitudinal survey among 2763 adult smokers that started in September 2010 and was repeated at approximately 3-month intervals until June 2011. Two survey waves were conducted before the implementation of the policy and two survey waves after. Findings: There were significant increases in quit attempts (among moderate-to-heavy smokers) and in quit success (among all smokers) following the implementation of the reimbursement policy and the media attention. Use of behavioural counselling did not increase, while use of pharmacological therapy without behavioural counselling (unreimbursed treatment) increased among moderate-to-heavy smokers. Attention to media about the reimbursement was significantly associated with more quit attempts and more quit success.
Awareness of the policy was significantly associated with more use of reimbursed treatment among all smokers, while attention to the media coverage was only significantly associated with more use of reimbursed treatment among moderate-to-heavy smokers. Awareness/attention variables were not significantly associated with use of unreimbursed treatment. Conclusions: It seems that a national reimbursement policy for smoking cessation treatment that is accompanied by media attention can increase cessation. Our findings suggest that this increase can (partly) be ascribed to the media attention that accompanied the policy implementation. abstract_id: PUBMED:22994396 Population impact of reimbursement for smoking cessation: a natural experiment in The Netherlands. Aims: To report on the impact of financial reimbursement of pharmacotherapy for smoking cessation in combination with behavioural support on the number of enrollees to proactive counselling in the Dutch national quitline. Design: Descriptive time-series analysis was used to compare quitline enrolment in 2010 and 2012 (no reimbursement) with 2011 (reimbursement). Settings: National smoking cessation quitline. Participants: Smokers signing up for proactive counselling. Measurements: Treatment enrolment data recorded by the quitline as part of usual care from 2010, 2011 and 2012 (until May). Findings: In 2010, a total of 848 smokers started treatment. In 2011, 9091 smokers enrolled. In 2012, the number of enrollees dropped dramatically, even below the 2010 level. In addition, the proportion of smokers in the population dropped from 27.2% in 2010 to 24.7% in 2011. Conclusions: The introduction of a national reimbursement system in the Netherlands was associated with a more than 10-fold increase in telephone counselling for smoking cessation and suggests that reimbursement for smoking cessation contributed to improvements in public health. abstract_id: PUBMED:27257383 Inequalities in the Impact of National Reimbursement of Smoking Cessation Pharmacotherapy and the Influence of Injunctive Norms: An Explorative Study. In 2011, the Dutch government reimbursed smoking cessation pharmacotherapy with behavioral therapy for quitting smokers. We investigate whether inequalities in the use of pharmacotherapy change and, if not, whether this is due to a relatively positive injunctive norm in lower socioeconomic status (SES) groups. A total of 75,415 participants aged ≥15 years from the Dutch Continuous Survey of Smoking Habits, 2009-2012, were considered with the following measures: SES (education/income), injunctive norm (mostly acceptable/neutral/mostly unacceptable), period (2011/all other years), and pharmacotherapy use (yes/no). The proportion of low SES smokers compared with high SES smokers making quit attempts with pharmacotherapy did not differ significantly. The injunctive norm of low SES smokers differed significantly from high SES smokers and nonsmokers of all SES levels. Low income smokers with mostly acceptable injunctive norms were significantly less likely to make quit attempts using pharmacotherapy than those with a neutral or less accepting injunctive norm. The significantly lower use of pharmacotherapy in quit attempts in low income smokers with a positive injunctive norm toward smoking may partly underlie the lack of uptake of reimbursed pharmacotherapy in low SES smokers. abstract_id: PUBMED:26451100 Budgetary impact analysis on funding smoking-cessation drugs in patients with COPD in Spain. 
The aim of the study was to assess the budgetary impact of funding smoking-cessation drugs in COPD patients in Spain. A hybrid model (cohort and Markov) was developed for a 5-year time horizon. Only approved cessation drugs (varenicline, bupropion, and nicotine replacement therapy) were considered. Irrespective of the drug, the model allowed for an initial cessation attempt, and up to three additional attempts in case of failure or smoking relapse during a 5-year period. Drug effectiveness was based on controlled clinical trials. National Health System perspective was applied; therefore, only medical resources were included. The pharmaceutical costs for smoking-cessation drugs, extra medical follow-up as a consequence of public reimbursement, and annual savings for health costs avoided due to stopping smoking were considered. The model estimated that 17,756 COPD patients would stop smoking if public funding was available, compared with 1,303 without reimbursement. In the reimbursement scenario, the savings accounted for a total of €48.0 million, compensating for expenditures on drugs and medical visits (€40.4 million). Accumulated total additional savings of €4.3 million over 5 years were shown compared with the scenario without reimbursement. Sensitivity analyses supported the robustness of the results. Funding smoking-cessation drugs in COPD patients seems to be an efficient option and a National Health System drug reimbursement scheme would represent a cost-saving policy in Spain. abstract_id: PUBMED:15923466 Impact of UK policy initiatives on use of medicines to aid smoking cessation. Context: Increasing the use of effective smoking cessation aids could in principle have a substantial public health impact. The UK government has undertaken several major policy initiatives to try to increase usage of smoking cessation medicines. It is important to evaluate what effect, if any, these have had to inform future policy in the UK and internationally. Objective: This study used sales data to examine the impact of government initiatives to increase access to smoking cessation medicines. Design: Information about prescription and non-prescription sales (1999-2002) was obtained. Estimates of utilisation were compared with findings from the Office of National Statistics (ONS) omnibus surveys. The effects of policy initiatives (making the medicines reimbursable and making them available on general sale outside pharmacies) were assessed by means of time series analysis. In addition, a new nicotine replacement therapy (NRT) product (a nicotine lozenge) was launched and the effect of this on total utilisation was assessed. Results: Making bupropion, and subsequently nicotine replacement therapy (NRT), reimbursable had a major impact on medication usage; the estimated increase in each case was more than 80 000 "treatment weeks" purchased per month. In addition, introduction of a nicotine lozenge increased total utilisation and did not detract from usage of other medicines. According to both the sales and the survey data, the proportion of smokers using medicines to aid smoking cessation more than doubled from 8-9% in 1999 to 17% in 2002. The ONS surveys showed no increase in the proportions of smokers making quit attempts and so the effects were solely on the proportions of quit attempts that were aided by medication. Conclusions: In the UK, making smoking cessation medicines reimbursable led to a large increase in utilisation.
While the effect on smoking prevalence would be too small to be detected in national surveys, it could have a substantial public health impact. abstract_id: PUBMED:27794567 Budgetary Impact Analysis of Reimbursement Varenicline for the Smoking-Cessation Treatment in Patients with Cardiovascular Diseases, Chronic Obstructive Pulmonary Disease or Type-2 Diabetes Mellitus: A National Health System Perspective. Aims: The study aimed to assess the budgetary impact (BI) of reimbursing varenicline in patients with chronic obstructive pulmonary disease (COPD), type-2 diabetes mellitus (t2-DM) or cardiovascular diseases (CVD). Methods: The BI was estimated comparing the current non-reimbursed scenario versus a projected reimbursed scenario using the Spanish National Health System (SNHS) perspective. A hybrid model was developed using epidemiological data and Markov chains to estimate smoking cessation rates with varenicline during a 5-year horizon. Costs of cessation were considered in the reimbursement scenario only. Efficacy, expressed as a 1-year continuous abstinence rate, was derived from clinical trials. Cost savings due to smoking cessation were extracted from local cost-of-illness studies. Results are shown as incremental cost savings. Univariate sensitivity analysis was also applied. Results: A total of 68,684 patients stopped smoking in the reimbursed scenario compared with 15,208 without reimbursement. In the reimbursed scenario, total savings accounted for €36.3 million, showing €14.6 million in accumulated additional savings compared with the scenario without reimbursement. Sensitivity analyses showed results to be robust with monetary savings starting in the third year of modeling. Conclusion: Reimbursement of varenicline in smoking cessation is a cost-effective health policy in the SNHS in COPD, t2-DM or CVD, and could produce cost savings starting in the third year of implementation. abstract_id: PUBMED:32341191 Budgetary impact from multiple perspectives of sustained antitobacco national media campaigns to reduce the harms of cigarette smoking. Background: High-intensity antitobacco media campaigns are a proven strategy to reduce the harms of cigarette smoking. While buy-in from multiple stakeholders is needed to launch meaningful health policy, the budgetary impact of sustained media campaigns from multiple payer perspectives is unknown. Methods: We estimated the budgetary impact and time to breakeven from societal, all-payer, Medicare, Medicaid and private insurer perspectives of national antitobacco media campaigns in the USA. Campaigns of 1-, 5- and 10-year durations were assessed in a microsimulation model to estimate the 10- and 20-year health and budgetary impact. Simulation model inputs were obtained from literature and both public use and proprietary data sets. Results: The microsimulation predicts that a 10-year national smoking cessation campaign would produce net savings of $10.4, $5.1, $1.4, $3.6 and $0.2 billion from the societal, all-payer, Medicare, Medicaid and private insurer perspectives, respectively. National antitobacco media campaigns of 1-, 5- and 10-year durations could produce net savings for Medicaid and Medicare within 2 years, and for private insurers within 6-9 years. A 10-year campaign would reduce adult cigarette smoking prevalence by 1.2 percentage points, prevent 23 500 smoking-attributable deaths over the first 10 years.
In sensitivity analysis, media campaign costs would be offset by reductions in smoking-attributable medical care spending among all payers combined within 6 years in all tested scenarios. Conclusions: 1-, 5- and 10-year antitobacco media campaigns all yield net savings within 10 years from all perspectives. Multiyear campaigns yield substantially higher savings than a 1-year campaign. abstract_id: PUBMED:19087472 The impact of a smoking cessation policy on visits to a psychiatric emergency department. Objective: Smoking cessation policies are increasingly imposed in mental health facilities because of the high prevalence of tobacco smoking and its related adverse health consequences. The objective of this study was to measure the impact of 2 smoking cessation policies--one imposed in a specific psychiatric hospital and the other across the entire province of Ontario--on weekly visit rates to a psychiatric emergency department. Methods: Administrative data records from consecutive patient visits to a psychiatric emergency department were grouped by week from March 1, 2002, to December 31, 2005. The patients were grouped into 3 broad diagnostic categories: substance-related disorders, psychotic disorders, and other disorders. The impact of 2 smoking cessation policies--one imposed on September 21, 2005 at the Centre for Addiction and Mental Health (CAMH) and one imposed on May 31, 2006 across the province of Ontario--on psychiatric emergency department visit rates was measured using time series analysis. Results: The CAMH-specific smoking cessation policy had no impact on psychiatric emergency department visit rates in any diagnostic category. The province-wide smoking cessation policy resulted in a 15.5% reduction in patient visits for patients with a primary diagnosis of psychotic disorder. Conclusions: The benefits of a smoking cessation policy need to be balanced against the impact of the policy on the likelihood of patients to seek treatment. abstract_id: PUBMED:31261447 Predictors of long-term abstinence rate by income level in the Korean smoking cessation programme. Background And Aims: To assess whether predictors of success in stopping smoking vary as a function of income level in Korean smoking cessation services. Design: Prospective study of predictors of smoking cessation up to 6 months' follow-up. Participants: A sample of 954 people (mean age 49.13 ± 10.69 years; 863 [90.5%] men) enrolled in the Korean National Health Insurance Service smoking cessation programme in 2015. Measures: The outcome measure was self-reported continuous abstinence up to 6-month follow-up. Predictors were income and other sociodemographic variables as well as smoking-related variables measured at baseline. Results: The continuous 6-month abstinence rate was 30.5%. The adjusted odds of 6-month continuous abstinence were lower among low-income versus the middle- or high-income smokers (OR, 0.54; 95% CI, 0.35-0.84), those with severe versus light/moderate cigarette dependence (OR, 0.72; 95% CI, 0.52-0.98), and use of bupropion versus varenicline (OR, 0.60; 95% CI, 0.39-0.91). The association between cigarette dependence and outcome was only present among low-income smokers. Conclusions: Lower income, higher cigarette dependence, and choice of bupropion versus varenicline are associated with lower chances of stopping smoking in Korean smoking cessation services, but the association with cigarette dependence is only found in low-income smokers.
Answer: The impact of a national reimbursement policy for smoking cessation treatment and accompanying media attention does not appear to differ significantly by income level. According to the International Tobacco Control (ITC) Netherlands Survey, the Dutch reimbursement policy with media attention in 2011 was followed by an increase in quit attempts and quit success, but the use of cessation treatment remained stable. Importantly, the impact of the policy and media attention did not seem to decrease or increase socioeconomic inequalities in quit attempts, use of cessation treatment, or quit success (PUBMED:24814565). Another study in the Netherlands found that the national reimbursement policy and media attention led to significant increases in quit attempts (among moderate-to-heavy smokers) and quit success (among all smokers). However, the use of behavioral counseling did not increase, while the use of pharmacological therapy without behavioral counseling increased among moderate-to-heavy smokers. Awareness of the policy was significantly associated with more use of reimbursed treatment among all smokers, but the awareness/attention variables were not significantly associated with the use of unreimbursed treatment (PUBMED:24842854). Furthermore, an explorative study indicated that the proportion of low socioeconomic status (SES) smokers compared with high SES smokers making quit attempts with pharmacotherapy did not differ significantly. However, low-income smokers with mostly acceptable injunctive norms were significantly less likely to make quit attempts using pharmacotherapy than those with a neutral or less accepting injunctive norm. This suggests that the significantly lower use of pharmacotherapy in quit attempts in low-income smokers with a positive injunctive norm toward smoking may partly underlie the lack of uptake of reimbursed pharmacotherapy in low SES smokers (PUBMED:27257383). In summary, the available evidence suggests that national reimbursement policies for smoking cessation treatment, along with media attention, have increased quit attempts and quit success without creating significant income differences in their impact. However, there may be underlying factors, such as injunctive norms, that influence the use of pharmacotherapy among low-income smokers (PUBMED:24814565; PUBMED:24842854; PUBMED:27257383).
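The ITC analyses cited above used generalized estimating equations (GEE) to model repeated quit-attempt outcomes and report odds ratios. The sketch below shows what such a model can look like in Python with statsmodels, run on a fabricated smoker panel; the variable names, data-generating process, and effect sizes are assumptions for illustration only, not the survey's data or code.

```python
# Minimal sketch (fabricated panel, not ITC survey data): GEE logistic model
# with an income-by-wave interaction to probe income differences in quitting.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_smokers, n_waves = 300, 3
df = pd.DataFrame({
    "smoker_id": np.repeat(np.arange(n_smokers), n_waves),
    "wave": np.tile(np.arange(n_waves), n_smokers),
    "low_income": np.repeat(rng.integers(0, 2, n_smokers), n_waves),
})
# Hypothetical data-generating process: quit attempts rise in later waves,
# with essentially no built-in income difference.
logit = -1.0 + 0.4 * df["wave"] + 0.05 * df["low_income"]
df["quit_attempt"] = (rng.random(len(df)) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

model = smf.gee("quit_attempt ~ wave * low_income", groups="smoker_id", data=df,
                family=sm.families.Binomial(), cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
print(np.exp(result.params))  # exponentiated coefficients = odds ratios
```

An interaction odds ratio near 1 in this framework corresponds to the papers' finding that the policy's effect did not differ by income level; the exchangeable working correlation accounts for repeated observations of the same smoker.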
Instruction: Adequate oral fluid intake in hospitalized stroke patients: does viscosity matter? Abstracts: abstract_id: PUBMED:22949278 Adequate oral fluid intake in hospitalized stroke patients: does viscosity matter? Purpose: Sufficient intake of oral fluids to meet hydration needs is a critical recovery issue for patients hospitalized post stroke. Concerns for adequate oral fluid intake are generally focused on dysphagic patients restricted to thickened liquids; however, fluid intake patterns in stroke patients receiving thin liquids are unknown. Method: This study examines the oral fluid intake patterns of three groups over 72 hours: community dwelling individuals, patients hospitalized post stroke receiving thin liquids and patients hospitalized post stroke receiving thickened liquids. Results: Mean oral fluid intake differed significantly between the two hospitalized groups (p = .04), with individuals receiving thickened liquids consuming less. Less than 1% of patients hospitalized post stroke met a minimum standard of 1500 mL/day, regardless of liquid viscosity. Conversely, community dwelling participants consumed significantly more fluids on average than their hospitalized counterparts. Conclusion: Compliance with beverage preference, frequency of beverage offering, and inaccurate preparation of thickened beverages were identified as factors potentially influencing fluid intake. abstract_id: PUBMED:24007386 Intake of thickened liquids by hospitalized adults with dysphagia after stroke. There is widespread concern that individuals with dysphagia as a result of stroke do not drink enough fluids when they are prescribed thickened liquids. This paper details a retrospective audit of thickened liquid consumption of 69 individuals with dysphagia following stroke in acute and rehabilitation hospitals in Adelaide, South Australia. Hospitalized individuals with dysphagia following stroke drank a mean of 781 ml (SD = 507 ml) of prescribed thickened liquids per day, significantly less in the acute setting (M = 519 ml, SD = 305 ml) than in the rehabilitation setting (M = 1274 ml, SD = 442 ml) (t(67) = -8.34, p < .001). This daily intake of thickened liquids was lower than recommended standards of fluid intake for hospitalized adults. Fluid intake could be increased with definitive protocols for the provision and monitoring of consumption of thickened liquids, by offering more fluid via food or free water protocols or by routine use of non-oral supplementary routes. Future research into the effectiveness of such recommendations needs to evaluate not only the impact on fluid intake but also on health outcomes. abstract_id: PUBMED:11733894 Quantifying fluid intake in dysphagic stroke patients: a preliminary comparison of oral and nonoral strategies. Objective: To determine whether dysphagic stroke patients receiving oral (thickened-fluid dysphagia) diets or nonoral (enteral feedings supplemented with intravenous fluids) diets met their estimated fluid requirements. Design: Cohort study. Setting: University-affiliated hospital. Participants: Thirteen dysphagic patients with new strokes were studied for 21 days postadmission to hospital. Interventions: Seven patients (group 1) were started on nonoral feeding and later progressed to oral diets and 6 patients (group 2) received oral dysphagia diets only. Main Outcome Measure: Fluid intake.
Results: Fluid intake of patients in group 1 significantly declined over the 21 days (mean +/- standard deviation, 3158 +/- 523 mL/d vs 984 +/- 486 mL/d; p < .0001), representing 134% +/- 26% and 43% +/- 20% of their fluid requirements, respectively. Mean fluid intake of patients in group 2 was 755 +/- 162 mL/d, representing 33% +/- 5% of requirements. This volume was significantly lower than the fluid intake of patients who received nonoral feeding (p < .0001). Conclusions: Dysphagic stroke patients who received thickened-fluid dysphagia diets failed to meet their fluid requirements whereas patients on enteral feeding and intravenous fluid regimens received ample fluid. abstract_id: PUBMED:25565596 Barriers to oral fluid intake: beyond thickened liquids. Objectives: Substandard oral fluid intake in poststroke patients receiving thickened liquids has been well documented; however, more recently, it has been reported in poststroke patients receiving thin liquids. Factors contributing to substandard fluid intake have been limited to the altered taste/texture of thickened beverages. The aim of this study was to determine if functional deficits poststroke based on admission Functional Independence Measure (FIM) scores for expression, problem solving, memory, and eating as well as dysphagia severity predict oral fluid intake for poststroke patients regardless of liquid viscosity. A second aim was to determine if there is a significant difference in the amount of oral fluids offered and consumed between patients receiving thin liquids and patients receiving thickened liquids. Methods: Thirty-nine patients with a new diagnosis of ischemic stroke participated. Patients were assigned to one of two groups based on the consistency of liquids they were receiving: group 1, 21 receiving thin liquids, and group 2, 18 receiving nectar or honey consistency. Fluids offered and consumed were monitored for 72 consecutive hours. Admission FIM scores and dysphagia severity ratings were collected. Results: Functional deficits in eating significantly predicted oral fluid intake in the thin-liquid group (p = .0575), whereas functional deficits in cognition (memory and problem solving) significantly predicted oral fluid intake in the thickened-liquid group (p = .0037). Patients receiving thin liquids consumed significantly more than patients receiving thickened liquids (mean = 1,405.45 ml and SD = ±727.1 ml vs. mean = 906.58 ml and SD = ±317.4 ml; p = .0031); however, they were also offered significantly more fluids (mean = 2,574.7 ml vs. 1,588.9 ml, p = .0002). Conclusions: On average, poststroke patients consume a substandard amount of fluid during hospitalization, regardless of viscosity. Although patients receiving thin liquids consumed significantly more, they were offered, on average, approximately 1,000 ml more fluids per 24-hour period than the thickened-liquid group. Functional deficits after stroke influence oral fluid intake and should be considered as potential barriers to fluid intake for poststroke patients. abstract_id: PUBMED:36620126 FACTORS ASSOCIATED WITH ORAL INTAKE ABILITY IN PATIENTS WITH ACUTE-STAGE STROKE. Objective: To investigate the factors predicting oral feeding ability following acute stroke. Methods: This retrospective study compared patients admitted to a stroke care unit in 2 groups: an oral intake group and a tube feeding group. The groups were evaluated for 28 items and initial blood investigation tests, and the results compared.
Logistic regression analysis was used to identify the clinical variables significantly associated with oral feeding ability. Results: A total of 255 stroke patients (162 in the oral intake group and 93 in the tube feeding group) were admitted to the stroke care unit. Significant differences were observed between the 2 groups for 20 items. Logistic analysis found that the following variables were significant in the prediction model: age, date of initiation of oral feeding, stroke recurrence/patient deterioration during hospitalization, and date of initiation of occupational therapy. Conclusion: Factors associated with achieving oral intake among stroke care unit patients were: young age at time of admission; starting oral intake early; no stroke recurrence/patient deterioration during hospitalization; and achieving rehabilitation of daily activities early during the physical function recovery stage. abstract_id: PUBMED:30655231 Fluid and energy intake in stroke patients during acute hospitalization in a stroke unit. Proper hydration and nutrition have been found to be contributing factors to a better recovery in patients after stroke. A better knowledge of factors influencing fluid and energy intake may contribute to better care of the acute stroke patient. The aim of this study was to describe hydration status, fluid intake, and energy intake of stroke patients during acute hospitalization in a stroke unit. A retrospective descriptive study was conducted in stroke patients admitted to a stroke unit. All patients received neurological evaluation and urinary osmolality (uOsm) measure at admission and discharge. During the stroke unit stay, patients received a day-by-day diary to record data about beverages, food, and therapy. Water and energy content were then reported based on a national nutritional database. Ninety-five patients were included in the analysis. uOsm did not differ between patients with or without dysphagia, with uOsm > 500 mosm/kg in 58.1% at admission and 57.8% at discharge. Inadequate fluid intake was found in 41.2% of the sample with no difference between groups. Insufficient energy intake was found in 95.6% of the sample, with no patients with dysphagia reaching the minimum suggested amount. Our results suggest that a relevant proportion of stroke patients with and without dysphagia may not improve their hydration status during the first days after admission. Diet and therapy were insufficient to achieve recommended fluid intake and energy intake in a large proportion of patients, suggesting a need for more careful monitoring of hydration and nutrition needs. abstract_id: PUBMED:28931986 A comparison of the swallowing function and quality of life by oral intake level in stroke patients with dysphagia. [Purpose] This study was conducted to compare the swallowing function and quality of life (QOL) in stroke patients with dysphagia by oral intake level. [Subjects and Methods] Seventy-nine stroke patients with dysphagia were enrolled. All subjects were evaluated by using the Swallowing Function Test (SFT) and the Swallowing Quality of Life (SWAL-QOL) questionnaire. The subjects were divided into two groups based on the SFT score: oral intake (OI) and non-oral intake (NOI) groups. The SFT and SWAL-QOL scores were then compared between the groups. [Results] There were significant differences in items in the SFT, except for the respiratory item, between the OI group and the NOI group.
There were significant differences in all items on the SWAL-QOL, except for the sleep item, between the OI group and the NOI group. [Conclusion] Oral intake was positively associated with the swallowing function and QOL in stroke patients with dysphagia. Safe intervention using oral intake treatments during dysphagia rehabilitation may have a positive effect on the patients' swallowing function and QOL. abstract_id: PUBMED:31956606 Re-initiation of Oral Food Intake Following Enteral Nutrition Alters Oral and Gut Microbiota Communities. Stroke is associated with multiple forms of disability, including dysphagia. Post-stroke dysphagia increases the risks of pneumonia and mortality and often results in cessation of oral feeding. However, appropriate rehabilitation methods can eventually lead to resumption of oral food intake. This study tried to clarify whether re-initiating oral food intake could modify the composition of oral/gut microbial communities in patients with dysphagia. From 78 patients with sub-acute stage of stroke, 11 subjects on complete tube feeding who were not taking antibiotics were enrolled and received rehabilitation for re-initiation of oral food intake, and 8 subjects were brought back to complete oral feeding. Oral and gut microbiota community profiles were evaluated using 16S rRNA sequencing of the saliva and feces samples before and after re-initiation of oral food intake in patients recovering from enteral nutrition under the same nutrient condition. Standard nutrition in the hospital was 1,840 kcal, including protein = 75 g, fat = 45 g, and carbohydrates = 280 g, for both tube and oral feeding subjects. Oral food intake increased oral and gut microbiome diversity and altered the composition of the microbiome. Oral and gut microbiome compositions were drastically different; however, the abundance of family Carnobacteriaceae and genus Granulicatella was increased in both the oral and gut microbiome after re-initiation of oral food intake. Although oral microbiota showed more significant changes than the gut microbiota, metagenome prediction revealed the presence of more differentially enriched pathways in the gut. In addition, simpler co-occurrence networks of oral and gut microbiomes, indicating improved dysbiosis of the microbiome, were observed during oral feeding as compared to tube feeding. Oral food intake affects oral and gut microbiomes in patients recovering from enteral nutrition. Rehabilitation for dysphagia can modify systemic health by increasing the diversity and altering the composition and co-occurrence network structure of oral and gut microbial communities. abstract_id: PUBMED:26267442 A Descriptive Study of the Fluid Intake, Hydration, and Health Status of Rehabilitation Inpatients without Dysphagia Following Stroke. Adequate hydration is important for all people, particularly when hospitalized with illness. Individuals with dysphagia following stroke are considered to be at risk of inadequate fluid intake and, therefore, dehydration, but there is little information about the fluid intake or hydration of individuals without dysphagia poststroke. This cohort study measured the average beverage intake, calculated the urea/creatinine ratio as a measure of hydration, and documented specific health outcomes of 86 people without dysphagia poststroke who were inpatients in rehabilitation centers. Participants drank on average 1504 ml per day (SD 359 ml), which typically represented 67% of their estimated daily requirement.
Approximately 44% of the participants in the sample were dehydrated based on a blood urea nitrogen/creatinine ratio >20:1. Sixteen percent of participants were diagnosed with one or more of the health outcomes of dehydration/hypernatremia, urinary tract infection, or constipation. A greater level of dependence was associated with poorer beverage intake and higher risk of an adverse health outcome. Those in the older/elderly age range (particularly older women) and those with poor mobility were most at risk of poor hydration. This study highlights that patients in rehabilitation facilities poststroke, even without dysphagia, may be at risk of suboptimal fluid intake and hydration. abstract_id: PUBMED:31238591 Effect of an Oral Health Programme on Oral Health, Oral Intake, and Nutrition in Patients with Stroke and Dysphagia in Taiwan: A Randomised Controlled Trial. No previous studies have evaluated an oral health programme, before swallowing therapy, in patients with stroke and dysphagia in Taiwan. This randomised controlled trial evaluated the effect of an oral health programme (i.e., sputum assessment, Bass method-based tooth brushing, and tooth coating with fluoride toothpaste) before swallowing therapy. Sixty-six patients with stroke (23 female, 43 male) in our rehabilitation ward, who underwent nasogastric tube insertion, were assigned randomly to an oral care group (n = 33) and a control group (n = 33). Demographic data, oral health assessment, Functional Oral Intake Scale (FOIS) scores, Mini-Nutritional Assessment-Short Form (MNA-SF) scores, and nasogastric tube removal rates were compared between groups. We evaluated outcomes using generalised estimating equation analysis. Three weeks post-implementation, the oral care group had significant oral health improvements relative to the control group (95% CI = -2.69 to -1.25, Wald χ2 = 29.02, p < 0.001). There was no difference in the FOIS (95% CI = -0.16 to 0.89, Wald χ2 = 1.86, p > 0.05), MNA-SF (95% CI = -0.35 to 0.53, Wald χ2 = -0.17, p > 0.05), and nasogastric tube removal (p > 0.05) between groups. The oral care group had a higher, but non-significant, FOIS score (3.94 vs 3.52) (p > 0.05). Routine oral health programmes implemented during stroke rehabilitation in patients with dysphagia may promote oral health and maintain oral intake. Answer: Adequate oral fluid intake is crucial for the recovery of hospitalized stroke patients, and viscosity does appear to matter. Studies have shown that patients with dysphagia following a stroke who are prescribed thickened liquids tend to consume less fluid compared to those receiving thin liquids. For instance, hospitalized stroke patients receiving thickened liquids consumed significantly less fluid than those receiving thin liquids, with less than 1% of patients meeting the minimum standard of 1500 mL/day, regardless of the liquid viscosity (PUBMED:22949278). Another study found that individuals with dysphagia after stroke drank a mean of 781 ml of prescribed thickened liquids per day, which was significantly less than the recommended standards of fluid intake for hospitalized adults (PUBMED:24007386). Furthermore, dysphagic stroke patients on thickened-fluid dysphagia diets failed to meet their fluid requirements, whereas those on enteral feeding and intravenous fluid regimens received ample fluid (PUBMED:11733894).
Functional deficits poststroke, such as problems with eating, memory, and problem-solving, were also found to significantly predict oral fluid intake, indicating that functional impairments can be barriers to adequate fluid consumption (PUBMED:25565596). In summary, the viscosity of liquids does matter for oral fluid intake in hospitalized stroke patients. Those receiving thickened liquids tend to have lower fluid intake, which can be influenced by various factors including the altered taste/texture of thickened beverages, functional deficits, and the frequency and accuracy of beverage offerings. Addressing these issues may help improve fluid intake and hydration status in this patient population.
Instruction: Radiolucent pancreatic lithiasis: a precursor stage for calcified pancreatic lithiasis or a new entity? Abstracts: abstract_id: PUBMED:9262980 Radiolucent pancreatic lithiasis: a precursor stage for calcified pancreatic lithiasis or a new entity? Objective: Radiolucent pancreatic lithiasis (RPL) has been identified as a different entity from calcified pancreatic lithiasis. The purpose of this study is to evaluate the frequency, characteristics and evolution of RPL. Patients: Between 1983 and 1995, 278 consecutive patients who presented with pancreatic lithiasis were studied. Forty-four patients had RPL (15.8%): 27 had pure radiolucent stones (PRS) (group 1), 5 had pure radiolucent stones combined with evenly calcified stones (ECS) (group 2), 2 had target calculi (TC) (radiolucent core with calcified shell) (group 3), 10 had TC combined with ECS (group 4). Results: Among the 27 patients with PRS, there were 19 males with a mean age of 41 years. PRS were mainly located in the head of the pancreas with a mean diameter of 5 mm (range 3-26 mm). Seven of the 27 patients with PRS (26%) were less than 20 years old (juvenile form) or more than 60 years old (senile form). They were characterized by no or low alcohol consumption and a high rate of attacks of acute pancreatitis. In group 1, PRS progressed to more advanced calcified stages in 6 of 16 patients (37%) followed for 30 to 144 months, with a prior stage of TC in 2 cases. An evolution toward more calcified stages (TC or ECS) occurred in half of the patients belonging to groups 2 and 4 within 36 to 84 months. Genetic disposition and alcohol consumption could account for the evolution toward more calcified stages. A genetic factor is suggested by a rapid evolution to evenly calcified stones in two children aged 8 and 10 years and by a high frequency of familial cases in patients belonging to groups 2 and 4 (60% and 20%) as compared to group 1 with PRS (4%). Alcohol consumption could accelerate the calcifying process since patients belonging to groups 2 and 4 had a significantly higher alcohol consumption than those with PRS (group 1). Conclusion: RPL is a heterogeneous pancreatic disease, including juvenile and senile presentations, which may represent about 15% of pancreatic lithiasis. Evolution towards calcified stages (PRS, then TC, then ECS) occurred in 37-50% of cases and could be related to a genetic factor and increased alcohol consumption. abstract_id: PUBMED:1736346 Radiolucent and calcified pancreatic lithiasis: two different diseases. Role of alcohol and heredity. One hundred and eighteen consecutive patients presenting with pancreatic calculi have been studied by means of plain films of the abdomen and endoscopic retrograde cholangiopancreatography. Patients were divided into the following groups: 1) Evenly calcified calculi, 74 patients, 66 men, aged at onset (M +/- SM) 40 +/- 10 years; daily consumption of alcohol, 157 +/- 121 g, and of tobacco, 26 +/- 15 cigarettes. Two patients presented with another case in the same family. Composition of calculi--at least 95% calcium salts and a degraded form of lithostathine S (formerly called PSP S2-5)--was similar in the Occidental alcoholic and in the tropical form: calcified calculi are probably related to alcohol and nutritional disorders. 2) Radiolucent stones, 17 patients, 12 men and 5 women, aged at onset 38 +/- 18 years. These patients had a low alcohol and tobacco consumption. There were four familial cases.
3) Radiolucent core with a calcified shell (target calculi), 27 cases, 23 men; age at onset, 38 +/- 15 years; daily ethanol consumption, 111 +/- 93 g, and tobacco, 19 +/- 15 cigarettes. There were two familial cases. It has been shown that these calculi originated from transparent calculi that later calcified. The frequency of women and of familial cases was significantly greater in groups 2 + 3 than in group 1. The consumption of alcohol and tobacco and the proportion of men were significantly lower in group 2 than in groups 1 and 3. We discuss the role of heredity in lithostathine precipitation and of alcohol in calcium precipitation. (ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:1463319 Description of a new disease: pancreatic lithiasis with radiolucent calculi 118 patients presenting with pancreatic lithiasis were consecutively observed in our service. They underwent both an endoscopic pancreatography and good plain films of the abdomen. Calculi were classified in 3 groups: 1) Radiolucent calculi (17 cases, 5 females; 4 hereditary cases) are built up of amorphous residues of lithostathine S. They are not related to either alcohol, diet or tobacco. 2) Target calculi (27 cases, 4 females; 3 hereditary cases) have a radiolucent core as in 1 and a peripheral calcification. They are a late evolutionary stage of radiolucent lithiasis. The frequency of females and of hereditary cases is significantly greater in forms 1 + 2 than in form 3. These two forms are a newly described disease without relationship to nutrition, alcohol or tobacco, but the peripheral calcification of radiolucent calculi is favoured by alcohol and tobacco. This disease could be hereditary. 3) Calcic lithiasis (74 cases, 8 females, 2 hereditary cases) is the most frequent form of pancreatic lithiasis. Its cause is nutritional. abstract_id: PUBMED:19160597 Differences of pancreatic stone morphology and content in patients with pancreatic lithiasis. Pancreatic stones from 25 patients were compared by morphological and/or radiological examination. Twenty patients, mostly alcoholics, had calcified stones. Five (four nonalcoholic women) had radiolucent stones. Aspect and consistency of calcified stones varied from compact and resistant to coralliform and brittle but were identical in the same patient. In the coralliform type, organic fibrils with a diameter up to 10 microm and a length up to a few centimeters were observed, strongly attached to mineral crystals. The lithostathine (formerly called pancreatic stone protein, PSP) content, estimated in each stone, was significantly lower in the populations with larger stone mass compared to populations with small amounts of stones. Transparent stones were built up of an amorphous material solubilized at acidic pH and corresponding to degraded forms of lithostathine-S (S for secretory). In one patient, we followed the evolution of a radiolucent calculus over seven years. We observed that the radiolucent core occurred first, and was secondarily wrapped in a calcified shell. We conclude that morphological differences observed in this study among pancreatic stones suggest that different mechanisms have been involved in their formation. Among them, lithostathine transformation into insoluble polypeptides may provide different types of protein aggregates, some of them being able to promote CaCO(3) apposition and others having no affinity for calcium.
abstract_id: PUBMED:1633557 Description of a new disease, pancreatic lithiasis with radiotransparent calculi 118 patients presenting with pancreatic lithiasis were consecutively observed in our service. They underwent both an endoscopic pancreatography and good plain films of the abdomen. Calculi were classified in 3 groups: 1) Radiolucent calculi (17 cases, 5 females; 4 hereditary cases) are built up of amorphous residues of lithostathine S. This disease is not related to either alcohol, diet or tobacco. 2) Target calculi (27 cases, 4 females; 3 hereditary cases) have a radiolucent core as in 1 and a peripheral calcification. They are a late evolutionary stage of radiolucent lithiasis. The frequency of females and of hereditary cases is significantly greater in forms 1 + 2 than in form 3. These two forms are a newly described disease without relationship to nutrition, alcohol or tobacco, but the peripheral calcification of radiolucent calculi is favoured by alcohol and tobacco. This disease could be hereditary. 3) Calcic lithiasis (74 cases, 8 females, 2 hereditary cases) is the most frequent form of pancreatic lithiasis. Its cause is nutritional. abstract_id: PUBMED:18664088 Management of pancreatic lithiasis in a 5-year-old girl. Pancreatic lithiasis causing chronic pancreatitis is a very rare entity in childhood. A five-year-old girl presenting with pancreatic lithiasis was treated successfully with a well-organized diagnostic and therapeutic algorithm. The authors emphasize that early diagnosis followed by appropriate therapeutic pancreatographic procedures instead of a primary invasive surgical approach can certainly ameliorate progressive and irreversible pancreatic damage. The clinicopathologic features of this rare entity are discussed, with emphasis on diagnosis and treatment. abstract_id: PUBMED:8576799 Radiolucent pancreatic lithiasis. N/A abstract_id: PUBMED:32928282 Pancreatic ascites managed with a conservative approach: a case report. Background: Pancreatic ascites refers to the massive accumulation of pancreatic fluid in the peritoneal cavity and is a rare entity. Chronic alcoholic pancreatitis is the most common cause. Ascites is commonly seen in patients with alcoholic liver disease and is usually a consequence of portal hypertension. Biliary pancreatitis, pancreatic trauma and cystic duplications of biliopancreatic ducts, ampullary stenosis, or ductal lithiasis are the remaining causes. Case Presentation: A 53-year-old Chhetri man, a chronic alcoholic, presented with epigastric pain and abdominal distension. He had made several previous visits to a local hospital within the past 6 months for a similar presentation. Serum alkaline phosphatase 248 IU/L, serum amylase 1301 IU/L, and lipase 1311 IU/L were elevated while serum calcium was decreased (1.5 mmol/l). Ascitic fluid amylase was elevated (2801 IU/L). A computed tomography scan of his abdomen revealed features suggestive of acute-on-chronic pancreatitis. The case was managed with a conservative approach withholding oral feedings, starting total parenteral nutrition, paracentesis, octreotide, and pigtail drainage. Conclusion: Pancreatic ascites is a rare entity. Diagnosis is suspected with raised ascitic fluid amylase in the presence of pancreatic disease. Such cases can be managed by a conservative or an interventional approach. We managed this case through a conservative approach.
abstract_id: PUBMED:30022925 Metabolic Pancreatitis: Pancreatic Steatosis, Hypertriglyceridemia, and Associated Chronic Pancreatitis in 3 Patients with Metabolic Syndrome. Obesity, insulin resistance, and metabolic syndrome continue to increase in prevalence. Hypertriglyceridemia is commonly associated and represents a valuable marker of metabolic syndrome. An increase in subcutaneous fat deposition places patients at risk for visceral adipose deposition in sites such as the liver, heart, and pancreas. Pancreatic steatosis in the setting of metabolic syndrome is a rapidly emerging entity whose clinical spectrum remains to be defined. Hypertriglyceridemia is an accepted cause of acute pancreatitis but its role in chronic pancreatic injury remains to be explored. We present 3 patients with chronic abdominal pain and pancreatic steatosis in the setting of underlying metabolic syndrome with hypertriglyceridemia. These cases were identified in one endoscopic ultrasonographer's practice over a 12-month period. Each patient had documented hypertriglyceridemia but no history of acute hypertriglyceride-induced pancreatitis. A history of significant alcohol exposure was carefully excluded. Each patient underwent endoscopic ultrasonography (EUS), which proved critical in delineating the spectrum of chronic pancreatic injury. Each of our patients had EUS documentation of pancreatic steatosis and sufficient criteria to establish a diagnosis of chronic pancreatitis. Intraductal pancreatic calculi were identified in all 3 patients. Our series suggests that in the setting of metabolic syndrome, chronic hypertriglyceridemia and pancreatic steatosis may be associated with chronic pancreatitis. We hypothesize that hypertriglyceridemia may provide a pathogenic role in the development of chronic pancreatic microinjury. In addition, each of our patients had EUS-documented pancreatic ductal lithiasis. To our knowledge, these are novel findings which have yet to be reported. We believe that with an enhanced awareness, it is likely that the entity of metabolic syndrome with features of pancreatic steatosis and hypertriglyceridemia, with their associated manifestations of chronic pancreatitis, including ductal lithiasis, will be widely appreciated. abstract_id: PUBMED:2933679 Chronic calcifying pancreatitis, pancreatic calculi. New data There are two different forms of chronic pancreatitis: one is obstructive pancreatitis, which results from a pre-existing obstacle (usually a tumour or a scar), and the other, much more frequent, is chronic calcifying pancreatitis, which seems to begin with the formation of precipitates in acini and ducts, later transformed into stones and calcifications made up of calcium carbonate, and is therefore a pancreatic lithiasis. Since the pancreatic juice is supersaturated in calcium carbonate, the presence of an inhibitor of crystallization must be postulated. This has now been identified as a 13,500-dalton protein: the pancreatic stone protein secreted by the acinar cells. This protein is decreased in chronic calcifying pancreatitis irrespective of its origin (alcoholic, hereditary, hypercalcaemic, tropical, idiopathic), although its reduction is unrelated to any of these aetiological factors. Chronic alcohol consumption may encourage calcium stone formation, possibly by disturbing the cholinergic regulation of pancreatic secretion, with a decrease in citrate secretion (citrate is a chelator of calcium) and an increase in enzyme secretion.
The diagnostic and therapeutic implications of these findings are already obvious. Answer: Radiolucent pancreatic lithiasis (RPL) has been identified as a distinct entity from calcified pancreatic lithiasis, with studies suggesting that it may not simply be a precursor stage but rather a separate condition with its own characteristics and evolution. The frequency, characteristics, and evolution of RPL were evaluated in a study of 278 consecutive patients with pancreatic lithiasis, where 44 patients had RPL (15.8%). These patients were divided into groups based on the type of stones they had, including pure radiolucent stones (PRS), radiolucent stones combined with evenly calcified stones (ECS), and target calculi (TC) which have a radiolucent core with a calcified shell (PUBMED:9262980). The study found that PRS were mainly located in the head of the pancreas and that a subset of patients with PRS evolved to more advanced calcified stages, with some developing TC or ECS over time. The evolution towards calcified stages occurred in 37-50% of cases and could be related to genetic factors and increased alcohol consumption. A genetic predisposition was suggested by the rapid evolution to evenly calcified stones in two young children and by a high frequency of familial cases in certain groups. Additionally, patients with higher alcohol consumption were more likely to have an accelerated calcifying process (PUBMED:9262980). Other studies have also indicated that radiolucent and calcified pancreatic lithiasis may represent two different diseases, with alcohol and heredity playing roles in their development. Radiolucent stones were found to be unrelated to alcohol, diet, or tobacco, and there was a higher frequency of familial cases among patients with radiolucent stones or target calculi. In contrast, calcified stones were related to alcohol and nutritional disorders (PUBMED:1736346, PUBMED:1463319). Furthermore, the morphological differences observed among pancreatic stones suggest that different mechanisms are involved in their formation, with lithostathine transformation into insoluble polypeptides providing different types of protein aggregates, some promoting calcium apposition and others having no affinity for calcium (PUBMED:19160597). In conclusion, while RPL can evolve into calcified stages, the evidence suggests that it is a heterogeneous pancreatic disease with its own juvenile and senile presentations, and it may represent about 15% of pancreatic lithiasis. The evolution towards calcification is not inevitable and may be influenced by genetic factors and alcohol consumption. Therefore, RPL should be considered a new entity rather than merely a precursor stage for calcified pancreatic lithiasis (PUBMED:9262980). The distinction between radiolucent and calcified pancreatic lithiasis is further supported by the observation that radiolucent stones are built up of amorphous residues of lithostathine S and are not related to alcohol, diet, or tobacco, whereas calcified stones are associated with nutritional causes. The presence of a radiolucent core in target calculi indicates that they are a late evolutionary stage of radiolucent lithiasis, and the peripheral calcification of these calculi is favored by alcohol and tobacco (PUBMED:1633557). Additionally, the role of heredity in the formation of radiolucent stones is highlighted by the significant number of familial cases, suggesting a potential genetic predisposition to this form of pancreatic lithiasis (PUBMED:1463319). 
In summary, radiolucent pancreatic lithiasis is recognized as a distinct clinical entity with its own unique characteristics and potential for evolution into calcified stages. Its development and progression are influenced by a combination of genetic factors and lifestyle choices, such as alcohol consumption, rather than being a simple precursor to calcified pancreatic lithiasis.
Instruction: Multidimensional approach to teaching anatomy-Do gender and learning style matter? Abstracts: abstract_id: PUBMED:27033302 Multidimensional approach to teaching anatomy-Do gender and learning style matter? Background: The aim of this study was to assess the impact of two teaching interventions (ultrasound and arthroscopy) in a peer teaching (PT) environment on anatomy examination scores and also to examine the influence of gender and learning style on these scores. Methods: We randomly assigned 484 second-year medical students to one of three groups: musculoskeletal ultrasound (MSUS), arthroscopy (ASC) and control (CON). The MSUS- and the ASC-group attended two additional training sessions in ultrasound or arthroscopy; the CON-group received no additional lessons. Students were asked to complete Kolb's Learning Style Inventory test. We assessed differences in anatomical knowledge (multiple-choice (MC) exam) and subjective evaluation with respect to gender and learning style. Results: There were no relevant differences between the three groups regarding the MC exam. Acceptance of the peer teaching concept was good. All students preferred ultrasound to arthroscopy and thought that they learned more from ultrasound despite the fact that they rated the instructors as less competent and needed more time to gain in-depth knowledge. There was no significant effect of gender on evaluation results. Arthroscopy was best enjoyed by accommodators according to Kolb's Inventory and least by divergers, who found that they had learned a lot through ultrasound. The improvement in spatial conceptualization was greatest for accommodators and worst for assimilators. Conclusion: Gender and learning style had no impact on quantitative parameters. Qualitative analysis, however, revealed differences by learning style, and further evaluation is warranted to assess the impact on medical education. abstract_id: PUBMED:38192541 Teaching Style Preferences and Online Teaching Practices of Optometry Teachers During COVID-19: A Preliminary Study. Purpose: Although research on teaching style preferences is available in the literature, there is limited information on the preferences of optometry teachers. The Coronavirus disease 2019 pandemic has impacted educational practices in all sectors. This study focuses on the teaching style preferences and online teaching practices of optometry teachers during the COVID-19 pandemic. Patients And Methods: The study used a case study research design and a self-administered online questionnaire and follow-up semi-structured interview for data collection. The Grasha-Riechmann teaching style inventory was used to explore the teaching style preferences (Expert, Formal Authority, Demonstrator, Facilitator and Delegator). Quantitative and qualitative data were analyzed using descriptive statistics and thematic content analysis, respectively. Results: The optometry teachers were aged between 29 and 51 years (median = 45 years) and had between 3 and 25 years of teaching experience (median = 5 years). All teachers scored high in the Expert teaching style and most had scores categorized as high for the Delegator, Facilitator and Formal Authority teaching styles. The teachers reported that the initial adaptation to online teaching was difficult and challenging as they were not prepared for the rapid transition to online learning. They used various resources and materials but felt that student participation and engagement were lower in online teaching.
Teachers noted that more time was needed to prepare materials for online teaching than for face-to-face teaching. Conclusion: Optometry teachers adopted a multidimensional approach, preferring the different teaching styles to varying degrees. Teachers noted challenges with the initial transition and adaptation to online teaching and used a combination of resources to support student learning. abstract_id: PUBMED:29619386 Gender preference between traditional and PowerPoint methods of teaching gross anatomy. Introduction: The teaching and learning process is increasingly metamorphosing from traditional chalk and talk to the modern dynamism of information and communication technology. Medical education is no exception to this dynamism, especially in the teaching of gross anatomy, which serves as one of the bases for understanding human structure. Objective: This study was conducted to determine the gender preference of preclinical medical students on the use of traditional (chalk and talk) and PowerPoint presentation in the teaching of gross anatomy. Subjects And Methods: This was a cross-sectional, prospective study conducted among preclinical medical students at the University of Maiduguri, Nigeria. Using simple random sampling, a questionnaire was circulated among 280 medical students, of whom 247 filled in the questionnaire appropriately. The data obtained were analyzed using SPSS version 20 (IBM Corporation, Armonk, NY, USA) to determine, among other things, the method preferred by the students. Results: The majority of the preclinical medical students at the University of Maiduguri preferred the PowerPoint method of teaching gross anatomy over the conventional method. A Cronbach alpha value of 0.76 was obtained, which is an acceptable level of internal consistency. A statistically significant association was found between gender and preferred method of lecture delivery with respect to the clarity of lecture content: females preferred the conventional method of lecture delivery, whereas males preferred the PowerPoint method. For the reproducibility of text and diagrams, females preferred the PowerPoint method of teaching gross anatomy, while males preferred the conventional method. Conclusion: There are gender preferences with regard to the clarity of lecture contents and the reproducibility of text and diagrams. This study also revealed that the majority of the preclinical medical students at the University of Maiduguri preferred PowerPoint presentation over the traditional chalk and talk method on most of the questions asked.
Students having the same learning styles can work together on group assignments. •By grouping students into clusters, we find outlier students whose learning styles differ from the rest, which may allow instructors to deal with them accordingly. abstract_id: PUBMED:29065318 The effect of teaching based on dominant learning style on nursing students' academic achievement. The recognition of learning styles and teaching based on that recognition will help lecturers use suitable methods of teaching. The aim of this study was to evaluate the effect of education based on dominant learning styles on the academic achievement of nursing students. The population of this quasi-experimental research consisted of 40 third-semester nursing students. The data were collected by using Kolb's Learning Style questionnaire. To determine the dominant learning style of the students, the researchers had them take a pre-test; then, based on the dominant learning style, the students were taught through group discussion. A formative exam and a summative exam were taken. The most and least preferred learning styles of the participants were the divergent style and the assimilative style, respectively. Education based on learning styles, particularly for college students, can not only enhance students' academic achievement and teachers' professional satisfaction, but can also help with training professional nurses. abstract_id: PUBMED:26330041 Learning style-based teaching harvests a superior comprehension of respiratory physiology. Students entering medical college generally show vast diversity in their school education. It becomes the responsibility of teachers to motivate students and meet the needs of this diversity. One such measure is teaching students in their own preferred learning style. The present study aimed to incorporate a learning style-based teaching-learning program for medical students and to reveal its significance and utility. Learning styles of students were assessed online using the visual-auditory-kinesthetic (VAK) learning style self-assessment questionnaire. When respiratory physiology was taught, students were divided into three groups, namely, visual (n = 34), auditory (n = 44), and kinesthetic (n = 28), based on their learning style. A fourth group (the traditional group; n = 40) was formed by choosing students randomly from the above three groups. Visual, auditory, and kinesthetic groups were taught following the appropriate teaching-learning strategies. The traditional group was taught via the routine didactic lecture method. The effectiveness of this intervention was evaluated by a pretest and two posttests, posttest 1 immediately after the intervention and posttest 2 after a month. In posttest 1, one-way ANOVA showed a significant statistical difference (P = 0.005). Post hoc analysis showed significance between the kinesthetic group and the traditional group (P = 0.002). One-way ANOVA showed a significant difference in posttest 2 scores (P < 0.0001). Post hoc analysis showed significance between the three learning style-based groups compared with the traditional group [visual vs. traditional groups (P = 0.002), auditory vs. traditional groups (P = 0.03), and kinesthetic vs. traditional groups (P = 0.001)]. This study emphasizes that teaching methods tailored to students' style of learning definitely improve their understanding, performance, and retrieval of the subject. abstract_id: PUBMED:21656915 Use of the one-minute preceptor as a teaching tool in the gross anatomy laboratory.
The one-minute preceptor (OMP) is a time-efficient technique used for teaching in busy clinical settings. It consists of five microskills: (1) get a commitment from the student, (2) probe for supporting evidence, (3) reinforce what was done right, (4) correct errors and fill in omissions, and (5) teach a general rule. It can also be used to address structure-identification questions in gross anatomy laboratory small-group settings. The OMP is an active learner-centered teaching approach that prepares students for a style of learning that they need to master in clinical settings, provides novice anatomy teachers with an efficient and effective teaching strategy, and moves anatomy learning beyond mere name tagging to active knowledge construction. abstract_id: PUBMED:25013843 Simultaneous anatomical sketching as learning by doing method of teaching human anatomy. Objective: Learning by lecture is a passive experience. Many innovative techniques have been presented to stimulate students to assume a more active attitude toward learning. In this study, simultaneous sketch drawing, as an interactive learning technique, was applied to teach anatomy to medical students. Materials And Methods: We reconstructed a fun, interactive model of teaching anatomy as simultaneous anatomic sketching. To test the model's instructional effectiveness, we conducted a quasi-experimental study; the students were then asked to write about their learning experiences in their portfolios, and their views were evaluated by a questionnaire. Results: The portfolio evaluation revealed that students believed this method leads to deep learning and a better understanding of anatomical subjects. Evaluation of the students' views on this teaching approach showed that more than 80% of the students agreed or completely agreed with the statement that learning anatomy concepts is easier and the class is less boring with this method. More than 60% of the students agreed or completely agreed to sketch anatomical figures simultaneously with the professor. They also found that sketching makes anatomy more attractive and reduces the time needed to learn anatomy. Similar numbers of students agreed or completely agreed that the method helped them learn anatomical concepts in the anatomy laboratory. More than 80% of the students found simultaneous sketching to be a good method for learning anatomy overall. Conclusion: Sketch drawing, as an interactive learning technique, is an attractive way for students to learn anatomy. abstract_id: PUBMED:27681711 Adaptation of the Grasha Riechman Student Learning Style Survey and Teaching Style Inventory to assess individual teaching and learning styles in a quality improvement collaborative. Background: NIATx200, a quality improvement collaborative, involved 201 substance abuse clinics. Each clinic was randomized to one of four implementation strategies: (a) interest circle calls, (b) learning sessions, (c) coach only or (d) a combination of all three. Each strategy was led by NIATx200 coaches who provided direct coaching or facilitated the interest circle and learning session interventions. Methods: Eligibility was limited to NIATx200 coaches (N = 18) and the executive sponsor/change leader of participating clinics (N = 389). Participants were invited to complete a modified Grasha Riechmann Student Learning Style Survey and Teaching Style Inventory. Principal components analysis determined participants' preferred learning and teaching styles.
Results: Responses were received from 17 (94.4%) of the coaches. Seventy-two individuals were excluded from the initial sample of change leaders and executive sponsors (N = 389). Responses were received from 80 persons (25.2%) of the contactable individuals. Six learning profiles for the executive sponsors and change leaders were identified: Collaborative/Competitive (N = 28, 36.4%); Collaborative/Participatory (N = 19, 24.7%); Collaborative only (N = 17, 22.1%); Collaborative/Dependent (N = 6, 7.8%); Independent (N = 3, 5.2%); and Avoidant/Dependent (N = 3, 3.9%). NIATx200 coaches relied primarily on one of four coaching profiles: Facilitator (N = 7, 41.2%), Facilitator/Delegator (N = 6, 35.3%), Facilitator/Personal Model (N = 3, 17.6%) and Delegator (N = 1, 5.9%). Coaches also supported their primary coaching profiles with one of eight different secondary coaching profiles. Conclusions: The study is one of the first to assess teaching and learning styles within a QIC. Results indicate that individual learners (change leaders and executive sponsors) and coaches utilize multiple approaches in the teaching and practice-based learning of quality improvement (QI) processes. Identification of teaching profiles could be used to tailor the collaborative structure and content delivery. Efforts to accommodate learning styles would facilitate knowledge acquisition, enhancing the effectiveness of a QI collaborative to improve organizational processes and outcomes. Trial Registration: ClinicalTrials.gov Identifier: NCT00934141 Registered July 6, 2009. Retrospectively registered. abstract_id: PUBMED:36266705 The blacksmith approach: a strategy for teaching and learning in the medical anatomy course (a qualitative study). Background: Anatomy is a symbolic, essential core topic and one of the fundamental pillars of medical and paramedical knowledge. Nevertheless, few exploratory data analyses have focused on how students approach learning anatomy. This study examined how students perceive their learning experience during anatomy lessons and how to make a model which promotes their meaningful learning and professional identity. Methods: Using purposive sampling with maximum variation, we conducted a qualitative content analysis at the Shiraz University of Medical Sciences in Iran (2020 to 2021). Twenty-four medical students and twelve faculty members of Iran's medical science universities were enrolled in the study. The data were collected through semi-structured interviews and analyzed by theme. Results: A conceptual model emerged from the data analysis with the main theme called the blacksmith approach, which included three sub-themes: (1) making a new forge (adequate preparation and mindful beginning), (2) heating the students' hearts (considering the supporting systems that learners need), and (3) using the sledgehammer approach (teaching anatomy by using more active methods, engaging all neuroanatomical regions, and using fun for enjoyable learning). All the concepts were related to each other. Conclusion: Medical students experience a challenging fundamental evolution into professional doctors. Educational systems focus primarily on teaching and learning, while students' transition can be facilitated by a three-step model called the Blacksmith Approach. It best serves as an educational framework for any pivotal, preclinical course capable of helping students acquire new roles and tackle challenges.
Further research should be conducted to confirm how hard work leads to satisfying results with the opportunity to create enjoyable learning. Answer: The multidimensional approach to teaching anatomy considers various teaching interventions and styles to accommodate different learning preferences among students. The impact of gender and learning style on anatomy examination scores and subjective evaluations has been explored in several studies. One study found that gender and learning style had no impact on quantitative parameters such as multiple-choice exam scores when assessing the effectiveness of teaching interventions like ultrasound and arthroscopy in a peer teaching environment. However, qualitative analysis revealed differences in learning styles, with arthroscopy being best enjoyed by accommodators and least by divergers according to Kolb's Learning Style Inventory. Divergers found that they learned a lot through ultrasound, and the improvement in spatial conceptualization was greatest for accommodators and worst for assimilators (PUBMED:27033302). Another study on optometry teachers during the COVID-19 pandemic showed that teachers adopted a multidimensional approach, preferring different teaching styles to varying degrees. They faced challenges with the initial transition to online teaching and used a combination of resources to support student learning (PUBMED:38192541). A study on the gender preference between traditional and PowerPoint methods of teaching gross anatomy revealed that there are gender preferences with regard to the clarity of lecture contents and reproducibility of text and diagram, with females preferring the conventional method for clarity and PowerPoint for reproducibility, while males showed the opposite preference (PUBMED:29619386). In summary, while gender and learning style may not significantly impact quantitative measures of learning anatomy, they do influence qualitative aspects and preferences for different teaching methods. A multidimensional approach that considers these factors can enhance the learning experience and accommodate the diverse needs of students (PUBMED:27033302; PUBMED:38192541; PUBMED:29619386).
Instruction: Does the severity of mood and anxiety symptoms predict health care utilization? Abstracts: abstract_id: PUBMED:12224674 Does the severity of mood and anxiety symptoms predict health care utilization? Background: Traditional diagnostic criteria for depression and anxiety fail to account for symptom severity. We previously evaluated a severity-based classification system of mood and anxiety symptoms. This study examines whether those severity groups are predictive of differences in health care utilization. Methods: We used a cohort design to compare the health care utilization of 1232 subjects classified into 4 groups according to symptom severity. Health care billing data were evaluated for each subject for a 15-month period around the index visit. Multiple linear regression models were used to examine relative contributions of individual variables to differences in health care utilization. Analysis of variance procedures were used to compare charges among the severity groups after adjusting for demographic and medical comorbidity variables. Results: After adjustment, significant differences in health care utilization between groups were seen in all but 3 of the 15 months studied. Also, after adjustment, the presence of a mood or anxiety disorder influenced utilization for only a 6-month period. At 9 to 12 months, subjects in the high-severity group showed a more than twofold difference in adjusted charges compared with the low-severity group ($225.36 vs $94.37). Conclusions: Our severity-based classification predicts statistically and clinically significant differences in health care utilization over most of a 15-month period. Differences in utilization persist even after adjustment for medical comorbidity and significant demographic covariates. Our work lends additional evidence that beyond screening for the presence of mood and anxiety disorders, it is important to assess symptom severity in primary care patients. Further study directed toward developing effective methods of identifying patients with high levels of mood and anxiety symptom severity could result in significant cost savings. abstract_id: PUBMED:10050640 Investigation of a severity-based classification of mood and anxiety symptoms in primary care patients. Background: Current Diagnostic and Statistical Manual of Mental Disorders (DSM) classifications describe spectrums of symptoms that define mood and anxiety disorders. These DSM classifications have been applied to primary care populations to establish the frequency of these disorders in primary care. DSM classifications, however, might not adequately describe the underlying or natural groupings of mood and anxiety symptoms in primary care. This study explores common clusters of mood and anxiety symptoms and their severity while exploring the degree of cluster congruency with current DSM classification schemes. We also evaluate how well the groupings derived from these different classifying methods explain differences in patients' health-related quality of life. Methods: Study design was cross-sectional, using a sample of 1333 adult primary care patients attending a university-based family medicine clinic. We applied cluster analysis to responses on a 15-item instrument measuring symptoms of mood and anxiety and their severity. We used the PRIME-MD to determine the presence of DSM-III-R disorders. The SF-36 Health Survey was used to assess health-related quality of life. 
Results: Cluster analysis produced four groups of patients different from groupings based on the DSM. These four groups differed from each other on sociodemographic indicators, health-related quality of life, and frequency of DSM disorders. In three of the four clusters, cluster membership was associated with a clinically significant and progressive decrease in mental and physical health functioning as measured by the SF-36 Health Survey. This decline was independent of the presence of a DSM diagnosis. Conclusions: A primary care classification scheme for mood and anxiety symptoms that includes severity appears to provide more useful information than traditional DSM classifications of disorders. abstract_id: PUBMED:33045925 Effect of mood and anxiety disorders on health care utilization in multiple sclerosis. Background: Little is known about the effects of changes in the presence or absence of psychiatric disorders on health care utilization in multiple sclerosis (MS). Objective: To evaluate the association between "active" mood and anxiety disorders (MAD) and health care utilization in MS. Methods: Using administrative data from Manitoba, Canada, we identified 4748 persons with MS and 24,154 persons without MS matched on sex, birth year, and region. Using multivariable general linear models, we evaluated the within-person and between-person effects of any "active" MAD on annual physician visits, hospital days, and number of drug classes dispensed in the following year. Results: Annually, the MS cohort had an additional two physician visits, two drug classes, and nearly two more hospital days versus the matched cohort. Individuals with any MAD had more physician visits, more hospital days, and used more drug classes than individuals without a MAD. Within individuals, having an "active" MAD was associated with more utilization for all outcomes than not having an "active" MAD, but the magnitude of this effect was much smaller for visits and drugs than the between-person effect. Conclusion: Within individuals with MS, changes in MAD activity are associated with changes in health services use. abstract_id: PUBMED:29963890 Comorbid mood and anxiety disorders and severity of posttraumatic stress disorder symptoms in treatment-seeking veterans. Objective: Comorbidity is the rule and not the exception among veterans with posttraumatic stress disorder (PTSD). Examining comorbidities in a veteran population allows us to better understand veterans' symptoms and recognize when mental health treatment may need to be tailored to other co-occurring issues. This article evaluates comorbid mood and anxiety disorders and PTSD symptom severity in a large sample of veterans from multiple eras of service, including the recent wars in Iraq and Afghanistan. Method: The current study used data from veterans who sought treatment for PTSD at a VA PTSD Clinical Team from 2005 to 2013. Veterans were assessed for PTSD, mood, and anxiety disorders using a structured clinical interview and completed self-report symptom measures as part of the PTSD clinic intake procedure. A total of 2,460 veterans were evaluated, and 867 met diagnostic criteria for PTSD. Results: Veterans with PTSD were significantly more likely than those without PTSD to be diagnosed with social anxiety disorder and obsessive-compulsive disorder, but significantly less likely to be diagnosed with depression.
In addition, veterans who had at least one comorbid diagnosis in addition to PTSD reported significantly higher PTSD symptom severity than veterans with PTSD alone. PTSD symptom severity also varied by era of service. Conclusion: These results suggest that among veterans seeking treatment for PTSD, comorbid mood and anxiety disorders may be associated with greater severity of PTSD symptoms. Future work is needed to determine the impact of specific comorbidities on trauma-focused treatment outcomes. abstract_id: PUBMED:29057450 The relationship between physical and psychological symptoms and health care utilization in hospitalized patients with advanced cancer. Background: Patients with advanced cancer often experience frequent and prolonged hospitalizations; however, the factors associated with greater health care utilization have not been described. We sought to investigate the relation between patients' physical and psychological symptom burden and health care utilization. Methods: We enrolled patients with advanced cancer and unplanned hospitalizations from September 2014-May 2016. Upon admission, we assessed physical (Edmonton Symptom Assessment System [ESAS]) and psychological symptoms (Patient Health Questionnaire 4 [PHQ-4]). We examined the relationship between symptom burden and healthcare utilization using linear regression for hospital length of stay (LOS) and Cox regression for time to first unplanned readmission within 90 days. We adjusted all models for age, sex, marital status, comorbidity, education, time since advanced cancer diagnosis, and cancer type. Results: We enrolled 1,036 of 1,152 (89.9%) consecutive patients approached. Over one-half reported moderate/severe fatigue, poor well-being, drowsiness, pain, and lack of appetite. PHQ-4 scores indicated that 28.8% and 28.0% of patients had depression and anxiety symptoms, respectively. The mean hospital LOS was 6.3 days, and the 90-day readmission rate was 43.1%. Physical symptoms (ESAS: unstandardized coefficient [B], 0.06; P < .001), psychological distress (PHQ-4 total: B, 0.11; P = .040), and depression symptoms (PHQ-4 depression: B, 0.22; P = .017) were associated with longer hospital LOS. Physical (ESAS: hazard ratio, 1.01; P < .001) and anxiety symptoms (PHQ-4 anxiety: hazard ratio, 1.06; P = .045) were associated with a higher likelihood for readmission. Conclusions: Hospitalized patients with advanced cancer experience a high symptom burden, which is significantly associated with prolonged hospitalizations and readmissions. Interventions are needed to address the symptom burden of this population to improve health care delivery and utilization. abstract_id: PUBMED:34003175 Abdominal Pain and Depression, Not Bowel Habits, Predict Health Care Utilization in Patients With Functional Bowel Disorders. Introduction: Patients with disorders of gut-brain interaction (DGBIs) are high users of health care. Past studies exploring predictors of utilization have lacked patient-level clinical data. The aim of the current study is to identify demographic, clinical, and psychological predictors of health care utilization in patients with irritable bowel syndrome (IBS), functional constipation (FC), and functional diarrhea (FDr).
Methods: Consecutive new patients diagnosed with IBS, FC, and FDr (using Rome IV criteria) completed questionnaires assessing health care utilization as well as clinical and psychological symptoms. Health care utilization was assessed using a 13-item measure inquiring about the previous 6 months. The Patient-Reported Outcomes Measurement Information System (PROMIS) was used to assess severity of abdominal pain, constipation, diarrhea, anxiety, depression, and sleep disturbance. Results: Of the 507 patients diagnosed with IBS, FC, or FDr, 434 completed the health care utilization questionnaire (mean age of 44 years, 79.5% female, and 73.5% IBS). In the final multivariable models, more severe abdominal pain and higher depression scores were significantly associated with increased utilization of (i) total outpatient visits, (ii) outpatient visits for gastrointestinal (GI) symptoms, and (iii) number of medications for GI symptoms. More severe abdominal pain was also significantly predictive of GI-related emergency department visits. Altered bowel habits were not consistent predictors of health care utilization. Discussion: Severity of abdominal pain and depressive symptoms, but not bowel habits, is a primary driver of increased care-seeking behavior in patients with IBS, FC, and FDr. abstract_id: PUBMED:27638971 Pain severity and emotion dysregulation among Latinos in a community health care setting: relations to mental health. Background: Although pain severity is often related to poorer mental health and is one of the most common presenting complaints in community health care settings, there is little understanding of the pain experience in relation to anxiety/depressive symptoms and disorders among Latino populations in medical contexts. Method: To address this gap, the current study explored an interactive model of pain severity and emotion dysregulation in relation to anxiety/depressive symptoms and psychopathology among 274 Latinos who attended a community-based primary health care clinic [86.9% female; mean age = 39.3 years (SD = 11.2); 96.0% indicated Spanish as their first language]. Results: Results indicated a statistically significant interaction between pain severity and emotion dysregulation for suicidal symptoms, social anxiety symptoms and number of mood/anxiety disorders, such that more severe pain and greater levels of emotion dysregulation related to poorer mental health. Both pain severity and emotion dysregulation were significant predictors of depressive symptoms, but only pain severity was a significant predictor of anxious arousal symptoms. Conclusions: These novel findings suggest a clinically significant interplay between pain severity and emotion dysregulation among Latinos in community health care settings. The results are discussed in relation to the need for new screening and intervention tactics that address interrelations between pain severity and emotional dysregulation among Latinos seeking treatment in community health-care-based settings.
Nativity-based disparities in treatment are particularly notable among people from African and Hispanic origins, while there is little evidence of disparities among people from European origins. Of three dimensions of acculturation, only the identity dimension has a positive association with mental health care utilization. abstract_id: PUBMED:25077847 Impact of mindfulness-based cognitive therapy on health care utilization: a population-based controlled comparison. Objective: Elevated rates of mood and anxiety disorders among high utilizers of health care have been suggested as one driver of increased service use. We compared the impact of Mindfulness Based Cognitive Therapy (MBCT), a structured group treatment, on the rates of health care utilization with matched control participants receiving non-MBCT group therapy. Methods: Using Ontario health administrative data, we created a retrospective cohort of population-based patients receiving MBCT and an age- and gender-matched (3:1) cohort of non-MBCT group therapy controls. Subjects were recruited between 2003 and 2010 and stratified according to high/low rates of primary care utilization, with the high utilization cohort being the cohort of interest. The primary outcome was a reduction in an aggregate measure of non-mental health utilization comprising Emergency Department, non-mental health primary care, and non-psychiatrist specialist visits. Results: There were 10,633 MBCT recipients, 4851 (46%) of whom were high utilizers. The proportion of high utilizers was 13,274 (45%, N=29,795) for non-MBCT group therapy controls. Among high utilizers, there was a significant reduction in non-mental health utilization among MBCT recipients compared to non-MBCT group therapy recipients (0.55 (0.21-0.89)) suggesting that for every two MBCT patients treated, there is a reduction in 1 non-mental health visit. Conclusion: Among high utilizers of primary care, MBCT reduced non-mental health care utilization 1 year post-therapy compared to non-MBCT group therapy controls. The reductions suggest that MBCT, an established treatment modality for a variety of mental illnesses, has the added benefit of reducing distress-related high health care utilization. abstract_id: PUBMED:8690701 Generalized anxiety and mixed anxiety-depression: association with disability and health care utilization. Generalized anxiety and mixed anxiety-depression have received less attention than the major mood and anxiety disorders with respect to their possible effects in increasing disability and health care utilization. A review of recent studies, however, indicates that these conditions are prevalent in primary care medical settings and are associated with significant social and occupational disability. Generalized anxiety disorder is also one of the most common diagnoses seen in patients presenting with medically unexplained somatic complaints such as chest pain, irritable bowel symptoms, and hyperventilation and in patients prone to overutilize health care services in general. It is poorly recognized by primary care physicians, possibly due to its chronicity, which may limit the ability of symptoms to "stand out" and be easily detected.
However, it is disproportionately present in "high utilizer" samples found to be particularly "frustrating" to their physicians and is accompanied by a high rate of personality disorders, suggesting that maladaptive personality traits and styles of interaction in such patients may also contribute to underrecognition of symptoms by primary care physicians. These preliminary associations between generalized anxiety disorder/mixed anxiety-depression and both disability and increased health care utilization need to be confirmed with carefully designed and controlled studies. Answer: Yes, the severity of mood and anxiety symptoms does predict health care utilization. Studies have shown that there are statistically and clinically significant differences in health care utilization based on the severity of mood and anxiety symptoms. For instance, one study found that after adjusting for demographic and medical comorbidity variables, there were significant differences in health care utilization between groups with different severity levels of mood and anxiety symptoms over a 15-month period. Specifically, subjects in the high-severity group showed a more than twofold difference in adjusted charges compared with the low-severity group (PUBMED:12224674). Another study that investigated a severity-based classification of mood and anxiety symptoms in primary care patients found that cluster membership was associated with a clinically significant and progressive decrease in mental and physical health functioning as measured by the SF-36 Health Survey, independent of the presence of a DSM diagnosis. This suggests that a classification scheme that includes symptom severity provides more useful information than traditional DSM classifications of disorders (PUBMED:10050640). Furthermore, research on patients with multiple sclerosis (MS) indicated that changes in "active" mood and anxiety disorders (MAD) are associated with changes in health services use, with individuals having an "active" MAD experiencing more physician visits, hospital days, and using more drug classes than individuals without a MAD (PUBMED:33045925). In the context of advanced cancer, physical symptoms, psychological distress, and depression symptoms were associated with longer hospital length of stay, while physical and anxiety symptoms were associated with a higher likelihood for readmission (PUBMED:29057450). Additionally, in patients with functional bowel disorders, more severe abdominal pain and higher depression scores were significantly associated with increased utilization of outpatient visits for gastrointestinal symptoms and the number of medications for gastrointestinal symptoms (PUBMED:34003175). These findings are consistent across various studies and populations, indicating that the severity of mood and anxiety symptoms is a predictor of health care utilization (PUBMED:27638971, PUBMED:28845455, PUBMED:25077847, PUBMED:8690701).
Instruction: Is hepatic portal venous gas the writing on the wall? Abstracts: abstract_id: PUBMED:24696625 A case of hepatic portal venous gas due to viral gastroenteritis. Introduction: Hepatic portal venous gas is the presence of gas within the portal venous system that can result from a variety of insults to the gastrointestinal system, and which may be a diagnostic clue to ominous abdominal pathologies, such as acute bowel ischemia or necrosis. Case: We report a case of a 71-year-old man with acute onset abdominal pain whose initial radiologic testing showed extensive portal venous gas. The patient was managed conservatively with resolution of portal venous gas on day 2 of hospitalization. The findings were consistent with a final diagnosis of hepatic portal venous gas caused by viral gastroenteritis. Conclusion: History and physical examination along with presenting signs and symptoms, in conjunction with imaging studies, are vital to delineate the cause of hepatic portal venous gas. In the absence of alarming signs and symptoms, hepatic portal venous gas can be conservatively managed without the need for invasive surgical exploration. abstract_id: PUBMED:36212755 White spirit poisoning: An unusual cause of hepatic portal venous gas. Traditionally, the presence of air within the hepatic portal venous system has been considered a rather ominous sign as it has been associated with conditions of increased associated morbidity and mortality such as bowel ischemia and intraabdominal sepsis. However, benign conditions, not requiring any particular intervention, have been implicated in the etiology, as well. In the present report, we present the case of the accidental ingestion of white spirit as a rather unusual cause of hepatic portal vein gas. A 32-year-old, otherwise healthy, male was admitted to the emergency department following the accidental ingestion of a "sip," approximately 15 ml, of white spirit. The patient was complaining of nausea and upper abdominal pain that started soon after the ingestion of the caustic agent. An imaging investigation with a computed tomography scan (CT) of the abdomen revealed the presence of hepatic portal vein gas along with a diffuse edema of the gastric wall at the site of the lesser curvature. A follow-up CT, 2 days after the admission, revealed no evidence of hepatic portal venous gas. Based on the patient's good general condition, an expectant management was decided. No intervention was required, oral feeding was recommenced after 6 days of fasting and the patient was discharged 8 days after the admission. Hepatic portal venous gas is a very impressive imaging finding with remarkably diverse etiology and prognostic significance. Irrespective of the cause, an approach of managing patients with hepatic portal venous gas according to their clinical condition appears reasonable. abstract_id: PUBMED:26185666 Hepatic portal venous gas. Differentiating hepatic portal venous gas (HPVG) and pneumobilia on the CT scan can be accomplished by comparing the pattern of intrahepatic air spread. HPVG can be an indicator of significant intra-abdominal pathology and bowel ischaemia is the most common causative etiology for HPVG. abstract_id: PUBMED:36320790 A Case Report on Hepatic Portal Venous Gas (HPVG). Hepatic Portal Venous Gas (HPVG) is the abnormal presence of gas in the portal venous system. It is associated with life-threatening conditions and is a sinister radiological sign. This case report aims to evaluate the significance of HPVG as a radiological sign.
Our case involves a 49-year-old man who was admitted to the hospital following a one-day history of severe epigastric pain and haematemesis. Investigations showed extensive HPVG, gastric pneumatosis, a large retroperitoneal haematoma, and an obstructive lesion between the first and second part of the duodenum. Our patient was managed conservatively in the High Dependency Unit (HDU). A repeat Computerised Tomography (CT) scan showed successful resolution of the HPVG and gastric pneumatosis without any invasive intervention. abstract_id: PUBMED:26633137 Computed Tomography Identifies Hepatic Portal Venous Gas Caused by Severe Diarrhoea. Aim: To use computed tomography (CT) to diagnose the cause of hepatic portal venous gas (HPVG) in the case of an elderly male patient. Methods: This is a case study of an elderly male patient who suffered acute, severe abdominal pain accompanied by cessation of flatus and defecation following three days of diarrhoea, abdominal distention and emesis. The patient also developed asthma, which gradually became severe. The patient was admitted to the hospital where he underwent a physical examination and a CT scan. Results: The CT results confirmed that the patient was suffering from HPVG caused by severe diarrhoea. The CT scan showed marked distension and gas accumulation in the bowel lumen and pneumatosis within the intestinal wall. Also, the intrahepatic portal branches and small branches of veins in the mesentery were filled with a high density of gas. The combination of many factors led to HPVG. Damage to the gastrointestinal mucosa and raised intraluminal pressure, together with intestinal infection, were the main factors. The case report revealed that gas in the enteric cavity went into the submucosa, then into the small branches of veins in the mesentery and finally into the intrahepatic portal vein system. Conclusions: The CT examination revealed the imaging features of HPVG. Hepatic portal venous gas suggested raised intraluminal pressure, damage to the intestinal mucosa and intestinal infection, providing a reference for clinical diagnosis. abstract_id: PUBMED:27956952 Hepatic Portal Venous Gas with Pneumatosis Intestinalis. Hepatic portal venous gas with pneumatosis intestinalis are radiological clues of intestinal ischemia. Prompt evaluation and a high index of clinical suspicion for the diagnosis of acute mesenteric ischemia are necessary given the high mortality of this condition. We present a case of mesenteric ischemia, radiological clues to diagnosis, and associated mechanisms. abstract_id: PUBMED:25681812 Extensive portal venous gas: Unlikely etiology and outcome. Portal venous gas or hepatic portal venous gas (HPVG) found on imaging portends grave outcomes for patients suffering from ischemic bowel disease or mesenteric ischemia. HPVG is more rarely seen with severe but treatable abdominal infection as well as multiple benign conditions, and therefore must be aggressively evaluated. We report a 70-year-old female who developed extensive intra- and extra-hepatic portal venous gas, pneumatosis intestinalis and free air associated with a perforation of the jejunojejunostomy after a gastrectomy for gastric carcinoma. abstract_id: PUBMED:34955462 Pathological features of hepatic portal venous gas in a cat. An 8-year 8-month-old castrated male Munchkin presented with vomiting, anorexia and hypoactivity.
Computed tomography revealed excessive gas accumulation within the intestinal lumen and gas bubbles in the liver, spleen, and portal venous system, indicating hepatic portal venous gas. The cat died without any significant improvement, and mild splenomegaly was found at necropsy. Histologically, multiple gas vacuoles were diffusely observed in the liver and spleen. In the stomach, multiple gas vacuoles and scattered focal ulcers were detected within the mucosa. Multifocal hemorrhage was noted in the small and large intestines, whereas gas vacuoles were not present. Based on these findings, a gastric ulcer under high gas pressure may have provided an entry point for gas into the portal venous system. abstract_id: PUBMED:33489222 A case of hepatic portal venous gas that resolved with conservative treatment. Hepatic portal venous gas (HPVG) is a potentially fatal condition. If vital signs are normal and laboratory data are not suggestive of any necrotic changes, a follow-up computed tomography after a conservative procedure can be performed at short intervals to conservatively monitor the patient. abstract_id: PUBMED:29123742 Evaluation of patients with hepatic portal venous gas who can be treated with conservative therapy. Aim: Hepatic portal venous gas is an indication for emergency surgery and reportedly has a high mortality rate. However, in recent years, cases of hepatic portal venous gas associated with milder disease are increasing. In this report, we compared two groups to investigate whether there were any objective indicators for selecting conservative therapy. Methods: From July 2007 to August 2011, 19 patients with hepatic portal venous gas were evaluated at our hospital. The condition was diagnosed by computed tomography imaging. Vital signs, laboratory test results, and physical examination findings were compared. Results: The A group included 12 patients who could be treated with conservative therapy. The B group included 7 patients who could not be treated with conservative therapy. The combined mortality rate was 31.5% (6/19 patients). There was a significant difference in the pulse rate and shock index (defined in the sketch after this entry). In the A group, none of the patients had signs of peritoneal irritation. However, in the B group, physical examination revealed signs of peritoneal irritation in all patients. Conclusion: Conservative therapy may be possible in patients with hepatic portal venous gas, depending on the cause. Evaluation of vital signs and serial changes on physical examination are important. Answer: Hepatic portal venous gas (HPVG) is a radiological finding characterized by the presence of gas within the portal venous system. Traditionally, it has been considered a sign of serious and potentially life-threatening conditions, such as bowel ischemia, intraabdominal sepsis, or mesenteric ischemia, which are associated with high morbidity and mortality (PUBMED:36212755, PUBMED:26185666, PUBMED:27956952). However, the prognostic significance of HPVG is not always dire, as it can also be associated with a variety of benign conditions that may not require aggressive intervention (PUBMED:24696625, PUBMED:36212755). The management and outcome of patients with HPVG depend on their clinical condition and the underlying cause. There are reports of patients with HPVG who were managed conservatively with successful resolution of the condition without the need for invasive surgical exploration or intervention (PUBMED:24696625, PUBMED:36320790, PUBMED:33489222).
In some cases, HPVG has been associated with milder diseases, and conservative therapy has been possible, especially when vital signs are stable, laboratory data do not suggest necrotic changes, and there are no signs of peritoneal irritation (PUBMED:29123742). Computed tomography (CT) scans play a crucial role in diagnosing HPVG and differentiating it from other conditions such as pneumobilia. CT can also help identify the underlying cause, whether it is severe diarrhea, ingestion of caustic agents, or other factors (PUBMED:26633137, PUBMED:34955462). In conclusion, while HPVG can be a marker of severe intra-abdominal pathology, it is not always the "writing on the wall" for a grim prognosis. The clinical context, patient's condition, and imaging studies are essential in determining the management approach and potential outcomes for patients with HPVG. An individualized approach based on clinical evaluation and serial imaging is recommended to guide treatment decisions (PUBMED:24696625, PUBMED:36212755, PUBMED:36320790, PUBMED:29123742).
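The shock index reported in the conservative-therapy study above (PUBMED:29123742) is a standard bedside ratio; as a minimal sketch, assuming the conventional adult formulation rather than any study-specific variant:

$$\text{Shock Index} = \frac{\text{heart rate (beats/min)}}{\text{systolic blood pressure (mmHg)}}$$

Values of roughly 0.5-0.7 are usually considered normal in adults, and higher values raise concern for haemodynamic compromise; the abstract does not state the cut-off used in that study.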
Instruction: Are non-allergic drug reactions commonly documented as medication "allergies"? Abstracts: abstract_id: PUBMED:27862587 Are non-allergic drug reactions commonly documented as medication "allergies"? A national cohort of Veterans' admissions from 2000 to 2014. Purpose: Adverse drug reactions (ADRs) including medication allergies are not well-described among large national cohorts. This study described the most common documented medication allergies and their reactions among a national cohort of Veterans Affairs (VA) inpatients. Methods: We evaluated inpatient admissions in any VA Medical Center from 1 January 2000 to 31 December 2014. Each admission was linked with allergy history preceding or upon admission. Individual drugs were aggregated into drug class category including: penicillins, sulfonamides, angiotensin converting enzyme (ACE) inhibitors, opiates, HMG-CoA reductase inhibitors ("statins") and non-steroidal anti-inflammatory drugs (NSAIDs). Results were reported in aggregate and over time. Results: Approximately 10.8 million inpatient admissions occurred from 2000 to 2014. We found the most commonly reported allergy drug classes were penicillins (13%, n = 1 410 080), opiates (9.1%, n = 984 978), ACE inhibitors (5.7%, n = 618 075), sulfonamides (5.1%, n = 558 653), NSAIDs (5.1%, n = 551 216) and statins (3.6%, n = 391 983). Several allergy histories increased over time including opiates (6.2 to 11.2%), ACE inhibitors (1.3 to 10.2%), statins (0.3 to 7.3%) and NSAIDs (3.9 to 6.0%). Rash was the most commonly documented reaction on reports for penicillins (25.5%, n = 371 825), sulfonamides (25.6%, n = 165 954) and NSAIDs (10.3%, n = 65 741). The most common reaction for opiates was nausea/vomiting (17.9%, n = 211 864), cough/coughing for ACE inhibitors (41.0%, n = 270 537) and muscle pain/myalgia for statins (34.1%, n = 186 565). Conclusions: We report that penicillins and opiates are the most commonly documented drug allergies among VA inpatients, but other drug classes such as ACE inhibitors, statins and NSAIDs are becoming increasingly common. Clinicians also commonly document non-allergic ADRs in the allergy section such as cough or myalgia. abstract_id: PUBMED:18692136 Allergic reactions to oral drugs: A case/non-case study from an Italian spontaneous reporting database (GIF). Despite the wide number of studies investigating drug-induced allergy, limited data focused on allergies associated with orally administered drugs are available. The aim of the study is to evaluate allergic drug reactions associated with oral drug use, using an Italian spontaneous reporting database of adverse drug reactions (ADRs). Spontaneous reports associated with oral drugs retrieved from seven Italian regions (GIF research group), collected from 1988 to 2006, were analysed. Association between drugs and allergic adverse reactions was assessed using the case/non-case method, calculating the ADR reporting odds ratio (ROR) as a measure of disproportionality. Overall, 27,175 reports of adverse reactions related to oral drug use were analysed; of these, 3143 (11.6%) were judged as allergy cases. Paediatric patients (≤15 years) and inpatients (p<0.001) were more represented in cases than in non-cases. Antibiotics and Non-Steroidal Anti-inflammatory Drugs (NSAIDs) were the only two drug classes associated with a significant increase of ROR.
Regarding antibiotics, cinoxacin (6.88; 95% CI 4.19-11.29) and moxifloxacin (4.20; 95% CI 3.19-5.55) were related to the highest ROR values, while propionic acid derivatives (ROR 2.75; 95% CI 2.30-3.28), and in particular ibuprofen (4.20; 95% CI 3.13-5.63), have shown the highest ROR values among NSAIDs. The results of the present paper confirm the higher frequency of allergic reactions with oral antibiotics and NSAIDs, although more data are needed. Given the widespread use of these drug classes (some of them being purchased as over the counter drugs), awareness should be raised among patients and prescribers about these risks. abstract_id: PUBMED:16310532 Drug hypersensitivity. Allergic drug reactions compose a small percentage of ADRs, yet they are commonly encountered in clinical practice, and physicians are taught routinely to question patients about these reactions during history taking. Among antibiotics, the immunochemistry of penicillins has been elucidated, leading to the development of validated skin test reagents to diagnose type 1 allergy. Currently, the temporary commercial unavailability of Pre-Pen makes accurate penicillin skin testing impossible; however, this important skin test reagent is expected to become available sometime in 2006. Type 1 allergies to most other drugs lack comparable diagnostic tests, and their diagnosis is therefore driven by the patient's history. When readministration of medications to which patients report previous reactions is indicated, it may almost always be successfully accomplished by means of either graded challenge or desensitization. abstract_id: PUBMED:31037361 Incidence, etiology, predictors and outcomes of suspected drug hypersensitivity reactions in a tertiary care university hospital's emergency department: A retrospective study. Background: Allergic drug reactions are adverse drug reactions that result from a specific immunologic response to a medication. Considering the epidemiological and clinical importance of drug allergy, this retrospective analysis focused on drug hypersensitivity in a tertiary care university hospital emergency department (ED). Methods: In this study 74,929 ED records obtained from March 2012 to March 2015 were reviewed to determine the incidence, etiology, predictors and clinical features of drug hypersensitivity. Results: The observed incidence of drug hypersensitivity was 0.87% of all ED admissions. It was significantly higher in female patients aged 18-29 years (2.26%; P < 0.0001) and during winter months (1.09%; P = 0.0058). Most patients had mild to moderate symptoms which regressed following ED treatment. Only five patients (7 per 100,000 ED visits) were diagnosed with drug-induced anaphylaxis, and only five patients were provisionally diagnosed with severe non-immediate reactions with systemic involvement. No patient died of drug hypersensitivity in the ED, and only a small proportion required subsequent hospitalization. The most common causes of drug hypersensitivity reactions were amoxicillin and paracetamol. Conclusion: Drug hypersensitivity is a common reason for tertiary centre emergency admissions. This is the largest analysis of ED drug hypersensitivity admissions so far. Beta-lactams were identified as the leading cause of drug hypersensitivity requiring ED evaluation, which also explains the peak of drug hypersensitivity cases during winter months when the use of these medications is highest. abstract_id: PUBMED:22766617 Diagnostic issues in pediatric drug allergy.
Purpose Of Review: The serious health hazards posed by drug allergies have long been recognized and are commonly encountered in daily pediatric practice. Our general lack of knowledge of the pathomechanisms greatly hampers our ability to correctly diagnose allergic drug reactions. The present review addresses the most recent literature regarding the diagnosis of allergy for the most commonly implicated drugs in children, that is, antibiotics, nonsteroidal anti-inflammatory drugs (NSAIDs) and vaccines. Recent Findings: Systematic approaches have been proposed and, if implemented, will likely reduce the number of children being inappropriately labeled as 'drug allergic'. In case of suspicion of an allergy, a complete allergy work-up should always be performed. This evaluation based on carefully selected diagnostic tests will differ according to the drug involved and the mechanisms suspected. The drug provocation test remains the gold standard and has gained in importance, particularly in children presenting with a benign rash while taking antibiotic treatment. Several new diagnostic tools are currently under investigation and provide promising results. Summary: Accurate diagnosis of drug allergy is important not only to prevent serious or even life-threatening reactions, but also to avoid unnecessary drug restriction associated with increased resistance and healthcare costs. abstract_id: PUBMED:20719007 Angio-oedema and urticaria as side effects of frequently used drugs. Angio-oedema and urticaria can be symptoms of both allergic (IgE-mediated) and non-allergic drug hypersensitivity reactions. Non-allergic drug reactions, which may have a similar clinical presentation as allergic drug reactions, are not caused by an IgE-mediated immune mechanism. Because of unfamiliarity with non-allergic drug reactions and the unclear time course between drug use and reactions, the relationship with the responsible drug is often not recognized, leading to unnecessary patient risks. In the present article three patients with angio-oedema and urticaria as side effects of frequently used drugs (ACE-inhibitors, NSAIDs and betalactams) are presented and discussed. Patient A was a 69-year-old man with ACE-inhibitor induced angio-oedema. Patient B was a 40-year-old woman with urticaria and angio-oedema after ingestion of a NSAID caused by a non-allergic drug reaction. Patient C was a 54-year-old woman who developed an anaphylactic shock because of a type I allergy to betalactams. abstract_id: PUBMED:22559997 Determining safe alternatives for multidrug hypersensitive patients with the alternative triple antibiotic-analgesic test. Background: Drug provocation tests (DPTs) need technical equipment, staff and time. There are very few allergy centres performing DPTs in Turkey. Therefore many patients are referred to these centres. A one-day triple oral DPT, combining two antibiotics with one NSAID or two NSAIDs with one antibiotic, for determining safe alternatives is safe, cost-effective and time-saving compared with the conventional one-day, one-drug oral DPT. Our aim was to investigate the safety of antibiotic-NSAID oral DPT performed on the same day to find safe alternatives in multidrug hypersensitive patients. Methods: Forty-two patients who had been diagnosed as having both antibiotic and NSAID hypersensitivity were enrolled to the study between 15 November and 15 July 2010. The reactions were urticaria and/or angio-oedema not including laryngeal oedema for all patients.
A triple test of two antibiotics plus one NSAID, or two NSAIDs plus one antibiotic, was performed on the same day in the study patients (n=22), while the control group (n=20) took the drugs on three separate days. Results: Only two patients had positive reactions during the triple test and two patients had adverse reactions; one had gastric pain, one had nausea. Three patients in the control group had positive reactions. There were no significant differences between the two groups in frequency of adverse and allergic drug reactions (p>0.05). Sixty days were spent for the tests of the control group with only 28 days for the study population. Conclusion: A triple test performed with antibiotic and NSAID on the same day for determining safe alternatives for multidrug hypersensitive patients reporting non-life-threatening allergic reactions seems to be safe and time-saving. abstract_id: PUBMED:27655415 New Advances in the Study of IgE Drug Recognition. Allergic drug reactions are currently a major public health problem affecting patient health and increasing healthcare costs. They are caused by interactions between a drug and the human immune system and result in symptoms ranging from urticaria or angioedema to those more serious such as anaphylaxis or anaphylactic shock. The most commonly accepted mechanism for immunological activation is based on the hapten hypothesis. Drugs are low molecular weight substances that cannot cause an immune response on their own. However, they can act as haptens and form covalent adducts with proteins. The resulting hapten-carrier (drug-protein) conjugate can induce the production of IgE antibodies or T cells. An epitope, or antigenic determinant, is the part of the drug-protein antigen that is specifically recognized by the immune system. This may involve not only the drug derivative but also part of the carrier protein. Understanding the way in which drugs are metabolized after protein conjugation is vital in order to make progress in the diagnosis of clinical allergy. In this review, recent advances in the identification of the chemical structures of antigenic determinants involved in immediate allergic reactions to drugs are presented. We have focused on drugs that most commonly elicit these reactions: betalactam and quinolone antibiotics and the non-steroidal anti-inflammatory drug pyrazolone. This will be discussed from a chemical point of view, relating our understanding of drug structure, chemical reactivity and immune recognition. abstract_id: PUBMED:19099275 Adverse drug reactions and the skin - from trivial to fire signal. Adverse drug reactions are a common medical problem. Hypersensitivity reactions in particular are striking and unsettling because they are neither dose-dependent nor predictable. Cutaneous manifestations span a wide range, from mild macular exanthems to bullous eruptions, and from urticaria and angioedema to anaphylaxis. Systemic hypersensitivity reactions can affect nearly all internal organs and cells. Characteristic clinical danger signs for severe allergic drug reactions, for immediate type as well as for delayed type manifestations, help to recognise the incipient allergic drug reaction, to stop the further administration of the potential elicitor/allergen and to start therapeutic actions.
It is essential for the success of a further allergologic work-up of such reactions to describe and document accurately the morphologic characteristics of the skin and other organ manifestations and the exact time course of medication intake and development of the clinical adverse reaction. This facilitates the choice of adequate diagnostic measures and the interpretation of the respective results. abstract_id: PUBMED:19075990 Understanding adverse drug reactions and drug allergies: principles, diagnosis and treatment aspects. Adverse Drug Reactions (ADRs) and drug allergies, as a subset of ADRs, pose a significant public health concern, complicating 5 to 15% of therapeutic drug courses. They may result in diminished quality of life, increased physician visits, health care costs, hospitalizations, and even death. The incidence of serious ADRs in hospitalized patients was estimated to be 6.7% and for fatal ADRs to be 0.32%, so recognizing and taking action on ADRs is an important aspect of medication management. Allergic reactions to drugs refer to those ADRs that involve immune mechanisms, account for up to 15% of ADRs, and can be classified as type I through IV immune reactions, with the IgE-mediated type I reaction being the most common immunologic mechanism. Clinical manifestations of allergic reactions range from pruritus and rash to serious reactions such as systemic anaphylaxis and cardiovascular emergencies, and they occur in 2-3% of hospitalized patients. Health professionals should be aware of the presenting clinical features and risk factors of ADRs and should be able to differentiate between allergic and non-allergic adverse drug reactions. This will lead to increased opportunities to review drug selection and prescribing practices affecting patients' outcome. This article will review the definition and estimated incidence, the features, classification and types of ADRs and drug allergies and related patents. It will highlight the role of detecting, reporting, and assessing suspected ADRs and drug allergies in the most clinically relevant drug groups. Priorities in the evaluation and management of the conditions of patients who have experienced allergic and non-allergic drug reactions also will be discussed. Answer: Yes, non-allergic drug reactions are commonly documented as medication "allergies." The study of Veterans Affairs (VA) inpatients found that clinicians frequently document non-allergic adverse drug reactions (ADRs) in the allergy section of medical records, including reactions such as cough or myalgia, which are not true allergies (PUBMED:27862587). This is supported by the fact that non-allergic drug reactions can have a similar clinical presentation as allergic drug reactions, which can lead to confusion and the mislabeling of these reactions as allergies (PUBMED:20719007). The misclassification of non-allergic reactions as allergies can result in unnecessary patient risks due to unfamiliarity with non-allergic drug reactions and unclear time courses between drug use and reactions (PUBMED:20719007). Additionally, the study from an Italian spontaneous reporting database also confirms the higher frequency of allergic reactions with oral antibiotics and NSAIDs, indicating that these drug classes are often associated with reported allergies (PUBMED:18692136); the reporting odds ratio used in that study is sketched below.
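The reporting odds ratio cited from the Italian case/non-case study (PUBMED:18692136) follows the standard disproportionality formulation; a minimal sketch, assuming the usual 2x2 layout (the letters a-d are illustrative labels, not counts taken from the study):

$$\mathrm{ROR} = \frac{a/b}{c/d} = \frac{a \cdot d}{b \cdot c}$$

where a = allergy reports (cases) involving the drug of interest, b = non-allergy reports (non-cases) involving that drug, c = allergy reports involving all other drugs, and d = non-allergy reports involving all other drugs. An ROR above 1 with a 95% confidence interval excluding 1, as for cinoxacin or ibuprofen above, indicates disproportionate reporting of allergic reactions for that drug.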
Instruction: Screening for recurrences in patients treated with breast-conserving surgery: is there a role for MRI? Abstracts: abstract_id: PUBMED:18266039 Screening for recurrences in patients treated with breast-conserving surgery: is there a role for MRI? Background: Although it has been shown that magnetic resonance imaging (MRI) is more sensitive than mammography in the detection of breast cancer in high-risk populations, there is little data on the use of MRI as a screening tool to detect recurrence after breast-conserving surgery. Our objective was to determine the potential role of MRI in the screening of breast cancer patients treated with breast-conserving surgery. Methods: Retrospective chart review of all patients undergoing margin-negative lumpectomy and adjuvant radiation therapy for infiltrating breast carcinoma between 1st January 1993 and 1st January 2004. Patients were followed for recurrence in the ipsilateral or contralateral breast by physical exam and mammography. Results: Four hundred and seventy-six primary tumor excisions were performed. Patients were followed for a median of 5.4 years. Ipsilateral breast recurrences developed in eight patients (1.7%) with a mean diameter of 1.6 cm. All of these women are alive and free of metastases. Contralateral cancers developed in 11 patients (2.3%) with a mean diameter of 1.5 cm. Ten of these 11 women are alive and free of disease. Conclusions: In a contemporary patient population the risk of local recurrence after lumpectomy and radiation therapy is very low. If screening MRI had been a part of annual follow-up, a total of 2570 MRIs would have been performed. Given the small tumor size at detection and the excellent survival of those who recurred, annual screening MRI would have incurred significant cost and would have been unlikely to improve overall survival. abstract_id: PUBMED:29030785 Usefulness of abbreviated breast MRI screening for women with a history of breast cancer surgery. Purpose: We investigated the usefulness of abbreviated breast MRI (AB-MRI), including fat-suppressed T2-weighted imaging, pre- and postcontrast image acquisition, and subtracted maximum-intensity projection imaging, for the screening of women with a history of breast cancer surgery. Methods: Between October 2014 and March 2016, a total of 799 AB-MRI examinations were performed for 725 women with a history of breast cancer surgery. The image acquisition time was 8.5 min. Screening mammography, ultrasound, and AB-MRI were generally performed around the same time. The cancer detection rate, positive predictive values for recall and biopsy, sensitivity and specificity of screening MRI, and rate of malignancy belonging to each breast imaging reporting and data system (BI-RADS) category were assessed. Results: AB-MRI detected 12 malignancies in 12 women (15.0 cancers per 1000 cases). Seven of these 12 malignancies were initially invisible on ultrasound and mammography, although subsequent targeted ultrasound revealed lesions corresponding to the MRI-detected lesions. The positive predictive values for recall and biopsy and sensitivity and specificity values for screening MRI were 12.4, 61.5, 100, and 89.2%, respectively. The rates of malignancies belonging to categories 1, 2, 3, and 4 of the BI-RADS were 0, 0, 4.8, and 57.1%, respectively. Conclusions: The diagnostic performance of screening AB-MRI for women with a history of breast cancer surgery is acceptable, with the advantages of short examination and interpretation times and low costs.
Thus, it could be used as a main screening modality that may replace conventional imaging in breast cancer survivors. abstract_id: PUBMED:34877066 Imaging Surveillance of the Reconstructed Breast in a Subset of Patients May Aid in Early Detection of Breast Cancer Recurrence. Objectives: The purpose of this study is to determine the biological markers more frequently associated with recurrence in the reconstructed breast, to evaluate the detection method, and to correlate recurrent breast cancers with the detection method. Material And Methods: An institutional review board-approved retrospective study was conducted at a single institution on 131 patients treated with mastectomy for primary breast cancer followed by breast reconstruction between 2005 and 2012. Imaging features were correlated with clinical and pathologic findings. Results: Of the 131 patients who met our inclusion criteria, 40 patients presented with breast cancer recurrence. The most common histopathologic type of primary breast cancer was invasive ductal carcinoma in 82.5% (33/40) of patients. Triple-negative breast cancer was the most common biological marker with 42.1% (16/38) of cases. Clinically, 70% (28/40) of the recurrences presented as palpable abnormalities. Of nine patients who underwent mammography, a mass was seen in eight patients. Of the 35 patients who underwent ultrasound evaluation, an irregular mass was found in 48.6% (17/35) of patients. Nine patients with recurrent breast cancer underwent breast MRI, and MRI showed an irregular enhancing mass in four patients, an oval mass in four patients, and skin and trabecular thickening in one patient. About 55% of patients with recurrent breast cancer were found to have distant metastases. Conclusion: Patients at higher risk for locoregional recurrence may benefit from imaging surveillance in order to detect early local recurrences. abstract_id: PUBMED:15770888 The application of breast MRI in staging and screening for breast cancer. Contrast-enhanced breast magnetic resonance imaging (MRI) is a relatively new but increasingly used modality for the detection of breast cancer. MRI has demonstrated utility in identifying additional tumor foci and extent of disease in patients with known breast cancer. This is especially useful with invasive lobular carcinoma, which is difficult to evaluate on mammography. MRI has been found to identify the primary tumor in 70% to 86% of cases of occult breast cancer. Contrast-enhanced breast MRI has shown some usefulness in the detection of residual cancer following surgery but is limited by postoperative changes. In patients who have undergone neoadjuvant chemotherapy, breast MRI is most accurate in those patients in whom there is little or no response to chemotherapy. The use of contrast-enhanced breast MRI for breast cancer screening is controversial. It has only been used in a few small studies of high-risk patients. The limitations of breast MRI include uptake in benign lesions and normal tissue, sensitivity for ductal carcinoma in situ, cost, and availability. This paper will discuss the uses, benefits, and limitations of contrast-enhanced breast MRI in the staging and screening of breast cancer. abstract_id: PUBMED:31559181 Hereditary breast cancer: screening and risk reducing surgery. Background: The screening modalities for women at high risk for breast cancer have received increasing attention in recent years.
The aim of this study was to evaluate the performance of our screening program comparing the diagnostic sensitivity of clinical breast examination, mammography, ultrasonography (US) and magnetic resonance imaging (MRI). Methods: Clinical breast examination, mammography, US and MRI for each patient with BRCA1 and BRCA2 mutation who underwent breast surgery in our Institution from October 2008 to April 2016 were retrospectively evaluated. The diagnostic accuracy for MRI and for the other surveillance tests in identifying early breast cancer were assessed. Results: Twenty-six female patients with genetic mutation underwent breast surgery. Twenty-two out of 26 (85%) developed cancer during the dedicated screening protocol whereas 4 women who underwent surgery did not have cancer. Imaging was able to detect cancer in all 22 patients (per-patient sensitivity of 100%), identifying all 35 neoplastic lesions (per-lesion sensitivity of 100%). The combination of clinical breast examination, US and mammography aided the cancer diagnosis in 14 (64%) of the patients with a sensitivity of 64% and specificity of 100%. MRI identified all the cancers, with sensitivity and specificity of 100%. Moreover, in 8 (36%) of the 22 patients who developed breast cancers, the cancers were detected only by MRI, revealing significant superiority with respect to the other surveillance modalities (P<0.05). Conclusions: MRI proved to be the best imaging modality in detection of breast cancer, even for lesions <1 cm. Prophylactic mastectomy is the most effective risk reduction strategy in women at high risk, contributing to the reduction of anxiety related to the condition of a carrier. abstract_id: PUBMED:16032998 Feasibility of MRI imaging in patients with breast cancer. Three women aged 45, 37 and 56 underwent MRI of the breast for breast cancer screening, pre-operative staging and monitoring of the response to chemotherapy. Breast cancer at an early stage for which breast-conserving surgery was possible was discovered in the first woman. In the second woman, the tumour was larger than was seen with mammography or ultrasound and other tumour sites were seen, leading to a more extensive treatment plan. In the year thereafter no metastases were found. In the third woman the response to chemotherapy was monitored. Eight months after therapy she appeared to have skeletal metastases. Mammography and ultrasound are the most commonly used modalities in breast imaging. Over the past few years MRI has been making an increasingly large contribution to the screening, staging and follow-up of patients with breast cancer. MRI can be an important supplementary study but its exact role still needs to be defined. abstract_id: PUBMED:17206485 Results of MRI screening for breast cancer in high-risk patients with LCIS and atypical hyperplasia. Background: Magnetic resonance imaging (MRI) can detect breast cancer in high-risk patients, but is associated with a significant false-positive rate resulting in unnecessary breast biopsies. More data are needed to define the role of MRI screening for specific high-risk groups. We describe our experience with MRI screening in patients with atypical hyperplasia (AH) and lobular carcinoma in situ (LCIS). Methods: We retrospectively reviewed data from our high-risk screening program prospective database for the period from April 1999 (when screening MRI was first performed at our institution) to July 2005. Patients with AH or LCIS demonstrated on previous surgical biopsy were identified.
All patients underwent yearly mammography and twice yearly clinical breast examination. Additional screening MRI was performed at the discretion of the physician and patient. Results: We identified 378 patients; 126 had AH and 252 had LCIS. Of these, 182 (48%) underwent one or more screening MRIs (mean, 2.6 MRIs; range, 1-8) during this period, whereas 196 (52%) did not. Those who had MRIs were younger (P < 0.001) with stronger family histories of breast cancer (P = 0.02). In MRI-screened patients, 55 biopsies were recommended in 46/182 (25%) patients, with 46/55 (84%) biopsies based on MRI findings alone. Cancer was detected in 6/46 (13%) MRI-generated biopsies. None of the six cancers detected on MRI were seen on recent mammogram. All six cancers were detected in five patients (one with bilateral breast cancer) with LCIS; none were detected by MRI in the AH group. Thus, cancer was detected in 5/135 (4%) of patients with LCIS undergoing MRI. The yield of MRI screening overall was cancer detection in 6/46 (13%) biopsies, 5/182 (3%) MRI-screened patients and 5/478 (1%) total MRIs done. In two additional MRI-screened patients, cancer was detected by a palpable mass in one and on prophylactic surgery in the other, having been missed by all recent imaging studies. For 196 non-MRI-screened patients, 21 (11%) underwent 22 biopsies during the same period. Eight of 22 (36%) biopsies yielded cancer in seven patients. All MRI-detected cancers were stage 0-I, whereas all non-MRI cancers were stage I-II. Conclusion: Patients with AH and LCIS selected to undergo MRI screening were younger with stronger family histories of breast cancer. MRI screening generated more biopsies for a large proportion of patients, and facilitated detection of cancer in only a small highly selected group of patients with LCIS. abstract_id: PUBMED:9718177 Is mammography useful in screening for local recurrences in patients with TRAM flap breast reconstruction after mastectomy for multifocal DCIS? Background: Skin-sparing mastectomy with immediate transverse rectus abdominis musculocutaneous (TRAM) flap reconstruction is being used more often for the treatment of breast cancer. Mammography is not used routinely to evaluate TRAM flaps in women who have undergone mastectomy. We have identified the potential value of its use in selected patients. Methods And Results: We report on four women who manifested local recurrences in TRAM flaps after initial treatment for ductal carcinoma in situ (DCIS) or DCIS with microinvasion undergoing skin-sparing mastectomy and immediate reconstruction. All four patients presented with extensive, high-grade, multifocal DCIS that precluded breast conservation. Three of four mastectomy specimens demonstrated tumor close to the surgical margin. Three of the four recurrences were detected by physical examination; the remaining local recurrence was documented by screening mammography. The recurrences had features suggestive of malignancy on mammography. Conclusion: We conclude that all patients undergoing mastectomy and TRAM reconstruction for extensive, multifocal DCIS should undergo regular routine mammography of the reconstructed breast. Our experience with this subgroup of patients raises concern about the value of skin-sparing mastectomy with immediate reconstruction for therapy. Adjuvant radiation therapy should be recommended for those patients with negative but close surgical margins.
Objective: This article reviews the current recommendations on magnetic resonance imaging (MRI) in routine follow-up after breast cancer successfully treated by surgery, radiation and/or systemic therapy. Methods: For this review, a literature search was done with the MeSH-terms: Magnetic Resonance Imaging, breast neoplasm, post-operative period and follow-up, according to PRISMA. The literature published between 2006 and 2016 in MedBase, PubMed and Embase was consulted. Results: Ten articles and 8 guidelines were analysed for their recommendations regarding MRI use in routine follow-up for breast cancer. One article concluded that MRI could influence further policy; all others did not find superiority of MRI over mammography or were inconclusive. One guideline recommended annual MRI for patients with a positive personal history (PPH) for breast cancer without further risk factors, one guideline offered no MRI to these patients and all other guidelines were inconclusive. Conclusion: There is insufficient evidence regarding superiority of MRI versus mammography in routine follow-up for patients with a PPH of breast cancer. MRI does not improve survival in all patients and should therefore not be offered to patients in follow-up without increased risk for recurrences. abstract_id: PUBMED:35202192 Missed Breast Cancers on MRI in High-Risk Patients: A Retrospective Case-Control Study. Purpose: To determine if MRI features and molecular subtype influence the detectability of breast cancers on MRI in high-risk patients. Methods and Materials: Breast cancers in a high-risk population of 104 patients were diagnosed following MRI describing a BI-RADS 4-5 lesion. MRI characteristics at the time of diagnosis were compared with previous MRI, where a BI-RADS 1-2-3 lesion was described. Results: There were 77 false-negative MRIs. A total of 51 cancers were overlooked and 26 were misinterpreted. There was no association found between MRI characteristics, the receptor type and the frequency of missed cancers. The main factors for misinterpreted lesions were multiple breast lesions, prior biopsy/surgery and long-term stability.
According to a retrospective chart review (PUBMED:18266039), the risk of local recurrence after lumpectomy and radiation therapy is very low, and the study suggests that the inclusion of annual screening MRI would have incurred significant cost without likely improving overall survival, given the small tumor size at detection and the excellent survival of those who recurred. However, other studies have found a potential role for MRI in this context. A study on the usefulness of abbreviated breast MRI (AB-MRI) found that it has acceptable diagnostic performance for women with a history of breast cancer surgery, with the advantages of short examination and interpretation times and low costs, suggesting it could be used as a main screening modality that may replace conventional imaging in breast cancer survivors (PUBMED:29030785). Another study (PUBMED:34877066) indicated that patients at higher risk for locoregional recurrence may benefit from imaging surveillance to detect early local recurrences. This study found that triple-negative breast cancer was the most common biological marker associated with recurrence, and imaging, including MRI, was instrumental in detecting recurrences. The application of breast MRI in staging and screening for breast cancer has been found to be useful in identifying additional tumor foci and extent of disease in patients with known breast cancer, especially with invasive lobular carcinoma, which is difficult to evaluate on mammography (PUBMED:15770888). For women at high risk for breast cancer, such as those with BRCA1 and BRCA2 mutations, MRI has been demonstrated to be the best imaging modality in the detection of breast cancer, even for lesions smaller than 1 cm (PUBMED:31559181). In summary, while routine use of MRI for screening recurrences in all patients treated with breast-conserving surgery may not be justified due to cost and low recurrence rates, it may have a role in selected high-risk patients or those with a history of breast cancer surgery, where it can provide significant benefits in early detection of recurrences (PUBMED:18266039, PUBMED:29030785, PUBMED:34877066, PUBMED:15770888, PUBMED:31559181).
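The screening yields quoted in these abstracts are simple proportions, and it can be useful to see how they are derived from the raw counts. A minimal Python sketch, with the counts hard-coded from the AH/LCIS surveillance abstract above and the helper name yield_pct being an illustrative choice of ours rather than anything from the source:

def yield_pct(events, denominator):
    """Return events/denominator expressed as a percentage."""
    return 100.0 * events / denominator

# Counts reported in the AH/LCIS screening results.
print(f"biopsy yield:      {yield_pct(6, 46):.0f}%")   # cancers per MRI-generated biopsy (~13%)
print(f"per-patient yield: {yield_pct(5, 182):.0f}%")  # patients with cancer per MRI-screened patient (~3%)
print(f"per-MRI yield:     {yield_pct(5, 478):.0f}%")  # patients with cancer per MRI performed (~1%)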
Instruction: Thoracic actinomycosis: Thinking of it earlier? Abstracts: abstract_id: PUBMED:21458973 Thoracic actinomycosis: Thinking of it earlier? Unlabelled: Thoracic actinomycosis is a rare disease, with high morbidity related to delayed diagnosis. Objectives: To identify and analyse paediatric thoracic actinomycosis to promote earlier diagnosis. Design: A 10-year descriptive retrospective review (1999-2008). Clinical, biological, and radiological data were collected and compared to identify common signs. Results: Three patients were identified. The time to diagnosis varied from 4 months to 3 years. Fever, asthenia, pain, weight loss, and sweats were the most common complaints. A predisposing factor was systematically observed. Irregular subpleural nodules with an infiltrative aspect were revealed. Definitive diagnosis was always histological. Conclusion: Actinomycosis should be sought in patients with asthenia, weight loss, and local pain when chest imaging shows distal nodules together with a non-specific biologic inflammatory syndrome. At this time, anaerobic bacteriological tests and histology should be performed. abstract_id: PUBMED:21068286 Pelvic actinomycosis-like disease due to Propionibacterium propionicum after hysteroscopic removal of an intrauterine device. A female patient presented with episodes of fever and pain in the lower right abdomen after hysteroscopic removal of an intrauterine device 2 months earlier. Pelvic actinomycosis originating from a tubo-ovarian abscess was diagnosed, with Propionibacterium propionicum, formerly known as Arachnia propionica, as the causative agent. abstract_id: PUBMED:8086208 Cervicofacial actinomycosis in a patient treated by surgery of the lymph node due to laryngeal cancer: case report An actinomycotic cervical abscess was excised in a 55-year-old man who had undergone partial laryngectomy and ipsilateral cervical lymph node dissection 6 years earlier. This rare case is described in detail. We conclude that postoperative structural changes producing the loss of physical and immunologic barriers may be determinant factors in the development of this late actinomycotic infection. abstract_id: PUBMED:3474146 Treatment of abdominothoracic actinomycosis with imipenem. The probably successful treatment with imipenem of a 42-year-old diabetic patient suffering from relapsing abdominothoracic actinomycosis is reported. Several earlier attempts to cure the patient by surgical procedures (including cysto-jejunostomy and pancreatic tail resection) combined with four- or six-week courses of intravenously administered penicillin G had failed. abstract_id: PUBMED:1841980 Median rhomboid glossitis: secondary to colonisation of the tongue by Actinomyces (a case report). Median rhomboid glossitis is an inflammatory lesion of the tongue, now believed to be secondary to candidiasis. We document a case of median rhomboid glossitis with heavy colonisation by Actinomyces in a 60-year-old male. We propose that Actinomyces, like Candida, induces pseudoepitheliomatous hyperplasia of the mucosa of the tongue and florid inflammatory hyperplasia of the underlying connective tissue, resulting in the characteristic elevated lesion. Actinomyces has not earlier been implicated as a cause of median rhomboid glossitis. abstract_id: PUBMED:9640839 Cervicofacial actinomycosis after partial mandibular resection: case report. An unusual case of cervicofacial actinomycosis in a 60-year-old male Nigerian is reported.
The patient had had partial mandibular resection for plexiform ameloblastoma one year earlier with insertion of a Kirschner wire to serve as a splint. The authors highlight the possible pathophysiology of this condition. It is suggested that clinicians be aware of the various modes of presentation and natural history of the disease to facilitate prompt diagnosis and appropriate treatment. This will assist in forestalling the numerous complications that may result from actinomycosis. abstract_id: PUBMED:30121203 A 27-Year-Old Man With Multiple Cavitary Lung Lesions. Case Presentation: A 27-year-old Lebanese man was admitted to our department for multiple pulmonary lesions. The patient had reported persistent fever, cough, shortness of breath, and weight loss since his return from Lebanon 6 weeks earlier. He had been diagnosed with a severe form of Behçet disease 4 years earlier, for which the ongoing treatment was corticosteroid therapy combined with methotrexate and infliximab. abstract_id: PUBMED:9082236 A rare case of primary abdominal actinomycosis The authors describe a case of primary abdominal actinomycosis operated on because of peritonitis caused by a tubo-ovarian abscess. They discuss the pathogenesis of the case: the patient had been on intrauterine device contraception until two months earlier and had been operated on for breast cancer. Preoperative diagnosis is nearly impossible, and only microscopic observation of the specimen can show the causative agent. Surgical options are reported, stressing the need for an adequate period of antimicrobial therapy. abstract_id: PUBMED:18348380 An unusual manifestation of actinomycosis infection of the maxilla. Postoperative actinomycosis infection caused by Actinomyces israelii is uncommon. Cervicofacial actinomycosis may appear as an opportunistic infection in the oral cavity. Initial diagnosis of this condition is difficult after oral surgery, and the condition is often mistaken for more common odontogenic infections. Actinomycosis responds to antibiotic therapy but the condition recurs after therapy is discontinued. This article presents an unusual case of postsurgical actinomycosis that manifested in the cheek and appeared to be unrelated to the surgical extraction of an impacted maxillary third molar that had occurred one year earlier. The organism was sampled under anaerobic conditions, isolated, and diagnosed by Gram's stain and culture tests. The condition was cured after extraoral surgical drainage, debridement, and several months of oral antibiotic administration. abstract_id: PUBMED:24852779 Cervicofacial actinomycosis: a long forgotten infectious complication of immunosuppression - report of a case and review of the literature. Actinomycosis is a rare chronic granulomatous infection caused by Gram-positive, non-acid-fast, anaerobic to microaerophilic bacteria. We report a case of cervicofacial actinomycosis in an 86-year-old woman undergoing immunosuppressive therapy with azathioprine and prednisone for rheumatoid arthritis. She had undergone dental treatment several months earlier. The diagnosis of culture-negative actinomycosis was based on histopathology findings and the isolation of companion bacteria. The patient was treated with amoxicillin-clavulanic acid for 3 months, which produced complete clearance of her cervicofacial actinomycosis. Our case points out the pitfalls of diagnostic procedures in actinomycosis and the ability of this rare disease to mimic other medical conditions.
Answer: Thoracic actinomycosis is a rare disease that often presents with high morbidity due to delayed diagnosis. A retrospective review over a 10-year period identified pediatric cases of thoracic actinomycosis, with the time to diagnosis ranging from 4 months to 3 years. Common symptoms included fever, asthenia, pain, weight loss, and sweats, and a predisposing factor was systematically observed in patients. Radiological findings typically showed irregular subpleural nodules with an infiltrative aspect. The definitive diagnosis was always made histologically (PUBMED:21458973). Given the non-specific nature of the symptoms and the potential for misdiagnosis, it is crucial for healthcare providers to consider actinomycosis earlier in the diagnostic process, especially in patients presenting with asthenia, weight loss, local pain, and chest radiological evidence of distal nodules accompanied by a non-specific biologic inflammatory syndrome. Prompt anaerobic bacteriological tests and histology should be performed to confirm the diagnosis (PUBMED:21458973). The importance of early consideration of actinomycosis is further underscored by the various presentations of the disease in other cases, such as pelvic actinomycosis-like disease following the removal of an intrauterine device (PUBMED:21068286), cervicofacial actinomycosis after surgery for laryngeal cancer (PUBMED:8086208), and abdominothoracic actinomycosis in a diabetic patient (PUBMED:3474146). These cases highlight the diverse clinical manifestations of actinomycosis and the potential for it to be overlooked or mistaken for other conditions. In conclusion, thoracic actinomycosis should be considered earlier in the diagnostic process to reduce morbidity associated with delayed diagnosis. Clinicians should be vigilant for the characteristic symptoms and radiological findings, and should not hesitate to perform the necessary tests to identify this rare but treatable condition.
Instruction: Do trends in population levels of blood pressure and other cardiovascular risk factors explain trends in stroke event rates? Abstracts: abstract_id: PUBMED:12364723 Do trends in population levels of blood pressure and other cardiovascular risk factors explain trends in stroke event rates? Comparisons of 15 populations in 9 countries within the WHO MONICA Stroke Project. World Health Organization Monitoring of Trends and Determinants in Cardiovascular Disease. Background And Purpose: Previous studies have indicated a reasonably strong relationship between secular trends in classic cardiovascular risk factors and stroke incidence within single populations. To what extent variations in stroke trends between populations can be attributed to differences in classic cardiovascular risk factor trends is unknown. Methods: In the World Health Organization Monitoring of Trends and Determinants in Cardiovascular Disease (WHO MONICA) Project, repeated population surveys of cardiovascular risk factors and continuous monitoring of stroke events have been conducted in 35- to 64-year-old people over a 7- to 13-year period in 15 populations in 9 countries. Stroke trends were compared with trends in individual risk factors and their combinations. A 3- to 4-year time lag between changes in risk factors and change in stroke rates was considered. Results: Population-level trends in systolic blood pressure showed a strong association with stroke event trends in women, but there was no association in men. In women, 38% of the variation in stroke event trends was explained by changes in systolic blood pressure when the 3- to 4-year time lag was taken into account. Combining trends in systolic blood pressure, daily cigarette smoking, serum cholesterol, and body mass index into a risk score explained only a small fraction of the variation in stroke event trends. Conclusions: In this study, it appears that variations in stroke trends between populations can be explained only in part by changes in classic cardiovascular risk factors. The associations between risk factor trends and stroke trends are stronger for women than for men. abstract_id: PUBMED:25585905 Secular trends of blood pressure, body mass index, lipids and fasting glucose among children and adolescents in Turkey. What Is Already Known About This Subject: • Turkey is in a state of rapid transition and urbanization. There is ample evidence for an increased cardiovascular risk over the past two decades in Turkey. • Obesity, dyslipidemia and high blood pressure in children are markers of the future burden of cardiovascular disease, and clustering of cardiovascular risk factors starts early in life. • The patterns of cardiovascular risk factors in populations are not static over time, and the secular trends in obesity and cardiovascular risk factors among children and adolescents of developing countries have not been completely defined. What This Study Adds: • The prevalences of obesity and high blood pressure have increased severalfold among Turkish children and adolescents over the past two decades. • The temporal trends in fasting glucose and lipids did not correlate with the increasing prevalence of obesity, with the exception of HDL-C levels. • Finally, the prevalence of metabolic syndrome has doubled in Turkish children and adolescents over the last two decades.
Our findings indicate that there may be an increase in the future morbidity and mortality from cardiovascular disease, thrombosis, stroke and diabetes mellitus throughout the developing countries. Summary: Patterns of cardiovascular risk factors and metabolic syndrome risk criteria in populations are not static over time. We analyzed the trends in lipid levels, blood pressure, fasting glucose and body mass index over the last two decades among children and adolescents. Two different cross-sectional studies were examined among students (ages 8-18) in 1989 (n = 1385) and in 2008 (n = 1746). Using the international cut-off points and percentiles, we identified children and adolescents with elevated blood pressure, triglycerides, fasting glucose (≥100 mg dL⁻¹), body mass index and low high-density lipoprotein cholesterol (HDL-C) levels. The prevalence of obesity has increased severalfold over the last two decades. According to the international criteria, 4.3% of children were obese and 15.2% were overweight in 2008 vs. only 0.6% obese and 4.3% overweight in 1989 (P < 0.001). The prevalence of elevated systolic blood pressure also increased significantly among 8-12-year-old boys and girls (28.5 vs. 15.7%, P < 0.001). The temporal trends in fasting glucose and lipids did not correlate with the increasing prevalence of obesity, with the exception of HDL-C levels. Low HDL-C (≤ 10th percentile) was observed in 16.3% in 1989 vs. 21.6% in 2008 (P = 0.016). HDL-C levels decreased significantly among adolescent boys. Finally, the prevalence of metabolic syndrome by modified National Cholesterol Education Program (NCEP) criteria has doubled over the last two decades (2.2 vs. 4.5%, P < 0.001). Strategies for combating the future cardiovascular mortality should focus on the prevention of obesity, high blood pressure and metabolic syndrome among children and adolescents. abstract_id: PUBMED:26076950 Contributions of risk factors and medical care to cardiovascular mortality trends. Ischaemic heart disease, stroke, and other cardiovascular diseases (CVDs) lead to 17.5 million deaths worldwide per year. Taking into account population ageing, CVD death rates are decreasing steadily both in regions with reliable trend data and globally. The declines in high-income countries and some Latin American countries have been ongoing for decades without slowing. These positive trends have broadly coincided with, and benefited from, declines in smoking and physiological risk factors, such as blood pressure and serum cholesterol levels. These declines have also coincided with, and benefited from, improvements in medical care, including primary prevention, diagnosis, and treatment of acute CVDs, as well as post-hospital care, especially in the past 40 years. These variables, however, explain neither why the decline began when it did, nor the similarities and differences in the start time and rate of the decline between countries and sexes. In Russia and some other former Soviet countries, changes in volume and patterns of alcohol consumption have caused sharp rises in CVD mortality since the early 1990s. An important challenge in reaching firm conclusions about the drivers of these remarkable international trends is the paucity of time-trend data on CVD incidence, risk factors throughout the life-course, and clinical care. abstract_id: PUBMED:7578538 Age and secular trends in risk factors for cardiovascular disease in Busselton.
Mortality rates from heart disease and stroke in Australia have been falling for more than 20 years. No completely satisfactory explanations for this trend exist. However, it is believed to be due, at least in part, to changes in the incidence of cardiovascular disease arising from changes in the prevalence and severity of risk factors for cardiovascular disease. The adult community of Busselton in Western Australia participated in cross-sectional health surveys every three years from 1966 to 1981. This paper describes secular trends from 1966 to 1981 and age trends from 25 to 80 years for cardiovascular risk factors in Busselton men and women. Downward secular trends were observed for mean blood pressure and smoking for men and women, upward trends were observed for body mass index in men, and mean cholesterol was approximately constant over this period. The age and secular trends were consistent with other Australian studies conducted in the 1980s and with overseas studies. An estimated 67 per cent of the decline in cardiovascular mortality rates among Busselton men and 22 per cent of the decline among Busselton women may be attributed to changes in the prevalence of risk factors for cardiovascular disease. abstract_id: PUBMED:3335877 The World Health Organization MONICA Project (monitoring trends and determinants in cardiovascular disease): a major international collaboration. WHO MONICA Project Principal Investigators. A World Health Organization Working Group has developed a major international collaborative study with the objective of measuring over 10 years, and in many different populations, the trends in, and determinants of, cardiovascular disease. Specifically, the programme focuses on trends in event rates for validated fatal and non-fatal coronary heart attacks and strokes, and on trends in cardiovascular risk factors (blood pressure, cigarette smoking and serum cholesterol) in men and women aged 25-64 in the same defined communities. By this means it is hoped both to measure changes in cardiovascular mortality and to see how far they are explained: on the one hand by changes in incidence mediated by risk factor levels, and on the other by changes in case-fatality rates, related to medical care. Population centres need to be large and numerous; to reliably establish 10-year trends in event rates within a centre, 200 or more fatal events in men per year are needed, while for the collaborative study a multiplicity of internally homogeneous centres showing differing trends will provide the best test of the hypotheses. Forty-one MONICA Collaborating Centres, using a standardized protocol, are studying 118 Reporting Units (subpopulations) with a total population aged 25-64 (both sexes) of about 15 million. abstract_id: PUBMED:3673887 Trends in cardiovascular risk factors in an urban black population, 1973-74 to 1985: the Minnesota Heart Survey. Cardiovascular disease mortality rates have fallen dramatically in blacks in the United States. To determine whether this may be due to a decline in cardiovascular risk factors, we compared risk factors in a 1985 urban black population, ages 35 to 69, with those obtained in blacks of the same area in 1973-74. Age-adjusted mean body mass and the prevalence of overweight increased significantly over the 12-year period in both men and women.
Mean systolic blood pressures declined significantly in both sexes, diastolic blood pressure declined significantly in men, and the proportion of men and women hypertensives on medication and under control increased. The overall prevalence of cigarette smoking changed very little, but the proportion of heavy smokers decreased significantly in men. No significant changes occurred in resting heart rate. Concurrently with these risk factor trends, age-adjusted heart disease mortality rates in area blacks fell 27% between 1968-73 and 1979-84, and stroke mortality fell 58%. Changing risk factors may be contributing to declining cardiovascular mortality rates in blacks. However, overweight seems to be a worsening problem. abstract_id: PUBMED:28260600 Trends in population blood pressure and determinant factors for population blood pressure. Strategies to reduce the burden of blood pressure-attributable diseases require knowledge of the secular trend in population blood pressure (PBP) and its determinants. The issues were investigated in the Copenhagen City Heart Study (CCHS). The design of the CCHS is a repeated measures study. Such designs are uniquely suited to studying changes in an outcome and what risk factors may be associated with that outcome. Repeated measures studies are very well suited for trend analysis using mixed-effects analyses. Systolic blood pressure (SBP) decreased about 2 mmHg in 25 years. The risk factors age, gender and BMI were found valid as determinant factors for secular trends in SBP. In addition, the following factors were identified: household income and the interactions "gender*age" and "survey*age". The interaction "gender*age" stated that the difference between SBP in the two genders was great in young individuals and diminished with age. The interaction "survey*age" stated that SBP in young individuals decreased more with survey than SBP in older individuals. Thus, the 20-year-old subjects in surveys 2, 3 and 4 have lower SBP than the 20-year-old subjects in preceding surveys. The slopes were less steep at higher ages. In the group of elderly and old subjects the trend is partly explained by treatment bias, because more and more subjects leave the untreated group and start treatment. The factor "household income" was significant only in the female population and stated that high-income women had lower SBP and a more beneficial secular trend in SBP than low-income women. Marital status, self-reported physical exercise and alcohol intake were not significant factors. A number of factors that are interesting in relation to SBP were not included in the CCHS and therefore not investigated. Among them are salt intake, childhood factors, genetic factors and the DASH diet. A survival study was performed to investigate the mortality rate in relation to SBP changes during the observation period. A Cox regression analysis was used in this study. The survival study demonstrated that SBP was a significant variable in survival models for all age groups. There was a decrease in mortality rate in young to middle-aged individuals. The mortality rate that is associated with a particular value of SBP did not change. Thus, it was concluded that SBP was as dangerous as it has always been and that the reduction in mortality rate was most pronounced in the age classes that also experienced the greatest reduction in blood pressure. During the observation period the number of treated individuals in the population increased from 6.5% to 18.1%.
About 50% of the population was hypertensive (SBP ≥ 140 mmHg or treated with antihypertensive medication). The value of SBP_treated was used as an indicator for hypertension control in the treated population. Hypertension control is a collection of topics that includes guidelines, available medicine, physicians' attitudes towards hypertension treatment, systematic control, patient awareness and patient compliance. The analysis of trends in SBP in treated hypertensives showed that SBP_treated decreased 9.2 mmHg in 25 years. The result may be ascribed to improvements in treatment but may also be caused by a change in start-to-treat practice: if hypertensives start treatment at an increasingly lower SBP_threshold, then SBP_treated will decrease without improvements in treatment. Therefore the start-to-treat practice was evaluated by SBP_threshold. A change in SBP_threshold was not observed. Thus, the 9.2 mmHg decrease in SBP_treated may represent improvements in treatment. "Age" was a significant factor for SBP_treated. This result demonstrated that elderly and old individuals were treated less successfully than young and middle-aged individuals. Subjects diagnosed with ischemic heart disease constitute a group with a more advantageous slope than subjects with other diagnoses (stroke, IHD in combination with stroke, and hypertension alone). Self-reported physical exercise, gender, alcohol intake, household income and family structure were not significant as variables in the decreasing SBP among treated hypertensives. Thus, the papers in this thesis described SBP trends in the untreated and in the treated part of a population. Different patient-related factors were identified as determinant factors for trends in the two groups. The determinant factors are the explanatory variables most associated with trends in SBP. The determinant factors were different for the two groups (except for age). abstract_id: PUBMED:19533384 Diverging trends in cardiovascular morbidity and mortality in a low risk population. Cardiovascular disease (CVD) morbidity and mortality are changing over the years, following changes in socioeconomic conditions and underlying risk factors. However, the trends of these changes differ among various populations. There is little data regarding these trends in low CVD risk populations. Tables of deaths by cause and age for the period 1956-2007 and tables of hospitalizations for the period 1979-2003 published by the National Statistical Service of Greece were used. Trends over time were determined using log-linear regression models. Age-adjusted all-cause mortality has declined steadily since 1964 in both sexes. CVD mortality initially increased until the late 1980s and subsequently decreased. An increase in mortality from stroke was seen until 1978, especially in men, followed by a decline. Mortality from coronary heart disease (CHD) increased initially, continued to increase for one decade more than stroke, and started to decrease in 1989. However, only in women has CHD mortality returned below 1956 levels. As a result, deaths from CHD have surpassed those from stroke. Although the in-hospital fatality of acute myocardial infarction (AMI) has decreased by half between 1979 and 2003, deaths from AMI have decreased only slightly, as hospitalization and morbidity rates have increased during the same period. Although the various types of CVD share common risk factors, the trends of their respective mortality rates have differed significantly over the past five decades in the Greek population.
This could partly be explained by the fact that risk factors do not contribute equally to CHD and stroke, and they might not all have been equally well controlled. abstract_id: PUBMED:34312813 Cardiovascular Event Rates After Myocardial Infarction or Ischaemic Stroke in Patients with Additional Risk Factors: A Retrospective Population-Based Cohort Study. Introduction: The impact of additional risk factors on major cardiovascular event (MACE) rates in patients with a history of myocardial infarction (MI) or ischaemic stroke (IS) treated with statins is not well defined. Methods: In this retrospective population-based cohort study, patients with a history of MI or IS treated with moderate- or high-intensity statins were identified using Swedish national register data. Patients were incident (index event between July 2006 and December 2014 and followed from diagnosis) or prevalent (MI or IS before July 2006 and followed thereafter). Four subgroups were defined on the basis of additional risk factors associated with increased cardiovascular risk: diabetes mellitus with target organ damage; chronic kidney disease stages 3-4; index event within 2 years after prior MI or IS; and polyvascular disease. First and total MACE rates (i.e. MI, IS, or cardiovascular death) were calculated, and first MACE 10-year risks (prevalent cohort only) were predicted. Results: Numerically, MACE rates in subgroups were 1.5-3 times higher than in overall populations, and were highest in the 2 years after the index event. First MACE rates in the additional risk factor subgroups were 17.2-33.5 per 100 person-years for the incident cohorts and 9.9-13.2 per 100 person-years for the prevalent cohorts. Total MACE rates were 20.1-39.8 per 100 person-years and 12.4-17.6 per 100 person-years, respectively. Conclusion: Despite previous use of moderate- or high-intensity statins, patients with a history of MI or IS and additional risk factors remain at very high cardiovascular risk. abstract_id: PUBMED:30580724 Less Than Ideal: Trends in Cardiovascular Health Among US Stroke Survivors. Background and Purpose- The American Heart Association's Life's Simple 7 (LS7) defines ideal cardiovascular health by 7 metrics: not smoking, regular physical activity, normal body mass index, blood pressure, plasma glucose, and total cholesterol levels, and a healthy diet. We assessed prevalence and predictors of ideal LS7 among US stroke survivors. Methods- Among 67 514 participants in the National Health and Nutrition Examination Surveys from 1988 to 1994 and 1999 to 2014, 1597 adults (≥18 years) had a self-reported history of stroke. LS7 metrics were categorized as poor, intermediate, and ideal; ideal LS7 scores were calculated (1 point for each ideal metric met). Trends in poor, intermediate, and ideal cardiovascular health were assessed. Odds of low (0-1) versus high (≥4) ideal LS7 scores were assessed according to sex, race, poverty income ratio, and education level, before and after adjusting for covariates. Results- Only 1 participant met all ideal LS7 metrics. The proportion with low LS7 scores increased from 17.9% in 1988-1994 to 35.4% in 2011-2014 (P<0.001).
Over that time frame, prevalence of poor blood pressure (≥140/90 mm Hg) and poor cholesterol (≥240 mg/dL) decreased (45.2%-26.5% and 37.2%-10.3%), whereas prevalence of poor body mass index (≥30 kg/m²), poor diet (healthy eating index score <50), and poor physical activity (0 minutes moderate/vigorous activity per week) increased (26.9%-39.0%; 14.2%-50.6%; 44.6%-70.9%; all P<0.05). After adjustment, black race (odds ratio, 2.29; 95% CI, 1.17-4.48), poverty income ratio ≤200% (odds ratio, 2.20; 95% CI, 1.11-4.36), and ≤12th grade education (odds ratio, 4.50; 95% CI, 2.27-8.92) were associated with low ideal LS7 scores. Conclusions- Over the past 3 decades, blood pressure and cholesterol control among stroke survivors improved, but rates of obesity, poor diet, and physical inactivity increased. Stroke survivors who are black, poor, or less educated are less likely to have ideal cardiovascular health. Answer: The trends in population levels of blood pressure and other cardiovascular risk factors partially explain trends in stroke event rates. In the WHO MONICA Stroke Project, population-level trends in systolic blood pressure showed a strong association with stroke event trends in women, but not in men. In women, changes in systolic blood pressure explained 38% of the variation in stroke event trends when considering a 3- to 4-year time lag. However, combining trends in systolic blood pressure, daily cigarette smoking, serum cholesterol, and body mass index into a risk score explained only a small fraction of the variation in stroke event trends (PUBMED:12364723). In Turkey, the prevalences of obesity and high blood pressure have increased severalfold among children and adolescents over the past two decades, which may indicate an increase in future morbidity and mortality from cardiovascular disease, thrombosis, stroke, and diabetes mellitus (PUBMED:25585905). Globally, CVD death rates are decreasing steadily, coinciding with declines in smoking and physiological risk factors, such as blood pressure and serum cholesterol levels, as well as improvements in medical care. However, these variables do not fully explain why the decline began when it did, nor the differences in the start time and rate of the decline between countries and sexes (PUBMED:26076950). In Busselton, Australia, downward secular trends were observed for mean blood pressure and smoking, and an estimated 67% of the decline in cardiovascular mortality rates among men and 22% among women may be attributed to changes in the prevalence of risk factors for cardiovascular disease (PUBMED:7578538). In an urban black population in Minnesota, mean systolic blood pressures declined significantly in both sexes, and the proportion of hypertensives on medication and under control increased, which may be contributing to declining cardiovascular mortality rates (PUBMED:3673887). In Copenhagen, systolic blood pressure decreased about 2 mmHg in 25 years, and the risk factors age, gender, and BMI were found valid as determinant factors for secular trends in systolic blood pressure (PUBMED:28260600). In Greece, cardiovascular mortality initially increased until the late 1980s and subsequently decreased, with mortality from stroke declining after 1978 and mortality from coronary heart disease starting to decrease in 1989.
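The MONICA result summarized in this answer, that lagged systolic blood pressure trends explain 38% of the between-population variation in women's stroke trends, corresponds to the R² of an ecological regression with one data point per population. A minimal sketch of that style of analysis in Python, using NumPy and entirely synthetic numbers in place of the MONICA data (the sample size, variable names and effect sizes are invented for illustration):

import numpy as np

rng = np.random.default_rng(0)
n_populations = 15

# One observation per population: annual change in systolic blood pressure
# (mmHg/year, lagged 3-4 years) and annual change in stroke event rates (%/year).
sbp_trend = rng.normal(-0.4, 0.3, n_populations)
stroke_trend = 2.0 * sbp_trend + rng.normal(0.0, 0.5, n_populations)

# Ordinary least squares of stroke trend on SBP trend.
slope, intercept = np.polyfit(sbp_trend, stroke_trend, 1)
fitted = slope * sbp_trend + intercept
ss_res = np.sum((stroke_trend - fitted) ** 2)
ss_tot = np.sum((stroke_trend - stroke_trend.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot  # share of between-population variation explained

print(f"slope = {slope:.2f} (%/year per mmHg/year), R^2 = {r_squared:.2f}")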
Instruction: Are French general practitioners consulted before travel to developing countries? Abstracts: abstract_id: PUBMED:26139617 Are French general practitioners consulted before travel to developing countries? A cross-sectional study conducted in a French airport. Background: General practitioners (GPs) could play a central role in preventing travel-related health issues. The aim of this study was to assess, in travellers departing to developing countries from a French airport, the proportion of individuals having sought GP counseling before departure and to identify determinants for having consulted a GP. Methods: Cross-sectional study conducted between November 2012 and July 2013, in all adults living in France. Sociodemographic and health characteristics, type of travel and resources consulted before departure were collected. A descriptive analysis was performed. Determinants for having consulted a GP before departure were investigated using a logistic regression analysis. Results: Of the 360 travellers included, 230 (64%) sought health counseling before departure. GPs were the main source of information for 134 (58%) travellers having sought health information and the only one for 49 (21%). Almost half of the travellers (48%) departing to sub-Saharan countries did not seek health counseling from a medical doctor (GP, non-GP specialist, specialist consulted in an international vaccination center or occupational physician). Individuals significantly more likely to travel without having consulted a GP were young and male, held foreign nationality, had travelled more than five times before, rarely consulted their GP and were travelling to a non-malarious area. Conclusion: GPs were the main but not the only source of information and counseling before traveling to a developing country. This study helps identify the characteristics of individuals likely to travel without having consulted a GP before departure. abstract_id: PUBMED:23732446 Pre-travel advice - a study among Swiss general practitioners Up to 65% of travellers seek pre-travel advice from their general practitioner. Professionals should inform travellers about the most common and most dangerous health threats, which requires up-to-date knowledge of the epidemiology of the respective disorders. The aim of the present study was to investigate the content of pre-travel advice given by general practitioners in order to provide them with better expert support from travel medicine specialists. One third of the practitioners give pre-travel advice weekly, and some two thirds do so at least monthly. The most frequently discussed topics are malaria, immunisation, insect bite prevention and travellers' diarrhoea. Less than half of the advice sessions included talking about the risk of accidents. Apart from the need for yellow fever vaccination, referral to travel medicine experts was highest for immunocompromised and pregnant travellers, and for trips to "high risk" countries. A considerable number of practitioners do not comply with the Swiss recommendations, continuously updated in the Bulletin of the Federal Office of Public Health, possibly because only 21% consult them at regular intervals. abstract_id: PUBMED:29394392 Female genital mutilation: an evaluation of the knowledge of French general and specialized travel medicine practitioners. We investigated the knowledge of female genital mutilation (FGM) among 60 general and 52 specialized travel medicine practitioners. Less than 50% of these practitioners had adequate knowledge of FGM.
Only 42.9% declared having encountered FGM. FGM is likely underestimated in health facilities. Medical education and supporting information should be developed to better address and prevent FGM. abstract_id: PUBMED:2343232 Travel to developing countries: pre-departure medical advice. Travelers to developing countries are at risk of contracting tropical infectious diseases that they or their physicians may be unfamiliar with. Proper pre-travel counsel should be given concerning general health risks that may be encountered abroad, immunizations, malaria prophylaxis, and prevention and treatment of traveler's diarrhea. In Rhode Island, expert advice may be obtained at the Traveler's Clinics at the Miriam Hospital in Providence (401-274-3700 or 331-8500, ext. 4075) and the Memorial Hospital in Pawtucket (401-722-6000, ext. 2545). The Miriam Traveler's Clinic is open Wednesday (9-1) and all day Friday, while the Memorial Traveler's Clinic is open Tuesday afternoon. These Traveler's Clinics are headed by Drs G.R. Olds and S.M. Opal, respectively. abstract_id: PUBMED:3598228 Health problems after travel to developing countries. Travelers to developing countries participated in a follow-up study of the health risks associated with short (less than three months) visits to these nations. Travelers to the Greek or Canary Islands served as a control cohort. Participants completed a questionnaire to elicit information regarding pretravel vaccinations, malaria prophylaxis, and health problems during and after their journey. Relevant infections were confirmed by the respondent's personal physician. The questionnaire was completed by 10,524 travelers; the response rate was 73.8%. After a visit to developing countries, 15% of the travelers reported health problems, 8% consulted a doctor, and 3% were unable to work for an average of 15 days. The incidence of infection per month abroad was as follows: giardiasis, 7/1,000; amebiasis, 4/1,000; hepatitis, 4/1,000; gonorrhea, 3/1,000; and malaria, helminthiases, or syphilis, less than 1/1,000. There were no cases of typhoid fever or cholera. abstract_id: PUBMED:11383492 Preparing adult patients for international travel. More than 50 million Americans engage in international travel annually. Travel to developing countries can expose patients to a myriad of infectious diseases. Primary care clinicians can provide counseling, immunizations, and drugs to keep patients healthy while traveling. This article reviews risk assessment and required and recommended immunizations. Patients can greatly reduce their risk for travel-acquired illness by updating immunizations, taking prophylactic drugs, and observing recommended health behaviors. abstract_id: PUBMED:21057065 Evaluating a sick child after travel to developing countries. Every year, millions of children travel internationally with their families, many to developing countries. Although the vast majority experience uneventful travel and return home well, it is not uncommon for children to present as ill during or after travel. Although the majority of travel-associated illness is mild and self-limited, serious conditions regularly occur. Almost all life-threatening conditions after travel present with fever, and malaria is the most important of these to rapidly exclude. Gastrointestinal symptoms are common after travel in the developing world, and most diarrhea in child travelers has a bacterial source.
Children who have a rash in association with fever or who appear ill should receive a priority work-up focused on ruling out serious conditions. Many children traveling internationally experience respiratory illness during or shortly after travel, mainly common upper respiratory infections, yet serious conditions, such as tuberculosis, may occur. Eosinophilia is common in the returned pediatric traveler, particularly those with prolonged stays in the tropics. Not all eosinophilia is caused by parasitic infection; drug reactions, asthma, and other allergic conditions are also common causes. With a focus first on ruling out life-threatening disease and subsequently on an informed and efficient path to diagnosis and treatment, clinicians may confidently provide care for this challenging group of patients. abstract_id: PUBMED:29959783 The knowledge, attitude and practice of health practitioners towards antibiotic prescribing and resistance in developing countries - A systematic review. What Is Known And Objective: Inappropriate antibiotic use is the leading cause of antibiotic resistance, especially in developing countries, where patient management is mainly based on the prescription of medicines due to deficiencies in diagnostic procedures. The objective of this study was to conduct a systematic review of studies on knowledge, attitude and practice of health practitioners towards antibiotic prescribing and resistance in developing countries. Methods: We used MEDLINE and EMBASE to conduct a systematic search for studies. We included papers that focused on health practitioners' knowledge on antibiotic use, local resistance and extent of the antibiotic resistance problem; the health practitioners' confidence in prescribing; commonly used guides; and recommendations to improve antibiotic prescribing. Studies that assessed other indicators were excluded. We assessed the quality of the individual studies using a previously published quality assessment tool. Data were summarized into proportions and means. We registered the review with PROSPERO, registration number CRD42018085664. Results And Discussion: We obtained 384 papers; 345 remained after deduplication, 28 were relevant upon reviewing titles and abstracts, and 15 fulfilled our inclusion criteria upon full-text review. Most of the studies were of medium quality (ten), three were of low quality, and two were of high quality. An average of 80.9% of respondents correctly answered questions relating to antibiotic use, whereas 39.6% were aware of the local resistance patterns in their health facilities. Participants stated that antibiotic resistance was a general problem (75.2%), a global problem (84.7%), a national problem (88.0%), a problem in their health facilities (71.9%) and a problem in their daily practice (71.7%). Up to 78.2% of the participants reported that they were very confident or confident in antibiotic prescribing. What Is New And Conclusion: There was a high reported proportion of participants with an apparently good level of knowledge on antibiotic use and a high level of confidence in antibiotic prescribing, but the reported level of knowledge on local antibiotic resistance was low. The analysis was limited by the low number of studies included, and most of them were of medium quality. abstract_id: PUBMED:25596567 Developing countries: health round-trip. International travel can pose various risks to health, depending both on the health needs of the traveller and on the type of travel to be undertaken.
Travellers intending to visit a destination in a developing country should consult a travel medicine clinic or medical practitioner before the journey. General precautions can greatly reduce the risk of exposure to infectious agents. Vaccination is a highly effective method of preventing certain infectious diseases. The aim of this study is to describe the risks involved and the best ways to prevent them. abstract_id: PUBMED:3086975 Health advice before travel to tropical countries Some 8% of Swiss citizens travel to developing countries annually. Advising these travellers is an important task for the physician, though the most frequent infections are the less serious travellers' diarrhea and common colds. To some extent the more serious tropical diseases can be avoided by appropriate measures, e.g. vaccination (against yellow fever, tetanus, polio, etc.) and chemoprophylaxis (malaria). Individual fitness for travel should be checked prior to any journey abroad, and special advice must be given to travellers with special risks (e.g. pregnant women, infants, persons with chronic diseases, diabetics, etc.). Most important during a stay in the tropics is a sensible way of life with enough sleep, general hygiene, especially with regard to food, and avoidance of untreated water. Further safety measures are recommended for bathing, strong sun exposure, mountaineering above 10,000 ft and, in general, for exposure to special health risks. Recognition of risks and their avoidance is often the best prophylaxis. Answer: Yes, French general practitioners (GPs) are consulted before travel to developing countries, but not by all travelers. A cross-sectional study conducted at a French airport found that 64% of the 360 travelers included in the study sought health counseling before departure, with GPs being the main source of information for 58% of those who sought health information and the only source for 21% (PUBMED:26139617). However, the study also revealed that almost half of the travelers (48%) departing to sub-Saharan countries did not seek health counseling from a medical doctor, including GPs. Factors associated with not consulting a GP before travel included being young and male, holding foreign nationality, having traveled more than five times before, rarely consulting their GP, and traveling to a non-malarious area (PUBMED:26139617).
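The airport study cited in this answer identifies determinants of GP consultation with a logistic regression. A minimal sketch of that style of analysis, assuming statsmodels is available and using simulated stand-in data rather than the study data (the predictor names and coefficients below are ours, chosen only to mirror the direction of the reported associations):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 360  # same order of magnitude as the airport sample

# Simulated traveller characteristics.
age = rng.normal(40, 12, n)
male = rng.integers(0, 2, n)
prior_trips_gt5 = rng.integers(0, 2, n)
malarious_destination = rng.integers(0, 2, n)

# Outcome simulated so that older travellers, women, infrequent travellers and
# those heading to malarious areas are more likely to have consulted a GP,
# matching the direction of the reported determinants.
linpred = -1.0 + 0.03 * age - 0.5 * male - 0.6 * prior_trips_gt5 + 0.8 * malarious_destination
consulted_gp = rng.binomial(1, 1.0 / (1.0 + np.exp(-linpred)))

X = sm.add_constant(np.column_stack([age, male, prior_trips_gt5, malarious_destination]))
fit = sm.Logit(consulted_gp, X).fit(disp=False)
print(np.exp(fit.params))  # odds ratios: constant, age, male, >5 prior trips, malarious destination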
Instruction: Can clinicians recognize DSM-IV personality disorders from five-factor model descriptions of patient cases? Abstracts: abstract_id: PUBMED:19289453 Can clinicians recognize DSM-IV personality disorders from five-factor model descriptions of patient cases? Objective: This article examined, using theories from cognitive science, the clinical utility of the Five-Factor Model (FFM) of Personality, an assessment and classification system under consideration for integration into the forthcoming fifth edition of the Diagnostic and Statistical Manual (DSM) of Mental Disorders. Specifically, the authors sought to test whether FFM descriptors are specific enough to allow practicing clinicians to capture core features of personality disorders. Method: In two studies, a large nationwide sample of clinical psychologists, psychiatrists, and clinical social workers (N=187 and N=191) were presented with case profiles based on symptom formats from the DSM-IV and/or the FFM. Ratings for six aspects of clinical utility for DSM-IV and FFM profiles were obtained, and participants provided DSM-IV diagnoses. Prototypic cases (only one personality disorder) and comorbid cases were tested in separate studies. Results: Participants rated the DSM-IV as more clinically useful than the FFM on five out of six clinical utility questions. Despite demonstrating considerable background knowledge of DSM-IV diagnoses, participants had difficulty identifying correct diagnoses from FFM profiles. Conclusion: The FFM descriptors may be more ambiguous than the criteria of the DSM-IV, and the FFM may therefore be less able to convey important clinical details than the DSM-IV. The findings flag challenges to clinical utility for dimensional-trait systems such as the FFM. abstract_id: PUBMED:21208595 Can personality disorder experts recognize DSM-IV personality disorders from five-factor model descriptions of patient cases? Background: Dimensional models of personality are under consideration for integration into the next Diagnostic and Statistical Manual of Mental Disorders (DSM-5), but the clinical utility of such models is unclear. Objective: To test the ability of clinical researchers who specialize in personality disorders to diagnose personality disorders using dimensional assessments and to compare those researchers' ratings of clinical utility for a dimensional system versus the DSM-IV. Method: A sample of 73 researchers who had each published at least 3 (median = 15) articles on personality disorders participated between December 2008 and January 2009. The Five-Factor Model (FFM), one of the most-studied dimensional models to date, was compared to the DSM-IV. Participants provided diagnoses for case profiles in DSM-IV and FFM formats and then rated the DSM-IV and FFM on 6 aspects of clinical utility. Results: Overall, participants had difficulty identifying correct diagnoses from FFM profiles (t₇₂ = 12.36, P < .01), and the same held true for a subset reporting equal familiarity with the DSM-IV and FFM (t₂₃ = 6.96, P < .01). Participants rated the FFM as less clinically useful than the DSM for making prognoses, devising treatment plans, and communicating with professionals (all t₆₉ > 2.19, all P < .05), but more useful for communicating with patients (t₆₉ = 3.03, P < .01). Conclusions: The results suggest that personality disorder expertise and familiarity with the FFM are insufficient to correctly diagnose personality disorders using FFM profiles.
Because of ambiguity inherent in FFM profile descriptors, this insufficiency may prove unlikely to be attenuated with increased clinical familiarity with the FFM. abstract_id: PUBMED:21838562 Clinicians' judgments of the utility of the DSM-IV and five-factor models for personality disordered patients. The purpose of the current study was to examine the clinical utility of the Diagnostic and Statistical Manual of Mental Disorders (4th ed.; DSM-IV; American Psychiatric Association, 2000) and the Five Factor Model of personality disorder (FFM; Widiger, Costa, & McCrae, 2002) in describing personality pathology. In the current study, practicing psychologists described one or two of their personality disordered patients in terms of the FFM and DSM models. In some instances, the patient was someone who met the criteria for one of the 10 DSM-IV personality disorders; in others, the patient was someone who received a diagnosis of personality disorder, not otherwise specified. Participants then rated each model on six aspects of clinical utility. The current study found that the FFM was consistently rated higher than the DSM model in terms of four of the six aspects of clinical utility. Across both cases, the clinicians rated the FFM as significantly more useful with respect to its ability to provide a global description of the individual's personality, to communicate information to clients, and to encompass all of the individual's important personality difficulties. abstract_id: PUBMED:11502083 Using the five-factor model to represent the DSM-IV personality disorders: an expert consensus approach. This study sought to extend previous work on the five-factor dimensional model (FFM) of personality disorder (PD) by developing more comprehensive FFM descriptions of prototypic cases. Specifically, the authors asked experts in each of the 10 DSM-IV PDs to rate the prototypic case by using all 30 facets of the FFM. Aggregating across raters of the given disorder generated a prototype for each disorder. In general, there was good agreement among experts and with previous theoretical and empirical FFM translations of DSM diagnostic criteria. Furthermore, the ability of the FFM explanation to reproduce the high comorbidity rates among PDs was demonstrated. The authors concluded that, with the possible exception of schizotypal PD, the DSM PDs can be understood from the dimensional perspective of the FFM. Future directions for research, including the use of the present prototypes to "diagnose" personality disorder, are discussed. abstract_id: PUBMED:24174888 Dimensional models of personality: the five-factor model and the DSM-5. It is evident that the classification of personality disorder is shifting toward a dimensional trait model and, more specifically, the five-factor model (FFM). The purpose of this paper is to provide an overview of the FFM of personality disorder. It will begin with a description of this dimensional model of normal and abnormal personality functioning, followed by a comparison with a proposal for future revisions to DSM-5 and a discussion of its potential advantages as an integrative hierarchical model of normal and abnormal personality structure. abstract_id: PUBMED:9205425 Clinician ratings of the five-factor model of personality and the DSM-IV personality disorders.
This study explored the associations among the domains of the five-factor model (FFM) of personality (neuroticism, extraversion, openness, agreeableness, and conscientiousness) and the DSM-IV personality disorders (PDs). Clinician ratings were obtained for both the DSM-IV PDs and the FFM on a sample of 100 PD patients. The correlational data showed that the DSM PDs were most strongly associated with the FFM domains of neuroticism, extraversion, and agreeableness. Factor analysis revealed four underlying factors that provided insights into qualities shared by subgroups of the DSM-IV PDs. The domain of neuroticism was associated with the borderline, avoidant, and dependent PDs (factor 1). The paranoid, avoidant, schizoid, and schizotypal PDs were negatively associated with the domain of agreeableness (factor 2). The domain of extraversion was positively associated with the narcissistic and histrionic PDs and negatively with schizoid PD (factor 3). The FFM conscientiousness and openness domains loaded onto a single factor and were positively associated with the obsessive-compulsive PD and negatively associated with the antisocial and borderline PDs. Exploring the relationships between these two personality systems will improve our conceptualization and understanding of the DSM PDs. abstract_id: PUBMED:8582310 DSM-IV and training: the limits Since its third edition, the DSM has been considered an excellent tool for psychiatric research. The primary objective of this classificatory system was to put forward internationally accepted standard definitions. DSM diagnostic criteria are now indispensable for any publication in the scientific literature. It appears, however, that this work has gradually lost sight of its initial objective and is used as an educational tool for the training of clinicians. What are the limits and risks of such a use? Can the DSM-IV philosophy be reconciled with the objectives of training? Are the criteria in force for the selection of homogeneous patient groups identical to those which enable the knowledge acquisition required for identification of disorders and their treatment? What is the heuristic value of enumerating symptoms and syndromes isolated from any theoretical context? Can symptoms be separated from the patient's history and personality? Is the excessive use of concurrent disorders not likely to be a source of conceptual and therapeutic inflation? Is a purely descriptive approach to psychiatric disorders not likely to run the risk of overestimating them? These points are discussed in succession by the authors. abstract_id: PUBMED:24288580 Clinicians' Use of Personality Disorder Models within a Particular Treatment Setting: A Longitudinal Comparison of Temporal Consistency and Clinical Utility. An active line of current investigation is how the five-factor model (FFM) of personality disorder might be applied by clinicians and, particularly, how clinically useful this model is in comparison to the existing nomenclature. The current study is the first to investigate the temporal consistency of clinicians' application of the FFM and the DSM-IV-TR to their own patients. Results indicated that FFM ratings were relatively stable over six months of treatment, supporting their use by clinicians, but also indexed potentially important clinical changes. Additionally, ratings of utility provided by the clinicians suggested that the FFM was more useful for clinical decision making than was the DSM-IV-TR model.
We understand the clinical utility findings within the context of previous research indicating that the FFM is most useful among patients who are not prototypic for a personality disorder. abstract_id: PUBMED:20552504 Using clinician-rated five-factor model data to score the DSM-IV personality disorders. Proposals suggest that many or all of the Diagnostic and Statistical Manual of Mental Disorders (4th ed. [DSM-IV]; American Psychiatric Association, 1994) personality disorders (PDs) may be omitted from the DSM (5th ed.; DSM-V) and replaced with a dimensional trait model of personality pathology (Krueger, Skodol, Livesley, Shrout, & Huang, 2007; Skodol, 2009). Several authors have expressed concerns that this may be difficult for clinicians and researchers who are more comfortable with the extant PD diagnoses. In this study, we tested whether clinician ratings of traits from the Five-factor model (FFM; Costa & McCrae, 1990) can be used to recreate DSM-IV PDs. Using a sample of 130 clinical outpatients, we tested the convergent and discriminant validity of the FFM PD counts in relation to consensus ratings of the DSM-IV PDs. We then examined whether the FFM and DSM-IV PD scores correlate in similar ways with self-reported personality traits from the Schedule for Nonadaptive and Adaptive Personality (Clark, 1993). Finally, we tested the clinical utility of the FFM PD counts in relation to functional impairment. Overall, the FFM PD counts, scored using clinician ratings of the FFM traits, appeared to function like the DSM-IV PDs, thus suggesting that the use of a dimensional trait model of personality in the DSM-V may still allow for an assessment of the DSM-IV PD constructs. abstract_id: PUBMED:37103256 Five-Factor Model and DSM-5 Alternative Model of Personality Disorder Profile Construction: Associations with Cognitive Ability and Clinical Symptoms. Although numerous studies have explored latent profiles using the Five-Factor Model (FFM) of normative personality, no studies have investigated how broad personality traits (i.e., FFM) and pathological personality traits using the alternative model of personality disorder (AMPD) may combine for latent personality profiles. The present study recruited outpatients (N = 201) who completed the Big Five Aspects Scales (BFAS), Personality Inventory for DSM-5 (PID-5), Structured Clinical Interview for DSM-IV (SCID-I/P), gambling and alcohol use measures, and the Wechsler Intelligence subtests. When FFM and AMPD measures were combined, latent profile analyses revealed four profiles: Internalizing-Thought disorder, Externalizing, Average-Detached, and Adaptive. Detachment and openness to experience were the most and least essential traits for profile distinction, respectively. No associations between group membership and cognitive ability measures were found. Internalizing-Thought disorder membership was linked with a current mood and anxiety disorder diagnosis. Externalizing profile membership was associated with younger age, problematic gambling, alcohol use, and a current substance use disorder diagnosis. The four FFM-AMPD profiles overlapped with the four FFM-only and three AMPD-only profiles. Overall, the FFM-AMPD profiles appeared to have better convergent and discriminant validity with DSM-relevant psychopathology. Answer: Clinicians have had difficulty recognizing DSM-IV personality disorders from Five-Factor Model (FFM) descriptions of patient cases.
In studies where clinical psychologists, psychiatrists, and clinical social workers were presented with case profiles based on DSM-IV and/or FFM formats, participants rated the DSM-IV as more clinically useful than the FFM on five out of six clinical utility questions. Despite their background knowledge of DSM-IV diagnoses, they struggled to identify correct diagnoses from FFM profiles (PUBMED:19289453). Similarly, even among clinical researchers specializing in personality disorders, there was difficulty in diagnosing personality disorders using FFM profiles. These experts rated the FFM as less clinically useful than the DSM-IV for making prognoses, devising treatment plans, and communicating with professionals, although they found it more useful for communicating with patients (PUBMED:21208595). However, it is important to note that in another study, practicing psychologists described their personality disordered patients in terms of the FFM and DSM models and rated the FFM higher than the DSM model in terms of four of the six aspects of clinical utility. They found the FFM more useful for providing a global description of the individual's personality, communicating information to clients, and encompassing all of the individual's important personality difficulties (PUBMED:21838562). Despite these mixed findings, the overall evidence suggests that there is a challenge for clinicians in using FFM descriptors to recognize DSM-IV personality disorders, which may be due to the ambiguity inherent in FFM profile descriptors. This challenge may not be easily overcome even with increased clinical familiarity with the FFM (PUBMED:21208595).
Instruction: Is there a rural gradient in the diagnosis of aortic stenosis? Abstracts: abstract_id: PUBMED:23683323 Is there a rural gradient in the diagnosis of aortic stenosis? An analysis of a remote Scottish cohort. Introduction: Calcific aortic stenosis is the most common cardiac valve lesion and is becoming increasingly prevalent as life expectancy rises. There is evidence that patients in remote and rural areas with certain diseases have worse outcomes and present to specialist services later than their urban counterparts. It is not known whether patients with aortic stenosis follow a similar pattern. The aim of this study was to investigate whether increasing rurality was associated with later presentation to healthcare services at a more advanced stage of aortic stenosis. Methods: This was a retrospective cohort study. Using ICD-10 discharge codes and local databases, 605 patients with aortic stenosis who presented between 31 November 1999 and 1 December 2008 were identified. Aortic stenosis was defined as a pressure gradient across the aortic valve of 25 mmHg or more. Patients with prior aortic valve replacement were excluded. Clinical notes were reviewed for all patients. Gender, age and pressure gradient across the aortic valve at presentation and patient GP-practice location were recorded. Patients were then assigned a Clinical Peripherality Index score based on the postcode of their GP's practice to define rurality. Patient data were compared across the six defined levels of clinical peripherality by ANOVA. Results: Mean patient age was 73 ± 13 years, and 336 (54%) were male. The peak gradient across the valve was 41.1 ± 26.7 mmHg. There was no association between the level of clinical peripherality and the stage of aortic stenosis at presentation, age or gender (all p > 0.05). Conclusions: There was no urban-rural gradient in the severity of aortic stenosis at presentation in this remote Scottish cohort. This suggests that patients with this condition in remote areas do not present later in their disease trajectory. abstract_id: PUBMED:34650372 Moderate gradient severe aortic stenosis: diagnosis, prognosis and therapy. Aortic stenosis (AS) is defined as severe in the presence of: mean gradient ≥40 mmHg, peak aortic velocity ≥4 m/s, and aortic valve area (AVA) ≤1 cm2 (or an indexed AVA ≤0.6 cm2/m2). However, up to 40% of patients have a discrepancy between gradient and AVA, i.e. AVA ≤1 cm2 (indicating severe AS) and a moderate gradient: >20 and <40 mmHg (typical of moderate stenosis). This condition is called 'low-gradient AS' and includes very heterogeneous clinical entities, with different pathophysiological mechanisms. The diagnostic tools needed to discriminate the different low-gradient AS phenotypes include colour-Doppler echocardiography, dobutamine stress echocardiography, computed tomography scan for the definition of the calcium score, and recently magnetic resonance imaging. The prognostic impact of low-gradient AS is heterogeneous. Classical low-flow low-gradient AS [reduced left ventricular ejection fraction (LVEF)] has the worst prognosis, followed by paradoxical low-flow low-gradient AS (preserved LVEF). Conversely, normal-flow low-gradient AS is associated with a better prognosis. The indications of the guidelines recommend surgical or percutaneous treatment, depending on the risk and comorbidities of the individual patient, both for patients with classic low-flow low-gradient AS and for those with paradoxical low-flow low-gradient AS.
abstract_id: PUBMED:26886708 Paradoxical low-flow low-gradient aortic stenosis. In approximately one third of patients presenting with suspected severe aortic stenosis, there is a discrepancy between a severely reduced aortic valve opening area (< 1 cm2) and a non-severe increase of the mean transvalvular gradient (< 40 mmHg). In a substantial number of these cases there is evidence of a severe paradoxical low-flow low-gradient aortic stenosis, characterized by a reduced stroke volume index in the setting of a normal left ventricular ejection fraction. This finding should trigger an extensive diagnostic work-up, including echocardiography, stress echocardiography and computed tomography to rule out measurement errors and to identify the cause(s) of the hemodynamic discrepancy. If the diagnosis of a severe paradoxical low-flow low-gradient aortic stenosis is confirmed and, furthermore, the patient is normotensive and reports stenosis-associated symptoms, the feasibility of an aortic valve replacement should be considered. abstract_id: PUBMED:22736038 Low flow/low gradient aortic valve stenosis: clinical and diagnostic management. Low gradient aortic stenoses (AS) represent a special challenge for physicians with respect to an exact diagnosis and optimal therapy. The difficulty lies in the estimation of the severity of AS, which is decisive for subsequent treatment and the prognosis. Low flow and low gradient can be due to systolic or diastolic dysfunction by high-grade as well as by medium-grade AS and be of non-valvular origin. The latter group is to be interpreted as pseudo-aortic stenosis as long as the low flow can successfully be raised by interventional means. However, only patients in the first group can be expected to profit from valve replacement, and for patients in the second group the accompanying diseases must be the focus of therapeutic treatment. Therefore, according to recent European surveys, up to 30% of patients with severe AS are undertreated due to misestimation of the severity of stenosis and perioperative risk stratification. Furthermore, follow-up investigations have shown that patients with low flow/low gradient stenosis and borderline-normal ejection fraction (EF) are in an advanced stage of the disease because they have often developed a severe reduction in longitudinal myocardial function and in addition have pronounced myocardial replacement fibrosis due to cardiac remodelling despite a preserved EF. Therefore, aortic valve area, mean pressure gradient and EF alone cannot be taken into consideration for the management of patients with severe AS, but a comprehensive assessment of the hemodynamics, such as stroke volume and special functional parameters, as well as the individual clinical appearance, is essential for precise diagnostic and therapeutic decision making. abstract_id: PUBMED:36306951 TAVI for patients with normal-flow low-gradient compared to high-gradient aortic stenosis. Background: Normal-flow (stroke volume index, SVi >35 ml/m2) low-gradient (dPmean <40 mmHg) aortic stenosis (NFLG-AS) is a subject of scientific debate. Guidelines fail to give conclusive treatment recommendations. We hypothesized that NFLG patients are heterogeneous, containing a subgroup similar to high-gradient aortic stenosis patients (dPmean ≥40 mmHg, HG-AS) concerning characteristics and outcomes. Methods: 2326 patients undergoing transcatheter aortic valve replacement (TAVI) at our centre between 2013 and 2019 were analysed. 386 patients fulfilled criteria of NFLG-AS.
Their median dPmean was 33 mmHg, which was used for grouping (204 patients with higher gradient NFLG-AS, 186 patients with lower gradient NFLG-AS). They were compared to 956 HG-AS patients. Results: Characteristics of lower gradient NFLG-AS patients differed from HG-AS patients in many aspects while higher gradient NFLG-AS and HG-AS patients were mostly similar, underscored by higher Society of Thoracic Surgeons scores in lower gradient NFLG-AS (lower gradient NFLG-AS, 3.9, HG-AS, 3.0, p = 0.03, higher gradient NFLG-AS, 3.0, p = 0.04). Procedural complications were comparable. Estimated 3-year all-cause mortality was higher in lower gradient NFLG-AS compared to HG-AS patients (hazard ratio 1.7, p < 0.01), whereas mortality of higher gradient NFLG-AS was similar to HG-AS patients (hazard ratio 1.2, p = 0.31). Cardiovascular mortality was highest among lower gradient NFLG-AS patients (21.6% vs. higher gradient NFLG-AS, 15.4%, vs. HG-AS, 11.1%, p < 0.01). Conclusions: NFLG-AS patients are indeed heterogeneous. NFLG-AS patients with higher gradients resemble HG-AS patients in clinical characteristics and outcomes and should not be treated differently. Lower gradient NFLG-AS patients have increased long-term mortality and the use of TAVI requires careful consideration. abstract_id: PUBMED:36178641 Aortic valve calcium volume as measured by native versus contrast-enhanced computer tomography and the implications for the diagnosis of severe aortic stenosis in TAVR patients with low-gradient aortic stenosis. Background: Most TAVR centers evaluate the calcium score in contrast-enhanced (ce) CT. In this study, we compared different methodologies for measuring the calcium score. We also studied the difference between patients with low-gradient (LG) and high-gradient (HG) severe aortic stenosis (AS) as regards the burden of aortic valve calcium (AVC). Results: We measured the calcium volume and score using the Agatston methodology in non-contrast (nc) CT and with modified and fixed 850 Hounsfield unit (HU) thresholds in ce CT. The calcium score and volume in ce CT, even with modified thresholds, are significantly lower than the assessed score and volume in nc CT. The median (IQR) of calcium score in nc CT and in ce CT were 1288 AU (750-1815) versus 947 AU (384-2202). The median (IQR) of calcium volume in nc CT and in ce CT with modified thresholds were 701 mm3 (239-1632) versus 197 mm3 (139-532). Agatston score and calcium volume were lower in patients with LG AS than in those with HG AS: 2069 AU (899-2477) versus 928 AU (572-1284) and 1537 mm3 (644-1860) versus 286 mm3 (160-700), respectively. Only 20% of patients with LGAS had an Agatston score higher than the previously proposed AVC score threshold for the diagnosis of severe AS (> 2000 AU in men and > 1200 AU in women). Conclusions: The diagnosis of severe LGAS should not depend on a single parameter such as the calcium score. In these patients, the calcium score should be measured in nc CT and not in ce CT.
Inadequate left ventricular (LV) adaptation to severe AV stenosis resulting from preexistent intrinsic myocardial damages and/or maladaptive LV responses to increased afterload are typical features of severe LG-AS. The diagnosis and management of patients with severe LG-AS are particularly challenging because the discrepancy between the AV area and the trans-AV pressure gradient raises doubts concerning the actual severity of AS and therefore also about the necessity of AV replacement (AVR). LG-AS diagnosis requires integrative multimodality evaluation of both the AV and the LV and therapeutic decision-making necessitates careful individual benefit-risk estimation. Although patients with severe LG-AS associated with low trans-AV flow (i.e., stroke volume ≤ 35 ml/m2) have worse outcomes after AVR than those with high-gradient severe AS, even those with reduced LV ejection fraction (LVEF) can have a significant survival benefit particularly by transcatheter AVR. Dobutamine stress echocardiography facilitates distinction between true-severe and pseudo-severe low-flow LG-AS with reduced LVEF. The review aimed to provide an updated theoretical and practical basis for those engaged in this demanding and still current topic due to the new aspects which have emerged in conjunction with both the evolving scientific knowledge about the various LV responses to the increased afterload and the increasing use of the less invasive transcatheter AVR. abstract_id: PUBMED:34719256 Prevalence and Outcomes of Low-Gradient Severe Aortic Stenosis-From the National Echo Database of Australia. Background The prevalence and outcomes of the different subtypes of severe low-gradient aortic stenosis (AS) in routine clinical cardiology practice have not been well characterized. Methods and Results Data were derived from the National Echocardiography Database of Australia. Of 192 060 adults (aged 62.8±17.8 [mean±SD] years) with native aortic valve profiling between 2000 and 2019, 12 013 (6.3%) had severe AS. Of these, 5601 patients (47%) had high-gradient and 6412 patients (53%) had low-gradient severe AS. The stroke volume index was documented in 2741 (42.7%) patients with low gradient; 1750 patients (64%) with low flow, low gradient (LFLG); and 991 patients with normal flow, low gradient. Of the patients with LFLG, 1570 (89.7%) had left ventricular ejection fraction recorded; 959 (61%) had paradoxical LFLG (preserved left ventricular ejection fraction), and 611 (39%) had classical LFLG (reduced left ventricular ejection fraction). All-cause and cardiovascular-related mortality were assessed in the 8162 patients with classifiable severe AS subtype during a mean±SD follow-up of 88±45 months. Actual 1-year and 5-year all-cause mortality rates varied across these groups and were 15.8% and 49.2% among patients with high-gradient severe AS, 11.6% and 53.6% in patients with normal-flow, low-gradient severe AS, 16.9% and 58.8% in patients with paradoxical LFLG severe AS, and 30.5% and 72.9% in patients with classical LFLG severe AS. Compared with patients with high-gradient severe AS, the 5-year age-adjusted and sex-adjusted mortality risk hazard ratios were 0.94 (95% CI, 0.85-1.03) in patients with normal-flow, low-gradient severe AS; 1.01 (95% CI, 0.92-1.12) in patients with paradoxical LFLG severe AS; and 1.65 (95% CI, 1.48-1.84) in patients with classical LFLG severe AS. 
Conclusions Approximately half of those patients with echocardiographic features of severe AS in routine clinical practice have low-gradient hemodynamics, which is associated with long-term mortality comparable with or worse than high-gradient severe AS. The poorest survival was associated with classical LFLG severe AS. abstract_id: PUBMED:26033985 Low gradient severe aortic stenosis with preserved ejection fraction: reclassification of severity by fusion of Doppler and computed tomographic data. Aims: Low gradient severe aortic stenosis (AS) with preserved left ventricular ejection fraction (LVEF) may be attributed to aortic valve area index (AVAi) underestimation due to the assumption of a circular shape of the left ventricular outflow tract (LVOT) with 2-dimensional echocardiography. The current study evaluated whether fusing Doppler and multidetector computed tomography (MDCT) data to calculate AVAi results in significant reclassification of inconsistently graded severe AS. Methods And Results: In total, 191 patients with AVAi < 0.6 cm2/m2 and LVEF ≥ 50% (mean age 80 ± 7 years, 48% male) were included in the current analysis. Patients were classified according to flow (stroke volume index <35 or ≥35 mL/m2) and gradient (mean transaortic pressure gradient ≤40 or >40 mmHg) into four groups: normal flow-high gradient (n = 72), low flow-high gradient (n = 31), normal flow-low gradient (n = 46), and low flow-low gradient (n = 42). Left ventricular outflow tract area was measured by planimetry on MDCT and combined with Doppler haemodynamics on the continuity equation to obtain the fusion AVAi. The group of patients with normal flow-low gradient had significantly larger AVAi and LVOT area index compared with the other groups. Although MDCT-derived LVOT area index was comparable among the four groups, the fusion AVAi was significantly larger in the normal flow-low gradient group. By using the fusion AVAi, 52% (n = 24) of patients with normal flow-low gradient and 12% (n = 5) of patients with low flow-low gradient would have been reclassified into moderate AS due to AVAi ≥ 0.6 cm2/m2. Conclusion: The fusion AVAi reclassifies 52% of normal flow-low gradient and 12% of low flow-low gradient severe AS into true moderate AS, by providing true cross-sectional LVOT area. abstract_id: PUBMED:38204672 The robustness of the flow-gradient classification of severe aortic stenosis. Objectives: A flow-gradient classification is used to determine the indication for intervention for patients with severe aortic stenosis (AS) with discordant echocardiographic parameters. We investigated the agreement in flow-gradient classification by stroke volume (SV) measurement at the left ventricular outflow tract (LVOT) and at the left ventricle. Methods: Data were used from a prospective cohort study and patients with severe AS (aortic valve area index ≤0.6 cm2/m2) with preserved ejection fraction (>50%) were selected. SV was determined by an echocardiographic core laboratory at the LVOT and by subtracting the 2-dimensional left ventricle end-systolic from the end-diastolic volume (volumetric). Patients were stratified into 4 groups based on SV index (35 mL/m2) and mean gradient (40 mm Hg). The group composition was compared and the agreement between the SV measurements was investigated using regression, correlation, and limits of agreement. In addition, a systematic LVOT diameter overestimation of 1 mm was simulated to study flow-gradient reclassification. Results: Of 1118 patients, 699 were eligible.
The group composition changed considerably as agreement on flow state occurred in only 50% of the measurements. LVOT SV was on average 15.1 mL (95% limits of agreement -24.9 to 55.1 mL) greater than volumetric SV. When a systematic 1-mm LVOT diameter overestimation was introduced, the low-flow groups halved. Conclusions: There was poor agreement in the flow-gradient classification of severe AS as a result of large differences between LVOT and volumetric SV. Furthermore, this classification was sensitive to small measurement errors. These results stress that parameters beyond the flow-gradient classification should be considered to ensure accurate recommendations for intervention. Answer: Based on the provided abstracts, there is no evidence of a rural gradient in the diagnosis of aortic stenosis. Specifically, a study investigating whether increasing rurality was associated with later presentation to healthcare services at a more advanced stage of aortic stenosis in a remote Scottish cohort found no association between the level of clinical peripherality and the stage of aortic stenosis at presentation, age, or gender (PUBMED:23683323). This suggests that patients with aortic stenosis in remote areas do not present later in their disease trajectory compared to their urban counterparts. Therefore, the available data from this study indicate that there is no rural gradient in the diagnosis of aortic stenosis.
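Editorial note: the limits of agreement quoted in PUBMED:38204672 appear to follow the usual Bland-Altman convention (mean difference ± 1.96 SD of the paired differences); assuming that convention, which the abstract does not state explicitly, the implied spread of the LVOT-versus-volumetric SV differences can be recovered:
\[
\mathrm{LoA} = \bar{d} \pm 1.96\, s_d, \qquad \bar{d} = 15.1\ \mathrm{mL},\quad \bar{d} + 1.96\, s_d = 55.1\ \mathrm{mL} \;\Rightarrow\; s_d \approx \frac{55.1 - 15.1}{1.96} \approx 20.4\ \mathrm{mL}.
\]
A between-method SD of roughly 20 mL is large relative to the flow threshold used for classification, which is consistent with the reported finding that only half of the measurements agreed on flow state.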
Instruction: Can pulmonary angiography be limited to the most suspicious side if the contralateral side appears normal on the ventilation/perfusion lung scan? Abstracts: abstract_id: PUBMED:8697839 Can pulmonary angiography be limited to the most suspicious side if the contralateral side appears normal on the ventilation/perfusion lung scan? Data from PIOPED. Prospective Investigation of Pulmonary Embolism Diagnosis. Purpose: The purpose of this investigation was to determine the frequency of pulmonary embolism (PE) in a single lung that showed a normal ventilation/perfusion (V/Q) lung scan when the V/Q scan on the contralateral side was interpreted as non-high-probability for PE. Methods: Data are from the national collaborative study Prospective Investigation of Pulmonary Embolism Diagnosis (PIOPED). PE was diagnosed or excluded in all lungs by pulmonary angiography. Results: Single lungs with no V/Q abnormalities, when the V/Q scan on the contralateral side was interpreted as non-high-probability for PE, showed PE in 2 of 19 (11%) (95% confidence interval [CI], 1 to 33%). If PE was excluded by angiography on the side of the abnormal V/Q scan, then PE on the side of the normal V/Q scan was shown in only 1 of 19 (5%) (95% CI, 0 to 26%). Conclusion: A normal V/Q scan in a single lung, when the contralateral lung was interpreted as non-high-probability for PE, did not completely exclude PE on the apparently normal side. In such lungs, the probability of PE was in the range of low-probability interpretations. If the pulmonary angiogram showed no PE on the side of the abnormal V/Q scan, the probability of PE on the side of the normal V/Q scan satisfied the definition of very low probability for PE. This observation in patients undergoing pulmonary angiography may assist in determining whether the pulmonary angiogram should be bilateral. abstract_id: PUBMED:24900034 Compression of the right pulmonary artery by a massive descending aortic aneurysm causing bilateral perfusion defects on pulmonary scintigraphy. A 67-year-old woman, who presented with a 2-month history of dyspnea, had a ventilation and perfusion lung scan that showed absent perfusion of the entire right lung with normal ventilation, as well as a rounded matched defect in the left lower lung adjacent to midline, suspicious for an aortic aneurysm or dissection. CT pulmonary angiography revealed a massive descending aortic aneurysm compressing the right pulmonary artery as well as the left lung parenchyma, accounting for the bilateral perfusion scan defects. We present the Xe-133 ventilation, Tc-99m MAA perfusion and CT pulmonary angiography imaging findings of this rare case. abstract_id: PUBMED:10540639 Ventilation-perfusion lung scintigraphy as a guide for pulmonary angiography in the localization of pulmonary emboli. Purpose: To assess the appropriateness of ventilation-perfusion (V-P) scintigraphic abnormalities as a guide to pulmonary angiography for the diagnosis of pulmonary embolism (PE). Materials And Methods: V-P scintigrams and pulmonary angiograms of 104 patients with angiographically proved PE were reviewed by two nuclear medicine physicians and two interventional radiologists. For V-P scintigrams, the lung with the larger amount of perfusion abnormality was determined, followed by identification of specific lobes. Pulmonary angiograms were similarly evaluated for lateralization and lobar distribution of PE. Conclusions were initially reached independently and subsequently by consensus.
Results: Interobserver agreement for lateralization was 88% (kappa = 0.75) for V-P scintigraphy and 98% (kappa = 0.96) for pulmonary angiography. In 72 patients, V-P scintigrams predicted unilateral embolus; 64 patients underwent pulmonary angiography of the suspected side. Eight patients underwent contralateral angiography only. Of the 64 patients, 61 (95%) had PE on the predicted side at angiography. V-P scintigrams predicted lobar distribution in 55 patients. Of these, PE was found in the predicted lobe in 42 (76%). Conclusion: Localization of perfusion abnormalities at V-P scintigraphy provides useful information for the interventional radiologist and serves as an accurate guide for determining the initial approach for pulmonary angiography. abstract_id: PUBMED:11388574 Utility of ventilation and perfusion scan in the diagnosis of young military recruits with an incidental finding of hyperlucent lung. Swyer-James-MacLeod syndrome (SJMS) is considered to be a relatively uncommon disease presenting with unilateral hyperlucent lung due to hypoplasia of a pulmonary artery and bronchiectasis of the affected lung. In this report, we describe the ventilation-perfusion (V/Q) scan findings of nine male recruits (aged 20-29 years, mean 24.4±2.96 years) with SJMS in whom the diagnosis was first established in adulthood. V/Q scan findings of all patients were compared with those on planar radiographs, pulmonary function studies, high resolution computed tomography (HRCT) and digital subtraction angiography (DSA). The ventilation (133Xe) and perfusion (99Tcm-macro-aggregated albumin) scans showed the characteristic pattern of a matched V/Q defect and marked air trapping on the washout phase on 133Xe scintigraphy. HRCT displayed hypodense lung with integrity of main airways, and markedly diminished vasculature on the affected side in all patients. A smaller pulmonary artery on the affected side with poor peripheral vasculature was observed with DSA in all patients. All patients had features of obstructive airway disease in varying degrees on pulmonary function studies. In contrast to other imaging methods, bronchiectasis as an etiological factor was displayed on HRCT. Some pulmonary areas, which were normal on HRCT and planar radiographs, showed air trapping on V/Q scan. Although a V/Q scan was more helpful in determining the extent of the disease and correlated well with conventional imaging methods, HRCT was the most valuable imaging method for the evaluation of aetiology in unilateral hyperlucent lung. abstract_id: PUBMED:2357994 Digital subtraction angiography in abnormal perfusion scintigraphy of the lungs. A normal lung perfusion isotope scan can exclude pulmonary emboli. However, the demonstration of lung emboli presents problems, as an abnormal lung perfusion scan can have various causes. Some of these can be excluded if the perfusion scan is assessed in combination with a chest X-ray and a ventilation scan. In this study, 21 patients with an abnormal perfusion scan were also examined using digital subtraction angiography (DSA) of the lungs. With the help of DSA it was possible to differentiate between the various causes of a lung perfusion defect on the isotope scan by having a sufficiently reliable pulmonary angiogram. From experience, it appears that the quality of the pulmonary angiogram confined to the first three divisions obtained with DSA is comparable with that of a conventional pulmonary angiogram.
A perfusion isotope scan is the initial screening modality for suspected lung emboli; if this shows an abnormality, DSA seems to be indicated as the next step. abstract_id: PUBMED:8697840 Pulmonary embolism among patients with a nearly normal ventilation/perfusion lung scan. Background: Among patients with nearly normal ventilation/perfusion (V/Q) lung scans in the Prospective Investigation of Pulmonary Embolism Diagnosis (PIOPED), pulmonary embolism (PE) was diagnosed more frequently in those who underwent pulmonary angiography than in those in whom PE was diagnosed on the basis of an adverse outcome while receiving no anticoagulant therapy. This may suggest that an adverse outcome is not apparent in patients with PE of such mild severity that the V/Q scan is nearly normal. If this were the case, patients with mild PE might not require treatment. Purpose: The purpose of this investigation was to evaluate patients in PIOPED with nearly normal V/Q lung scans. The V/Q scans and clinical characteristics of those in whom PE was diagnosed or excluded by pulmonary angiography (angiography group) were compared with those in whom PE was diagnosed or excluded by the presence or absence of an adverse outcome while not receiving anticoagulant therapy (outcome group). If the characteristics were the same, it would suggest that some patients with mild PE do well without treatment. If the characteristics were different, it would indicate that there is no evidence from these data that mild PE need not be treated. Methods: Data from PIOPED were evaluated from patients with suspected acute PE who had V/Q scans interpreted as nearly normal. There were 75 patients in the angiography group and 90 patients in the outcome group. Patients with entirely normal V/Q scans were excluded. Results: PE was more frequent in the angiography group than in the outcome group, 8 of 75 (11%) vs 0 of 90 (0%) (p < 0.01). In patients with nearly normal V/Q scans who were in the outcome group in comparison to the angiography group, the V/Q scan showed fewer mismatched segmental perfusion defects, a lower percentage of low-probability V/Q interpretations by one of the two V/Q readers (compared with very low or normal probability), and a generally lower clinical assessment. Conclusion: The observed lower frequency of PE in the outcome group in comparison to the angiography group can be attributed to a lower likelihood of PE in this group of patients with nearly normal V/Q scans in comparison to those who underwent pulmonary angiography. There is no evidence from these data that occasional patients with nearly normal V/Q scans who have PE do not require treatment. abstract_id: PUBMED:3877417 Lung scan perfusion defects limited to matching pleural effusions: low probability of pulmonary embolism. Patients with a new pleural effusion are often sent for a ventilation-perfusion scan to exclude a pulmonary embolism. This retrospective study assessed the probability of pulmonary embolism when a pleural effusion and a perfusion defect of similar size are the only significant imaging abnormalities. In 451 reports of patients who were scanned for suspected pulmonary embolism, 53 had perfusion defects secondary to pleural effusion without other significant perfusion defects.
Using pulmonary angiography, venography, analysis of pleural fluid, clinical course, and other radiographic and laboratory studies to establish the final diagnosis, only two patients had documented venous thrombotic disease: one had pulmonary emboli, the other thrombophlebitis. Lung scans having significant perfusion defects limited to pleural effusions and matching them in size have a low probability for pulmonary embolism. abstract_id: PUBMED:26397339 Evaluation of Primary Pulmonary Vein Stenosis in Children: Comparison of Radionuclide Perfusion Lung Scan and Angiography. Objective: The purpose of this study was to evaluate the utility of radionuclide perfusion lung scanning in the evaluation of primary pulmonary vein stenosis (PVS) in pediatric patients by comparing it with angiography. Materials And Methods: We retrospectively identified pediatric patients with primary PVS who underwent both lung scanning and angiography. A cohort of 34 patients was evaluated. The presence of PVS in the right upper, right lower, left upper, and left lower pulmonary veins on angiograms was recorded. Two nuclear medicine physicians evaluated the lung scans for perfusion defects. Agreement between lung scan and angiographic findings was assessed with contingency tables. Sensitivity and specificity of lung scanning for accurate detection of PVS with angiographic findings as the reference standard were assessed by ROC analysis. Results: Angiography depicted PVS in 90 of the total 136 pulmonary veins (66%). Lung scans correctly depicted 65 (72%) of the cases of PVS diagnosed with angiography. The sensitivity and specificity of lung scans were 76.0% and 88.9% for the right upper pulmonary vein, 70.6% and 94.1% for the right lower pulmonary vein, 77.3% and 58.3% for the left upper pulmonary vein, and 65.4% and 87.5% for the left lower pulmonary vein. Conclusion: Lung scan findings correlate with angiographic findings in the detection of primary PVS in pediatric patients. Perfusion lung scanning may have a role in angiographically diagnosed PVS by noninvasively showing relative perfusion at the tissue level. abstract_id: PUBMED:8195867 Unilateral absence of right-lung perfusion with normal ventilation on radionuclide lung scan as a sign of aortic dissection. A case of aortic dissection with unilateral absence of right-lung perfusion and normal ventilation on radionuclide ventilation/perfusion (V/Q) lung scan is presented with a review of the literature. Anticoagulation or thrombolytic therapy for presumed pulmonary embolism may be catastrophic if the clinical syndrome and V/Q scan appearance are instead due to aortic dissection. With this V/Q scan appearance, these therapies should not be instituted unless pulmonary embolism is diagnosed by pulmonary arteriography. abstract_id: PUBMED:34678101 Ventilation-Perfusion Scan: A Primer for Practicing Radiologists. Lung scintigraphy, or ventilation-perfusion (V/Q) scan, is one of the commonly performed studies in nuclear medicine. Owing to variability in clinical applications and different departmental workflows, many trainees are not comfortable interpreting the results of this study. This article provides a simplified overview of V/Q imaging, including a review of its technique, interpretation methods, and established and emerging clinical applications. The authors review the role of V/Q imaging in evaluation of acute and chronic pulmonary embolism, including the role of SPECT/CT and comparing V/Q scan with CT angiography. 
In addition, a variety of other applications of pulmonary scintigraphy are discussed, including congenital heart disease, pretreatment planning for lung cancer and emphysema, posttransplant imaging for bronchiolitis obliterans, and less common vascular and nonvascular pathologic conditions that may be detected with V/Q scan. This article will help radiologists and residents interpret the results of V/Q scans and understand the various potential clinical applications of this study. Answer: Based on the data from the Prospective Investigation of Pulmonary Embolism Diagnosis (PIOPED), pulmonary angiography should not be limited to the most suspicious side if the contralateral side appears normal on the ventilation/perfusion (V/Q) lung scan. The study found that even when one lung showed a normal V/Q scan and the contralateral lung was interpreted as non-high-probability for pulmonary embolism (PE), there was still a possibility of PE on the apparently normal side. Specifically, PE was found in 2 of 19 cases (11%) where single lungs had no V/Q abnormalities, and in 1 of 19 cases (5%) when PE was excluded by angiography on the side of the abnormal V/Q scan (PUBMED:8697839). This suggests that a normal V/Q scan in one lung does not completely exclude the possibility of PE on that side, and the probability of PE was in the range of low-probability interpretations. Therefore, it may be prudent to perform bilateral pulmonary angiography to ensure that PE is not missed on the side that appears normal on the V/Q scan.
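Editorial note: the PIOPED abstracts above report binomial confidence intervals (2/19 with 95% CI 1 to 33%; 1/19 with 95% CI 0 to 26%) without naming the method. A standard exact (Clopper-Pearson) interval reproduces both ranges after rounding; a minimal sketch in Python, where the function name and structure are illustrative rather than taken from the source:

from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    # Exact (Clopper-Pearson) two-sided binomial confidence interval
    # for k events observed in n trials.
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

# 2 of 19 normal-scan lungs with PE -> approx. (0.013, 0.337), i.e., 1% to 33%
print(clopper_pearson(2, 19))
# 1 of 19 after a negative contralateral angiogram -> approx. (0.001, 0.260), i.e., 0% to 26%
print(clopper_pearson(1, 19))

The wide upper bounds on such small denominators are the statistical reason the authors could not call the apparently normal side embolism-free.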
Instruction: Could heart rate play a role in pericardial inflammation? Abstracts: abstract_id: PUBMED:22858356 Could heart rate play a role in pericardial inflammation? PURPOSE AND MEDICAL HYPOTHESIS: Rest is usually recommended in acute pericarditis, as it could help to lower heart rate (HR) and contribute to limiting "mechanical inflammation". Whether HR on admission correlates with, and perhaps contributes to, inflammation has not been reported. Methods: Between March 2007 and February 2010, we conducted a retrospective study on all patients admitted to our center for acute pericarditis. Diagnostic criteria included two of the following: typical chest pain, friction rub, pericardial effusion on cardiac echography, or typical electrocardiogram (ECG) findings. The primary endpoint was laboratory values: CRP on admission; on days 1, 2, and 3; and especially the peak value. Results: We included 73 patients. Median age was 38 years (interquartiles 28-51) and median hospitalization duration was 2.0 days (1.5-3.0). Median heart rate was 88.0 beats per minute (bpm) on admission (interquartiles 76.0-100.0) and 72.0 on discharge (65.0-80.0). Heart rate on admission was significantly correlated with CRP peak (p<0.001), independently of temperature on admission, hospitalization duration and age. Recurrences occurred within 1 month in 32% of patients. Heart rate on hospital discharge was correlated with recurrence, independently of age. Conclusion: In acute pericarditis, heart rate on admission is independently correlated with CRP levels and heart rate on discharge seems to be independently correlated with recurrence. This could suggest a link between heart rate and pericardial inflammation. abstract_id: PUBMED:11714997 Elevated interleukin-1beta in pericardial fluid of patients with ischemic heart disease. Background: Inflammatory cytokines may play an important role in the pathogenesis of atherosclerosis and heart failure. We have previously demonstrated that long-term treatment with interleukin (IL)-1beta in the coronary artery and myocardium promotes coronary arteriosclerosis and impairs cardiac function, respectively. The cytokines in pericardial fluid may reflect the extent of coronary atherosclerosis and may also directly promote the atherosclerotic process. This study was designed to examine the significance of cytokine concentrations in pericardial fluid of patients with cardiovascular disease. Methods: We measured concentrations of 10 major cytokines in the pericardial fluid of 56 consecutive patients obtained during open heart surgery, 27 with ischemic heart disease (IHD group), 21 with valvular heart disease (VHD group) and eight with congenital heart disease (CHD group). Results: The pericardial concentrations of IL-1beta (pg/ml) were significantly higher in the IHD group (60 ± 15) than in the VHD (29 ± 5) or the CHD group (26 ± 4) (P < 0.05 for both). There was no significant difference in pericardial concentrations of other cytokines among the three groups. In the IHD group, the IL-1beta concentrations were significantly elevated in patients who had undergone emergency operations or in those with unstable angina. Conclusions: These results suggest that pericardial concentrations of IL-1beta may reflect the extent of ischemic heart disease and that elevated IL-1beta concentrations in pericardial fluid may also directly promote the process of coronary atherosclerosis.
abstract_id: PUBMED:28565777 Expression of pericardial fluid T-cells and related inflammatory cytokines in patients with chronic heart failure. Pericardial fluid, as a biochemical indicator of heart status, directly indicates pathological alteration to the heart. The accumulation of pericardial fluid can be attributed to an underlying systemic or local inflammatory process. However, the pericardial fluid expression of cellular surface markers, as well as several cytokines, in chronic heart failure (CHF) remains unclear. In order to evaluate these issues further, the pericardial fluid expression of several cytokines and the surface expression of activation markers were compared between CHF patients and non-heart failure (NHF) patients. Cytokine expression in pericardial fluid was measured by immunofluorescence, the plasma biomarker N-terminal propeptide of B-type natriuretic peptide (NT-proBNP) was assessed, and pericardial fluid levels of soluble glycoprotein 130 (sgp130) were analyzed by ELISA in 50 CHF and 24 NHF patients. In addition, the surface expression of activation markers for T-cells was measured by immunohistochemistry. Patients with CHF demonstrated increased levels of plasma NT-proBNP and pericardial fluid sgp130. Surface expression of cellular activation markers CD25 and Foxp3 in the pericardial fluid was increased in patients with CHF. Moreover, the pro- and anti-inflammatory cytokines interferon (IFN)-γ, interleukin (IL)-6 and IL-10 in patients with CHF also demonstrated an increased expression within the pericardial fluid. In addition, there was infiltration of inflammatory cells and enhanced expression of inflammatory cytokines in the pericardial fluid of patients with CHF, which may reflect T cell activation, suggesting that systemic inflammation is important in the progression of CHF. This evidence could indicate a possible novel target for future therapeutics and prevention of CHF. abstract_id: PUBMED:30329111 Mechanism of fowl adenovirus serotype 4-induced heart damage and formation of pericardial effusion. Fowl adenovirus serotype 4 (FAdV-4) is the causative agent of hydropericardium syndrome (HPS), which is characterized by the accumulation of a clear, straw-colored fluid in the pericardial sac, and high mortality rates. In order to explore the mechanism of FAdV-4-induced cardiac damage, dynamic pathology, apoptosis, and inflammatory reactions were analyzed in vivo. Moreover, we examined viral proliferation and the ultrastructure, inflammation, and apoptosis of cardiomyocytes (CM) after FAdV-4 infection in vitro. The results showed that FAdV-4 impaired cardiac integrity and function by causing apoptosis and inflammation in vivo. Flow cytometry showed that CM infected with FAdV-4 did not show apoptosis in vitro. In addition, the mRNA expression of four inflammatory cytokines (interleukin (il)1B, il6, il8, and tumor necrosis factor) and the activity of three myocardial enzymes were significantly different between FAdV-4 and control groups. However, in vitro, these indexes showed no significant difference between the groups. These observations collectively indicated that the heart was not the target organ of FAdV-4, and the virus may not directly lead to the occurrence of CM apoptosis and inflammation. To explore the source of pericardial effusion, we measured total protein, albumin, aspartate aminotransferase, creatine kinase isoenzyme, lactate dehydrogenase, potassium, sodium, and chloride ions in serum and pericardial effusion.
Pericardial effusion was derived from vascular exudation rather than CM degeneration. Further studies are needed to investigate the exudation mechanism of vascular endothelial cells in FAdV-4 infection, and then to weaken or eliminate pericardial effusion in order to minimize heart injury and/or restore damaged CM. abstract_id: PUBMED:32189516 Accumulation of Pericardial Fat Is Associated With Alterations in Heart Rate Variability Patterns in Hypercholesterolemic Pigs. Background: Heart rate variability (HRV) and pulse rate variability are indices of autonomic cardiac modulation. Increased pericardial fat is associated with worse cardiovascular outcomes. We hypothesized that progressive increases in pericardial fat volume and inflammation prospectively dampen HRV in hypercholesterolemic pigs. Methods: WT (wild type) or PCSK9 (proprotein convertase subtilisin-like/kexin type-9) gain-of-function Ossabaw mini-pigs were studied in vivo before and after 3 and 6 months of a normal diet (WT-normal diet, n=4; PCSK9-normal diet, n=6) or high-fat diet (HFD; WT-HFD, n=3; PCSK9-HFD, n=6). The arterial pulse waveform was obtained from an arterial telemetry transmitter to analyze HRV indices, including SD (SD of all pulse-to-pulse intervals over a single 5-minute period), root mean square of successive differences, proportion >50 ms of normal-to-normal R-R intervals, and the calculated ratio of low-to-high frequency distributions (low-frequency power/high-frequency power). Pericardial fat volumes were evaluated using multidetector computed tomography and their inflammation by gene expression of TNF (tumor necrosis factor)-α. Plasma lipid panel and norepinephrine level were also measured. Results: At diet completion, hypercholesterolemic PCSK9-HFD had significantly (P<0.05 versus baseline) depressed HRV (SD of all pulse-to-pulse intervals over a single 5-minute period, root mean square of successive differences, proportion >50 ms, high-frequency power, low-frequency power), and both HFD groups had higher sympathovagal balance (SD of all pulse-to-pulse intervals over a single 5-minute period/root mean square of successive differences, low-frequency power/high-frequency power) compared with normal diet. Pericardial fat volumes and LDL (low-density lipoprotein) cholesterol concentrations correlated inversely with HRV and directly with sympathovagal balance, while sympathovagal balance correlated directly with plasma norepinephrine. Pericardial fat TNF-α expression was upregulated in PCSK9-HFD, colocalized with nerve fibers, and correlated inversely with root mean square of successive differences and proportion >50 ms. Conclusions: Progressive pericardial fat expansion and inflammation are associated with a fall in HRV in Ossabaw mini-pigs, implying aggravated autonomic imbalance. Hence, pericardial fat accumulation is associated with alterations in HRV and the autonomic nervous system. abstract_id: PUBMED:18941647 Heart failure due to a post-traumatic calcified pericardial hematoma. Chest-wall trauma can produce bleeding into the pericardium and initiate a process of inflammation, calcification, and scarring that may eventually produce pericardial constriction. Herein, we present an unusual case of a man who experienced chest trauma at age 16 years and developed heart failure 40 years later secondary to a large, calcified pericardial hematoma.
During its prolonged genesis, the pericardial mass became deeply embedded in the myocardium and produced evidence of both constrictive and restrictive cardiomyopathy. Despite attempted surgical resection, the lesion could not be completely removed, nor could its hemodynamic impact be completely resolved. abstract_id: PUBMED:34023529 An overview of human pericardial space and pericardial fluid. The pericardium is a double-layered fibro-serous sac that envelops the majority of the surface of the heart as well as the great vessels. Pericardial fluid is also contained within the pericardial space. Together, the pericardium and pericardial fluid contribute to a homeostatic environment that facilitates normal cardiac function. Different diseases and procedural interventions may disrupt this homeostatic space, causing an imbalance in the composition of immune mediators or imposing mechanical stress. Inflammatory cells, cytokines, and chemokines are present in the pericardial space. How these specific mediators contribute to different diseases is the subject of debate and research. With the advent of highly specialized assays that can identify and quantify various mediators, we can potentially establish specific and sensitive biomarkers that can be used to differentiate pathologies and aid clinicians in improving clinical outcomes for patients. abstract_id: PUBMED:31020103 Chronic pericardial effusion in the setting of pericardial capillary haemangioma: a case report and review of the literature. Introduction: Cardiac haemangiomas are rare vascular tumours of the heart accounting for less than 5% of benign primary cardiac neoplasms. They are sometimes diagnosed incidentally, since patients can be asymptomatic. The clinical presentation in symptomatic patients, however, is variable, depending on size and exact localization of the tumour. Although cardiac haemangiomas have been reported everywhere in the heart, those localized in the pericardium are extremely rare. Case Presentation: A 48-year-old female patient with a history of pericardial effusion and pneumonia was admitted to our hospital with progressive dyspnoea on exertion. Echocardiography demonstrated recurrence of pericardial effusion with 'swinging heart'. Further investigation by computed tomography, cardiac magnetic resonance imaging and coronary angiography revealed a hypervascular pericardial mass with typical 'tumour blush' after contrast injection. The tumour could be resected in toto by open heart surgery, and histological evaluation confirmed the diagnosis of a pericardial capillary haemangioma. There were no signs of recurrence of either the pericardial effusion or the tumour during follow-up. Discussion: Here we report a very rare case of a pericardial haemangioma in the adult which was diagnosed by multi-modality workup of recurrent pericardial effusion. This case illustrates that in the setting of chronic pericardial effusion non-inflammatory and non-malignant causes should be taken into account.
In this patient group (n=138), the presence of PFR was correlated against the amount of pericardial effusion measured echocardiographically. Results: No statistically significant correlation between the presence of PFR and the amount of pericardial effusion was observed. It is, therefore, a misconception that the presence of a PFR signifies absence or only a small volume of pericardial effusion. Our results are in line with the literature. Conclusion: Based on this study and results presented in the literature, we postulate that the friction rub associated with pericarditis is not caused by friction of the 'roughened' pericardial layers, as is commonly propagated. Instead, fibrin strands caused by the inflammation, connecting the two pericardial layers, may function as snares and generate, through the movements of the heart, the typical triphasic pericardial friction rub. abstract_id: PUBMED:38062920 Therapeutic effect of adipose-derived stem cells injected into pericardial cavity in rat heart failure. Aims: There are few studies on the treatment of heart failure by injecting stem cells into the pericardial cavity. Can the cells injected into the pericardial cavity migrate through the epicardium to the myocardial tissue? Whether there is a therapeutic effect, and what its mechanism might be, remain unclear. This study investigated the therapeutic efficacy and evidence of cell migration of adipose-derived stem cells (ADSCs) injected into the pericardial cavity in rat heart failure. The aim of this study is to demonstrate the effectiveness and mechanism of treating heart failure by injecting stem cells into the pericardial cavity, laying an experimental foundation for a new approach to stem cell therapy for heart disease in clinical practice. Methods And Results: The inguinal adipose tissue of male SD rats aged 4-6 weeks was harvested, ADSCs were isolated and cultured, and their stem cell surface markers were identified. Forty rats aged 6-8 weeks were divided into sham operation group, heart failure group, and treatment group; there were 15 rats in the heart failure group and 15 rats in the treatment group. The heart failure model was established by intraperitoneal injection of adriamycin hydrochloride. The heart function of the three groups was detected by small animal ultrasound. The model was successful if the left ventricular ejection fraction was < 50%. The identified ADSCs were injected into the pericardial cavity of rats in the treatment group. The retention of transplanted cells in the pericardial cavity was detected with a small-animal in vivo imaging instrument, and the migration of transplanted cells into myocardial tissue was observed by tissue section and immunofluorescence. Western blotting and immunohistochemical staining were used to detect brain natriuretic peptide (BNP), α-smooth muscle actin (α-SMA), and C-reactive protein (CRP). ADSCs express CD29, CD44, and CD73. On the fourth day after injection of ADSCs into the pericardial cavity, they migrated to myocardial tissue through the epicardium and gradually diffused to deep myocardium. The cell density in the pericardial cavity remained at a high level for 10 days after injection and gradually decreased thereafter. Compared with the heart failure group, the expression of BNP and α-SMA decreased (P < 0.05 and P < 0.001, respectively), and the expression of CRP in the treatment group was higher than that in the heart failure group (P < 0.0001).
A small amount of BNP, α-SMA, and CRP was expressed in the myocardium of the sham operation group. After injection of ADSCs, interleukin-6 in myocardial tissue was significantly lower than that in heart failure myocardium (P < 0.01). After treatment, vascular endothelial growth factor A was significantly higher than that of heart failure (P < 0.01). Conclusions: ADSCs injected into the pericardial cavity can penetrate the epicardium, migrate into the myocardium, and have a therapeutic effect on heart failure. Their mechanism of action is to exert therapeutic effects through anti-inflammatory and anti-fibrotic effects and increased angiogenesis. Answer: Yes, heart rate could play a role in pericardial inflammation. A study found that in acute pericarditis, heart rate on admission was independently correlated with C-reactive protein (CRP) levels, suggesting a link between heart rate and pericardial inflammation. Additionally, heart rate on hospital discharge was correlated with recurrence, independently of age (PUBMED:22858356). This indicates that heart rate may not only be associated with the level of inflammation during the acute phase of pericarditis but could also be a factor in the recurrence of the condition.
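Editorial note: the HRV indices named in PUBMED:32189516 (the SD of all pulse-to-pulse intervals, the root mean square of successive differences, and the proportion of successive differences greater than 50 ms) have conventional time-domain definitions. A minimal sketch, assuming those standard definitions and an RR-interval series in milliseconds (the study derived its intervals from an arterial pulse waveform over 5-minute windows):

import numpy as np

def hrv_time_domain(rr_ms):
    # rr_ms: 1-D array of pulse-to-pulse (RR) intervals in milliseconds.
    diffs = np.diff(rr_ms)
    sd = np.std(rr_ms, ddof=1)                   # SD of all intervals in the window
    rmssd = np.sqrt(np.mean(diffs ** 2))         # root mean square of successive differences
    pnn50 = 100.0 * np.mean(np.abs(diffs) > 50)  # % of successive differences > 50 ms
    return {"SD": sd, "RMSSD": rmssd, "pNN50": pnn50,
            "SD/RMSSD": sd / rmssd}              # one of the sympathovagal balance ratios the study reports

Lower SD, RMSSD, and pNN50 together with a higher SD/RMSSD or LF/HF ratio is the pattern the study interprets as aggravated autonomic imbalance.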
Instruction: Trends in gender distribution among anesthesiology residents: do they matter? Abstracts: abstract_id: PUBMED:17056956 Trends in gender distribution among anesthesiology residents: do they matter? Background: The number of women graduating from United States medical schools progressively increased during the 26 yr period from 1978 to 2004. This change was associated with shifts in the gender distribution of residents training in Accreditation Council for Graduate Medical Education-accredited residency programs. Methods: We compared trends in the number and gender distribution of residents enrolled in the 10 specialties with the largest national enrollment of residents. Results: The gender distribution of residents training in different specialty programs varies widely. The percentage of women enrolled in anesthesiology training programs is less than the national average, and the rate of increase is less than that of many other specialties. Conclusions: The reasons for this distribution are multifactorial. Contributing factors may include limited exposure to women role models (including fewer women with senior academic rank and in leadership positions), gender insensitivity leading to an unprofessional work environment, limited involvement of women anesthesiologists in undergraduate medical education, misperceptions of the physician-patient relationship in anesthesiology, and practice scheduling requirements that are inconsistent and inflexible. abstract_id: PUBMED:34903458 Perceptions of Gender Disparities Among Women in Cardiothoracic Anesthesiology. Objective: In this study of women in cardiothoracic anesthesiology, the authors aimed to characterize demographics, roles in leadership, and perceived professional challenges. Design: A prospective cross-sectional survey of female cardiothoracic anesthesiologists in the United States. Setting: An internet-based survey of 43 questions was sent to women in cardiothoracic anesthesiology. The survey included questions on demographics, leadership, and perceptions of professional challenges including career advancement, compensation, promotion, harassment, and intimidation. Participants: A database of women in cardiothoracic anesthesiology was created via personal contacts and snowball sampling. Interventions: None. Measurements And Main Results: A total of 153 responses were analyzed, for a response rate of 65.1%. Most respondents were at the Clinical Instructor or Assistant Professor rank. Many women perceived that compensation, promotion, authorship, and career advancement were affected by gender. Furthermore, 67% of respondents identified having children as having a negative impact on career advancement. Many women reported experiencing derogatory comments (55.6%), intimidation (57.8%), microaggression (69.6%), sexual harassment (25.2%), verbal harassment (45.2%), and unwanted physical or sexual advances (24.4%). These behaviors were most often from a surgical attending, anesthesia attending, or patient. Conclusion: This survey study of women in cardiothoracic anesthesiology found that many women perceived inequities in financial compensation, authorship opportunities, and promotion; in addition, many felt that their career advancement was impacted negatively by having children. A striking finding was that the majority of women have experienced intimidation, derogatory comments, and microaggressions in the workplace. abstract_id: PUBMED:36578603 Wake up to the gender gap in academic anesthesiology and address the pipeline. 
Diversity, equity, and inclusion are important values in health care and academic medicine. Although women have comprised nearly 50% of all US medical school graduates since the early 2000s, gender disparities continue in many specialties and persist in medical education leadership. For women in anesthesiology, gender inequities exist along the academic pipeline, with greater disparity in the higher ranks of academia and leadership. Successful endeavors to improve gender equity in the specialty include increasing awareness, engaging in faculty development, and encouraging transparent promotions processes. The strides will be short and change will be slow unless we shift focus to when the gender imbalance begins: at the transition from undergraduate medical education to graduate medical education. Currently women comprise 37% of academic anesthesiology faculty; however, the current resident pool is only 33% women. If recruitment of a gender-equitable workforce at the undergraduate-to-graduate medical education transition is not a priority, the downstream measures will have minimal impact toward gender equity in academic anesthesiology. abstract_id: PUBMED:36688988 Authorship by gender in anesthesiology journals: a retrospective cross-sectional study for Japan. Purpose: Although it is important to recognize gender disparities in publishing to achieve gender diversity, women's authorship in Japan remains unclear. Therefore, this study aimed to investigate the percentage and analyze the trends of articles authored and published in anesthesiology journals by Japanese female authors. Methods: The genders of the first and last authors affiliated with Japanese institutions were surveyed in the Journal of Anesthesia (JA) (1990, 1995, and 2000-2022) and 11 international anesthesiology journals (2010-2022). Results: We included 845 and 819 original research articles from JA in the analyses of the first and last authors, respectively. The proportion of female first authors significantly increased from 41 (11.7%) out of 351 before 2009 to 119 (24.1%) out of 494 after 2010 (p < 0.001). The proportion of female last authors was 11 (3.3%) out of 335 before 2009 and 22 (4.5%) out of 484 after 2010, respectively, with no significant difference (p = 0.470). We included 624 and 572 original research articles from international anesthesiology journals in the analyses of first and last authors, respectively. Among these, there were 134 (21.5%) and 23 (4.0%) female first and last authors, respectively. These proportions in international anesthesiology journals did not significantly differ from those in JA (p = 0.334, p = 0.789, respectively). Conclusion: The percentage of female first authors has increased, commensurate with the percentage of female anesthesiologists. However, the percentage of female last authors has not increased and remains low in Japan. abstract_id: PUBMED:31789050 Reporting and analyses of sex/gender and race/ethnicity in randomized controlled trials of interventions published in the highest-ranking anesthesiology journals. Aim: We assessed reporting of data on sex/gender and race/ethnicity in randomized controlled trials of interventions published in the highest-ranking anesthesiology journals from 2014 to 2017. Methods: We extracted data regarding terminology for sex/gender, the proportion of participants according to sex/gender and race/ethnicity, and results shown for sex/gender and race/ethnicity.
Results: Among the 732 analyzed trials, few stratified allocation of participants on the basis of sex/gender or race/ethnicity, few reported results by sex/gender or race/ethnicity, and the reported outcomes may be influenced by one or both. Conclusion: Outcomes of anesthesiology trials could be vulnerable to an important and avoidable source of bias; trialists need to pay more attention to sex and race/ethnicity when designing and reporting their studies. abstract_id: PUBMED:30049265 Gender differences in career development awards in United States' anesthesiology and surgery departments, 2006-2016. Background: Although the status of women in anesthesiology has advanced by many measures, obtaining career development funding remains challenging. Here, we sought to compare the characteristics of funded career development awards from the National Institutes of Health (NIH) between the specialties of anesthesiology and surgery. We hypothesized that the two groups differ in percentage of faculty with awards, gender distribution among principal investigators, as well as the number of awards promoting diversity. Methods: The NIH grant-funding database RePORT was queried for career development awards for the years 2006-2016 using the filters "Anesthesiology" and "Surgery." Grants were characterized based on the gender of the principal investigator and whether the funding opportunity announcement indicated promotion of underrepresented minorities (URM). The 2016 Association of American Medical Colleges (AAMC) report on "Distribution of U.S. Medical School Faculty by Sex and Rank" was used to adjust comparisons according to baseline gender distributions in anesthesiology and surgery departments. Cohorts were characterized using descriptive methods and compared using Chi-square or Fisher's exact test. Results: Based on our AAMC data query, in 2016, the number of women faculty members at the instructor or assistant professor level in U.S. medical schools was 2314 (41%) for anesthesiology and 2281 (30%) for surgery. Between 2006 and 2016, there were 88 career development grants awarded to investigators in anesthesiology departments compared to 261 in surgery departments. Of the grantees in each specialty, 29 (33%) were women in anesthesiology and 72 (28%) in surgery (P = 0.344). Awards to promote URM were identified for two grants (2%) in anesthesiology and nine grants (3%) in surgery (P = 0.737). Faculty members in surgery were more likely to receive an award than in anesthesiology (P < 0.0001), and women were less likely to receive an award than men (P = 0.026). Conclusions: The major difference between US anesthesiology and surgery departments is that the number of faculty career development awards is significantly higher in surgery departments. Future efforts should aim to identify the reasons for such differences in order to inform strategies that can improve the likelihood for junior faculty members to receive career development funding. abstract_id: PUBMED:30418237 Trends in Authorship in Anesthesiology Journals. Despite increasing representation in medicine, women continue to be a minority in academic practice and leadership, especially in male-dominated fields like anesthesiology. Differences in compensation and participation in leadership may represent barriers to career advancement for women in anesthesiology. Key factors for promotion in academic anesthesiology are research, funding, and publication.
As such, designation as a first or senior author on a publication in a professional journal may act as currency for promotion. Here, we examine the prevalence of female first and senior authorship of original research articles published in Anesthesiology and Anesthesia & Analgesia during the years 2002, 2007, 2012, and 2017. Other manuscript characteristics assessed in relation to author gender included study type, subspecialty topic, and total number of authors. Of 2600 manuscripts studied, analysis of authorship by year demonstrated an increase in female first authorship, senior authorship, and editorial board membership by 10%, 9%, and 6%, respectively. Women made up a higher percentage of first authors on manuscripts with female senior authors. More nonexperimental studies had female senior authors than experimental studies. Female first authors had greater representation in the subspecialties of neuroanesthesia, obstetrical anesthesia, pain management, and pediatric anesthesia. Median number of authors was unrelated to senior author gender. This study shows increasing female first and senior authorship, as well as editorial board composition, in 2 popular, high-impact anesthesiology journals. Recognizing gender-based trends in publications is important to develop strategies for the recruitment, retention, and advancement of women in anesthesiology. abstract_id: PUBMED:33324526 Gender-Based Disparity in Academic Ranking and Research Productivity Among Canadian Anesthesiology Faculty. Purpose: Despite increasing numbers of women entering anesthesiology training, women remain underrepresented in senior academic positions and leadership roles. This study aims to determine the extent of gender disparity in Canadian departments of anesthesiology. In addition, we explore the correlation between publication productivity and academic rank in this cohort. Methods: The Canadian Residency Matching Service (CaRMS) was queried to identify 17 training programs for anesthesiology. Department websites were searched to determine the names of faculty members, as well as gender, leadership roles, and academic ranks. The SCOPUS© database was used to generate the number of publications, number of citations, publication range, and h-index of each faculty member. Results: In our study cohort of 1404 academic anesthesiologists, 30.1% were women. Women held a minority of the 130 leadership positions (27%, n = 35). With increasing academic rank, female representation decreased (p = 0.009), such that 21% of full professors were women. Overall, male anesthesiologists had a higher h-index, number of publications, and number of citations (p = 0.001, p = 0.001, and p < 0.001, respectively) than women. Conclusion: Despite growing numbers of women entering the academic workforce, women are underrepresented in senior academic ranks and leadership positions. In addition, men and women have significant differences in measures of publication productivity. This study underscores the importance of directed efforts to promote equity in career outcomes.
There are obviously many factors that influence the choice of a residency training program, from geography to reputation to the atmosphere, composition, and camaraderie of the department. We examined whether a greater number of current female residents, a female Chair, or a female Program Director were associated with a program matching a greater number of female candidates in the 2014 NRMP Match. Methods: An electronic questionnaire was sent to all 132 ACGME-accredited anesthesiology programs immediately following the 2014 Match seeking information on the gender mix of their current residents, the gender of the Chair and Program Director, and the gender composition of their newly matched candidates. Results: The percentage of current female residents was significantly associated with the percentage of incoming female residents (p = 0.013). There was no association between the percentage of new female residents obtained in the Match and the presence of a female Chair or Program Director. Conclusions: The results of the 2014 NRMP anesthesiology match indicate that programs with a higher proportion of female residents were able to sustain that diversity and successfully match a higher percentage of female candidates. No correlation was seen with Chair and Program Director gender, suggesting further work is needed to define the influence of female role models on female applicants' choice of anesthesiology residency programs. abstract_id: PUBMED:36637854 Women Authorship Trends in the Highest-Impact Anesthesiology Journals from 2005 to 2021. Background: Although women and men have matriculated into medical schools in similar proportions since the 1980s, recent data indicate that anesthesiology is lagging in gender equity, especially in academic leadership roles. As promotion in academic medicine is strongly influenced by publications, understanding whether a lack of women authorship is contributing to this gender gap is crucial. This article aims to assess how women authorship trends have changed in the last 16 years, including during the COVID-19 pandemic. Materials and Methods: The five highest impact journals in anesthesia were identified as Journal of Clinical Anesthesia, British Journal of Anaesthesia, Anesthesiology, PAIN, and Regional Anesthesia & Pain Medicine. Number of total authors, including women, men, and unknown gender authors, as well as incidence of woman first and/or last author, was documented from articles published in 2005, 2010, 2015, 2020, and 2021.
Answer: The trends in gender distribution among anesthesiology residents do matter, as evidenced by the research presented in the abstracts. The number of women graduating from United States medical schools has increased over the years, but the percentage of women enrolled in anesthesiology training programs remains less than the national average, and the rate of increase is less than that of many other specialties (PUBMED:17056956). This gender distribution is influenced by factors such as limited exposure to women role models, gender insensitivity, limited involvement of women anesthesiologists in undergraduate medical education, misperceptions of the physician-patient relationship in anesthesiology, and inflexible practice scheduling requirements. Women in cardiothoracic anesthesiology perceive inequities in financial compensation, authorship opportunities, promotion, and career advancement, with many reporting negative impacts on their careers due to having children. A majority have experienced intimidation, derogatory comments, and microaggressions in the workplace (PUBMED:34903458). Despite nearly half of US medical school graduates being women since the early 2000s, gender disparities persist in anesthesiology, particularly in the higher ranks of academia and leadership (PUBMED:36578603). In Japan, the percentage of female first authors in anesthesiology journals has increased, but the percentage of female last authors remains low (PUBMED:36688988). Randomized controlled trials in anesthesiology often fail to report or analyze data based on sex/gender and race/ethnicity, which could introduce bias in study outcomes (PUBMED:31789050). Career development awards from the National Institutes of Health (NIH) show that faculty in surgery departments are more likely to receive an award than those in anesthesiology, and women are less likely to receive an award than men (PUBMED:30049265). Trends in authorship in anesthesiology journals indicate an increase in female first and senior authorship, as well as editorial board membership, but women continue to be a minority in academic practice and leadership (PUBMED:30418237). In Canada, women are underrepresented in senior academic ranks and leadership positions, with significant differences in measures of publication productivity between men and women (PUBMED:33324526).
Instruction: Case study of the comparison of data from conference abstracts and full-text articles in health technology assessment of rapidly evolving technologies: does it make a difference? Abstracts: abstract_id: PUBMED:16984055 Case study of the comparison of data from conference abstracts and full-text articles in health technology assessment of rapidly evolving technologies: does it make a difference? Objectives: The aim of this study was to examine (i) the consistency of reporting research findings presented in conference abstracts and presentations and subsequent full publications, (ii) the ability to judge methodological quality of trials from conference abstracts and presentations, and (iii) the effect of inclusion or exclusion of data from these sources on the pooled effect estimates in a meta-analysis. Methods: This report is a case study of a selected health technology assessment review (TAR) of a rapidly evolving technology that had identified and included a meta-analysis of trial data from conference abstracts and presentations. Results: The overall quality of reporting in abstracts and presentations was poor, especially in abstracts. There was incomplete or inconsistent reporting of data in the abstracts/presentations. Most often, inconsistencies were between conference slide presentations and data reported in published full-text articles. Sensitivity analyses indicated that using data only from published papers would not have altered the direction of any of the results when compared with those using published and abstract data. However, the statistical significance of three of ten results would have changed. If conference abstracts and presentations were excluded from the early analysis, the direction of effect and statistical significance would have changed in one result. The overall conclusions of the original analysis would not have been altered. Conclusions: There are inconsistencies between data presented in conference abstracts/presentations and those reported in subsequent published reports. These inconsistencies could impact the final assessment results. Data discrepancies identified across sources included in TARs should be highlighted and their impact assessed and discussed. Sensitivity analyses should be carried out with and without abstract/presentation data included in the analysis. Incomplete reporting in conference abstracts and presentations limits the ability of reviewers to assess confidently the methodological quality of trials. abstract_id: PUBMED:16487455 Comparison of conference abstracts and presentations with full-text articles in the health technology assessments of rapidly evolving technologies. Objectives: To assess the extent of use of data from conference abstracts and presentations in health technology assessments (HTAs) provided as part of the National Institute for Health and Clinical Excellence (NICE) appraisal process. Also, to assess the methodological quality of trials from conference abstracts and presentations, the consistency of reporting major outcomes between these sources and subsequent full-length publications, the effect of inclusion or exclusion of data from these sources on the meta-analysis pooled effect estimates, and the timeliness of availability of data from these sources and full articles in relation to the development of technology assessment reviews (TARs). Data Sources: A survey of seven TAR groups. An audit of published TARs, which included all NICE TARs published between January 2000 and October 2004.
Case studies of selected TARs. Review Methods: Analyses of the results of the survey and audit were presented as a descriptive summary and in a tabular format. Sensitivity analyses were carried out to compare the effect of inclusion of data from abstracts and presentations on the meta-analysis pooled effect estimates by including data from both abstracts/presentations and full papers, and data from only full publications, included in the original TAR. These analyses were then compared with a meta-analysis of data from trials that have subsequently been published in full. Results: All seven TAR groups completed and returned the survey. Five out of seven groups reported a general policy that included searching for and including studies available as conference abstracts/presentations. Five groups responded that if they included data from these sources they would carry out methodological quality assessment of studies from these sources using the same assessment tools as for full publications, and manage the data from these sources in the same way as fully published reports. All groups reported that if relevant outcome data were reported in both an abstract/presentation and a full publication, they would only consider the data in the full publication. Conversely, if data were only available in a conference abstract/presentation, all but two groups reported that they would extract and use the data from the abstract/presentation. In total, 63 HTA reports for NICE were identified. In 20 of 63 TARs (32%) explicit statements were made with regard to inclusion and assessment of data from abstracts/presentations. Thirty-eight (60%) identified at least one randomised controlled trial (RCT) available as a conference abstract or presentation. Of these, 26 (68%) included trials available as abstracts/presentations. About 80% (20/26) of the TARs that included RCTs in abstract/presentation form carried out an assessment of the methodological quality of such trials. In 16 TARs full reports of these trials were used for quality assessment where both abstracts/presentations and subsequent full publications were available. Twenty-three of 63 TARs (37%) carried out a quantitative analysis of results. Of these, ten (43%) included trials that were available as abstracts/presentations in the review; however, only 60% (6/10) of these included data from abstracts/presentations in the data analysis of results. Thirteen TARs evaluated rapidly evolving technologies, and only three of these identified and included trial data from conference abstracts/presentations and carried out a quantitative analysis where abstract/presentation data were used. These three TARs were used as case studies. In all three case studies the overall quality of reporting in abstracts/presentations was generally poor. In all case studies, abstracts and presentations failed to describe the method of randomisation or allocation concealment. Overall, there was no mention of blinding in 66% (25/38) of the abstracts and in 26% (7/27) of the presentations included in the case studies, and one presentation (4%) explicitly stated use of intention-to-treat analysis. Results from one case study demonstrated discrepancies in data made available in abstracts or online conference presentations. Not only were discrepancies evident between these sources, but comparison of conference abstracts/presentations with subsequently published full-length articles also demonstrated data discrepancies in reporting of results.
Sensitivity analyses based on one case study indicated a change in significance of effect in two outcome measures when only full papers published to date were included. Conclusions: There are variations in policy and practice across TAR groups regarding searching for and inclusion of studies available as conference abstracts/presentations. There is also variation in the level of detail reported in TARs regarding the use of abstracts/presentations. Therefore, TAR teams should be encouraged to state explicitly their search strategies for identifying conference abstracts and presentations, their methods for assessing these for inclusion, and where appropriate how the data were used and their effect on the results. Comprehensive searching for trials available as conference abstracts/presentations is time-consuming and may be of questionable value. However, there may be a case for searching for and including abstract/presentation data if, for example, other sources of data are limited. If conference abstracts/presentations are to be included, the TAR teams need to allocate additional time for searching and managing data from these sources. Incomplete reporting in conference abstracts and presentations limits the ability of reviewers to assess confidently the methodological quality of trials. Where conference abstracts and presentations are considered for inclusion in the review, the TAR teams should increase their efforts to obtain further study details by contacting trialists. Where abstract/presentation data are included, reviewers should discuss the effect of including data from these sources. Any data discrepancies identified across sources in TARs should be highlighted and their impact discussed in the review. In addition, there is a need to carry out, for example, a sensitivity analysis with and without abstract/presentation data in the analysis. There is a need for research into the development of search strategies specific to the identification of studies available as conference abstracts and presentations in TARs. Such strategies may include guidance with regard to identification of relevant electronic databases and appropriate conference sites relevant to certain clinical areas. As there are limited case studies included in this report, analyses should be repeated as more TARs accrue, or include the work of other international HTA groups. abstract_id: PUBMED:36152950 Towards a unified search: Improving PubMed retrieval with full text. Objective: A significant number of recent articles in PubMed have full text available in PubMed Central®, and the availability of full texts has been consistently growing. However, it is not currently possible for a user to simultaneously query the contents of both databases and receive a single integrated search result. In this study, we investigate how to score full text articles given a multitoken query and how to combine those full text article scores with scores originating from abstracts to achieve an overall improved retrieval performance. Materials And Methods: For scoring full text articles, we propose a method to combine information coming from different sections by converting the traditionally used BM25 scores into log odds ratio scores, which can be treated uniformly. We further propose a method that successfully combines scores from two heterogeneous retrieval sources - full text articles and abstract-only articles - by balancing the contributions of their respective scores through a probabilistic transformation.
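Note on method: the section-score calibration just described lends itself to a compact sketch. The coefficients, section names, and the final probability mapping below are invented placeholders standing in for functions that the paper trains on click data, so this shows the shape of the computation rather than the authors' implementation.

```python
# Illustrative sketch: per-section BM25 scores are mapped onto a common
# log-odds-of-relevance scale before being combined (cf. PUBMED:36152950).
import math

# Hypothetical calibration, log_odds = a * bm25 + b, reflecting that the
# same raw BM25 value is worth more in a title than deep in the body text.
SECTION_CALIBRATION = {
    "title":    (0.9, -1.0),
    "abstract": (0.7, -1.5),
    "body":     (0.3, -2.5),
}

def section_log_odds(section, bm25):
    """Convert a raw BM25 section score to a calibrated log odds ratio."""
    a, b = SECTION_CALIBRATION[section]
    return a * bm25 + b

def full_text_score(section_bm25, top_k=3):
    """Sum the top-k calibrated section scores (the paper found k = 3 best)."""
    calibrated = sorted(
        (section_log_odds(sec, s) for sec, s in section_bm25.items()),
        reverse=True,
    )
    return sum(calibrated[:top_k])

def to_probability(log_odds):
    """Map a log-odds score to P(relevant) so that documents scored from
    heterogeneous sources (full text vs abstract-only) can rank together."""
    return 1.0 / (1.0 + math.exp(-log_odds))

full_text_doc = {"title": 4.2, "abstract": 6.1, "body": 9.8}
abstract_only_bm25 = 7.3
p_full = to_probability(full_text_score(full_text_doc))
p_abs = to_probability(section_log_odds("abstract", abstract_only_bm25))
print(f"full-text doc: {p_full:.3f}, abstract-only doc: {p_abs:.3f}")
```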
We use PubMed click data that consists of queries sampled from PubMed user logs, along with a subset of retrieved and clicked documents, to train the probabilistic functions and to evaluate retrieval effectiveness. Results And Conclusions: Random ranking achieves a 0.579 MAP score on our PubMed click data. BM25 ranking on PubMed abstracts improves the MAP by 10.6%. For full text documents, experiments confirm that BM25 section scores are of different value depending on the section type and are not directly comparable. Naïvely using the body text of articles along with abstract text degrades the overall quality of the search. The proposed log odds ratio scores normalize and combine the contributions of occurrences of query tokens in different sections. By including full text where available, we gain another 0.67%, or a 7% relative improvement over abstract alone. We find an advantage in the more accurate estimate of the value of BM25 scores depending on the section from which they were produced. Taking the sum of the top three section scores performs the best. abstract_id: PUBMED:31207219 Abstracts of low back pain trials are poorly reported, contain spin of information and are inconsistent with the full text: An overview study. Objective: To investigate trial abstracts evaluating treatments for low back pain with regard to completeness of reporting, spin (i.e., interpretation of study results that overemphasizes the beneficial effects of the intervention), and inconsistencies in data with the full text. Data Sources: The search was performed on the Physiotherapy Evidence Database (PEDro) in February 2016. Study Selection: This is an overview study of a random sample of 200 low back pain trials published between 2010 and 2015. The languages of publication were restricted to English, Spanish and Portuguese. Data Extraction: Completeness of reporting was assessed using the CONSORT for Abstracts checklist (CONSORT-A). Spin was assessed using a SPIN-checklist. Consistency between abstract and full text was assessed by applying the assessment tools to both the abstract and full text of each trial and calculating inconsistencies in the summary score (paired t test) and agreement in the classification of each item (Kappa statistics). Methodological quality was analyzed using the total PEDro score. Data Synthesis: The mean number of fully reported items for abstracts using the CONSORT-A was 5.1 (SD 2.4) out of 15 points and the mean number of items with spin was 4.9 (SD 2.6) out of 7 points. Abstract and full text scores were statistically inconsistent (P=0.01). There was slight to moderate agreement between items of the CONSORT-A in the abstracts and full text (mean Kappa 0.20, SD 0.13) and fair to moderate agreement for items of the SPIN-checklist (mean Kappa 0.47, SD 0.09). Conclusions: The abstracts were incomplete, contained spin, and were inconsistent with the full text. We advise health care professionals to avoid making clinical decisions based solely upon abstracts. Journal editors, reviewers and authors are jointly responsible for improving abstracts, which could be guided by amended editorial policies. abstract_id: PUBMED:37168536 Reporting inconsistency between published conference abstracts and article abstracts of randomised controlled trials in prosthodontics presented at IADR general sessions. Background: There is commonly a discrepancy between conference abstracts and published article abstracts in prosthodontic randomized controlled trials (RCTs), which may mislead scholars who attend conferences.
Objective: To identify the characteristics predicting inconsistency between conference abstracts and published article abstracts in prosthodontic RCTs. Methods: The conference abstracts of prosthodontic RCTs presented at the IADR general sessions from 2002 to 2015 were searched. Electronic searches of MEDLINE, EMBASE, the Cochrane Library, and Google Scholar databases were conducted to match full-text publications for conference abstracts. Two investigators extracted basic characteristics and assessed the consistency and reporting quality independently and in duplicate. A linear regression model was used to analyze the predictors of inconsistency. Results: A total of 147 conference abstracts were matched with published articles. Results for the secondary outcome measure, statistical analysis, and precision measure were less than 50% consistent, and nearly 5% of the studies had opposite conclusions. Multiple linear regression analysis showed that three factors were correlated with lower inconsistency, including continent of origin (p = 0.011), presentation type (p = 0.017), and difference in reporting quality (p = 0.013). Conclusion: Conference attendees should treat the findings of conference abstracts cautiously. Researchers should improve the precision of the information delivered at conferences. We recommend that the authors of RCTs explain the primary differences between conference abstracts and article abstracts. abstract_id: PUBMED:32748679 Publication rate of abstracts from presentations at the British Orthodontic Conference 2009-2014. Objective: The primary objective of this study was to investigate the publication rate of scientific abstracts presented at the British Orthodontic Conference 2009-2014. Predictors of full-text publication after presentation of abstracts were explored. Design: Cross-sectional study. Materials And Methods: Details of abstracts were retrieved from the conference programmes. Abstracts were screened and full-text publications identified by a single author, with discrepancies discussed. Two electronic databases were searched to identify full-text publication of abstracts presented at the British Orthodontic Conference during 2009-2014. Study characteristics were recorded in a prespecified data collection sheet. Descriptive and correlation statistics were calculated. Multivariable Cox regression modelling was implemented in order to assess the effect of predictors on the instant probability of publication. Results: A total of 225 abstracts (148 poster presentations and 77 oral presentations) were identified. Observational studies were frequent (60%) and significant results were reported in 38.7% of abstracts. The rate of full-text publication after abstract presentation was 46.2%, with a mean time to publication of 18.3 ± 18.7 months. Authors based at both universities and hospitals (hazard ratio: 2.63, 95% confidence interval [CI] 1.26-5.47, P=0.01) had a higher instant probability of publication compared to university only, whereas diagnostic studies (hazard ratio: 0.18, 95% CI 0.04-0.74, P=0.02) had a lower instant probability of publication compared to systematic reviews. Conclusion: Over 50% of study abstracts presented at the British Orthodontic Conference during 2009-2014 remain unpublished. Author affiliation and study type appear to influence full-text publication. In order to reduce publication bias within the literature, publication of full-text articles by authors of presented abstracts is encouraged.
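Note on method: hazard ratios like those reported just above come from a Cox proportional hazards model of time to full-text publication. The sketch below shows what such a model looks like in code; the tiny inline dataset and column names are invented placeholders, not the study's data, and the third-party lifelines package is assumed.

```python
# Hedged sketch of a multivariable Cox regression on time-to-publication
# (cf. PUBMED:32748679). One row per presented abstract; invented data.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "months_to_publication":   [6, 18, 30, 12, 48, 24, 36, 60],
    "published":               [1, 1, 0, 1, 0, 1, 1, 0],   # 1 = full text appeared
    "university_and_hospital": [1, 0, 1, 1, 0, 0, 0, 1],   # author affiliation
    "diagnostic_study":        [0, 0, 1, 0, 1, 0, 1, 0],   # study type flag
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months_to_publication", event_col="published")

# exp(coef) in the summary is the hazard ratio: values > 1 mean a higher
# instant probability of publication at any given time point.
print(cph.summary[["coef", "exp(coef)"]])
```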
abstract_id: PUBMED:28650522 Full text publication rates of research abstracts presented at the European Society of Endodontology (ESE) Congresses in the last 20 years. Aim: To evaluate the full-text publication rates of scientific research abstracts presented at the European Society of Endodontology (ESE) Congresses held between 1993 and 2013 (a total of 11 occasions) and to determine factors associated with the manuscripts. Methodology: An electronic database search was conducted from January 2015 to December 2016 to identify full-text English-language publications of the research abstracts presented at the last 11 ESE Biennial Congresses from 1993 to 2013. For each occasion, research abstract information was retrieved from the International Endodontic Journal (IEJ) through the official website of the ESE, and the following parameters for each abstract presentation were recorded: year of presentation, first author's affiliation, geographic origin, and type of study. Following full-text article identification, additional information was recorded, such as year and journal of publication, elapsed time until full publication, and number of authors per presentation and publication. Results: A total of 1165 research abstracts were presented, of which 401 (34.4%) were finally published as full-length articles. Overall, 235 articles (58.6%) were published either in the International Endodontic Journal (IEJ, 35.7%) or the Journal of Endodontics (JOE, 22.9%). The mean time between abstract presentation and full-text publication was 18.95 months. Munich (2001) had the highest publication rate (44%), whereas Lisbon (2013) had the highest number of published articles (77). Turkey was the country with the highest number of published abstracts (56). However, the Netherlands was the country with the highest number of publications relative to the number of presentations (21/26; 80.7%). Differences in authorship between presentation and full publication were found in 179 (44.6%) articles. Conclusions: A substantial number of research abstracts presented at ESE congresses were not published in peer-reviewed journals. Authors prefer to publish their research papers in international journals with a high impact factor. abstract_id: PUBMED:22223965 "Free full text articles": where to search for them? References form the backbone of any medical literature. Presently, because of high inflation, it is very difficult for any library/organization/college to purchase all journals. The situation is even worse for individuals, such as private practitioners. The solution lies in the free availability of full-text articles. Here, the authors share their experience of the accessibility of free full-text articles. abstract_id: PUBMED:31722856 Full-text publication rate of studies reported as 2013 SoFCOT meeting abstracts. Background: Publication of scientific work, although mandatory to ensure dissemination of novel research findings and obtain further funding, can require considerably more time and effort compared to conference presentations. Several national or scientific societies have determined the publication rate of studies reported at their meetings. The French Society for Orthopaedic and Trauma Surgery (Société française de chirurgie orthopédique et traumatologique, SoFCOT) has not yet measured this parameter.
The objective of this study was to (i) evaluate the full-text publication rate of studies accepted in abstract form for podium presentations or posters at the 2013 SoFCOT meeting and (ii) identify characteristics of abstracts associated with subsequent full-text publication. Hypothesis: The full-text publication rate of abstracts accepted for the 2013 SoFCOT meeting was equal to or greater than the mean reported by national societies, i.e., 44.5%. Material And Methods: Publication rates of the 503 studies reported as abstracts at the 2013 SoFCOT meeting were studied. The time horizon was thus at least 5 years. The topic was orthopaedic surgery for 315 (62.6%) abstracts, trauma surgery for 153 (30.4%) abstracts, and fractures in elderly patients (the cross-field theme for that year) for 35 (7.0%) abstracts. Reporting was as a podium presentation for 275 (54.7%) abstracts, an e-poster for 205 (40.8%) abstracts, an instructional course lecture for 20 (4.0%) abstracts, a symposium for 2 (0.4%) abstracts, and a round table for 1 (0.2%) abstract. Results: The full-text publication rate was 35.6% overall and 47.1% (139 publications) for podium presentations. Mean time from podium or poster presentation at the SoFCOT meeting to full-text publication was 1.2 ± 1.5 years (range: -2.5 to 6.1 years). The full-text publications had 0.8 ± 2.3 (range: -6 to 11) more authors compared to the abstract. They appeared in 54 journals with a mean impact factor of 1.9 ± 1.3 (range: 0.25 to 13.77; median: 1.41; interquartile range: 1.26 to 2.47). Subgroup comparisons showed that full-text publication was more common for prospective than retrospective studies (50.0% versus 30.5%, p < 0.0001) and for studies showing a significant difference (48.6% versus 33.0%, p < 0.0001). Systematic reviews were more often published in full than were anecdotal case reports. The full-text publication rate was also higher for studies reported as podium presentations than as e-posters (47.1% versus 17.6%, p < 0.0001). Finally, studies of orthopaedic surgery were more often published in full than were studies of trauma surgery (39.7% versus 28.2%, p = 0.033). Discussion: The 5-year full-text publication rate of studies reported as abstracts at the 2013 SoFCOT meeting was consistent with previously reported data. The impact factors of the journals in which the studies were published are evidence of the high quality of the information shared at SoFCOT meetings. Level Of Evidence: IV, systematic retrospective analysis. abstract_id: PUBMED:29447159 A comprehensive and quantitative comparison of text-mining in 15 million full-text articles versus their corresponding abstracts. Across academia and industry, text mining has become a popular strategy for keeping up with the rapid growth of the scientific literature. Text mining of the scientific literature has mostly been carried out on collections of abstracts, due to their availability. Here we present an analysis of 15 million English scientific full-text articles published during the period 1823-2016. We describe the development in article length and publication sub-topics during these nearly 200 years. We showcase the potential of text mining by extracting published protein-protein, disease-gene, and protein subcellular associations using a named entity recognition system, and quantitatively report on their accuracy using gold standard benchmark data sets.
We subsequently compare the findings to corresponding results obtained on 16.5 million abstracts included in MEDLINE and show that text mining of full-text articles consistently outperforms using abstracts only. Answer: The case study examining the comparison of data from conference abstracts and full-text articles in health technology assessment (HTA) of rapidly evolving technologies found that there are indeed inconsistencies between the data presented in conference abstracts/presentations and those reported in subsequent published reports. The overall quality of reporting in abstracts and presentations was poor, particularly in abstracts, with incomplete or inconsistent reporting of data. These inconsistencies could impact the final assessment results. Sensitivity analyses indicated that using data only from published papers would not have altered the direction of any of the results when compared with those using published and abstract data. However, the statistical significance of three out of ten results would have changed. If conference abstracts and presentations were excluded from the early analysis, the direction of effect and statistical significance would have changed in one result. Despite these discrepancies, the overall conclusions of the original analysis would not have been altered (PUBMED:16984055). The study also highlighted that incomplete reporting in conference abstracts and presentations limits the ability of reviewers to assess confidently the methodological quality of trials. It recommended that data discrepancies identified across sources included in TARs should be highlighted and their impact assessed and discussed. Sensitivity analyses should be carried out with and without abstract/presentation data included in the analysis (PUBMED:16984055). Another study assessing the use of data from conference abstracts and presentations in HTAs for the National Institute for Health and Clinical Excellence (NICE) found variations in policy and practice across TAR groups regarding searching for and inclusion of studies available as conference abstracts/presentations. The study suggested that TAR teams should state explicitly their search strategies for identifying conference abstracts and presentations, their methods for assessing these for inclusion, and where appropriate how the data were used and their effect on the results. It also noted that comprehensive searching for trials available as conference abstracts/presentations is time-consuming and may be of questionable value. However, there may be a case for searching for and including abstract/presentation data if other sources of data are limited (PUBMED:16487455). In conclusion, while the inclusion of data from conference abstracts and presentations in HTAs of rapidly evolving technologies does not seem to alter the overall conclusions of analyses, it can affect the statistical significance of some results and the direction of effect in others. Therefore, it is important to consider these data sources with caution and conduct sensitivity analyses to understand their impact on the overall assessment.
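Note on method: the sensitivity analysis recommended throughout this entry (pooling once with and once without abstract-sourced trials) is mechanical enough to sketch. The inverse-variance fixed-effect pooling below uses invented effect sizes, not data from the reviews discussed; it only illustrates how excluding abstract-only trials can move a pooled estimate across the significance boundary.

```python
# Toy fixed-effect meta-analysis run with and without trials that were
# available only as conference abstracts. Log odds ratios and standard
# errors below are invented placeholders.
import math

trials = [
    # (name, log_odds_ratio, standard_error, abstract_only)
    ("Trial A", -0.40, 0.20, False),
    ("Trial B", -0.15, 0.25, False),
    ("Trial C", -0.55, 0.30, True),
    ("Trial D", -0.35, 0.28, True),
]

def pool(rows):
    """Inverse-variance fixed-effect pooled log odds ratio with a 95% CI."""
    weights = [1.0 / se ** 2 for _, _, se, _ in rows]
    pooled = sum(w * es for w, (_, es, _, _) in zip(weights, rows)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

for label, rows in [
    ("All trials (abstracts included)", trials),
    ("Full publications only", [t for t in trials if not t[3]]),
]:
    est, lo, hi = pool(rows)
    verdict = "significant" if hi < 0 or lo > 0 else "not significant"
    print(f"{label}: {est:.2f} [{lo:.2f}, {hi:.2f}] -> {verdict}")
```

With these placeholder inputs, the pooled estimate is significant only when the abstract-sourced trials are included, which is exactly the kind of shift in statistical significance the case study reports.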
Instruction: Could transurethral resection of the prostate using the TURis system take over conventional monopolar transurethral resection of the prostate? Abstracts: abstract_id: PUBMED:24958486 Could transurethral resection of the prostate using the TURis system take over conventional monopolar transurethral resection of the prostate? A randomized controlled trial and midterm results. Objective: To assess whether bipolar transurethral resection of the prostate using the TURis (Olympus, Tokyo, Japan) system demonstrates comparable efficacy and safety, reporting 36 months of follow-up findings. Methods: The trial was registered at the University Hospital Medical Information Network Clinical Trials Registry in Japan (trial number UMIN 000010801). Patients were randomly selected to undergo transurethral resection of the prostate using either the TURis or the conventional monopolar technique. Primary end points were safety according to operation time, decline of sodium and hemoglobin levels, clot retention, and catheterization time. Secondary end points were efficacy findings for patients after 36 months of follow-up. Results: A total of 136 patients were enrolled. Mean operation times were significantly prolonged in the TURis group (68.4 and 79.5 minutes for the monopolar and TURis groups, respectively; P = .048). No significant differences between groups were seen in the decline of hemoglobin, hematocrit, or perioperative transfusion rates, whereas clot retention (grade 2) after the treatment seemed to occur more often in the monopolar group (7 of 62 [12.3%] in the monopolar group vs 1 of 63 [1.8%] in the TURis group; P = .061). No case presented symptomatic transurethral resection syndrome in either group. Conclusion: Continued efficacy at 36 months after the treatment could be confirmed for the first time for the TURis system, which also seems preferable as it produced more clinically favorable outcomes. Nevertheless, the TURis system required significantly more resection time, so it might not be a panacea for the treatment of benign prostatic obstruction, especially in patients with larger prostatic volumes. abstract_id: PUBMED:31179883 Perioperative inadvertent hypothermia among urology patients who underwent transurethral resection with either TURis or transurethral resection of the prostate method. Purpose: The purpose of the study was to investigate the incidence of perioperative hypothermia in urology patients undergoing transurethral resection with either the TURis or the transurethral resection of the prostate method, and to identify the risk factors responsible for the occurrence of intraoperative hypothermia in these patients. Methods: It was a randomized prospective study. A total of 168 patients of American Society of Anesthesiologists physical status I-III were scheduled for transurethral resection with either the TURis or the transurethral resection of the prostate method. We measured the core body temperature before (preoperative), during (perioperative) and after (postoperative) the surgery. Age, body mass index, American Society of Anesthesiologists score, duration of surgery, preoperative prostatic volume, and vital signs were also recorded. Results: The prevalence of inadvertent hypothermia was 64.1% for the TURis group and 60% for the transurethral resection of the prostate group.
Hypothermic patients in the TURis group were significantly older (87.7 ± 1.7 vs 68 ± 6.7 years, p < 0.05) and had a lower body mass index (26.9 ± 3.6 vs 29.2 ± 2.7, p < 0.05), while hypothermic patients in the transurethral resection of the prostate group were significantly older (86 ± 1.1 vs 70 ± 7.4 years, p < 0.05) and had a notably longer duration of surgery (140.6 ± 28.9 vs 120.3 ± 14.3 min, p < 0.05) than normothermic patients. Conclusion: Perioperative inadvertent hypothermia is of high incidence among urology patients who undergo transurethral resection with either the TURis or the transurethral resection of the prostate method. We recommend monitoring the core body temperature of all these patients, especially those of advanced age and lower body mass index. abstract_id: PUBMED:17416453 A prospective randomized study comparing monopolar and bipolar transurethral resection of prostate using transurethral resection in saline (TURIS) system. Objectives: To compare transurethral resection of the prostate (TURP) using the monopolar and the bipolar transurethral resection in saline (TURIS) systems. Materials And Methods: A prospectively randomized study was conducted between January 2004 and January 2005. Patient demographics and indications for surgery were recorded. The safety end points studied were the occurrence of complications and the decline in postoperative serum sodium (Na(+)) and hemoglobin (Hb) levels. Efficacy end points were resection time, weight of resected prostate tissue, and improvement in International Prostate Symptoms Score (IPSS) and maximum flow rate (Q(max)) in patients' uroflow over 12 mo. Results: One hundred consecutive patients were randomized and completed the study, with 52 patients in the monopolar TURP group and 48 in the TURIS group. At baseline, the two groups were comparable; they had at least 12 mo of follow-up. Mean resection time and mean weight of resected prostate tissue were comparable for both groups. Declines in the mean postoperative serum Na(+) for the TURIS and monopolar TURP groups were 3.2 and 10.7 mmol/l, respectively (p < 0.01). However, there was no statistical difference in the decline in postoperative Hb between the two groups. There were two cases of clinically significant transurethral resection syndrome in the monopolar group. Urethral strictures were observed in three patients in the TURIS group and one patient in the monopolar group. The IPSS and Q(max) improvements were comparable between the two groups at 12 mo of follow-up. Conclusions: Bipolar TURP using the TURIS system is clinically comparable to monopolar TURP at 1 yr, with an improved safety profile. abstract_id: PUBMED:24485082 Bipolar versus monopolar transurethral resection of the prostate: a prospective randomized study. Purpose: To compare bipolar with standard monopolar transurethral resection of the prostate (TURP). Material And Methods: A prospectively randomized study was conducted between January 2010 and September 2011. Primary end points studied were efficacy (maximum flow rate [Qmax], International Prostate Symptom Score) and safety (adverse events, decline in postoperative serum sodium [Na+] and haemoglobin [Hb] levels). Secondary end points were operation time and duration of irrigation, catheterization, and hospitalization. Results: Sixty consecutive patients were randomized and completed the study, with 29 patients in the monopolar TURP group and 31 in the TURIS group.
At baseline, the two groups were comparable in age, prostate volume, mean prostate-specific antigen value, and International Prostate Symptom Score, and both had at least 12 months of follow-up. Declines in the mean postoperative serum Na+ for the bipolar and monopolar TURP groups were 1.2 and 8.7 mmol/L, respectively. However, there was no statistical difference in the decline in postoperative Hb between the two groups. The mean catheterization time was 26.6 and 52 hours in the bipolar and standard groups, respectively. This difference was statistically significant, as was the difference in the time to hospital discharge. The IPSS and Qmax improvements were comparable between the two groups at 12 months of follow-up. Conclusion: No clinically relevant differences in short-term efficacy exist between the two techniques, but bipolar TURP is preferable due to a more favorable safety profile and shorter catheterization duration. abstract_id: PUBMED:32223095 Monopolar transurethral enucleation and resection of the prostate: Status quo of its application and studies. Transurethral enucleation of the prostate allows more complete excision of the proliferative glands at the anatomical level and has unique advantages over traditional surgical procedures, such as better surgical results, a lower recurrence rate, and higher patient satisfaction. At present, transurethral laser enucleation of the prostate has limited application in many grass-roots hospitals because of the high price of laser and plasma equipment and a high incidence of postoperative urinary incontinence. In this context, monopolar transurethral enucleation and resection of the prostate (mTUERP), which can be performed with the equipment used for transurethral resection of the prostate (TURP), has come to the attention of clinicians and may become a real alternative to TURP. This paper presents an overview of the development and present status of mTUERP. abstract_id: PUBMED:22944063 Transurethral vapor enucleation and resection of prostate with TURis button electrode. Objective: To describe an efficient and effective method of using the Olympus TURis button plasma vaporization electrode plus a loop electrode for transurethral vapor enucleation and resection of the prostate. Methods: Between July 2011 and October 2011, the investigators performed transurethral vapor enucleation and resection of the prostate using the Olympus TURis button plasma vaporization electrode plus a loop electrode in 16 consecutive patients at our institution. The parameters of prostate weight, International Prostate Symptom Score (IPSS), quality of life (QOL), operative duration, blood loss volume, catheterization period, duration of hospitalization, perioperative complications, and the weight of enucleated tissue were evaluated. IPSS and QOL were recorded during the follow-up. Results: No patient had significant blood loss or signs of transurethral resection syndrome. The mean patient age was 67.3 ± 8.1 years. Mean preoperative prostate weight was 49 ± 24 g (range: 19-91) and mean resected tissue weight 36 ± 16 g (range: 10-50). Serious complications were not observed. Operative duration was 116 ± 31 minutes, mean catheter time 4.9 ± 1.8 days, and the duration of hospitalization was 16.6 ± 5.5 days. Transurethral vapor enucleation and resection of the prostate induced significant, pronounced, immediate, and lasting improvement in IPSS (15.6 ± 6.8 vs 6.7 ± 2.4, P < 0.01) and QOL (3.4 ± 1.4 vs 1.6 ± 0.6, P < 0.01).
Conclusion: Transurethral vapor enucleation and resection of the prostate with the Olympus TURis plasma button electrode is a safe, effective, and thorough surgical method for the treatment of benign prostatic hyperplasia. abstract_id: PUBMED:22291832 Comparison of the haemostatic properties of conventional monopolar and bipolar transurethral resection of the prostate in patients on oral anticoagulants. Introduction: The aim of this study is to compare the haemostatic properties of conventional monopolar resection (TURP) and bipolar transurethral resection in saline (TURIS) of the prostate in patients on chronic oral anticoagulants. Material And Methods: Out of a cohort of 550 endoscopic resections for bladder outlet obstruction, 176 patients on chronic oral anticoagulant therapy required endoscopic resection by either monopolar TURP or bipolar TURIS technology. Changes in haemoglobin, blood transfusion, and clot retention were compared between the two groups. Results: The mean postoperative change in haemoglobin level was -1.21 ± 0.92 mg/dl in the TURP group compared with -1.29 ± 0.99 mg/dl in the TURIS group (p = 0.603). The need for blood transfusions and the mean number of units transfused did not differ significantly between the two groups. Clot retention appeared in 12 patients (15%) in the TURP group compared with 13 patients (13%) in the TURIS group (p = 0.828). Conclusions: Despite promising experimental results of better haemostasis and deeper coagulation, bipolar technology did not reduce blood loss compared with conventional monopolar technology in this study group of patients on oral anticoagulation therapy. Patients on oral anticoagulants suffered more episodes of clot retention, which sometimes resulted in re-hospitalisation. abstract_id: PUBMED:24909399 Incidence of urethral stricture after bipolar transurethral resection of the prostate using TURis: results from a randomised trial. Objectives: To assess whether bipolar transurethral resection of the prostate (B-TURP) using the TURis system has a level of efficacy and safety similar to that of traditional monopolar transurethral resection of the prostate (M-TURP), and to evaluate the impact of the TURis system on postoperative urethral stricture rates over a 36-month follow-up period. Patients And Methods: A total of 136 patients with benign prostatic obstruction were randomised to undergo either B-TURP using the TURis system or conventional M-TURP, and were followed regularly for 36 months after surgery. The primary endpoint was safety, which included the long-term complication rate of postoperative urethral stricture. The secondary endpoint was the follow-up measurement of efficacy. Results: In the peri-operative findings, no patient in either treatment group presented with transurethral resection syndrome, and the declines in haemoglobin and haematocrit levels were similar. The mean operation time was significantly longer in the TURis treatment group than in the M-TURP group (79.5 vs 68.6 min; P = 0.032), and postoperative clot retention was more likely to be seen after M-TURP (P = 0.044). Similar efficacy findings were maintained throughout the 36 months, but a significant difference in postoperative urethral stricture rates between the groups was detected (6.6% in M-TURP vs 19.0% in TURis; P = 0.022).
After stratifying patients according to prostate volume, there was no significant difference between the two treatment groups with regard to urethral stricture rates in patients with a prostate volume ≤70 mL (3.8% in M-TURP vs 3.8% in TURis), but the TURis group had a significantly higher urethral stricture rate than the M-TURP group in patients with a prostate volume >70 mL (20% in TURis vs 2.2% in M-TURP; P = 0.012). Furthermore, the mean operation time for TURis was significantly longer than for M-TURP in the subgroup of patients with a prostate volume >70 mL (99.6 vs 77.2 min; P = 0.011), but not in the subgroup of patients with a prostate volume ≤70 mL. Conclusion: The TURis system seems to be as efficacious and safe as conventional M-TURP, except for a higher incidence of urethral stricture in patients with larger preoperative prostate volumes. abstract_id: PUBMED:26818197 The TURis System for Transurethral Resection of the Prostate: A NICE Medical Technology Guidance. The transurethral resection in saline (TURis) system was notified by the company Olympus Medical to the National Institute for Health and Care Excellence's (NICE's) Medical Technologies Evaluation Programme. Following selection for medical technologies guidance, the company developed a submission of clinical and economic evidence for evaluation. TURis is a bipolar surgical system for treating men with lower urinary tract symptoms due to benign prostatic enlargement. The comparator is any monopolar transurethral resection of the prostate (mTURP) system. Cedar, a collaboration between Cardiff and Vale University Health Board, Cardiff University, and Swansea University in the UK, acted as an External Assessment Centre (EAC) for NICE to independently critique the company's submission of evidence. Eight randomised trials provided evidence for TURis, demonstrating efficacy equivalent to that of mTURP for improvement of symptoms. The company presented meta-analyses of key outcome measures, and the EAC made methodological modifications in response to the heterogeneity of the trial data. The EAC analysis found that TURis substantially reduced the relative risks of transurethral resection syndrome (relative risk 0.18 [95% confidence interval 0.05-0.62]) and blood transfusion (relative risk 0.35 [95% confidence interval 0.19-0.65]). The company provided a de novo economic model comparing TURis with mTURP. The EAC critiqued the model methodology and made modifications; the modified model found TURis to be cost saving at £70.55 per case for existing Olympus customers and cost incurring at £19.80 per case for non-Olympus customers. When an additional scenario, based on the only available data on readmission (due to any cause) from a single trial, was modelled, the estimated cost saving per case was £375.02 for existing users of Olympus electrosurgery equipment and £284.66 per case when new Olympus equipment would need to be purchased. Meta-analysis of eight randomised trials showed that TURis is associated with a statistically significantly reduced risk of transurethral resection syndrome and a reduced need for blood transfusion, two factors that may drive cost saving for the National Health Service. The clinical data are equivocal as to whether TURis shortens hospital stay. Limited data from a single study suggest that TURis may reduce the rate of readmission after surgery.
The NICE guidance supports adoption of the TURis technology for performing transurethral resection of the prostate in men with lower urinary tract symptoms due to benign prostatic enlargement. abstract_id: PUBMED:18721041 Technological advances in transurethral resection of the prostate: bipolar versus monopolar TURP. One of the most significant recent advances in transurethral resection of the prostate (TURP) is the incorporation of bipolar technology. Bipolar circuitry allows TURP to be performed in a normal saline environment, which addresses a fundamental concern of conventional monopolar TURP (i.e., the use of hypo-osmolar irrigation). As a result, the risks of dilutional hyponatremia and transurethral resection (TUR) syndrome are eliminated, allowing for longer and safer resection. This review discusses the principles and applications of electrosurgery in conventional monopolar as well as newer bipolar saline-based TURP systems. It also addresses their positive impact on patient safety and resident training. Answer: Transurethral resection of the prostate using the TURis system has shown efficacy comparable to that of conventional monopolar transurethral resection of the prostate (TURP), with some advantages in its safety profile. Studies have demonstrated that the TURis system can reduce the relative risks of transurethral resection syndrome and the need for blood transfusion, which are significant safety concerns associated with monopolar TURP (PUBMED:26818197). Additionally, bipolar TURP has been found to be clinically comparable to monopolar TURP at one year of follow-up, with an improved safety profile that includes a shorter catheterization duration (PUBMED:24485082). However, the TURis system is not without drawbacks. Operation times have been reported to be significantly longer with the TURis system than with monopolar TURP (PUBMED:24958486), and there is a higher incidence of urethral stricture in patients with larger preoperative prostate volumes when the TURis system is used (PUBMED:24909399). Furthermore, while the TURis system has been shown to be cost saving for existing Olympus customers, it may incur additional costs for non-Olympus customers (PUBMED:26818197). In terms of the incidence of perioperative hypothermia, there was no significant difference between patients undergoing TURis and those undergoing monopolar TURP, although the study recommended monitoring core body temperature in all of these patients, especially those of advanced age and with a lower body mass index (PUBMED:31179883). Overall, while the TURis system has certain advantages, particularly in terms of safety, it may not entirely replace conventional monopolar TURP, given its longer operation times and potential for higher costs in some settings.
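A closing illustrative note on the relative-risk figures cited above (e.g., 0.18 [95% confidence interval 0.05-0.62] for transurethral resection syndrome, PUBMED:26818197): the abstracts do not state the exact pooling method used, but confidence intervals of this kind are conventionally derived on the log scale from two-by-two event counts. The following is a minimal sketch of that standard approach, not the EAC's documented method; the symbols a (events among n1 TURis patients) and c (events among n2 monopolar TURP patients) are notation introduced here purely for illustration:

\[
RR = \frac{a/n_1}{c/n_2}, \qquad
SE_{\ln RR} = \sqrt{\frac{1}{a} - \frac{1}{n_1} + \frac{1}{c} - \frac{1}{n_2}},
\]
\[
95\%\ \text{CI} = \exp\bigl(\ln RR \pm 1.96 \cdot SE_{\ln RR}\bigr).
\]

On this reading, a relative risk of 0.18 corresponds to a relative risk reduction of roughly 82% versus monopolar TURP, and the width of the reported interval reflects the rarity of the pooled events.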