Instruction: Does dexmedetomidine prevent emergence delirium in children after sevoflurane-based general anesthesia? Abstracts: abstract_id: PUBMED:16324031 Does dexmedetomidine prevent emergence delirium in children after sevoflurane-based general anesthesia? Background: Emergence agitation or delirium (ED) is a frequent phenomenon in children recovering from general anesthesia (GA). Dexmedetomidine, an alpha2 receptor agonist, has analgesic and sedative properties that might be helpful in the management of ED. We studied the effects of a continuous perioperative infusion of 0.2 microg.kg(-1).h(-1) dexmedetomidine on the incidence of ED in 50 children aged 1-10 years scheduled for sevoflurane-based GA. Methods: Following inhalation induction of GA, the children were randomly assigned into dexmedetomidine or placebo Groups D and S, respectively. The infusion of 0.2 microg.kg(-1).h(-1) dexmedetomidine or an equal volume of saline was started after securing the airway. Depth of anesthesia was maintained by adjusting the concentration of sevoflurane to achieve a Bispectral Index Score of 40-60. Intraoperative hemodynamics were recorded every 5 min and the trachea was extubated at the end of the procedure. Perioperative pain management was determined by the blinded anesthesia team, and the study drug infusion was maintained for 15 min following the postanesthesia care unit (PACU) admission. ED and pain scores were evaluated by a blinded observer. Results: The incidence of ED was statistically significantly different between the two groups, 26% in Group D vs 60.8% in Group S (P = 0.036). Additionally, the number of episodes of ED was lower in Group D (P < 0.017). Pain scores and the times to extubate and discharge from PACU were the same. Conclusions: The perioperative infusion of 0.2 microg.kg(-1).h(-1) dexmedetomidine decreases the incidence and frequency of ED in children after sevoflurane-based GA without prolonging the time to extubate or discharge. abstract_id: PUBMED:25770644 The effect of KETODEX on the incidence and severity of emergence agitation in children undergoing adenotonsillectomy using sevoflurane-based anesthesia. Background: Postoperative emergence agitation (EA) is a common problem often observed in children undergoing general anesthesia. The purpose of this study was to evaluate whether a bolus of intraoperative low-dose ketamine followed by dexmedetomidine i.v. could reduce the incidence of EA in children undergoing adenotonsillectomy following sevoflurane-based anesthesia. Methods: A total of 92 children undergoing adenotonsillectomy, aged 3-7 years, were randomly allocated to receive either low-dose ketamine 0.15 mg/kg followed by dexmedetomidine 0.3 μg/kg i.v. (KETODEX, n=45) or volume-matched normal saline (Control, n=47), about 10 min before the end of surgery. Anesthesia was induced and maintained with sevoflurane. Postoperative pain and EA were assessed with the objective pain score (OPS) and the Pediatric Anesthesia Emergence Delirium scale (PAED), respectively. EA was defined as a PAED≥10 points. Recovery profile and postoperative complications were recorded. Results: The incidence and severity of EA were lower in the KETODEX group than in controls (11% vs. 47% and 2% vs. 13%, respectively; P<0.05). The frequency of fentanyl rescue was lower in the KETODEX group than in controls (13.3% vs. 38.3%, P<0.05). Heart rate during extubation was significantly higher in the control group compared with children who received KETODEX (P<0.05).
The incidence of postoperative pain was significantly less in the KETODEX group (15.5% vs. 63.8%, P<0.05). Times to interaction and extubation were significantly longer in the KETODEX group (P<0.05). Conclusion: KETODEX reduces the incidence and severity of EA in children undergoing adenotonsillectomy following sevoflurane-based anesthesia and provides smooth extubation. abstract_id: PUBMED:28338180 Influence of dexmedetomidine on cognitive function during the recovery period in children with general anesthesia. Objective: We studied the influence of dexmedetomidine on cognitive function in children during the recovery period of general anesthesia. Patients And Methods: Ninety-three children who underwent general anesthesia were selected and randomly divided into (1) the control group, (2) the dexmedetomidine group, and (3) the dezocine group. Fentanyl, propofol, and rocuronium were used in all patients to induce anesthesia, while sevoflurane inhalation and propofol were used to maintain anesthesia. In the control group, 20 ml NS were infused intravenously 10 min before anesthetic induction. In the dexmedetomidine group, 1.0 μg/kg dexmedetomidine in 20 ml was infused for 10 min. In the dezocine group, 0.1 mg/kg dezocine in 20 ml was infused for 10 min. Mean arterial blood pressure, average heart rate, and average oxygen saturation (SaO2) were compared at the following time points: end of surgery (T0), before extubation (T1), during extubation (T2), and 30 min after extubation (T3). The VAS scale, Ramsay sedation score, delirium grading scale and occurrence of adverse reactions at 30 min after extubation were recorded. The occurrence of postoperative cognitive dysfunction (POCD) and the expression of serum neuron-specific enolase (NSE) and IL-6 at postoperative days 1 and 7 were recorded. Results: Comparing mean arterial blood pressure, average heart rate, and average oxygen saturation (SaO2) at the different time points in the dexmedetomidine group, there were no statistically significant differences (p>0.05). The difference in the occurrence of adverse reactions in the different groups was statistically significant (p<0.05). The occurrence of postoperative cognitive dysfunction (POCD) at postoperative day 1 was significantly higher in the control group than in the other two groups (p<0.05), and on postoperative day 7 the differences were not statistically significant (p>0.05). Regarding the expression of neuron-specific enolase (NSE) and IL-6, the levels were highest in the control group, followed by the dezocine group (p<0.05). Conclusions: Dexmedetomidine is safer than dezocine in terms of hemodynamics, sedation, analgesia, degree of delirium, occurrence of adverse reactions, and postoperative cognitive dysfunction (POCD). The improvement in the occurrence of postoperative cognitive dysfunction (POCD) is related to the levels of serum neuron-specific enolase (NSE) and IL-6. abstract_id: PUBMED:29333775 Effect of dexmedetomidine on emergence agitation after general anesthesia in children undergoing odontotherapy in a day-surgery operating room. Objective: To study the effectiveness of dexmedetomidine used for general anesthesia maintenance in children undergoing odontotherapy in a day-surgery operating room in reducing the incidence of emergence agitation (EA). Methods: Eighty children undergoing odontotherapy under general anesthesia in a day-surgery operating room were randomized into two groups, group A (n=40) and group B (n=40).
Each patient in group A received a bolus dose of dexmedetomidine (1.0 μg·kg⁻¹, diluted with saline to 10 mL) pump-infused after intubation, followed by a maintenance dose of 0.1-0.4 mL·(kg·h)⁻¹ continued until 45 min before the end of the operation. Each patient in group B received a bolus dose of normal saline 10 mL pump-infused after intubation, followed by a maintenance dose of 0.1-0.4 mL·(kg·h)⁻¹ continued until 45 min before the end of the operation. Gender, age, weight, physical status according to the American Society of Anesthesiologists, perioperative heart rate (HR), mean arterial pressure (MAP), pulse oxygen saturation (SpO₂), sufentanil dosage, duration of surgery, time of extubation, time of regaining consciousness, and time to reach a modified Aldrete's score≥12 were recorded. Behavior in the postanesthesia care unit was rated on a four-point agitation scale. Results: Compared with group B, decreases in HR and MAP were observed in group A at the beginning of the operation, at 10 and 30 min and at 1 and 2 h after the beginning of the operation, and after extubation (P<0.05). Sufentanil dosage and the incidence of EA during recovery in group A were also lower than those in group B (P<0.05). Time to regain consciousness and time to reach a modified Aldrete's score≥12 in group A were longer than those in group B (P<0.05). No statistically significant differences were observed in the other indexes between the two groups. Conclusions: As an anesthetic used for general anesthesia maintenance in children undergoing odontotherapy in a day-surgery operating room, dexmedetomidine results in a low incidence of EA during recovery and more stable vital signs. abstract_id: PUBMED:28810649 Effect of dexmedetomidine combined with sufentanil on preventing emergence agitation in children receiving sevoflurane anesthesia for cleft palate repair surgery. The aim of the present study was to observe whether dexmedetomidine (DEX) combined with sufentanil decreased emergence agitation (EA) in children receiving sevoflurane anesthesia for cleft palate repair surgery. Children undergoing elective cleft palate repair surgery were randomly allocated into the DEX + sufentanil group (group DS; n=50) and the normal saline + fentanyl group (group SF; n=50). Patients in group DS were treated with 0.5 µg/kg DEX prior to induction of anesthesia, whereas patients in group SF received an equal volume of normal saline. Sufentanil (0.2 µg/kg) was administered to induce anesthesia and again 30 min before the end of surgery for patients in group DS. Fentanyl (2 µg/kg) was administered at the same time point for patients in group SF. Mean arterial pressure (MAP), heart rate (HR), duration of surgery and anesthesia, and the dosage of remifentanil were assessed. EA score, Pediatric Anesthesia Emergence Delirium (PAED) score and the Children and Infants Postoperative Pain Scale (CHIPPS) score were documented every 15 min in the post-anesthesia care unit (PACU). The number of cases requiring fentanyl (1 µg/kg) and the recovery profile data were analyzed. Compared with group DS (P<0.05) and the baseline (P<0.05), HR and MAP were significantly increased in group SF immediately following tracheal intubation and extubation. Mean values of maximum EA, PAED and CHIPPS scores were significantly reduced in group DS compared with group SF at 0 (P<0.01), 15 (P<0.05), and 30 min (P<0.05) after arrival at the PACU. The incidence of EA in group SF was significantly increased compared with group DS (P<0.05).
The dosage of remifentanil during the surgery and the number of cases requiring fentanyl (1 µg/kg) in group DS were significantly decreased compared with group SF (P<0.05). The findings of the present study suggested that DEX combined with sufentanil was able to effectively decrease the incidence of EA in children receiving sevoflurane anesthesia for cleft palate repair surgery. abstract_id: PUBMED:32245513 Effect of xenon and dexmedetomidine as adjuncts for general anesthesia on postoperative emergence delirium after elective cardiac catheterization in children: study protocol for a randomized, controlled, pilot trial. Background: Emergence delirium, a manifestation of acute postoperative brain dysfunction, is frequently observed after pediatric anesthesia and has been associated with the use of sevoflurane. Both xenon and dexmedetomidine possess numerous desirable properties for the anesthesia of children with congenital heart disease, including hemodynamic stability, lack of neurotoxicity, and a reduced incidence of emergence delirium. Combining both drugs has never been studied as a balanced-anesthesia technique. This combination allows the provision of anesthesia without administering anesthetic drugs against which the Food and Drug Administration (FDA) issued a warning for the use in young children. Methods/design: In this phase-II, mono-center, prospective, single-blinded, randomized, controlled pilot trial, we will include a total of 80 children aged 0-3 years suffering from congenital heart disease and undergoing general anesthesia for elective diagnostic and/or interventional cardiac catheterization. Patients are randomized into two study groups, receiving either a combination of xenon and dexmedetomidine or mono-anesthesia with sevoflurane for the maintenance of anesthesia. The purpose of this study is to estimate the effect size for xenon-dexmedetomidine versus sevoflurane anesthesia with respect to the incidence of emergence delirium in children. We will also describe group differences for a variety of secondary outcome parameters including peri-interventional hemodynamics, emergence characteristics, incidence of postoperative vomiting, and the feasibility of a combined xenon-dexmedetomidine anesthesia in children. Discussion: Sevoflurane is the most frequently used anesthetic in young children, but has been indicated as an independent risk factor in the development of emergence delirium. Xenon and dexmedetomidine have both been associated with a reduction in the incidence of emergence delirium. Combining xenon and dexmedetomidine has never been described as a balanced-anesthesia technique in children. Our pilot study will therefore deliver important data required for future prospective clinical trials. Trial Registration: EudraCT, 2018-002258-56. Registered on 20 August 2018. https://www.clinicaltrialsregister.eu. abstract_id: PUBMED:28470100 Dexmedetomidine Effect on Emergence Agitation and Delirium in Children Undergoing Laparoscopic Hernia Repair: a Preliminary Study. Objective To evaluate the safety and efficacy of dexmedetomidine (Dex) to prevent emergence agitation (EA) and delirium (ED) in children undergoing laparoscopic hernia repair under general anesthesia. Methods 100 children (1-5 years, 10-25 kg) were randomized into four groups: controls (saline) and intravenous Dex at 0.25, 0.5, and 1.0 µg/kg (D1, D2, D3, respectively). Dex/saline infusion was started following anesthesia. EA and ED were evaluated on a 5-point scale. 
Results For the C, D1, D2, and D3 groups, respectively, EA frequencies were 45.8%, 30.4%, 12%, 4%; ED frequencies 29.1%, 13%, 4%, 4%; CHIPPS scores 8, 6, 3, 3; sevoflurane doses from 13.2 ± 3.4 (controls) to 9.4 ± 3.5 ml (D3). Intervals until mask removal/spontaneous eye opening were significantly longer for D2 and D3 than controls. PACU stay was longer for D3. Conclusions There was significantly less postoperative EA and pain, with less sevoflurane required, using Dex. abstract_id: PUBMED:37115403 Effects of Pharmacological Intervention on Recovery After Sevoflurane Anesthesia in Children: a Network Meta-analysis of Randomized Controlled Trials. Sevoflurane, commonly administered to children as anesthesia, often leads to emergence delirium (ED). Currently, a consensus is lacking among clinicians regarding pharmacological interventions to improve recovery. To determine an effective approach, we compared the effects of several drugs in lowering the incidence of ED after sevoflurane anesthesia in children. We searched online databases for relevant randomized controlled trials (59 studies selected; 5199 NMA-eligible participants) and performed a frequentist network meta-analysis (NMA). This study was registered on PROSPERO (number CRD: 42022329939). All included studies had a low to moderate risk of overall bias. The incidence of ED after sevoflurane anesthesia in children differed according to the other drugs administered, and was ranked from high to low according to the surface under the cumulative ranking curve (SUCRA). Sufentanil (91.2%) and dexmedetomidine (77.6%) were more likely to reduce the incidence (SUCRA value) of ED, whereas placebo (6.5%), ramelteon (11.1%), and magnesium (18%) were less likely to reduce the incidence of ED. Remifentanil (89.3%) ranked first in shortening emergence time, followed by placebo (82.4%) and ketamine (69.7%). Placebo shortened extubation time, followed by remifentanil (66.5%) and alfentanil (61.4%). Sufentanil and remifentanil lowered sevoflurane-induced ED incidences among children and shortened the emergence time more effectively than other drugs. Most adjuvant drugs that are combined with sevoflurane either do not change or may even prolong extubation time. Further research and clinical trials are required to support and update these conclusions. abstract_id: PUBMED:26629139 Consequence of dexmedetomidine on emergence delirium following sevoflurane anesthesia in children with cerebral palsy. Children with cerebral palsy can demonstrate irritability following emergence from general anaesthesia. As well, an elevated rate of emergence delirium (ED) in children has been associated with the application of sevoflurane. The current study's intent was to evaluate the effect of a single bolus dose of dexmedetomidine, administered at the initial phase of sevoflurane-based anesthesia, on the occurrence and severity of ED in children afflicted with cerebral palsy. Eighty children aged two through twelve years (American Society of Anesthesiologists physical status I-II) participated in the study. They were anaesthetised with sevoflurane-based anesthesia while undergoing lower limb surgical procedures. The participants were equally distributed to either Group C or Group D. Group C was administered 10 ml saline 0.9%, and Group D was administered dexmedetomidine 0.5 μg·kg⁻¹. Five minutes prior to commencement of the surgical procedures, the participants received the prescribed drug doses under sevoflurane anesthesia.
In order to maintain BIS values in the range of 45 to 55, end-tidal sevoflurane concentrations (ETsev) were adjusted at 60-second intervals. After conclusion of the surgical procedures, in the post anesthesia care unit (PACU), the frequency of ED was gauged with Aono's four-point scale and the severity of ED was gauged with the pediatric anesthesia emergence delirium scale upon admission (T0) and after intervals of five minutes (T5), fifteen minutes (T15) and thirty minutes (T30). Extubation time, emergence time and length of stay in the PACU were assessed. Relative to Group C, participants of Group D exhibited noticeably shorter emergence time, extubation time and PACU length of stay. Prior to surgical incision, ETsev was elevated in the control group (1.9±0.2 vs 1.6±0.3; P = 0.023) and during the initial 20 minutes following the surgical incision (1.6±0.2 vs 1.1±0.2; P = 0.016). At admission (T0), five minutes (T5) and fifteen minutes (T15), Group D exhibited a lower occurrence and severity of ED than Group C. Dexmedetomidine, given as a bolus dose post induction, was effective in reducing the occurrence and severity of emergence delirium in children with cerebral palsy who were undergoing lower limb surgical procedures under sevoflurane anaesthesia. abstract_id: PUBMED:34739450 A Comparison of Dexmedetomidine and Propofol on Emergence Delirium in Children Undergoing Cleft Palate Surgery With Sevoflurane-Based Anesthesia. Abstract: Emergence delirium is a common complication after sevoflurane anesthesia and has a serious impact on children undergoing cleft palate surgery. The aim of this study was to compare the effect of propofol and dexmedetomidine on emergence delirium in children. Ninety children aged 8 to 24 months who underwent cleft palate repair were enrolled in the study. Children were randomly assigned to 3 groups after the induction: Group C (intravenous infusion of 0.9% saline), Group P (intravenous infusion of 2 mg/kg/hour propofol), and Group D (intravenous infusion of 0.5 μg/kg/hour dexmedetomidine). Emergence delirium was diagnosed using the pediatric anesthesia emergence delirium scale and pain using the face, legs, activity, cry, consolability scale. Heart rate, mean arterial pressure, respiratory recovery time, extubation time, post anesthesia care unit observation time, and adverse events were also evaluated. A total of 86 patients were analyzed. The incidence of emergence delirium was 20.1% in group D, 58.6% in group P and 85.7% in group C (P < 0.05). A lower face, legs, activity, cry, consolability score was seen in group D than in group P and group C (3.9 ± 1.1 versus 6.1 ± 0.9 and 7.1 ± 1.0, P < 0.05). The values of heart rate and mean arterial pressure during emergence in group P and group C were significantly higher than those in group D (all P < 0.05). These findings suggest that dexmedetomidine as a sedative, analgesic, and sympatholytic agent was superior to propofol in reducing the incidence of emergence delirium in children undergoing cleft palate surgery with sevoflurane-based anesthesia. Answer: Yes, dexmedetomidine appears to prevent emergence delirium (ED) in children after sevoflurane-based general anesthesia. Several studies have demonstrated its effectiveness in reducing the incidence and severity of ED.
For instance, a study showed that a continuous perioperative infusion of 0.2 microg.kg^(-1).h^(-1) dexmedetomidine decreased the incidence and frequency of ED in children after sevoflurane-based general anesthesia without prolonging the time to extubate or discharge (PUBMED:16324031). Another study found that a combination of low-dose ketamine followed by dexmedetomidine intravenously (KETODEX) reduced the incidence and severity of EA in children undergoing adenotonsillectomy following sevoflurane-based anesthesia (PUBMED:25770644). Further research indicated that dexmedetomidine is safer than dezocine in terms of hemodynamics, sedation, analgesia, degree of delirium, occurrence of adverse reactions, and postoperative cognitive dysfunction (POCD), with improvements in POCD related to the levels of serum neuron-specific enolase (NSE) and IL-6 (PUBMED:28338180). Additionally, dexmedetomidine used for general anesthesia maintenance in children undergoing odontotherapy in day-surgery operating rooms resulted in a low incidence of EA during recovery and more stable vital signs (PUBMED:29333775). Moreover, dexmedetomidine combined with sufentanil effectively decreased the incidence of EA in children receiving sevoflurane anesthesia for cleft palate repair surgery (PUBMED:28810649). A network meta-analysis of randomized controlled trials also supported the use of dexmedetomidine, among other drugs, in lowering the incidence of ED after sevoflurane anesthesia in children (PUBMED:37115403). In children with cerebral palsy, a single dose of dexmedetomidine at the start of sevoflurane-based anesthesia reduced the occurrence and severity of ED (PUBMED:26629139). Finally, dexmedetomidine was found to be superior to propofol in reducing the incidence of emergence delirium in children undergoing cleft palate surgery with sevoflurane-based anesthesia (PUBMED:34739450).
Instruction: Can conventional radiographs be used to monitor the effect of neoadjuvant chemotherapy in patients with osteogenic sarcoma? Abstracts: abstract_id: PUBMED:8717114 Can conventional radiographs be used to monitor the effect of neoadjuvant chemotherapy in patients with osteogenic sarcoma? Objective: The objective of this study was to assess the effectiveness of conventional radiography in predicting histopathologic response in patients with osteogenic sarcoma who were treated with preoperative chemotherapy. Design And Patients: The radiographs of 22 patients with an osteogenic sarcoma, taken before and after neoadjuvant chemotherapy, were reviewed. Tumour location, size, radiographic appearance, margination, cortical destruction and periosteal reaction were evaluated. The findings were correlated with the histopathologic response of the surgical specimen. Results: None of the findings proved to be of predictive value for the histopathologic response. Increase in tumour diameter and increase in ossification and/or calcification, which were seen in more than half of the patients, did not correlate with response. Conclusion: Conventional radiographs do not contribute to the identification of good or poor responders. abstract_id: PUBMED:36051544 Osteosarcoma mineralization changes on radiographs have moderate correlation to chemotherapy response using bone subtraction methodology. Background: Survival following a diagnosis of osteosarcoma is correlated strongly with response to chemotherapy. Mineralization changes seen on radiographs have been hypothesized to correlate with chemotherapy response; however, this has never been analyzed using modern techniques. Methods: Retrospective review of radiographs obtained before and after neoadjuvant chemotherapy was performed for 31 patients with high-grade, conventional osteosarcoma. Pre-chemotherapy (PreC) images and post-chemotherapy (PostC) images were co-registered. Tumor luminance measurements were normalized based on the non-tumor bone, and then the relative change in tumor mineralization was measured. Results: Mean luminance values for pre-chemotherapy non-tumor-affected bone and tumor were 0.63±0.12 and 0.65±0.12, respectively. Mean values for PostC non-tumor-affected bone and tumor were 0.59±0.14 and 0.64±0.10, respectively. Once normalized, osteosarcoma mineralization change showed a statistically significant moderate correlation with the tumor necrosis value (Pearson correlation coefficient ρ=0.36, P=0.038). Conclusions: A moderate, positive correlation was found between osteosarcoma mineralization change during chemotherapy and chemotherapy response. Further work is required to determine if these findings are prognostic by identifying best practice for image analysis and repeating this work with prospectively acquired digital radiographs using uniform technique and phantom normalization. abstract_id: PUBMED:37284107 Monitoring early responses to neoadjuvant chemotherapy and the factors affecting neoadjuvant chemotherapy responses in primary osteosarcoma. Background: This study sought to predict the early responses to neoadjuvant chemotherapy (NACT) of patients with primary conventional osteosarcoma (COS) using the apparent diffusion coefficient (ADC) and to evaluate the factors affecting the tumor necrosis rate (TNR).
Methods: The data of 41 patients who underwent magnetic resonance imaging (MRI) and diffusion-weighted imaging sequence scans before NACT, 5 days after the end of the first phase of NACT, and after the end of the whole course of chemotherapy were prospectively collected. ADC1 refers to the ADC before chemotherapy, ADC2 refers to the ADC after the first phase of chemotherapy, and ADC3 refers to the ADC before surgery. The change in values before and after the first phase of chemotherapy was calculated as follows: ADC2-1 = ADC2 - ADC1. The change in values before and after the last phase of chemotherapy was calculated as follows: ADC3-1 = ADC3 - ADC1. The change in values after the first phase and the last phase of chemotherapy was calculated as follows: ADC3-2 = ADC3 - ADC2. We recorded the patient characteristics, including age, gender, pulmonary metastasis, alkaline phosphatase (ALP), and lactate dehydrogenase (LDH). The patients were divided into the following 2 groups based on their postoperative histological TNR: (I) the good-response group (≥90% necrosis, n=13) and (II) the poor-response group (<90% necrosis, n=28). Changes in the ADCs were compared between the good-response and poor-response groups. The different ADCs between the 2 groups were compared, and a receiver operating characteristic analysis was performed. A correlation analysis was performed to assess the correlations of the clinical features, laboratory features, and different ADCs with patients' histopathological responses to NACT. Results: The ADC2 (P<0.001), ADC3 (P=0.004), ADC3-1 (P=0.008), ADC3-2 (P=0.047), and ALP before NACT (P=0.019) were significantly higher in the good-response group than in the poor-response group. The ADC2 [area under the curve (AUC) =0.723; P=0.023], ADC3 (AUC =0.747; P=0.012), and ADC3-1 (AUC =0.761; P=0.008) showed good diagnostic performance. Based on the univariate binary logistic regression analysis, the ADC2 (P=0.022), ADC3 (P=0.009), ADC2-1 (P=0.041), and ADC3-1 (P=0.014) were correlated with the TNR. However, based on the multivariate analysis, these parameters were not significantly correlated with the TNR. Conclusions: In patients with COS who are undergoing neoadjuvant chemotherapy, the ADC2 is a promising indicator for early prediction of tumor response to chemotherapy. abstract_id: PUBMED:25581912 The indications and efficacy of conventional chemotherapy in primary and recurrent sarcoma. Conventional chemotherapy can have a favorable impact on the natural history of disease for selected patients with primary high-risk bone and soft-tissue sarcomas. In particular, multidrug regimens are integral to the care of patients with the most aggressive histologies, including Ewing sarcoma, osteosarcoma, and non-pleomorphic rhabdomyosarcoma. Appropriately selected patients with high-risk, clinically localized soft-tissue sarcomas may also benefit from histology-tailored adjuvant or neoadjuvant therapy. For patients with recurrent disease, conventional chemotherapy is frequently the most appropriate first-line therapy; active drugs are discussed at length. Several new promising cytotoxic chemotherapeutic agents are currently under development, including aldoxorubicin, TH-302, and trabectedin. abstract_id: PUBMED:28764797 Conventional osteosarcoma of the mandible successfully treated with radical surgery and adjuvant chemotherapy after responding poorly to neoadjuvant chemotherapy: a case report.
Background: Osteosarcoma, the most common primary bone malignancy, has an extremely poor prognosis and a high rate of local recurrence and distal metastases. Because osteosarcomas of the head and neck region are rare, accounting for less than 10% of all osteosarcoma cases, limited information is available about their treatment and prognosis. Because of the high rate of distal metastases associated with extragnathic osteosarcoma, surgery combined with chemotherapy is currently considered essential in its treatment. However, the role of chemotherapy has not been well elucidated in the treatment of head and neck osteosarcoma because of the rarity of this condition. Case Presentation: In this report, we present the case of a 58-year-old Japanese woman with osteosarcoma of the mandible that was treated with radical surgery combined with neoadjuvant and adjuvant chemotherapy. Because the tumor showed rapid growth during neoadjuvant chemotherapy, neoadjuvant chemotherapy was suspended and surgical resection was performed, followed by adjuvant chemotherapy. No evidence of local recurrence and distal metastasis was found 14 months after initial treatment. Local control is considered a principal prognostic factor for head and neck osteosarcoma. Conclusions: Wide surgical excision should be considered a primary goal even during neoadjuvant chemotherapy, especially in cases that respond poorly to neoadjuvant chemotherapy. abstract_id: PUBMED:35086129 Response to neoadjuvant chemotherapy and survival of children and adolescents with high-grade osteosarcoma treated based on the EURAMOS-1 protocol. Background: High-grade osteosarcoma is the most common malignant bone tumor in children and adolescents. This study aimed to evaluate the histologic response to neoadjuvant chemotherapy and overall and event-free survival rates in patients < 21 years of age with a diagnosis of conventional osteosarcoma. Methods: We conducted an analytical and observational study of a cohort of patients < 21 years old with a diagnosis of conventional osteosarcoma treated with the OS INC-2009 protocol (based on EURAMOS-1). Descriptive analysis was performed, and overall and event-free survival rates were calculated by the Kaplan-Meier method. Results: Between April 2009 and October 2016, 84 patients with conventional osteosarcoma (mean age 13.5 ± 3.2 years) were admitted. Metastatic disease at diagnosis was observed in 36 patients (42.8%). Of the 41 patients who received neoadjuvant chemotherapy (50.6%), 15 (36.6%; confidence interval [95% CI]: 49.9-75.6) were classified as good responders and 26 (63%; 95% CI: 22.5-58.0) as poor responders. The 5-year overall and event-free survival rates in good responders were 88.8% (95% CI: 43.3-98.3) and 81.4% (95% CI: 43.5-95.0); in poor responders it was 66.5% (95% CI: 40.7-83.1) and 31.4% (95% CI: 13.8-50.7), respectively. Conclusions: Good responders' evaluation of histologic response to neoadjuvant chemotherapy showed improved overall and event-free survival rates. Specialized centers with multidisciplinary and comprehensive management are required to make the application of high-toxicity protocols feasible. abstract_id: PUBMED:17145581 Neoadjuvant treatment in osteosarcomas The dismal prognosis of osteosarcoma of the limbs treated by amputation alone can be improved dramatically by including chemotherapy in an interdisciplinary regimen. Neoadjuvant chemotherapy is commonly used in order to eradicate the micrometastases and to prepare for limb-salvage surgery. 
Today, conservative surgery is possible in the majority of patients, including young children, and relapse-free survival reaches 50-80%. High-dose methotrexate, doxorubicin, cisplatin and ifosfamide are considered the most active drugs against osteosarcoma. However, the best combination of drugs remains controversial, with pending questions regarding the number of drugs to give for the preoperative phase of treatment and the value of salvage chemotherapy in poor histological responders. Furthermore, the long-term toxicity of the drugs needs to be better assessed. In order to answer these questions, treatment should be administered within the scope of prospective multi-centre trials. abstract_id: PUBMED:11372948 Telangiectatic osteosarcoma of the extremity: neoadjuvant chemotherapy in 24 cases. Between April 1990 and December 1994, we treated 24 patients with telangiectatic osteosarcoma (TO) of the extremities with neoadjuvant chemotherapy using 2 protocols. Surgery consisted of limb salvage in 21 patients and amputation or rotation plasty in 3. The histologic response to chemotherapy was good (90% or more tumor necrosis) in 23 patients, of whom 12 had total necrosis. With a mean follow-up of 74 (60-96) months, 20 patients remained continuously free of disease and 4 relapsed with lung metastases. There were no local recurrences. Comparing these results to the ones achieved in 269 contemporary patients with conventional osteosarcoma of the extremities using the same protocols for chemotherapy, we found a significantly better histologic response to chemotherapy (96% vs 68% of good histologic response; p = 0.004) and disease-free survival (83% vs 55%; p = 0.01) in the TO group. We conclude that TO, once considered a lethal tumor, seems to be even more sensitive to chemotherapy than conventional osteosarcoma, and that most of these patients may be cured without amputation. abstract_id: PUBMED:27591503 Chemotherapy: Does Neoadjuvant or Adjuvant Therapy Improve Outcomes? Since preoperative chemotherapy has been clearly shown to improve outcomes for patients with Ewing sarcoma, rhabdomyosarcoma, and osteosarcoma, practitioners have attempted to extend the use of adjuvant/neoadjuvant chemotherapy to other types of adult soft tissue sarcoma. Given the high risk of distant recurrence and disease-specific death for patients with soft tissue sarcoma tumors larger than 10 cm, these patients should be considered candidates for neoadjuvant chemotherapy as well as investigational therapies. Yet, potential toxicity from cytotoxic chemotherapy is substantial, and there remains little consensus and wide variation regarding the indications for use of chemotherapy in the adjuvant/neoadjuvant setting. abstract_id: PUBMED:35398837 Clinical Outcome Following Prolonged Neoadjuvant Chemotherapy and Delayed Surgery in Osteosarcoma Patients: An Evidence-based Clinical Review. Background: The incidence of osteosarcoma reached 16.8 cases annually at dr. Cipto Mangunkusumo Hospital in 1995-2008. Previous studies suggested that prolonged neoadjuvant chemotherapy followed by delayed surgery improves the clinical outcome. Prolonged neoadjuvant chemotherapy followed by delayed surgery commonly occurs in Indonesia, as waiting times for diagnostic imaging and surgery delay the operation. The aim of this study is to observe the survival rate and the event-free survival rate of osteosarcoma patients with prolonged neoadjuvant chemotherapy and delayed surgery.
Methods: This review included randomized controlled trials (RCTs), cohort studies, retrospective cohort studies, clinical trials, and reviews. The literature search was conducted through MEDLINE (PubMed search engine), the Cochrane Central Register of Controlled Trials, and Scopus. The studies were screened and selected according to the inclusion criteria by the author and contributors independently. Results: Six studies were included in the qualitative synthesis of this study. Overall survival rate, event-free survival rate, histological response and recurrence, as well as neoadjuvant chemotherapy duration, cycles and regimen, were assessed in this study. Conclusion: Prolonged neoadjuvant chemotherapy and delayed surgery result in a 5-year survival rate of 43.2% to 96.6% and a 5-year event-free survival rate of 35.7% to 86.4%. Answer: Conventional radiographs have been studied to determine their utility in monitoring the effect of neoadjuvant chemotherapy in patients with osteogenic sarcoma. One study found that none of the radiographic findings, such as tumor location, size, appearance, margination, cortical destruction, and periosteal reaction, were predictive of the histopathologic response in patients treated with preoperative chemotherapy. This led to the conclusion that conventional radiographs do not contribute to the identification of good or poor responders to neoadjuvant chemotherapy in patients with osteogenic sarcoma (PUBMED:8717114). However, another study using a bone subtraction methodology and modern techniques to analyze mineralization changes on radiographs found a moderate, positive correlation between osteosarcoma mineralization change during chemotherapy and chemotherapy response. This suggests that with the appropriate image analysis techniques, there may be some value in using radiographs to monitor the effect of neoadjuvant chemotherapy, although further work is required to determine if these findings are prognostic (PUBMED:36051544). In summary, while conventional radiographs alone may not be reliable for predicting the histopathologic response to neoadjuvant chemotherapy in osteogenic sarcoma, there is some evidence that with advanced image analysis methods, radiographic changes could potentially be correlated with chemotherapy response. However, more research is needed to establish these methods as reliable tools for monitoring treatment effects.
Instruction: Physicians' perceptions of and approaches to woman abuse. Does certification in family medicine make a difference? Abstracts: abstract_id: PUBMED:8792017 Physicians' perceptions of and approaches to woman abuse. Does certification in family medicine make a difference? Objective: To discover whether family physicians who go through residency training and The College of Family Physicians of Canada's (CFPC) certification process are more responsive than other physicians to woman abuse, whether they perceive and approach such abuse more appropriately, and whether they seek out more education on the subject. Design: A national survey using a pretested 43-item mailed questionnaire to examine perceptions of and approaches to detection and management of woman abuse. Setting: Canadian family and general practice. Participants: A cross-sectional sample of 1574 family physicians and general practitioners, of whom 963 (61%) volunteers responded. Main Outcome Measures: Demographic variables, perceptions of abuse, methods of diagnosing and managing woman abuse. Results: Most respondents agreed they could not diagnose and treat woman abuse effectively, regardless of certification status. They indicated they were detecting only 33% of cases. Certificants of CFPC, in particular residency-trained certificants, were more likely to think that they should be diagnosing woman abuse than noncertificants; they were also more likely to help victims by referring them to specialists and other agencies. Certificants were also more likely to think they should be treating these patients themselves, and that they were not adequately trained to do so. Although most respondents thought they needed more education, certificants were more likely to know of relevant courses, to have attended such courses, and to have read books or articles on the topic. Conclusions: Being a certificant is not associated with perceived skills in diagnosing and treating woman abuse, but is associated with an increased awareness of the problem. Certificants know that education on woman abuse is available. abstract_id: PUBMED:32341287 Practice Patterns of Family Physicians With and Without Sports Medicine Certification. Objective: To update information regarding practice patterns of family physicians with a certificate of added qualifications (CAQ) in Sports Medicine (SM), because it has been over 10 years since the last comprehensive study. Design: Cross-sectional analysis of 2017 and 2018 American Board of Family Medicine (ABFM) Family Medicine Certification and SM CAQ examination registration practice demographic questionnaire data. Setting: N/A. Participants: Family physicians with a CAQ in SM [sports medicine family physicians (SM-FPs)] and family physicians without a CAQ registering for the ABFM Family Medicine Certification or SM CAQ examinations. Intervention: N/A. Main Outcomes: Self-reported time spent practicing SM, activities in SM, scope of practice, and practice setting. Results: Sports medicine family physicians are predominately men (78.7%) and below 49 years (65.8%). Most SM-FPs spend 60% of their time or less practicing SM and the scope of practice of SM-FPs is only slightly narrower than that of their family physician counterparts without a CAQ. In addition, 92.8% of SM-FPs are practicing in an urban setting. 
Conclusions: The similarity of scope of practice for SM-FPs and family physicians without a CAQ and the time spent practicing SM by SM-FPs suggests that most SM-FPs are spending a significant amount of time continuing to practice their primary specialty. Sports medicine family physicians are largely attracted to urban practice settings, most likely because of the higher likelihood of employment opportunities. Finally, factors that may be dissuading women from entering the field of SM deserve further investigation. abstract_id: PUBMED:34312274 Practice Patterns of Family Physicians With and Without Addiction Medicine Board Certification. Background: The American Board of Medical Specialties recognized addiction medicine (ADM) as a subspecialty in 2016, which was timely given the recent rise in substance use disorder (SUD). The impact of this dual board opportunity on Family Medicine has not been described. Our study enumerates and characterizes physicians dually certified in Family Medicine and ADM. Methods: We linked American Board of Medical Specialties data from March 2020 on physicians dually boarded in Family Medicine and ADM to responses on demographic and scope of practice questions in the American Board of Family Medicine (ABFM) National Graduate Survey and Family Medicine Certification Examination Registration Questionnaire. Results: Of current ABFM Diplomates, 0.53% (492/93,269) are also boarded in ADM. Based on survey responses from a subset of dually certified physicians, those who are dually certified are more likely to practice in federally qualified health centers and to hold a faculty position. Dually certified physicians are more likely to provide HIV/AIDS and hepatitis C management and are as likely as non-dually certified physicians to provide newborn care, obstetric deliveries, inpatient adult medicine care, and intensive care. Discussion: While only a small proportion of family physicians carry dual ADM board certification, those that do disproportionately serve vulnerable populations while retaining broad scope of care. Further work is needed to examine whether SUD treatment access could be addressed by implementing models that support dually certified physicians in consultative and educational efforts that would amplify their impact across the primary care workforce. abstract_id: PUBMED:36787517 Trends of Geriatric Certification and Practice Patterns of Family Physicians: 1988-2019. Background And Objectives: Demand for geriatric care is increasing due to aging population. Trends in maintaining certification in geriatrics are unreported. Our objective was to describe the historic trend of family physicians who certified in geriatric medicine (FPs-GM) since 1988 and to assess differences in practice patterns between FPs-GM and family physicians (FPs). Methods: We performed a retrospective descriptive study using administrative data collected by the American Board of Family Medicine (ABFM). The study population was family physicians registering to continue their ABFM certification from 2017 to 2019. Medicare public use billing data was linked to ABFM administrative data on certification history. We used univariate analysis for descriptive analysis and logistic regression to identify contributors of recertification in geriatrics. Results: We identified a total of 3,207 FPs-GM between 1988 and 2019. 
More than half maintained GM certification since 2009 (57%), with male gender, White race, and urban practice associated with maintaining GM certification; 61% of their patients were older adults. FPs-GM were more likely to be in an academic practice setting with nearly half (53%) also practicing in hospitals or nursing homes. In the adjusted regression model, younger FPs or FPs who treat more older patients were significantly more likely to be recertified in geriatrics whereas other demographics and practice characteristics were not significant. Conclusions: Most FPs who recently earned GM certification tended to retain certification since the required accredited fellowship started in 1995. abstract_id: PUBMED:25201927 Certification status of family physicians in the initial cohort entering maintenance of certification. There existed considerable concern that participation in Maintenance of Certification by board-certified physicians would be less than optimal when it was introduced by the American Board of Medical Specialties in 2000. However, family physicians who entered the first 10-year Maintenance of Certification for Family Physicians cycle in 2003 participated at levels consistent with historical norms. abstract_id: PUBMED:31596931 Motivation for Participation in the American Board of Family Medicine Certification Program. Background And Objectives: Board certification programs have been criticized as not relevant to practice, not improving patient care, and creating additional burdens on already overburdened physicians. Many physicians may feel compelled to participate in board certification programs in order to satisfy employer, hospital, and insurer requirements; however, the influence of forces as motivators for physicians to continue board certification is poorly understood. Methods: We used data from the 2017 American Board of Family Medicine (ABFM) Family Medicine Certification Examination practice demographic registration questionnaire for those seeking to continue their certification, removing physicians who indicated they did not provide direct patient care. We utilized a mixed-methods design. For the quantitative analysis, a proportional odds logistic regression was used to examine the association between predictor variables and increasing levels of external motivation. For the qualitative analysis, we used a deductive approach to examine open-text responses. Results: Of the analytical sample of 7,545 family physicians, approximately one-fifth (21.4%) were motivated to continue their board certification solely by intrinsic factors. Less than one-fifth (17.3%) were motivated only by extrinsic factors, and the majority (61.2%) reported mixed motivations for continuing their board certification. Only 38 respondents (0.5%) included a negative opinion about the certification process in their open-text responses. Conclusions: Approximately half of family physicians in this sample noted a requirement to continue their certification, suggesting that there has been no significant increase in the requirements from employers, credentialing bodies, or insurers for physicians to continue board certification noted in previously cited work. Furthermore, only 17.5% of our sample reported solely external motivation to continue certification, indicating that real or perceived requirements are not the primary driver for most physicians to maintain certification. 
abstract_id: PUBMED:2804847 Correlates of certification in family medicine in the billing patterns of Ontario general practitioners. There is conflicting evidence as to whether physicians who are certified in family medicine practise differently from their noncertified colleagues and what those differences are. We examined the extent to which certification in family medicine is associated with differences in the practice patterns of primary care physicians as reflected in their billing patterns. Billing data for 1986 were obtained from the Ontario Health Insurance Plan for 269 certified physicians and 375 noncertified physicians who had graduated from Ontario medical schools between 1972 and 1983 and who practised as general practitioners or family physicians in Ontario. As a group, certificants provided fewer services per patient and billed less per patient seen per month. They were more likely than noncertificants to include counselling, psychotherapy, prenatal and obstetric care, nonemergency hospital visits, surgical services and visits to chronic care facilities in their service mix and to bill in more service categories. Certificants billed more for prenatal and obstetric care, intermediate assessments, chronic care and nonemergency hospital visits and less for psychotherapy and after-hours services than noncertificants. Many of the differences detected suggest a practice style consistent with the objectives for training and certification in family medicine. However, whether the differences observed in our study and in previous studies are related more to self-selection of physicians for certification or to the types of educational experiences cannot be directly assessed. abstract_id: PUBMED:35379715 Practice Adjustments Made by Family Physicians During the COVID-19 Pandemic. Introduction: COVID-19 pandemic-related health care disruptions necessitated rapid adaptation among family physicians to safely meet patient needs while protecting themselves and their staff. On April 1, 2020, the American Board of Family Medicine (ABFM) introduced a COVID Performance Improvement (PI) activity for physicians to report on and receive Family Medicine certification credit for practice adjustments they made during the early stages of the pandemic. We aimed to understand the types of interventions implemented, and lessons physicians learned from the efforts. Methods: We analyzed data from COVID-PI activities submitted by self-selected family physicians between April 1 and June 30, 2020. We summarized the COVID-related topics chosen for improvement and performed a qualitative content analysis on a random sample of open-text responses about lessons learned. Results: The most common practice changes among 1259 unique COVID-PI activity submissions related to virtualization of patient visits, implementing new workflows, developing screening protocols, and obtaining and preserving personal protective equipment. We identified 12 themes regarding lessons learned, most commonly regarding patient and staff safety, modified practice processes and workflows, positive perceptions of and future plans for virtual visits, access to care, and patient satisfaction. Most submitters noted early successes with their interventions. Conclusion: A PI activity template designed for continuous board certification allowed family physicians to report on how they successfully implemented short term practice changes during the early stages of the COVID-19 pandemic. 
Reflections from this subset of physicians regarding lessons learned may prove useful in informing future COVID-19-related practice changes. abstract_id: PUBMED:29311176 Predictors of Attrition From Family Medicine Board Certification. Board certification is associated with higher quality care. We sought to determine the rates and predictors of attrition from certification among family physicians who achieved initial certification with the American Board of Family Medicine from 1980 through 2000. In this period, 5.6% of family physicians never attempted recertification, with the rate increasing from 4.9% between 1990 and 1995 to 5.7% from 1996 to 2000. Being male, an international medical graduate, or 30 years of age or older at initial certification was associated with not recertifying. With information about those likely to leave certification, the board can design and implement interventions that minimize attrition. abstract_id: PUBMED:26152433 A Significant Number of Charter Diplomates Participate in American Board of Family Medicine (ABFM) Maintenance of Certification. Considerable controversy about the value of participating in Maintenance of Certification has recently arisen within the medical community. Despite this controversy, large numbers of family physicians certified by the American Board of Family Medicine participate in Maintenance of Certification for Family Physicians. Surprisingly, a small but significant number of charter diplomates, those initially certified by the American Board of Family Medicine at its founding, are engaged in the process. Answer: According to a national survey conducted among Canadian family physicians and general practitioners, certification in family medicine does appear to make a difference in physicians' perceptions of and approaches to woman abuse. The survey, which included a cross-sectional sample of 1574 family physicians and general practitioners with a 61% response rate, found that most respondents, regardless of certification status, agreed they could not diagnose and treat woman abuse effectively. However, those who were certificants of The College of Family Physicians of Canada (CFPC), particularly those who had undergone residency training, were more likely to believe that they should be diagnosing woman abuse. They were also more inclined to assist victims by referring them to specialists and other agencies. Additionally, certificants were more likely to think that they should be treating these patients themselves and acknowledged that they were not adequately trained to do so. The survey also revealed that while most respondents felt they needed more education on the topic, certificants were more likely to be aware of relevant courses, to have attended such courses, and to have read books or articles on the subject. Therefore, being a certificant is associated with an increased awareness of the problem of woman abuse and knowledge that education on the topic is available, even though it is not associated with perceived skills in diagnosing and treating woman abuse (PUBMED:8792017).
Instruction: Does switching contraceptive from oral to a patch or vaginal ring change the likelihood of timely prescription refill? Abstracts: abstract_id: PUBMED:24835829 Does switching contraceptive from oral to a patch or vaginal ring change the likelihood of timely prescription refill? Objective: This study evaluated contraceptive refill patterns of women insured commercially in the US who switched from oral contraceptives (OCs) to the patch or vaginal ring and assessed if switching contraceptive methods changes refill patterns. Study Design: Women aged 15-44 with ≥2 patch or ring prescriptions and ≥2 OC prescriptions before the first patch/ring prescription were identified from the MarketScan® Commercial database (1/1/2002-6/30/2011). Refill patterns 1-year pre- and postindex date (first patch/ring prescription) were evaluated, and women were categorized as timely or delayed refillers on OCs and patch/ring. Regression modeling was used to investigate the association between refill patterns and contraceptive methods and switching effects on refill patterns. Results: Of 17,814 women identified, 7901 switched to the patch, and 9913 switched to the ring. Among timely OC refillers, the percentage of timely refills decreased (patch: 95.6% to 79.4%, p<.001; ring: 96.5% to 74.3%, p<.001). However, among delayed OC refillers, the percentage of timely refills improved (patch: 47.9% to 72.2%, p<.001; ring: 50.4% to 64.0%, p<.001) during patch/ring use. Nonetheless, compared to timely OC refillers, women who were delayed OC refillers had 1.68-fold [95% confidence interval (CI): 1.52-1.84, p<.001] and 1.85-fold greater odds (CI: 1.69-2.02, p<.001) of being a delayed refiller while on the patch and ring, respectively. Conclusion: Switching to the patch or ring may improve refill behavior for women who have problems refilling OCs timely; however, the magnitude of the improvement may fail to improve ultimate contraceptive efficacy by simply switching to the patch or ring. Implications: The impact on timely refills of switching from OCs to either the patch or ring is complex and varies depending on the pattern of timely refills on OCs. abstract_id: PUBMED:32575009 Contraceptive vaginal ring reduces lamotrigine levels. Objective: The objective of the study was to describe the effect of the vaginal ring and transdermal patch on lamotrigine serum levels in women with epilepsy. Background: Previous studies demonstrate that oral hormonal contraceptives containing synthetic estrogen increase lamotrigine clearance through induction of glucuronidation. This leads to variable lamotrigine serum concentrations throughout monthly cycles in women who are on combined oral contraceptives (COCs). The effects of estrogen-containing nonoral hormonal contraceptive methods, including the vaginal ring and transdermal patch, on lamotrigine pharmacokinetics are not well described. Methods: Retrospective chart review was performed to identify serum lamotrigine levels drawn from women with epilepsy while on the active phase of vaginal ring or transdermal patch and while off contraception. Wilcoxon signed-rank tests for paired data were used to compare the difference in dose-corrected lamotrigine concentration in plasma between values while on hormonal contraception to those while off contraception in patients using a vaginal ring. Results: Six patients were using the vaginal ring, and one patient was using the transdermal patch. 
Lamotrigine dose-corrected concentrations were decreased during the active phase of the vaginal ring compared with concentrations during the period off contraception (p = .04). There was one patient without a decrease in concentration, but the other five patients on the vaginal ring had a decrease in dose-corrected lamotrigine concentration ranging from 36 to 70% while on the vaginal ring. Similarly, one patient using the transdermal patch had a decrease of 37% in dose-corrected lamotrigine concentration while on the patch. Conclusions: The findings support that the vaginal ring contraceptive method decreases lamotrigine concentrations during the active phase of treatment. This has important implications for contraceptive counseling and maintaining therapeutic levels in women of childbearing age with epilepsy. abstract_id: PUBMED:20055731 The use of the vaginal ring and transdermal patch among adolescent girls in Finland. Objective: Little is known about the use of the vaginal ring and transdermal patch in representative adolescent populations although these contraceptives are supposed to be a good option for that age group. The aim of this study was to determine the use of the vaginal ring and that of the transdermal patch and to compare these with the use of oral contraceptives (OCs) and emergency contraception in adolescent girls residing in Finland. Methods: As part of the Adolescent Health and Lifestyle Survey a questionnaire was mailed to a representative sample of girls aged 14, 16 or 18 years residing in Finland. Of these 3029 (72%) responded. Results: The use of the vaginal ring (0.9%) and that of the transdermal patch (0.1%) were uncommon whereas OCs were commonly resorted to (20%). Only 0.6% of girls aged 16 and 2.5% of those aged 18 used the vaginal ring or the transdermal patch. Fifteen percent of girls had ever used emergency contraception. Conclusion: The vaginal ring and the transdermal patch are rarely used by adolescents. Our findings suggest that these novel contraceptive methods have not been adopted by family planning services. abstract_id: PUBMED:20238323 Skin patch and vaginal ring versus combined oral contraceptives for contraception. Background: The delivery of combination contraceptive steroids from a skin patch or vaginal ring offers potential advantages over the traditional oral route. The skin patch and vaginal ring could require a lower dose due to increased bioavailability and improved user compliance. Objectives: To compare the contraceptive effectiveness, cycle control, adherence (compliance), and safety of the skin patch or the vaginal ring versus combination oral contraceptives (COCs). Search Strategy: For trials of the contraceptive patch or the vaginal ring, we searched MEDLINE, POPLINE, CENTRAL, EMBASE, LILACS, ClinicalTrials.gov, and ICTRP. We contacted manufacturers and researchers to identify other trials. Selection Criteria: All randomized controlled trials comparing the skin patch or vaginal ring with a COC. Data Collection And Analysis: Data were abstracted by two authors and entered into RevMan. For dichotomous variables, the Peto odds ratio (OR) with 95% confidence intervals (CI) was calculated. For continuous variables, the mean difference was computed. Main Results: We found 5 trials of the skin patch and 10 of the vaginal ring. Contraceptive effectiveness was similar for the patch or ring versus the comparison COC. 
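Note: the lamotrigine findings above (PUBMED:32575009) rest on dose-corrected concentrations, i.e. the measured serum level divided by the daily dose, compared within the same patient on and off the ring. The sketch below illustrates that arithmetic; the serum levels, doses, and grouping are invented for illustration and are not data from the study, and the Wilcoxon comparison mentioned in the abstract is only referenced in a comment.

```python
# Hypothetical paired observations (not study data): serum lamotrigine (mg/L)
# and total daily dose (mg/day) for the same patient off and on the vaginal ring.
patients = [
    {"off": (6.0, 300), "on": (3.6, 300)},
    {"off": (8.2, 400), "on": (4.1, 400)},
    {"off": (5.5, 200), "on": (3.9, 200)},
]

def dose_corrected(level_mg_per_l: float, dose_mg: float) -> float:
    """Dose-corrected concentration: serum level per mg of daily dose."""
    return level_mg_per_l / dose_mg

for i, p in enumerate(patients, start=1):
    dcc_off = dose_corrected(*p["off"])
    dcc_on = dose_corrected(*p["on"])
    pct_drop = 100 * (dcc_off - dcc_on) / dcc_off
    print(f"patient {i}: dose-corrected drop on ring = {pct_drop:.0f}%")

# With real paired data, a Wilcoxon signed-rank test (e.g. scipy.stats.wilcoxon)
# could compare on-ring and off-ring dose-corrected values, as in the abstract.
```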
More patch users discontinued early than COC users: ORs were 1.59 (95% CI 1.26 to 2.00), 1.56 (95% CI 1.18 to 2.06), and 2.57 (95% CI 0.99 to 6.64). Patch users also had more discontinuation due to adverse events. Compared to COC users, patch users reported more breast discomfort, dysmenorrhea, nausea, and vomiting. Patch users reported more compliant cycles than the COC users in two trials: ORs were 2.05 (95% CI 1.83 to 2.29) and 2.76 (95% CI 2.35 to 3.24). The ring trials generally showed similar discontinuation for ring and COC users. Ring users reported less nausea, acne, irritability, and depression than COC users. Ring users had more vaginitis and leukorrhea but less vaginal dryness. Ring users had similar adherence to COC users in two trials but less adherence in one. Cycle control was generally similar for the patch and COC, and was similar or better for the ring versus COC. Authors' Conclusions: Effectiveness was similar for the methods compared. The patch could lead to more discontinuation while the vaginal ring showed little difference. The patch group had better compliance than the COC group but more side effects. Ring users generally had fewer adverse events than COC users but more vaginal irritation and discharge. High losses to follow up can affect the validity of the results. abstract_id: PUBMED:18254023 Skin patch and vaginal ring versus combined oral contraceptives for contraception. Background: The delivery of combination contraceptive steroids from a skin patch or vaginal ring offers potential advantages over the traditional oral route. The skin patch and vaginal ring could require a lower dose due to increased bioavailability and improved user compliance. Objectives: To compare the contraceptive effectiveness, cycle control, compliance, and safety of the skin patch or the vaginal ring versus combination oral contraceptives (COCs). Search Strategy: We searched MEDLINE, POPLINE, CENTRAL, EMBASE, and LILACS for trials of the contraceptive patch or the vaginal ring. We contacted manufacturers and researchers to identify other trials. Selection Criteria: All randomized controlled trials comparing the skin patch or vaginal ring with a COC. Data Collection And Analysis: Data were abstracted by two authors and entered into RevMan. For dichotomous variables, the Peto odds ratio (OR) with 95% confidence intervals (CI) was calculated. For continuous variables, the weighted mean difference was computed. Main Results: We found three trials of the skin patch and eight of the vaginal ring. Contraceptive effectiveness was similar for the patch or ring and the comparison COC. Patch users reported more compliant cycles than COC users; ORs were 2.05 (95% CI 1.83 to 2.29) and 2.76 (95% CI 2.35 to 3.24) in two trials. One crossover ring trial had more noncompliance for the ring users. Satisfaction with method was higher for ring users than COC users in two studies. More patch users discontinued early than COC users: OR 1.58 (95% CI 1.25 to 1.99) and 1.45 (95% CI 1.11 to 1.90) in two trials. Patch users also had more discontinuation due to adverse events (AEs). The ring trials generally showed similar discontinuation for ring and COC users. Compared to COC users, patch users were more likely to report breast discomfort, dysmenorrhea, nausea, and vomiting. Ring users reported less nausea, irritability, and depression than COC users in single trials. However, ring users had more vaginitis and leukorrhea. Bleeding problems were generally similar or less common for the ring versus COC. 
Authors' Conclusions: Effectiveness rates were similar for the methods compared. The patch group had better compliance than the COC group. Compared to COC users, patch users had more side effects. Ring users generally had fewer adverse events than COC users but more vaginal irritation and discharge. The patch could lead to more discontinuation while the vaginal ring showed little difference. High losses to follow up can affect the validity of the results. abstract_id: PUBMED:12535478 Skin patch and vaginal ring versus combined oral contraceptives for contraception. Background: The delivery of combination contraceptive steroids from a skin patch or vaginal ring offers potential advantages over those administered via the traditional oral route. The skin patch and vaginal ring could possibly require a lower dose due to increased bioavailability and improved user compliance. Objectives: The review's objective is to compare the contraceptive efficacy, cycle control, compliance, and safety of the contraceptive skin patch versus combination oral contraceptives and the contraceptive vaginal ring versus combination oral contraceptives. Search Strategy: We searched the computerized databases MEDLINE, Popline, Cochrane Controlled Trials Register, EMBASE, and LILACS for trials of the contraceptive patch or the contraceptive ring. We also searched the references of the publications identified for inclusion. We contacted the manufacturers of the skin patch and the contraceptive ring in an attempt to identify published or unpublished trials that we might have missed. Selection Criteria: All randomized controlled trials in any language comparing the combination contraceptive skin patch with a combination oral contraceptive or the combination contraceptive vaginal ring with a combination oral contraceptive. Data Collection And Analysis: Titles and abstracts identified from the literature searches were assessed for potential inclusion. Data were abstracted by both the primary and second reviewers to ensure accuracy and were entered into RevMan 4.1. Peto odds ratios with 95% confidence intervals were calculated for all outcomes. Six- and 13-cycle Kaplan-Meier pregnancy rates were also entered into "Additional Tables." No sensitivity analyses were conducted since the eligible trials used different comparison oral contraceptives. The trials were critically appraised by examining the following factors: the study design; blinding; randomization method; group allocation concealment; exclusions after randomization; loss-to-follow-up; and early discontinuation. Main Results: We found three trials of the skin patch and no eligible randomized controlled trials of the combination contraceptive vaginal ring. The six-cycle and 13-cycle Kaplan-Meier cumulative probabilities of pregnancy showed that the combination contraceptive skin patch was similar to the control combination oral contraceptive in contraceptive efficacy. One trial found that patch users discontinued early from the trial more often than oral contraceptive users (OR 1.6; 95% CI, 1.3-2.0) but a second trial found no differences in discontinuation between the groups (OR 2.6; 95% CI, 1.0-6.7). Patch users had more self-reported cycles of compliance than oral contraceptive users; the odds ratio of compliance was 2.1 (95% CI, 1.8-2.3). Patch users were more likely to report breast discomfort than oral contraceptive users with an odds ratio of 3.1 (95% CI, 2.3-4.2). The remaining commonly reported adverse events did not differ by group assignment. 
One trial found that women assigned to use the patch were more likely to discontinue due to adverse events than those assigned to use the oral contraceptive (OR 2.3; 95% CI, 1.6-3.3), but a second trial found no difference (OR 2.8; 95% CI, 0.7-11.3). Few serious adverse events occurred that were considered possibly or likely related to the use of the patch or oral contraceptive. Reviewer's Conclusions: The randomized controlled trials comparing a combination contraceptive skin patch to a combination oral contraceptive showed similar efficacy rates for the two methods. The patch group had better self-reported compliance than the oral contraceptive group. On the other hand, breast tenderness was more common among those randomized to the patch. The patch might lead to early discontinuation (overall and due to adverse events), but the trial results on this outcome were mixed. No conclusions can be drawn regarding the vaginal ring as no eligible trials were available. abstract_id: PUBMED:23633314 Skin patch and vaginal ring versus combined oral contraceptives for contraception. Background: The delivery of combination contraceptive steroids from a transdermal contraceptive patch or a contraceptive vaginal ring offers potential advantages over the traditional oral route. The transdermal patch and vaginal ring could require a lower dose due to increased bioavailability and improved user compliance. Objectives: To compare the contraceptive effectiveness, cycle control, compliance (adherence), and safety of the contraceptive patch or the vaginal ring versus combination oral contraceptives (COCs). Search Methods: Through February 2013, we searched MEDLINE, POPLINE, CENTRAL, LILACS, ClinicalTrials.gov, and ICTRP for trials of the contraceptive patch or the vaginal ring. Earlier searches also included EMBASE. For the initial review, we contacted known researchers and manufacturers to identify other trials. Selection Criteria: We considered randomized controlled trials comparing a transdermal contraceptive patch or a contraceptive vaginal ring with a COC. Data Collection And Analysis: Data were abstracted by two authors and entered into RevMan. For dichotomous variables, the Peto odds ratio (OR) with 95% confidence intervals (CI) was calculated. For continuous variables, the mean difference was computed. We also assessed the quality of evidence for this review. Main Results: We found 18 trials that met our inclusion criteria. Of six patch studies, five examined the marketed patch containing norelgestromin plus ethinyl estradiol (EE); one studied a patch in development that contains levonorgestrel (LNG) plus EE. Of 12 vaginal ring trials, 11 examined the same marketing ring containing etonogestrel plus EE; one studied a ring being developed that contains nesterone plus EE.Contraceptive effectiveness was not significantly different for the patch or ring versus the comparison COC. Compliance data were limited. Patch users showed better compliance than COC users in three trials. For the norelgestromin plus EE patch, ORs were 2.05 (95% CI 1.83 to 2.29) and 2.76 (95% CI 2.35 to 3.24). In the levonorgestrel plus EE patch report, patch users were less likely to have missed days of therapy (OR 0.36; 95% CI 0.25 to 0.51). Of four vaginal ring trials, one found ring users had more noncompliance (OR 3.99; 95% CI 1.87 to 8.52), while another showed more compliance with the regimen (OR 1.67; 95% CI 1.04 to 2.68).More patch users discontinued early than COC users. 
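Note: the Cochrane reviews above summarise dichotomous outcomes (discontinuation, compliance, adverse events) as Peto odds ratios with 95% confidence intervals. The sketch below shows the standard Peto one-step calculation for a single trial; the 2x2 counts are invented for illustration and do not come from any of the reviews.

```python
import math

def peto_or(events_t: int, total_t: int, events_c: int, total_c: int):
    """Peto one-step odds ratio with a 95% CI for one 2x2 table.

    O - E is observed minus expected events in the treatment arm under the
    null hypothesis; V is the hypergeometric variance of O.
    """
    n = total_t + total_c
    events = events_t + events_c
    expected_t = events * total_t / n
    o_minus_e = events_t - expected_t
    v = (events * (n - events) * total_t * total_c) / (n ** 2 * (n - 1))
    log_or = o_minus_e / v
    se = 1 / math.sqrt(v)
    ci = (math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se))
    return math.exp(log_or), ci

# Invented example: 60/300 patch users vs 40/300 COC users discontinued early.
or_, (lo, hi) = peto_or(60, 300, 40, 300)
print(f"Peto OR = {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```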
ORs from two meta-analyses were 1.59 (95% CI 1.26 to 2.00) and 1.56 (95% CI 1.18 to 2.06) and another trial showed OR 2.57 (95% CI 0.99 to 6.64). Patch users also had more discontinuation due to adverse events than COC users. Users of the norelgestromin-containing patch reported more breast discomfort, dysmenorrhea, nausea, and vomiting. In the levonorgestrel-containing patch trial, patch users reported less vomiting, headaches, and fatigue.Of 11 ring trials with discontinuation data, two showed the ring group discontinued less than the COC group: OR 0.32 (95% CI 0.16 to 0.66) and OR 0.52 (95% CI 0.31 to 0.88). Ring users were less likely to discontinue due to adverse events in one study (OR 0.32; 95% CI 0.15 to 0.70). Compared to the COC users, ring users had more vaginitis and leukorrhea but less vaginal dryness. Ring users also reported less nausea, acne, irritability, depression, and emotional lability than COC users.For cycle control, only one trial study showed a significant difference. Women in the patch group were less likely to have breakthrough bleeding and spotting. Seven ring studies had bleeding data; four trials showed the ring group generally had better cycle control than the COC group. Authors' Conclusions: Effectiveness was not significantly different for the methods compared. Pregnancy data were available from half of the patch trials but two-thirds of ring trials. The patch could lead to more discontinuation than the COC. The patch group had better compliance than the COC group. Compliance data came from half of the patch studies and one-third of the ring trials. Patch users had more side effects than the COC group. Ring users generally had fewer adverse events than COC users but more vaginal irritation and discharge.The quality of the evidence for this review was considered low for the patch and moderate for the ring. The main reasons for downgrading were lack of information on the randomization sequence generation or allocation concealment, the outcome assessment methods, high losses to follow up, and exclusions after randomization. abstract_id: PUBMED:21397092 Understanding prescription adherence: pharmacy claims data from the Contraceptive CHOICE Project. Background: We examined prescription adherence rates by contraceptive method among women who used oral contraceptive pills (OCP), transdermal patch or vaginal ring. Study Design: Women in the St. Louis area were provided their choice of OCP, patch or ring at no cost and followed for 18 months. Time between monthly refills was obtained from pharmacy data and analyzed as a marker of adherence. Risk factors for initial nonadherence were estimated using Cox proportional hazards; predictors for repeated nonadherence were analyzed using Poisson regression with robust error variance. Results: Overall, 619 participants filled 6435 contraceptive prescriptions with a median of 10 refills per participant. Only 30% of women (n = 187) obtained all refills on time. In the time-to-failure analysis, use of vaginal ring and increased parity were predictors of early nonadherence (p < .05). In the multivariable analysis, use of the vaginal ring and history of abortion were risk factors for repeated nonadherence (p < .01). Conclusions: Even with financial barriers removed, pharmacy data show that many women inconsistently refill their contraception and may be at risk for unintended pregnancy. 
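Note: both the index refill study (PUBMED:24835829) and the CHOICE pharmacy-claims analysis above (PUBMED:21397092) classify women by whether each refill was obtained on time. A minimal sketch of that bookkeeping is shown below; the 28-day supply, the 7-day grace period, and the fill dates are assumptions made for illustration, not the rules used in either study.

```python
from datetime import date, timedelta

def classify_refills(fill_dates, days_supply=28, grace_days=7):
    """Label each refill after the first as timely or delayed.

    A refill counts as 'timely' if it is filled no later than the previous
    fill date plus the days of supply plus a grace period (thresholds are
    illustrative, not taken from the studies above).
    """
    labels = []
    for prev, current in zip(fill_dates, fill_dates[1:]):
        due_by = prev + timedelta(days=days_supply + grace_days)
        labels.append("timely" if current <= due_by else "delayed")
    return labels

fills = [date(2010, 1, 4), date(2010, 2, 1), date(2010, 3, 20), date(2010, 4, 16)]
labels = classify_refills(fills)
print(labels)  # ['timely', 'delayed', 'timely']
print("timely refiller" if labels.count("delayed") == 0 else "delayed refiller")
```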
abstract_id: PUBMED:18829069 Effects of the contraceptive patch, the vaginal ring and an oral contraceptive on APC resistance and SHBG: a cross-over study. Introduction: The transdermal patch (20 microg ethinylestradiol+150 microg norelgestromin daily) and the vaginal ring (15 microg ethinylestradiol+120 microg etonogestrel daily) are new contraceptives, designed to deliver a low dose of hormones, suggesting a low exposure. However, few data are available about their risk of venous thrombosis. The objective was to investigate the effect of the patch, the ring, and an oral contraceptive (30 microg ethinylestradiol+150 microg levonorgestrel daily) on activated protein C sensitivity ratio (APC-sr) and on sex hormone-binding globulin (SHBG) levels in plasma. Materials And Methods: After a two month wash-out, 13 volunteers were randomly assigned to either the patch followed by the oral contraceptive or vice versa, or the ring followed by the oral contraceptive or vice versa. All treatments lasted two cycles and were separated by a wash-out of two cycles. APC-sr and SHBG levels were determined on day 18-21 of the second cycle of the wash-out and of each treatment period. Results: Compared to the oral contraceptive, both the patch and the ring led to higher APC resistance (mean difference APC-sr 1.1; 95% CI 0.67-1.52 and 0.55; 95% CI 0.11-1.00, respectively) and higher SHBG levels (mean difference 210 nmol/l; 95% CI 134-286 and 148 nmol/l; 95% CI 48-248, respectively). Conclusion: The activity of the protein C system in plasma was impaired more by contraceptive patch and vaginal ring than by an oral contraceptive containing the second generation progestagen levonorgestrel. abstract_id: PUBMED:38276081 Comparison of Combined Parenteral and Oral Hormonal Contraceptives: A Systematic Review and Meta-Analysis of Randomized Trials. Background: Delivering contraceptive hormones through a transdermal patch or a vaginal ring might have advantages over the traditional oral route. Objectives: To compare the effectiveness, compliance, and side effect profile of oral and parenteral drug administration methods. Methods: We performed a systematic literature search in four medical databases-MEDLINE (via PubMed), Cochrane Library (CENTRAL), Embase, and Scopus-from inception to 20 November 2022. Randomized controlled trials assessing the efficacy, compliance, and adverse event profile of combined parenteral and oral hormonal contraceptives were included. Results: Our systematic search provided 3952 records; after duplicate removal, we screened 2707 duplicate-free records. A total of 13 eligible studies were identified after title, abstract, and full-text selection. We observed no significant difference in contraceptive efficacy (Pearl Index) between oral and parenteral drug administration (MD = -0.06, CI: -0.66-0.53; I2 = 0%). We found significant subgroup differences between parenteral methods in terms of compliance (χ2 = 4.32, p =0.038, I2 = 80%) and certain adverse events: breast discomfort (χ2 = 19.04, p =0.001, I2 = 80%), nausea (χ2 = 8.04, p =0.005, I2 = 75%), and vomiting (χ2 = 9.30, p =0.002; I2 = 72%). Conclusion: Both parenteral and oral contraceptives can be used as an effective contraceptive method, and the route of administration should be tailored to patient needs and adverse event occurrence. 
Answer: Switching from oral contraceptives (OCs) to a patch or vaginal ring does change the likelihood of timely prescription refill, but the impact varies depending on the individual's previous refill patterns. A study found that among women who were timely OC refillers, the percentage of timely refills decreased after switching to the patch (from 95.6% to 79.4%) or the ring (from 96.5% to 74.3%). However, among those who were delayed OC refillers, the percentage of timely refills improved with the patch (from 47.9% to 72.2%) and the ring (from 50.4% to 64.0%) (PUBMED:24835829). Despite this improvement for women who previously had problems with timely refills, those who were delayed OC refillers still had greater odds of being delayed refillers while using the patch or ring compared to those who were timely OC refillers. This suggests that while switching to the patch or ring may improve refill behavior for some, it may not significantly enhance overall contraceptive efficacy just by changing the method (PUBMED:24835829).
Instruction: Does recovery from substance use disorder matter in patients with bipolar disorder? Abstracts: abstract_id: PUBMED:34130283 White Matter Hyperintensities after Five-Year Follow-Up and a Cross-Sectional FA Decrease in Bipolar I and Major Depressive Patients. Introduction: An increase in brain white matter hyperintensities (WMHs) and a decrease in white matter fractional anisotropy (FA) have been detected in bipolar I (BPI), bipolar II (BPII), and major depressive disorder (MDD) patients. Their relationship, and differences between diagnostic groups, remain obscure. Longitudinal studies are rare. Objective: After 5-year follow-up, we evaluated WMHs in BPI, BPII, and MDD patients as compared with controls, and studied the effects of clinical variables. We also explored the associations of clinical variables with cross-sectional whole-brain FA. Methods: Eight BPI, eight BPII, and six MDD patients, and 19 controls, participated in magnetic resonance imaging at baseline and follow-up. Diffusion weighted imaging was included at follow-up. WMHs were rated with the Coffey scale, and a tract-based spatial statistics method was used for the diffusion data. The general linear model, ANOVA, Fisher's exact, Wilcoxon signed-rank, and Kruskal-Wallis tests were used for statistical analyses. Results: Periventricular WMHs were increased in BPI patients (p = 0.047) and were associated with the duration of disorder and lifetime occurrence of substance use disorder (p = 0.018). An FA decrease was found in the corpus callosum of BPI patients (p < 0.01). MDD patients showed an FA decrease in the right cerebellar middle peduncle (RCMP) (p < 0.01). In BPI patients, the duration of disorder was associated with an FA increase in the RCMP (p < 0.05). No FA decrease was detected in patients with WMHs as compared with those without. Conclusions: Preceding illness burden was modestly associated with WMHs, and with an FA increase in the RCMP, in BPI patients. MDD patients had an FA decrease in the RCMP. No association between FA decrease and WMHs was found. abstract_id: PUBMED:10638860 Cortical gray matter deficit in patients with bipolar disorder. Background: Cortical gray matter volume deficit and ventricular enlargement are well documented in schizophrenia, but their presence in bipolar disorder is less well established. Methods: Global cortical gray matter, white matter and sulcal CSF, as well as lateral and third ventricular volume measures, were derived from axial MRI brain images obtained on age-matched bipolar (n=9), schizophrenic (n=9), and control (n=16) subjects. All subjects were free of any history of alcohol or other substance dependence. Results: Relative to controls, bipolar patients had widespread volume deficits of cortical gray matter but not of cortical white matter. Schizophrenic patients had an even more severe cortical gray matter deficit and greater sulcal and lateral ventricular enlargement than the bipolar patients. Conclusions: This group of patients with bipolar disorder had a widespread deficit of cortical gray matter similar to, but less pronounced than, that observed in patients with schizophrenia. abstract_id: PUBMED:15960566 Does recovery from substance use disorder matter in patients with bipolar disorder? Objective: To examine the potential impact of recovery from substance use disorder (SUD) on the course of bipolar disorder among patients diagnosed with both bipolar and substance use disorders according to DSM-IV criteria.
Method: As part of the multicenter Systematic Treatment Enhancement Program for Bipolar Disorder (STEP-BD), we examined bipolar disorder status (i.e., whether the patient is recovering or recovered), role functioning, and quality of life in the first 1000 patients to enter the STEP-BD study. We compared patients with no history of SUD, current SUD, and past SUD (i.e., lifetime SUD, but no current SUD) on these parameters. Data were collected between November 1999 and April 2001. Results: A current clinical status of recovering or recovered from bipolar disorder was less likely among patients with current or past SUD compared to patients with no SUD (p < .002). Recovering/recovered status did not differ significantly between patients with current SUD versus past SUD. All 3 groups differed significantly on measures of role functioning as assessed by the Longitudinal Interval Follow-Up Evaluation-Range of Impaired Functioning Tool (LIFE-RIFT), with the poorest role functioning among patients with current SUD, followed by patients with past SUD (p = .0002). Patients with current or past SUD reported significantly lower quality of life as measured by the LIFE-RIFT and the Quality of Life Enjoyment and Satisfaction Questionnaire and more lifetime suicide attempts (p < .001) than patients without an SUD; patients with past versus current SUD did not differ significantly on these measures. Conclusion: The results suggest that patients with bipolar disorder who experience sustained remission from an SUD fare better than patients with current SUD, but not as well as subjects with no history of SUD; differences among the 3 groups appear greatest in the area of role functioning. abstract_id: PUBMED:31563071 Total gray matter volume is reduced in individuals with bipolar disorder currently treated with atypical antipsychotics. Background/aims: Recent evidence indicates that the intake of atypical antipsychotics (AAP) is associated with gray matter abnormalities in patients with psychiatric disorders. We explored whether patients with bipolar disorder (BD) who are medicated with AAP exhibit total gray matter volume (TGV) reduction compared to BD individuals not medicated with AAP and healthy controls (HC). Methods: In a cross-sectional design, 124 individuals with BD and 86 HC underwent 3T-MRI of the brain and clinical assessment as part of our BIPFAT study. The TGV was estimated using Freesurfer. We used univariate analysis of covariance (ANCOVA) to test for normalized TGV differences and controlled for covariates. Results: ANCOVA results indicated that the 75 BD individuals taking AAP had significantly reduced normalized TGV compared to the 49 BD individuals not taking AAP (F = 9.995, p = .002, Eta = 0.084) and the 86 HC (F = 7.577, p = .007, Eta = 0.046). Limitations: Our cross-sectional results are not suited to drawing conclusions about causality. We have no clear information on treatment time and baseline volumes before drug treatment in the studied subjects. We cannot exclude that patients received different psychopharmacologic medications prior to the study point. We did not include dosages in the calculation. Many BD individuals received combinations of psychopharmacotherapy across drug classes. We did not have records of quantitative alcohol consumption and drug abuse in our sample. Conclusions: Our data provide further evidence for the impact of AAP on brain structure in BD. Longitudinal studies are needed to investigate the causal directions of the proposed relationships.
abstract_id: PUBMED:35125129 Housing First for homeless people with severe mental illness: extended 4-year follow-up and analysis of recovery and housing stability from the randomized Un Chez Soi d'Abord trial. Aims: Housing First (HF), a recovery-oriented approach, was proven effective in stabilising housing situations of homeless individuals with severe mental disorders, yet had limited effectiveness on recovery outcomes on a short-term basis compared to standard treatment. The objective was to assess the effects of the HF model among homeless people with high support needs for mental and physical health services on recovery, housing stability, quality of life, health care use, mental symptoms and addiction issues on 4 years of data from the Un Chez Soi d'Abord trial. Methods: A multicentre randomised controlled trial was conducted from August 2011 to April 2018 with intent-to-treat analysis in four French cities: Lille, Marseille, Paris and Toulouse. Participants were homeless or precariously-housed patients with a DSM-IV-TR diagnosis of bipolar disorder or schizophrenia. Two groups were compared: the HF group (n = 353) had immediate access to independent housing and support from the assertive community treatment team; the Treatment-As-Usual (TAU) group (n = 350) had access to existing support and services. Main outcomes were personal recovery (Recovery Assessment Scale (RAS) scale), housing stability, quality of life (S-QoL), global physical and mental status (Medical Outcomes Study 36-item Short Form Health Survey (SF-36)), inpatient days, mental symptoms (Modified Colorado Symptom Index (MCSI)) and addictions (Mini International Neuropsychiatric Interview (MINI) and Alcohol Use Disorders Identification Test (AUDIT)). Mixed models using longitudinal and cluster designs were performed and adjusted to first age on the street, gender and mental disorder diagnosis. Models were tested for time × group and site × time interactions. Results: The 703 participants [123 (18%) female] had a mean age of 39 years (95% CI 38.0-39.5 years). Both groups improved RAS index from baseline to 48 months, with no statistically significant changes found between the HF and TAU groups over time. HF patients exhibited better autonomy (adjusted β = 2.6, 95% CI 1.2-4.1) and sentimental life (2.3, 95% CI 0.5-4.1), higher housing stability (28.6, 95% CI 25.1-32.1), lower inpatient days (-3.14, 95% CI -5.2 to -1.1) and improved SF-36 mental composite score (-0.8, 95% CI -1.6 to -0.1) over the 4-year follow-up. HF participants experienced higher alcohol consumption between baseline and 48 months. No significant differences were observed for self-reported mental symptoms or substance dependence. Conclusion: Data at 4 years were consistent with 2-year follow-up data: similar improvement in personal recovery outcomes but higher housing stability, autonomy and lower use of hospital services in the HF group compared to the TAU group, with the exception of an ongoing alcohol issue. These sustained benefits support HF as a valuable intervention for the homeless patients with severe mental illness. Trial Registration: ClinicalTrials.gov identifier: NCT01570712. abstract_id: PUBMED:16696823 Symptomatic and functional outcome 12 months after a first episode of psychotic mania: barriers to recovery in a catchment area sample. Objective: Recent studies have shown that outcome in mania is worse than previously thought. Such studies have been conducted in selected samples with restrictive measures of outcome. 
We aimed to explore outcome and its predictors in a catchment area sample of first-episode psychotic mania of DSM-III-R bipolar I disorder. Methods: Prospective 6 and 12 months follow-up was conducted with 87 DSM-III-R first-episode psychotic mania patients admitted to Early Psychosis Prevention and Intervention Centre between 1989 and 1997. Syndromic and symptomatic outcome were determined with the Brief Psychiatric Rating Scale; functional outcome with the Quality of Life Scale and Premorbid Adjustment Scale subitems. Results: Symptomatic outcome was assessed in 67 patients at 6 months and 61 patients at 12 months, and functional outcome in 56 patients at 6 months and 49 patients at 12 months. Logistic regressions were conducted on 46 and 43 patients, respectively, to explore predictors of outcome. While 90% of patients achieved syndromic recovery at 6 and 12 months, 40% had not recovered symptomatically at 6 and 12 months, still presenting with anxiety or depression. A total of 66% of patients at 6 months and 61% of patients at 12 months failed to return to previous level of functioning. Age at intake, family history of affective disorder, illicit drug use and functional recovery at 6 months predicted functional outcome at 12 months. Conclusions: This study confirms poor symptomatic and functional outcome after first-episode psychotic mania. It suggests possible usefulness of early intervention strategies in bipolar disorders and need for developing specific interventions addressing anxiety, depression and substance abuse comorbidity. abstract_id: PUBMED:19624389 Microstructural white matter changes in euthymic bipolar patients: a whole-brain diffusion tensor imaging study. Objectives: Brain structures of a distributed ventral-limbic and dorsal brain network have been associated with altered mood states and emotion regulation in affective disorders. So far, diffusion tensor imaging studies in bipolar patients have focused on frontal/prefrontal brain regions and found alterations in white matter integrity in manic, depressed, and euthymic bipolar patients, observed as changes in fractional anisotropy and mean diffusivity. To extend previous findings, we investigated whole-brain modifications in white matter integrity in euthymic bipolar patients with minimal manic and depressive symptoms. Methods: Twenty-two patients with a DSM-IV-TR diagnosis of bipolar I and II disorder in remission, with no lifetime or present comorbidities of substance abuse, and 21 sex- and age-matched healthy controls underwent diffusion tensor imaging with diffusion gradients applied along 41 directions. Fractional anisotropy and mean diffusivity group differences were explored using two voxel-based, whole-brain analyses that differ in their normalization approaches. Results: Fractional anisotropy was significantly increased in bipolar patients relative to healthy controls in medial frontal, precentral, inferior parietal, and occipital white matter. No group differences in mean diffusivity were found. Conclusions: The result of increased fractional anisotropy in euthymic bipolar patients in the present study suggests increased directional coherence of white matter fibers in bipolar patients during remission. abstract_id: PUBMED:25491440 Dual recovery among people with serious mental illnesses and substance problems: a qualitative analysis. Objective: Individuals with serious mental illnesses are more likely to have substance-related problems than those without mental health problems. 
They also face more difficult recovery trajectories as they cope with dual disorders. Nevertheless, little is known about individuals' perspectives regarding their dual recovery experiences. Methods: This qualitative analysis was conducted as part of an exploratory mixed-methods study of mental health recovery. Members of Kaiser Permanente Northwest (a group-model, not-for-profit, integrated health plan) who had serious mental illness diagnoses were interviewed four times over two years about factors affecting their mental health recovery. Interviews were recorded, transcribed, and coded with inductively derived codes. Themes were identified by reviewing text coded "alcohol or other drugs." Results: Participants (N = 177) had diagnosed schizophrenia/schizoaffective disorder (n = 75, 42%), bipolar I/II disorder (n = 84, 48%), or affective psychosis (n = 18, 10%). At baseline, 63% (n = 112) spontaneously described addressing substance use as part of their mental health recovery. When asked at follow-up, 97% (n = 171) provided codeable answers about substances and mental health. We identified differing pathways to recovery, including through formal treatment, self-help groups or peer support, "natural" recovery (without the help of others), and continued but controlled use of alcohol. We found three overarching themes in participants' experiences of recovering from serious mental illnesses and substance-related problems: Learning about the effects of alcohol and drugs provided motivation and a foundation for sobriety; achieving sobriety helped people to initiate their mental health recovery processes; and achieving and maintaining sobriety built self-efficacy, self-confidence, improved functioning and a sense of personal growth. Non-judgmental support from clinicians adopting chronic disease approaches also facilitated recovery. Conclusions: Irrespective of how people achieved sobriety, quitting or severely limiting use of substances was important to initiating and continuing mental health recovery processes. Substance abuse treatment approaches that are flexible, reduce barriers to engagement, support learning about effects of substances on mental health and quality of life, and adopt a chronic disease model of addiction may increase engagement and success. Peer-based support like Alcoholics or Narcotics Anonymous can be helpful for people with serious mental illnesses, particularly when programs accept use of mental health medications. abstract_id: PUBMED:23399478 Periventricular white matter integrity and cortisol levels in healthy controls and in euthymic patients with bipolar disorder: an exploratory analysis. Background: Bipolar disorder is associated with both white matter abnormalities and hypothalamo-pituitary-adrenal axis dysfunction. In a post-hoc analysis of diffusion tensor data, the relationship between cortisol levels and white matter structural integrity was explored in healthy controls and in euthymic patients with bipolar disorder. Methods: Healthy control subjects and patients with bipolar disorder, prospectively verified as euthymic, underwent diffusion tensor MRI: fractional anisotropy and mean diffusivity data in fifteen regions of interest were obtained. Morning and evening salivary cortisol levels (SCLs) were measured. Results: Significant negative partial correlations were found between fractional anisotropy and evening SCLs in control subjects in four periventricular regions. 
This pattern was absent in bipolar patients, possibly due to the presence of an excess of extracellular fluid manifested as a significant increase in mean diffusivity in those regions. Limitations: This is an exploratory, post-hoc analysis of data with relatively small sample sizes. Lithium treatment and past substance abuse in the bipolar group are potentially confounding factors in this study. Conclusions: These preliminary data show an inverse relationship between evening cortisol levels and a measure of periventricular white matter integrity in healthy controls. This relationship appears disrupted in bipolar patients, possibly due to periventricular osmoregulatory dysfunction, the effects of medication or past substance use. Future research should further investigate the influences of cortisol on oligodendrocyte function, white matter integrity and brain osmoregulation in bipolar disorder. abstract_id: PUBMED:30333004 B-positive: a randomized controlled trial of a multicomponent positive psychology intervention for euthymic patients with bipolar disorder - study protocol and intervention development. Background: Bipolar disorder (BD) is characterized by recurrent (hypo)manic and depressive episodes, alternating with euthymic states in which patients are relatively symptom free. Besides clinical recovery, it is important to also strive for improvement of mental well-being and personal recovery. One prominent field focussing on the improvement of well-being is positive psychology. However, studies assessing the effects of positive psychology or personal recovery interventions for people with BD are scarce and have used weak methodological designs. The study described in this protocol article aims to assess the effectiveness of a multicomponent positive psychology intervention ("Living well with bipolar disorder") adjusted for people with BD in the euthymic phase to improve well-being and personal recovery. Method: The study concerns a pragmatic randomized multicenter trial. The principle objective of the study is to assess whether the positive psychology intervention offered to BD patients in remission in addition to usual care (CAU) is more effective than CAU. The study will include 112 participants randomized to either the experimental condition receiving the intervention in addition to CAU or the control condition receiving CAU. The study population are patients with BD I or II in the euthymic phase. The inclusion criteria are 1) diagnosis of BD I or BD II, 2) between the ages of 18-65, 3) four or more supportive sessions in the last year, and 4) only residual depressive or manic symptoms. Patients are excluded if they are in a depressive or manic episode, have current addiction problems or have optimal levels of well-being. Measurements take place at baseline, post-intervention and follow-up 6 and 12 months from baseline. Outcomes of measures include positive well-being, personal recovery, psychopathology, self-compassion, positive relationships, dampening of positive affect and relapse. Discussion: The outlined study will be the first RCT examining the effects of a multicomponent positive psychology intervention for patients with bipolar disorder. Several limitations, including generalizability of the results and possible attrition issues, are discussed in advance. Trial Registration: This study has been registered in the Netherlands Trial Register ( NTR6729 ) on 12 October 2017. Answer: Yes, recovery from substance use disorder (SUD) does matter in patients with bipolar disorder. 
Research has shown that patients with bipolar disorder who experience sustained remission from an SUD fare better than patients with current SUD, but not as well as subjects with no history of SUD. The differences among these groups appear to be most significant in the area of role functioning. Patients with current or past SUD reported significantly lower quality of life, more lifetime suicide attempts, and poorer role functioning compared to patients without an SUD. However, there was no significant difference in recovering/recovered status between patients with current SUD versus past SUD (PUBMED:15960566). Additionally, periventricular white matter hyperintensities (WMHs) were increased in bipolar I (BPI) patients and associated with the duration of the disorder and lifetime occurrence of substance use disorder. This suggests that preceding illness burden, including SUD, is modestly associated with WMHs in BPI patients (PUBMED:34130283). Moreover, the presence of SUD can affect the structural integrity of the brain in bipolar patients. For instance, an exploratory analysis found a disrupted relationship between evening cortisol levels and a measure of periventricular white matter integrity in bipolar patients, which could be due to periventricular osmoregulatory dysfunction, the effects of medication, or past substance use (PUBMED:23399478). In summary, recovery from SUD is an important aspect of the overall treatment and prognosis for patients with bipolar disorder, as it can influence mental health recovery, quality of life, and brain structure integrity.
Instruction: Fatigued elderly patients with chronic heart failure: do patient reports and nurse recordings correspond? Abstracts: abstract_id: PUBMED:12629682 Fatigued elderly patients with chronic heart failure: do patient reports and nurse recordings correspond? Purpose: To compare descriptions of fatigue from interviews with elderly people with chronic congestive heart failure (CHF) and data recorded by nurses at an outpatient heart failure clinic. Methods: Patients (N = 158) were screened for moderate to severe CHF and interviewed using a revised form of the Fatigue Interview Schedule (FIS). Seventy-nine of these patients were offered visits at a nurse-monitored heart failure clinic. Nursing documentation of fatigue in patient records (n = 56) at the heart failure clinic was compared to the patient interviews. Findings: Results indicated poor concordance between patients' descriptions and record content concerning fatigue. Decreased libido was linked to fatigue according to the patients but not in the nurses' notes. Cognitive characteristics of fatigue were rarely recorded, but were more frequent in patient interviews. Conclusions: Nurses must recognize characteristics and factors related to fatigue in patients and develop strategies to help patients cope with their restricted ability in daily life. Practice Implications: Using the words and expressions of the patients, and the diagnostic characteristics of fatigue, in record-keeping can support the nurses' understanding of patients living with CHF. abstract_id: PUBMED:9740348 Feasibility of a nurse-monitored, outpatient-care programme for elderly patients with moderate-to-severe, chronic heart failure. Aims: To evaluate the feasibility of a nurse-monitored, outpatient-care programme for elderly patients previously hospitalized with chronic heart failure. Methods And Results: Patients with chronic heart failure hospitalized in the medical wards were screened to find those eligible for a randomized study comparing the effect of a nurse-monitored, outpatient-care programme aiming at symptom management with that of conventional care. The inclusion criteria were classification in New York Heart Association classes III-IV, age ≥65 years, and eligibility for an outpatient follow-up programme. The total in-hospital population of patients discharged with a heart-failure diagnosis was surveyed. Eighty-nine per cent of all the hospitalized patients (n=1541) were ≥65 years old. Of these, 69% (n=1058) were treated in the medical wards which were screened. The study criteria were met by 158 patients (15%). No visits to the nurse occurred in 23 cases among the 79 patients randomized to the structured-care group (29%), mainly on account of death or fatigue. The numbers of hospitalizations and hospital days did not differ between the structured-care and the usual-care groups. Conclusions: Given the selection criteria and the outline of the interventions, the outpatient, nurse-monitored, symptom-management programme was not feasible for the majority of these elderly patients with moderate-to-severe, chronic heart failure, mainly because of the small proportion of eligible patients and the high drop-out rate. Management of these patients would have to be more adjusted to their home situation. abstract_id: PUBMED:27220672 Heart Failure Association of the European Society of Cardiology heart failure nurse curriculum.
Recent advances in care and management of heart failure have improved outcome, largely as a result of the developing evidence basis for medications, implantable devices and the organization of heart failure follow-up. Such developments have also increased the complexity of delivering and coordinating care. This has led to a change to the way in which heart failure services are organized and to the traditional role of the heart failure nurse. Nurses in many countries now provide a range of services that include providing care for patients with acute and with chronic heart failure, working in and across different sectors of care (inpatient, outpatient, community care, the home and remotely), organising care services around the face-to-face and the remote collection of patient data, and liaising with a wide variety of health-care providers and professionals. To support such advances the nurse requires a skill set that goes beyond that of their initial education and training. The range of nurses' roles across Europe is varied. So too is the nature of their educational preparation. This heart failure nurse curriculum aims to provide a framework for use in countries of the European Society of Cardiology. Its modular approach enables the key knowledge, skills, and behaviours for the nurse working in different care settings to be outlined and so facilitate nursing staff to play a fuller role within the heart failure team. abstract_id: PUBMED:15688920 Evaluating the effect of setting up a nurse-led heart failure service. In the UK 880,000 people have definite or probable chronic heart failure with 63,000 new cases being identified each year (Department of Health, 2003). Heart failure and its management have become a national and governmental priority. A specialist nurse was recruited to establish and manage a nurse-led service for this client group. This article outlines the development of this service and evaluates its performance. abstract_id: PUBMED:36118136 Quantitative and Qualitative Research on Management Strategies for Dyspnoea in Elderly Patients with Coronary Heart Disease Complicated with Chronic Heart Failure. Objective: To investigate the most effective strategies for the management of dyspnoea in elderly patients with coronary atherosclerotic heart disease (CHD) complicated with chronic heart failure (CHF). Methods: The best management plans for dyspnoea were evaluated using quantitative and qualitative research techniques for a total of 120 senior patients with CHD complicated with CHF. A self-made questionnaire on dyspnoea management strategies served as the survey instrument for the cross-sectional quantitative investigation. A phenomenological approach served as the framework for the qualitative study, which involved semi-structured interviews with 12 elderly patients who had CHD complicated by CHF. Data were gathered through audio recordings, and the Colaizzi method was used to analyse the data. Results: A data analysis of the qualitative research results revealed the four strategies for dyspnoea management that were most effective: using drugs, inhaling oxygen, staying in a comfortable position and maintaining air circulation. These were consistent with the most effective strategies identified by the quantitative findings. There was no significant difference in the choice of dyspnoea management strategies for men and women (t = 0.968, P = 0.806); patients tended to use multiple integrated strategies to manage dyspnoea. 
Conclusion: Health care providers should consider employing individualised combinations of dyspnoea management strategies to assist elders with CHD complicated with CHF during acute exacerbations of dyspnoea. abstract_id: PUBMED:15718175 How much responsibility should heart failure nurses take? This article examines the emerging role of the heart failure nurse and the responsibilities and educational and training requirements surrounding such a role. There may be variations in the role and its responsibilities in different health care settings. However the principles are similar and include: history taking, carrying out clinical assessment and making appropriate decisions about patient management within the context of practice. An example of this is nurse supervision of adjusting and titration of medication in a clinic setting or in the patient's own home. A major challenge to this role is defining the limitations and scope of practice. Patients with chronic heart failure (CHF) are generally a frail, elderly population, and often have significant other co-morbidities. They can be on multiple medications and are frequently prescribed sub-optimal doses of evidence-based medication. Many patients are not managed by specialists, thus creating a huge potential for improved management. abstract_id: PUBMED:30108618 Depression and chronic heart failure in the elderly: an intriguing relationship. Chronic heart failure and depressive disorders have a high prevalence and incidence in the elderly. Several studies have shown how depression tends to exacerbate coexisting chronic heart failure and its clinical outcomes and vice versa, especially in the elderly. The negative synergism between chronic heart failure and depression in the elderly may be approached only taking into account the multifaceted pathophysiological characteristics underlying both these conditions, such as behavioural factors, neurohormonal activation, inflammatory mediators, hypercoagulability and vascular damage. Nevertheless, the pathophysiological link between these two conditions is not well established yet. Despite the high prevalence of depression in chronic heart failure elderly patients and its negative prognostic value, it is often unrecognized especially because of shared symptoms. So the screening of mood disorders, using reliable questionnaires, is recommended in elderly patients with chronic heart failure, even if cannot substitute a diagnostic interview by mental health professionals. In this setting, treatment of depression requires a multidisciplinary approach including: psychotherapy, antidepressants, exercise training and electroconvulsive therapy. Pharmacological therapy with selective serotonin reuptake inhibitors, despite conflicting results, improves quality of life but does not guarantee better outcomes. Exercise training is effective in improving quality of life and prognosis but at the same time cardiac rehabilitation services are vastly underutilized. abstract_id: PUBMED:25962334 Chronic heart failure in the elderly patient The prevalence and incidence of heart failure (HF) is increasing, especially in the elderly population, and is becoming a major geriatric problem. Elderly patients with HF usually show etiopathogenic, epidemiological, and even clinical characteristics significantly different from those present in younger patients. Their treatment, however, derives from clinical trials performed with only a few elderly subjects. 
Moreover, beyond the cardiovascular disease itself, it is essential to evaluate the patient as a whole, given the interrelationship between HF and the characteristic geriatric syndromes of the elderly patient. This review examines the peculiarities in the most prevalent "real world" HF patient. abstract_id: PUBMED:23888177 Heart failure in elderly: progress in clinical evaluation and therapeutic approach. Chronic heart failure (CHF) represents a major and growing health problem, due to its high incidence and prevalence, its poor prognosis and its impact on health-care costs. Although CHF patients are mainly elderly, few studies were aimed at testing the efficacy of diagnostic and therapeutic approaches in this population. The difficulty in CHF diagnosis among the elderly is related to different factors, such as: the frequent presence of co-morbidity conditions mimicking or masking heart failure signs and symptoms; the different diagnostic cut-offs of natriuretic peptides; and the need to correctly evaluate diastolic function in order to assess CHF with preserved ejection fraction. Furthermore, the therapy of elderly CHF patients has not been well defined, considering the few studies involving very aged patients and the absence of a therapeutic strategy demonstrated to improve prognosis of CHF patients with preserved ejection fraction. The aim of this review is to focus on the most recent issues concerning the diagnosis and therapy of elderly patients affected by CHF. abstract_id: PUBMED:35485599 Development and validation of a predictive risk model for frailty in elderly patients with multimorbidity. Aims: This study aimed to investigate the influencing factors of frailty in elderly patients with multimorbidity and to develop a predictive risk model for frailty in elderly patients with multimorbidity. Methods: In total, 3836 elderly patients with multimorbidity who were admitted to the medical wards of five grade A tertiary hospitals in Sichuan Province from March 2020 to June 2021 were selected. Based on the general data of patients with multimorbidity, the independent risk factors for frailty were obtained using logistic analysis, and a risk prediction model of frailty was developed. Results: Independent risk factors for frailty in patients with multimorbidity were age, types of medication, and comorbidity with chronic heart failure (CHF), chronic obstructive pulmonary disease (COPD) and chronic cerebrovascular disease (CCVD); and the protective factors for frailty were body mass index (BMI), exercise and education level. The expression of the model was Z = -2.054 + 0.016 × age - 0.029 × BMI - 0.153 × education level-1.059 × exercise + 0.203 × types of medication + 0.788 × comorbidity with CHF + 0.950 × comorbidity with COPD + 0.363 × comorbidity with CCVD. Conclusion: Age, BMI, education level, exercise, types of medication, and comorbidity with CHF, COPD and CCVD can affect frailty risk in elderly patients with multimorbidity, which may be helpful to predict the frailty risk of elderly patients with multimorbidity. Geriatr Gerontol Int 2022; 22: 471-476. Answer: The comparison between patient reports and nurse recordings regarding fatigued elderly patients with chronic heart failure (CHF) indicates that there is poor concordance between the two. According to a study, patient interviews revealed that decreased libido was associated with fatigue, but this was not reflected in the nurses' notes. 
Additionally, cognitive characteristics of fatigue were seldom recorded by nurses, despite being more frequently mentioned by patients in interviews (PUBMED:12629682). This discrepancy suggests that nurses may not fully recognize or document the characteristics and factors related to fatigue as experienced by patients with CHF. The study concludes that nurses should develop strategies to help patients cope with their restricted ability in daily life and that incorporating patients' own words and expressions, as well as the diagnostic characteristics of fatigue, into record-keeping could support nurses in understanding patients living with CHF (PUBMED:12629682).
Instruction: Should children with overweight or obesity be excluded from height references? Abstracts: abstract_id: PUBMED:26987558 The height-, weight- and BMI-for-age of preschool children from Nizhny Novgorod city, Russia, relative to the international growth references. Background: Monitoring a child's growth status helps to diagnose diseases and implement curative and preventive measures. The aim of this study was to assess how well preschool children of the Russian city of Nizhny Novgorod match with, or diverge from, international growth charts (WHO2006,2007; USCDC2000). Methods: This cross-sectional study included 3,130 children aged 3-7 years attending municipal preschools of Nizhny Novgorod, a city in the European part of Russia. The study was conducted from February 2012 to October 2013. The international WHO2006,2007 and USCDC2000 growth references were used to calculate the height, weight and BMI z-scores. The distributions of z-scores were analysed with descriptive and inferential statistical methods. A z-score difference of 0.25 was considered the benchmark for clinically significant differences. Results: Mean height z-scores calculated with the use of the WHO2006, 2007 and USCDC2000 references were above the 50th centile (0.13 - 0.47) for both boys and girls. The mean height z-score was less than 0.25 SD above the 50th centile only for WHO2006. Stunting prevalence (a height-for-age z-score less than -2) was slightly higher under WHO2006 (3-4%) than under USCDC 2000 (2-3%). Stunting prevalence among children aged 5-7 years was similar under the WHO2007 and USCDC2000 references (1%). For boys and girls aged 3-4 years, the thinness prevalence was 2% using WHO2006 and 6% using USCDC2000 (p < 0.05). At ages 5-7 years this proportion was 3% in both sex groups under WHO2007, and 8% for boys and 6% for girls under USCDC2000 (p < 0.05). The proportion of preschoolers aged 3-4 years with overweight was slightly higher under the WHO2006 reference (13-15%) than under USCDC2000 (12-14%). At ages 5-7 years the overweight prevalence under WHO2007 (13-12 %) was lower than under USCDC2000 (14%). Obesity prevalence under WHO2006,2007 (3-4%) was slightly higher than that under the USCDC2000 reference (2-3%). The distribution of preschoolers across normal weight, overweight and obesity groups did not differ significantly among the references (chi-square). Conclusions: The growth assessment of children aged 3-7 years attending municipal preschools of the Russian city of Nizhny Novgorod under the international references (WHO2006,2007; USCDC 2000) demonstrated that the height fit to the WHO2006 standard for children aged 3 and 4 was generally good, since all the mean values were within 0.25 standard deviations of the mean. Beyond the age of 5 the fit to WHO2007 was poor, while the fit to USCDC2000 was poor throughout. abstract_id: PUBMED:26297699 Should children with overweight or obesity be excluded from height references? Background And Objectives: Growth reference charts are usually based on measurements of children free from a medical condition that affects growth. However, samples collected during the past decades often contain a large proportion of overweight or obese children. Because obesity increases linear growth, the question arises to what extent the percentile curves for length/height are affected by the presence of children with overweight or obesity.
Methods: Data from two cross-sectional samples of 2-year-old to 18-year-old children were analysed: 12,252 Belgian children, measured in 2002-2004, and 6159 Norwegian children, measured in 2003-2006. The LMS method was used to estimate height-for-age curves with and without children considered overweight or obese according to the International Obesity Task Force thresholds. Results: The prevalence of overweight (including obesity) and obesity was 13.0% and 2.8% in the Belgian and 13.8% and 2.3% in the Norwegian sample. Children were taller when overweight (+0.49 and 0.43 SD, in the Belgian and Norwegian sample, respectively) or obese (+0.73 and 0.72 SD in the Belgian and Norwegian sample, respectively). Effect sizes were smaller in younger and older children, which points to an advanced age of maturation as a possible cause. Excluding overweight and obese children had only a minor impact on the growth curves with largest difference in mean height SD scores -0.09 in the Belgian and -0.12 in the Norwegian sample with a corresponding increase of up to 0.5% and 1.2% in number of children >+2 SD. Conclusions: Current Belgian and Norwegian growth references for length/height were found to be largely unaffected by the current proportion of overweight and obese children. There is, therefore, no need for revised height charts that exclude overweight or obese children. abstract_id: PUBMED:25350773 Weight and height centiles of Argentinian children and adolescents: a comparison with WHO and national growth references. Background: Studies in several countries comparing the performance of WHO references and their own national growth standards reported differences that could affect screening and growth monitoring. Aim: To estimate weight and height centiles on a sample of Argentinian children and adolescents and compare selected centiles with WHO and national growth references. Subjects And Methods: A cross-sectional school survey was conducted on 6239 boys and girls aged 5-18. Data were collected between 2005-2009 in Santa Rosa, Argentina. Smoothed weight and height centiles were estimated by the LMS method and compared with WHO 2007 and Argentinian (ARG) growth references. Results: Weight centiles were higher than those of WHO and ARG. Height centiles were above the ARG and below the WHO ones. The greatest differences with ARG were seen before puberty and then declined up to age 18. In contrast, differences with WHO increased from puberty onwards. Conclusion: Compared with the ARG reference, linear growth of these schoolchildren shows a secular acceleration without substantial improvements in the adult height. In relation to WHO, the results suggest that around the adolescent growth spurt differences in linear growth between populations became larger, limiting the clinical usefulness of international growth references in adolescents. abstract_id: PUBMED:37140701 Population-based references for waist and hip circumferences, waist-to-hip and waist-to-height ratios for children and adolescents, and evaluation of their predictive ability. Childhood obesity is a public health problem globally as well as in Poland. This paper aimed to provide age- and sex-specific waist circumference, hip circumference, waist-to-height ratio and waist-to-hip ratio normative values for Polish children and adolescents aged 3 - 18 years for more precise monitoring of abdominal fat accumulation. 
The waist circumference, hip circumference, waist-to-height ratio and waist-to-hip ratio references were constructed with the lambda-mu-sigma (LMS) method using data from two nationally representative health surveys: the OLA study and the OLAF study, the largest available paediatric surveys in Poland, which provided measured height, weight, waist, hip and blood pressure for 22,370 children and adolescents aged 3-18 years. The predictive ability of the newly established references for overweight/obesity as defined by the International Obesity Task Force criteria and for elevated blood pressure was tested with receiver operating characteristic (ROC) analysis. Abdominal obesity cut-offs linked to adult cardiometabolic cut-offs were established. Reference values for waist circumference, hip circumference, waist-to-height ratio and waist-to-hip ratio are presented, as well as waist circumference, waist-to-height ratio and waist-to-hip ratio cut-off values linked to adults' cut-offs of cardiometabolic risk. The predictive value for overweight and obesity of the population-based waist, hip and waist-to-height ratio references was outstanding (area under the ROC curve > 0.95 in both sexes), whereas the predictive ability for elevated blood pressure was low (area under the ROC curve < 0.65). Conclusion: This paper presents the first waist, hip, waist-to-height ratio and waist-to-hip ratio references for Polish children and adolescents aged 3-18 years. The 90th and 95th percentiles and cut-offs linked to adult thresholds for cardiometabolic risk are proposed as cut-offs for abdominal obesity. What is Known: • Waist circumference, waist-to-height ratio and waist-to-hip ratio are used to assess abdominal obesity in children and adults. • In Poland, there are no abdominal obesity or hip circumference references for children and adolescents from 3 to 18 years of age. What is New: • Population-based references for central obesity indices and hip references for children and youth aged 3-18 years, and cardiometabolic risk thresholds for children and adolescents linked to adults' cut-offs, were established. abstract_id: PUBMED:31050656 Using height-corrected definition of metabolic syndrome in children and adolescents. Metabolic syndrome (MS) is common among children and adolescents. Age- and gender-specific references or age-, gender- and height-specific references were used in pediatric MS definitions. More recently, an increasing number of studies have documented that the ratio of waist circumference (WC) to height (WHtR) and of blood pressure to height (BPHR) are simple anthropometric indexes for the detection of obesity and hypertension in children and adolescents. For these reasons, a height-corrected MS definition was proposed, with WHtR and BPHR used as alternatives to WC and BP in the definition of MS. In the present review, we discuss the possibility of using the height-corrected MS definition for identifying MS in children. abstract_id: PUBMED:32654218 Swedish references for weight, weight-for-height and body mass index: The GrowUp 1990 Gothenburg study. Aim: To update the Swedish references for weight, weight-for-height and body mass index (BMI) considering the secular trend for height but not including that for weight. Methods: Longitudinal measures of height and weight were obtained (0-18 years) from 1418 (698 girls) healthy children from the GrowUp 1990 Gothenburg cohort born at term to non-smoking mothers and Nordic parents.
A total of 145 individuals with extreme BMI values relative to the GrowUp 1974 BMI SDS reference were excluded (0-2 years: ±4 SDS; >2 years: -3 SDS, +2.3 SDS). References were constructed using the LMS method. Results: The updated weight reference became similar to the GrowUp 1974 Gothenburg reference: BMI increased rapidly up to lower levels in the 1990 cohort during infancy/early childhood, and was similar in both groups in late childhood/adolescence, despite lower values at +2SDS. Compared with the WHO weight standard, median and -2SDS weight values were higher for the 1990 cohort, whereas +2SDS values were lower, resulting in a narrower normal range. Median values were greater and ±2SDS narrower for the 1990 cohort vs the WHO weight-for-height reference. International Obesity Task Force (IOTF) BMI lines defining overweight and underweight were added. Conclusion: We present updated references for weight, weight-for-height and BMI, providing a healthy goal for weight development when monitoring growth within healthcare settings. abstract_id: PUBMED:24498497 Weight, height and body mass index nomograms; early adiposity rebound in a sample of children in Tehran, Iran. Background: Assessing growth is a useful tool for defining the health and nutritional status of children. The objective of this study was to construct growth reference curves of Iranian infants and children (0-6 years old) and compare them with previous and international references. Methods: Weight, height or length of 2107 Iranian infants and children aged 0-6 years were measured using a cross-sectional survey in Tehran in 2010. Standard smooth reference curves for the Iranian population were constructed and compared to the multinational World Health Organization 2006 reference standards as well as a previous study from two decades ago. Results: Growth index references for Iranian girls have increased compared with data from two decades ago and are close to the international references. In boys, however, the increment was considerably larger, surpassing the international references. Not only did the index values change over the two decades, but the age at adiposity rebound also fell to around 3 years, which is an important risk factor for later obesity. Conclusions: Over two decades, the growth indices of Iranian children rose noticeably. Risk factors for later obesity are now apparent and demand immediate policy formulation. In addition, the reference curves presented in this paper can be used as a diagnostic tool for monitoring the growth of Iranian children. abstract_id: PUBMED:35568936 Syrian national growth references for children and adolescents aged 2-20 years. Background: During the past three decades, growth charts have become one of the principal tools for monitoring anthropometric development in both individuals and populations. Growth references by the CDC and other countries have been widely used in our hospitals and healthcare units for clinical assessment of children's development. The apparent overestimation and underestimation of many children's anthropometrics indicated the need to construct our own references. The objective of this study is to establish national growth references for the Syrian population aged 2-20 years. Methods: A multicenter cross-sectional sample of 13,548 subjects, aged 2-20 years, was recruited from various kindergartens, schools, and universities across the Syrian Arab Republic between February and May 2019.
Response variables (stature, weight, and BMI) were fitted against age using P-splines and three empirical distributions: Box-Cox T, Box-Cox Power Exponential, and Box-Cox Cole and Green. Residual diagnostic Q-tests and worm plots were used to check the validity of the fitted models. Results: Box-Cox T provided the best fit for stature-for-age, whereas Box-Cox Power Exponential provided the best fit for weight-for-age and BMI-for-age. Residual diagnostics revealed adequate model fitting. BMI cutoffs revealed an increased prevalence of obesity (4.5% and 3.66%) and overweight (20.1% and 19.54%) for boys and girls, respectively, in our population. Conclusions: Growth charts are now available for use in our hospitals and healthcare units. For 0-2-year-old children, we recommend using the World Health Organization's standards. abstract_id: PUBMED:20972688 Polish 2010 growth references for school-aged children and adolescents. Growth references are useful in monitoring a child's growth, which is an essential part of child care. The aim of this paper was to provide updated growth references for Polish school-aged children and adolescents and show the prevalence of overweight and obesity among them. Growth references for height, weight, and body mass index (BMI) were constructed with the lambda, mu, sigma (LMS) method using data from a recent, large, population-representative sample of school-aged children and adolescents in Poland (n = 17,573). The prevalence of overweight and obesity according to the International Obesity Taskforce definition was determined with the use of LMSGrowth software. The updated growth references for Polish school-aged children and adolescents were compared with the Polish growth references from the 1980s, the Warsaw 1996-1999 reference, and the German and 2000 CDC references. A positive secular trend in height was observed in children and adolescents from 7 to 15 years of age. A significant shift of the upper tail of the BMI distribution occurred, especially in Polish boys at younger ages. The prevalence of overweight or obesity was 18.7% and 14.1% in school-aged boys and girls, respectively. The presented height, weight, and BMI references are based on a current, nationally representative sample of Polish children and adolescents without known disorders affecting growth. Changes in the body size of children and adolescents over the last three decades suggest an influence of the changing economic situation on anthropometric indices. abstract_id: PUBMED:34114485 Validation of growth standards and growth references: A review of literature. The World Health Organization (WHO) growth standards provide the most recognized and widely accepted way of assessing child growth. To ensure their applicability, accuracy, and reliability, studies have validated the WHO growth standards against local populations and other internationally recognized growth references. We reviewed the outcomes of evaluations of the WHO growth standards and assessed the appropriateness of using these growth standards on a global level. We undertook a systematic quantitative review of studies published from 2011 to 2020 from multiple databases. Studies were included if they considered children aged 59 months and below and reported on validation of growth standards. Studies that validated the WHO growth standards against international growth references agreed on their superiority in identifying stunted, overweight, and obese children. However, they were less likely to identify underweight children.
None of the studies reviewed reported similar growth trajectories to WHO standards in all indicators considered. Regional differences in child growth were observed in comparison to WHO growth standards. Adoption of regional-specific standards increases the sensitivity of identifying children with adverse nutrition outcomes. Answer: No, children with overweight or obesity should not be excluded from height references. A study analyzing data from Belgian and Norwegian children found that excluding overweight and obese children had only a minor impact on the growth curves for length/height. The largest difference in mean height SD scores was -0.09 in the Belgian sample and -0.12 in the Norwegian sample, with a corresponding increase of up to 0.5% and 1.2% in the number of children >+2 SD. The study concluded that current Belgian and Norwegian growth references for length/height were largely unaffected by the current proportion of overweight and obese children, and therefore, there is no need for revised height charts that exclude overweight or obese children (PUBMED:26297699). This conclusion is supported by the fact that growth references are useful tools for monitoring a child's growth and nutritional status, and the inclusion of all children in these references provides a more accurate representation of the population's growth patterns. Additionally, other studies have focused on creating specific references for various anthropometric measures without excluding overweight or obese children, further suggesting that these children should be included in height references (PUBMED:37140701, PUBMED:31050656, PUBMED:32654218, PUBMED:24498497, PUBMED:35568936, PUBMED:20972688, PUBMED:34114485).
Instruction: A multilevel analysis of the relation of socioeconomic status to adolescent depressive symptoms: does school context matter? Abstracts: abstract_id: PUBMED:29171197 Socioeconomic Context as a Moderator in the Relationship between Body Mass Index and Depression in Europe. Background: Obesity and depression are two major health issues in contemporary societies. This study aims to address two fundamental questions: (1) what is the impact of macro-level adverse socioeconomic circumstances on depression? and (2) how do macro-level variations in the socioeconomic context affect the relationship between body mass index (BMI) and depression? Methods: Data from the 7th round of the European Social Survey were used, collected from a sample size of 37,623 participants and aggregated around a total of 20 countries. A random intercept multilevel model was constructed to study the variations in the relationship between depression and BMI. The contextual effect of risk of poverty, unemployment rate, and gross domestic product per capita were studied at the country level. Results: First, both unemployment and poverty risk were found to be positively associated with depressive disorders. Second, the results show that a higher risk of poverty at the macro level may increase the effect of BMI on depression in European countries. Conclusion: The present study provides new evidence suggesting that the obesity-depression relationship will be, on average, stronger in countries with poor socioeconomic conditions. Therefore, adverse socioeconomic contextual conditions may increase depression associated with obesity. abstract_id: PUBMED:14571218 A multilevel analysis of the relation of socioeconomic status to adolescent depressive symptoms: does school context matter? Objective: To determine whether the socioeconomic context of the school environment is associated with adolescent depressive symptoms independent of individual household income. Study design Data were drawn from a 1995 nationally representative study of 7th to 12th grade students. Multivariable linear regression at the school and individual levels assessed the relation between income and depressive symptoms. Multilevel modeling techniques were then used to understand how these factors are jointly associated with adolescent depressive symptoms. Participants: Adolescents (n=13,235) in grades 7 through 12 from 132 schools whose parent provided income information. Results: Linear regression analyses indicated that lower household income, average school income, and increasing school-level income inequality were significantly (P<.001) associated with depressive symptoms. Further examination of these relations through multilevel modeling indicated that both household income (P<.01) and average school income (P<.05) were significantly related to depressive symptoms after adjusting for covariates, with evidence for an interaction between the two. The impact of lower household income on depressive symptoms was approximately 2-fold greater for students attending a poor versus a rich school. Conclusions: School context is associated with adolescents' depressive symptoms, even after adjusting for individual-level factors. The school environment may partially buffer the adverse influence of lower household income on adolescent depressive symptoms. abstract_id: PUBMED:25653491 Individual and socioeconomic contextual effects on depressive symptom in Korea: multilevel analysis of cross-sectional nationwide survey. 
This study aimed to examine the relationship between individual factors, socioeconomic context and depressive symptoms in the Korean population. Data were drawn from the Korean Community Health Survey (KCHS), a nationwide survey collected from 253 local communities and including 230,715 adults aged 19 yr or over. To identify depressive symptoms, the Center for Epidemiologic Studies Depression scale (CES-D) was used. This study employed multilevel logistic regression to analyze the hierarchical data that included individual and community level variables. The results of this study showed that people in the highest level of community income had a higher risk of depressive symptoms compared with people in the lowest level (OR, 1.6; 95% CI, 1.4-1.9). In a chi-square test for trend, the prevalence of depressive symptoms increased significantly with the level of community income across all family income groups (P<0.001). Moreover, a significant interaction was found between household income and community mean income (OR, 0.98; 95% CI, 0.97-0.99). Among individual level variables, age, sex, education, income, living alone, and the number of illnesses were associated with depressive symptoms. This study identified that the level of community income has an inverse association with depressive symptoms, and that its effect is especially strong among low-income individuals. abstract_id: PUBMED:23077540 Socioeconomic inequalities in adolescent depression in South Korea: a multilevel analysis. Background: In recent years, South Korea has witnessed a sustained rise in the prevalence of adolescent depression. In the present study, we sought to investigate family and school environmental influences on adolescent depression. Methods And Findings: Middle and high school students (N = 75,066) were randomly selected respondents to a web-based survey and answered questions on their academic and socioeconomic backgrounds, parental support, parental education level, physical activities, lifestyle habits and their experience of depression in the past year. Two-level multilevel analysis was used to investigate the relationship between depression and individual (level 1) and school (level 2) factors. Girls reported having experienced depression in greater numbers than boys (43.96% vs. 32.03%). A significant association was found between adolescent depression experience and gender, grade, self-rated academic achievement, family affluence scale, parental support, parental education level, lifestyle habits, physical activity and sleep dissatisfaction. Students living with rich parents were more likely to be depressive, and higher maternal education was significantly associated with a higher probability of boys' depression experience. Low academic achievement was highly associated with the experience of depression. In school-level contexts, girls were found to be less likely to be depressive in girls-only schools. Conclusion: The adolescent depression experience is not only an individual phenomenon but is highly associated with other factors such as parents, peers, academic achievement, and even the gender mix in the school. Thus, prevention measures for youth depression need to focus on reducing parental pressure over academic performance and on establishing healthy inter-gender relationships within co-educational schools. abstract_id: PUBMED:38179954 The relation between economic stressors and higher education students' mental health during the initial outbreak of the COVID-19 pandemic.
Aims: The COVID-19 pandemic has had a large impact on the financial situation of higher education students, disproportionately affecting students with a low socioeconomic status (SES). This raises the question of whether economic stressors related to COVID-19 have aggravated existing socioeconomic inequalities in mental health. This study examined the relationship between economic stressors and students' depressive symptoms, and the role of students' SES and countries' socioeconomic conditions. Methods: Data from the COVID-19 International Student Well-being Study was used for multilevel analyses, with depressive symptoms as dependent variable. Three indicators measured SES: educational level of the parents, ability to borrow money from their social network, and struggling with financial resources prior to COVID-19. Results: Students with a low SES had more depressive symptoms, and those not able to borrow money and with parents without higher education were more exposed to a deterioration in their financial situation. Both economic stressors (reduction in working hours and a deterioration of their financial situation) were positively related to depressive symptoms. In addition, the positive relationship between a decrease in working hours and depressive symptoms was stronger in countries with a higher unemployment rate. Conclusions: We observed socioeconomic inequalities in students' mental health, which, in part, can be ascribed to a larger exposure to the economic stressors related to COVID-19 among students' with a low SES. The macroeconomic context also played a role, as the impact of a reduction in working hours on depressive symptoms was stronger in countries with poor economic conditions. abstract_id: PUBMED:32178245 Trajectories of Depressive Symptoms and Neighborhood Changes from Adolescence to Adulthood: Latent Class Growth Analysis and Multilevel Growth Curve Models. The role of neighborhood socioeconomic status (SES) and racial/ethnic composition on depression has received considerable attention in the United States. This study examines associations between trajectory patterns of neighborhood changes and depressive symptoms using data from Waves I-IV of the National Longitudinal Study of Adolescent to Adult Health. We used latent class growth analysis to determine the number and distribution of person-centered trajectories for neighborhood characteristics, and multilevel growth curve models to examine how belonging to each class impacted depression trajectories from ages 13 to 32 among non-Hispanic Whites (NHW), non-Hispanic Blacks (NHB), Hispanics, and non-Hispanic Others (NHO). The distribution of neighborhood SES classes across racial/ethnic groups suggests significant levels of economic inequality, but had no effect on depressive symptoms. A more complex picture emerged on the number and distribution of racial/ethnic composition latent class trajectories. Compared to NHB peers who lived in predominantly NHW neighborhoods from adolescence to adulthood, NHBs in more diverse neighborhoods had lower risk for depressive symptoms. Conversely, Hispanics living in neighborhoods with fewer NHWs had higher risk for depressive symptoms. Among NHOs, living in neighborhoods with a critical mass of other NHOs had a protective effect against depressive symptoms. abstract_id: PUBMED:30578943 Adding interactions to models of intersectional health inequalities: Comparing multilevel and conventional methods. 
Examining health inequalities intersectionally is gaining in popularity and recent quantitative innovations, such as the development of intersectional multilevel methods, have enabled researchers to expand the number of dimensions of inequality evaluated while avoiding many of the theoretical and methodological limitations of the conventional fixed effects approach. Yet there remains substantial uncertainty about the effects of integrating numerous additional interactions into models: will doing so reveal statistically significant interactions that were previously hidden or explain away interactions seen when fewer dimensions were considered? Furthermore, how does the multilevel approach compare empirically to the conventional approach across a range of conditions? These questions are essential to informing our understanding of population-level health inequalities. I address these gaps using data from the National Longitudinal Study of Adolescent to Adult Health by evaluating conventional and multilevel intersectional models across a range of interaction conditions (ranging from six points of interaction to more than ninety, interacting gender, race/ethnicity/immigration status, parent education, family income, and sexual identification), different model types (linear and logistic), and seven diverse dependent variables commonly examined by health researchers: body mass index, depression, general self-rated health, binge drinking, cigarette use, marijuana use, and other illegal drug use. Findings suggest that adding categories to intersectional analyses will tend to reveal new points of interaction. Stratum-level results from the multilevel approach are robust to cross-classification by school context. Conventional and multilevel approaches differ substantially when tested empirically. I conclude with a detailed consideration of the origin of these differences and provide recommendations for future scholarship of intersectional health inequalities. abstract_id: PUBMED:27422160 The association between socioeconomic deprivation and secondary school students' health: findings from a latent class analysis of a national adolescent health survey. Background: The aims of this study were to examine indicators of socioeconomic deprivation among secondary school students and to determine associations between household poverty, neighbourhood deprivation and health indicators. Methods: Data were from a nationally representative sample of 8500 secondary school students in New Zealand who participated in a health survey in 2012. Latent class analyses were used to group students by household poverty based on nine indicators of household socioeconomic deprivation: no car; no phone; no computer; their parent/s worry about not having enough money for food; more than two people sharing a bedroom; no holidays with their families; moving home more than twice that year; garages or living rooms used as bedrooms; and, no parent at home with employment. Multilevel generalized linear models were used to estimate the cross-level interaction between household poverty and neighbourhood deprivation with depressive symptoms, cigarette smoking and overweight/ obesity. Results: Three groups of students were identified: 80 % of students had low levels of household poverty across all indicators; 15 % experienced moderate poverty; and 5 % experienced high levels of poverty. Depressive symptoms and cigarette smoking were 2-3 times higher in the poverty groups compared to student's not experiencing poverty. 
There were also higher rates of overweight/ obesity among students in the poverty groups compared to students not experiencing poverty, but once covariates were accounted for the relationship was less clear. Of note, students experiencing poverty and living in affluent neighbourhoods reported higher levels of depressive symptoms and higher rates of cigarette smoking than students experiencing poverty and living in low socioeconomic neighbourhoods. This cross-level interaction was not seen for overweight/ obesity. Conclusions: Measures of household socioeconomic deprivation among young people should not be combined with neighbourhood measures of socioeconomic deprivation due to non-linear relationships with health and behaviour indicators. Policies are needed that address household poverty alongside efforts to reduce socioeconomic inequalities in neighbourhoods. abstract_id: PUBMED:38076598 Neighborhood Socioeconomic Disadvantage and White Matter Microstructure of the Arcuate Fasciculus and Uncinate Fasciculus in Adolescents. Background: Neighborhood- or area-level socioeconomic disadvantage is associated with neural alterations across the life span. However, few studies have examined the effects of neighborhood disadvantage on white matter microstructure during adolescence, an important period of development that coincides with increased risk for psychopathology. Methods: In 200 adolescents (ages 13-20 years; 54.5% female, 4% nonbinary) recruited from 2 studies enriched for early adversity and depression, we examined whether neighborhood socioeconomic disadvantage derived from census tract data was related to white matter microstructure in several major white matter tracts. We also examined whether depressive symptoms and sex moderated these associations. Results: Greater neighborhood socioeconomic disadvantage was associated with lower fractional anisotropy (FA) in the left arcuate fasciculus (β = -0.24, false discovery rate [FDR]-corrected p = .035) and right uncinate fasciculus (β = -0.32, FDR-corrected p = .002) above and beyond the effects of family-level socioeconomic status. Depressive symptoms significantly moderated the association between left arcuate fasciculus FA and both neighborhood (β = 0.17, FDR-corrected p = .026) and unemployment (β = 0.22, FDR-corrected p = .004) disadvantage such that these associations were only significant in adolescents who reported less severe depression. Sex did not moderate the association between socioeconomic disadvantage and FA in these tracts. Conclusions: Greater neighborhood socioeconomic disadvantage, particularly poverty and educational attainment levels, was associated with lower FA in the arcuate fasciculus and uncinate fasciculus above and beyond the effects of family-level measures of socioeconomic status. These patterns were only observed in adolescents with low levels of depression, suggesting that we must be cautious about generalizing these findings to youths who struggle with mental health difficulties. abstract_id: PUBMED:31023286 Measuring the impact of multiple discrimination on depression in Europe. Background: The study of the health effects of perceived discrimination based on ethnic and social traits has a long-standing and widespread tradition in epidemiological research, but less attention has been paid to the study of multiple discrimination, particularly its effects on mental health. 
The present work aims to analyse the association between multiple discrimination and depressive symptoms in Europe, and the impact of contextual socioeconomic circumstances on this relationship. Methods: In this study, data from the 7th Round of the European Social Survey were used. Given that the outcome variable, CES-D8, is a depression scale ranging from 0 to 24 and that individuals (level-1 units) are hierarchically clustered within countries (level-2 units), a linear multilevel model was carried out. Results: Our findings suggest that multiple discrimination increases the risk of suffering a depressive disorder; in addition, this work provides an important step forward in explaining and understanding how the relationship between multiple discrimination and depression might vary depending on the socioeconomic context. In particular, we can observe that differences in the prevalence of depressive symptoms across levels of multiple discrimination decrease as GDP per capita increases among European countries. Conclusion: This study is relevant since it provides new evidence on how the association between multiple discrimination and depression operates at the micro and macro level, which is fundamental to understanding how countries' macro-economic fluctuations may determine depressive disorders through the effect of single and combined forms of discrimination. Answer: Yes, school context does matter in the relation of socioeconomic status to adolescent depressive symptoms. A study that used multilevel modeling techniques to assess the relation between income and depressive symptoms in adolescents found that both household income and average school income were significantly related to depressive symptoms after adjusting for covariates. The impact of lower household income on depressive symptoms was approximately two-fold greater for students attending a poor versus a rich school, indicating that the socioeconomic context of the school environment is associated with adolescent depressive symptoms independent of individual household income (PUBMED:14571218). Additionally, other studies have found similar results, suggesting that socioeconomic inequalities in adolescent depression are not only an individual phenomenon but are also highly associated with factors such as the school environment. For instance, the experience of depression among adolescents is influenced by parental support, parental education level, lifestyle habits, and the gender mix in the school, with students living with richer parents more likely to be depressive (PUBMED:23077540). Furthermore, the socioeconomic context at both the individual and community levels has been shown to affect depressive symptoms, with community income level having an inverse association, particularly among low-income individuals (PUBMED:25653491). These findings underscore the importance of considering the broader socioeconomic context, including school and community environments, when addressing adolescent depressive symptoms and developing prevention measures.
Instruction: Can a patient educational book change behavior and reduce pain in chronic low back pain patients? Abstracts: abstract_id: PUBMED:15246305 Can a patient educational book change behavior and reduce pain in chronic low back pain patients? Background Context: This study was prompted by 1) the almost universal use of patient education as an initial or at least an ancillary step in the treatment of patients presenting with low back pain, 2) the relative dearth of studies evaluating the effectiveness of patient education and 3) the complete lack of support in the few existing studies for the efficacy of education in improving patients' long-term health status. Purpose: A feasibility study to evaluate the efficacy of an individualized biomechanical treatment educational booklet to effect improvement in health status. Study Design: A prospective, longitudinal cohort study. Patient Sample: Sixty-two subjects (35 female, 27 male), average age 42.4 years, reported a mean duration of back pain before inclusion of 10.4 years. However, because of attrition, only 48 subjects had complete data across the 18-month follow-up period. Outcome Measures: Outcome measures included pain status, number of back pain episodes, subject compliance with self-care behaviors, knowledge and opinion of booklet content. Methods: Volunteers with chronic low back pain were provided a copy of an individualized biomechanical treatment educational book and told they would undergo a written survey of its content 1 week after reading the book. Subjects' health status at 9 and 18 months was evaluated using a structured telephone interview. Results: One week after the 62 subjects, with an average of 10.4 years of symptoms and extensive use of the medical system, finished reading the index book, 51.62% reported noticeable improvement in their pain, their content comprehension was good and opinions about the text were generally positive. At 9-month follow-up, there was statistically significant and clinically relevant improvement in reported pain magnitude (p< .03), number of episodes (p< .0001) and perceived benefit (p< .04). At 18-month follow-up, these gains held or demonstrated even further improvement. Conclusion: This study's results suggest that the Treat Your Own Back book may have considerable efficacy in helping readers decrease their own low back pain and reduce the frequency of, or even eliminate, their recurrent episodes. These findings also justify conducting a randomized controlled clinical trial to assess this book's efficacy in improving health status in subjects with low back pain with the study design including internal controls to minimize bias issues and a wider range of outcomes, including measures of pain, function, disability, patient satisfaction, utilization of health care services and psychosocial measures. abstract_id: PUBMED:25159025 Design of an educational tool for Primary Care patients with chronic non-specific low back pain Current scientific evidence on the management of chronic non-specific low back pain highlights the benefits of physical exercise. This goal is frequently undermined due to lack of education of the subjects on the multifactorial, benign, and non-specific nature of low back pain, which can lead to a chronic disease with genuine psychosocial risk factors. Its influence may not only interfere with individual decision to adopt more adaptive coping behaviors, but also with the endogenous mechanisms of pain neuromodulation. 
Thus, educational strategies and the control of these factors have become important objectives to be incorporated into the management of the disorder and into research guidelines. This paper presents the theoretical models and the scientific basis on which the design of an educational tool for patients with chronic non-specific low back pain treated in Primary Care physiotherapy was based. Its structure, content and objectives are also presented. abstract_id: PUBMED:31839351 Effectiveness of communicative and educative strategies in chronic low back pain patients: A systematic review. Objective: To investigate the effectiveness of communicative and educative strategies on 1) patients' low back pain awareness/knowledge, 2) maladaptive behavior modification and 3) compliance with exercise in patients with chronic low back pain. Methods: A systematic review was conducted. Searches were performed on 13 databases. Only randomized controlled trials enrolling patients ≥ 18 years of age were included. Risk of bias was assessed with the Cochrane Collaboration's tool, and interrater agreement between authors for full-text selection was evaluated with Cohen's Kappa. No meta-analysis was performed and a qualitative analysis was conducted. Results: 24 randomized controlled trials whose interventions included communicative and educative strategies were selected. Most of the studies were judged as low risk of bias and Cohen's Kappa was excellent (κ = 0.822). The interventions addressed were cognitive behavioral therapy as the sole treatment or combined with other treatments (multimodal interventions), coaching, mindfulness, pain science education, self-management, graded activity and graded exposure. Conclusions, Practice Implication: Patients' low back pain awareness/knowledge is still a grey area of the literature. Pain science education, graded exposure and multimodal interventions are the most effective for behavior modification and compliance with exercise, with benefits also in the long term, while self-management, graded activity and coaching provide only short-term or no benefits. abstract_id: PUBMED:31186099 Study of the relationship between adherence to back school and coping with pain in patients with chronic low back pain. Introduction: Back School (BS) is a treatment modality for patients with chronic low back pain that combines educational content with other therapeutic measures such as exercise. If followed, it is effective, but it is well known that it is difficult to make changes to daily habits. This study evaluated whether patients completing BS with a predominance of active pain-coping strategies adhered better to the recommendations of BS. Material And Method: Prospective observational study to evaluate BS adherence in patients with chronic low back pain. Adult patients attending BS were included in the study. Patients whose educational level, concomitant pathology or personal situation prevented them from completing the study were excluded. The patients were evaluated at the beginning and after the third session of BS. The variables studied were adherence to exercises and recommendations, coping with pain, pain intensity, functional disability and other sociodemographic characteristics. Results: A total of 67.2% of the 116 included patients acknowledged that they did not perform the exercises and recommendations of BS every day. No relationship was established between patients with active strategies for coping with pain and adherence to BS.
Functional capacity and pain intensity did not differ between adherent and non-adherent patients. Conclusions: This study did not demonstrate that adherence to the exercises and habits taught in BS is related to pain-coping strategies, pain intensity or functional capacity in patients with chronic low back pain. abstract_id: PUBMED:30791879 The influence of an educational internet-based intervention in the beliefs and attitudes of primary care professionals on non-specific chronic low back pain: study protocol of a mixed methods approach. Background: Personal convictions regarding pain give rise to misbeliefs in health professionals, which can influence patients who suffer from non-specific chronic low back pain. Likewise, health professionals' beliefs affect their advice and attitudes towards patients' treatment, becoming a possible cause of greater disability. The development of educational interventions based on the best scientific evidence in the neurophysiology of pain could be a way to provide information and advice to primary care health professionals to change their cognition towards chronic non-specific low back pain. The use of Information and Communication Technologies allows the development of websites, which might be an effective resource for modifying primary care professionals' misbeliefs and attitudes regarding the origin and meaning of non-specific chronic low back pain, and in turn their attitudes towards patients' treatment. Methods: The aim of this project is to identify misbeliefs and attitudes of primary care physicians and nurses about chronic non-specific low back pain in order to develop a web-based educational tool using different educational formats and gamification techniques. This study has a mixed-method sequential exploratory design. The participants are medical and nursing staff working in primary care centers in the city of Lleida, Spain. For the qualitative phase of this study, the authors will use personal semi-structured interviews. For the quantitative phase, the authors will use an experimental study design. Subjects will be randomly allocated using a simple random sampling technique. The intervention group will have access to the website, where they will find information related to non-specific chronic low back pain based on the information obtained in the qualitative phase. The control group will have access to a video explaining the clinical practice guidelines on low back pain. Discussion: This study has been designed to explore and modify the beliefs and attitudes about chronic low back pain of physicians and nurses working in primary care settings, using a web-based educational tool with different educational formats and gamification techniques. The aim of the educational intervention is to change their knowledge about the origin and meaning of pain, with the result of reducing their misbeliefs and fear-avoidance attitudes. Trial Registration: ClinicalTrials.gov Identifier: NCT02962817. Date of registration: 11/09/2016. abstract_id: PUBMED:29035178 Patient Perspectives on Communication with Primary Care Physicians about Chronic Low Back Pain. Objectives: Chronic low back pain (CLBP) is a common health problem with challenges for providing satisfactory care. This study was undertaken to identify opportunities to improve key aspects of physicians' communications with CLBP-affected patients.
Methods: A series of 3 focus groups, each with 7 to 11 patients with CLBP, were recruited from primary care settings and grouped by risk level of reduced function resulting from back pain, to elicit perspectives about interactions with their primary care physicians. Analysis of focus group transcripts used an iterative process based on a thematic approach and a priori concepts. Results: A total of 28 patients participated in the focus groups. Patient comments about communicating with physicians around CLBP fit into themes of listening and empathy, validating pain experiences, conducting effective CLBP assessment, providing clear diagnosis and information, and collaboratively working on treatment. Patients shared that physicians can foster positive interactions with CLBP-affected patients by sharing personal experiences of chronic pain, being truthful about not having all the answers and being clear about how patients can benefit from referrals, reviewing the patient's previous treatments before beginning conversations about treatment options, providing follow-up instructions, giving patients a diagnosis beyond "chronic pain," and explaining the role of imaging in their care. Conclusion: This study provides specific steps that physicians in the US can take to improve physician-patient interactions during primary care visits pertaining to CLBP. The findings could inform physician training, development of educational materials for patients, and future research. abstract_id: PUBMED:16365466 Comparing yoga, exercise, and a self-care book for chronic low back pain: a randomized, controlled trial. Background: Chronic low back pain is a common problem that has only modestly effective treatment options. Objective: To determine whether yoga is more effective than conventional therapeutic exercise or a self-care book for patients with chronic low back pain. Design: Randomized, controlled trial. Setting: A nonprofit, integrated health care system. Patients: 101 adults with chronic low back pain. Intervention: 12-week sessions of yoga or conventional therapeutic exercise classes or a self-care book. Measurements: Primary outcomes were back-related functional status (modified 24-point Roland Disability Scale) and "bothersomeness" of pain (11-point numerical scale). The primary time point was 12 weeks. Clinically significant change was considered to be 2.5 points on the functional status scale and 1.5 points on the bothersomeness scale. Secondary outcomes were days of restricted activity, general health status, and medication use. Results: After adjustment for baseline values, back-related function in the yoga group was superior to the book and exercise groups at 12 weeks (yoga vs. book: mean difference, -3.4 [95% CI, -5.1 to - 1.6] [P < 0.001]; yoga vs. exercise: mean difference, -1.8 [CI, -3.5 to - 0.1] [P = 0.034]). No significant differences in symptom bothersomeness were found between any 2 groups at 12 weeks; at 26 weeks, the yoga group was superior to the book group with respect to this measure (mean difference, -2.2 [CI, -3.2 to - 1.2]; P < 0.001). At 26 weeks, back-related function in the yoga group was superior to the book group (mean difference, -3.6 [CI, -5.4 to - 1.8]; P < 0.001). Limitations: Participants in this study were followed for only 26 weeks after randomization. Only 1 instructor delivered each intervention. Conclusions: Yoga was more effective than a self-care book for improving function and reducing chronic low back pain, and the benefits persisted for at least several months. 
abstract_id: PUBMED:35095623 Prevalence and Associated Factors of Depressive Symptoms Among Patients With Chronic Low Back Pain: A Cross-Sectional Study. Background: Few studies have focused on depressive symptoms among patients with chronic low back pain in China. The aim of this cross-sectional study was to assess the prevalence and associated factors of depressive symptoms in patients with chronic low back pain. Methods: From May to August 2021, 1,172 patients with chronic low back pain were recruited in China. Depressive symptoms were assessed through the Patient Health Questionnaire. Associations of demographic characteristics, clinical characters and social-psychological factors with depressive symptoms were investigated among patients with chronic low back pain. Results: The prevalence of depressive symptoms was 25.00%. Logistic regression analysis found that duration of pain in 1-5 years (1-3 years: OR = 2.91, 95%CI: 1.65-5.14, 3-5 years: OR = 3.09, 95%CI: 1.55-6.15) and more severe pain (OR = 1.13, 95%CI: 1.10-1.17) were associated with higher risks of depressive symptoms. Better family function (good family function: OR = 0.25, 95%CI: 0.15-0.41, moderate family dysfunction: OR = 0.47, 95%CI: 0.29-0.77) and higher pain self-efficacy (OR = 0.94, 95%CI: 0.93-0.95) were associated with lower risks of depressive symptoms. Conclusion: Patients with chronic low back pain have a high prevalence of depressive symptoms in China. Duration of pain, pain severity, family function and pain self-efficacy were predictors of depressive symptoms among chronic low back pain patients in China. Early identification of the associated factors may be helpful for the timely management of depressive symptoms. abstract_id: PUBMED:11927757 Psychological correlates of pain behavior in patients with chronic low back pain. Pain behaviors that are excessive for the degree of known physical disease are common in patients with chronic low back pain and are frequently assumed to arise from a comorbid depressive illness. Although some studies have confirmed an association between depression and excessive pain behavior, methodologic problems (such as the use of depression ratings that also recorded symptoms attributable to physical disease) make interpretation of this finding difficult. We recruited 54 consecutive patients with chronic (>6 months) low back pain from a hospital clinic. Subjects completed self-rated assessments of anxiety and depression (Hospital Anxiety and Depression Scale) designed to be minimally affected by physical symptoms, along with assessments of disability (ODQ), pain (visual analogue scale), pain behavior (Waddell checklist), and physical impairment. Seventeen subjects (31%) exhibited excessive pain behavior. Overall, they were no more depressed or anxious than the remainder, although men with excessive pain behavior showed a trend toward being more depressed. Patients with excessive pain behavior were more disabled (self-rated and observer-rated), reported greater pain, and were more likely to be female and to have pain of shorter duration. Pain behavior did not correlate with anxiety or depression, but correlated with measures of disability and pain intensity. Factor analysis revealed that physical disability, pain intensity, and pain behavior loaded heavily on the first factor. Anxiety and depression loaded together on a separate factor. 
We conclude that pain behaviors were not related to anxiety or depression in our group, although gender differences between groups could have contributed to our negative findings. Pain behaviors may influence other physical measures. Further studies are required to investigate the relation between depression and pain behavior while controlling for gender differences. abstract_id: PUBMED:37458022 Clarification of the "pain neuroscience education" concept in the management of patients with persistent low back pain: A scoping review. Background: Patient education is a recommended treatment strategy for persistent low back pain (PLBP). Pain neuroscience education (PNE) is an emerging concept with boundaries still unclear. Objective: To clarify the PNE concept and identify its key characteristics in PLBP management. Methods: A systematic search was conducted using the following databases: Pubmed, ScienceDirect, Google Scholar, Cochrane, and Pedro. Inclusion Criteria: publications in English or French on pain neuroscience education and chronic low back pain, educational books cited in white literature. Three reviewers independently selected eligible studies for final inclusion. Numerical analysis and narrative synthesis were carried out from the extracted data. Results: From 919 identified publications, 54 were selected. Ten educational resources were added. PNE refers to a theoretical framework, a specific educational intervention, and an overall care approach. It is characterized by the intention to help the patient reconceptualize their PLBP from a tissue injury marker to a protective, neurobiological perspective, and then to increase adherence to biopsychosocial rehabilitation and normalize cognitive-behavioral responses. Pain sciences concepts are presented with an optimization of learning strategies. Conclusion: PNE stands out from other educational and cognitive behavioral approaches through its objective of changing the pain concept. Answer: Yes, a patient educational book can change behavior and reduce pain in chronic low back pain patients. A study evaluating the efficacy of an individualized biomechanical treatment educational booklet found that 51.62% of subjects reported noticeable improvement in their pain one week after reading the book. At 9-month follow-up, there was statistically significant and clinically relevant improvement in reported pain magnitude, number of episodes, and perceived benefit. At 18-month follow-up, these gains held or demonstrated even further improvement (PUBMED:15246305). This suggests that educational books like the "Treat Your Own Back" book may have considerable efficacy in helping readers decrease their own low back pain and reduce the frequency of, or even eliminate, their recurrent episodes. Moreover, the concept of Pain Neuroscience Education (PNE) is characterized by the intention to help patients reconceptualize their persistent low back pain from a tissue injury marker to a protective, neurobiological perspective, which can increase adherence to biopsychosocial rehabilitation and normalize cognitive-behavioral responses (PUBMED:37458022). This indicates that educational interventions, including books that provide pain science education, can be effective in managing chronic low back pain by altering patients' understanding and behavior related to their pain.
Instruction: Are patients' sedation preferences at gastroscopy influenced by preceding patients' decisions? Abstracts: abstract_id: PUBMED:15521847 Are patients' sedation preferences at gastroscopy influenced by preceding patients' decisions? Background: When patients choose sedation or no sedation for gastroscopy, it is unknown whether the decision and experience of preceding patients on the same endoscopy list influence such decisions. Aim: To address this question in an endoscopy unit, where pre- and postprocedure patients are free to communicate. Methods: The order and sedation decisions of 503 out-patients attending diagnostic gastroscopy lists were analysed. One hundred patients completed a preprocedural questionnaire about sedation preferences. Results: Three hundred and fifteen (63%) patients chose no sedation. Men were more likely to be unsedated, 170 (72%), than women 158 (59%) (chi(2) = 9.1, P < 0.01). Age did not influence decisions. If the first patient on an endoscopy list requested sedation, 36% of subsequent patients were also sedated. This was similar to 38% of subsequently sedated patients on lists where the first patient was unsedated. Similar proportions of subsequent patients requested sedation when comparing lists where the first two patients were both sedated or both unsedated and when comparing lists where the first three patients were all sedated or all unsedated. Ninety per cent of the patients completing preprocedural questionnaires had made sedation choices before arrival; the remaining undecided patients had unsedated endoscopies. Conclusions: Patients' sedation decisions are not influenced by preceding patients' decisions. Most patients attending for out-patient gastroscopy have already decided about sedation. abstract_id: PUBMED:34307602 Prospective, randomized comparison of two supplemental oxygen methods during gastroscopy with propofol mono-sedation in obese patients. Background: Hypoxemia is a common complication in obese patients during gastroscopy with sedation. The Wei nasal jet tube (WNJT) is a new special nasopharyngeal airway with the ability to provide supraglottic jet ventilation and oxygen insufflation via its built-in wall channels. The aim of this study was to compare the efficacy and safety of the WNJT vs a nasal cannula for supplemental oxygen during gastroscopy with propofol mono-sedation in obese patients. Aim: To compare the efficacy and safety of the WNJT vs a nasal cannula for supplemental oxygen during gastroscopy with propofol mono-sedation in obese patients. Methods: A total of 103 obese patients with a body mass index of 30 kg/m2 or more undergoing elective gastroscopy under propofol mono-sedation were randomly assigned to receive supplemental oxygen at 5 L/min through either a WNJT (WNJT group, n = 51) or a nasal cannula (nasal cannula group, n = 52). The lowest pulse oxygen saturation (SpO2) and mild and severe hypoxemia during gastroscopy were recorded. The primary outcome was the incidence of hypoxemia. Results: The lowest SpO2 during gastroscopy with propofol mono-sedation was significantly increased in the WNJT group compared with the nasal cannula group. The incidence of mild hypoxemia and total incidence of hypoxemia were significantly lower in the WNJT group than in the nasal cannula group. Other than a higher incidence of epistaxis in the WNJT group, the occurrence of adverse events was similar between the devices. 
While neither device demonstrated a statistically significant difference in satisfaction among patients, the WNJT did result in improved satisfaction among anesthetists and physicians. Conclusion: During gastroscopy with propofol mono-sedation in obese patients, the WNJT, when compared with a nasal cannula for supplemental oxygen, can significantly reduce the occurrence of hypoxemia and improve both arterial oxygenation and satisfaction among anesthetists and physicians. The use of the WNJT may, however, lead to epistaxis in a few patients. In view of this clinically acceptable risk-benefit ratio, the WNJT may be recommended as an alternative tool for supplemental oxygen for the prevention of hypoxemia during gastroscopy with propofol mono-sedation in obese patients. abstract_id: PUBMED:38102399 Effect of moderate versus deep sedation on recovery following outpatient gastroscopy in older patients: a randomized controlled trial. Background: Although gastrointestinal endoscopy with sedation is increasingly performed in older patients, the optimal level of sedation remains open to debate. In this study, our objective was to compare the effects of moderate sedation (MS) and deep sedation (DS) on recovery following outpatient gastroscopy in elderly patients. Methods: In this randomized, partially blinded, controlled trial, we randomly divided 270 patients older than 60 years who were scheduled for elective outpatient gastroscopy into the MS or DS group based on the Modified Observer's Assessment of Alertness/Sedation (MOAA/S) scale. The primary outcome was the duration of stay in the post-anesthesia care unit (PACU). Secondary outcomes included the duration of the total hospital stay, frequency of retching, bucking, and body movements during the examination, endoscopist and patient satisfaction, and sedation-associated adverse events during the procedure. Results: A total of 264 patients completed the study, of whom 131 received MS and 133 received DS. MS was associated with a shorter PACU stay [16.15 ± 9.01 min vs. 20.02 ± 11.13 min, P < 0.01] and total hospital stay [27.32 ± 9.86 min vs. 30.82 ± 12.37 min, P < 0.05], lesser hypoxemia [2.3% (3/131) vs. 12.8% (17/133), P < 0.01], use of fewer vasoactive drugs (P < 0.001), and more retching (P < 0.001). There was no difference in the incidence of bucking and body movements or endoscopist and patient satisfaction between the two groups. Conclusion: Compared to deep sedation, moderate sedation may be a preferable choice for American Society of Anesthesiologists (ASA) Grade I-III elderly patients undergoing outpatient gastroscopies, as demonstrated by shorter PACU stays and total hospital stays, lower sedation-associated adverse events, and similar levels of endoscopist and patient satisfaction. abstract_id: PUBMED:37387195 Comparison of 95% effective dose of remimazolam besylate and propofol for gastroscopy sedation on older patients: A single-centre randomized controlled trial. Aims: Advanced age is an important risk factor for adverse events during procedural sedation. Remimazolam is safe and effective in gastroscopic sedation. However, the ideal dose and application for older patients are not well known. We aim to investigate its 95% effective dose (ED95) for older patients undergoing gastroscopy and to assess its safety and efficacy, with propofol as the comparison. Methods: The trial consists of 2 parts, patients aged >65 years and scheduled for outpatient painless gastroscopy were enrolled. 
In the first part, Dixon's up-and-down methodology was used to determine the ED95 of remimazolam besylate and propofol for gastroscopic insertion, in combination with 0.2 μg/kg remifentanil. In the second part, patients in each group received 0.2 μg/kg remifentanil and ED95 dose of the study drugs for sedation induction, supplemental doses were added to maintain sedation depth when necessary. The primary outcome was the incidence of adverse events. The secondary outcome was the recovery time. Results: The ED95 of remimazolam besylate and propofol induction were 0.2039 (95% confidence interval 0.1753-0.3896) mg/kg and 1.9733 (95% confidence interval 1.7346-3.7021) mg/kg respectively. Adverse events were reported in 26 (40.6%) patients in the remimazolam group and 54 (83.1%) in the propofol group (P < .0001), whereas the remimazolam group presented a higher incidence of hiccups (P = .0169). Besides, the median time to awakening was approximately 1 min shorter with remimazolam than with propofol (P < .05). Conclusion: For older patients undergoing gastroscopy, the ED95 dose of remimazolam is a safer alternative than propofol when inducing the same sedation depth. abstract_id: PUBMED:34052943 Anesthetic strategy for obese patients during gastroscopy: deep sedation or conscious sedation? A prospective randomized controlled trial. Objective: This paper aims to compare the incidence of SpO2 values < 95% and < 90% of the obese patients between conscious sedation and deep sedation and whether conscious sedation was superior to the deep sedation for obese patients during diagnostic gastroscopy. Methods: Obese patients undergoing diagnostic gastroscopy were randomly assigned to two different intervention groups: group CS (conscious sedation) or group DS (deep sedation). Group CS patients were managed by conscious sedation [Modified Observer Assessment of Alertness/Sedation (MOAA/S) at 4-5] protocol, and group DS patients were managed by deep sedation (MOAA/S at ≤ 2) protocol. Propofol and sufentanil (0.1 and 0.05 mg/kg) were, respectively, infused for sedation and analgesia in CS and DS protocols. The primary endpoints were to compare the incidence of SpO2 values < 95% and < 90% of the patients between the two groups. The incidence of successful sedation, satisfaction scores of patients and endoscopist were assessed as secondary endpoints. Results: 115 obese patients (59 in group CS and 56 in group DS) were enrolled in this study. The incidences of SpO2 < 95% and < 90% in group CS (42.4% and 6.8%) were significantly lower than those in group DS (69.6% and 19.6%, with P = 0.003 and 0.041, respectively). The incidence of successful sedation was similar between groups (86.4% vs 89.3%, P = 0.641). Patient satisfaction scores and endoscopist satisfaction scores were both similar between groups (P = 0.548 and 0.171). Conclusion: Conscious sedation with propofol and sufentanil (0.1 mg/kg) reduced the incidence of hypoxic events without affecting gastroscopy procedure and satisfaction compared with the deep sedation for obese patients during diagnostic gastroscopy. Trial Registration: ChiCTR-1900024894; registration date, July 31, 2019. http://www.chictr.org.cn . abstract_id: PUBMED:26358723 Sedation for gastroscopy: Is it an adequately understood and informed choice? Introduction: Sedation uptake rates for oesophagogastroduodenoscopy (OGD) vary greatly. Issues concerning adequate information and consent have been raised. Additionally, patient comprehension of sedation options is inconsistent. 
Methods: A closed-ended questionnaire was created and delivered to assess patient understanding regarding sedation prior to OGD. The questionnaire was based on British Society of Gastroenterology guidelines. Results: One hundred and eleven patients were recruited. 90% of the sedated and 73% of the unsedated patients were satisfied with their respective decisions (OR 0.283, p = 0.01). 65% were unaware of basic differences between conscious sedation and general anesthesia, and 37% were unaware that driving is permitted after having throat spray alone. The most informed of the age groups had the lowest uptake of sedation and the least informed had the highest uptake. Conclusion: The decision to undergo gastroscopy with or without sedation is not a sufficiently informed one. This study highlights the need for the widespread dissemination of good-quality information to better inform patients regarding sedation prior to OGD. abstract_id: PUBMED:37436589 Sedation with a 1:1 mixture of etomidate and propofol for gastroscopy in hypertensive elderly patients. We hypothesized that the combined use of etomidate and propofol for endoscopic sedation in elderly hypertensive patients could reduce adverse reactions while providing ideal sedation. To validate our hypothesis, we conducted a prospective, randomized, controlled, double-blinded study. A total of 360 elderly hypertensive patients scheduled for gastroscopy at our hospital were enrolled in the study, of whom 328 completed the trial. The patients were randomly assigned to one of three groups: the propofol group (group P), the etomidate group (group E), or the propofol-etomidate combination group (mixed at a ratio of 1:1, group PE). We collected and analyzed the cardiopulmonary effects and side effects in each group. Regardless of the sedation drug used, the systolic blood pressure, mean blood pressure, and heart rate of the patients involved were significantly affected. Oxygen desaturation and injection pain were more common in group P compared to group E (33.6% vs. 14.8%, 31.8% vs. 2.7%, both P < 0.01) and group PE (33.6% vs. 13.6%, 31.8% vs. 6.4%, both P < 0.01). The incidence of myoclonus in the PE group was lower than that in the E group (10.9% vs. 61.2%, P < 0.01). Our results demonstrated that the combined use of etomidate and propofol appeared to maintain cardiopulmonary stability with minimal side effects in older hypertensive patients scheduled for gastroscopy, which further implied that this sedation strategy could be a safe and pain-free option for managing patients undergoing gastroscopy, particularly those at a higher risk of adverse cardiovascular events. abstract_id: PUBMED:34115199 Sedation strategy for gastroscopy in obese patients. N/A abstract_id: PUBMED:36324081 The effect of varying inhaled oxygen concentrations of high-flow nasal cannula oxygen therapy during gastroscopy with propofol sedation in elderly patients: a randomized controlled study. Background: Despite evidence that high-flow nasal cannula oxygen therapy (HFNC) promotes oxygenation, its application in sedated gastroscopy in elderly patients has received little attention. This study investigated the effect of different inhaled oxygen concentrations (FiO2) of HFNC during sedated gastroscopy in elderly patients.
Methods: In a prospective randomized single-blinded study, 369 outpatients undergoing regular gastroscopy with propofol sedation delivered by an anesthesiologist were randomly divided into three groups (n = 123): nasal cannula oxygen group (Group C), 100% FiO2 of HFNC group (Group H100), and 50% FiO2 of HFNC (Group H50). The primary endpoint in this study was the incidence of hypoxia events with pulse oxygen saturation (SpO2) ≤ 92%. The secondary endpoints included the incidence of other varying degrees of hypoxia and adverse events associated with ventilation and hypoxia. Results: The incidence of hypoxia, paradoxical response, choking, jaw lift, and mask ventilation was lower in both Group H100 and Group H50 than in Group C (P < 0.05). Compared with Group H100, Group H50 showed no significant differences in the incidence of hypoxia, jaw lift and mask ventilation, paradoxical response, or choking (P > 0.05). No patients were mechanically ventilated with endotracheal intubation or found to have complications from HFNC. Conclusion: HFNC prevented hypoxia during gastroscopy with propofol in elderly patients, and there was no significant difference in the incidence of hypoxia when FiO2 was 50% or 100%. Trial Registration: This single-blind, prospective, randomized controlled trial was approved by the Ethics Committee of Nanjing First Hospital (KY20201102-04) and registered in the China Clinical Trial Center (20/10/2021, ChiCTR2100052144) before patient enrollment. All patients signed an informed consent form. abstract_id: PUBMED:16045027 Two-stage administration sedation method for gastroscopy in hypoxia predisposed patients Objective: To explore the safety and the efficacy of a two-stage administration sedation method (with propofol plus midazolam) in painless gastroscopy for hypoxia predisposed patients. Methods: Sixty-four hypoxia predisposed patients (with snoring or expectoration) were divided into two groups and received sedation with either the two-stage administration method or the routine continuous administration method before gastroscopy. We analyzed the changes in SBP, DBP, SpO2, sedation level, the degree of pharyngeal discomfort, recovery time, dose of propofol, and the degree of amnesia for the discomfort and for the gastroscopy procedure, to evaluate the safety and the efficacy of the two methods in painless gastroscopy for hypoxia predisposed patients. Results: Compared with the routine continuous administration sedation method, the two-stage administration sedation method had little influence on SpO2, so it reduced the incidence of respiratory arrest (P < 0.01). Gastroscopy could be achieved at a lower sedation level with the two-stage administration method (P < 0.01), so the recovery time was shortened (P < 0.01). The degree of pharyngeal discomfort in patients given the two-stage administration sedation method was higher than in patients given the routine continuous administration sedation method (P < 0.01), and the dose of propofol used for sedation decreased (P < 0.01). The degree of amnesia for the discomfort and the changes in SBP and DBP were rated similarly in both groups (P > 0.05), whereas the degree of amnesia for the gastroscopy procedure was significantly lower in the two-stage administration group than in the routine continuous administration group (P < 0.01). Conclusion: The two-stage administration sedation method is a new, effective and safe sedation method for gastroscopy in hypoxia predisposed patients.
Answer: Patients' sedation preferences at gastroscopy are not influenced by preceding patients' decisions. A study analyzing the order and sedation decisions of 503 out-patients attending diagnostic gastroscopy lists found that most patients had already decided about sedation before arrival. The study observed that if the first patient on an endoscopy list requested sedation, 36% of subsequent patients were also sedated, which was similar to the 38% of subsequently sedated patients on lists where the first patient was unsedated. The proportions of subsequent patients requesting sedation did not significantly differ when comparing lists with varying numbers of initially sedated or unsedated patients. Additionally, 90% of the patients completing preprocedural questionnaires had made sedation choices before arrival, and the remaining undecided patients had unsedated endoscopies (PUBMED:15521847).
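For readers who want to reproduce the kind of between-group comparison reported in PUBMED:15521847 (for example, the gender difference in sedation uptake, chi(2) = 9.1), the sketch below runs a Pearson chi-square test on a 2x2 table. The counts are approximate back-calculations from the abstract's percentages and are assumptions for illustration only, not the study's raw data or analysis code.

# Minimal sketch, assuming counts back-calculated from the percentages in PUBMED:15521847;
# not the study's raw data or analysis code.
from scipy.stats import chi2_contingency

#                 unsedated  sedated
table = [[170, 66],    # men   (~236 of the 503 patients, assumed)
         [158, 109]]   # women (~267 of the 503 patients, assumed)

# Pearson chi-square without Yates continuity correction
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.4f}")  # ~9.1, in line with the reported chi(2) = 9.1, P < 0.01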
Instruction: Do antidepressants change personality? Abstracts: abstract_id: PUBMED:22842025 Do antidepressants change personality?--a five-year observational study. Background: Whether antidepressants influence personality is a major clinical and societal issue due to their widespread use. In an observational study, we investigated whether depressive patients' neuroticism and extraversion scores covary with antidepressant pharmacotherapy, and if so, whether this remains significant after accounting for depressive or anxiety symptoms. Methods: Major depressive disorder patients (N=237) were interviewed at up to four time-points in a five-year prospective longitudinal study. Changes in neuroticism plus extraversion scores were compared with changes in antidepressant pharmacotherapies and depressive plus anxiety symptoms to uncover any covariation between them. Autoregressive path models were used to examine this covariation at the sample level. Within-subject change was estimated using a random-effects latent change model. Results: Significant covariation is present in the change trajectories between personality scores and depressive symptoms; declining depression scores were associated with rising extraversion and declining neuroticism. Although the personality scores of many patients changed significantly over the five-year study, none of these changes were associated with changes in antidepressant pharmacotherapy. Limitations: The study covered only two dimensions of personality. Single drug-specific analysis could not be done. Antidepressant blood levels were not measured. Conclusion: No evidence emerged for significant covariation of antidepressant pharmacotherapy with neuroticism or extraversion scores. By contrast, changes in both personality dimensions were associated with changes in depressive symptoms, those in neuroticism also in anxiety symptoms. If antidepressants influence these personality dimensions, the effect size is likely markedly smaller than that of the disorders for which they are prescribed. abstract_id: PUBMED:28946789 Personality, personality disorders, and the process of change. Objective: The present paper elaborates a process perspective of change in psychotherapy for personality disorders (PDs). Firstly, the paper reviews the literature of mechanisms of change in treatments of PD, with the main focus on emotional processing and socio-cognitive processing. Secondly, it proposes an illustrative case-series analysis of eight cases, drawn from a mediation analysis conducted within the context of a randomized controlled trial for borderline personality disorder (BPD). Method: As such, cases with good and poor outcomes are compared, as are cases with poor and good intake features and cases with poor and good process markers across treatment. Results: The results illustrate possible pathways to healthy change over the course of four months of treatment, and possible pathways of the absence of change. Conclusions: These results are discussed with regard to three main research perspectives: the combination of qualitative and quantitative methodology in psychotherapy research may be applied to case study research, a neurobehavioral perspective on change may incorporate the individualized experience in the laboratory and therapist responsiveness to patient characteristics may be a core feature of fostering change. Clinical or methodological significance of this article: The present paper illustrates individual pathways to change in personality disorders. 
It illustrates how coping capacities influence the process of psychotherapy and outcome in personality disorders. It demonstrates the relevance of individualizing treatments for personality disorders. It demonstrates several integrative features of psychotherapy research, in particular the use of neurobehavioral paradigms and the integration of single-case research within randomized controlled trials. abstract_id: PUBMED:2592964 Personality and response to tricyclic antidepressants in depressed patients. Several studies have shown that disturbances of personality are poor predictors of response to antidepressants. None of these studies, however, has used personality measures similar to the DSM-III. We evaluated the relationship between antidepressant response and personality scores obtained on the Millon Clinical Multiaxial Inventory, which provides personality measures congruent with DSM-III. Personality features such as assertiveness, independence, and competitiveness distinguished responders from nonresponders. However, the frequency of personality disorder diagnoses did not differ between responders and nonresponders. abstract_id: PUBMED:31314688 Personality Change in Middle Adulthood: With Focus on Differential Susceptibility. Little attention has been paid to middle adulthood in research on personality stability and change. In addition, previous research on individual differences in personality change has not fully explained its variability. This study focused on the differential susceptibility model, which suggests that individual susceptibility interacts with environmental factors and produces variability in outcomes, and investigated individual differences in personality change with a middle adult sample. A total of 1051 Japanese middle adults (M = 41.61 years; SD = 5.31; range 30-50 years; 534 females) participated in this two-wave short-term longitudinal study. Latent change score model analyses revealed substantial mean-level declines in Agreeableness and Honesty-Humility. Moreover, the results showed that the influences of some life events on personality change are moderated for better and for worse by individual susceptibility to one's environment. These findings suggest that the trends of personality development may differ between Western and non-Western countries and that differential susceptibility model may play an important role in deriving individual differences in personality stability and change. abstract_id: PUBMED:27716921 Life Events and Personality Trait Change. Objective: Theory and research have emphasized the impact of life events on personality trait change. In this article, we review prospective research on personality trait change in response to nine major life events in the broader domains of love and work. Method: We expected to find that life events lead to personality trait change to the extent that they have a lasting influence on individuals' thoughts, feelings, and behavior. Moreover, we predicted that love-related life events such as marriage or parenthood would be more strongly related to changes in traits that emphasize affective content, whereas work-related life events would be more likely to lead to change in traits that reflect behavioral or cognitive content. Results: The current state of research provided some evidence that life events can lead to changes in personality traits and that different life events may be differently related to specific trait domains. 
Conclusions: A more general conclusion emerging from this review is that the evidence for the nature, shape, and timing of personality trait change in response to life events is still preliminary. We discuss the implications of the results for theory and research and provide directions for future studies on life events and personality trait change. abstract_id: PUBMED:33558417 Changing personality traits with the help of a digital personality change intervention. Personality traits predict important life outcomes, such as success in love and work life, well-being, health, and longevity. Given these positive relations to important outcomes, economists, policy makers, and scientists have proposed intervening to change personality traits to promote positive life outcomes. However, nonclinical interventions to change personality traits are lacking so far in large-scale naturalistic populations. This study (n = 1,523) examined the effects of a 3-mo digital personality change intervention using a randomized controlled trial and the smartphone application PEACH (PErsonality coACH). Participants who received the intervention showed greater self-reported changes compared to participants in the waitlist control group who had to wait 1 mo before receiving the intervention. Self-reported changes aligned with intended goals for change and were significant for those desiring to increase on a trait (d = 0.52) and for those desiring to decrease on a trait (d = -0.58). Observers such as friends, family members, or intimate partners also detected significant personality changes in the desired direction for those desiring to increase on a trait (d = 0.35). Observer-reported changes for those desiring to decrease on a trait were not significant (d = -0.22). Moreover, self- and observer-reported changes persisted until 3 mo after the end of the intervention. This work provides the strongest evidence to date that normal personality traits can be changed through intervention in nonclinical samples. abstract_id: PUBMED:36221989 Actual and volitional personality change across study abroad. Objectives: According to personality development theories, the dramatic environmental transition of study abroad may form a crucible for personality change. Location, social roles, and cultural familiarity suddenly shift, potentially disrupting old habits and creating new ones, building upon the typical maturation occurring during college age. The current study poses questions about selection and socialization effects of study abroad on personality, actual and volitional change in personality, and whether adjustment to study abroad catalyzes change. Method: Longitudinal studies were conducted with Japanese students studying for one year in the USA (N = 300), and a comparison sample of students in an English-language program at their university in Japan (N = 108). Big Five personality traits and trait-relevant behavior were assessed at the beginning and end of the programs, along with three types of volitional change: expectations, perceptions, and desires. Results: Study abroad showed selection effects for higher Extraversion and Emotional Stability traits and developmental and socialization effects of increases in Openness behavior. Expected and perceived change corresponded with actual change (but desired change did not), and cultural adjustment predicted socially desirable trait-relevant behavior before students' return home. 
Conclusions: Study abroad was revealed as an environment wherein students both subjectively experienced and actually demonstrated changes in trait-relevant behavior. abstract_id: PUBMED:10696819 Personality change in adulthood: loci of change and the role of interpersonal process. This study examined personality change in two domains--dispositional tendencies (emotion traits) and characteristic adaptations (views of self) over eight years in a sample of older adults (M = 63.4). Stability coefficients for anxiety, depression, interest, anger, anger-in, anger-out, and aggression ranged from .47 to .75; only anger-out showed significant change over the eight years. On the other hand, respondents reported moderate changes in perspectives, goals, personality, feelings, and ways of relating and the ratings of outside informants were significantly correlated with self-reports of change for all but goals. Personality change was associated with positive and negative interpersonal life events of an intimate nature such as marriage, divorce, and death of loved ones that took place over the past eight years, and was not associated with other high and low points in lives involving careers, changes in residence, and more distant social relationships. abstract_id: PUBMED:28923391 Stability and change in personality and personality disorders. In this paper, we review recent findings related to stability and change in personality and personality disorder. Estimates of stability vary depending on a number of methodological and substantive factors. These factors include the type of stability being examined, the type of construct being assessed, the method being used to assess personality, how participants are sampled, and developmental trends in personality stability and change. We describe current evidence about personality stability with respect to each of these factors. We conclude that current gaps in the literature can be filled by more carefully attending to factors that impact estimates of stability and change, and provide recommendations about how future research can fill those gaps. abstract_id: PUBMED:4078758 Influence of personality on response to monoamine oxidase inhibitors and tricyclic antidepressants. Fifty patients who had previously been treated with both a monoamine oxidase inhibitor and a tricyclic antidepressant at different times were rated using an interview schedule to assess abnormal personality. Thirty-four per cent of the sample were found to have an abnormal personality with anankastic (obsessional) personality being the most frequent. Those who failed to respond to either treatment were significantly more likely to have an abnormal personality than those who did respond. There was some evidence to support the view that patients who improved on monoamine oxidase inhibitors showed fewer abnormal personality traits, particularly for sensitivity, pessimism and hypochrondriasis, than those who improved on tricyclic antidepressants. Answer: The question of whether antidepressants change personality has been explored in various studies, with mixed findings. According to a five-year observational study (PUBMED:22842025), there was no significant covariation between changes in antidepressant pharmacotherapy and changes in neuroticism or extraversion scores in patients with major depressive disorder. 
Instead, changes in these personality dimensions were associated with changes in depressive symptoms, suggesting that if antidepressants influence personality dimensions, the effect size is likely smaller than that of the disorders for which they are prescribed. Other research has focused on the broader context of personality change, including the impact of life events and psychotherapy on personality traits (PUBMED:27716921, PUBMED:28946789). For instance, life events have been shown to lead to personality trait change, particularly when they have a lasting influence on individuals' thoughts, feelings, and behavior (PUBMED:27716921). Psychotherapy for personality disorders has also been associated with change, with emotional processing and socio-cognitive processing being key mechanisms (PUBMED:28946789). In terms of antidepressant response, one study found that certain personality features, such as assertiveness, independence, and competitiveness, distinguished responders from nonresponders to tricyclic antidepressants, but the frequency of personality disorder diagnoses did not differ between the two groups (PUBMED:2592964). Another study suggested that patients who improved on monoamine oxidase inhibitors showed fewer abnormal personality traits than those who improved on tricyclic antidepressants (PUBMED:4078758). Overall, while there is evidence that personality can change in response to various factors, including life events and psychotherapy, the direct impact of antidepressants on personality traits appears to be less clear or significant. It is important to consider that personality is a complex and multifaceted construct, and changes may be influenced by a combination of factors, including but not limited to pharmacotherapy (PUBMED:22842025, PUBMED:27716921, PUBMED:28946789, PUBMED:2592964, PUBMED:4078758).
Instruction: Is duplex ultrasonography useful for the diagnosis of giant-cell arteritis? Abstracts: abstract_id: PUBMED:12186513 Is duplex ultrasonography useful for the diagnosis of giant-cell arteritis?. Background: Evidence of a dark halo on ultrasonography has been considered a specific sign of giant-cell arteritis and may replace temporal artery biopsy for the diagnosis of giant-cell arteritis in patients with typical clinical manifestations. Objective: To assess the usefulness of temporal artery duplex ultrasonography and to compare this mode of ultrasonography with physical examination of temporal arteries for the diagnosis of giant-cell arteritis in patients with suspected giant-cell arteritis or polymyalgia rheumatica. Design: Diagnostic test study. Setting: Several divisions of Reggio Emilia Hospital, Reggio Emilia, Italy. Patients: 86 consecutive patients with a suspected diagnosis of giant-cell arteritis or polymyalgia rheumatica identified over a 22-month period. Measurements: The temporal arteries were examined in all 86 patients. Duplex ultrasonography of the temporal arteries was then performed by two ultrasonographers who were unaware of the clinical diagnosis. Before corticosteroid therapy was started, temporal artery biopsies were performed in all patients at the site targeted by the ultrasonographer. Results: A hypoechoic halo around the lumen of the temporal arteries had a sensitivity of only 40% (95% CI, 16% to 68%) and a specificity of 79% (CI, 68% to 88%) for the diagnosis of biopsy-proven giant-cell arteritis. The negative likelihood ratio was 0.8 (CI, 0.5 to 1.2), and the positive likelihood ratio was 1.9 (CI, 0.9 to 4.1). When the thickness of the halo was at least 1 mm, specificity increased to 93% (CI, 84% to 98%) and the positive likelihood ratio increased to 5.7 (CI, 2.0 to 16.2); however, sensitivity remained low at 40% (CI, 16% to 68%). On physical examination, temporal artery abnormalities had a higher sensitivity of 67% (CI, 38% to 88%), a higher specificity of 99% (CI, 92% to 100%), and a higher positive likelihood ratio of 47.3 (CI, 6.5 to 342.4) than did ultrasonographic findings. None of the patients with giant-cell arteritis had a normal temporal artery inspection and a hypoechoic halo on ultrasonography. Conclusion: Evidence on ultrasonography of a halo around temporal arteries, either any halo or a halo 1 mm or greater in thickness, only modestly increased the probability of biopsy-proven giant-cell arteritis but did not improve the diagnostic accuracy of a careful physical examination. abstract_id: PUBMED:9358127 Color duplex ultrasonography in the diagnosis of temporal arteritis. Background: The diagnosis of temporal arteritis usually requires a biopsy of the temporal artery. We examined the usefulness of color duplex ultrasonography in patients suspected of having temporal arteritis. Methods: In this prospective study, all patients seen in the departments of rheumatology and ophthalmology from January 1994 to October 1996 who had clinically suspected active temporal arteritis or polymyalgia rheumatica were examined by duplex ultrasonography. The final diagnoses, made according to standard criteria, were temporal arteritis in 30 patients, 21 with biopsy-confirmed disease; polymyalgia rheumatica in 37; and negative histologic findings and a diagnosis other than temporal arteritis or polymyalgia rheumatica in 15. We also studied 30 control patients matched for age and sex to the patients with arteritis. 
Two ultrasound studies were performed and read before the biopsies; one ultrasonographer was unaware of the clinical information. Results: In 22 (73 percent) of the 30 patients with temporal arteritis, ultrasonography showed a dark halo around the lumen of the temporal arteries. The halos disappeared after a mean of 16 days (range, 7 to 56) of treatment with corticosteroids. Twenty-four patients (80 percent) had stenoses or occlusions of temporal-artery segments, and 28 patients (93 percent) had stenoses, occlusions, or a halo. No halos were identified in the 82 patients without temporal arteritis; 6 (7 percent) had stenoses or occlusions. For each of the three types of abnormalities identified by ultrasonography, the interrater agreement was > or =95 percent. Conclusions: There are characteristic signs of temporal arteritis that can be visualized by color duplex ultrasonography. The most specific sign is a dark halo, which may be due to edema of the artery wall. In patients with typical clinical signs and a halo on ultrasonography, it may be possible to make a diagnosis of temporal arteritis and begin treatment without performing a temporal-artery biopsy. abstract_id: PUBMED:10655758 The significance of color duplex ultrasonography for the diagnosis of temporal arteritis We examined the usefulness of color duplex ultrasonography in patients suspected of having temporal arteritis. Five patients, who were all aged 70 or older, developed a new onset of localized headache with temporal artery abnormalities, and had an elevated erythrocyte sedimentation rate of > 100 mm/hour. The final diagnoses were temporal arteritis in three patients, polymyalgia rheumatica in one, and probable healed temporal arteritis in one. Color duplex ultrasonography showed stenoses, which were confirmed histologically as well, in the superficial temporal artery of all patients. The characteristic findings of active temporal arteritis were, however, demonstrated in only three biopsy specimens, and in the remaining two the stenoses were thought to be related to previous arteritis. The hypoechoic halo, which has been reported to be a characteristic finding of color duplex ultrasonography in active temporal arteritis, was detected in only one patient with active temporal arteritis and another one with probable healed temporal arteritis. No stenoses were demonstrated in the superficial temporal arteries of 30 control subjects (20 with at least one risk factor of atherosclerosis and 10 without it). Color duplex ultrasonography can therefore be considered a powerful method for detecting stenoses in the superficial temporal artery. Its ability to identify their etiology is, however, unsatisfactory, so that temporal artery biopsy remains undoubtedly the most reliable test for etiological evaluation. We thus recommend color duplex ultrasonography as a supplementary method for the diagnosis of temporal arteritis, because it can provide useful information concerning the appropriate site of temporal artery biopsy. abstract_id: PUBMED:28352206 Color duplex ultrasonography findings of temporal arteries in a case of giant cell arteritis: role in diagnosis and follow-up. Giant cell arteritis (GCA) is a systemic autoimmune disease that affects medium- and large-sized arteries. The diagnostic gold standard is the temporal artery biopsy, but it has limited sensitivity and some difficulties in reproducibility. 
Color duplex ultrasonography is a noninvasive, reproducible, and inexpensive method for diagnosis of temporal arteries involvement (temporal arteritis [TA]) in GCA with high sensitivity and specificity. We present the ultrasound findings at baseline and during follow-up in a case of TA in a patient with GCA. abstract_id: PUBMED:7790746 Doppler ultrasonography in giant cell arteritis. A patient with biopsy negative giant cell arteritis was examined and treated by our group. Since in some cases the visual loss in this disease is due to a central retinal artery occlusion, a technique to evaluate the blood flow velocity in this artery would be useful. The Duplex scanner was utilized by us in this patient to study the blood flow velocity of the central retinal artery. The flow velocity was reduced. The patient was treated with systemic steroids which lead to clinical and symptomatic improvement. After the treatment with steroids the Duplex scanner was again used to study the flow velocity of the central retinal artery. The blood flow velocity improved. We think that Doppler ultrasonography may be useful in the diagnosis of and in monitoring the treatment of some cases of giant cell arteritis. abstract_id: PUBMED:17203007 Technology Insight: the role of color and power Doppler ultrasonography in rheumatology. An increasing number of rheumatologists have access to ultrasound equipment that provide both color and power Doppler modes, which can be used to investigate musculoskeletal and vascular pathologies. Musculoskeletal Doppler ultrasonography can be used to estimate levels of inflammation, to document the anti-inflammatory effect of agents such as corticosteroids and tumor necrosis factor inhibitors, to differentiate between inflammatory and degenerative disease, and to distinguish between normal and inflamed joints in cases of minor synovial swelling. Vascular Doppler ultrasonography can be used to determine organ involvement in small-vessel vasculitides, to delineate aneurysms in vasculitides of medium-sized arteries, and to assess the characteristic findings in large-vessel vasculitis. Numerous studies, including a meta-analysis, have been published on the use of temporal-artery ultrasonography for the diagnosis of giant cell arteritis. Duplex ultrasonography is a sensitive approach for detecting characteristic edematous wall swellings in active temporal arteritis and for assessing vasculitis of the axillary arteries (large-vessel giant cell arteritis) in patients with suspected temporal arteritis, polymyalgia rheumatica, or fever of unknown origin. Duplex ultrasonography can also be used to assess vasculitis of subclavian and carotid arteries in younger patients with Takayasu's arteritis and acute finger artery occlusions in patients with small-vessel vasculitides. abstract_id: PUBMED:10665744 Color duplex ultrasonography in large-vessel giant cell arteritis. In a patient with active extracranial giant cell arteritis, duplex ultrasonography demonstrated hypoechoic mural thickening of the brachial, axillary, subclavian, and carotid arteries with bilateral subtotal occlusions of the brachial and axillary arteries. The ultrasound image of the artery walls became midechoic within 8 weeks, and hyperechoic within one year after start of treatment with corticosteroids. A similar hypoechoic mural thickening of the temporal arteries has been recently described in active giant cell arteritis. The dark ultrasound image is due to an edema of the vessel wall in the acute stage. 
The brighter ultrasound image might be due to fibrosis in the chronic stage of the disease. abstract_id: PUBMED:12578748 Comparison between color duplex ultrasonography and histology of the temporal artery in cranial arteritis (giant cell arteritis). Background: Duplex sonography of the temporal artery may be helpful in the diagnosis of cranial arteritis. Patients And Methods: The superficial temporal arteries of 36 patients with cranial arteritis or suspected arteritis were examined using both duplex ultrasonography (US) and biopsy. The data of these patients were divided into two groups. Group A consisted of 24 patients (66.7%) with definite positive results using duplex (US) and Group B of 12 patients (33.3%) who showed a suspicious or negative ultrasonographic result. Result: In all patients of Group A, the histological findings corresponded with the ultrasonographic changes in the inflamed artery. - The characteristic ultrasonographic sign was a dark halo around the lumen of the temporal arteries. There was a high correlation between a bilateral halo found by US with an ocular involvement. Ten out of 14 patients with a bilateral halo (71.4%) showed a distinct involvement of the optic nerve or retina. - The characteristic histological signs were infiltration of the vessel wall by inflammatory cells, mainly lymphocytes. Group B: The biopsies of the superficial temporal arteries were positive in 8 patients (66.7 %), negative in 4 other patients (33.3%). Conclusion: Patients with a distinct halo, demonstrated by US, also showed corresponding pronounced inflammatory cell infiltration of the vessel wall. Patients with no ultrasonographic changes presented histological signs of initial inflammation such as isolated inflammatory cells around the vasa vasorum and/or in the adventitial layer. abstract_id: PUBMED:15498914 The role of color duplex sonography in the diagnosis of giant cell arteritis. Objective: To determine the clinical usefulness of color duplex sonography in the diagnosis of giant cell arteritis as an alternative to temporal artery biopsy. Methods: From May 1998 to November 2002, 68 consecutive patients seen in our hospital with a clinical suggestion of active temporal arteritis were included. Forty-eight patients were female and 20 were male, with a mean age of 77 years. Color duplex sonography with a linear array transducer (5-10 MHz) was used to assess temporal artery morphologic characteristics before a biopsy was performed. The main sonographic criterion for a positive diagnosis was visualization of a hypoechoic halo around the temporal artery. These data were compared with pathologic findings. The kappa statistic was used to determine the level of agreement. Sensitivity, specificity, positive and negative predictive values, and accuracy of duplex sonography as a diagnostic test were assessed. Results: The color duplex sonographic findings were positive in 25 of 68 patients with a clinical suggestion of giant cell arteritis. The diagnosis was confirmed by biopsy in 22 patients; there were 4 false-positive results and 1 false-negative result by duplex sonography. The kappa value was 0.84. Sensitivity, specificity, positive and negative predictive values, and accuracy for duplex sonography were 95.4%, 91.3%, 84%, 97.6%, and 92.6%, respectively. Conclusions: The use of high-resolution color duplex sonography may replace biopsy in the diagnosis of giant cell arteritis. 
abstract_id: PUBMED:29967570 The Utility of Color Duplex Ultrasonography in the Diagnosis of Giant Cell Arteritis: A Prospective, Masked Study. (An American Ophthalmological Society Thesis). Purpose: To evaluate the diagnostic yield and concordance of color duplex ultrasound (CDU) of the superficial temporal artery (STA), temporal artery biopsy (TAB), and American College of Rheumatology (ACR) criteria in the diagnosis of giant cell arteritis (GCA). Methods: Prospective, masked study of all patients evaluated in one institution suspected of having GCA. All patients with a suspected diagnosis of GCA were admitted for pulsed intravenous corticosteroids. Patients underwent serologic work-up and ACR criteria were documented. All patients had a CDU and TAB performed within 3 days of initiation of systemic corticosteroid therapy. Main outcome measure: Concordance of CDU and TAB. Secondary outcome measures: Concordance between unilateral and bilateral CDU and TAB by side and segment, concordance between TAB and ACR criteria, and statistical analysis of serologic markers for GCA. Results: The diagnosis of biopsy-proven GCA was found in 14 of 71 (19.7%) patients. The sensitivity of CDU compared to the reference standard of TAB ranged between 5.1% and 30.8% depending on the signs studied on CDU and correlation of specific TAB parameters. Of the serologic studies, a platelet count threshold of 400,000μL had the highest positive (18.32) and lowest negative (0.37) likelihood ratios for a diagnosis of GCA. Conclusions: In this study, CDU showed minimal value in diagnosing GCA compared to TAB. There was poor correlation between CDU results and ACR criteria for GCA. The threshold platelet count had higher positive and negative predictive values for GCA than CDU and is a useful serologic marker for GCA. Answer: Duplex ultrasonography has been investigated as a diagnostic tool for giant cell arteritis (GCA), with varying results reported in the literature. Some studies have identified characteristic signs of temporal arteritis that can be visualized by color duplex ultrasonography, such as a dark halo around the lumen of the temporal arteries, which may be due to edema of the artery wall (PUBMED:9358127). This halo sign has been considered specific for GCA and has been suggested as a potential replacement for temporal artery biopsy in patients with typical clinical manifestations (PUBMED:9358127; PUBMED:28352206). However, the sensitivity and specificity of duplex ultrasonography for diagnosing GCA have been questioned. One study found that the presence of a hypoechoic halo around the temporal arteries had a sensitivity of only 40% and a specificity of 79% for the diagnosis of biopsy-proven GCA (PUBMED:12186513). When the thickness of the halo was at least 1 mm, specificity increased to 93%, but sensitivity remained low (PUBMED:12186513). Physical examination of temporal artery abnormalities showed higher sensitivity and specificity than ultrasonographic findings (PUBMED:12186513). Another study reported that color duplex sonography had a sensitivity of 95.4%, specificity of 91.3%, and accuracy of 92.6% for the diagnosis of GCA, suggesting that it could potentially replace biopsy (PUBMED:15498914). However, a prospective, masked study found that the sensitivity of color duplex ultrasound (CDU) compared to temporal artery biopsy (TAB) ranged between 5.1% and 30.8%, indicating minimal value in diagnosing GCA compared to TAB (PUBMED:29967570). 
In summary, while duplex ultrasonography can detect characteristic signs of GCA and may be useful as a noninvasive and reproducible method, its diagnostic accuracy is not consistently superior to temporal artery biopsy. The utility of duplex ultrasonography may be more as a supplementary method to guide biopsy rather than as a standalone diagnostic tool (PUBMED:10655758; PUBMED:17203007). Therefore, temporal artery biopsy remains the most reliable test for the etiological evaluation of GCA (PUBMED:10655758).
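To make the accuracy figures quoted above easier to compare, the following sketch recomputes them from a 2x2 table. The counts are inferred from the results reported in PUBMED:15498914 (25 sonography-positive patients, 4 false positives, 1 false negative, 68 patients overall) and are an assumption for illustration; the likelihood-ratio helper simply applies the standard formulas to the sensitivity and specificity values cited from PUBMED:12186513.

# Sketch only: 2x2 counts inferred from PUBMED:15498914, not taken from the paper's dataset.
tp, fp, fn, tn = 21, 4, 1, 42  # true positives, false positives, false negatives, true negatives

sensitivity = tp / (tp + fn)                  # ~0.954
specificity = tn / (tn + fp)                  # ~0.913
ppv = tp / (tp + fp)                          # ~0.84
npv = tn / (tn + fn)                          # ~0.977
accuracy = (tp + tn) / (tp + fp + fn + tn)    # ~0.926

def likelihood_ratios(sens, spec):
    # positive and negative likelihood ratios from sensitivity and specificity
    return sens / (1 - spec), (1 - sens) / spec

print(sensitivity, specificity, ppv, npv, accuracy)
print(likelihood_ratios(0.40, 0.79))  # ~(1.9, 0.76): any halo, PUBMED:12186513
print(likelihood_ratios(0.40, 0.93))  # LR+ ~5.7: halo of at least 1 mm, PUBMED:12186513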
Instruction: Seasonal variation in plasma levels of lithium in the Indian population: is there a need to modify the dose? Abstracts: abstract_id: PUBMED:18830549 Seasonal variation in plasma levels of lithium in the Indian population: is there a need to modify the dose? Introduction: Lithium still remains an important choice in the therapy of manic-depressive psychosis (MDP), and though there are reports of seasonal variation in lithium levels from a few countries, such studies have not been conducted in India. Variability in the lithium level can lead to lack of efficacy or toxicity, making seasonal variation clinically relevant. Methods: A retrospective case sheet audit was performed for 101 MDP patients, recording plasma lithium level, oral lithium dose, age and gender for one year. The overall average oral lithium dose and level were recorded; the monthly average to which it most closely matched was noted as the control month, and values of other months were compared with this control month by Friedman's test followed by Dunn's test. Results: The mean age of patients was 38.22 (standard deviation 12.07) years, and 72 out of 101 patients were male. The mean lithium dose in November (938.61 +/- 243.40 mg/day), which was the closest to the overall mean dose (938.24 +/- 241.78 mg/day), was taken as the control month; when compared with the other monthly values, it did not show any significant difference. The June (0.54 +/- 0.23 meq/L), July (0.55 +/- 0.24 meq/L) and August (0.55 +/- 0.24 meq/L) mean plasma lithium values were significantly higher than the October value (0.45 +/- 0.22 meq/L), which served as the control. High-low variability between the plasma lithium values of different months was found to be 25 percent. Conclusion: The present study showed significantly high variability of lithium levels across the months of the year; frequent plasma level monitoring and oral lithium dose adjustment are therefore warranted to prevent toxicity and lack of efficacy in MDP. abstract_id: PUBMED:12140117 Seasonal variations of lithium plasma levels. The aim of this study was to investigate the seasonal time course of lithium blood levels. We analyzed lithium plasma and red blood cell (RBC) levels in 186 subjects affected by bipolar (n=134) and major depressive (n=52) disorder, with stable oral dosage, followed in our lithium clinic for an average of 36 months. We observed a significant elevation of lithium plasma levels in summer with a more marked variation among early-onset subjects, bipolar subtype, and females. Lithium levels in plasma peaked in summer, and levels in RBC showed a trend in the same direction. Possible stratification factors such as presence of affected relatives or psychotic features did not significantly influence results. In conclusion, we observed a significant variation of lithium plasma levels according to seasons. If confirmed, this finding could have both clinical and research implications. abstract_id: PUBMED:848339 Lithium side effects in relation to dose and to levels and gradients of lithium in plasma. The relation between lithium dose, lithium concentrations, and lithium gradients in plasma and the side effects tremor, nausea, abdominal pains, and loose bowels was studied in 19 subjects. Rapidly dissolving lithium carbonate tablets were used. Tremor was related to higher doses, to higher concentrations, and to higher gradients of lithium in plasma. Nausea was related to higher gradients.
Abdominal pain and loose bowels showed no relation to doses, levels or gradients. Concentrations of lithium were dose-dependent, while gradients were relatively independent of dose. abstract_id: PUBMED:35177146 Plus ça change? Switching lithium preparations. Aims And Method: A supply disruption alert in 2020, now rescinded, notified UK prescribers of the planned discontinuation of Priadel® (lithium carbonate) tablets. This service evaluation explored lithium dose and plasma levels before and after the switching of lithium brands, in order to determine the interchangeability of different brands of lithium from a pharmacokinetic perspective. Results: Data on the treatment of 37 patients switched from Priadel® tablets were analysed. Switching to Camcolit® controlled-release tablets at the same dose did not result in meaningful differences in plasma lithium levels. Dose adjustment and known or suspected poor medication adherence were associated with greater variability in plasma lithium levels on switching brands. Clinical Implications: For comparable pre- and post-switch doses in adherent patients, the most common brands of lithium carbonate appear to produce similar plasma lithium levels. British National Formulary guidance relating to switching lithium brands may be unnecessarily complex. abstract_id: PUBMED:41412 Lithium effects: relation to lithium dose and to plasma peak levels. In a 24-hour study, plasma peak lithium was determined in manic-melancholic patients who routinely had their entire lithium dose at night. A correlation analysis was undertaken of the relation of plasma peak level and the dose of lithium to a number of lithium-induced changes: increase in urine volume, weight gain, decrease in plasma phosphate, increase in plasma magnesium, decrease in plasma urea, increase in plasma alkaline phosphatase, increase in urinary pH. Only the changes in plasma phosphate and in urine pH were significantly correlated to the peak value of plasma lithium. The increase in urine volume was significantly correlated to the dose of lithium. abstract_id: PUBMED:32824874 Plasma Lithium Levels in a General Population: A Cross-Sectional Analysis of Metabolic and Dietary Correlates. Initial evidence suggests that lithium might affect life expectancy and the risk for different disease conditions, but most studies were conducted in patients on lithium medication. Little is known about the association of blood lithium levels within the physiological range with cardiometabolic risk factors and diet. We measured plasma lithium in a community-based sample from Northern Germany (samples taken between 2010 and 2012). All participants (aged 25-82 years) underwent standardized examinations and completed a semi-quantitative food frequency questionnaire. Of several variables tested, the estimated glomerular filtration rate (eGFR) was statistically significantly (inversely) associated with lithium levels, mainly in individuals with slightly impaired renal function (eGFR < 75 mL/min/1.73 m2). Besides, lithium levels were positively associated with age and alcohol intake. Using reduced rank regression, we identified a dietary pattern explaining 8.63% variation in plasma lithium levels. Higher lithium levels were associated with higher intakes of potatoes, leafy vegetables, root vegetables, fruits, tea, beer, wine and dietetic products and lower intakes of pasta, rice, pork, chocolate, sweets, soft drinks, other alcoholic beverages, sauces and snacks.
Our observations suggest that plasma lithium levels are associated inversely with kidney function, particularly in individuals with slightly impaired renal function, and positively with age and alcohol intake. Lithium at physiological levels was moderately related to an exploratory dietary pattern. abstract_id: PUBMED:3106739 Effects of lithium on the pituitary-gonadal axis in the rat: evidence for dose-dependent changes in plasma gonadotropin and testosterone levels. The purpose of the present study was to examine the effects of lithium, a drug which is now used rather widely in the treatment of acute mania and the prophylaxis of manic-depressive bipolar disorders, on the pituitary-gonadal function in the laboratory rat. Sexually adult male rats, maintained under standardized laboratory conditions (LD 14:10; lights on at 06:00 h, CST), were injected (ip) with lithium chloride both acutely for 1 day and chronically for 5 days, and by utilizing a low and high dose. For the low dose, lithium was injected twice daily (at 10:00 and 15:00 h) at 2.5 meq/Kg for 1 and 5 days, whereas in the high dose groups, also receiving lithium twice daily and at the same hours, the dosages were 5 meq/Kg for 1 day and 3.5 meq/Kg for 5 days. Animals were sacrificed 4 hours after the last lithium (or saline) injections. Plasma and pituitary levels of luteinizing hormone (LH) and follicle stimulating hormone (FSH), and plasma levels of testosterone (T) were measured by radioimmunoassay (RIA). The administration of the low dose led to a significantly higher (P less than 0.001) plasma FSH, but unaltered plasma LH, levels after 5 days. In contrast, the high dose lithium led to significant suppressions of plasma LH (P less than 0.02; on day 5) and FSH (P less than 0.001; on both day 1 and 5) levels. The levels of plasma T also showed a significant reduction following the low dose (P less than 0.02; on day 5), as well as the high dose lithium treatment, as evident after both 1 (P less than 0.02) and 5 (P less than 0.02) days. Regardless of the dosage, or the duration of treatment, pituitary gonadotropin levels remained unaltered following lithium. The results of our present experiments suggest that lithium administration, either acutely or on a chronic basis, might be associated with significant adverse effects on the pituitary-testicular axis. Furthermore, since some of the hormonal changes were evident when plasma lithium concentration was within the therapeutic range, our data may have potential clinical implications. abstract_id: PUBMED:521243 Relationship between plasma, RBC, and CSF lithium concentrations in human subjects. Simultaneous measurement of plasma, RBC, and CSF lithium concentrations took place with 17 inpatients chronically treated with lithium, at various times after the last lithium dose. RBC lithium levels were significantly higher than CSF lithium levels. Specimens drawn 10 or more hours after the last dose showed higher RBC and CSF lithium and lower plasma lithium than specimens drawn 4 or fewer hours after the last lithium dose. None of the lithium measurements differentiated manic-depressives from schizophrenics or schizoaffectives. Plasma, RBC, and CSF lithium all intercorrelated highly and equally.
abstract_id: PUBMED:6099263 Influence of lithium on magnesium levels in the plasma, erythrocytes and brain of rats. Intraperitoneal administration of LiCl to rats, either as a single dose or daily for one week, shows that brain Li correlates better with erythrocyte Li than with plasma Li. With a single dose of Li, the variation in Li level did not produce significant variations in Mg. With repeated administrations of Li (close to those used in actual therapeutic treatment), we show (44 pairs of results, p less than 0.001) that the increase of Li in erythrocytes is correlated with a decrease of Mg in erythrocytes and an increase of this ion in plasma. The authors demonstrate that nerve cells do not release Mg when Li enters the brain. abstract_id: PUBMED:31782314 Seasonal and temperature effect on serum lithium concentrations. Background: Lithium remains the gold standard treatment for bipolar disorder. However, it has a very narrow therapeutic index (0.6-0.8 mmol/L). It has been suggested that high environmental temperature can lead to dehydration, elevated plasma lithium concentration and then lithium toxicity. Objectives: We aimed to investigate the effect of seasonal and short-term changes in temperature on serum lithium concentrations in Sydney, Australia. Methods: We retrospectively analysed data from all patients who had serum lithium concentrations taken from the Prince of Wales and Sutherland Hospitals between 2008 and 2018. Temperature data came from the Bureau of Meteorology. We examined correlations between lithium concentrations and the preceding 5 days' maximum temperatures, month and season. We also performed a longitudinal analysis of the effect of temperature and seasons within selected patients who had repeated levels. Results: A total of 11,912 serum lithium concentrations from 2493 patients were analysed. There was no significant association between higher lithium concentration and preceding higher temperatures (r = -0.008, p = 0.399). There was also no important seasonal or monthly variation, across all patients or in the smaller cohort with longitudinal data (n = 123, r = 0.008, 95% confidence interval: [-0.04, 0.06]). Conclusion: There were no clinically important differences in serum lithium concentration related to seasons, months or temperatures, which suggests that patients on lithium are able to adequately maintain hydration during hot weather in Sydney. Answer: The study conducted on the Indian population indicated that there is a significant seasonal variation in plasma lithium levels, with higher values observed during the months of June, July, and August compared to October (PUBMED:18830549). This suggests that there may be a need to monitor plasma lithium levels more frequently during these months and adjust the oral lithium dose accordingly to prevent toxicity and maintain efficacy in the treatment of manic-depressive psychosis (MDP). The findings from the Indian study are consistent with another study that observed a significant elevation of lithium plasma levels in the summer, particularly among early-onset subjects, bipolar subtype, and females (PUBMED:12140117). This further supports the notion that seasonal changes can influence lithium plasma levels and may necessitate dose adjustments.
However, it is important to note that another study conducted in Sydney, Australia, found no significant association between higher lithium concentration and preceding higher temperatures, nor any important seasonal or monthly variation in serum lithium concentration (PUBMED:31782314). This suggests that the need for dose modification may vary depending on the geographical location and the ability of patients to maintain hydration during different seasons. In conclusion, the evidence from the Indian population does suggest that there is a need to consider seasonal variation when monitoring and adjusting lithium doses to ensure safe and effective treatment of MDP. However, the necessity and extent of dose modification may depend on local climate conditions and individual patient factors.
Instruction: Does knowing about an HPV infection influence behavior change? Abstracts: abstract_id: PUBMED:19064233 Does knowing about an HPV infection influence behavior change? A feasibility study of females attending a teen clinic. Objective: To test the feasibility of a brief, clinic-based, behavioral intervention designed to foster the adoption of three protective behaviors among adolescent females testing positive for any of 13 oncogenic strains of HPV. This feasibility study also included non-statistical comparisons of risk behaviors assessed at follow-up. Design: A non-randomized clinical trial. Participants: Twenty-eight sexually active, 17-23-year-old females were recruited from a hospital-based adolescent medicine clinic. Intervention: Adolescents testing positive for HPV received a physician-delivered intervention designed to emphasize the association of high-risk HPV with cervical cancer and to promote protective behaviors. Those testing negative for HPV did not receive intervention. Main Outcome Measure: Several indicators were used including sexual risk behaviors, intent for subsequent Pap testing, intent to quit smoking, and intent to be vaccinated against HPV. Results: Those testing positive (39.3%) received the intervention. Recruitment and retention rates were high. In comparing the two groups after a 30-day follow-up period, modest differences, favoring the intervention, were observed. At follow-up, teens testing positive reported lower levels of risk-taking behavior, greater intent to return for next Pap testing, greater intent to quit smoking (if applicable), and greater intent to be vaccinated against HPV. Conclusion: Overall, the findings suggest that a larger study is indeed feasible and may produce meaningful differences between groups. Although the source (i.e., testing positive vs. the receipt of intervention) of these differences cannot be determined, the findings suggest that further investigation of a testing/behavioral intervention regarding HPV and cervical cancer prevention for adolescent females may be warranted. abstract_id: PUBMED:36669967 Identifying intervention strategies to improve HPV vaccine decision-making using behaviour change theory. Although the HPV vaccine is highly safe and effective, its uptake is sub-optimal in many countries, including Ireland. There is therefore a need to identify appropriate interventions that will increase HPV vaccine acceptance by parents. In this study, we took a systematic approach to understand the factors that influence HPV vaccine uptake by parents of adolescent girls in Ireland to define suitable behaviour change interventions that would support positive vaccine decision-making in the future. Specifically, we conducted semi-structured interviews, using a Theoretical Domains Framework (TDF)-based topic guide, to gain insight into the knowledge, beliefs, attitudes and current behaviours of parents with respect to their HPV vaccine decision. Transcripts were analysed using the TDF. The Behaviour Change Wheel (BCW) was used to identify relevant intervention functions and the Behaviour Change Technique Taxonomy version 1 (BCTTv1) to identify relevant intervention techniques. All parents discussed the essential role of healthcare providers in vaccine decision-making. Complacency and confidence were important factors in decision-making by vaccine-hesitant parents. Five BCW intervention functions were identified as appropriate, namely education, persuasion, environmental restructuring, modelling and enablement.
To our knowledge, this is the first study to systematically evaluate HPV vaccine decision-making using behaviour change theory and identify suitable intervention strategies to promote positive vaccine decision-making using this approach. abstract_id: PUBMED:32272289 Young adults' human papillomavirus-related knowledge: source of medical information matters. Objectives: Few studies examine the influence that different sources of medical information have on human papillomavirus (HPV)-related knowledge. We examined the relationship between the primary source of medical information and knowledge about HPV in young adults aged 18-26 years. Study Design: This study used cross-sectional data from the Health Information National Trends Survey. Methods: Respondents (n = 404) self-reported their knowledge about HPV-related diseases and vaccinations and their sources of medical information. Sources of medical information included electronic/print media, family/friends, or a healthcare provider. Bivariate and multivariate analyses were used to examine the association between the source of information and HPV knowledge. Results: Fifty-six percent of respondents used electronic or print media as their primary source of medical information. A greater proportion of Hispanic (40.0%) and black (36.0%) respondents received medical information from their family/friends than white respondents (20.0%). Respondents who received medical information from family/friends had 4.34 (95% confidence interval [CI]: 2.14, 8.79), 4.06 (95% CI: 2.05, 8.04), and 2.35 (95% CI: 1.10, 5.04) times higher odds than those who received information from healthcare providers of not knowing that HPV causes cervical cancer, not knowing that HPV is a sexually transmitted infection, and not having heard about the HPV vaccine, respectively. Conclusion: Source of medical information was significantly associated with knowledge of HPV. Receiving medical information from family/friends negatively influenced young adults' HPV knowledge. These findings may guide future interventions to target peer and familial influence on medical decisions. abstract_id: PUBMED:20390987 Passport to promiscuity or lifesaver: press coverage of HPV vaccination and risky sexual behavior. A significant minority of parents are concerned about adolescents engaging in risky sexual behavior following human papillomavirus (HPV) vaccination. The way the HPV vaccine is reported in the media has the potential to influence public understanding and vaccination decisions. The present study examined the content of articles published between 2003 and 2008 in British national newspapers that addressed the issue of adolescents engaging in risky sexual behavior following HPV vaccination. We used mixed methods to analyze 92 articles in which the issue was mentioned. Qualitative framework analysis highlighted three main types of discussion: news stories proposing that adolescents will engage in risky sexual behavior following HPV vaccination, counterarguments insisting that adolescents will not engage in risky sexual behavior after HPV vaccination, and parents' views of the issue of risky sexual behavior. The results indicated that newspapers provide parents with broadly positive descriptive norms about vaccination; however, the issue that adolescents will engage in risky sexual behaviors following HPV vaccination is regularly discussed in the national press and has the potential to increase parents' concerns about vaccination.
abstract_id: PUBMED:29579499 Using narrative intervention for HPV vaccine behavior change among Khmer mothers and daughters: A pilot RCT to examine feasibility, acceptability, and preliminary effectiveness. Purpose: To develop a theory-guided culturally grounded narrative intervention to promote HPV vaccination behavior and examine the feasibility, acceptability, and preliminary effectiveness of the intervention among dyads of Cambodian American mothers and daughters. Method: The principles of community-based participatory research guided the development and evaluation and involved two phases: Phase 1: Development of storytelling narrative intervention videos which focused on a series of HPV vaccination-related messages and which integrated the narrative theory with the revised network episode model (rNEM); Phase 2: conducting the pilot RCT with 19 dyads of Khmer mothers and daughters aged from 14 to 17 years to examine the feasibility and acceptability of the study. Findings: Recruitment was completed in 7 months with an overall retention of 84%. The acceptability of the intervention was high, as reflected by the number of positive comments on the narrative video. Preliminary data indicate that vaccine uptake at one-month follow-up was the same (2 vs. 2) between intervention and control groups. However, daughters in the narrative intervention group reported higher intention to receive HPV vaccination within one month compared to the control group (4 vs. 1). Conclusion: All the procedures to inform a full RCT were examined, including identification of eligible participants, recruitment, randomization, intervention adherence, and short-term follow-up. The positive preliminary outcomes and feedback support the feasibility and potential effectiveness of the theory-guided narrative intervention. abstract_id: PUBMED:29976170 Changes in (risk) behavior and HPV knowledge among Dutch girls eligible for HPV vaccination: an observational cohort study. Background: Implementation of human papillomavirus (HPV) vaccination raised concerns that vaccination could lead to riskier sexual behavior. This study explored how possible differences in sexual behavior and HPV knowledge developed over time between HPV-vaccinated and unvaccinated girls. Methods: A random sample of 19,939 girls (16-17 year olds) eligible for the catch-up HPV vaccination campaign in the Netherlands was invited for a longitudinal study with questionnaires every 6 months over a two-year follow-up period. Possible differences over time between vaccinated and unvaccinated participants were studied using generalized estimating equations (GEE). Results: A total of 2989 girls participated in round one, of which 1574 participated (52.7%) in the final 5th round. Vaccinated girls were more likely to live in more urban areas (OR 1.28, 95%CI 1.10-1.47) and to use alcohol (OR 1.46, 95%CI 1.24-1.70) and contraceptives (OR 1.69, 95%CI 1.45-1.97). Vaccinated and unvaccinated girls showed comparable knowledge on HPV, HPV vaccination, and transmission. Vaccinated girls were more likely to be sexually active (OR 1.19, 95%CI 1.02-1.39), and this difference increased over time (OR for interaction 1.06, 95%CI 1.00-1.12). However, they had a slightly lower number of lifetime sexual partners (mean difference -0.20, 95%CI -0.41-0.00). Vaccinated girls were less likely to use a condom with a steady partner (aOR 0.71, 95%CI 0.56-0.89).
However, the difference between vaccinated and unvaccinated girls with regard to condom use with casual or steady partner(s) did not significantly change over time. Conclusion: Overall, we did not find indications that vaccination influenced sexual behavior in girls during 2 years of follow-up. The few differences found may be related to existing disparities in the socio-demographic characteristics of the young population, pointing to the importance of improving education with regard to safe sex practices. Our findings do not suggest that vaccination status is associated with changes in sexual risk behavior and thus it is unlikely that this might influence the effectiveness of the vaccination program. abstract_id: PUBMED:28770192 Test-Retest Reliability of Self-Reported Sexual Behavior History in Urbanized Nigerian Women. Background: Studies assessing risk of sexual behavior and disease are often plagued by questions about the reliability of self-reported sexual behavior. In this study, we evaluated the reliability of self-reported sexual history among urbanized women in a prospective study of cervical HPV infections in Nigeria. Methods: We examined test-retest reliability of sexual practices using questionnaires administered at study entry and at follow-up visits. We used the root mean squared approach to calculate within-person coefficient of variation (CVw) and calculated the intra-class correlation coefficient (ICC) using two-way, mixed-effects models for continuous variables and [Formula: see text] statistics for discrete variables. To evaluate the potential predictors of reliability, we used linear regression and log binomial regression models for the continuous and categorical variables, respectively. Results: We found that self-reported sexual history was generally reliable, with overall ICC ranging from 0.7 to 0.9; however, the reliability varied by nature of sexual behavior evaluated. Frequency reports of non-vaginal sex (agreement = 63.9%, 95% CI: 47.5-77.6%) were more reliable than those of vaginal sex (agreement = 59.1%, 95% CI: 55.2-62.8%). Reports of time-invariant behaviors were also more reliable than frequency reports. The CVw for age at sexual debut was 10.7 (95% CI: 10.6-10.7) compared with the CVw for lifetime number of vaginal sex partners, which was 35.2 (95% CI: 35.1-35.3). The test-retest interval was an important predictor of reliability of responses, with longer intervals resulting in increased inconsistency (average change in unreliability for each 1-month increase = 0.04, 95% CI = 0.07-0.38, p = 0.005). Conclusion: Our findings suggest that overall, the self-reported sexual history among urbanized Nigerian women is reliable. abstract_id: PUBMED:27281034 Exploring Chinese Women's Perception of Cervical Cancer Risk as It Impacts Screening Behavior: A Qualitative Study. Background: Chinese women rarely undergo regular cervical screening. Women's decisions about cervical screening may be influenced by their perception of the risk of cervical cancer. Therefore, understanding how women perceive the risk of cervical cancer, how personal risk factors are interpreted, and the influence of cultural issues on cervical screening behavior is important. Objective: The aim of this study is to understand cervical cancer risk perception and the role of personal risk factors as they influence screening behavior among Chinese women. Method: An exploratory qualitative research design was used.
We conducted semistructured, in-depth interviews of 27 women in Changsha, a medium-size city in Hunan province, China. Results: Participants identified that cervical cancer had serious consequences, but they distanced themselves psychologically from the disease because they felt that "cervical cancer is a shameful and deadly disease." Although women identified some of the risk factors for the disease, they had little specific knowledge of human papillomavirus infection, its association with cervical cancer, and the importance of cervical screening. Conclusion: This study contributes new knowledge to the understanding of cervical screening behavior within a specific social and cultural context. Better efforts should be made to educate Chinese women on the risk of cervical cancer and emphasize that effective cancer treatments are available and that there is a good chance of survival. Implications For Practice: Nurses working with Chinese women should ensure that the information they provide is culturally sensitive, particularly by acknowledging the normative beliefs of this population group. abstract_id: PUBMED:23930748 Human papillomavirus vaccine stages of change among male and female university students: ready or not? Objective: To examine gender differences in human papillomavirus (HPV) vaccine stages of change following the recommendations for permissive use of HPV vaccine in males. Participants: Students aged 18-26 attending a large, public, Midwest university in April 2010. Methods: Participants completed a self-administered, online questionnaire. HPV vaccine stage of change was assessed according to core constructs of the Transtheoretical Model of Behavior Change. Logistic regression was used to identify associations of HPV-related beliefs and attitudes with stage of change. Results: Although most (80.5%) of the 4,019 participants had at least contemplated HPV vaccination, more females had taken observable steps towards vaccination. Significant differences between genders in HPV-related beliefs and attitudes were observed, particularly perceived parental or perceived health care provider approval of HPV vaccination. Conclusions: University students generally agreed with the benefits of HPV vaccination, both for themselves and for society, and these attitudes were significantly associated with having at least contemplated vaccination. abstract_id: PUBMED:24520841 No difference in sexual behavior of adolescent girls following Human Papilloma Virus vaccination: a case study two districts in Uganda; Nakasongola and Luwero. Background: Vaccination against Human Papilloma Virus (HPV) before sexual debut has been recommended by WHO as a primary prevention strategy against cervical cancer. In Uganda, vaccination against HPV started as a demonstration project among young girls in Nakasongola; and Ibanda districts. Studies have suggested that vaccination against HPV could result in risky sexual behavior and increase the risk of early sexual debut.This study was done to compare the sexual behavior of HPV vaccinated and non vaccinated adolescent girls in two neighboring districts in Uganda; and to assess whether HPV vaccination had any influence on sexual behavior of vaccinated adolescent girls. Methods: This was an unmatched comparative study, which used both qualitative and quantitative study methods. It was carried out among 400 primary school girls aged 12 to 15 years in the districts of Nakasongola (vaccinated) and Luwero (non vaccinated). 
Quantitative data were collected using a questionnaire while qualitative data were obtained using focus group discussions and key informant interviews. The main outcome measure was the number of sexually active girls in each group. Results: Of the 400 girls, 8 volunteered information that they were sexually active, 5 (2.5%) from Luwero (non-vaccinated) and 3 (1.5%) from Nakasongola (vaccinated), but there was no statistically significant difference between the 2 groups. HPV vaccination was not significantly associated with being sexually active. Conclusion: There was no significant difference in sexual behavior between vaccinated and non-vaccinated girls. Answer: Yes, knowing about an HPV infection does influence behavior change. A feasibility study conducted among adolescent females testing positive for oncogenic strains of HPV found that after receiving a physician-delivered intervention emphasizing the association of high-risk HPV with cervical cancer and promoting protective behaviors, teens testing positive reported lower levels of risk-taking behavior, greater intent to return for next Pap testing, greater intent to quit smoking (if applicable), and greater intent to be vaccinated against HPV (PUBMED:19064233). This suggests that awareness of an HPV infection can lead to behavior change in the direction of adopting protective health behaviors.
Instruction: Is intraoperative anal endosonography necessary during laparoscopy-assisted anorectoplasty for high/intermediate type imperforate anus? Abstracts: abstract_id: PUBMED:18266590 Is intraoperative anal endosonography necessary during laparoscopy-assisted anorectoplasty for high/intermediate type imperforate anus? Purpose: The aim of this study was to evaluate the relevance of anal endosonography (AES) during Georgeson laparoscopy-assisted anorectoplasty (GLA) for high/intermediate type imperforate anus (HIIA) using a continence evaluation questionnaire (CEQ) and postoperative AES (post-AES). Methods: Fifteen HIIA patients who underwent GLA were enrolled in this prospective study. Six had intraoperative AES (op-AES) to confirm the accuracy of the positioning of the pull-through (PT) canal and 9 did not (no-AES). All subjects were reviewed prospectively by one surgeon and all had post-AES to measure the external sphincter (ES) and puborectalis (PR) at 3 and 9 o'clock to confirm that the PT colon was central. A CEQ (5 parameters: frequency of motions, severity of staining, severity of perianal erosions, anal shape, and requirement for medications; maximum score: 10) was administered to 9 subjects (4 from op-AES and 5 from no-AES) followed-up for over 3 years to assess continence. Results: Mean age at GLA for both groups was not statistically different. There were no statistical differences between the two groups for thickness of the ES and PR on post-AES. Mean annual CEQ scores for the op-AES group 1, 2, 3, 4, and 5 years after GLA were 5.3, 6.3, 7.4, 8.4, and 8.5, respectively, while for the no-AES group were 5.9, 7.1, 8.1, 8.2, and 8.2, respectively; all differences were not statistically significant. Conclusions: Our results suggest that op-AES is not necessary during GLA because there is no difference in clinical and functional status that can be attributed to op-AES. abstract_id: PUBMED:18206475 Midterm postoperative clinicoradiologic analysis of surgery for high/intermediate-type imperforate anus: prospective comparative study between laparoscopy-assisted and posterior sagittal anorectoplasty. Purpose: The objective of this study is to analyze the outcome of surgery for high/intermediate-type imperforate anus using anal endosonography (AES), magnetic resonance imaging (MRI), and a continence evaluation questionnaire (CEQ). Methods: In this study, 24 cases of high/intermediate-type imperforate anus were studied. Fifteen of 24 had Georgeson's laparoscopy-assisted anorectoplasty (GLA), and 9 of 24 had Pena's posterior sagittal anorectoplasty (PPA). All subjects had AES and MRI postoperatively. On AES, differences in thickness of the external sphincter (ES) and puborectalis (PR) at 3 and 9 o'clock were compared, and if pull-through colon was central, AES was 0. On MRI, differences in thickness were analyzed semiquantitatively and scored; if muscles were of even thickness, the score was 0; slight difference was 1, and marked difference was 2. A 5-parameter CEQ questionnaire (maximum score, 10) was administered to 16 of 24 subjects followed up for more than 3 years (9 GLA, 7 PPA). Surgical stress was assessed using mean febrile period, duration of raised white blood cell count, and peak C-reactive protein level. Results: Mean age at surgery and mean postoperative period for both groups were not statistically different. 
There were no differences in mean muscle thickness for ES or PR on AES according to procedure (ES: GLA = 0.19 +/- 0.15 mm, PPA = 0.16 +/- 0.09 mm, P = .59; PR: GLA = 0.19 +/- 0.19 mm, PPA = 0.22 +/- 0.15 mm, P = .69). Magnetic resonance imaging scores were also not significantly different according to procedure (GLA: 0.77 +/- 0.83; PPA: 0.75 +/- 0.50, P = .97). When CEQ were compared annually, scores for GLA were generally higher throughout the study but only statistically significant at 3 and 4 years (P < .05). Differences in parameters of surgical stress were not significant. Conclusion: Although there were no differences in muscle thickness and parameters of surgical stress observed according to technique, GLA would appear to provide better outcome based on CEQ scores. abstract_id: PUBMED:31494699 Single-incision laparoscopic-assisted anorectoplasty for treating children with intermediate-type anorectal malformations and rectobulbar fistula: a comparative study. Purpose: Single-incision laparoscopic-assisted anorectoplasty (SILAARP) was compared to posterior sagittal anorectoplasty (PSARP) for treating intermediate-type anorectal malformations (ARMs) with rectobulbar fistula. Methods: Between December 2011 and January 2016, 48 children with intermediate-type ARMS were treated with SILAARP (n = 34) or PSARP (n = 14) in our centre. Data including demographics, complications, and long-term outcomes were retrospectively compared. Results: No significant difference was observed between both groups in terms of median operative time and complications. The length of postoperative hospital stay was shorter in the SILAARP group than in the PSARP group (6.15 ± 1.10 vs 9.64 ± 4.13 days; p = 0.008). After a mean follow-up of 59.38 ± 13.68 months, the rates of voluntary bowel movements, soiling, and constipation were similar in both groups. Anorectal manometry was performed in 15 and 7 children from the SILAARP and PSARP groups, respectively. Although there were no significant differences in the presence of rectoanal relaxation reflex and high-pressure-zone length, anal canal resting pressure was higher in the SILAARP group than in the PSARP group (33.35 ± 12.95 vs 23.06 ± 8.40 mmHg; p = 0.039). Conclusions: Both SILAARP and PSARP seemed feasible and effective for treating intermediate-type ARMs with rectobulbar fistula in children. However, SILAARP significantly reduced the length of postoperative hospital stay and improved anal canal resting pressure. abstract_id: PUBMED:12483622 Intraoperative endosonography enhances laparoscopy-assisted colon pull-through for high imperforate anus. Purpose: The authors used ultrasonographic endoprobes during laparoscopy-assisted colon pull-through (LACPT) for the repair of high imperforate anus to confirm the pull-through canal was surrounded symmetrically by pelvic floor muscles. Methods: Six patients with high imperforate anus were treated by LACPT (mean age at LACPT, 8.2 months). An endoscopic (12-MHz, 2.5-mm in diameter) and proctoscopic (7.5-MHz, 12-mm in diameter) probe were inserted into the proposed route of dissection intraoperatively to measure the thickness of the surrounding muscle tissue at at least 3 levels: the external anal sphincter, the levator ani muscle sling, and the intervening muscle complex. Results: The average thickness of the external anal sphincter was 2.3 +/- 0.4 mm anteriorly, 2.4 +/- 0.4 mm on the left, 2.4 +/- 0.5 mm posteriorly, and 2.6 +/- 0.6 mm on the right. 
The average thickness of the muscle complex was 2.3 +/- 0.6 mm anteriorly, 2.2 +/- 0.5 mm on the left, 2.1 +/- 0.4 mm posteriorly, and 2.2 +/- 0.5 mm on the right. The average thickness of the left crus of the levator ani muscle was 1.8 +/- 0.3 mm, the right crus was 1.9 +/- 0.4 mm, and the rim located posterior to the rectum was 2.0 +/- 0.3 mm. No statistically significant difference was found between the measurements taken at each level. Conclusion: Intraoperative endosonography during LACPT can greatly enhance the precision of positioning the pull-through canal. abstract_id: PUBMED:30120550 Long-term outcomes and complications after laparoscopic-assisted anorectoplasty vs. posterior sagittal anorectoplasty for high- and intermediate-type anorectal malformation. Purpose: Laparoscopic-assisted anorectoplasty (LAARP) is a minimally invasive procedure in which the levator ani muscle is left to potentially improve postoperative fecal continence. This study aimed to retrospectively evaluate postoperative complications and long-term outcomes of fecal continence after LAARP and compare them to those after posterior sagittal anorectoplasty (PSARP). Methods: Forty-five male patients with high and intermediate anorectal malformation (aged ≥ 4 years) who underwent LAARP or PSARP (LAARP 29, PSARP 16) from 1999 to 2013 were included. Postoperative complications and postoperative fecal continence were retrospectively evaluated and compared. Postoperative complications were also compared before and after introducing a urethroscope during fistula resection. Results: Complications after LAARP and PSARP were seen in 12 vs. 2 cases (p = 0.09) of mucosal prolapse and in 9 vs. 1 case (p = 0.07) of posterior urethral diverticulum (PUD), respectively. The incidence of PUD decreased after introduction of a urethroscopy from 40% in 8/20 cases to 11% in 1/9 cases. No significant difference was found in terms of fecal continence in both groups. Conclusion: Through our study, greater improvement of postoperative fecal continence after LAARP has not been shown. LAARP was at higher risk for mucosal prolapse and PUD. However, precise dissection of the urethral fistula could be performed after the introduction of urethroscopy. abstract_id: PUBMED:35343076 Intraoperative visualization of urethra using illuminating catheter in laparoscopy-assisted anorectoplasty for imperforated anus-A novel and safe technique for preventing urethral injury. Purpose: In this study, we used a near-infrared ray catheter (NIRC) to visualize the urethral line. We herein report our intraoperative visualization technique of the urethra using an illuminating catheter in laparoscopy-assisted anorectoplasty (LAARP) for imperforated anus. Patient And Surgical Technique: A 3.0-kg term male neonate with anorectal malformation was delivered. An invertogram revealed the type as intermediate. Transverse colostomy was performed at the left upper abdomen. A recto-bulbar urethral fistula (RBUF) was diagnosed via distal colostogram and voiding cystourethrogram. LAARP was planned at 6 months of age. We performed the operation with four trocars. A 45° 5-mm scope was used to clearly view the deep pelvic space. Before starting rectal dissection, a 6-Fr pig-tail-type NIRC was inserted through the external opening of the urethra to visualize the urethra during the laparoscopic procedure. The catheter tip was placed in the bladder, and excretion of urine was maintained through the NIRC during the procedures. 
While dissecting the deep pelvic space between the posterior wall of the urethra and anterior wall of the rectum, the exact line of the urethra was clearly confirmed by overlay images of the NIRC. The RBUF was dissected safely using this innovative image-guided technique. Anoplasty was performed between the rectal stump and perineal skin. The postoperative course was uneventful. Oral intake was started on postoperative day 1. Postoperative dynamic urography showed no complications. Conclusion: An NIRC is useful for detecting the urethra during LAARP. abstract_id: PUBMED:35775517 Comparison of clinical outcome and anal manometry following laparoscopic-assisted anorectoplasty and posterior sagittal anorectoplasty in patients with high and intermediate anorectal malformation: A randomised controlled trial. Introduction: High and intermediate types of anorectal malformations (ARMs) may be managed by either open posterior sagittal anorectoplasty (PSARP) or by laparoscopic-assisted anorectoplasty (LAARP). Most of the literature favours one approach over the other based on retrospective analysis. We performed this study with the aim to compare the short-term outcomes of both procedures. Materials And Methods: All paediatric patients with high and intermediate ARM were enrolled and randomised into two groups: open PSARP group and LAARP group. Outcome parameters such as faecal continence using Kelly's scoring system, anal manometric parameters and post-operative complications were compared between the groups. Results: A total of 16 patients were included with equal distribution in the open PSARP and LAARP group. Patient's variables were comparable in both the groups. Five patients developed immediate post-operative complications, three in the LAARP and two in the open group. The mean Kelly's score was 3.63 ± 1.6 versus 2.57 ± 1.9 (P = 0.132) for LAARP and PSARP group, respectively. The mean resting pressure was 34.71 ± 6.26 cm of H2O and 35 ± 6.16 cm of H2O (P = 0.384) in LAARP and open group, respectively. Rectoanal inhibitory reflex was demonstrated in 6/7 patients in LAARP group and 5/7 patients in open group. Conclusion: Faecal continence in patients undergoing either of the procedure is comparable. However, wound-related complications are lesser in LAARP procedure. abstract_id: PUBMED:35257221 Long-term outcomes of male imperforate anus with recto-urethral fistula: laparoscopy-assisted anorectoplasty versus posterior sagittal anorectoplasty. Purpose: A five-parameter fecal continence evaluation questionnaire (FCEQ) and incidence of complications were used for long-term assessment of laparoscopy-assisted anorectoplasty (LAARP) and posterior sagittal anorectoplasty (PSARP) for treating male imperforate anus (MIA) with rectobulbar (RB) or rectoprostatic (RP) fistulas. Methods: Subjects were 64 consecutive Japanese MIA patients with RB or RP fistulas treated at a single institution between 1995 and 2021. FCEQ data collected retrospectively were used to calculate a fecal continence evaluation (FCE) score (best = 10) and coefficient of variation for FCE (FCECV). The statistical significance threshold was defined at p < 0.05. Results: Fistulas were RB (n = 40; LAARP = 25/40, PSARP = 15/40) or RP (n = 24; LAARP = 17/24, PSARP = 7/24). Mean ages at surgery and status of the sacrum were similar (p = 0.06, 0.05 and 0.51). FCE scores in RP-LAARP were consistently higher with less FCECV but differences were only statistically significant from 7 years postoperatively (p < 0.05). 
While FCE scores for RB-LAARP and RB-PSARP were similar (p = 0.99), FCECV were lower for RB-LAARP compared with RB-PSARP. LAARP was associated with fewer wound infections but a greater incidence of anal mucosal prolapse unrelated to preoperative status of the sacrum. Conclusion: Long-term postoperative FCEQ assessment favored LAARP for treating MIA with either RB or RP fistulas. abstract_id: PUBMED:29354902 Comparison of laparoscopic-assisted anorectoplasty and posterior sagittal anorectoplasty for high and intermediate anorectal malformations. Objective: To compare the efficacy of laparoscopic-assisted anorectoplasty (LAARP) and posterior sagittal anorectoplasty (PSARP) for high and intermediate anorectal malformation (ARM). Methods: Clinical data of 60 children with high and intermediate ARM who underwent LAARP (40 cases of three-port laparoscopy, 20 cases of transumbilical single-incision laparoscopy) from January 2010 to December 2014 were retrospectively analyzed. Clinical data of 36 children with high and intermediate ARM who underwent PSARP from January 1992 to December 2002 were used as control. The duration of operation, blood loss, hospital stay, postoperative complications and long-term anal function were compared between the two groups. Krickenbeck classification was applied to assess the anorectal function. Results: Baseline data, such as gender, age at operation, sacral index and proctatresia site, were not significantly different between the two groups. As compared to the PSARP group, the LAARP group presented shorter operative time [(113±13) min vs. (132±10) min, P=0.01], less operative blood loss [(11.5±1.6) ml vs. (14.3±1.7) ml, P=0.03], shorter postoperative hospital stay [(6.9±0.7) d vs. (8.2±0.8) d, P=0.02] and a lower rate of postoperative complications [11.7% (7/60) vs. 27.8% (10/36), P=0.02]. Postoperative defecation evaluation showed that the LAARP group had a lower percentage of 2-3 grade constipation than the PSARP group [0% vs. 22.2% (8/36), P=0.00]. The percentages of 2-3 grade voluntary bowel movement [5.9% (3/51) vs. 16.7% (6/36), P=0.20] and 2-3 grade soiling [5.9% (3/51) vs. 16.7% (6/36), P=0.20] were not significantly different between the two groups. Conclusions: LAARP is safe and effective in the treatment of high and intermediate ARM. Compared with PSARP, LAARP has some advantages, such as better cosmetic results and faster recovery.
Results: After mean follow-up of 17.0 years (range: 36.5 days-32.0 years), the overall incidences of LUT complications were: LAARP (6/56; 10.7%); PSARP (7/97; 7.2%); p = 0.55, comprising PUD: LAARP (n = 5), PSARP (n = 0); p = 0.006; injuries: LAARP (n = 0), PSARP (n = 5); p = 0.16; dysuria: LAARP (n = 1), PSARP (n = 1); p>0.999; and recurrence: LAARP (n = 0), PSARP (n = 1); p>0.999. Mean onset of PUD was 5.1 years (range: 1.0-15.1 years). Treatment: PUD: surgery (n = 2/5), conservative (n = 3/5); injuries: intraoperative repair (n = 5/5); dysuria: conservative (n = 2/2), and recurrence: redo PSARP (n = 1/1). Conclusions: Strategies devised to improve dissection accuracy resolved the specific technical issues causing LUT complications (remnant RU fistula dissection in LAARP and blind posterior access in PSARP). Currently, the incidence of new cases of PUD and LUT injuries is zero. Level Of Evidence: Level III. Answer: Intraoperative anal endosonography (AES) during laparoscopy-assisted anorectoplasty (LAARP) for high/intermediate type imperforate anus (HIIA) has been evaluated in several studies to determine its necessity and impact on surgical outcomes. One study suggests that intraoperative AES (op-AES) is not necessary during Georgeson laparoscopy-assisted anorectoplasty (GLA) because there was no difference in clinical and functional status that could be attributed to op-AES when compared to a group that did not have op-AES. This was based on a continence evaluation questionnaire (CEQ) and postoperative AES measurements of the external sphincter (ES) and puborectalis (PR) to confirm the central positioning of the pull-through (PT) colon. The study concluded that op-AES did not show any significant differences in the thickness of the ES and PR or in the mean annual CEQ scores between the two groups (PUBMED:18266590). Another study compared the outcomes of LAARP and posterior sagittal anorectoplasty (PSARP) using AES, magnetic resonance imaging (MRI), and CEQ. It found no differences in muscle thickness or parameters of surgical stress according to the technique used. However, CEQ scores were generally higher for the LAARP group, suggesting a better outcome based on continence evaluation, but this did not directly address the necessity of intraoperative AES (PUBMED:18206475). A study that used intraoperative endosonography during LAARP reported that it could enhance the precision of positioning the pull-through canal, suggesting a potential benefit of using intraoperative AES for ensuring accurate placement of the PT canal (PUBMED:12483622). In summary, while some studies indicate that intraoperative AES may not be strictly necessary for LAARP in HIIA as it does not significantly impact clinical and functional outcomes (PUBMED:18266590), other studies suggest that it could enhance the precision of the procedure (PUBMED:12483622). Therefore, the necessity of intraoperative AES may depend on the surgeon's preference and the specific circumstances of the surgery.
Instruction: Quality of dying in the ICU: is it worse for patients admitted from the hospital ward compared to those admitted from the emergency department? Abstracts: abstract_id: PUBMED:33718931 Quantifying the impact of patient boarding on emergency department length of stay: All admitted patients are negatively affected by boarding. Background: Patients boarding in the emergency department (ED) as a result of delays in bed placement are associated with increased morbidity and mortality. Prior literature on ED boarding does not explore the impact of boarding on patients admitted to the hospital from the ED. The objective of this study was to evaluate the impact of patient boarding on ED length of stay for all patients admitted to the hospital. Methods: This was an institutional review board-approved, retrospective review of all patients from January 1, 2015, through June 30, 2019, presenting to 2 large EDs in a single health system in Pennsylvania. Quantile regression models were created to estimate the impact of patients boarding in the ED on length of stay for all ED patients admitted to the hospital. Results: A total number of 466,449 ED encounters were analyzed across two EDs. At one ED, for every patient boarded, the median ED length of stay for all admitted patients increased by 14.0 minutes (P < 0.001). At the second ED, for every patient boarded in the ED, the median ED length of stay increased by 12.4 minutes (P < 0.001). Conclusion: ED boarding impacts length of stay for all patients admitted through the ED and not just those admitted patients who are boarded. This study provides an estimate for the increased ED length of stay experienced by all patients admitted to the hospital as a function of patient boarding. abstract_id: PUBMED:25116294 Quality of dying in the ICU: is it worse for patients admitted from the hospital ward compared to those admitted from the emergency department? Objective: Although most intensive care unit (ICU) admissions originate in the emergency department (ED), a substantial number of admissions arrive from hospital wards. Patients transferred from the hospital ward often share clinical characteristics with those admitted from the ED, but family expectations may differ. An understanding of the impact of ICU admission source on family perceptions of end-of-life care may help improve patient and family outcomes by identifying those at risk for poor outcomes. Design And Setting: This was a cohort study of patients with chronic illness and acute respiratory failure requiring mechanical ventilation who died after admission to an ICU in any of the 14 participating hospitals in the Seattle-Tacoma area between 2003 and 2008 (n = 1,500). Measurements: Using regression models adjusted for hospital site and patient-, nurse- and family-level characteristics, we examined associations between ICU admission source (hospital ward vs. ED) and (1) family ratings of satisfaction with ICU care; (2) family and nurse ratings of quality of dying; (3) chart-based indicators of palliative care. Main Results: Admission from the hospital ward was associated with lower family ratings of quality of dying [β -0.90, 95% confidence interval (CI) -1.54, -0.26, p = 0.006] and satisfaction (total score β -3.97, 95% CI -7.89, -0.05, p = 0.047; satisfaction with care domain score β -5.40, 95% CI -9.44, -1.36, p = 0.009). Nurses did not report differences in quality of dying. 
Patients from hospital wards were less likely to have family conferences [odds ratio (OR) 0.68, 95% CI 0.52, 0.88, p = 0.004] or discussion of prognosis in the first 72 h after ICU admission (OR 0.72, 95% CI 0.56, 0.91, p = 0.007) but were more likely to receive spiritual care (OR 1.48, 95% CI 1.14, 1.93, p = 0.003) or have life support withdrawn (OR 1.38, 95% CI 1.04, 1.82, p = 0.025). Conclusion: Admission from the hospital ward is associated with family perceptions of a lower quality of dying and less satisfaction with ICU care. Differences in receipt of palliative care suggest that family of patients from the hospital ward receive less communication. Nurse ratings of quality of dying did not significantly differ by ICU admission source, suggesting dissimilarities between family and nurse perspectives. This study identifies a patient population at risk for poor quality palliative and end-of-life care. Future studies are needed to identify interventions to improve care for patients who deteriorate on the wards following hospital admission. abstract_id: PUBMED:23073760 The effect of admitted patients in the emergency department on rates of hospital admissions. Emergency Department overcrowding with admitted inpatients is a common international occurrence. We undertook a retrospective review to compare patient admission rates from patients presenting to our Emergency Department with the level of overcrowding with admitted inpatients on that particular day in the Emergency Department. Over the 2-year study period there was no change in the rate or absolute number of admissions per day compared with the level of inpatient overcrowding. abstract_id: PUBMED:38023805 Prevalence and Associated Factors of Anemia Among Hospital Admitted Patients in Eastern Ethiopia. Background: Anemia is one of the most common comorbidities frequently seen in admitted patients. However, there is a scarcity of evidence regarding anemia among hospital admitted patients in Ethiopia, particularly in the Harari Region. Therefore, this study aimed to assess the prevalence and associated factors of anemia among hospital admitted patients in Eastern Ethiopia. Methods: A hospital-based cross-sectional study was conducted from October 25 to December 30, 2022. Four milliliters of venous blood were collected and complete blood count was done using the DxH 800 (Beckman Coulter, Inc, Miami, FL) hematology analyzer. The data were entered in Epi-data version 4 and exported to SPSS version 26 for statistical analysis. Bivariable and multivariable logistic regression models were fitted. The level of significance was declared at a p-value of < 0.05. Results: Of the 381 hospital admitted patients, 64.8% (95% CI = 60.01, 69.65) of the participants were anemic. Admitted patients who drank standard alcohol daily (AOR = 3.78, 95% CI = 1.71, 8.30), underweight (AOR = 9.39, 95% CI = 2.90, 30.46), and undernourished patients (AOR = 2.59, 95% CI = 1.15, 5.84), patients admitted with chronic kidney disease (AOR = 11.16, 95% CI = 4.06, 30.64), chronic liver disease (AOR = 3.20, 95% CI = 1.21, 8.47), deep vein thrombosis (AOR = 6.22, 95% CI = 1.98, 19.52), infectious disease (AOR = 9.71, 95% CI = 2.77, 34.02), and chronic non-communicable disease (AOR = 7.01, 95% CI = 1.90, 25.99) were all significantly associated with anemia. Conclusion: Anemia was common among hospital admitted patients and should prompt the focus on admission diagnoses that are likely to play leading roles in etiology. 
This information indicates a need for routine screening of anemia for all admitted patients to improve their health. abstract_id: PUBMED:32963799 Quality of life among newly admitted patients to cancer hospital in Vietnam and associated factors. A cross-sectional study was conducted among 262 newly admitted patients of a cancer hospital in Vietnam. The Vietnamese version EORTC-30 questionnaire 3.0 was used to interview patients during the first week after their admission. Lower global quality of life and physical functioning scores were significantly associated with older age, lung and gastrointestinal cancers. Emotional functioning score was lower among female patients as compared to males. Financial difficulty was an important predictor for almost all functional scales. Management of early symptoms, providing early emotional support and appropriate policies to reduce financial difficulty are important to improve cancer patients' quality of life. abstract_id: PUBMED:16163992 Improving documentation of head injured patients admitted to the emergency department ward. Objective: Well-written and factually accurate medical records are one of the cornerstones of Emergency Medicine. This audit aimed to assess whether documentation could be improved for head injured patients admitted to the Emergency Department observation ward using a pre-printed proforma. Methods: In the first phase the notes of a consecutive series of forty patients admitted for observation to an Emergency Department ward after sustaining a head injury were prospectively audited. A data collection instrument was designed to measure the presence or absence of documentation of mechanism of injury, specific symptoms, signs, medications, investigations and treatment considered essential for gold standard head injury management. In the second phase a specially designed proforma was introduced for all patients being admitted for observation. The notes of a second consecutive series of forty patients were then audited using the same data collection instrument. Results: The first phase of the audit revealed inadequate documentation with regard to many of the measured variables. Significant improvements were noted in all measured variables after the introduction of the proforma. Conclusions: Documentation of all important positive and negative signs in head injured patients can be time-consuming and often a challenge for doctors working in busy Emergency Departments. Accurate documentation is, however, important from both a clinical and a medico-legal position, and this audit has shown that the introduction of a customized proforma can improve the quality of documentation. In addition, clinical management of head injured patients may improve as the proforma also acts as a prompt for their subsequent investigation and treatment. abstract_id: PUBMED:29893435 Dyspnoea and self-management strategies in patients admitted to the emergency department: A study of patients' experiences. Aims And Objective: To explore patients' experiences of acute dyspnoea, physical functioning and perspectives on course of illness prior to admission to the emergency department. Background: Many emergency admissions are considered unnecessary and avoidable. In this perspective, it seems relevant to gain insight into the patients' perspective on acute dyspnoea and the need for emergency admission. However, only a few studies have investigated reasons for emergency admission from a patient perspective.
Design: A qualitative study was conducted with semi-structured interviews among six patients previously admitted to the emergency department due to dyspnoea. Data collection and analysis were carried out according to Kvale & Brinkmann using meaning condensation. Findings: Dyspnoea was experienced as an unpleasant breathlessness in the form of pain or suffocation, which limited usual physical activities, negatively impacting on quality of life. Self-management strategies such as medication, breathing exercises, distraction from breathing, and mental and physical relaxation in general were used to avoid hospital admission. The chronically ill patients saw the following ways to alter the course of their disease and avoid admissions to the emergency department: easier access to specialised emergency medical care, medical supplies in the patient's home and making existing physical exercise programmes more accessible and interesting. Conclusion And Relevance To Clinical Practice: It would be relevant to further investigate whether prehospital interventions may remedy acute dyspnoea among chronically ill patients, and whether such interventions are cost-effective. abstract_id: PUBMED:37173654 Effect of emergency physician-operated emergency short-stay ward on emergency department stay length and clinical outcomes: a case-control study. Background: We hypothesized that an emergency short-stay ward (ESSW) mainly operated by emergency medicine physicians may reduce the length of patient stay in the emergency department without compromising clinical outcomes. Methods: We retrospectively analysed adult patients who visited the emergency department of the study hospital and were subsequently admitted to wards from 2017 to 2019. We divided study participants into three groups: patients admitted to ESSW and treated by the department of emergency medicine (ESSW-EM), patients admitted to ESSW and treated by other departments (ESSW-Other) and patients admitted to general wards (GW). The co-primary outcomes were ED length of stay and 28-day hospital mortality. Results: In total, 29,596 patients were included in the study, and 8,328 (31.3%), 2,356 (8.9%), and 15,912 (59.8%) of them were classified as ESSW-EM, ESSW-Other and GW groups, respectively. The ED length of stay of the ESSW-EM group (7.1 ± 5.4 h) was shorter than those of the ESSW-Other (8.0 ± 6.2 h, P < 0.001) and the GW (10.2 ± 9.8 h, P < 0.001 for both). Hospital mortality of ESSW-EM (1.9%) was lower than that of GW (4.1%, P < 0.001). In the multivariable linear regression analysis, the ESSW-EM was independently associated with shorter ED length of stay compared with both the ESSW-Other (coefficient, 1.08; 95% confidence interval, 0.70-1.46; P < 0.001) and the GW (coefficient, 3.35; 95% confidence interval, 3.12-3.57; P < 0.001). In the multivariable logistic regression analyses, the ESSW-EM was independently associated with lower hospital mortality compared with both the ESSW-Other group (adjusted P = 0.030) and the GW group (adjusted P < 0.001). Conclusions: The ESSW-EM was independently associated with shorter ED length of stay compared with both the ESSW-Other and the GW in adult ED patients. An independent association was also found between the ESSW-EM and lower hospital mortality compared with the GW. abstract_id: PUBMED:29885547 Exploring the quality of the dying and death experience in the Emergency Department: An integrative literature review.
Aim: The aim of this integrative literature review was to explore the quality of the dying and death experience in the Emergency Department from the perspective of staff and carers. Background: Death in the Emergency Department is common. Understanding the quality of the death and dying experience of patients and their family members is crucial to building knowledge and improving care. Design: Systematic integrative literature review reported following the PRISMA guidelines. Data Sources: PubMed, Cumulative Index to Nursing and Allied Health Literature, Magonline (internurse), and the Cochrane library. Articles used were published in English during 1990-2017. Review Method: Appraisal and thematic analysis. Results: Sixteen articles are included. Eight themes emerged from the literature: care in the Emergency Department is about living not dying, staff perceive that death is a failure, staff feel underprepared to care for the dying patient and family in this environment, there is limited time for safe standards of care, staff stress and distress, staff use of distancing behaviours, the care of the dying role is devolved from medics to nurses at the end of life, and patients and staff perceive that the Emergency Department is not the preferred place of death. Conclusion: There are areas of concern about end-of-life care in the Emergency Department. To improve practice and to ensure that a good death occurs, further research is needed. There is a need to understand more about the experience of caregivers when a relative or friend dies in the Emergency Department. abstract_id: PUBMED:36327572 The impact of COVID-19 pandemic on the urgency of patients admitted to the emergency department. Background: The COVID-19 pandemic brought important changes in access to the Emergency Department (ED). At present, an analysis of how the COVID-19 pandemic has changed not only the number but also the nature of the urgency of ED access is not available. This study aimed to verify the effect of the COVID-19 pandemic on the urgency of patients admitted to the ED utilizing time-based analyses. Methods: A quasi-experimental interrupted time series analysis based on a retrospective review of data from all ED admissions occurring at the Merano Hospital (Italy) from 1 January 2015 to 30 June 2021 was conducted. Results: From January 2015 to June 2021, 416,560 patients were admitted to the ED. Since the pandemic outbreak, the trend of urgent patients (classified as orange and red by triage) remained unchanged compared to the pre-pandemic period (p = 0.095, p = 0.155). In contrast, patients classified as blue (non-urgent) increased exponentially since the outbreak of the pandemic (p < 0.001). Conclusion: The present study reported stability in urgent codes (orange and red) since the pandemic outbreak, while a dramatic increase in non-urgent codes (blue and green) is reported. The results of the study may indicate the need to find health policy solutions to manage this large increase in non-urgent patients requiring assessment in EDs since the outbreak of the pandemic. Answer: According to the study with PUBMED:25116294, the quality of dying in the ICU is perceived to be worse for patients admitted from the hospital ward compared to those admitted from the emergency department. This conclusion is based on lower family ratings of quality of dying and satisfaction with ICU care for patients transferred from the hospital ward.
Additionally, these patients were less likely to have family conferences or discussions of prognosis in the first 72 hours after ICU admission but were more likely to receive spiritual care or have life support withdrawn. The study suggests that families of patients from the hospital ward receive less communication, which may contribute to their perceptions of a lower quality of dying and less satisfaction with care. Nurses, however, did not report differences in quality of dying based on the source of ICU admission, indicating a discrepancy between family and nurse perspectives. This study highlights a patient population at risk for poor quality palliative and end-of-life care and suggests the need for interventions to improve care for patients who deteriorate on the wards following hospital admission.
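Illustrative note: the odds ratios and confidence intervals quoted above come from the cited study's own analysis. The short Python sketch below only illustrates the arithmetic of an unadjusted odds ratio and its 95% confidence interval from a 2x2 table; the counts are hypothetical and chosen merely so the result lands in a similar range.

import math

# Hypothetical 2x2 table: admission source vs. whether a family conference was held.
a, b = 120, 280   # ward-admitted: conference held / not held (invented counts)
c, d = 210, 330   # ED-admitted: conference held / not held (invented counts)

odds_ratio = (a * d) / (b * c)                  # (a/b) divided by (c/d)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)    # standard error of ln(OR)
z = 1.96                                        # approximate 95% normal quantile
lower = math.exp(math.log(odds_ratio) - z * se_log_or)
upper = math.exp(math.log(odds_ratio) + z * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {lower:.2f} to {upper:.2f}")

With these made-up counts the script prints an OR of about 0.67 (95% CI roughly 0.51 to 0.89); any resemblance to the published figures is incidental, since those come from the study's own data and modelling.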
Instruction: Are there enough GPs in England to detect hypertension and maintain access? Abstracts: abstract_id: PUBMED:23643232 Are there enough GPs in England to detect hypertension and maintain access? A cross-sectional study. Background: Fewer patients are recorded by practices as having hypertension than are identified in systematic population surveys. However, as more patients are recorded on practice hypertension registers, mortality from coronary heart disease and stroke declines. Aim: To determine whether the number of GPs per 1000 practice population is associated with the number of patients recorded by practices as having hypertension, and whether patients' reports of being able to get an appointment with a GP are associated with the number of GPs and the number of patients recorded as having hypertension. Design And Setting: Cross-sectional study of available data for all general practices in England for 2008 to 2009. Method: A model was developed to describe the hypothesised relationships between population (deprivation, ethnicity, age, poor health) and practice characteristics (list size, number of GPs per 1000 patients, management of hypertension) and the number of patients with hypertension and patient-reported ability to get an appointment fairly quickly. Two regression analyses were undertaken. Results: Practices recorded only 13.3% of patients as having hypertension. Deprivation, age, poor health, white ethnicity, hypertension management, and the number of GPs per 1000 patients predicted the number of patients recorded with hypertension. Being able to get an appointment fairly quickly was associated with the number of patients recorded with hypertension, age, deprivation, practice list size, and the number of GPs per 1000 patients. Conclusion: In order to improve detection of hypertension as part of a strategy to lower mortality from coronary heart disease, the capacity of practices to detect hypertension while maintaining access needs to be improved. Increasing the supply of GPs may be necessary, as well as improvements in efficiency. abstract_id: PUBMED:23834865 Are there enough GPs in England to detect hypertension and maintain access? N/A abstract_id: PUBMED:34586250 Health and household surveys in Brazil and England: The National Health Survey and the Health Survey for England. This paper aims to analyze the characteristics of national health surveys conducted in Brazil and England by their respective Statistics institutes. For Brazil, the National Health Survey (PNS) was considered, and the Health Survey for England (HSE) for England. To this end, we show a preliminary overview of the different population profiles of the two countries. Then, a brief historical background is presented, including the common themes that are addressed in the PNS and HSE that favor comparative analyses. Finally, we compared, for example, the inequalities in access to and use of Brazilian and English health services. The results show several possibilities for comparative analysis on topics such as health perception, tobacco use, alcohol consumption, diabetes, and hypertension. However, the need to consider the specificities of the population profile of each country and the methodological characteristics of the surveys is emphasized. abstract_id: PUBMED:19622520 Ethnic inequalities in access to and outcomes of healthcare: analysis of the Health Survey for England. Background: Ethnic/racial inequalities in access to and quality of healthcare have been repeatedly documented in the USA. 
Although there is some evidence of inequalities in England, research is not so extensive. Ethnic inequalities in use of primary and secondary health services, and in outcomes of care, were examined in England. Methods: Four waves of the Health Survey for England were analysed, a representative population survey with ethnic minority oversamples. Outcome measures included use of primary and secondary healthcare services and clinical outcomes of care (controlled, uncontrolled and undiagnosed) for three conditions - hypertension, raised cholesterol and diabetes. Results: Ethnic minority respondents were not less likely to use GP services. For example, the adjusted odds ratios for Indian, Pakistani and Bangladeshi versus white respondents were 1.29 (95% confidence intervals 1.07 to 1.54), 1.32 (1.10 to 1.58) and 1.35 (1.10 to 1.65) respectively. Similarly, there were no ethnic inequalities for the clinical outcomes of care for hypertension and raised cholesterol, and, on the whole, no inequalities in outcomes of care for diabetes. There were ethnic inequalities in access to hospital services, and marked inequalities in use of dental care. Conclusion: Ethnic inequalities in access to healthcare and the outcomes of care for three conditions (hypertension, raised cholesterol and diabetes), for which treatment is largely provided in primary care, appear to be minimal in England. Although inequalities may exist for other conditions and other healthcare settings, particularly internationally, the implication is that ethnic inequalities in healthcare are minimal within NHS primary care. abstract_id: PUBMED:20423586 GPs' perceived barriers to their involvement in caring for patients with HIV: a questionnaire-based study. Background: Since the introduction of highly active antiretroviral therapy (HAART) in the late 1990s, individuals with HIV are living longer and beginning to experience the chronic diseases of ageing. Alongside side effects of HAART therapy, these may be better managed in the community by generalists rather than by HIV specialists. Aim: To explore GPs' current perceptions of barriers to their involvement in managing patients with HIV. Design Of Study: Postal questionnaire. Setting: Practices within a primary care trust in the south of England. Method: The questionnaire included 24 statements relating to potential structural and attitudinal barriers to GPs' involvement in the care of patients with HIV. Results: Seventy-one per cent (124/174) of GPs responded; 93% (n = 115) of responders were comfortable managing other illness in HIV patients, and 60% (n = 74) agreed that GPs rather than specialists should manage common chronic problems such as hypertension or diabetes. Specialist prescribing was commonly identified as a barrier, specifically: difficulty keeping up to date, knowledge of HAART side effects, and drug interactions. The majority of responders (67% [83]) viewed patients as preferring their HIV illness to be managed by specialists. Conclusion: There is now the opportunity and potential to relocate the management of chronic disease in patients with HIV to GPs. Factors such as lack of time, experience, and training were identified as barriers to caring for patient with HIV in the pre-HAART era; these factors continue to challenge some GPs. Additional barriers are the complexity of HAART regimes and inadequate reimbursement. Addressing these barriers is necessary if services are to be reconfigured for people with HIV. 
abstract_id: PUBMED:28545856 Screening for pulmonary arterial hypertension in patients with systemic sclerosis: Comparison of DETECT algorithm to decisions of a multidisciplinary team, in a competence centre. Introduction: Pulmonary arterial hypertension (PAH) is a severe complication of systemic sclerosis, and detecting PAH efficiently remains challenging. In 2013, the DETECT study offered a composite screening tool for PAH. The objective of our study was to compare the indication of right heart catheterisation (RHC) as suggested by the DETECT algorithm with the decisions of a multidisciplinary team. Methods: This prospective monocentric non-interventional study consecutively included systemic sclerosis patients when the data required to apply the DETECT algorithm were available. We evaluated the number of RHCs requested by this algorithm and compared it with the indications for RHC suggested by a multidisciplinary group blinded to the result of the DETECT algorithm. Results: In total, 117 systemic sclerosis patients were included. When the DETECT algorithm was applied to all patients, RHC was suggested by this algorithm for 70 patients, whereas only 15 indications were required by the multidisciplinary group; among those patients only 7 had PAH. When the DETECT algorithm was applied only to the 42 patients with DLCO<60% and disease duration of more than 3 years, RHC was suggested for 31 patients whereas only 13 were indicated by the multidisciplinary group; among those patients only 7 had PAH. Conclusion: The DETECT algorithm is able to efficiently detect all PAH patients finally diagnosed by our multidisciplinary team. However, it increases the number of RHCs that should be performed by a factor of about 3. abstract_id: PUBMED:28347984 Trends in the diagnosis and management of hypertension: repeated primary care survey in South West England. Background: Previous surveys identified a shift to nurse-led care in hypertension in 2010. In 2011 the National Institute for Health and Care Excellence (NICE) recommended ambulatory (ABPM) or home (HBPM) blood pressure (BP) monitoring for diagnosis of hypertension. Aim: To survey the organisation of hypertension care in 2016 to identify changes, and to assess uptake of NICE diagnostic guidelines. Design And Setting: Questionnaires were distributed to all 305 general practices in South West England. Method: Responses were compared with previous rounds (2007 and 2010). Data from the 2015 Quality and Outcomes Framework (QOF) were used to compare responders with non-responders, and to explore associations of care organisation with QOF achievement. Results: One hundred and seventeen practices (38%) responded. Responders had larger list sizes and greater achievement of the QOF target BP ≤150/90 mmHg. Healthcare assistants (HCAs) now monitor BP in 70% of practices, compared with 37% in 2010 and 19% in 2007 (P<0.001). Nurse prescribers alter BP medication in 26% of practices (11% in 2010, none in 2007; P<0.001). Of the practices, 89% have access to ABPM, but only 71% report confidence in interpreting results. Also, 87% offer HBPM, with 93% of these confident in interpreting results. Conclusion: In primary care, BP monitoring has devolved from GPs and nurses to HCAs. One in 10 practices are not implementing NICE guidelines on ABPM and HBPM for diagnosis of hypertension. Most practices express confidence interpreting HBPM results but less so with ABPM. The need for education and quality assurance for allied health professionals is highlighted, and for training in ABPM interpretation for GPs.
abstract_id: PUBMED:12444541 The safety of valsartan: results of a postmarketing surveillance study on 12881 patients in England. Valsartan is a second class of angiotensin II receptor antagonist, indicated for the treatment of hypertension. The objective of the study was to monitor the safety of valsartan using the technique of prescription event monitoring (PEM), in patients who were prescribed this drug by general practitioners (GPs) in England. PEM is a noninterventional observation cohort technique. Exposure data were obtained from dispensed prescriptions issued between December 1996 and November 1998. Outcome data were obtained by sending questionnaires to prescribing GPs. The cohort comprised 12881 patients. Events most frequently reported as suspected adverse drug reactions were malaise/lassitude (37; 0.3% of total cohort), dizziness (19; 0.1%), and unspecified side effects (57; 0.4%). Events with the highest incidence density (ID(1) per 1000 patient-months of treatment) in the first month of treatment were malaise/lassitude (15.6), dizziness (11.8), and headache/migraine (10.9). Most frequent reasons for stopping valsartan were not effective (847; 6.6% of total cohort), malaise/lassitude (265; 21%), and dizziness (146; 1.1%). No unexpected serious adverse events were identified. Other events assessed as possibly related to valsartan use were impotence (37), dizziness (19), cough (9), facial oedema (5), hyperkalaemia (3), and angioneurotic oedema (1). There were four reports of exposure during pregnancy and 203 deaths (1.5%) in this cohort. In conclusion, this study monitored the safety profile of valsartan in a large cohort of patients in general practice in England. No untoward features other than dizziness were identified that were not mentioned in the prescribing guidance. abstract_id: PUBMED:26598759 Comparison of hypertension healthcare outcomes among older people in the USA and England. Background: The USA and England have very different health systems. Comparing hypertension care outcomes in each country enables an evaluation of the effectiveness of each system. Method: The English Longitudinal Study of Ageing and the Health and Retirement Survey are used to compare the prevalence of controlled, uncontrolled and undiagnosed hypertension within the hypertensive population (diagnosed or measured within the survey data used) aged 50 years and above in the USA and in England. Results: Controlled hypertension is more prevalent within the hypertensive population in the USA (age 50-64: 0.53 (0.50 to 0.57) and age 65+: 0.51 (0.49 to 0.53)) than in England (age 50-64: 0.45 (0.42 to 0.48) and age 65+: 0.42 (0.40 to 0.45)). This difference is driven by lower undiagnosed hypertension in the USA (age 50-64: 0.18 (0.15-0.21) and age 65+: 0.13 (0.12 to 0.14)) relative to England (age 50-64: 0.26 (0.24 to 0.29) and age 65+: 0.22 (0.20 to 0.24)). The prevalence of uncontrolled hypertension within the hypertensive population is very similar in the USA (age 50-64: 0.29 (0.26 to 0.32) and age 65+: 0.36 (0.34 to 0.38)) and England (age 50-64: 0.29 (0.26 to 0.32) and age 65+: 0.36 (0.34 to 0.39)). Hypertension care outcomes are comparable across US insurance categories. In both countries, undiagnosed hypertension is positively correlated with wealth (ages 50-64). Uncontrolled hypertension declines with rising wealth in the USA. Conclusions: Different diagnostic practices are likely to drive the cross-country differences in undiagnosed hypertension. 
US government health systems perform at least as well as private healthcare and are more equitable in the distribution of care outcomes. Higher undiagnosed hypertension among the affluent may reflect less frequent medical contact. abstract_id: PUBMED:32580986 Lessons from the failure of implementing the 'Better Care Better Value' prescribing indicator for renin-angiotensin system drugs in England: a qualitative study of general practitioners' perceptions using behavioural change framework. Objectives: To explore reasons for the lack of uptake of 'Better Care Better Value' (BCBV) prescribing indicators for renin-angiotensin-aldosterone system (RAAS) inhibitors and identify learning lessons to inform the implementation of future prescribing policies. Design: In-depth, semistructured interviews to explore general practitioners' (GPs) experiences in prescribing RAAS inhibitors, perceptions of the BCBV policy, potential barriers to policy implementation and suggestions for improving future policy implementation. Interviews were audio recorded, transcribed verbatim and analysed thematically, then mapped onto behavioural change frameworks (the Capability, Opportunity, Motivation and Behaviour model and the Behaviour Change Wheel (BCW)). Setting: Primary care setting in England. Participants: Interviews were conducted with 16 GPs recruited from a purposive sample of 91 GP practices in three English counties. Results: Four barriers, related mainly to GPs' psychological capability and reflective motivation, emerged as possible reasons for the BCBV's lack of uptake: lack of policy awareness, negative attitude to the policy, lack of incentives, and GPs' reluctance to switch patients from angiotensin receptor blockers (ARBs) to ACE inhibitors (ACEIs). The participating GPs proposed interventions to improve future BCBV implementation, and these were related to six intervention/policy functions of the BCW, addressing the four identified barriers: education/communication (increase GPs' awareness) and environmental restructuring/regulations (provide GPs with reminding alerts); incentivisation/fiscal (provide GPs with financial incentives); enablement/guidelines-regulations (provide GPs with benchmarking against peers); and enablement/regulations and education/guidelines (facilitate switching from ARBs to ACEIs). Conclusions: The main reason underpinning the low uptake of the BCBV indicator appears to be the lack of a proactive implementation strategy. This case study demonstrated that passively disseminating policy without an effective implementation strategy results in low uptake. Furthermore, multifaceted implementation strategies are necessary to influence complex clinical decision making in a time-limited environment, such as prescribing behaviours. These findings suggest that effective policy implementation requires the application of a systematic, comprehensive behaviour change framework. Answer: The question of whether there are enough GPs in England to detect hypertension and maintain access is addressed in the study by PUBMED:23643232. The study found that practices recorded only 13.3% of patients as having hypertension, which is lower than the number identified in systematic population surveys. The number of GPs per 1000 patients was one of the predictors for the number of patients recorded with hypertension.
Additionally, the ability of patients to get an appointment fairly quickly was associated with the number of patients recorded as having hypertension, the number of GPs per 1000 patients, and other factors such as age and deprivation. The conclusion of the study suggests that in order to improve the detection of hypertension and lower mortality from coronary heart disease, there needs to be an increase in the capacity of practices to detect hypertension while maintaining access. This may require increasing the supply of GPs as well as improvements in efficiency. The study does not explicitly state that there is a current shortage of GPs, but it implies that increasing the number of GPs may be necessary to improve hypertension detection and patient access to appointments. Therefore, based on this study, it can be inferred that there may not be enough GPs in England to optimally detect hypertension and maintain access to GP appointments for all patients.
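As an illustration of the kind of practice-level regression described in PUBMED:23643232 (not a reproduction of it), the Python sketch below fits an ordinary least squares model of recorded hypertension prevalence on GPs per 1000 patients, a deprivation score and the proportion of older patients. All variable names and data are hypothetical and generated only to show the mechanics; the original study used its own variables and model specification, and also modelled patient-reported ability to get an appointment.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500  # hypothetical practices
df = pd.DataFrame({
    "gps_per_1000": rng.uniform(0.4, 1.2, n),   # GPs per 1000 registered patients (invented)
    "imd_score": rng.uniform(5, 60, n),         # deprivation index on an invented scale
    "pct_over_65": rng.uniform(8, 30, n),       # percentage of the practice list aged 65+
})
# Simulated outcome: percentage of the practice list recorded as hypertensive
df["recorded_htn_pct"] = (
    6.0
    + 3.0 * df["gps_per_1000"]
    + 0.04 * df["imd_score"]
    + 0.25 * df["pct_over_65"]
    + rng.normal(0, 1.5, n)
)

model = smf.ols("recorded_htn_pct ~ gps_per_1000 + imd_score + pct_over_65", data=df).fit()
print(model.params)      # estimated coefficients
print(model.conf_int())  # 95% confidence intervals

Because the outcome here was simulated with a positive coefficient on gps_per_1000, the fitted model recovers a positive association; with real practice data the estimate would be whatever the data support.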
Instruction: Is the NIHSS certification process too lenient? Abstracts: abstract_id: PUBMED:19295205 Is the NIHSS certification process too lenient? Background And Purpose: The National Institutes of Health Stroke Scale (NIHSS) is a widely used measure of neurological function in clinical trials and patient assessment; inter-rater scoring variability could impact communications and trial power. The manner in which the rater certification test is scored yields multiple correct answers that have changed over time. We examined the range of possible total NIHSS scores from answers given in certification tests by over 7,000 individual raters who were certified. Methods: We analyzed the results of all raters who completed one of two standard multiple-patient videotaped certification examinations between 1998 and 2004. The range for the correct score, calculated using NIHSS 'correct answers', was determined for each patient. The distribution of scores derived from those who passed the certification test then was examined. Results: A total of 6,268 raters scored 5 patients on Test 1; 1,240 scored 6 patients on Test 2. Using a National Stroke Association (NSA) answer key, we found that correct total scores ranged from 2 correct scores to as many as 12 different correct total scores. Among raters who achieved a passing score and were therefore qualified to administer the NIHSS, score distributions were even wider, with 1 certification patient receiving 18 different correct total scores. Conclusions: Allowing multiple acceptable answers for questions on the NIHSS certification test introduces scoring variability. It seems reasonable to assume that the wider the range of acceptable answers in the certification test, the greater the variability in the performance of the test in trials and clinical practice by certified examiners. Greater consistency may be achieved by deriving a set of 'best' answers through expert consensus on all questions where this is possible, then teaching raters how to derive these answers using a required interactive training module. abstract_id: PUBMED:16224093 NIHSS training and certification using a new digital video disk is reliable. Background And Purpose: NIH Stroke Scale certification is required for participation in modern stroke clinical trials and as part of good clinical care in stroke centers. The existing training and certification videotapes, however, are more than 10 years old and do not contain an adequate balance of patient findings. Methods: After producing a new NIHSS training and demonstration DVD, we selected 18 patients representing all possible scores on 15 scale items for a new certification DVD. Patients were divided into 3 certification groups of 6 patients each, balanced for lesion side, distribution of scale item findings, and total score. We sought to measure interrater reliability of the certification DVD using methodology previously published for the original videotapes. Raters were recruited from 3 experienced stroke centers. Each rater watched the new training DVD and then evaluated one of the 3 certification groups. Results: Responses were received from 112 raters: 26.2% of all responses came from stroke nurses, 34.1% from emergency departments/other physicians, and 39.6% from neurologists. One half (50%) of raters were previously NIHSS-certified. 
Item responses were tabulated, scoring was performed as previously published, and agreement was measured with unweighted kappa coefficients for individual items and an intraclass correlation coefficient for the overall score. Kappa ranged from 0.21 ± 0.05 (ataxia) to 0.92 ± 0.09 (LOC-C questions). Of 15 items, 2 showed poor, 11 moderate, and 2 excellent agreement based on kappa scores. The intraclass correlation coefficient for total score was 0.94 (95% confidence interval, 0.84 to 1.00). Reliability scores were similar among specialists and centers, and there were no differences between nurses and physicians. Kappa scores trended higher among raters previously certified. Conclusions: These certification DVDs are reliable for NIHSS certification, and scoring sheets have been posted on a web site for real-time, online certification. abstract_id: PUBMED:37329663 A clinical comparative analysis between expanded NIHSS and original NIHSS in posterior circulation ischemic stroke. Background: The NIHSS is used routinely but has drawbacks. One of them is its inefficiency in detecting all signs of posterior circulation strokes. Since its proposal as a possible substitute for the NIHSS in posterior circulation strokes in 2016, the expanded NIHSS (e-NIHSS) has received little attention. The current study assesses clinically the value of the e-NIHSS over the NIHSS in posterior circulation strokes as regards the percentage of cases with different/higher scores, the significance of such scoring for management decisions, the weight of baseline e-NIHSS as a prognostic tool for 90-day functional outcome, and its cut-off value. Methods: The current longitudinal observational study was conducted on 79 cases with posterior circulation strokes confirmed by brain imaging, after formal written consent. Results: In comparison to the NIHSS, the e-NIHSS score was higher in 36 cases at baseline and in 30 cases on discharge. The e-NIHSS median was two points higher at baseline and at 24 h, and was one point higher on discharge (P < 0.001). A baseline moderate/moderate-severe score was more common with the e-NIHSS (n = 50, 63.3%). In terms of the 90-day outcome, a less favorable outcome (>2) was evident in cases with different scoring (e-NIHSS > NIHSS), indicating greater sensitivity of the e-NIHSS in predicting the 90-day outcome. The ROC curve showed 82% sensitivity and 81% specificity with a significant area under the curve (0.858) at a cut-off score of ≥ 8 on the e-NIHSS. Conclusion: The e-NIHSS is a diagnostically and prognostically relevant tool in posterior circulation strokes and ought to be considered in future guidelines. abstract_id: PUBMED:27446761 NIST Standards for Microanalysis and the Certification Process. The National Institute of Standards and Technology (NIST) has been involved in the development of standards for microanalysis since the middle of the 1960s. Certification of "traceable" standards that can be sold to other laboratories is time-consuming and costly, especially when the extent of microheterogeneity within each specimen becomes part of the uncertainty assigned to the certified values. The process of certification of microanalysis standards and the improvements that have facilitated the process with the development of automation and computerization are reviewed. abstract_id: PUBMED:25925653 Certification renewal process of the American Board of Orthodontics. The American Board of Orthodontics was established in 1929 and is the oldest specialty board in dentistry.
Its goal is to protect the public by ensuring competency through the certification of eligible orthodontists. Originally, applicants for certification submitted a thesis, 5 case reports, and a set of casts with appliances. Once granted, the certification never expired. Requirements have changed over the years. In 1950, 15 cases were required, and then 10 in 1987. The Board has continued to refine and improve the certification process. In 1998, certification became time limited, and a renewal process was initiated. The Board continues to improve the recertification process. abstract_id: PUBMED:34887032 Longitudinal assessment: A strategy to improve continuing professional certification. Healthcare certification organizations carefully balance a commitment to bring value to their membership through programs that support lifelong learning and professional growth, while protecting the public by ensuring competent certified practitioners. These certifying bodies are challenged with remaining current with their maintenance of certification programs while keeping pace with the growing breadth of knowledge, industry standards and guidelines, innovative advances, and rapid technological gains in testing and assessment. Within the context of process innovation, the National Board of Certification and Recertification for Nurse Anesthetists (NBCRNA) evaluated the current landscape of Longitudinal Assessment (LA) as a potential strategy for the assessment of core knowledge as part of their Continued Professional Certification Program for Certified Registered Nurse Anesthetists. This manuscript details the evaluation of LA using a Logic Model as the tool to scaffold inquiry, a review of LA literature, an environmental scan of current LA programs with identification of LA program elements available, and the results of an LA feasibility study. The findings substantiate that continued professional certification which incorporates an LA strategy can augment lifelong learning, but is not an assessment strategy that can be implemented without thoughtful planning, customization and continuous maintenance. abstract_id: PUBMED:10295849 A guide to HFMA's certification process. HFMA's professional certification process offers members a way to demonstrate their expertise in patient accounts management or healthcare financial management. Marsha L. Morien, FHFMA, CMPA, is chairman of HFMA's Board of Examiners, the group that oversees the certification activities. In the following interview, Morien explains the steps in the process and offers advice for members preparing to become certified. abstract_id: PUBMED:19665341 Certification of health-related websites in France. The 2004 statute that created the French National Authority for Health (HAS, Haute Autorité de Santé) required it to establish a procedure for the certification of health-related web sites. The HAS established a procedure based on the HONcode certification scheme set up by the Health On the Net Foundation, with which HAS has a partnership agreement. The HONcode includes eight principles that govern the quality of online health information and its presentation (quality of the production process). The collaboration between HAS and HON has already led to improvements in a large number of web sites in France and to their certification. The main advantages of certification for site publishers are better site quality and enhanced credibility rather than a larger audience.
Quality certification has little impact on the choice of site by Internet users as they tend to use search engines to find health-related information. Future development of the procedure should work to increase the value of certification both by improving the quality of sites and in signaling quality to Internet users. abstract_id: PUBMED:9870325 The certification process in adolescent medicine. The first and second certifying examinations in adolescent medicine were administered jointly by the American Board of Pediatrics (ABP) and the American Board of Internal Medicine (ABIM) on November 15, 1994, to 295 candidates, and on November 18, 1997, to 197 candidates including 170 who were taking the examination for the first time. This report describes the certification process and the characteristics of those taking the first and second examinations in adolescent medicine. The purposes of certification in adolescent medicine, consistent with the purposes of specialty boards, are to improve the quality of patient care during the second decade of life, expand and improve training to include a comprehensive and scientifically based approach to the care of young people as they enter adulthood, and stimulate faculty development and research for the problems of the adolescent population (1). Certification addresses these goals by establishing rigorous standards for training, verifying the knowledge base of candidates through a written examination, and requiring periodic recertification. This certification program recognizes a growing number of physicians with expertise in the medical problems of adolescents and the emergence in the past 30 years of a body of knowledge about the health problems, the effects of rapid biologic and psychologic change, and the interaction of biologic, psychologic, social, and environmental forces that characterize this unique developmental period (2). abstract_id: PUBMED:31307678 Successful completion of an ISO 9001 v 2015 certification process in clinical investigation research Clinical investigation at the University Hospital of Nantes is carried out within the investigation department, which consists of coordination and 24 clinical research teams gathered at four different hospital sites. The Clinical Investigation Units are all equipped with a Quality Management System. This has been in place for more than 12 years and was initially created for the Clinical Investigation Units of the Clinical Investigation Centre (CIC 1413) in Nantes and then transposed in 2013, to the emerging Clinical Investigation Units. Objective: In order to evaluate the performance of this quality approach applied to all UHC investigations, the Investigation Department is committed to obtaining an ISO 9001:2015 certification. Methods: The quality division of the investigation department conducted the project according to a methodology, based on internal and external evaluations to determine an inventory and the retro-planning of the certification approach. Results: The scope of the certification involved the investigation department of the University Hospital: 24 clinical investigation units, over 500 medical staff, 150 non-medical staff. In our certification approach, the client was the promoter of the clinical study. The process lasted 18 months and included phases of training and awareness-raising for all the staff involved. The quality management system of the investigation department has been revised and improved in accordance with current regulations. 
The Investigation Department obtained the ISO 9001:2015 certification for all clinical investigations at the Nantes University Hospital in June 2017 without any non-compliance. Conclusion: Our work shows that the investigation department has successfully adapted the ISO 9001:2015 standard to institutional clinical research. Certification of specific clinical investigation activities guarantees an optimized and relevant organization for the safety of volunteers participating in clinical research as well as for the scientific quality of the research, and contributes to the satisfaction of sponsors. Answer: The NIHSS certification process has been scrutinized for potentially being too lenient. One study found that the NIHSS certification test allowed for a wide range of acceptable answers, which introduced scoring variability. For instance, one patient in the certification test received 18 different correct total scores from raters who passed the certification. This suggests that the certification process may not be stringent enough to ensure consistent scoring among certified examiners, which could affect the performance of the test in trials and clinical practice (PUBMED:19295205). However, it is important to note that another study reported that a new digital video disk used for NIHSS training and certification was found to be reliable, with high interrater reliability and similar reliability scores among different specialists and centers (PUBMED:16224093). This indicates that while there may be concerns about leniency in the certification process, efforts have been made to improve the reliability of NIHSS certification.
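The reliability figures behind this answer (item kappas from 0.21 to 0.92 and an intraclass correlation of 0.94) are agreement statistics. As a minimal illustration of how an unweighted Cohen's kappa is obtained for one NIHSS item scored by two raters, the Python sketch below uses invented ratings; it is a simplification, not the multi-rater methodology of the cited study.

from collections import Counter

# Hypothetical item scores given by two raters to the same 12 patients
rater_a = [0, 1, 1, 2, 0, 3, 1, 0, 2, 1, 0, 2]
rater_b = [0, 1, 2, 2, 0, 3, 1, 1, 2, 1, 0, 1]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed agreement

count_a, count_b = Counter(rater_a), Counter(rater_b)
categories = set(rater_a) | set(rater_b)
expected = sum((count_a[c] / n) * (count_b[c] / n) for c in categories)  # chance agreement

kappa = (observed - expected) / (1 - expected)
print(f"observed = {observed:.2f}, chance = {expected:.2f}, kappa = {kappa:.2f}")

Kappa discounts the agreement expected by chance, which is why two raters can agree on most patients yet still produce only moderate kappa values, as seen for several NIHSS items in the cited reliability study.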
Instruction: Early onset preeclampsia and cerebral palsy: a double hit model? Abstracts: abstract_id: PUBMED:26283455 Early onset preeclampsia and cerebral palsy: a double hit model? Background: Cerebral palsy (CP) is a late sequel of pregnancy, and the role of preeclampsia is debatable. Objective: The aims of this study were to determine the association between preeclampsia and cerebral palsy and to determine the risk factors for the development of cerebral palsy in these patients. Study Design: A retrospective population-based cohort study was designed that included 229,192 singleton pregnancies. The study population was divided into 2 groups: (1) patients with preeclampsia (n = 9749) and (2) normotensive gestations (n = 219,443). Generalized Estimating Equation multiple logistic regression models were performed to study the associations among preeclampsia, small for gestational age, gestational age at delivery, and the risk factors for the development of cerebral palsy in neonates of women with preeclampsia. Results: The rate of cerebral palsy was double in patients with preeclampsia than in the normotensive group (0.2% vs 0.1%; P = .015); early onset preeclampsia and small for gestational age were independent risk factors for the subsequent development of cerebral palsy (odds ratio, 8.639 [95% confidence interval, 4.269-17.480]; odds ratio, 2.737 [95% confidence interval, 1.937-3.868], respectively). A second model was conducted to determine the risk factors for the development of cerebral palsy in women with preeclampsia. Birth asphyxia, complications of prematurity, and neonatal infectious morbidity, but not small for gestational age or gestational age at delivery, were independent risk factors for the development of cerebral palsy. Conclusion: In a comparison with normal pregnant women, the rate of cerebral palsy is double among patients with preeclampsia, especially those with early-onset disease. Early-onset preeclampsia is an independent risk factor for cerebral palsy. Among women with preeclampsia, the presence of neonatal infectious morbidity, birth asphyxia, and complications of prematurity are independent risk factors for the development of cerebral palsy, which further supports the role of a multi-hit model in the pathogenesis of this syndrome. abstract_id: PUBMED:35767262 Development and Validation of a Prediction Model for Perinatal Arterial Ischemic Stroke in Term Neonates. Importance: Perinatal arterial ischemic stroke (PAIS) is a focal brain injury in term neonates that is identified postnatally but is presumed to occur near the time of birth. Many pregnancy, delivery, and fetal factors have been associated with PAIS, but early risk detection is lacking; thus, targeted treatment and prevention efforts are currently limited. Objective: To develop and validate a diagnostic risk prediction model that uses common clinical factors to predict the probability of PAIS in a term neonate. Design, Setting, And Participants: In this diagnostic study, a prediction model was developed using multivariable logistic regression with registry-based case data collected between January 2003, and March 2020, from the Alberta Perinatal Stroke Project, Canadian Cerebral Palsy Registry, International Pediatric Stroke Study, and Alberta Pregnancy Outcomes and Nutrition study. Criteria for inclusion were term birth and no underlying medical conditions associated with stroke diagnosis. Records with more than 20% missing data were excluded. Variable selection was based on peer-reviewed literature. 
Data were analyzed in September 2021. Exposures: Clinical pregnancy, delivery, and neonatal factors associated with PAIS as common data elements across the 4 registries. Main Outcomes And Measures: The primary outcome was the discriminative accuracy of the model predicting PAIS, measured by the concordance statistic (C statistic). Results: Of 2571 term neonates in the initial analysis (527 [20%] case and 2044 [80%] control individuals; gestational age range, 37-42 weeks), 1389 (54%) were male, with a greater proportion of males among cases compared with controls (318 [60%] vs 1071 [52%]). The final model was developed using 1924 neonates, including 321 cases (17%) and 1603 controls (83%), and 9 clinical factors associated with risk of PAIS in term neonates: maternal age, tobacco exposure, recreational drug exposure, preeclampsia, chorioamnionitis, intrapartum maternal fever, emergency cesarean delivery, low 5-minute Apgar score, and male sex. The model demonstrated good discrimination between cases and controls (C statistic, 0.73; 95% CI, 0.69-0.76) and good model fit (Hosmer-Lemeshow P = .20). Internal validation techniques yielded similar C statistics (0.73 [95% CI, 0.69-0.77] with bootstrap resampling, 10-fold cross-validated area under the curve, 0.72 [bootstrap bias-corrected 95% CI, 0.69-0.76]), as did a sensitivity analysis using cases and controls from Alberta, Canada, only (C statistic, 0.71; 95% CI, 0.65-0.77). Conclusions And Relevance: The findings suggest that clinical variables can be used to develop and internally validate a model to predict the risk of PAIS in term neonates, with good predictive performance and strong internal validity. Identifying neonates with a high probability of PAIS who could then be screened for early diagnosis and treatment may be associated with reductions in lifelong morbidity for affected individuals and their families. abstract_id: PUBMED:37844515 Neonatal outcomes of early preterm births according to the delivery indications. Objective: To compare the neonatal outcomes of early preterm births according to delivery indications and determine the obstetric risk factors associated with adverse outcomes. Methods: We retrospectively studied pregnancies delivered between 22 + 0 and 26 + 6 weeks at the tertiary center between April 2013 and April 2022. Stillbirths, elective termination of pregnancy, and multifetal pregnancies were excluded. Patients were classified into two groups according to delivery indications: spontaneous preterm birth (sPTB) due to premature rupture of membranes (PROM), preterm labor, or acute cervical insufficiency; and indicated preterm birth (iPTB). Obstetric and neonatal outcomes were compared between the groups. Results: Of the 121 neonates, 73 % (88/121) underwent sPTB. The overall survival rates were 73 % and 49 % in the sPTB and iPTB groups, respectively (p = 0.017). Multivariate logistic regression analysis was performed with adjustment for gestational age at delivery, fetal growth restriction, cesarean section, histological chorioamnionitis, and funisitis. Moreover, in the 1-year follow-up, the proportion of body mass below the third percentile was significantly higher in the iPTB-group than in the sPTB-group (53 % vs. 20 %, p = 0.019). Furthermore, diagnoses of developmental delay and cerebral palsy were slightly higher in the iPTB-group (33 % and 20 %, respectively) than in the sPTB-group (27 % and 9 %, respectively); however, this difference was not statistically significant. 
Conclusions: In early preterm births, iPTB was associated with a higher neonatal mortality than sPTB. abstract_id: PUBMED:2429891 Early child morbidity and late development following primary abdominal cesarean section in breech presentation near term This study concerns the results obtained in respect of early morbidity and late development of 115 and 57 children, respectively, born between 1978 and 1983, who had been delivered by primary low cervical Caesarean section shortly before term. Early morbidity of the 115 children was analysed taking into consideration the risk factors, such as premature rupture, gestation diabetes, EPH gestosis, condition following Caesarean section, abnormal amnioscopic and antepartal cardiotocographic findings, as well as the methods of anaesthesia employed. In the study on late development 57 children between 1 1/4 and 6 years of age were followed up and examined with regard to several faculties (social contact, fine motoricity and adaptation, speech and gross motoricity) according to the Denver Developmental Screening Test. Children with abnormal findings were subjected to special examination. Children with abnormal findings were also subjected to a positional test according to Vojta and to the Munich functional developmental diagnosis after Hellbrüge et al. While employing physiotherapy after Bobath and early rehabilitation training by the parents, these children were followed up at regular intervals. There was no clinically relevant acidosis in the group of 115 newborn. A total of 44 newborn (38%) displayed slight to medium enhanced acidity (pH value, umbilical artery: 7.20 to 7.29) according to the stage classification after Saling and Wulf. Slight to medium acidosis (umbilical artery pH 7.10 to 7.19) was seen in 3 cases only (2.6%). In 112 newborn we found a correlation between the good Apgar score values (7-10) and normal acidity in the umbilical artery blood (act. umbilical artery pH greater than or equal to 7.30). In the remaining 3 newborn with lower Apgar scores (3-6) there was no acidosis in the umbilical artery blood. In the follow-up group (57 cases) we found one child with psychomotor retardation of speech (disturbed articulation and reduced vocabulary) and 6 children with slight motor disturbances in the early developmental stage. These disturbances were recorded as slight central disturbances of coordination according to Vojta within the framework of early diagnosis. Four of these children received early treatment according to Bobath. When they were between 1 and 1 1/2 years of age, all the 4 children showed normalisation of motoricity during the follow-up checks. The other two children displayed spontaneous regression of the mild central disturbance of coordination when they were 5 and 6 months of age.(ABSTRACT TRUNCATED AT 400 WORDS) abstract_id: PUBMED:15957991 Morbidity and development in childhood of infants born after temporising treatment of early onset pre-eclampsia. Objective: To assess morbidity and development in childhood of infants born after temporising management of severe early onset pre-eclampsia. Design: Cohort study with matched controls. Setting: University centre for high risk obstetrics. 
Samples: Three groups of neonates matched for gender and year of birth: one born after temporising treatment of severe early onset (<32 weeks) pre-eclampsia with an average delay of delivery of two weeks (n = 193); one born at the duration of pregnancy (± 1 week) of the pre-eclamptic mother on admission (control group I, n = 192); and one born at the same gestational age (± 1 week) as the infant of the pre-eclamptic mother (control group II, n = 189). Method: Follow up at four years of age or more using medical records and questionnaires. Main Outcome Measures: The presence of various morbidities including mental retardation, cerebral palsy, motor skill problems, visual handicap, hearing loss, speech and language problems, education level and acute or chronic respiratory problems. Results: Median follow up of seven years (range 4-12) was achieved in 159 infants in the study group (83%), 122 in control group I (64%) and 110 in control group II (58%). Missing data analysis showed no differences in neonatal characteristics and morbidity between infants with and without follow up in the study group. All major and minor handicaps were less frequent in the study group than in control group I but statistical significance was reached only for acute and chronic respiratory disorders in the study group (13.8%) compared with control group I (27%). Conclusion: Average delay of delivery of two weeks with temporising management in severe early onset pre-eclampsia is associated with a reduced risk of respiratory disorders in childhood. abstract_id: PUBMED:21490794 Prenatal brain damage in preeclamptic animal model induced by gestational nitric oxide synthase inhibition. Cerebral palsy is a major neonatal handicap with unknown aetiology. There is evidence that prenatal brain injury is the leading cause of CP. Severe placental pathology accounts for a high percentage of cases. Several factors predispose to prenatal brain damage but when and how they act is unclear. The aim of this paper was to determine if hypoxia during pregnancy leads to damage in fetal brain and to evaluate the localization of this injury. An animal model of chronic hypoxia produced by chronic administration of a nitric oxide synthase inhibitor (L-NAME) was used to evaluate apoptotic activity in fetal brains and to localize the most sensitive areas. L-NAME reproduces a preeclamptic-like condition with increased blood pressure, proteinuria, growth restriction and intrauterine mortality. Apoptotic activity was increased in L-NAME brains and the most sensitive areas were the subventricular and pallidum zone. These results may explain the clinical features of CP. Further studies are needed. abstract_id: PUBMED:12044309 Preeclampsia and cerebral palsy in low-birth-weight and preterm infants: implications for the current "ischemic model" of preeclampsia. Objective: One of the prevailing hypotheses for the pathogenesis of preeclampsia is the "ischemic model." It assumes that reduced uteroplacental perfusion is the primary step and the point of convergence of diverse pathogenic processes in the development of preeclampsia. One might expect a fetus under such "ischemic conditions" to be at an increased risk of later development of cerebral palsy (CP). The objective of this study was to test the hypothesis that maternal preeclampsia increases the risk of CP in preterm and low-birth-weight infants.
Methods: A meta-analysis was performed based on published articles identified by searching computerized databases (MEDLINE, EMBASE, CINAHL, Current Contents, Biological Abstracts, and Dissertation Abstracts) from 1966 through 1999. Ten observational studies on the association between preeclampsia and CP were identified based on prespecified inclusion criteria. Two independent reviewers extracted data and assessed the methodological quality of eligible articles. Odds ratios (OR) of CP for preeclampsia from individual studies were pooled. Main Outcome Measure: Cerebral palsy. Results: In case-control studies, preeclampsia was associated with a statistically significant decreased risk of CP [pooled adjusted OR, 0.50; 95% confidence interval (CI), 0.33-0.81; p < 0.01). In cohort studies, preeclampsia was associated with a nonstatistically significant reduced risk of CP (pooled OR, 0.91; 95% CI, 0.35-2.41; p > 0.05). Conclusions: Preeclampsia may be associated with a decreased risk of CP in preterm and low-birth-weight infants. This challenges the currently held belief that reduced uteroplacental perfusion is the unique pathophysiological process in preeclampsia. abstract_id: PUBMED:15140504 Obstetrical complications and subsequent schizophrenia in adolescent and young adult offsprings: is there a relationship? Schizophrenia is a psychiatric disease affecting around 1% of the population, the negative signs of which are correlated with inactivity of the prefrontal dorsolateral cortex, while an increased, more deeply localized, activity in the mesolimbic pathway may explain the positive signs. Several events occurring during pregnancy are likely to be involved in its genesis: hormonal supplementation by diethylstilbestrol, severe maternal denutrition, exposure to influenza virus, repeated psychological stress. From multicentric studies and meta-analyses in the psychiatric literature, the risk of schizophrenia appears to be multiplied by two if pregnancy is complicated, mainly by diabetes, Rhesus incompatibility, bleeding, preeclampsia, premature rupture of membranes and preterm birth. When delivery is linked to an abnormal presentation or happens via a caesarean birth for acute foetal distress, the time when the first signs of psychosis appear seems to be earlier in adolescence or in early adulthood. Cerebral imaging of schizophrenic patients shows ventriculomegaly and gray matter reduction, mainly in hippocampal volumes and in the dorsolateral prefrontal cortex. Similar alterations in the neuronal pathways have been experimentally reproduced in rats after repeated prenatal stress and perinatal hypoxia. A region on the distal portion of chromosome 1 has shown evidence for linkage to schizophrenia. Therefore, a two factor model seems to be able to explain the onset of schizophrenia in which obstetrical complications may interact with a genetic liability and in which the consequences of hypoxic events may lie on a continuum ranging from cerebral palsy in some children to subtle cognitive and behavioural disturbances in others. abstract_id: PUBMED:17466686 Does peripartum infection increase the incidence of cerebral palsy in extremely low birthweight infants? Objectives: This study was undertaken to determine the perinatal predictors of cerebral palsy in extremely low birthweight infants (<1000 g). Study Design: A case control study of infants with birthweight of less than 1000 g (19 with cerebral palsy and 38 controls) who survived beyond 18-22 months of corrected age was performed. 
Outcome variables included maternal demographics, obstetric complications, and neonatal outcome (gestational age at delivery, birthweight, Apgar scores, intrauterine growth restriction, respiratory distress syndrome, intraventricular hemorrhage, and neonatal sepsis). Data analysis consisted of t tests, chi2, and analysis of variance when appropriate. Results: There were no significant differences between cerebral palsy and control groups with regard to mode of delivery, Apgar scores, preeclampsia, antenatal vaginal bleeding, or the use of magnesium sulfate. However, male gender (odds ratio 3.70; 95% CI 1.05-12.5), primigravid status (odds ratio 5.52; 95% CI 1.67-18.3), early neonatal sepsis (odds ratio 12.9; 95% CI 2.94-57.2) and chorioamnionitis, both clinical and histologic (odds ratio 3.71; 95% CI 1.16-11.9) were significantly associated with the development of cerebral palsy. The strong association between cerebral palsy and chorioamnionitis, as well as early neonatal sepsis, remain significant after adjustment for primigravid status and male gender. Conclusion: In extremely low birthweight infants, cerebral palsy was strongly associated with chorioamnionitis, early neonatal sepsis, male gender, and primigravid status. abstract_id: PUBMED:16338186 Epidemiology of cerebral palsy. The term cerebral palsy refers to a range of clinical symptoms, with related service requirements, resulting from lesions or abnormalities in the brain arising early in life. It is not a diagnosis; aetiology and pathology are variable. This article discusses the definition and differential classification of cerebral palsy, describes trends in its frequency over time stratified by associated variables, and briefly reviews the most recent findings concerning its aetiology. Answer: The concept of a "double hit model" in the context of early onset preeclampsia and cerebral palsy (CP) suggests that multiple factors or events contribute to the development of CP, rather than a single cause. The studies provided offer insights into this model by examining the associations between preeclampsia, particularly early onset preeclampsia, and the subsequent development of CP. One study found that the rate of CP was double in patients with preeclampsia compared to the normotensive group, with early onset preeclampsia and being small for gestational age identified as independent risk factors for the development of CP (PUBMED:26283455). This study also identified birth asphyxia, complications of prematurity, and neonatal infectious morbidity as independent risk factors for CP among women with preeclampsia, supporting the multi-hit model in the pathogenesis of CP. Another study developed a prediction model for perinatal arterial ischemic stroke (PAIS) in term neonates, which included preeclampsia as one of the clinical factors associated with the risk of PAIS (PUBMED:35767262). Since PAIS can lead to CP, this further supports the idea that preeclampsia may be part of a multifactorial process leading to CP. Additional research comparing neonatal outcomes of early preterm births according to delivery indications found that indicated preterm birth (iPTB), which often includes cases of preeclampsia, was associated with higher neonatal mortality and slightly higher diagnoses of developmental delay and CP compared to spontaneous preterm birth (sPTB) (PUBMED:37844515). 
Moreover, a study on prenatal brain damage in a preeclamptic animal model induced by gestational nitric oxide synthase inhibition found increased apoptotic activity in fetal brains, particularly in sensitive areas that may explain clinical features of CP (PUBMED:21490794). In summary, the evidence from these studies supports the concept of a double hit model, where early onset preeclampsia may act as one of multiple insults that contribute to the development of CP. Factors such as birth asphyxia, complications of prematurity, neonatal infectious morbidity, and PAIS are also implicated, suggesting that the pathogenesis of CP in the context of preeclampsia is multifactorial and complex.
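As a purely illustrative aside on the statistics reported throughout these abstracts (odds ratios with 95% confidence intervals, such as chorioamnionitis OR 3.71, 95% CI 1.16-11.9 in PUBMED:17466686), the short sketch below shows how an unadjusted odds ratio and a Wald-type confidence interval are computed from a 2x2 table. The counts are hypothetical and do not come from any of the cited studies; the function name is likewise invented for the example.

```python
import math

def odds_ratio_with_ci(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls, z=1.96):
    """Unadjusted odds ratio with a Wald 95% confidence interval from a 2x2 table."""
    a, b, c, d = exposed_cases, exposed_controls, unexposed_cases, unexposed_controls
    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of ln(OR)
    lower = math.exp(math.log(odds_ratio) - z * se_log_or)
    upper = math.exp(math.log(odds_ratio) + z * se_log_or)
    return odds_ratio, (lower, upper)

# Hypothetical counts, chosen only to show the arithmetic
print(odds_ratio_with_ci(exposed_cases=10, exposed_controls=9, unexposed_cases=9, unexposed_controls=29))
```

Adjusted odds ratios such as those pooled in PUBMED:12044309 additionally require regression modelling or meta-analytic weighting, which is beyond this simple sketch.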
Instruction: Treatment of trimalleolar fractures. Is osteosynthesis needed in posterior malleolar fractures measuring less than 25% of the joint surface? Abstracts: abstract_id: PUBMED:14578650 Treatment of trimalleolar fractures. Is osteosynthesis needed in posterior malleolar fractures measuring less than 25% of the joint surface? Objectives: We evaluated the effect of posterior malleolar fractures, which measured less than 25% of the joint surface, on the results of ankle fractures. Methods: The study included 44 patients (21 females, 23 males; mean age 44 years; range 17 to 76 years) who underwent surgical treatment for Weber types B or C ankle fractures. Fibula fractures were associated with deltoid ligament ruptures in 12 patients, and with medial malleolar fractures in 32 patients. Sixteen patients and 28 patients with and without posterior malleolar fractures, respectively, were evaluated as separate groups according to the Phillips' criteria for comparison of clinical, anatomical, and arthritic scores. The mean follow-up was 29.5 months (range 18 to 64 months). Results: There were no significant differences between the two groups with regard to clinical and anatomical scores. Although the mean arthritic score was higher in patients with a posterior malleolar fracture, it did not reach significance (p>0.05). Conclusion: Our data show that satisfactory results can be achieved in posterior malleolar fractures measuring less than 25% of the joint surface when an acceptable reduction is performed even without osteosynthesis. abstract_id: PUBMED:33016122 Outcomes of Posterior Arthroscopic Reduction and Internal Fixation (PARIF) for the Posterior Malleolar Fragment in Trimalleolar Ankle Fractures. Background: Posterior malleolar fractures within a trimalleolar ankle fracture pattern are challenging to manage. Posterior ankle arthroscopy provides a means to assess the intra-articular and syndesmotic reductions, while removing loose bodies. The purpose of this study was to determine the radiographic and patient reported outcomes of posterior arthroscopic reduction and internal fixation (PARIF) in trimalleolar ankle fractures. Methods: From November 2015 to March 2019, we prospectively enrolled consecutive trimalleolar ankle fractures that underwent PARIF by a single surgeon. Preoperative and postoperative computed tomography (CT) scans were interpreted by 2 blinded musculoskeletal radiologists for articular reduction, syndesmosis congruity, and presence of ossific loose bodies. Patient outcomes were assessed using the Foot and Ankle Disability Index (FADI), American Orthopaedic Foot & Ankle Society (AOFAS) ankle score, Olerud-Molander Ankle Score (OMAS), and visual analog scale (VAS). A total of 28 trimalleolar ankle fractures were treated with PARIF. Mean patient age was 36 years (range, 19-69). Results: Preoperative CT identified 18 intra-articular loose bodies (range, 0-4) in 36% of ankles and 75% (n = 21) syndesmosis incongruity. Postoperative CT scans demonstrated anatomic intra-articular reduction in all fractures, 41.7% (5/12) syndesmosis incongruity without fixation, and 0% (0/9) with suture-button fixation. At mean 2-year follow-up, the results were good to excellent with mean VAS score 1 (range, 0-4), AOFAS score 84 (range, 63-100), FADI 85 (range, 59-100), and OMAS 76 (range, 40-100). 
Conclusion: The PARIF technique for displaced posterior malleolar fractures was effective in achieving anatomic intra-articular reduction, syndesmosis congruity, and intra-articular loose body removal, while safely preserving the soft tissues. Level Of Evidence: Level IV, prospective case series. abstract_id: PUBMED:30430238 Persistent postoperative step-off of the posterior malleolus leads to higher incidence of post-traumatic osteoarthritis in trimalleolar fractures. Background: Traditionally, size of the posterior fragment is considered the most important indicator for fixation in trimalleolar fractures. It remains unclear which factors contribute to worse functional and radiological outcome. This study was designed to determine predictors for the development of posttraumatic osteoarthritis and worse functional outcome in trimalleolar fractures. Methods: This retrospective cohort study evaluated outcomes of 169 patients with a trimalleolar fracture treated between 1996 and 2013 in a level-1 trauma hospital in the Netherlands after a mean follow-up of 6.3 (range 2.4 to 15.9) years. The average fragment size was 17%. Twenty patients had a posterior fragment smaller than 5% of the intra-articular surface, 119 patients a fragment of 5-25% and 30 patients a posterior fragment larger than 25%. In total, 39 patients (23%) underwent fixation of the posterior fragment. Results: Clinical union was achieved in all 169 patients. The median AOFAS score after follow-up was 93 (interquartile range 76-100) and the median AAOS score was 92 (interquartile range 81-98). A persistent postoperative step-off larger than 1 mm was found in 65 patients (39%) and osteoarthritis was present in 49 patients (30%). Higher age and postoperative step-off > 1 mm were independent, significant risk factors for the development of osteoarthritis. Osteoarthritis and BMI were independent, significant risk factors for worse functional outcome. Conclusion: It is advisable to correct intra-articular step-off of intraarticular posterior malleolar fragments to reduce the risk of developing osteoarthritis and, consequently, the risk of worse functional outcome after long-term follow-up. Level Of Evidence: Level IIB. abstract_id: PUBMED:26542855 Size matters: The influence of the posterior fragment on patient outcomes in trimalleolar ankle fractures. Introduction: Ankle fractures are increasing in incidence. The more complicated the lesion is, the higher the risk of developing posttraumatic arthrosis. Severe posttraumatic arthrosis results in a reduced quality of life. Therefore, the treatment of a trimalleolar fractures is crucial. However, the treatment guidelines for posterior malleolar fractures (PMF) are still based on recommendations from 1940. Only a few retrospective studies have been conducted, which analysed patient outcomes based on lateral X-rays of the ankle. The purpose of this retrospective analysis was to survey patient outcomes in relation to the size of the PMF on the basis of CT-scans. Methods: We retrospectively examined 42 patients with trimalleolar fractures with an average follow-up of 2.5 years. Twenty-four patients (57%) received a CT scan of the ankle joint. The radiologic images were analysed for the size of the PMF and the involvement of the joint surface using lateral X-rays and available CT images. We examined all 42 patients clinically and radiologically, and estimated the grade of arthrosis of the ankle in accordance with the Bargon Score and assigned AOFAS Scores for each patient. 
We divided our patients into different groups according to the size of their PMF and evaluated patient outcomes against the compiled data, first on the basis of X-ray measurements and then on the basis of CT measurements. Results: Comparison of the two radiologic methods revealed that CT provides a more precise determination of PMF size than lateral X-rays, which generally overestimated fragment size. The statistical evaluation of our data demonstrated that patients with an osteosynthesis of the PMF and a PMF size of >25% showed signs of posttraumatic arthrosis but had better outcomes according to the AOFAS score. None of these results reached statistical significance. Conclusion: An exact evaluation of CT images of posterior malleolar fractures in patients with trimalleolar ankle fractures is crucial for the decision to perform an osteosynthesis of the PMF and, therefore, for any analysis of patient outcomes. The results of previous studies should be evaluated cautiously due to missing CT data. To date, this is the largest retrospective patient series of patient outcomes based on CT data. abstract_id: PUBMED:35995683 Arthroscopic handlebar technique for the treatment of posterior malleolar fractures. Treating posterior malleolar fractures of the ankle remains a challenge. The arthroscopic handlebar technique is our novel surgical method used for reduction and fixation of posterior malleolar fractures and involves the restoration of posterior malleolar fractures under anterior arthroscopic guidance and the use of Kirschner wires that penetrate the fractured posterior malleolus. Arthroscopy enables visualization of the intra-articular fracture of the posterior malleolus, and a handlebar reduction bar is used to control the fractured posterior malleolus. The arthroscopic handlebar technique is a promising procedure for reduction and internal fixation of posterior malleolar fractures. abstract_id: PUBMED:29887634 Posterior Malleolus Fractures in Trimalleolar Ankle Fractures: Malleolus versus Transyndesmal Fixation. Background: In ankle fractures involving the posterior malleolus, the issue of which types of fractures require posterior malleolus fixation is still controversial. Recent studies have demonstrated that trimalleolar fractures adversely affect the functional outcomes in comparison to bimalleolar fractures of the lateral and medial malleolus. The purpose of this study was to assess the effects of posterior malleolus fixation on the functional and radiological outcomes. Materials And Methods: Reduction quality, development of posttraumatic ankle osteoarthritis, and functional outcomes in 49 consecutive trimalleolar ankle fractures were evaluated retrospectively in patients with and without posterior malleolus fixation. Group I consisted of 29 patients in whom the posterior malleolar fracture was left untreated. In the 20 patients of Group II, the posterior malleolar fragment was fixed directly with screws alone or a plate and screws. Twenty-one of these 49 patients were male (43%). The mean age was 47 years (range 20-82 years). Results: The mean follow-up was 15 months (range 12-51 months). Statistically significant differences were found between Group I and Group II in terms of ankle arthrosis. The American Orthopaedic Foot and Ankle Society score was significantly lower in Group I than in Group II.
Conclusions: These results demonstrate that posterior malleolar fracture fixation is closely related to successful radiological and functional outcomes after trimalleolar fractures. Transyndesmal screw fixation may not be needed in cases where the posterior malleolar fracture is fixed. For these reasons, we recommend that all posterior malleolar fractures be fixed regardless of size. abstract_id: PUBMED:36620781 The Management of Posterior Malleolus Fractures in Unstable Ankle Injuries: Where Do We Stand Now? The evaluation and treatment of the posterior malleolus fracture in unstable ankle injuries remain a topic of controversy. The main objective of this systematic review was to examine the available literature and identify the variables that affect the management of posterior malleolar fractures and how these are related to the outcomes. To that end, a systematic review was performed based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A comprehensive search of MEDLINE, Embase, and Cochrane Library was conducted. The search terms used were as follows: "posterior malleolar", "ankle fractures", "trimalleolar fractures", "ORIF", "surgery", "operative", and "conservative". The available studies were screened against the inclusion and exclusion criteria. Based on the review of the available literature, we have concluded that the size of the posterior malleolar fragment is not an accurate indicator, and clinicians should consider other factors such as fracture configuration and articular surface congruity. Also, the risk for the development of post-traumatic arthritis increases when the joint surface is not restored regardless of the surgical intervention and fragment size. The complications of posterior malleolus fractures necessitate evidence-based management. The assessment and the final treatment of these injuries in unstable ankle fractures should not be based on the traditional fragment-size parameters. Clinicians should assess the fracture configuration through imaging modalities and try to preserve the articular surface congruity so as to achieve optimal outcomes. Finally, more studies with high-level evidence are required in order to determine the most appropriate management pathway for these patients. abstract_id: PUBMED:24480500 Impact of trimalleolar ankle fractures: how do patients fare post-operatively? Background: We aim to evaluate the clinical and functional outcome of trimalleolar fractures and the ability of patients to return to sporting activities. Methods: A retrospective review of 31 patients with operatively managed trimalleolar fractures was conducted. Their Olerud and Molander scores and ability to return to sports were analyzed at 1 year postoperatively. Results: Dislocations were more likely to result from trimalleolar fractures (p<0.001). 11 (52.4%) of our patients had residual pain at 1 year. 13 (61.9%) and 10 (47.6%) had persistent ankle stiffness and swelling. Out of 12 patients who were involved in sports pre-operatively, only 4 (33.3%) patients were able to return to sports. 3 (25%) patients were unable to do sports at all. Increasing posterior malleolar fragment size correlates with poorer functional outcome.
abstract_id: PUBMED:33897988 Trimalleolar fracture: The endless posterior malleolus fracture debate, to repair or not to repair? Ankle fracture is one of the most common fractures presenting in the emergency department. The fracture varies from unimalleolar, bimalleolar or trimalleolar. Involvement of the posterior malleolus is common and ranges from small avulsions to large intraarticular fragments causing subluxation of the talus. If left untreated, the resulting step-off, comminution or posterior talar subluxation may lead to osteoarthritis and further disability. To date, no consensus exists regarding the management of posterior malleolus fractures in the set-up of trimalleolar fractures. In this review we provide an overview of the literature on the available treatment options for posterior malleolar fracture in the set-up of trimalleolar fractures. abstract_id: PUBMED:36018372 Variation in posterior fragment fixation in the Netherlands: a nationwide study. Purpose: The treatment of the posterior fragment in trimalleolar fractures differs from hospital to hospital in the Netherlands. A nationwide survey was performed to evaluate the fixation criteria and practice variation. Methods: An online cross-sectional survey amongst (orthopaedic) trauma surgeons was performed in the Netherlands. It consisted of three sections: a general section, a section showing preoperative images of six cases and a section with postoperative images of nine cases. Results: A total of 151 surgeons completed the online survey. 45% of the respondents indicated that they fixated the posterior fragment if smaller than 25% of the intra-articular surface. 48% preferred an open posterior approach to fixate the posterior fragment. There was good consensus in treatment for the two cases with Bartonicek type 4 fractures (operative treatment in 73 and 72% respectively). Little consensus was found for Bartonicek type 2 and 3 fractures (88% opted for operative treatment in one case, but 89% for conservative treatment in the second case). Reoperation was mostly considered in cases with a step-off of more than 1 mm (by 33-38% of the respondents). There was great variation in the choice of treatment if only the size of the posterior fragment was considered. Other fixation criteria such as postoperative step-off or instability after fixation of the lateral and medial malleoli are taken into account. In cases where fixation was needed, a percutaneous approach and an open posterolateral approach were equally preferred. Conclusions: There is much variation in treatment of the posterior malleolar fracture amongst orthopaedic and trauma surgeons in the Netherlands. The percutaneous approach and open posterolateral approach to fixate the posterior malleolar fracture seem to be equally used in the Netherlands. Still, there is no uniformity in treatment of posterior malleolar fracture, especially for Bartonicek 2 and Bartonicek 3 fractures. Reoperation is considered by less than half of the surgeons in case of postoperative persistent step-off of more than 1 mm. Answer: The treatment of trimalleolar fractures, particularly the need for osteosynthesis in posterior malleolar fractures measuring less than 25% of the joint surface, is a subject of debate among orthopedic and trauma surgeons. Some studies suggest that satisfactory results can be achieved without osteosynthesis in posterior malleolar fractures that are less than 25% of the joint surface, provided that an acceptable reduction is performed (PUBMED:14578650). 
However, other research indicates that the fixation of the posterior malleolus is closely related to successful radiological and functional outcomes after trimalleolar fractures, recommending that all posterior malleolar fractures be fixed regardless of size (PUBMED:29887634). Persistent postoperative step-off of the posterior malleolus has been associated with a higher incidence of post-traumatic osteoarthritis, suggesting that intra-articular step-off should be corrected to reduce the risk of developing osteoarthritis and consequently, the risk of worse functional outcomes (PUBMED:30430238). Additionally, the size of the posterior fragment has been shown to influence patient outcomes, with larger fragments correlating with poorer functional outcomes (PUBMED:26542855). Arthroscopic techniques, such as the Posterior Arthroscopic Reduction and Internal Fixation (PARIF) and the arthroscopic handlebar technique, have been developed to achieve anatomic intra-articular reduction and syndesmosis congruity, and they have shown good to excellent outcomes (PUBMED:33016122; PUBMED:35995683). These techniques also allow for the removal of intra-articular loose bodies while preserving soft tissues. A systematic review suggests that the size of the posterior malleolar fragment is not an accurate indicator alone, and clinicians should consider other factors such as fracture configuration and articular surface congruity. The risk for the development of post-traumatic arthritis increases when the joint surface is not restored, regardless of the surgical intervention and fragment size (PUBMED:36620781). In conclusion, while some evidence supports conservative treatment for smaller posterior malleolar fractures, other studies and expert opinions advocate for fixation to ensure better functional outcomes and to prevent post-traumatic osteoarthritis. The decision to perform osteosynthesis should be based on a comprehensive assessment of the fracture, including size, configuration, and the potential for articular incongruity, rather than relying solely on the size of the posterior fragment.
Instruction: Does loss of residual renal function lead to malnutrition in peritoneal dialysis patients? Abstracts: abstract_id: PUBMED:16995342 Does loss of residual renal function lead to malnutrition in peritoneal dialysis patients? Objectives: It is usually believed that loss of residual renal function is associated with anorexia and the development of malnutrition. We conducted a retrospective study in our center to evaluate the effect of declining residual renal function on patients' nutritional status. Methods: All incident uremic patients (n = 46) who began peritoneal dialysis from January 1, 2003 to June 1, 2003 in our center were closely followed for 1 year with focus on maintaining strict volume control with time on dialysis. Patients' residual renal function (RRF) was assessed by the average of the renal urea and creatinine clearances. Those patients who had more than 50% decrease in GFR were selected for the present analysis. Serum albumin (ALB), dietary protein intake (DPI) and subjective global assessment (SGA) were closely followed. Results: There were 16 patients (9 males and 7 females) included in the present analysis, among whom 31.3% were diabetics. Patients' GFR declined significantly (RRF were 4.32 +/- 2.69, 2.99 +/- 2.21 and 1.24 +/- 0.99 ml/min for Months 1, 6 and 12, respectively, p < 0.05), along with a significant decline in urine volume (985.62 +/- 543.29, 698.13 +/- 463.59 and 425.63 +/- 320.52 ml/d for Months 1, 6 and 12, respectively, p < 0.01). Although weekly peritoneal Kt/V did not increase significantly, peritoneal ultrafiltration increased significantly during this period (428.75 +/- 408.96, 534.38 +/- 296.39, 844.38 +/- 440.35 ml for Months 1, 6 and 12, respectively, p < 0.05). Serum ALB increased significantly (32.34 +/- 5.07, 34.74 +/- 4.89 and 36.21 +/- 3.98 g/l for Months 1, 6 and 12, respectively, p < 0.01). DPI also increased significantly. The prevalence of malnutrition (by SGA) decreased from 62.5% at the start of dialysis to 18.8% at the end of this study (p < 0.05). Conclusions: Our study suggests that rapid decline of residual renal function in PD patients does not necessarily lead to decreased dietary protein intake and deteriorated nutritional status. Focus on incremental peritoneal fluid removal along with the decline in residual renal function and, thus, maintaining volume control may be one of the critical reasons for the success. abstract_id: PUBMED:16612329 The importance of residual renal function in dialysis patients. Preserving residual renal function has always been the primary clinical goal for every nephrologist managing patients with chronic kidney disease. There is no reason why this important goal should not extend to patients with stage 5 chronic kidney disease receiving dialysis. Indeed, there is now clear evidence that preserving residual renal function remains important after the commencement of dialysis. Residual renal function contributes significantly to the overall health and well-being of dialysis patients. It not only provides small solute clearance but also plays an important role in maintaining fluid balance, phosphorus control, and removal of middle molecular uremic toxins, and shows strong inverse relationships with valvular calcification and cardiac hypertrophy in dialysis patients. Decline of residual renal function also contributes significantly to anemia, inflammation, and malnutrition in patients on dialysis.
More importantly, the loss of residual renal function, especially in patients on peritoneal dialysis, is a powerful predictor of mortality. In addition, there is increasing evidence that residual renal and peritoneal dialysis clearance cannot be assumed to be equivalent qualitatively, thus indicating the need to preserve residual renal function in patients on dialysis. In this article, we will review evidence that residual renal function is important in dialysis patients (especially peritoneal dialysis) and outline potential strategies that may better preserve residual renal function in dialysis patients. abstract_id: PUBMED:16673862 Residual renal function and nutritional status in patients on continuous ambulatory peritoneal dialysis Introduction: In recent years, an increasing number of patients worldwide with end-stage renal failure caused by various underlying diseases have been treated by renal replacement therapy. NUTRITIONAL STATUS: Malnutrition is often found in patients affected by renal failure; it is caused by reduced intake of nutritional substances due to anorexia and dietary restrictions, hormonal and metabolic disorders, comorbid conditions, and loss of proteins, amino acids, and vitamins during the dialysis procedure itself. Nutritional status significantly affects the outcome of patients on chronic dialysis treatment. Recent epidemiological trials have proved that survival on a chronic continuous ambulatory peritoneal dialysis program depends more on residual renal function (RRF) than on peritoneal clearances of urea and creatinine. Material And Methods: The aim of the study was to analyze the influence of RRF on common biochemical and anthropometric markers of nutrition in 32 patients with end-stage renal failure with various underlying diseases during the first 6 months on continuous ambulatory peritoneal dialysis (CAPD). The mean residual creatinine clearance was 8.3 ml/min and the mean RRF was 16.24 l/week in our patients at the beginning of the chronic peritoneal dialysis treatment. Results And Conclusion: During the follow-up, the RRF slightly decreased, while the nutritional status of patients significantly improved. Gender and age, as well as the underlying disease and peritonitis, did not influence the RRF during the first 6 months of CAPD treatment. We found several positive correlations between RRF and laboratory and anthropometric markers of nutrition during the follow-up, proving the positive influence of RRF on nutritional status of patients on chronic peritoneal dialysis. abstract_id: PUBMED:29510483 Effects of Cholesterol Levels on Mortality in Patients with Long-Term Peritoneal Dialysis Based on Residual Renal Function. The effect of dyslipidemia on peritoneal dialysis (PD) patients based on the presence of residual renal function (RRF; renal creatinine clearance >2 mL/min/1.73 m²) is unknown. Data from the Taiwan Renal Registry Data System between 2005 and 2012 were analyzed to estimate the association between dyslipidemia and mortality in PD patients. Long-term PD patients (n = 8032) were divided into groups with (RRF; n = 2691, 33.5%) and without RRF (non-RRF; n = 5341, 66.5%). The primary outcome was three-year mortality, and multivariate Cox regression was used for survival analysis. After stratifying the total cholesterol (TC) level between the first and third years, the hazard ratio for mortality was estimated. In the non-RRF group, TC < 120 mg/dL was associated with independently increased risk of mortality.
In the RRF group, low TC was not independently correlated with increased mortality, but TC > 285 mg/dL was associated with increased risk. PD patients with higher level of TC (>200 mg/dL) in both first and third years of dialysis had significantly lower risk of mortality. In this nationwide cohort study, PD patients without RRF who had low TC level had the highest mortality, in contrast to those with RRF. Malnutrition in long-term PD patients without RRF is an important issue to be monitored. abstract_id: PUBMED:8978995 Adequacy of peritoneal dialysis. Landmark studies relevant to the adequacy of peritoneal dialysis have recently been published. The Canada/USA peritoneal dialysis study has shown a relationship between clearance indices and patient survival rate, but the study is confounded by residual renal function. Registry data show conflicting results concerning survival rates on peritoneal dialysis and haemodialysis. The complicated pathogenesis of malnutrition in dialysis patients is now becoming further appreciated. abstract_id: PUBMED:19382077 Peritoneal dialysis in cirrhotic patients Renal failure and the need for dialysis worsen the prognosis of patients with combined liver and kidney disease. The choice of an appropriate dialysis technique should improve the life expectancy of these patients. Hypotension, impaired defence against infections, electrolyte and acid-base imbalance, severe protein and caloric malnutrition, hyperammonemia, hyperbilirubinemia, and inadequate response to diuretics present a number of clinical problems in patients with liver insufficiency. Liver failure is therefore considered an important risk factor for any type of dialysis. Theoretically, both hemodialysis and peritoneal dialysis may cause specific problems in these patients. Hemodialysis has an increased cost/benefit ratio in cirrhotic patients. The administration of heparin during dialysis might worsen blood coagulation, ascites is not controlled by hemodialysis, and frequent paracentesis may be necessary. The efficiency of hemodialysis in removing certain toxic substances accumulating in liver failure is still unclear. Peritoneal dialysis does not require anticoagulation, helps maintain residual renal function, allows continuous removal of a fixed amount of ascitic fluid, does not cause acute hemodynamic changes, clears some of the toxic metabolites accumulated by liver failure, and is less expensive. Finally, peritoneal dialysis is associated with continuous absorption of glucose through the mesenteric capillaries into the mesenteric and liver blood flow, thus improving caloric malnutrition. During the first months of peritoneal dialysis, cirrhotic patients lose about 10 g of protein in the peritoneal dialysate but this loss tends to decrease with time. All the available data seem to indicate that in cirrhotic patients on peritoneal dialysis the majority of complications are consequent upon liver disease, which is also the most important cause of death. The outcome of peritoneal dialysis is not affected by cirrhosis and is similar to that observed in noncirrhotic patients. All the evidence reported in the literature seems to indicate that in cirrhotic patients peritoneal dialysis is an adequate treatment of uremia. abstract_id: PUBMED:14650752 Prevention of malnutrition in peritoneal dialysis patients Malnutrition is a frequent and serious problem for patients treated by peritoneal dialysis. Patients' survival depends on their nutritional status at the initiation of the dialysis treatment. 
The main factors in malnutrition are inflammation, an insufficient dialysis dose, peritoneal glucose absorption and protein loss into the dialysate. These patients show a relationship between malnutrition, inflammation and cardiovascular diseases. To prevent malnutrition, it is necessary to reduce inflammation by improving the biocompatibility of dialysis solutions and optimising sodium regulation. The exposure of the peritoneal membrane to both glucose and its degradation products must also be reduced. In order to restrict protein losses, especially when peritoneal hyperpermeability is present, dialysis solutions containing amino acids can be used. Early dialysis treatment and a progressive increase in the dialysis dose as residual renal function declines can also be recommended. abstract_id: PUBMED:17299143 The John F. Maher Award Recipient Lecture 2006. The "heart" of peritoneal dialysis: residual renal function. The CANUSA study originally reported the importance of total small-solute clearance in predicting survival of peritoneal dialysis (PD) patients. However, subsequent reanalysis of data from the CANUSA study clearly demonstrated that the predictive power for mortality in PD patients was largely attributable to residual renal function (RRF) and not to the dose of PD. While this should not lead to the assumption that the dose of PD is unimportant, it does clearly indicate that the contribution of residual renal clearance and PD clearance to the overall survival of PD patients cannot be considered equivalent. In a previous study, we also demonstrated the importance of loss of RRF in predicting a heightened risk of mortality and cardiovascular death in PD patients. In this review, we focus our discussion on the different potential mechanisms that explain the important link between RRF and cardiovascular disease and survival of PD patients. We provide evidence to explain why RRF is so important to patients receiving long-term PD treatment and why it should be regarded as the "heart" of PD. abstract_id: PUBMED:37513579 Phosphate Control in Peritoneal Dialysis Patients: Issues, Solutions, and Open Questions. Hyperphosphatemia is a common complication in advanced chronic kidney disease and contributes to cardiovascular morbidity and mortality. The present narrative review focuses on the management of phosphatemia in uremic patients receiving peritoneal dialysis. These patients frequently develop hyperphosphatemia since the phosphate anion behaves as a middle-size molecule despite its low molecular weight. Accordingly, patient transporter characteristics and peritoneal dialysis modalities and prescriptions remarkably influence serum phosphate control. Given that peritoneal phosphate removal is often insufficient, especially in lower transporters, patients are often prescribed phosphate binders, whose use in peritoneal dialysis is primarily based on clinical trials conducted in hemodialysis because very few studies have been performed solely in peritoneal dialysis populations. A crucial role in phosphate control among peritoneal dialysis patients is played by diet, which must help in reducing phosphorus intake while preventing malnutrition. Moreover, residual renal function, which is preserved in most peritoneal dialysis patients, significantly contributes to maintaining phosphate balance.
The inadequate serum phosphate control observed in many patients on peritoneal dialysis highlights the need for large and well-designed clinical trials including exclusively peritoneal dialysis patients to evaluate the effects of a multiple therapeutic approach on serum phosphate control and on hard clinical outcomes in this high-risk population. abstract_id: PUBMED:35819749 On the importance of the interplay of residual renal function with clinical outcomes in end-stage kidney disease. Chronic kidney disease (CKD) is one of the most important public health concerns of the century, and is associated with high rates of morbidity, mortality and social costs. CKD evolving towards end-stage kidney disease (ESKD) is on the rise resulting in a greater number of patients requiring peritoneal dialysis (PD) and hemodialysis (HD). The aim of this manuscript is to review the current literature on the interplay of residual renal function (RRF) with clinical outcomes in ESKD. The persistence of RRF is one of the most important predictors of decreased morbidity, mortality, and better quality of life in both PD and HD patients. RRF contributes to the well-being of ESKD patients through various mechanisms including higher clearance of solutes, maintenance of fluid balance, removal of uremic toxins and control of electrolytes. Furthermore, RRF has beneficial effects on inflammation, anemia, malnutrition, diabetes mellitus, obesity, changes in the microbiota, and cardiac diseases. Several strategies have been proposed to preserve RRF, such as blockade of the renin-angiotensin-aldosterone system, better blood pressure control, incremental PD and HD. Several clinical trials investigating the issue of preservation of RRF are ongoing. They are needed to broaden our understanding of the interplay of RRF with clinical outcomes in ESKD. Answer: The relationship between the loss of residual renal function (RRF) and malnutrition in peritoneal dialysis patients is complex and not necessarily direct. Some studies suggest that the decline of RRF can contribute to malnutrition in patients on dialysis (PUBMED:16612329), as RRF plays a role in maintaining fluid balance, phosphorus control, and removal of middle molecular uremic toxins, which are factors that can influence nutritional status. Additionally, the loss of RRF is associated with anemia, inflammation, and malnutrition (PUBMED:16612329). However, other studies have shown that the decline in RRF does not necessarily lead to deteriorated nutritional status. For instance, a retrospective study found that despite a significant decline in patients' glomerular filtration rate (GFR) and urine volume, serum albumin and dietary protein intake actually increased, and the prevalence of malnutrition decreased over a one-year period (PUBMED:16995342). This suggests that with careful management, including maintaining volume control and focusing on incremental peritoneal fluid removal, malnutrition can be avoided even as RRF declines. Another study found that while RRF slightly decreased, the nutritional status of patients on continuous ambulatory peritoneal dialysis (CAPD) significantly improved, with positive correlations between RRF and nutritional markers (PUBMED:16673862). This indicates that RRF can have a positive influence on nutritional status, but its decline does not inevitably lead to malnutrition. 
In summary, while the loss of RRF can be associated with factors that contribute to malnutrition, careful management and treatment strategies can mitigate these effects and even improve the nutritional status of peritoneal dialysis patients despite a decline in RRF (PUBMED:16995342, PUBMED:16673862).
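As background to the first abstract above (PUBMED:16995342), which assesses residual renal function as the average of the renal urea and creatinine clearances from a timed urine collection, the following minimal sketch illustrates that kind of calculation using the standard clearance relation C = (U x V) / P. The numbers, units and variable names are hypothetical assumptions for demonstration only and are not taken from the study.

```python
def clearance_ml_per_min(urine_conc, plasma_conc, urine_volume_ml, collection_minutes=1440):
    """Solute clearance C = (U * V) / P, with V expressed as urine flow in ml/min."""
    urine_flow_ml_per_min = urine_volume_ml / collection_minutes
    return (urine_conc / plasma_conc) * urine_flow_ml_per_min

# Hypothetical 24-hour collection; urine and plasma concentrations share the same
# units for each solute, so the units cancel in the U/P ratio.
urea_clearance = clearance_ml_per_min(urine_conc=350.0, plasma_conc=60.0, urine_volume_ml=900.0)
creatinine_clearance = clearance_ml_per_min(urine_conc=60.0, plasma_conc=8.0, urine_volume_ml=900.0)

residual_renal_function = (urea_clearance + creatinine_clearance) / 2  # ml/min
print(round(residual_renal_function, 2))  # roughly 4.2 ml/min with these made-up values
```

Averaging the urea and creatinine clearances is commonly used because urea clearance tends to underestimate, and creatinine clearance to overestimate, the true glomerular filtration rate in advanced kidney failure.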
Instruction: Is insulin action a determinant of left ventricular relaxation in uncomplicated essential hypertension? Abstracts: abstract_id: PUBMED:9222942 Is insulin action a determinant of left ventricular relaxation in uncomplicated essential hypertension? Objective: To examine the relation of insulin action and left ventricular diastolic function in uncomplicated essential hypertension. Methods: Doppler echocardiography and glucose clamping combined with indirect calorimetry were performed in 29, newly diagnosed, hypertensive men, free from cardiac and metabolic drugs. They were divided into two groups according to the clamp-derived whole-body glucose disposal level: 20 with insulin resistance (whole-body glucose disposal < 33 mumol/kg per min) and nine with normal insulin sensitivity. Results: The two groups were comparable in age, body mass index, heart rate and blood pressure. No difference in diastolic function was found except for the isovolumic relaxation time, which was prolonged for patients with insulin resistance (P = 0.02). For the population as a whole, the relaxation time had univariate relations with the left ventricular mass index (r = 0.57, P < 0.001), whole-body glucose disposal (r = -0.56, P < 0.001) and non-oxidative glucose metabolism (r = -0.54, P = 0.002). In a multivariate model including age, body mass index, heart rate, diastolic blood pressure, left ventricular mass index and whole-body glucose disposal as potential determinants, only the left ventricular mass index (beta = 0.39, P = 0.02) and whole-body glucose disposal (beta = -0.38, P = 0.03) were independent predictors of the relaxation time (R2 = 0.43, P < 0.001). Conclusions: In uncomplicated essential hypertension the insulin resistance is a determinant of abnormalities in isovolumic relaxation, independently from the influence exerted by increased blood pressure levels, being overweight and left ventricular hypertrophy. abstract_id: PUBMED:11229109 A study of the influence of insulin resistance on left ventricular mass in hypertensive patients. Objectives: To study the influence of insulin resistance on the left ventricular mass in hypertensive subjects. Material And Methods: Thirty patients having uncomplicated essential hypertension were included in the study. Post-oral glucose load serum insulin level (2 hrs) was determined and this was used as a marker for insulin resistance. Two D-echocardiography was performed and left ventricular mass and left ventricular mass index were calculated. Result: Out of the 30 patients 18 were males and 12 were females. Eight were obese while the remaining 22 were non-obese. The patients were 27 to 70 years old. The mean age, height and weight were 54.83 +/- 9.46 years, 159.07 +/- 8.81 cm and 58.38 +/- 11.03 kg, respectively. The post oral glucose load serum insulin levels in the study ranged from 57.65 to 210.81 microU/ml. The left ventricular mass and left ventricular mass index ranged from 42.58 to 310.8 g (mean 196.60 +/- 65.13 g) and 42.74 to 185.59 g/m2 (mean 118.71 +/- 37.75 g/m2), respectively. The correlation coefficient ('r' value) between post oral glucose load serum insulin levels and left ventricular mass and left ventricular mass index were calculated. Conclusion: A strong positive correlation was observed between the post oral glucose load serum insulin levels and left ventricular mass ('r' = +0.750). A strong positive correlation between post oral glucose load serum insulin levels and left ventricular mass index ('r' = +0.757) was also observed. 
These correlations were found to be statistically highly significant (p < 0.01). This association was demonstrated independently of age, anthropometric measurements, systolic and diastolic blood pressure levels. Thus, in patients with essential hypertension, hyperinsulinemia (insulin resistance) has a role in promoting left ventricular hypertrophy. abstract_id: PUBMED:7702807 Glucose intolerance exaggerates left ventricular hypertrophy and dysfunction in essential hypertension. The influence of glucose intolerance, the preclinical stage of diabetes mellitus, on the progression of left ventricular hypertrophy and left ventricular dysfunction in essential hypertension, was assessed with two-dimensional M-mode echocardiography in age- and sex-matched essential hypertensive patients with (n = 28) or without (n = 44) glucose intolerance, and normotensive control subjects (n = 29). Left ventricular mass index in hypertensive patients with glucose intolerance was significantly higher than that in hypertensive patients without glucose intolerance (mean +/- SD, 115.6 +/- 28.2 v 102.1 +/- 22.1 g/m2; P < .05). Left ventricular diastolic function as reflected by peak lengthening rate was reduced in glucose-intolerant hypertensive patients than in hypertensive patients without glucose intolerance (2.68 +/- 0.71 v 3.16 +/- 0.82/sec; P < .05). End-systolic wall stress/left ventricular end-systolic volume index, an index of left ventricular contractility, was reduced more in glucose-intolerant hypertensive patients than in hypertensive patients without glucose intolerance (2.75 +/- 0.55 v 3.13 +/- 0.55 10(3) dyn.m2/cm2.mL-1; P < .01). These findings suggest that glucose intolerance accelerates progression of left ventricular hypertrophy and deteriorates left ventricular diastolic function and contractility in essential hypertension. abstract_id: PUBMED:18423813 Kidney function and risk factors for left ventricular hypertrophy in untreated uncomplicated essential hypertension. Background: Left ventricular (LV) hypertrophy and decreased kidney function are well-established cardiovascular risk factors in hypertensive patients. Study Design: We investigated the relationship between creatinine level, creatinine clearance, and estimated glomerular filtration rate (eGFR) with LV mass (LVM) in a cross-sectional study. Predictors: eGFR and serum creatinine level. Outcome: LVM index (LVMI). Setting & Participants: 400 patients with untreated uncomplicated essential hypertension. Measurements: LVMI, eGFR (Modification of Diet in Renal Disease Study equation), Framingham risk factors, and a series of specific risk factors, ie, endothelial function (acetylcholine [ACh]-stimulated forearm blood flow [FBF]), insulin sensitivity (Homeostatic Model Assessment for insulin resistance [HOMA-R] index), C-reactive protein (CRP), and uric acid. Results: Both eGFR and creatinine level were significantly related to LVMI (r = -0.34 and r = 0.35; P < 0.001). In a multiple regression model adjusting for Framingham risk factors, eGFR was independently associated with LVMI. However, this association, although highly significant, lost substantial strength after adjustment for such specific risk factors as HOMA-R index, ACh-stimulated FBF, CRP level, and uric acid level. eGFR interacted with insulin resistance in explaining the variability in LVMI (P = 0.007). Limitations: The cross-sectional nature of this study precludes cause-effect conclusions. 
Conclusions: Independently of other risk factors, decreased kidney function contributes to explain the variability in LVMI in patients with untreated uncomplicated essential hypertension. This association is attributable in part to the link between eGFR and such specific risk factors as HOMA-R index, ACh-stimulated FBF, CRP level, and uric acid level. Decreased kidney function and insulin resistance interact in explaining the variability in LVMI in these patients. abstract_id: PUBMED:10466471 Effect of insulin resistance on left ventricular hypertrophy and dysfunction in essential hypertension. Background: In hypertensive patients, the relationships between glucose tolerance and left ventricular hypertrophy (LVH) and left ventricular diastolic function (LVDF) have been described in several reports. Objective: In this study, we examined the relationships between insulin resistance and LVH and LVDF in hypertensive patients from the therapeutic perspective. Methods And Results: The study participants were essential hypertensive patients with impaired glucose tolerance (IGT-HT, n = 26), hypertensive patients with normal glucose tolerance (NGT-HT, n = 39), and normotensive control individuals (n = 18). Insulin resistance was evaluated by the insulin suppression test by use of the steady-state plasma glucose (SSPG) level. Left ventricular mass index (LVMI) and LVDF, which was determined by the E:A ratio, were estimated by echocardiography. Temocapril, an angiotensin-converting enzyme inhibitor, was administered in an open, non-randomized manner with a mean dose of 2.8+/-0.2 mg/ day, and the mean administration period was 18 weeks. The systolic and diastolic blood pressure, the LVMI, and the SSPG level were significantly higher in the hypertensive patients than in the control individuals. The mean systolic and diastolic blood pressures were significantly decreased by treatment with Temocapril. Before treatment, stepwise regression analysis showed that SSPG is an independent predictor for LVMI and LVDF. After treatment, the changes in LVMI (D-LVMI; %) (-15.1+/-1.5), the changes in LVDF (D-E:A; %) (-38.2+/-4.1), and the changes in insulin resistance (D-SSPG; %) (-13.7+/-1.7) were significantly higher in the IGT-HT group than in the NGT-HT group (-11.4+/-1.1, -18.1+/-1.7, -9.4+/-1.4, respectively), and the D-SSPG was an independent predictor for D-LVMI and D-E :A. Conclusions: The results of this study indicate that insulin resistance is an important factor affecting LVH and LVDF. abstract_id: PUBMED:9578353 Are insulin metabolism and night-time blood pressure related to left ventricular hypertrophy? Essential hypertensives in whom blood pressure does not fall during sleep (non-dippers) are thought to be at greater risk of cardiovascular morbidity. Insulin resistance is also suggested to be a risk factor for cardiovascular morbidity. The purpose of the present study was to evaluate the relationship of insulin metabolism to left ventricular hypertrophy in dippers and non-dippers. Thirty male, non-diabetic out-patients with newly diagnosed arterial hypertension were included in the study: 21 dippers (mean age 45+/-13 years; body mass index 28.2+/-4.0 kg/m2) and nine non-dippers (mean age 48+/-10 years, body mass index 28.6+/-3.9 kg/m2). Patients were subdivided into dippers and non-dippers on the basis of 24-h ambulatory blood pressure monitoring. Insulin and glucose responses to an oral glucose load have been evaluated. C-peptide levels were determined. 
Left ventricular mass was assessed by echocardiography. Non-dippers had significantly higher mean night-time systolic (non-dippers: 148+/-9; dippers: 123+/-16 mmHg; P<0.001), diastolic blood pressure (non-dippers: 90+/-8; dippers: 77+/-8 mmHg; P<0.001) and non-significantly higher left ventricular mass (279+/-92 g) and left ventricular mass index (135+/-46 g/m2). No significant difference was found between C-peptide, insulin, glucose levels and incremental areas between the two groups. Night-time blood pressure, insulin, C-peptide and glucose did not correlate with left ventricular mass in non-dippers. Dippers showed a positive correlation between fasting C-peptide and left ventricular mass (r=0.48, P=0.02) and between glucose and left ventricular mass (r=0.42, P=0.05). Our data indicate that night-time blood pressure and insulin are not related to left ventricular hypertrophy in patients with essential hypertension. abstract_id: PUBMED:1482235 Left systolic ventricular function and metabolic disorders in untreated hypertensive patients Previous studies have shown that essential hypertension is frequently associated with insulin resistance and hyperinsulinism. Because insulin may exert a direct positive inotropic as well as chronotropic effect and controlled the initiation of peptide chains in the heart, we tested the hypothesis that insulin may be a determinant of myocardial hypertrophy and contractility. The relation between glucose metabolism (assessed by the oral glucose tolerance test) and left ventricular (LV) mass and function (assessed by echocardiography) was explored in 47 never-treated lean essential hypertensive patients (EH) of short duration and 19 normotensive subjects (NT). A greater number of EH versus NT (23 vs 5%) had an abnormal glucose tolerance. The fasting insulin-to-glucose ratio was significantly higher in EH as compared to NT. Fasting as well as integrated serum insulin to glucose values ratio were positively correlated with heart rate (r = 0.35, p < 0.05, r = 0.38, p < 0.05) and the LV end-systolic stress to volume ratio (r = 0.48, p < 0.001, r = 0.54, p < 0.001) but not with LV mass (r = 0.02, r = 0.02) in EH. When EH were divided into those with normal (n = 36) and supernormal (n = 11) LV contractility based on the relationship between LV fractional shortening and LV end-systolic stress, integrated insulin level and fasting insulin to glucose ratio were markedly higher in patients with supernormal LV contractility, whereas arterial pressure, heart rate, urinary sodium excretion, and plasma renin activity were similar in the two groups. We concluded that hyperinsulinemia and LV hypercontractility are associated in patients with hypertension of short duration. If chronic hyperinsulinemia is to be causally related to hypertension, one would have to postulate that the effects (inotropism and chronotropism) of insulin on the heart can be dissociated from the resistance to the glucose-lowering action of insulin. abstract_id: PUBMED:15887476 Relation of insulin resistance and left ventricular function and structure in non-diabetic patients with essential hypertension. Objective: Both left ventricular hypertrophy and insulin resistance (IR) have often been demonstrated in patients with essential hypertension (EH). Insulin may exert a direct growth-promoting effect on cardiomyocytes. The purpose of this study was to examine the relationship between left ventricular structure, function and IR in patients with EH. 
Methods: We enrolled 73 patients (21 men, mean age 51.7 +/- 9.2 years) with untreated hypertension (BP > 140 and/or 90 mm Hg, fasting glycaemia < 110 mg/dl); 64 healthy subjects without diabetes mellitus or hypertension (21 men, mean age 48.9 +/- 10.6 years) constituted the control group. In all subjects, transthoracic echocardiography was performed and blood samples were taken. Homeostasis model assessment (HOMA) was calculated by the formula: HOMA-index = fasting blood glucose (mg/dl) * immunoreactive insulin (microU/ml)/405 for the assessment of IR. Hypertensive patients were divided into two groups by the mean HOMA index value. Each subject was examined for LV end-diastolic diameter, septal and posterior wall thickness, LV mass index (LVMI), fractional shortening (FS), mitral inflow velocity pattern, atrial filling fraction (AFF), left ventricular outflow velocity pattern and the total ejection isovolume index (TEI index). Results: The HOMA index (p < 0.001), LVMI (p < 0.001), AFF (p < 0.0001), peak A velocity (p < 0.028), septal (p < 0.0001) and posterior (p < 0.0001) wall thickness were significantly higher and FS (p < 0.001) and E/A ratio (p < 0.0001) were significantly lower in hypertensive patients than in healthy controls. LVMI (p < 0.01) and septal wall thickness (p < 0.001) were significantly higher in those hypertensive patients with a higher HOMA index. The HOMA-index was univariately related to the TEI index (r = 0.27, p = 0.01) and septal wall thickness (IVS) (r = 0.29, p = 0.01) by Pearson correlation analysis in hypertensive patients. LVMI, FS and mitral inflow velocity pattern were not related to the HOMA index. The TEI index (R2 = 0.20, p = 0.0001) and IVS (R2 = 0.12, p = 0.002) were significantly related to the HOMA-index as an independent variable by stepwise regression analysis. Conclusions: These results demonstrated that hypertensive patients had both abnormal cardiac structure and function and a higher IR index. In our study group, the effect of hypertension on cardiac structure and function was correlated with IR. Our results suggested that IR might be an important factor causing left ventricular dysfunction and increased wall thickness in non-diabetic patients with EH. abstract_id: PUBMED:27672023 Moderate Alcohol Consumption Is Associated With Left Ventricular Diastolic Dysfunction in Nonalcoholic Hypertensive Patients. Ethanol consumption is associated with left ventricular dysfunction in heavy ethanol drinkers. The effect of moderate ethanol intake on left ventricular function in hypertension, however, is unknown. We investigated the relationship between ethanol consumption and cardiac changes in nonalcoholic hypertensive patients. In 335 patients with primary hypertension, we assessed daily ethanol consumption by questionnaires that combined evaluation of recent and lifetime ethanol exposure and examined cardiac structure and function by echocardiography. Patients with abnormal liver tests, previous cardiovascular events, left ventricular ejection fraction <50%, and creatinine clearance <30 mL/min/1.73 m2 were excluded. Left ventricular hypertrophy was found in 21% of hypertensive patients and diastolic dysfunction was detected in 50% by tissue-Doppler imaging. Ethanol consumption was comparable in hypertensive patients with and without left ventricular hypertrophy, whereas patients with left ventricular diastolic dysfunction had significantly greater consumption than patients with normal ventricular filling.
Left atrial diameter, e' wave velocity, e'/a' ratio, and E/e' ratio changed progressively with increasing levels of ethanol consumption, and the prevalence of left ventricular diastolic dysfunction increased, the change becoming statistically significant in patients consuming 20 g/d of ethanol or more. The e' wave velocity was inversely correlated with ethanol consumption, and multivariate logistic regression indicated that ethanol consumption predicted diastolic dysfunction independently of age, body mass index, blood pressure, insulin sensitivity, and left ventricular mass index. In conclusion, ethanol consumption is independently associated with left ventricular diastolic dysfunction in nonalcoholic hypertensive patients and might contribute to development of diastolic heart failure. abstract_id: PUBMED:17385461 Left ventricular remodeling in association of arterial hypertension with diabetes mellitus type 2: correlation with gender and disease duration. Aim: To study specific features of left ventricular remodeling (LVR) in patients with hypertension and diabetes mellitus type 2 (DM-2). Material And Methods: The study group comprised 290 patients; the control group consisted of 79 patients with essential hypertension free of carbohydrate tolerance impairment. The age of the examinees ranged from 35 to 60 years. The structural-geometric left ventricular model was determined by echocardiography. Estimation of significant determinants of myocardial mass index (MMI) and types of left ventricular remodeling was made with multiple regression analysis and logistic regression. The parameters of 24-h blood pressure monitoring, glycemic control, blood lipid-transport function, plasma insulin, aldosterone and hydrocortisone levels, and blood renin activity were used as independent variables. The incidence of left ventricular eccentric hypertrophy (LVEH) in the study group was higher than in the control group (27.6% and 16.5%, respectively; p < 0.05). A rise in left ventricular MMI in the study group was related to 24-h systolic arterial pressure and basal insulinemia (R2 = 0.4229). Development of the left ventricular structural-geometric model in the study group depended on gender, duration of diabetes, 24-h systolic blood pressure (direct correlation) and 24-h diastolic blood pressure (inverse correlation), R2 = 0.6500. When DM duration was about 5 years, the percentages of males with concentric left ventricular hypertrophy (CLVH) and LVEH were 63% and 37%, respectively (chi-square 5.0815, p < 0.03). With longer-standing diabetes, LVEH was seen more frequently than CLVH (73.3% and 26.7%, respectively). Among women with left ventricular hypertrophy and hypertension of less than 10 years' duration, LVEH and CLVH were detected in 69.1% and 30.9% of cases, respectively (chi-square 7.9356, p < 0.01). Later, these differences became less obvious (41.7% and 58.3%, respectively). Conclusion: Hypertensive patients with diabetes mellitus type 2 develop structural-geometric changes of the heart earlier. LVR in such patients is associated with gender, duration of DM and a disproportionate rise in 24-h systolic blood pressure, leading to gender-related differences in the time of formation of LVEH and CLVH. Answer: Yes, insulin action appears to be a determinant of left ventricular relaxation in uncomplicated essential hypertension.
A study examining the relation of insulin action and left ventricular diastolic function in newly diagnosed hypertensive men found that those with insulin resistance had a prolonged isovolumic relaxation time compared to those with normal insulin sensitivity. Insulin resistance was an independent predictor of abnormalities in isovolumic relaxation, regardless of the influence of increased blood pressure levels, being overweight, and left ventricular hypertrophy (PUBMED:9222942). Additionally, other studies have shown a strong positive correlation between post oral glucose load serum insulin levels and left ventricular mass and left ventricular mass index, suggesting that hyperinsulinemia (insulin resistance) plays a role in promoting left ventricular hypertrophy in patients with essential hypertension (PUBMED:11229109). Furthermore, glucose intolerance, which is a preclinical stage of diabetes mellitus and often associated with insulin resistance, has been found to exacerbate left ventricular hypertrophy and dysfunction in essential hypertension (PUBMED:7702807). Moreover, decreased kidney function, which is related to left ventricular mass index (LVMI), has been shown to interact with insulin resistance in explaining the variability in LVMI in patients with untreated uncomplicated essential hypertension (PUBMED:18423813). Insulin resistance has also been identified as an important factor affecting left ventricular hypertrophy (LVH) and left ventricular diastolic function (LVDF) in hypertensive patients (PUBMED:10466471). In summary, multiple studies indicate that insulin resistance is associated with changes in left ventricular structure and function, including relaxation, in patients with uncomplicated essential hypertension.
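The HOMA formula quoted in the PUBMED:15887476 abstract above is simple arithmetic, and it can be useful to see it worked through. The following minimal Python sketch applies that formula to hypothetical fasting values (the patient numbers are invented for illustration and are not data from any cited study) and then splits the toy cohort at the mean HOMA value, mirroring how that study grouped its hypertensive patients.

```python
# Illustrative only: the HOMA-IR formula as quoted in the PUBMED:15887476 abstract,
# applied to hypothetical fasting values (not data from any of the cited studies).

def homa_ir(fasting_glucose_mg_dl: float, fasting_insulin_uU_ml: float) -> float:
    """Homeostasis model assessment of insulin resistance (HOMA-IR).

    Uses the mg/dl form quoted in the abstract:
        HOMA-IR = glucose [mg/dl] * insulin [microU/ml] / 405
    """
    return fasting_glucose_mg_dl * fasting_insulin_uU_ml / 405.0


if __name__ == "__main__":
    # Hypothetical patients: (fasting glucose in mg/dl, fasting insulin in microU/ml)
    patients = {
        "A": (88.0, 6.0),
        "B": (102.0, 14.0),
        "C": (96.0, 22.0),
    }
    scores = {pid: homa_ir(glucose, insulin) for pid, (glucose, insulin) in patients.items()}
    for pid, score in scores.items():
        print(f"patient {pid}: HOMA-IR = {score:.2f}")

    # The cited study dichotomised its hypertensive cohort at the mean HOMA value;
    # the same split is sketched here on the toy data.
    mean_homa = sum(scores.values()) / len(scores)
    higher_ir = [pid for pid, score in scores.items() if score > mean_homa]
    print(f"mean HOMA-IR = {mean_homa:.2f}; above-mean (higher-IR) group: {higher_ir}")
```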
Instruction: Can emergency department physicians safely and effectively initiate thrombolysis for acute ischemic stroke? Abstracts: abstract_id: PUBMED:11134376 Can emergency department physicians safely and effectively initiate thrombolysis for acute ischemic stroke? Objective: To review the clinical outcomes of stroke patients treated with IV tissue plasminogen activator (tPA; alteplase) in a community setting and to compare outcomes when treatment was initiated by a neurologist or an emergency department (ED) physician in telephone consultation with a neurologist and radiologist. Methods: Clinical information was prospectively collected for 43 stroke patients treated with IV tPA (alteplase) within a five-hospital network of affiliated community hospitals. Blinded 3-month outcomes were obtained with telephone interview or patient visit. Results: Excellent functional recovery measured by a Modified Rankin score of 0 to 1 (42%), symptomatic intracerebral hemorrhages (7%), and mortality (16.3%) were similar to those reported by National Institute of Neurological Disorders and Stroke (39%, 7.7%, 17.3%). After initial screening by an ED physician, 20 patients were directly examined by a stroke neurologist who then prescribed tPA. Twenty-three patients received tPA prescribed by an ED physician after telephone consultation with a neurologist and review of the head CT by a radiologist. Functional outcome, symptomatic intracerebral bleeding rate, and mortality rate were similar between these groups. Door-to-needle time was similar. Protocol deviations were much higher when ED physicians prescribed the tPA compared to when neurologists did (30% versus 5%). These protocol deviations were reduced with staff education. Conclusions: The clinical results of the National Institute of Neurological Disorders and Stroke tPA Stroke Trial were replicated in this small series of patients treated in a community setting. Outcomes were similar whether the prescribing physician was a neurologist or an ED physician. abstract_id: PUBMED:36963177 Stroke thrombolysis given by emergency physicians: The time is here. Emergency Medicine core clinical privileges include administration of thrombolytic therapy for myocardial infarction and stroke. However, emergency medicine providers have created their own paradigm to rely on neurology specialty consultation to treat acute stroke patients. A 2013 study supported by the American Academy of Neurology showed an 11% shortage of neurologists at that time and projected a 19% shortage by 2025. The lack of neurologists is a worldwide problem. To help ease the shortage, teleneurology and telestroke care has been implemented by neurologists, most notably for acute ischemic stroke (AIS) patients. We present a model in which an Emergency Medicine (EM) Stroke Champion (SC); an EM physician within our Neurologic Emergency Department, directs care for acute stroke patients at our comprehensive hub and primary stroke center spoke hospitals. The SCs independently treat patients with fibrinolytic therapy and provide teleneurology to fellow emergency physicians caring for acute stroke patients at spoke hospitals. Over nineteen months the SCs received 457 phone calls for patients meeting stroke alert criteria. Sixteen patients were deemed eligible for IV alteplase with one hemorrhagic conversion (6.25%). Sixty-four patients required transfer, and this was facilitated by the SCs. 
The concept of emergency physician SCs providing teleneurology consultation to other fellow emergency physicians was found to be feasible and safe. This model has the potential to be generalized not to just stroke champions, but to all emergency physicians to feel both confident and comfortable treating acute stroke patients. abstract_id: PUBMED:18660390 Thrombolysis for acute ischaemic stroke: a new challenge for emergency medicine. Introduction: Acute ischaemic stroke (AIS) is a leading cause of death and disability within the United Kingdom. Despite evidence of the benefit of thrombolysis for appropriately selected patients with AIS, this intervention remains markedly underutilised in this country when compared with other developed countries. The delivery of thrombolysis for AIS has become a political, as well as a clinical, priority in the United Kingdom. Discussion: Research has shown that, although thrombolysis for AIS is associated with increased short-term mortality, this is offset by a significant benefit in terms of reduced long-term death and disability. Recent observational data have shown that it can be safely and effectively delivered in the "normal" clinical setting (ie, a non-research environment). Furthermore, thrombolysis for AIS is supported by the Royal College of Physicians and the National Institute for Health and Clinical Excellence. Emergency physicians are trained to receive and assess patients with possible stroke. The emergency department (ED) is an ideal location in which to perform these clinical duties and to communicate and coordinate the necessary tasks required for the delivery of thrombolysis. All of the skills and resources are already available within the ED, with the exception of a single training requirement: certification in the National Institute for Health Stroke Scale scoring system, which can be acquired following limited Internet-based training. Results: Emergency physicians should be integrally involved in the development of protocols for the delivery of thrombolysis to patients with AIS. This will require communication and collaboration locally with stroke physicians and radiologists, a process that should be facilitated by the newly emerging Stroke Networks. abstract_id: PUBMED:27678127 The influence of emergency department crowding on the efficiency of care for acute stroke patients. Objective: To investigate the impact of emergency department (ED) crowding (number of ED patients) and number of ED staff on the efficiency of the ED care process for acute stroke patients. Design: Retrospective cohort study conducted from 1 May 2008 to 31 December 2013. Setting: Largest primary stroke center (3000-bed tertiary academic hospital) in southern Taiwan. Participants: Patients aged 18-80 years presenting to the ED with acute stroke symptoms ≤3 h from symptom onset (n = 1142). Main Outcome Measures: Door-to-assessment time (DTA), door-to-computed tomography completion time (DTCT) and door-to-needle time (DTN). Results: Of the 785 patients with ischemic stroke, 90 (11.46%) received thrombolysis. In the multivariate regression analysis, the number of ED patients and the number of attending physicians were significantly associated with delayed DTA and DTCT but not DTN. Initial assessment by a resident was also associated with delayed DTA and DTCT. The number of nurses was associated with delayed DTCT and DTN. Conclusions: Although ED crowding was not associated with delayed DTN, it predicted delayed DTA and DTCT in thrombolysis-eligible stroke patients. 
The number of attending physicians affected initial assessment and DTCTs, whereas the number of nurses impacted thrombolytic administration times. abstract_id: PUBMED:23818807 Views of emergency physicians on thrombolysis for acute ischemic stroke. Background: The 3-hour window for treating stroke with intravenous tissue plasminogen activator (t-PA) requires well-organized, integrated efforts by emergency physicians and stroke neurologists. Objective: To evaluate attitudes and knowledge of emergency physicians about intravenous t-PA for acute ischemic stroke, particularly in primary stroke centers (PSCs) with stroke neurology teams. Methods: A 15-question pilot Internet survey administered by the Arizona College of Emergency Physicians. Results: Between March and August 2005, 100 emergency physicians responded: 71 in Arizona and 29 in Missouri. Forty-eight percent practiced at PSCs; 48% thought t-PA was effective, 20% did not, and 32% were uncertain. PSC or non-PSC location of practice did not influence endorsement (odds ratio, 0.96; 95% confidence interval, 0.27-1.64). Of those opposing t-PA, 87% cited risk of hemorrhage. Conclusions: Most emergency physicians did not endorse t-PA. Improved collaboration between emergency physicians and stroke neurologists is needed. abstract_id: PUBMED:29172357 The Role of Fibrinolytic Therapy in the Emergency Department. Emergency physicians are many times the first-line practitioners to encounter circulatory emergencies. Amongst these many diseases we face, of importance are acute myocardial infarction, ischemic cerebrovascular accidents and pulmonary embolism. Familiarity with fibrinolytic agents and regimens, as well as indications and contraindications, are crucial to the emergency department physician. Fibrinolytic therapy has been shown to be beneficial and can serve as a bridge to definite treatment, while lowering the risk of morbidity and mortality. Unfortunately fibrinolytic therapy is underused and often administered later than optimum. The aim of fibrinolytics is to dissolve blood clots that can cause serious and potentially life-threatening damage if not removed in a timely manner. The mechanism of this benefit relates to maximizing tissue salvage by early restoration of blood flow and thereby enhancing both early and long-term survival. Older thrombi have extensive fibrin polymerization making them more resistant to thrombolysis; hence, the importance of early administration of fibrinolytic therapy. abstract_id: PUBMED:27307770 Early recognition of intraventricular hemorrhage in the setting of thrombocytosis in the emergency department. Thrombocytosis is frequently encountered as an incidental laboratory finding since isolated thrombocytosis is often asymptomatic. Even though thrombocytosis is benign and self-limiting in most cases, it can at times result in thrombosis or hemorrhage. The most common type of thrombocytosis is reactive (secondary) thrombocytosis and can be due to infections, trauma, surgery, or occult malignancy. Since thrombocytosis is a known risk factor for thrombosis, it is commonly a concern for ischemic stroke and myocardial infarction. Much less common are hemorrhagic events associated with thrombocytosis. Studies have shown that when hemorrhage is present in patients with thrombocytosis, it is most often seen in the setting of chronic myelogenous leukemia and essential thrombocythemia. 
In essential thrombocythemia, the overall risk of bleeding and thrombosis is 0.33% per patient-year and 6.6% per patient-year, respectively. In the general population, the risk of bleeding and thrombosis is 0% and 1.2%, respectively. The present study is a case report of an 83-year-old man who presented to the emergency department with hypertension and headache, who was then found to have significant thrombocytosis (platelets >1,000×10(9)/L) and acute right intraventricular hemorrhage without any signs of neurological deficits, or evidence of vascular malformations or mass. We present this case report for review and discussion of some of the challenges and considerations associated with the management of such patients. abstract_id: PUBMED:23710712 Agreement between ambulance nurses and physicians in assessing stroke patients. Objectives: If an ambulance nurse could bypass the emergency department (ED) and bring suspected stroke patients directly to a CT scanner, time to thrombolysis could be shortened. This study evaluates the level of agreement between ambulance nurses and emergency physicians in assessing the need for a CT scan, and interventions and monitoring beforehand, in patients with suspected stroke and/or a lowered level of consciousness. Methods: From October 2008 to June 2009, we compared the ambulance nurses' and ED physicians' judgement of 200 patients with stroke symptoms. Both groups answered identical questions on patients' need for a CT scan, and interventions and monitoring beforehand. Results: There was poor agreement between ambulance nurses and ED physicians in judging the need for a CT scan: κ = 0.22 (95% confidence interval (CI), 0.06-0.37). The nurses' ability to select the same patients as the physician for a CT scan had a sensitivity of 84% (95% CI, 77-89) and a specificity of 37% (95% CI, 23-53). Agreement concerning the need for interventions and monitoring was also low: κ = 0.32 (95% CI, 0.18-0.47). In 18% of cases, the nurses considered interventions before a CT scan unnecessary when the physicians' deemed them necessary. Conclusions: Additional tools to support ambulance nurses decisions appear to be required before suspected stroke patients can be taken directly to a CT scanner. abstract_id: PUBMED:38276228 Emergency Department Time Targets for Interhospital Transfer of Patients with Acute Ischemic Stroke. Background and objectives: Although the intravenous tissue plasminogen activator (rt-PA) has been shown to be effective in the treatment of acute ischemic stroke (AIS), only a small proportion of stroke patients receive this drug. The low administration rate is mainly due to the delayed presentation of patients to the emergency department (ED) or the lack of a stroke team/unit in most of the hospitals. Thus, the aim of this study is to analyze ED time targets and the rate of rt-PA intravenous administration after the initial admission of patients with AIS in an ED from a traditional healthcare center (without a neurologist or stroke team/unit). Methods: To analyze which factors influence the administration of rt-PA, we split the general sample (n = 202) into two groups: group No rt-PA (n = 137) and group rt-PA (n = 65). This is based on the performing or no intravenous thrombolysis. 
Results: Analyzing ED time targets for the whole sample, we found that the median onset-to-ED door time was 180 min (IQR, 120-217.5 min), door-to-physician time was 4 min (IQR, 3-7 min), door-to-CT time was 52 min (IQR, 48-55 min), and door-in-door-out time was 61 min (IQR, 59-65 min). ED time targets such as door-to-physician time (p = 0.245), door-to-CT time (p = 0.219), door-in-door-out time (p = 0.24), NIHSS at admission to the Neurology department (p = 0.405), or NIHSS after 24 h (p = 0.9) did not have a statistically significant effect on whether or not rt-PA was administered to the patients included in our study. Only the highest door-to-CT time was statistically significantly correlated with the death outcome. Conclusion: In our study, the iv rt-PA administration rate was 32.18%. A statistically significant correlation between the highest door-to-CT time and death outcome was found. abstract_id: PUBMED:27067664 Stroke thrombolysis given by emergency physicians cuts in-hospital delays significantly immediately after implementing a new treatment protocol. Background: Tissue plasminogen activator (tPA) treatment for acute ischaemic stroke (AIS) should be given as soon as possible, preferably within 60 min after arrival at hospital. There is great variation in door-to-needle times (DNTs) internationally, nationally and even within the same hospital. Various strategies for improving treatment delays have been presented. The role of emergency physicians (EPs) in treating AIS has been under discussion in recent years. Emergency Medicine (EM) officially became a specialty in Finland in 2013. Practical education of EPs in Kanta-Häme Central Hospital began in October 2012, together with reorganization of the in-hospital treatment path for AIS patients. The main change was shifting the on-call duty regarding stroke patients from internists or neurologists to EPs after the third quarter of 2013. Methods: This was a retrospective study. The data concerning the characteristics of tPA-treated patients, DNTs and onset-to-treatment times (OTTs) were collected from electronic and paper records. The period studied was 1 year before and 1 year during reorganization, i.e. 2012 and 2013. Results: During the study period a total of 64 tPA treatments were given, 31 before and 33 during reorganization. The median DNT was 54 min in 2012, while it was 28 min in 2013 (p < 0.001). The median OTTs were 139 and 101 min before and during the start of reorganization, respectively (p < 0.001). Conclusions: Both total and in-hospital delays in the treatment of ischaemic stroke were shortened significantly during reorganization. Emergency physicians are able to treat AIS patients within international time guidelines. Success was based on scrutinized reorganization and good cooperation between neurologists, EPs and radiologists. Answer: Yes, emergency department (ED) physicians can safely and effectively initiate thrombolysis for acute ischemic stroke. A study reviewing clinical outcomes of stroke patients treated with intravenous tissue plasminogen activator (IV tPA; alteplase) in a community setting found that outcomes were similar whether the prescribing physician was a neurologist or an ED physician. This included measures of excellent functional recovery, symptomatic intracerebral hemorrhages, and mortality rates. However, protocol deviations were higher when ED physicians prescribed tPA compared to neurologists, which were reduced with staff education (PUBMED:11134376).
Furthermore, the concept of an Emergency Medicine (EM) Stroke Champion (SC), an EM physician directing care for acute stroke patients, has been found to be feasible and safe. Over nineteen months, SCs independently treated patients with fibrinolytic therapy and provided teleneurology to fellow emergency physicians, with a low rate of hemorrhagic conversion (PUBMED:36963177). Additionally, research has indicated that thrombolysis for acute ischemic stroke can be safely and effectively delivered in a "normal" clinical setting, and emergency physicians are well-positioned to be involved in the development of protocols for the delivery of thrombolysis (PUBMED:18660390). A study also showed that after implementing a new treatment protocol where emergency physicians were responsible for treating acute ischemic stroke patients, there was a significant reduction in door-to-needle times and onset-to-treatment times, demonstrating that emergency physicians can manage stroke treatment within international time guidelines (PUBMED:27067664). However, it is important to note that while ED physicians can initiate thrombolysis, there are challenges such as ED crowding and staffing levels that can impact the efficiency of care for acute stroke patients (PUBMED:27678127). Additionally, some emergency physicians may have reservations about the use of tPA, often citing the risk of hemorrhage as a concern (PUBMED:23818807). Therefore, improved collaboration between emergency physicians and stroke neurologists, as well as education and protocol development, are essential for optimizing the safe and effective initiation of thrombolysis by ED physicians.
Instruction: Abdominal wall thickness: is it associated with superficial and deep incisional surgical site infection after colorectal surgery? Abstracts: abstract_id: PUBMED:23676120 Abdominal wall thickness: is it associated with superficial and deep incisional surgical site infection after colorectal surgery? Background: Colorectal surgical procedures have a high rate of surgical site infection (SSI), and obesity has been implicated as a predictor of such infection. We hypothesized that abdominal wall thickness (AWT), as a metric of obesity, would predict postoperative superficial and deep incisional SSI after colorectal surgery, and conducted a study to assess superficial and deep incisional SSI and its relationship to abdominal wall thickness. Methods: To measure pre-operative AWT through cross-sectional imaging, and to analyze its relationship to SSI, we conducted a retrospective study at a single academic medical center of patients who had had colorectal resection for any indication in 2008 and 2009. Results: We identified 143 patients for inclusion in the study. Superficial or deep incisional SSI occurred in 43 patients (30%). Abdominal wall thickness at the midpoint between the umbilicus and pubis was associated with SSI (OR 1.03; p=0.014). Body-mass index (BMI) was also significantly associated with SSI (OR 1.08; p=0.014). Other significant (p&lt;0.05) predictors of SSI by univariate analysis included a history of soft tissue infection, a surgical wound classification of 3 or 4, and lack of compliance with perioperative antibiotic guidelines. In a multivariable analysis of factors that were statistically significantly associated with SSI in univariate comparisons, lack of appropriate preoperative antibiotic administration independently predicted SSI (OR 4.33; 95% CI 1.08-17.40), but AWT and BMI were not significantly associated with SSI. Conclusions: Surgical site infection is common after colorectal surgery. Increased AWT predicts SSI by univariate analysis. Our findings could guide further studies of interventions that may decrease the risk of SSIs in patients with a thick abdominal wall. abstract_id: PUBMED:29259055 Hughes Abdominal Repair Trial (HART)-abdominal wall closure techniques to reduce the incidence of incisional hernias: feasibility trial for a multicentre, pragmatic, randomised controlled trial. Objectives: Incisional hernias are common complications of midline abdominal closure. The 'Hughes Repair' combines a standard mass closure with a series of horizontal and two vertical mattress sutures within a single suture. There is evidence to suggest this technique is as effective as mesh repair for the operative management of incisional hernias; however, no trials have compared Hughes repair with standard mass closure for the prevention of incisional hernia formation. This paper aims to test the feasibility of running a randomised controlled trial of a comparison of abdominal wall closure methods following midline incisional surgery for colorectal cancer, in preparation to a definitive randomised controlled trial. Design And Setting: A feasibility trial (with 1:1 randomisation) conducted perioperatively during colorectal cancer surgery. Participants: Patients undergoing midline incisional surgery for resection of colorectal cancer. Interventions: Comparison of two suture techniques (Hughes repair or standard mass closure) for the closure of the midline abdominal wound following surgery for colorectal cancer. 
Primary And Secondary Outcomes: A 30-patient feasibility trial assessed recruitment, randomisation, deliverability and early safety of the surgical techniques used. Results: A total of 30 patients were randomised from 43 patients recruited and consented, over a 5-month period. 14 and 16 patients were randomised to arms A and B, respectively. There was one superficial surgical site infection (SSI) and two organ space SSIs reported in arm A, and two superficial SSIs and one complete wound dehiscence in arm B. There were no suspected unexpected serious adverse reactions reported in either arm. Independent data monitoring committee found no early safety concerns. Conclusions: The feasibility trial found no early safety concerns and demonstrated that the trial was acceptable to patients. Progression to the pilot and main phases of the trial has now commenced following approval by the independent data monitoring committee. Trial Registration Number: ISRCTN 25616490. abstract_id: PUBMED:33146081 Is prophylactic mesh closure effective to decrease the incidence of incisional hernia after laparotomy in colorectal surgery? Background: few studies have studied prophylactic mesh closure after laparotomy for colorectal surgery. Methods: a retrospective cohort study was performed to compare patients with and without prophylactic mesh closure after open colorectal surgery. Results: 309 patients were included from January 2014 to December 2016. Prophylactic mesh closure was performed in 98 patients (31.7%). After a mean follow-up of 21.7 months, incisional hernia was developed in 9 and 54 patients in the group with and without mesh respectively (9.2% vs. 25.7%, OR = 0.3, p = 0.001). In the multivariate Cox model prophylactic mesh closure was associated with a protective effect on incisional hernia development with a Hazard Ratio of 0.46 (p = 0.033). Surgical site infection was more frequent in the mesh group (19.4% vs. 9.5%, OR = 2.3, p = 0.015). Conclusions: prophylactic mesh closure is effective to decrease the incidence of incisional hernia after colorectal surgery. abstract_id: PUBMED:36545836 The incidence of extraction site incisional hernia after minimally invasive colorectal surgery: a systematic review and meta-analysis. Aim: Minimally invasive colorectal surgery reduces surgical trauma with better preservation of abdominal wall integrity, but the extraction site is still at risk of incisional hernia (IH). The aim of this study was to determine pooled incidence of IH for each type of extraction site and to compare rates of IH after midline, nonmidline and Pfannenstiel extraction. Method: A systematic review and meta-analysis was conducted using the PRISMA guidelines. Single-armed and multiple-armed cohort studies and randomized controlled trials regarding minimally invasive colorectal surgery were searched from five databases. Outcomes were pooled and compared with random-effects, inverse-variance models. Risk of bias within the studies was assessed using the Cochrane ROBINS-I and RoB 2 tool. Results: Thirty six studies were included, with a total 11,788 patients. The pooled extraction site IH rate was 16.0% for midline (n = 4081), 9.3% for umbilical (n = 2425), 5.2% for transverse (n = 3213), 9.4% for paramedian (n = 134) and 2.1% for Pfannenstiel (n = 1449). Nonmidline extraction (transverse and paramedian) showed significantly lower odds ratios (ORs) for IH when compared with midline extraction (including umbilical). 
Pfannenstiel extraction resulted in a significantly lower OR for IH compared with midline [OR 0.12 (0.50-0.30)], transverse [OR 0.25 (0.13-0.50)] and umbilical [OR 0.072 (0.033-0.16)] extraction sites. The risks of surgical site infection, seroma/haematoma or wound dehiscence were not significantly different in any of the analyses. Conclusion: Pfannenstiel extraction is the preferred method in minimally invasive colorectal surgery. In cases where Pfannenstiel extraction is not possible, surgeons should avoid specimen extraction in the midline. abstract_id: PUBMED:26267990 Risk Assessment of Abdominal Wall Thickness Measured on Pre-Operative Computerized Tomography for Incisional Surgical Site Infection after Abdominal Surgery. Background: Surgical site infection (SSI) is a common complication of abdominal operations. It is associated with increased hospital stay, increased healthcare cost, and decreased patient quality of life. Obesity, usually defined by BMI, is known as one of the risks of SSI. However, the thickness of the subcutaneous layers of the abdominal wall might be an important local factor affecting the rate of SSI after abdominal operations. Objective: The objective of this study is to assess the importance of abdominal wall thickness for the incisional SSI rate. Material And Method: The subjects of the present study were patients who had undergone major abdominal operations at Thammasat University Hospital between June 2013 and May 2014, and had been investigated with CT scans before their operations. The demographic data and clinical information of these patients were recorded. The thickness of subcutaneous fatty tissue from the skin down to the most superficial layer of abdominal wall muscle at the surgical site was measured on CT images. Wound infectious complications were reviewed and categorized as superficial or deep incisional SSI following the definition from the Centers for Disease Control and Prevention (CDC) guidelines. The significance of each potential factor for SSI rates was determined separately with Student's t-test for quantitative data and the χ2-test for categorical data. Then all factors with p < 0.10 were included in the multivariate logistic regression analysis and were analyzed with significance at p < 0.05. Results: One hundred and thirty-nine patients were included in this study. They all underwent major abdominal surgery and had had pre-operative CT scans. Post-operative SSI occurred in 25.2% (35/139), superficial and deep types in 27 and 8 patients, respectively. The comparison of abdominal wall thickness between patients with and without infection was significantly different (20.0 ± 8.4 mm and 16.0 ± 7.2 mm, respectively). When a thickness of 20 mm was used as the cut-off value, 43 of 139 patients had abdominal wall thickness ≥ 20 mm. The incidence of SSI in the thickness ≥ 20 mm group was 37.2% (16/43) and in the lower-thickness group it was 19.8% (19/96), with p < 0.05. The univariate analysis revealed that abdominal wall thickness ≥ 20 mm, body weight ≥ 60 kg, and wound classification were the important factors related to SSI after the abdominal operation. However, only abdominal wall thickness and wound classification were still significant by multivariate analysis. Conclusion: The findings of this study confirmed the significance of the subcutaneous thickness of the abdominal wall at the surgical site for the incidence of incisional SSI. A thickness ≥ 20 mm increased the postoperative SSI rate, especially in contaminated operations.
These findings could help make healthcare providers fully aware of this risk, prompting special attention to wound care or even the development of new modalities to prevent SSI in patients with the aforementioned risk factors. abstract_id: PUBMED:34889535 The quest to improve outcomes for abdominal wall incisional hernia repair in Pietermaritzburg: between Scylla and Charybdis. Background: The repair and outcomes of incisional abdominal wall hernias have not yet been benchmarked to allow comparison with recommended best practice in a South African context. This study aimed to address this deficit. Method: Patients who underwent an incisional hernia repair between December 2012 and December 2018 were analysed with respect to the following variables: demographics, comorbidities, indication for surgery, site, size, surgical approach, mesh usage, operating times, complications and 30-day mortality. Results: Of the cohort of 224 patients, 185 underwent elective repair. There were 152 open and 72 laparoscopic procedures, and 17 patients (8%) required a repeat operation, with an overall in-hospital mortality rate of 6% (13 patients). Eight patients developed an enteric leak. There were nine cardiovascular complications, 24 respiratory complications, 22 surgical site infections, and 13 patients developed an acute kidney injury (AKI). There were 39 emergency operations. The emergency cohort was older than the elective cohort, with a higher rate of cardiovascular or surgical (CVS) complications and AKI. Eight patients developed an enteric leak. Mortality rates were significantly higher in the emergency operation cohort compared to the elective group (18% vs 3%). The 13 patients who died in hospital were older and more likely to have undergone an emergency operation, to be diabetic (46% vs 10%), hypertensive (92% vs 33%), to have a bowel anastomosis (39% vs 9%), to experience an enteric leak (46% vs 1%) and to require a repeat operation than the patients who survived. Conclusion: Incisional abdominal wall hernias are difficult to manage as the patients often have several comorbidities which, when coupled with an emergency operation, lead to poor outcomes. Improving outcomes requires strategies that address comorbidities and shift the focus to elective rather than emergency repair. abstract_id: PUBMED:33537500 Influence of Suture Materials on Incisional Hernia Rate after Laparoscopic Colorectal Cancer Surgery: A Propensity Score Analysis. Objectives: Incisional hernia is a common problem after colorectal surgery, and a laparoscopic approach does not reduce the incisional hernia rate. Previous reports have described the risk factors for incisional hernia; however, the impact of suture materials remains unclear. As such, this study compared the incisional hernia rate using different suture materials for abdominal wall closure after laparoscopic colorectal cancer surgery. Methods: Patients undergoing laparoscopic colorectal cancer surgery between January 2014 and December 2016 were included in this study. We separated patients into the following two groups based on the suture materials used for abdominal wall closure: (1) the fast-absorbable group and (2) the non-absorbable group. The primary outcome was the incisional hernia rate, with hernias diagnosed using computed tomography. We compared outcomes between these two groups using propensity score matching. Results: Before matching, 394 patients were included (168 in the fast-absorbable group and 226 in the non-absorbable group).
After one-to-one matching, patients were stratified into the fast-absorbable group (n = 158) and the non-absorbable group (n = 158). The incisional hernia rate was higher in the fast-absorbable group than in the non-absorbable group (13.9% vs. 6.3%; P = 0.04). The median time to develop an incisional hernia was significantly shorter in the fast-absorbable group (6.7 months vs. 12.3 months; P < 0.01). The incidence of surgical site infection was not different between the two groups, but the incidence of suture sinus was lower in the fast-absorbable group (0% vs. 5.1%; P < 0.01). Conclusions: The use of fast-absorbable sutures may increase the risk of incisional hernia after laparoscopic colorectal cancer surgery. abstract_id: PUBMED:35979802 Incisional hernia following colorectal cancer surgery according to suture technique: Hughes Abdominal Repair Randomized Trial (HART). Background: Incisional hernias cause morbidity and may require further surgery. HART (Hughes Abdominal Repair Trial) assessed the effect of an alternative suture method on the incidence of incisional hernia following colorectal cancer surgery. Methods: A pragmatic multicentre single-blind RCT allocated patients undergoing midline incision for colorectal cancer to either Hughes closure (double far-near-near-far sutures of 1 nylon suture at 2-cm intervals along the fascia combined with conventional mass closure) or the surgeon's standard closure. The primary outcome was the incidence of incisional hernia at 1 year assessed by clinical examination. An intention-to-treat analysis was performed. Results: Between August 2014 and February 2018, 802 patients were randomized to either Hughes closure (401) or the standard mass closure group (401). At 1 year after surgery, 672 patients (83.7 per cent) were included in the primary outcome analysis; 50 of 339 patients (14.8 per cent) in the Hughes group and 57 of 333 (17.1 per cent) in the standard closure group had incisional hernia (OR 0.84, 95 per cent c.i. 0.55 to 1.27; P = 0.402). At 2 years, 78 patients (28.7 per cent) in the Hughes repair group and 84 (31.8 per cent) in the standard closure group had incisional hernia (OR 0.86, 0.59 to 1.25; P = 0.429). Adverse events were similar in the two groups, apart from the rate of surgical-site infection, which was higher in the Hughes group (13.2 versus 7.7 per cent; OR 1.82, 1.14 to 2.91; P = 0.011). Conclusion: The incidence of incisional hernia after colorectal cancer surgery is high. There was no statistical difference in incidence between Hughes closure and mass closure at 1 or 2 years. Registration Number: ISRCTN25616490 (http://www.controlled-trials.com). abstract_id: PUBMED:35794039 The external oblique muscle flap technique for the reconstruction of abdominal wall defects. Purpose: Several modifications to the anterior component separation technique (ACST) have been reported to facilitate the closure of abdominal wall defects. In this study, the external oblique (EO) muscle flap for modified ACST during major abdominal wall defect reconstructions has been described. Methods: A retrospective review of consecutive patients undergoing modified ACST was conducted. The clinical data were collected and retrospectively analyzed. Results: Among the 36 patients admitted to our hospital from December 2014 to December 2020, 9 cases had rectus abdominis tumors, 1 case had rectus abdominis trauma, and 26 cases had incisional hernias. The average age was 61.17 ± 13.76 years, and the mean BMI was 24.25 ± 3.18 kg/m2.
The average width of the defect was 14.33 ± 2.90 cm. The unilateral EO muscle flap technique was used to reconstruct the abdominal wall. Three cases of surgical site infection (8.3%), four cases of grade III or IV seroma (11.1%) and two cases of intestinal obstruction (5.5%) were reported postoperatively. Ischemic necrosis of the abdominal EO muscle flap, incision dehiscence, intestinal fistula, or other complications were not observed. One case of incisional hernia recurrence (2.8%) was reported. Recurrence of tumors or abdominal wall bulging was not noted during the follow-up period of 32.53 ± 14.21 months. Conclusions: The EO muscle flap technique is associated with low postoperative morbidity and a low recurrence rate, which makes it a reliable technique for selected groups of patients. Further research is needed to confirm the effectiveness of this technique. abstract_id: PUBMED:32172906 Correlation Between Peritonitis and Incisional Infections in Horses. Surgical site infection of abdominal incisions is an important complication after laparotomy, with an increased risk of incisional hernia formation in horses. This study aims to evaluate the healing process of abdominal incisions and correlate peritonitis with the occurrence of surgical site infection and incisional hernias. Nine horses underwent standardized laparotomy, intestinal exploration, and induced septic peritonitis. Standardized relaparotomy was performed two (n = 3), four (n = 3), and six (n = 3) months later to evaluate the abdominal cavity for adhesions and to collect the sutured ventral abdominal wall to evaluate and prepare it for histopathological and tensile strength study. All horses presented with endotoxemia, controllable peritonitis, heat- and touch-sensitive ventral abdominal edema and surgical wound infection with the presence of purulent discharge. Adhesion of the cecum or colon to the internal portion of the surgical wound was observed. Healing of the infected surgical wounds occurred by second intention, and a space between the rectus abdominis muscles developed because of the presence of a scar, which was related to incisional hernia. In the histopathological evaluation, the collagen content increased and the inflammation decreased over time. The tensile strength increased over time and was highest after 6 months. After the second surgical intervention, there was no infection of the surgical wound in any of the animals and healing by first intention occurred. Surgical site infection may be a symptom of peritonitis in horses recovering from abdominal surgery. Infected surgical wounds heal by second intention, which favors the spacing of the rectus abdominis muscles and the formation of incisional hernias. Answer: Yes, abdominal wall thickness (AWT) is associated with superficial and deep incisional surgical site infection (SSI) after colorectal surgery. A study found that increased AWT predicts SSI by univariate analysis. Specifically, the thickness of the abdominal wall at the midpoint between the umbilicus and pubis was associated with an increased risk of SSI (OR 1.03; p=0.014). Body-mass index (BMI) was also significantly associated with SSI (OR 1.08; p=0.014). Other significant predictors of SSI included a history of soft tissue infection, a surgical wound classification of 3 or 4, and lack of compliance with perioperative antibiotic guidelines.
However, in a multivariable analysis, AWT and BMI were not significantly associated with SSI when other factors were considered, suggesting that while AWT can be a predictor of SSI, it may not be an independent risk factor when other variables are accounted for (PUBMED:23676120). Another study also confirmed the significance of the subcutaneous thickness of the abdominal wall at the surgical site on the incidence of incisional SSI. The study found that patients with an abdominal wall thickness of ≥20 mm had a higher incidence of SSI compared to those with less thickness (37.2% vs. 19.8%, p < 0.05). The univariate analysis revealed that abdominal wall thickness ≥20 mm and wound classification were important factors related to SSI after abdominal operations. Multivariate analysis showed that abdominal wall thickness and wound classification remained significant (PUBMED:26267990).
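The strength of the association reported in PUBMED:26267990 can be illustrated from the raw counts given in that abstract (16/43 SSIs in the ≥ 20 mm group versus 19/96 in the thinner group). The unadjusted risk ratio and odds ratio computed below are not reported in the abstract itself; this is a minimal illustrative sketch, not the study's actual multivariate analysis.

```python
# Unadjusted effect sizes recomputed from the 2x2 table reported in PUBMED:26267990.
# Illustrative only; the published multivariate analysis adjusted for other factors.

ssi_thick, n_thick = 16, 43   # abdominal wall thickness >= 20 mm
ssi_thin, n_thin = 19, 96     # abdominal wall thickness < 20 mm

risk_thick = ssi_thick / n_thick           # ~0.372, i.e. the reported 37.2%
risk_thin = ssi_thin / n_thin              # ~0.198, i.e. the reported 19.8%
risk_ratio = risk_thick / risk_thin        # ~1.88

odds_thick = ssi_thick / (n_thick - ssi_thick)   # 16/27
odds_thin = ssi_thin / (n_thin - ssi_thin)       # 19/77
odds_ratio = odds_thick / odds_thin              # ~2.40

print(f"SSI risk: {risk_thick:.1%} (thick) vs {risk_thin:.1%} (thin)")
print(f"unadjusted risk ratio = {risk_ratio:.2f}")
print(f"unadjusted odds ratio = {odds_ratio:.2f}")
```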
Instruction: Does the administration of a high dose of a paramagnetic contrast medium (Gadovist) improve the diagnostic value of magnetic resonance tomography in glioblastomas? Abstracts: abstract_id: PUBMED:8679973 Does the administration of a high dose of a paramagnetic contrast medium (Gadovist) improve the diagnostic value of magnetic resonance tomography in glioblastomas? Purpose: To assess the efficacy of high-dose contrast enhanced magnetic resonance imaging (MRI) in detection and delineation of infiltrating tumour in glioblastoma. Material And Methods: In a phase II clinical trial 14 patients with suspected supratentorial glioblastoma underwent MRI with Gd-DO3A-butriol (Gadovist, Schering AG, Berlin) with doses of 0.1 up to 0.3 mmol/kg. Results: Increasing the dose of Gd-DO3A-butriol resulted in improved enhancement (14/14) and better delineation of infiltrating tumour (9/14) up to 2 cm beyond enhancement after standard dose. In two of the 9 patients additional enhancement in the peritumoural oedema was only seen after the third injection. Conclusions: Successful surgery requires precise preoperative delineation of tumour borders. After high-dose MRI it might be possible to select those patients who would benefit from more radical attempts at eradicating the macroscopic as well as most of the microscopic compartment of glioblastoma. abstract_id: PUBMED:35703267 Preclinical Profile of Gadoquatrane: A Novel Tetrameric, Macrocyclic High Relaxivity Gadolinium-Based Contrast Agent. Objectives: The aim of this report was to characterize the key physicochemical, pharmacokinetic (PK), and magnetic resonance imaging (MRI) properties of gadoquatrane (BAY 1747846), a newly designed tetrameric, macrocyclic, extracellular gadolinium-based contrast agent (GBCA) with high relaxivity and stability. Materials And Methods: The r1-relaxivities of the tetrameric gadoquatrane at 1.41 and 3.0 T were determined in human plasma and the nuclear magnetic relaxation dispersion profiles in water and plasma. The complex stability was analyzed in human serum over 21 days at pH 7.4 at 37°C and was compared with the linear GBCA gadodiamide and the macrocyclic GBCA (mGBCA) gadobutrol. In addition, zinc transmetallation assay was performed to investigate the kinetic inertness. Protein binding and the blood-to-plasma ratio were determined in vitro using rat and human plasma. The PK profile was evaluated in rats (up to 7 days postinjection). Magnetic resonance imaging properties were investigated using a glioblastoma (GS9L) rat model. Results: The new chemical entity gadoquatrane is a macrocyclic tetrameric Gd complex with one inner sphere water molecule per Gd ( q = 1). Gadoquatrane showed high solubility in buffer (1.43 mol Gd/L, 10 mM Tris-HCl, pH 7.4), high hydrophilicity (logP -4.32 in 1-butanol/water), and negligible protein binding. The r1-relaxivity of gadoquatrane in human plasma per Gd of 11.8 mM -1 ·s -1 (corresponding to 47.2 mM -1 ·s -1 per molecule at 1.41 T at 37°C, pH 7.4) was more than 2-fold (8-fold per molecule) higher compared with established mGBCAs. Nuclear magnetic relaxation dispersion profiles confirmed the more than 2-fold higher r1-relaxivity in human plasma for the clinically relevant magnetic field strengths from 0.47 to 3.0 T. The complex stability of gadoquatrane at physiological conditions was very high. The observed Gd release after 21 days at 37°C in human serum was below the lower limit of quantification. 
Gadoquatrane showed no Gd3+ release in the presence of zinc in the transmetallation assay. The PK profile (plasma elimination, biodistribution, recovery) was comparable to that of gadobutrol. In MRI, the quantitative evaluation of the tumor-to-brain contrast in the rat glioblastoma model showed significantly improved contrast enhancement using gadoquatrane compared with gadobutrol at the same Gd dose administered (0.1 mmol Gd/kg body weight). In comparison to gadoterate meglumine, similar contrast enhancement was reached with gadoquatrane with 75% less Gd dose. In terms of the molecule dose, this was reduced by 90% when compared with gadoterate meglumine. Because of its tetrameric structure and hence lower number of molecules per volume, all prepared formulations of gadoquatrane were iso-osmolar to blood. Conclusions: The tetrameric gadoquatrane is a novel, highly effective mGBCA for use in MRI. Gadoquatrane provides favorable physicochemical properties (high relaxivity and stability, negligible protein binding) while showing essentially the same PK profile (fast extracellular distribution, fast elimination via the kidneys in an unchanged form) to established mGBCAs on the market. Overall, gadoquatrane is an excellent candidate for further clinical development. abstract_id: PUBMED:30646519 Macrovascular Networks on Contrast-Enhanced Magnetic Resonance Imaging Improves Survival Prediction in Newly Diagnosed Glioblastoma. A higher degree of angiogenesis is associated with shortened survival in glioblastoma. Feasible morphometric parameters for analyzing vascular networks in brain tumors in clinical practice are lacking. We investigated whether the macrovascular network classified by the number of vessel-like structures (nVS) visible on three-dimensional T1-weighted contrast-enhanced (3D-T1CE) magnetic resonance imaging (MRI) could improve survival prediction models for newly diagnosed glioblastoma based on clinical and other imaging features. Ninety-seven consecutive patients (62 men; mean age, 58 ± 15 years) with histologically proven glioblastoma underwent 1.5T-MRI, including anatomical, diffusion-weighted, dynamic susceptibility contrast perfusion, and 3D-T1CE sequences after 0.1 mmol/kg gadobutrol. We assessed nVS related to the tumor on 1-mm isovoxel 3D-T1CE images, and relative cerebral blood volume, relative cerebral flow volume (rCBF), delay mean time, and apparent diffusion coefficient in volumes of interest for contrast-enhancing lesion (CEL), non-CEL, and contralateral normal-appearing white matter. We also assessed Visually Accessible Rembrandt Images scoring system features. We used ROC curves to determine the cutoff for nVS and univariate and multivariate Cox proportional hazards regression for overall survival. Prognostic factors were evaluated by Kaplan-Meier survival and ROC analyses. Lesions with nVS > 5 were classified as having highly developed macrovascular network; 58 (60.4%) tumors had highly developed macrovascular network. Patients with highly developed macrovascular network were older, had higher CEL volume, increased CEL rCBF, and poor survival; nVS correlated negatively with survival (r = -0.286; p = 0.008). On multivariate analysis, standard treatment, age at diagnosis, and macrovascular network best predicted survival at 1 year (AUC 0.901, 83.3% sensitivity, 93.3% specificity, 96.2% PPV, 73.7% NPV). Contrast-enhanced MRI macrovascular network improves survival prediction in newly diagnosed glioblastoma.
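The per-molecule relaxivity quoted for gadoquatrane in PUBMED:35703267 above follows directly from its tetrameric structure: four Gd ions per molecule multiply the per-Gd value. The short sketch below re-derives that figure and the molecule-count advantage at a fixed Gd dose; it is illustrative arithmetic based on numbers stated in the abstract, not additional study data.

```python
# Back-of-envelope arithmetic for the tetrameric agent described in PUBMED:35703267.
# Illustrative only; input values are taken from the abstract text.

GD_PER_MOLECULE = 4    # gadoquatrane is a tetramer: 4 Gd ions per molecule
r1_per_gd = 11.8       # mM^-1 s^-1 per Gd in human plasma at 1.41 T (abstract value)

r1_per_molecule = r1_per_gd * GD_PER_MOLECULE
print(f"r1 per molecule ~ {r1_per_molecule:.1f} mM^-1 s^-1 (abstract reports 47.2)")

# At any given Gd dose (mmol Gd per kg), a tetramer needs a quarter as many molecules
# as a monomeric agent; fewer molecules per volume is why the formulations can be
# iso-osmolar to blood at the same Gd dose.
gd_dose = 0.1                                    # mmol Gd/kg, the dose used in the rat MRI study
molecules_tetramer = gd_dose / GD_PER_MOLECULE   # mmol of molecules per kg
molecules_monomer = gd_dose / 1.0
print(f"molecules injected at {gd_dose} mmol Gd/kg: "
      f"{molecules_tetramer:.3f} (tetramer) vs {molecules_monomer:.3f} (monomer) mmol/kg")
```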
abstract_id: PUBMED:28741075 Brain relaxometry after macrocyclic Gd-based contrast agent. Purpose: To assess if ratios of T1-weighted (T1w) signal intensity (SI) and quantitative T1 relaxometry (qT1) change on serial administration of macrocyclic gadobutrol. Methods: A total of 17 glioblastoma patients were scanned at 3.0 T magnetic resonance imaging (MRI) every 6 weeks after tumor resection with standard MRI and T1 and T2 relaxometry before and after gadobutrol administration. On co-registered images T1w SI was measured and relaxation times T1 (qT1) and quantitative T2 (qT2) were quantified in several deep grey matter nuclei as ratios relative to frontal white matter and to the pons. Ratio changes were evaluated over time with a paired t-test and multiple regression. Results: An average of 8 (range 5-14) scans per patient were completed. Ratios of T1w SI, qT1 and qT2 remained unchanged for all target regions from the first to the last time point (p > 0.05) and did not correlate with the number of gadobutrol administrations. Multivariate regression showed no significant impact of gadobutrol on qT1 or qT2 ratios, but a significant negative effect on T1w SI ratios. Gender also had no impact on the ratios but age had a significant negative influence on the qT1 ratio. Conclusion: Multiple administrations of a macrocyclic contrast agent did not change relaxation time T1 ratios in any deep grey matter structure. abstract_id: PUBMED:26438560 High-resolution blood-pool-contrast-enhanced MR angiography in glioblastoma: tumor-associated neovascularization as a biomarker for patient survival. A preliminary study. Introduction: The objective of the study was to determine whether tumor-associated neovascularization on high-resolution gadofosveset-enhanced magnetic resonance angiography (MRA) is a useful biomarker for predicting survival in patients with newly diagnosed glioblastomas. Methods: Before treatment, 35 patients (25 men; mean age, 64 ± 14 years) with glioblastoma underwent MRI including first-pass dynamic susceptibility contrast (DSC) perfusion and post-contrast T1WI sequences with gadobutrol (0.1 mmol/kg) and, 48 h later, high-resolution MRA with gadofosveset (0.03 mmol/kg). Volumes of interest for contrast-enhancing lesion (CEL), non-CEL, and contralateral normal-appearing white matter were obtained, and DSC perfusion and DWI parameters were evaluated. Prognostic factors were assessed by Kaplan-Meier survival and Cox proportional hazards model. Results: Eighteen (51.42%) glioblastomas were hypervascular on high-resolution MRA. Hypervascular glioblastomas were associated with higher CEL volume and lower Karnofsky score. Median survival rates for patients with hypovascular and hypervascular glioblastomas treated with surgery, radiotherapy, and chemotherapy were 15 and 9.75 months, respectively (P < 0.001). Tumor-associated neovascularization was the best predictor of survival at 5.25 months (AUC = 0.794, 81.2% sensitivity, 77.8% specificity, 76.5% positive predictive value, 82.4% negative predictive value) and yielded the highest hazard ratio (P < 0.001). Conclusions: Tumor-associated neovascularization detected on high-resolution blood-pool-contrast-enhanced MRA of newly diagnosed glioblastoma seems to be a useful biomarker that correlates with worse survival. abstract_id: PUBMED:18414811 Cerebral abscesses and necrotic cerebral tumours: differential diagnosis by perfusion-weighted magnetic resonance imaging.
Purpose: This study was undertaken to evaluate the usefulness of perfusion-weighted imaging (PWI) in the differential diagnosis of ring-enhancing cerebral lesions, including abscesses, high-grade gliomas and metastases. Materials And Methods: Nine cerebral abscesses (five pyogenic, four from Toxoplasma gondii), ten glioblastomas and five cerebral metastases in 19 patients were studied with gadolinium-enhanced magnetic resonance imaging, diffusion-weighted imaging (DWI) including calculation of mean apparent diffusion coefficient (ADC) of the lesion core, and PWI. At PWI, the mean of the maximum regional cerebral blood volume (rCBV) was calculated in the gadolinium-enhancing peripheral solid areas and compared with that of the contralateral normal-appearing white matter [ratio=rCBV (lesion)/rCBV (contralateral normal-appearing white matter)]. Results: DWI achieved the differential diagnosis in all cases except for the four Toxoplasma abscesses. At PWI, the mean ratio of the rCBV of the capsular portion was 0.72+/-0.08 (range 0.60-0.82) in the pyogenic abscesses, 0.84+/-0.07 (range 0.75-0.91) in the Toxoplasma abscesses, 4.45+/-1.5 (range 2.9-8.0) in the high-grade gliomas and 3.58+/-0.68 (range 3.28-4.27) in the metastases. Conclusions: PWI seems to be useful in the differential diagnosis of ring-enhancing cerebral lesions. High rCBV values in the peripheral areas appear to indicate the possibility of a necrotic tumour, whereas low values tend to indicate an abscess. abstract_id: PUBMED:23085863 Albumin-binding MR blood pool contrast agent improves diagnostic performance in human brain tumour: comparison of two contrast agents for glioblastoma. Objective: We qualitatively and quantitatively compared MRI enhancement obtained with gadofosveset, an albumin-binding blood-pool contrast agent, and with gadobutrol, an extracellular contrast agent, in patients with glioblastoma. Methods: Thirty-five patients (25 men; 64 ± 14 years) with histologically proven glioblastoma underwent MRI including pre- and post-contrast T1-weighted SE images acquired 5 min after gadobutrol (0.1 mmol/kg) and, 48 h later, images acquired with identical parameters 5 min and 3, 6, and 24 h after gadofosveset (0.03 mmol/kg). Lesion extent, delineation, internal morphology, multifocality, and global diagnostic preference were evaluated quantitatively for the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and contrast enhancement (CE). Results: Mean values of SNR, CNR, and tumour CE were highest 6 h after gadofosveset. Multifocality was seen in 17 (48.6 %) patients; additional lesions had stronger enhancement 6 h after gadofosveset in 12 patients (70.6 %). In 21 (60 %) patients, radiologists' global preference was highest in images acquired 6 h after gadofosveset (kappa = 0.764). In 22 patients (62.8 %), all qualitative endpoints were better at 5 min after gadobutrol than in images acquired 5 min after gadofosveset injection. Conclusions: Gadobutrol gives significant tumour enhancement in early postcontrast imaging. However, images acquired 6 h after gadofosveset injection have significantly better diagnostic information endpoints and contrast enhancement. abstract_id: PUBMED:28240563 Multiparametric MR Imaging of Diffusion and Perfusion in Contrast-enhancing and Nonenhancing Components in Patients with Glioblastoma. 
Purpose To determine whether regions of low apparent diffusion coefficient (ADC) with high relative cerebral blood volume (rCBV) represented elevated choline (Cho)-to-N-acetylaspartate (NAA) ratio (hereafter, Cho/NAA ratio) and whether their volumes correlated with progression-free survival (PFS) and overall survival (OS) in patients with glioblastoma (GBM). Materials and Methods This retrospective analysis was approved by the local research ethics committee. Volumetric analysis of imaging data from 43 patients with histologically confirmed GBM was performed. Patients underwent preoperative 3-T magnetic resonance imaging with conventional, diffusion-weighted, perfusion-weighted, and spectroscopic sequences. Patients underwent subsequent surgery with adjuvant chemotherapy and radiation therapy. Overlapping low-ADC and high-rCBV regions of interest (ROIs) (hereafter, ADC-rCBV ROIs) were generated in contrast-enhancing and nonenhancing regions. Cho/NAA ratio in ADC-rCBV ROIs was compared with that in control regions by using analysis of variance. All resulting ROI volumes were correlated with patient survival by using multivariate Cox regression. Results ADC-rCBV ROIs within contrast-enhancing and nonenhancing regions showed elevated Cho/NAA ratios, which were significantly higher than those in other abnormal tumor regions (P < .001 and P = .008 for contrast-enhancing and nonenhancing regions, respectively) and in normal-appearing white matter (P < .001 for both contrast-enhancing and nonenhancing regions). After Cox regression analysis controlling for age, tumor size, resection extent, O-6-methylguanine-DNA methyltransferase-methylation, and isocitrate dehydrogenase mutation status, the proportional volume of ADC-rCBV ROIs in nonenhancing regions significantly contributed to multivariate models of OS (hazard ratio, 1.132; P = .026) and PFS (hazard ratio, 1.454; P = .017). Conclusion Volumetric analysis of ADC-rCBV ROIs in nonenhancing regions of GBM can be used to identify patients with poor survival trends after accounting for known confounders of GBM patient outcome. abstract_id: PUBMED:23903995 Benefits of contrast-enhanced SWI in patients with glioblastoma multiforme. Introduction: SWI can help to identify high-grade gliomas (HGG). The objective of this study was to analyse SWI and CE-SWI characteristics, i.e. the relationship between contrast-induced phase shifts (CIPS) and intratumoral susceptibility signals (ITSS) and their association with tumour volume in patients with glioblastoma multiforme (GBM). Materials And Methods: MRI studies of 29 patients were performed to evaluate distinct susceptibility signals comparing SWI and CE-SWI characteristics. The relationship between these susceptibility signals and CE-T1w tumour volume was analysed by using Spearman's rank correlation coefficient and Kruskal-Wallis-test. Tumour biopsies of different susceptibility signals were performed in one patient. Results: Comparison of SWI and CE-SWI demonstrated different susceptibility signals. Susceptibility signals visible on SWI images are consistent with ITSS; those only seen on CE-SWI were identified as CIPS. Correlation with CE-T1w tumour volume revealed that CIPS were especially present in small or medium-sized GBM (Spearman's rho r = 0.843, P < 0.001). Histology identified the area with CIPS as the tumour invasion zone, while the area with ITSS represented micro-haemorrhage, highly pathological vessels and necrosis.
Conclusion: CE-SWI adds information to the evaluation of GBM before therapy. It might have the potential to non-invasively identify the tumour invasion zone as demonstrated by biopsies in one case. Key Points: • MRI is used to help differentiate between low- and high-grade gliomas. • Contrast-enhanced susceptibility-weighted MRI (CE-SWI) helps to identify patients with glioblastoma multiforme. • CE-SWI delineates the susceptibility signal (CIPS and ITSS) more than the native SWI. • CE-SWI might have the potential to non-invasively identify the tumour invasion zone. abstract_id: PUBMED:28635617 The dosimetric impact of gadolinium-based contrast media in GBM brain patient plans for a MRI-Linac. Dosimetric effects of gadolinium based contrast media (Gadovist) were evaluated for the Elekta MRI linear accelerator using the research version of the Monaco treatment planning system (TPS). In order to represent a gadolinium uptake, the contrast was manually assigned to a phantom as well as to the gross tumour volume (GTV) of 6 glioblastoma multiforme (GBM) patients. A preliminary estimate of the dose enhancement, due to gadolinium, was performed using the phantom irradiated with a single beam. A more complicated assessment was performed for the GBM patients using a 7 field IMRT technique. The material table in Monaco was modified in order to identify the presence of a non-biological material. The dose distribution was modelled using GPUMCD (MC algorithm in Monaco) for an unmodified (or default) material table (DMT) as well as for a modified (or custom) material table (CMT) for both the phantom and patients. Various concentrations ranging between 8 and 157 mg ml-1 were used to represent the gadolinium uptake in the patient's GTV. It was assumed that the gadolinium concentration remained the same for the entire course of radiation treatment. Results showed that at the tissue-Gadovist interface, inside the phantom, dose scored using the DMT was 7% lower compared to that using the CMT for 157 mg ml-1 concentration of gadolinium. Dosimetric differences in the case of the patient study were measured using the DVH parameters. D 50% was higher by 6% when the DMT was used compared to the CMT for dose modelling for a gadolinium concentration of 157 mg ml-1. This difference decreased gradually with decreasing concentration of gadolinium. It was concluded that dosimetric differences can be quantified in Monaco if the tumour-gadolinium concentration is more than 23 mg ml-1. If the gadolinium concentration is lower than 23 mg ml-1, then a correction for the presence of gadolinium may not be necessary in the TPS. Answer: Yes, the administration of a high dose of the paramagnetic contrast medium Gadovist (Gd-DO3A-butriol) does improve the diagnostic value of magnetic resonance tomography (MRI) in glioblastomas. A phase II clinical trial involving 14 patients with suspected supratentorial glioblastoma who underwent MRI with increasing doses of Gadovist up to 0.3 mmol/kg showed improved enhancement in all patients and better delineation of infiltrating tumor in 9 out of 14 patients. The high-dose MRI allowed for better visualization of the tumor extent, which could potentially aid in selecting patients who might benefit from more radical surgical attempts to eradicate both the macroscopic and microscopic compartments of glioblastoma (PUBMED:8679973).
Instruction: Does clinical judgment of baseline severity and changes in psychopathology depend on the patient population? Abstracts: abstract_id: PUBMED:21105273 Does clinical judgment of baseline severity and changes in psychopathology depend on the patient population? Results of a CGI and PANSS linking analysis in a naturalistic study. Background: Linking of the Clinical Global Impression (CGI) Scale and the Positive and Negative Syndrome Scale (PANSS) was performed within a naturalistic sample. Furthermore, these linking results were compared with those derived from randomized controlled trials to examine if the baseline severity might influence the linking results. Methods: Biweekly PANSS and CGI ratings were performed from admission to discharge in 398 schizophrenia patients treated within a naturalistic study. Equipercentile linking was performed using the statistical program, R 2.8.1. To evaluate how the naturalistic study design would influence linkage results, a so-called study sample was computed with patients of the naturalistic study fulfilling common inclusion criteria of randomized controlled trials (n = 199). Patients not fulfilling these criteria (less ill sample) and those fulfilling the criteria (study sample) were compared using confidence intervals. Results: We found a considerable difference between the linking of the CGI severity score and the PANSS total score comparing the less ill sample and the study sample. Being considered "mildly ill" at admission in the less ill sample corresponded to a PANSS total score of 47 points and to a PANSS total score of 67 points in the study sample. Considering the linking of the CGI improvement score and PANSS changes, similar results were found for CGI improvement ratings ranging from "very much improved" to "minimally improved". Conclusions: Despite considerable differences, a 50% PANSS reduction was found to correspond to a clinical rating of much improved, which seems to be a suitable definition for response in clinical drug trials. abstract_id: PUBMED:1950609 Assessment of child psychopathology: relationships between different methods, different informants and clinical judgment of severity. The purpose of this study was to assess the relationships between information on children's problem behavior obtained by different methods (rating scales and clinical interviews) and from different sources (adolescent, parents and teacher). From a sample of 132 14-year-old international adoptees and their parents, information was obtained via the Child Behavior Checklist (CBCL), the Youth Self-Report and the Teacher's Report Form, the Child Assessment Schedule, and the Graham and Rutter Parent Interview. Agreement between the CBCL and the clinical judgment of the severity of psychopathology was substantial. Assessment procedures providing data on the adolescents' functioning derived from different sources revealed less agreement than those derived from the same informant. Agreement was higher for externalizing than for internalizing behaviors. Data from different sources made unique contributions to clinicians' judgments of the severity of psychopathology. abstract_id: PUBMED:36731308 Frontline Nurses' clinical judgment in recognizing, understanding, and responding to patient deterioration: A qualitative study. Background: Early warning systems and rapid response teams have been widely implemented in hospitals worldwide to facilitate early recognition and response to patient deterioration. 
Unfortunately, evidence suggests that these interventions have made little impact on unexpected cardiac or respiratory arrest, hospital mortality, unplanned admission to intensive care units, or hospital length of stay. These programs depend on nurses recognizing at risk patients and initiating a timely response. Although physiologic abnormalities commonly precede serious adverse events, nurses often fail to recognize or respond effectively. Clinical judgment is a critical component in the effective response to deterioration, yet little is known about factors that influence nurses' clinical judgment in these situations. Noticing, interpreting, and responding are aspects of clinical judgment and are essential to preventing further patient deterioration and serious adverse events. Objective: To describe medical-surgical nurses' perceptions of factors that influenced their clinical judgment in situations of patient deterioration. Design: A qualitative descriptive design using individual, semi-structured interviews. Tanner's Clinical Judgment Model served as the framework for interview questions and data analysis. Participants: A purposive sample of 20 medical-surgical registered nurses were recruited from 10 adult medical-surgical units at an academic medical center hospital in the United States. Methods: Telephone interviews occurred between March and July 2018. A directed approach to content analysis was used to code the transcribed data and identify themes. Results: Eight themes related to each aspect of clinical judgment emerged from the analysis: Knowing the patient, Experience matters, Lots of small points where the system can fail, Making sense of the data, Something doesn't go together, Caught in the middle, Culture of teamwork, and Increased nursing workload. An overarching theme was Nurses' keen sense of responsibility. Findings revealed that factors within the nurse, the patient, and the work environment influence each component of noticing, interpreting, and initiating an effective response to deteriorating patients. Conclusions: Findings have implications for health care systems regarding interventions to support timely recognition and response to deterioration. Nurses' clinical judgment and factors that influence each aspect (noticing, interpreting, and responding) should be a key consideration in organizational efforts to improve the overall response to patient deterioration. Research is needed to enhance understanding of the contextual factors that impact nurses' clinical judgment to inform interventions to support timely recognition and response. abstract_id: PUBMED:32608574 Effects of Clinical Reasoning Prompts on Nursing Students' Clinical Judgment for a Patient Experiencing Respiratory Distress. Purpose: To test the effects of clinical reasoning prompts on students' clinical judgment of a written case study. Methods: An experimental pre- and posttest study with second semester nursing students (N = 163). Findings: The intervention was insufficient to significantly improve clinical judgment. Students identified that the prompts would help them "narrow… down the problem" and "slow… the decision-making process" to improve analysis. The most accurate patient problem was identified by 28% of students in pretest and 35% in posttest. Conclusions: This study provides evidence of variations in nursing students' clinical judgment and students' desire to use decision-making algorithms. 
Nursing Implications: Nurse educators should provide students with additional education and practice to identify and solve these types of problems. abstract_id: PUBMED:27939773 Shame and eating psychopathology in Portuguese women: Exploring the roles of self-judgment and fears of receiving compassion. Shame has long been associated with the development and maintenance of body image and eating-related difficulties. However, the mechanisms underlying this association remain unclear. Therefore, the current study sought to examine the mechanisms of self-judgment and fears of receiving compassion from others in the association between external shame and disordered eating, while controlling for body mass index (BMI). Participants in this study were 400 women from the general population, aged between 18 and 55 years old. Correlation analyses revealed significant and positive relationships between external shame, self-judgment, fears of receiving compassion from others and eating psychopathology. A path analysis confirmed that, when controlling for the effect of BMI, external shame has a direct effect on disordered eating severity, and also an indirect effect, mediated by higher levels of self-judgment and increased fears of receiving others' kindness and compassion. Results showed the plausibility of the tested model which explained 36% of the variance of disordered eating. These findings seem to support that women who perceive that others view them negatively tend to be defensive and engage in maladaptive emotion regulation strategies (such as harsh critical attitudes towards the self and being resistant to others' compassion), which may trigger maladaptive eating attitudes and behaviours. The current research appears to be an innovative study in the field of body image and eating-related psychopathology and seems to represent a new avenue for future research and for the development of intervention programs. abstract_id: PUBMED:37722793 The implementation of Tanner's Clinical Judgment Model and the Indonesian version of the Lasater Clinical Judgment Rubric in the clinical setting. A critical component for nurses to provide safe and quality care with strong patient outcomes is clinical judgment. This study aimed to identify the difference in the clinical judgment score in newly graduated nurses by implementing Tanner's Clinical Judgment Model. This study followed an experimental design with a pretest-posttest and involved thirty-two newly graduated nurses from three private hospitals in Indonesia. The intervention group had a scheduled post-conference with their preceptors using Tanner's Clinical Judgment Model's guidelines with high-level, open-ended questions for two consecutive weeks. Independent samples t-test was used to compare pre- and post-test scores utilizing the valid and reliable Indonesia version of the Lasater Clinical Judgment Rubric. There was no significant difference between the pre-test results of the intervention and control groups, with a significance value of 0.647. The results indicated a significant difference in the mean clinical judgment scores between the intervention and control groups after two weeks of intervention, where the intervention group's scores were higher than the control group's, with a significance value of <0.001, a mean difference of 6.75, and a 95% CI of 4.18-9.31. Tanner's Clinical Judgment Model showed a significant impact in increasing clinical judgment scores.
The Indonesia version of the Lasater Clinical Judgment Rubric is a valuable tool for assessing and evaluating the development of clinical judgment and provides detailed targeted areas needing improvement. Preceptors and nursing leaders in the clinical setting should continue to use this model and tools to assist newly graduated nurses in developing clinical judgment skills during their transition period. abstract_id: PUBMED:37483524 The Effect of Expert Patient Simulation on Clinical Judgment: A Quasi-Experimental Study. Background: Worldwide, quality education is one of the important tools to improve healthcare quality. Healthcare practitioners must be competent in their clinical judgement to meet clients' needs. However, poor clinical judgment skill accounts for almost one-third of all patient problems in health care. Expert patient simulation has been used as a training method for clinical judgement skill. However, according to empirical studies, using expert patients to develop clinical judgement skill is unclear. The method is effective in one situation but not in another. Objective: To examine the effect of expert patient simulation on the clinical judgment skill of health science students of Mizan-Tepi University. Methods: A pre-test/post-test quasi-experimental design was used on 92 randomly selected samples from the graduating cohort of midwifery students. The research subjects who took part in the experiment were picked at random. Tools included the Creighton Competency Evaluation Instrument (C-CEI®), the Learning Satisfaction and Self-Confidence Questionnaire, and the Kolb Learning Style Inventory (LSI). The Wilcoxon signed-rank test was utilized to compare the self-confidence scores among intervention and control group of students, and the paired sample test was used to compare clinical judgment scores. Cohen's d was used to assess the effect size, and Spearman correlation was used to explore the association. Results: Clinical decision-making ability and self-confidence measures revealed statistical and practical differences between before and after simulation. There was a mean difference of 2.28 (95% CI, 1.78, 2.79), t(45) = 9.13, p < 0.001, and an effect size of 1.3, p < 0.001. A pre- and post-simulation self-confidence measure showed statistically significant improvement after simulation (W = 1, Z = -3.57, P < 0.001). A moderately significant positive connection (r = 0.419, p < 0.004) was also discovered. Conclusion: The study found that human expert patient simulation is a tremendous clinical training technique for improving students' clinical decision-making skill competency and self-confidence. abstract_id: PUBMED:36572599 Determining Clinical Judgment Among Emergency Nurses During a Complex Simulation. Introduction: Clinical judgment is imperative for the emergency nurse caring for the acutely ill patients often seen in the emergency department. Without optimal clinical judgment in the emergency department, patients are at risk of medical errors and a failure to rescue. Methods: A descriptive observational approach using the Lasater Clinical Judgment Rubric evaluated nurses during a task that required recognition of clinical signs of deterioration and appropriate clinical care for simulated patients. Results: A total of 18 practicing emergency nurses completed only 44.6% of the patient assessments leading to low levels of clinical judgment throughout the simulation.
Nurses expressed 4 levels of clinical judgment: exemplary (n = 1), accomplishing (n = 6), developing (n = 9), and beginning (n = 2). On average, nurses completed 69% of required tasks. Discussion: Assessments were completed less than half the time, demonstrating a breakdown in the noticing phase of clinical judgment. The nurses shifted to task completion focus with minimal use of clinical judgment. As the nurses remained task oriented, several medication and medical errors were noted while caring for the simulated patients. Experience and education did not influence observed clinical judgment among the participants. Given the extreme demands placed on the emergency nurse, it cannot be assumed that nurses have developed or can use clinical judgment when caring for their patients. Time and training targeting clinical judgment are essential for emergency nurse development. abstract_id: PUBMED:28632087 Psychopathology and expressed emotion in parents of patients with eating disorders: Relation to patient symptom severity. The purpose of the current study was to examine the relation between parental psychopathology, parental expressed emotion, and patient symptom severity. One hundred twenty-six parents of 79 patients receiving treatment for an eating disorder completed measures of expressed emotion and general psychopathology, and patients completed a measure of eating disorder psychopathology. Mothers reported higher expressed emotion scores than fathers. Both mothers and fathers scored higher on general psychopathology compared to nonpatient population means. Maternal psychopathology was found to be associated with symptom severity. Parental psychopathology and expressed emotion were found to be related, and MANCOVAs revealed that maternal criticism and maternal psychopathology were associated with patient symptom severity, but fathers' scores on the same measures were not. It is recommended that parental expressed emotion and parental psychopathology are assessed at the beginning of treatment so that appropriate clinical interventions can be utilized. abstract_id: PUBMED:37568256 The impact of tutoring on nursing students' clinical judgment: A quasi-experimental study. Background: Nurses' lack of clinical judgment often leads to adverse patient outcomes due to failure to recognize clinical deterioration, intervene, and manage complications. Teaching clinical judgment through a nursing process can help nursing students provide safe and competent patient care with improved health outcomes and to pass the National Council Licensure Examination for Registered Nurses (NCLEX-RN). Aims: The aim of this study was to examine the effect of tutoring on clinical judgment of undergraduate nursing students utilizing Lasater's Clinical Judgment Rubric (LCJR). This study also compared the clinical judgment of male and female nursing students and students from different semester levels. Methods: This quasi-experimental study utilized a single group pretest, posttest design. A convenience sample of n = 40 undergraduate nursing students from the Los Angeles County College of Nursing and Allied Health participated in the study. The participants underwent a pretest simulation, four sessions of the Clinical Judgment Model (CJM)-based tutoring, and a posttest simulation. Results: The posttest clinical judgment scores (35.70 ± 3.6) were significantly different from the pretest scores (25.78 ± 5.20). 
The tutoring had a significant effect on the clinical judgment of nursing students, t(39) = -11.64, n = 40, p < .001, at 95% CI of the mean difference. Linking Evidence To Action: Enhancing nursing students' clinical judgment is crucial to provide high-quality, safe patient care with improved health outcomes. The CJM-based tutoring is an effective strategy for developing clinical judgment in nursing students. This new teaching approach can train students to critically think, develop clinical judgment, and prepare for the complex healthcare environment. Therefore, nurse educators should focus on integrating clinical judgment into the prelicensure nursing program curriculum as a priority. Answer: Yes, clinical judgment of baseline severity and changes in psychopathology does depend on the patient population. This is evidenced by a study that linked the Clinical Global Impression (CGI) Scale and the Positive and Negative Syndrome Scale (PANSS) within a naturalistic sample of schizophrenia patients. The study found considerable differences in the linking of the CGI severity score and the PANSS total score when comparing less ill samples with study samples that fulfilled common inclusion criteria of randomized controlled trials. For instance, being considered "mildly ill" corresponded to a PANSS total score of 47 points in the less ill sample and 67 points in the study sample (PUBMED:21105273). Additionally, assessment procedures providing data on adolescents' functioning derived from different sources revealed less agreement than those derived from the same informant, indicating that the source of information can influence clinical judgment of severity (PUBMED:1950609). Furthermore, factors within the nurse, the patient, and the work environment influence each component of noticing, interpreting, and initiating an effective response to deteriorating patients, which are aspects of clinical judgment (PUBMED:36731308). Moreover, the study on Portuguese women explored the roles of self-judgment and fears of receiving compassion in the association between external shame and disordered eating, suggesting that individual differences and emotional regulation strategies can impact clinical judgment related to eating psychopathology (PUBMED:27939773). The implementation of Tanner's Clinical Judgment Model in the clinical setting also showed that clinical judgment scores can differ among newly graduated nurses, further indicating that clinical judgment can vary based on the training and experience of the healthcare provider (PUBMED:37722793). In summary, clinical judgment regarding the severity of psychopathology and its changes is influenced by the specific patient population, the source of information, individual differences among patients, and the healthcare provider's experience and training.
Instruction: Operative treatment of dislocated midshaft clavicular fractures: plate or intramedullary nail fixation? Abstracts: abstract_id: PUBMED:29887636 Screw Intramedullary Elastic Nail Fixation in Midshaft Clavicle Fractures: A Clinical Outcome in 36 Patients. Background: Surgical stabilization of displaced midshaft clavicle fracture can be achieved by an intramedullary nail or plate. When an intramedullary nail is used, one of the dreaded complications is migration of the nail. We have used a screw intramedullary device with a screw mechanism at one end which can get hold in the medial cancellous bone, thus preventing chances of nail migration. The aim of our study was to evaluate the clinical outcome following elastic stable screw intramedullary nailing for the fixation of midshaft clavicle fractures. Materials And Methods: 36 patients with midshaft clavicular fractures, who met inclusion criteria, were included in this retrospective study. There were 28 males and 8 females. The mean age was 36.6 years. Twenty-one patients were managed by closed reduction and fixation with a screw intramedullary nail. Fifteen patients required mini-open reduction. Followup examination was done at 1 month, 3 months, and 6 months using patient's subjective evaluation, functional outcome, radiographic assessment, and other complications. Results: Union was achieved at an average of 11.6 weeks in 31 cases and five patients went to delayed union. The average followup was 6 months. The average Constant score was 90%. Three patients had medial nail protrusion which required early removal after union. Conclusion: Screw intramedullary nail is a safe, minimally invasive surgical technique with a lower complication rate, faster return to daily activities, excellent cosmetic and good functional results, and can be used as an equally effective alternative to plate fixation in displaced midshaft clavicle fractures. abstract_id: PUBMED:25878304 Operative treatment of dislocated midshaft clavicular fractures: plate or intramedullary nail fixation? A randomized controlled trial. Background: Over the past decades, the operative treatment of displaced midshaft clavicular fractures has increased. The aim of this study was to compare short and midterm results of open reduction and plate fixation with those of intramedullary nailing for displaced midshaft clavicular fractures. Methods: A multicenter, randomized controlled trial was performed in four different hospitals. The study included 120 patients, eighteen to sixty-five years of age, treated with either open reduction and plate fixation (n = 58) or intramedullary nailing (n = 62). Preoperative and postoperative shoulder function scores and complications were documented until one year postoperatively. Significance was set at p < 0.05. Results: No significant differences in the Disabilities of the Arm, Shoulder and Hand (DASH) or Constant-Murley score (3.0 and 96.0 points for the plate group and 5.6 and 95.5 points for the nailing group) were noted between the two surgical interventions at six months postoperatively. Until six months after the surgery, the plate-fixation group experienced less disability than the nailing group as indicated by the area under the curve of the DASH scores for this time period (p = 0.02). The mean numbers of complications per patient, irrespective of their severity, were similar between the plate-fixation (0.67) and nailing (0.74) groups (p = 0.65).
Conclusions: The patients in the plate-fixation group recovered faster than the patients in the intramedullary nailing group, but the groups had similar results at six months postoperatively and at the time of final follow-up. The rate of complications requiring revision surgery was low. Implant-related complications occurred frequently and could often be treated by implant removal. abstract_id: PUBMED:32991430 Plate fixation versus intramedullary nail or Knowles pin fixation for displaced midshaft clavicle fractures: A meta-analysis of randomized controlled trials. Background: Plate fixation and intramedullary nail/Knowles pin fixation methods are commonly used to treat displaced midshaft clavicle fractures. However, the differences between these 2 methods are unclear. Objective: This meta-analysis aimed to compare plate fixation and intramedullary nail/Knowles pin fixation for displaced midshaft clavicle fractures. Methods: We searched PubMed, EBM reviews, and Ovid Medline online for studies related to comparison of plate fixation versus intramedullary nail/Knowles pin fixation for displaced midshaft clavicle fracture from inception to June 30, 2019. Relevant literature search, data extraction, and quality assessment were performed by 2 researchers independently. The methodological quality of all included studies was appraised using the Cochrane system for randomized trials. The RevMan 5.2 software was used for heterogeneity assessment, generating funnel plots, data synthesis, sensitivity analysis, and determining publication bias. The fixed-effects or random-effects model was used to calculate mean difference (MD)/relative risks (RRs) and 95% confidence intervals (CIs). Results: This meta-analysis included 839 patients from 12 randomized controlled trials. We found that compared to plate fixation, intramedullary nail/Knowles pin fixation yielded a higher shoulder Constant score [MD = -2.43, 95% CI (-3.46 to -1.41), P < .00001] and lower disabilities of the arm, shoulder and hand (DASH) score [MD = 2.98, 95% CI (0.16-5.81), P = .04], and lower infection rates [RR = 2.05, 95% CI (1.36-3.09), P = .003], operation time [MD = 20.20, 95% CI (10.80-29.60), P < .0001], incision size [MD = 6.09, 95% CI (4.54-7.65), P < .00001], and hospital stay [MD = 1.10, 95% CI (0.56-1.64), P < .00001] but with a higher removal rate [RR = 0.52, 95% CI (0.41-0.65), P < .00001] compared to plate fixation. There were no significant differences in nonunion, reintervention, or revision and refracture between these two methods. The limitation is that many studies did not report details of random sequence generation, and only English articles were enrolled in this meta-analysis. Conclusions: Intramedullary nail/Knowles pin fixation might be an optimum choice for treating displaced midshaft clavicle fractures, with similar performance in terms of the nonunion, reintervention, or revision and refracture, and better shoulder Constant and DASH scores, infection rates, and operative parameters. abstract_id: PUBMED:31650755 Effectiveness of Nice knot combined with elastic intramedullary nailing fixation in treatment of Robinson type 2B midshaft clavicular fracture in adults. Objective: To investigate the effectiveness of Nice knot combined with elastic intramedullary nailing fixation in treatment of Robinson type 2B midshaft clavicular fracture in adults.
Methods: Between March 2016 and January 2018, 20 patients with Robinson type 2B midshaft clavicular fractures were treated with reduction and internal fixation by Nice knot and elastic intramedullary nailing. There were 13 cases and 7 cases, with an average age of 43 years (range, 18-56 years). The causes of injury included traffic accidents in 6 cases, falling in 12 cases, and falling from height in 2 cases. The interval between injury and admission ranged from 1 hour to 2 days (mean, 3.2 hours). The fractures were classified as Robinson type 2B1 in 16 cases and type 2B2 in 4 cases. The length of incision, the operation time, the visual analogue scale (VAS) score on the 2nd day after operation, the fracture healing time, the postoperative shoulder function and the Disability of Arm Shoulder and Hand (DASH) score, the complications, and the time of second surgical removal of internal fixator and incision length were recorded. Results: The length of incision was 2-6 cm (mean, 4.7 cm). The operation time was 45-120 minutes (mean, 77.2 minutes). The VAS score was 1-5 (mean, 3.2) on the 2nd day after operation. All incisions healed by first intention and no infection or nerve injury occurred. All patients were followed up 12-32 months (mean, 18.6 months). All fractures healed with the healing time of 10-15 weeks (mean, 12.1 weeks). The Constant score was 92-98 (mean, 96.3) and DASH score was 0-6.4 (mean, 3.1). Bending of the elastic intramedullary nail with hypertrophic nonunion occurred in 1 case, and skin irritation by the elastic nail tail occurred in 1 case after operation. The internal fixators were removed at 12-26 months (mean, 14.6 months) after operation; the length of the removal incision was 1-2 cm (mean, 1.3 cm) and the operation time was 5-15 minutes (mean, 9.0 minutes). Conclusion: For the midshaft clavicular fracture in adults, the procedure of the Nice knot combined with elastic intramedullary nail has the advantages of a small incision, mild pain, rapid fracture healing, limited injury from the second operation, and avoidance of the risk of injury to the supraclavicular nerve, and can achieve good effectiveness. abstract_id: PUBMED:31027505 Biomechanical comparison between solid and cannulated intramedullary devices for midshaft clavicle fixation. Background: A method of closed reduction and internal fixation with cannulated screws was proposed as a surgical treatment of midshaft clavicle fractures. However, there are no mechanical studies about the cannulated screw used in the fixation of midshaft clavicle fracture. We conducted this study to compare the construct bending stiffness of a midshaft clavicle fracture fixed with a Knowles pin, cannulated screw, or reconstruction plate. In addition, purchase lengths of both intramedullary devices were measured. Methods: After transverse osteotomy over the midpoint for fracture simulation, eighteen synthetic clavicles were assigned to 3 groups and fixed with a reconstruction plate, Knowles pin, or cannulated screw. Purchase length was defined as the engaged length of the intramedullary portion of the two intramedullary devices. Stiffness, yield load, and maximum load in the cantilever bending test were calculated for each tested synthetic bone. Results: The Knowles pin group had a significantly longer average intramedullary purchase length compared with that of the cannulated screw group.
The construct stiffness in the reconstruction plate group (5.6 ± 0.9 N/mm) was higher than that of the intramedullary devices; the Knowles pin group (3.1 ± 0.6 N/mm) provided a greater construct stiffness than did the cannulated screw group (1.7 ± 0.4 N/mm) (p = 0.007). The cannulated screw group had the lowest yield and maximum load compared with the reconstruction plate and Knowles pin groups. Both the reconstruction plate and Knowles pin failed at the implant-bone interface. However, the cannulated screw group failed at the osteotomy site with broken implants. Conclusion: This study suggests that fixation of midshaft clavicle fractures with cannulated screws may lead to early failure due to inadequate mechanical strength. Ideal intramedullary clavicle devices should supply adequate intramedullary purchase lengths and mechanical strength. abstract_id: PUBMED:29628708 Management of displaced midshaft clavicle fractures in adolescent patients using intramedullary flexible nails: A case series. Introduction: Clavicle fractures are common injuries in adolescent patients. In this study we present our technique and results for treating non-comminuted displaced midshaft clavicle fractures using flexible intramedullary nails. Methods: A retrospective review of prospectively collected data using the electronic medical record was performed. Adolescent patients presenting to a level one pediatric trauma center with acute displaced non-comminuted clavicular shaft fractures who were treated with intramedullary flexible nails were included in this study. Results: Seven adolescent patients with non-comminuted displaced midshaft clavicle fractures were treated with flexible nails over a period of five and a half years with an average follow-up time of 10 months. The average age was 14.6 years (range 14-16), and a 2 mm nail was used in all cases. Closed reduction was obtained in five cases with only two cases needing open reduction to pass the nail. One patient had skin breakdown over the nail entry and had the nail removed in clinic two months after surgery; all other patients healed with no complications. Sports and full shoulder activity were allowed when radiographic healing was seen at an average of 8 weeks. Implant removal was performed in four of the seven patients. Conclusion: Flexible intramedullary nailing is an effective minimally invasive method for the treatment of displaced midshaft clavicle fractures in the adolescent population. The surgeon should be aware that a 2 mm nail is likely the optimal diameter, and the nail is difficult to pass beyond the lateral 2-3 cm of the distal segment due to canal narrowing, but passage further than this is not necessary to achieve stability and union. Following these pearls will allow the surgeon to successfully perform the procedure.
Intramedullary screws allow rapid return to activity and increased stability; however, compression forces may shorten metacarpals, producing non-anatomic reduction. The ExsoMed INnateTM threaded intramedullary nail (ExsoMed, Aliso Viejo, CA, USA) was developed with tall threads, a diameter fitting the metacarpal canal cortical purchase, stability, and no compression to prevent fracture displacement and shortening. We designed a prospective study to evaluate INnateTM nail fixation in metacarpal fractures. Visual analog scale (VAS) pain score, total active motion (TAM), radiographic union, and return to normal activity were recorded at each postoperative visit. Descriptive statistics were performed for endpoints. Fifty-eight patients were included with a median age of 38.7 years. The INnateTM nail was used in 80 fractured metacarpals. The 4th and 5th metacarpals were most frequently involved. The most common mechanisms of injury were blunt trauma and motor vehicle collisions. Approximately 31% of patients had multiple, concurrent fractures (16 patients required two nails; two required three nails). The mean follow-up was 9.9 weeks. The mean time to radiographic union was 9.1 weeks. The final visual analog scale (VAS) pain score was 1.1. The mean time to return to work and/or normal activities was 9.7 weeks, and the mean percentage of normal activity resumed was 89.1%. abstract_id: PUBMED:27068381 Plate fixation or intramedullary fixation for midshaft clavicle fractures: a systematic review and meta-analysis of randomized controlled trials and observational studies. Background: The last decade has shown a shift toward operative treatment of a subset of midshaft clavicle fractures. However, it is unclear whether there are differences between plate fixation and intramedullary fixation regarding complications and functional outcome. The aim of this systematic review and meta-analysis was to compare plate fixation and intramedullary fixation for midshaft clavicle fractures. Methods: The Medline, Embase, and Cochrane databases were searched for both randomized controlled trials and observational studies. The methodologic quality of all included studies was assessed using the Methodological Index for Non-Randomized Studies. Results: Twenty studies were included. Ten of the 20 included studies used a fracture classification. Seven of these studies reported exclusion of patients with comminuted fractures. No difference in the total re-intervention rate was found (odds ratio [OR], 1.21; 95% confidence interval [CI], 0.71 to 2.04). Major re-interventions occurred more often after plate fixation (OR, 1.88; 95% CI, 1.02 to 3.46). The mean implant removal rates were 38% after plate fixation and 73% after intramedullary fixation. Re-fracture after implant removal occurred more often after plate fixation (OR, 3.42; 95% CI, 1.12 to 10.42). The Constant-Murley scores showed no differences at both short term (mean difference, -1.18; 95% CI, -13.41 to 11.05) and long term (mean difference, 0.15; 95% CI, -1.57 to 1.87). No differences were observed regarding nonunion (OR, 1.50; 95% CI, 0.82 to 2.75). The rate of infections showed no differences when outlier studies were excluded (OR, 1.54; 95% CI, 0.88 to 2.69). Conclusion: Major re-intervention and re-fracture after implant removal occurred more frequently after plate fixation of non-comminuted, displaced midshaft clavicle fractures. No differences in terms of function and nonunion between plate fixation and intramedullary fixation were observed. 
abstract_id: PUBMED:38216916 Clinical effect of Nice knot-assisted minimally invasive titanium elastic nail fixation to treat Robinson 2B midshaft clavicular fracture. Background: The treatment of completely displaced midshaft clavicle fractures is still controversial, especially Robinson 2B fractures. Titanium elastic nail (TEN) fixation is a good option for simple fractures, but no reports exist on its use in complex fractures. This study aimed to present a surgical method using the Nice knot-assisted TEN fixation to treat Robinson 2B midshaft clavicular fractures. Methods: A retrospective analysis of 29 patients who underwent fixation with TEN and had a 1-year postoperative follow-up between 2016 and 2020 was performed. The fractures were classified as Robinson type 2B1 in 17 cases and type 2B2 in 12 cases. Length of the incision, postoperative shoulder function Disability of Arm Shoulder and Hand (DASH) score and Constant score, complication rate, and second surgical incision length were recorded. Results: The length of the incision was 2-6 cm (average 3.7 cm). All incisions healed by first intention, and no infection or nerve injury occurred. The Constant score was 92-100 (average 96) and the DASH score was 0-6.2 (mean, 2.64). TEN bending and hypertrophic nonunion occurred in one case (3.4%) and implant irritation occurred in four cases (13.8%). Fixation implants were removed at 12-26 months (mean, 14.6 months) after surgery, and the length of the second incision was 1-2.5 cm (average 1.3 cm). Conclusions: Intramedullary fixation by TEN is approved as a suitable surgical technique in clavicular fracture treatment. Nice knot-assisted fixation provides multifragmentary fracture stabilization, contributing to good fracture healing. Surgeons should consider this technique in treating Robinson 2B midshaft clavicular fractures. Trial Registration: Retrospectively registered. This study was approved by the Ethics Committee of Wuxi Ninth People's Hospital (LW20220021). abstract_id: PUBMED:34786145 Treatment effect of elastic intramedullary nail reduction and fixation on children with femoral neck fractures. Objective: To investigate the treatment effect of elastic intramedullary nail reduction and fixation on children with femoral neck fractures. Methods: Sixty-two children with femoral neck fractures in our hospital were equally divided into the open reduction and Kirschner wire internal fixation group (group O) and elastic intramedullary nail reduction and fixation group (group E). We investigated the differences in fracture healing time, intraoperative situation, clinical efficacy, Harris scores, secondary effects and bone healing time between group O and group E. Results: The intraoperative indexes such as operation time and blood loss of group E were significantly lower than those of group O (P<0.05). After the operation, the treatment effect of group E was slightly higher than that of group O, but there was no significant difference (P>0.05). The Harris score of group E was significantly higher than that of group O (P<0.05). The fracture healing time and bone healing time after the surgery in group E were significantly shorter than those in group O (P<0.05). The incidence of secondary effects in group E was lower than that in group O, but without significant difference (P>0.05).
Conclusion: Elastic intramedullary nail reduction and fixation has a better effect in the treatment of children with femoral neck fractures, which can accelerate fracture healing time and improve children's hip function after the operation. Answer: The operative treatment of dislocated midshaft clavicular fractures can be approached using either plate fixation or intramedullary nail fixation, and the choice between the two methods has been the subject of several studies. A randomized controlled trial comparing open reduction and plate fixation with intramedullary nailing found no significant differences in the Disabilities of the Arm, Shoulder and Hand (DASH) or Constant-Murley score between the two surgical interventions at six months postoperatively. However, the plate-fixation group experienced less disability than the nailing group until six months after surgery. The rate of complications requiring revision surgery was low for both groups, and implant-related complications were often treated by implant removal (PUBMED:25878304). A meta-analysis of randomized controlled trials concluded that intramedullary nail/Knowles pin fixation might be an optimal choice for treating displaced midshaft clavicle fractures, with similar performance in terms of nonunion, reintervention, or revision and refracture, and better shoulder constant and DASH scores, infection rates, and operative parameters compared to plate fixation (PUBMED:32991430). Another study using a screw intramedullary device reported it as a safe, minimally invasive technique with a lower complication rate, faster return to daily activities, excellent cosmetic and good functional results, and an effective alternative to plate fixation in displaced midshaft clavicle fractures (PUBMED:29887636). A systematic review and meta-analysis found that major re-interventions and re-fractures after implant removal occurred more frequently after plate fixation of non-comminuted, displaced midshaft clavicle fractures. No differences in terms of function and nonunion between plate fixation and intramedullary fixation were observed (PUBMED:27068381). In summary, both plate fixation and intramedullary nail fixation are viable options for the operative treatment of dislocated midshaft clavicular fractures. The choice of method may depend on various factors, including the specific fracture type, surgeon preference, and patient characteristics. Intramedullary nail fixation may offer some advantages in terms of shoulder function scores and lower complication rates, while plate fixation may be associated with a faster recovery in the initial postoperative period. However, the rates of major re-interventions and re-fractures after implant removal were found to be higher with plate fixation.
Instruction: Oncocytic versus mitochondrion-rich follicular thyroid tumours: should we make a difference? Abstracts: abstract_id: PUBMED:20002768 Oncocytic versus mitochondrion-rich follicular thyroid tumours: should we make a difference? Aims: To separate true oncocytic neoplasms from mitochondrion-rich non-oncocytic lesions based on the intracellular relationship between major cell organelles, and to establish the diagnostic and clinical relevance of this distinction. Methods And Results: Tissue samples from 276 follicular adenomas, 194 follicular carcinomas, 162 normal thyroids and 296 non-neoplastic lesions were classified as conventional, mitochondrion-rich or oncocytic based on the immunohistochemically assessed quantity and intracellular distribution of mitochondria and endoplasmic reticulum (ER) and nuclear position. Pathological and clinical features were compared among the groups. In oncocytes, densely packed mitochondria resulted in homogeneous immunolabelling of basal cytoplasmic regions, whereas ER and the nuclei were typically displaced to the apical position. This aberrant organelle distribution was not observed in non-oncocytes, which allowed reliable distinction between oncocytic and mitochondrion-rich lesions. Clinically, mitochondrial increase in non-oncocytic lesions was associated with neoplasia, malignancy and higher cancer recurrence rates. Similar correlation, albeit less pronounced, was observed within the oncocytic tumour group. By contrast, oncocytic change per se was not associated with neoplasia, malignancy or cancer aggressiveness. Conclusions: True oncocytic neoplasms can be distinguished from mitochondrion-rich non-oncocytic tumours based on aberrant distribution of all major cell organelles. This distinction has immediate clinical relevance and should be implemented in practice. abstract_id: PUBMED:29631022 Genomic and transcriptomic characterization of the mitochondrial-rich oncocytic phenotype on a thyroid carcinoma background. We conducted the first systematic omics study of the oncocytic phenotype in 488 papillary thyroid carcinomas (PTC) from The Cancer Genome Atlas. Oncocytic phenotype is secondary to PTC, being unrelated to several pathologic scores. The nuclear genome had low impact on this phenotype (except in specific copy number variation), which was mostly driven by the significant accumulation of mitochondrial DNA non-synonymous and frameshift mutations at high heteroplasmy levels. Energy and mitochondrial-related pathways were significantly enriched in oncocytic tumors that also displayed increased levels of expression for genes involved in autophagy and fusion of mitochondria. Our in vitro tests confirmed that autophagy is increased and functional while mitophagy is decreased in these tumors. abstract_id: PUBMED:34975765 Oncocytic Papillary Thyroid Carcinoma and Oncocytic Poorly Differentiated Thyroid Carcinoma: Clinical Features, Uptake, and Response to Radioactive Iodine Therapy, and Outcome. Objective: The main objective of this study was to review the clinicopathologic characteristics and outcome of patients with oncocytic papillary thyroid carcinoma (PTC) and oncocytic poorly differentiated thyroid carcinoma (PDTC). The secondary objective was to evaluate the prevalence and outcomes of RAI use in this population. Methods: Patients with oncocytic PTC and PDTC who were treated at a quaternary cancer centre between 2002 and 2017 were retrospectively identified from an institutional database. 
All patients had an expert pathology review to ensure consistent reporting and definition. The cumulative incidence function was used to analyse locoregional failure (LRF) and distant metastasis (DM) rates. Univariable analysis (UVA) was used to assess clinical predictors of outcome. Results: In total, 263 patients were included (PTC [n=218], PDTC [n=45]) with a median follow up of 4.4 years (range: 0-26.7 years). Patients with oncocytic PTC had a 5/10-year incidence of LRF and DM, respectively, of 2.7%/5.6% and 3.4%/4.5%. On UVA, there was an increased risk of DM in PTC tumors with widely invasive growth (HR 17.1; p<0.001), extra-thyroidal extension (HR 24.95; p<0.001), angioinvasion (HR 32.58; p=0.002), focal dedifferentiation (HR 19.57, p<0.001), and focal hobnail cell change (HR 8.67, p=0.042). There was additionally an increased risk of DM seen in male PTC patients (HR 5.5, p=0.03). The use of RAI was more common in patients with larger tumors, angioinvasion, and widely invasive disease. RAI was also used in the management of DM and 43% of patients with oncocytic PTC had RAI-avid metastatic disease. Patients with oncocytic PDTC had a higher rate of 5/10-year incidence of LRF and DM (21.4%/45.4%; 11.4%/40.4%, respectively). Patients with extra-thyroidal extension had an increased risk of DM (HR 5.52, p=0.023) as did those with angioinvasion. Of the patients with oncocytic PDTC who received RAI for the treatment of DM, 40% had RAI-avid disease. Conclusion: We present a large homogenous cohort of patients with oncocytic PTC and PDTC, with consistent pathologic reporting and definition. Patients with oncocytic PTC have excellent clinical outcomes and similar risk factors for recurrence as their non-oncocytic counterparts (angioinvasion, large tumor size, extra-thyroidal extension, and focal dedifferentiation). Compared with oncocytic PTCs, the adverse biology of oncocytic PDTCs is supported with increased frequency of DM and lower uptake of RAI. abstract_id: PUBMED:17517629 Disruptive mitochondrial DNA mutations in complex I subunits are markers of oncocytic phenotype in thyroid tumors. Oncocytic tumors are a distinctive class of proliferative lesions composed of cells with a striking degree of mitochondrial hyperplasia that are particularly frequent in the thyroid gland. To understand whether specific mitochondrial DNA (mtDNA) mutations are associated with the accumulation of mitochondria, we sequenced the entire mtDNA in 50 oncocytic lesions (45 thyroid tumors of epithelial cell derivation and 5 mitochondrion-rich breast tumors) and 52 control cases (21 nononcocytic thyroid tumors, 15 breast carcinomas, and 16 gliomas) by using recently developed technology that allows specific and reliable amplification of the whole mtDNA with quick mutation scanning. Thirteen oncocytic lesions (26%) presented disruptive mutations (nonsense or frameshift), whereas only two samples (3.8%) presented such mutations in the nononcocytic control group. In one case with multiple thyroid nodules analyzed separately, a disruptive mutation was found in the only nodule with oncocytic features. In one of the five mitochondrion-rich breast tumors, a disruptive mutation was identified. All disruptive mutations were found in complex I subunit genes, and the association between these mutations and the oncocytic phenotype was statistically significant (P=0.001).
To study the pathogenicity of these mitochondrial mutations, primary cultures from oncocytic tumors and corresponding normal tissues were established. Electron microscopy and biochemical and molecular analyses showed that primary cultures derived from tumors bearing disruptive mutations failed to maintain the mutations and the oncocytic phenotype. We conclude that disruptive mutations in complex I subunits are markers of thyroid oncocytic tumors. abstract_id: PUBMED:36928735 Top Ten Oncocytic Head and Neck Lesions to Contemplate. Background: Oncocytes are a component of many metaplastic and neoplastic lesions throughout the head and neck area, primarily originating in salivary/seromucinous glands and the thyroid gland. In addition, other lesions can contain cells that mimic oncocytes (pseudo-oncocytes); these can be of epithelial or non-epithelial origin. Methods: Review article. Results: Oncocytic metaplasia is common in seromucinous glands throughout the upper aerodigestive tract, most notably in the oral cavity, nasopharynx and larynx. The main oncocytic salivary gland neoplasms are Warthin tumor and oncocytoma. Infarction of Warthin tumor may lead to recognition difficulties. Oncocytic subtypes of mucoepidermoid carcinoma and intraductal carcinoma have morphologic and immunohistochemical features that allow distinction from major oncocytic entities. Oncocytic thyroid tumors include adenoma and carcinoma (follicular, papillary and medullary), along with poorly differentiated tumors. Oncocytic papillary sinonasal and middle ear tumors must be distinguished from low-grade adenocarcinomas. Pseudo-oncocytic entities include paraganglioma, Langerhans cell histiocytosis, giant cell tumor, rhabdomyoma, and metastatic tumors. Conclusions: Correct diagnosis of oncocytic head and neck lesions requires a knowledge of the spectrum of possible entities, their characteristic sites of occurrence, architecture, histomorphology, and immunohistochemistry. Oncocytic subtypes of several newly described entities are now recognized. Both epithelial and non-epithelial mimics of oncocytes exist. The molecular features of oncocytic tumors can be helpful in their diagnosis and in understanding their pathogenesis. abstract_id: PUBMED:34177813 Inherited Thyroid Tumors With Oncocytic Change. Familial non-medullary thyroid carcinoma (FNMTC) corresponds to 5-10% of all follicular cell-derived thyroid carcinomas (FCDTC). Oncocytic thyroid tumors have an increased incidence in the familial context in comparison with sporadic FCDTC, encompassing benign and malignant tumors in the same family presenting with some extent of cell oxyphilia. This has triggered the interest of our and other groups to clarify the oncocytic change, looking for genetic markers that could explain the emergence of this phenotype in thyroid benign and malignant lesions, focusing on familial aggregation. Despite some advances regarding the identification of the gene associated with retinoic and interferon-induced mortality 19 (GRIM-19), as one of the key candidate genes affected in the "Tumor with Cell Oxyphilia" (TCO) locus, most of the mutations follow a pattern of "private mutations", almost exclusive to one family. Moreover, no causative genetic alterations have been identified so far in most families. The incomplete penetrance of the disease, the diverse benign and malignant phenotypes in the affected familial members and the variable syndromic associations create an additional layer of complexity for studying the genetic alterations in oncocytic tumors.
In the present review, we summarized the available evidence supporting genomic-based mechanisms for the oncocytic change, particularly in the context of FNMTC. We have also addressed the challenges and gaps in the aforementioned mechanisms, as well as molecular clues that can explain, at least partially, the phenotype of oncocytic tumors and the respective clinico-pathological behavior. Finally, we pointed to areas of further investigation in the field of oncocytic (F)NMTC with translational potential in terms of therapy. abstract_id: PUBMED:36346459 Significance of oncocytic features in poorly differentiated thyroid carcinoma - a bi-institutional experience. Poorly differentiated thyroid carcinoma (PDTC), defined by Turin criteria, comprises a subset of high-grade follicular-derived thyroid carcinomas with intermediate prognosis. While differentiated oncocytic thyroid carcinomas demonstrate clinicopathologic and genetic differences compared to their non-oncocytic counterparts, similar data are limited in oncocytic (Hurthle) PDTCs (OPDTCs). Here, we assessed the impact of various oncocytic cut-offs in PDTCs on clinical, histologic and survival parameters. Our bi-institutional cohort comprised 210 primary PDTCs with available slides reviewed by at least one pathologist. Histologic features, including oncocytic fraction, were recorded. Clinicopathologic data were obtained, including overall survival (OS), disease-free survival (DFS), disease-specific survival (DSS), locoregional recurrence-free survival (LRRFS), and distant metastasis-free survival (DMFS). Radioactive iodine avidity data were available for 125 PDTCs based on postoperative whole-body scanning. Within our cohort, 39.0% of PDTCs had any oncocytic component, with 24.8% meeting the 75% World Health Organization (WHO) oncocytic definition. Any oncocytic component and a > 25% oncocytic cut-off correlated with decreased DSS and LRRFS, respectively, compared to non-oncocytic PDTCs (NOPDTCs) on univariate and multivariate analysis. The 100% oncocytic cut-off was significant for DSS on univariate analysis but a non-significant trend on multivariate analysis. Any oncocytic cut-off (100%, > 75%, > 50%, > 25%, or > 0%) conferred higher radioactive iodine (RAI)-refractoriness to OPDTCs compared to NOPDTCs. NF1 and PTEN alterations were enriched in OPDTCs (40% vs. 0%, and 60% vs. 8%, respectively), whereas NRAS mutations were frequent in NOPDTCs (47% vs. 7%). Among PDTCs, the presence of oncocytes led to a downward trend in all outcome parameters, especially for DSS and LRRFS. OPDTCs were enriched in NF1 and PTEN mutations. Consistently, all oncocytic cut-offs were associated with RAI-refractoriness. Accordingly, additional studies are needed to reassess the current 75% cut-off used to define oncocytic thyroid lesions. abstract_id: PUBMED:27385138 Malignancy risk of oncocytic changes in thyroid nodules: who should we offer surgery to? Background: Oncocytic (Hürthle) cell change in fine-needle aspiration biopsy (FNAB) remains challenging for surgeons. Surgical treatment is recommended for oncocytic change in FNAB, since it can sometimes be a struggle to determine the nature of thyroid nodules. We aimed to investigate the clinical significance of oncocytic changes in FNAB in terms of management of patients. Methods: The FNAB samples of 172 patients with thyroid nodules were reviewed.
Of these, 39 patients with cytologic findings of oncocytic changes on FNAB [POC: predominance of oncocytic cells; SFON-H: suspicious for follicular or oncocytic neoplasm (Hürthle cell type); SM-O: suspicious for malignancy, papillary or follicular carcinoma (oncocytic variant)] were included. Results: FNAB demonstrated POC in 14 (35.8%), SFON-H in 15 (38.4%), and SM-O in 10 (25.6%) patients. The overall malignancy rate was 35.8% (n = 14). Clinical and laboratory data were not found to be associated with thyroid cancer, while nodule size was significantly larger in patients with thyroid malignancy (15.2 versus 23.3 mm, p = 0.032). Regarding FNAB results including oncocytic changes, the rate of malignancy was significantly different and almost three-fold higher in nodules classified as SFON-H and SM-O [48% versus 14.2% with POC, p = 0.044]. Besides, there was a positive correlation between SFON-H and SM-O cytology and malignancy (p = 0.036, r = 0.337). Conclusions: It is hard to discern the significance of oncocytic changes in an FNAB report and to determine an optimal approach as a surgeon. We recommend surgery for patients with an FNAB showing SFON-H or SM-O, whereas POC is better followed up. abstract_id: PUBMED:37168165 Encapsulated Papillary Oncocytic Neoplasm: A Newly Recognized Hurthle Cell Neoplasm With Unique Morphology. Diagnosis of oncocytic neoplasm of the thyroid gland can be challenging, especially on fine needle aspiration biopsy (FNAB), given the wide differential diagnoses. In this report, we present the case of a 66-year-old male with an incidental thyroid nodule identified on imaging. In this case, identification of the distinctive cytologic features led to the diagnosis of papillary oncocytic neoplasm on FNAB and helped to decide the appropriate surgical management. To date, the cytomorphologic features of this lesion are not well studied and established, with only a single case report in the literature. Herein, we describe the characteristic cytomorphologic and immunophenotypic features of this neoplasm. The recognition of the unique cytomorphologic features and awareness of the non-classic expression of the thyroid lineage markers will help in accurate diagnosis and management of this entity. abstract_id: PUBMED:27659048 Cardiolipins Are Biomarkers of Mitochondria-Rich Thyroid Oncocytic Tumors. Oncocytic tumors are characterized by an excessive eosinophilic, granular cytoplasm due to aberrant accumulation of mitochondria. Mutations in mitochondrial DNA occur in oncocytic thyroid tumors, but there is no information about their lipid composition, which might reveal candidate theranostic molecules. Here, we used desorption electrospray ionization mass spectrometry (DESI-MS) to image and chemically characterize the lipid composition of oncocytic thyroid tumors, as compared with nononcocytic thyroid tumors and normal thyroid samples. We identified a novel molecular signature of oncocytic tumors characterized by an abnormally high abundance and chemical diversity of cardiolipins (CL), including many oxidized species. DESI-MS imaging and IHC experiments confirmed that the spatial distribution of CLs overlapped with regions of accumulation of mitochondria-rich oncocytic cells. Fluorescent imaging and mitochondrial isolation showed that both mitochondrial accumulation and alteration in CL composition of mitochondria occurred in oncocytic tumor cells, thus contributing to the aberrant molecular signatures detected.
A total of 219 molecular ions, including CLs, other glycerophospholipids, fatty acids, and metabolites, were found at increased or decreased abundance in oncocytic, nononcocytic, or normal thyroid tissues. Our findings suggest new candidate targets for clinical and therapeutic use against oncocytic tumors. Cancer Res; 76(22); 6588-97. ©2016 AACR. Answer: Yes, a distinction should be made between oncocytic and mitochondrion-rich follicular thyroid tumors due to their differing pathological and clinical features. True oncocytic neoplasms can be reliably distinguished from mitochondrion-rich non-oncocytic tumors based on the aberrant distribution of major cell organelles, such as densely packed mitochondria resulting in homogeneous immunolabelling of basal cytoplasmic regions, with endoplasmic reticulum (ER) and nuclei typically displaced to the apical position. This aberrant organelle distribution is not observed in non-oncocytic lesions (PUBMED:20002768). Clinically, an increase in mitochondria in non-oncocytic lesions is associated with neoplasia, malignancy, and higher cancer recurrence rates. Although a similar correlation is observed within the oncocytic tumor group, it is less pronounced. Oncocytic change per se is not associated with neoplasia, malignancy, or cancer aggressiveness (PUBMED:20002768). Moreover, genomic and transcriptomic characterization has shown that the oncocytic phenotype in thyroid carcinoma is mostly driven by the significant accumulation of mitochondrial DNA non-synonymous and frameshift mutations at high heteroplasmy levels, affecting energy and mitochondrial-related pathways. This phenotype also displays increased levels of expression for genes involved in autophagy and fusion of mitochondria (PUBMED:29631022). Additionally, oncocytic papillary thyroid carcinoma (PTC) and oncocytic poorly differentiated thyroid carcinoma (PDTC) demonstrate distinct clinical features, uptake, and response to radioactive iodine therapy, and outcomes compared to their non-oncocytic counterparts. Oncocytic PTCs have excellent clinical outcomes with similar risk factors for recurrence, whereas oncocytic PDTCs show increased frequency of distant metastasis and lower uptake of radioactive iodine (PUBMED:34975765). Disruptive mitochondrial DNA mutations in complex I subunits are markers of the oncocytic phenotype in thyroid tumors, further supporting the unique nature of these tumors (PUBMED:17517629). In conclusion, the distinction between oncocytic and mitochondrion-rich follicular thyroid tumors has immediate clinical relevance and should be implemented in practice, as it impacts diagnosis, prognosis, and treatment strategies (PUBMED:20002768).
Instruction: Paediatric and adolescent horse-related injuries: does the mechanism of injury justify a trauma response? Abstracts: abstract_id: PUBMED:18782209 Paediatric and adolescent horse-related injuries: does the mechanism of injury justify a trauma response? Objective: To identify the frequency, variety and disposition of horse-related injury presentations to the ED and to use this information to evaluate the existing institutional trauma team activation criteria following horse-related injuries. Methods: A retrospective case analysis was performed of all horse-related injury presentations to the ED of Women's and Children's Hospital, Adelaide, Australia, in the 5-year period between January 1999 and December 2003. Results: A total of 186 children presented with horse-related injuries during the 5-year study period. The median age of injury was 9 years (range 1-17 years), with 81% of presentations female and 60% of patients hospitalized. The mechanism of injury was divided into four groups: 148 falls (79%), 28 kicks (15%), 7 tramples (4%) and 5 bites (3%). There was one death. Seven presentations rated an Injury Severity Score >15, with full trauma team activation occurring for two of these presentations. Conclusion: Although horse-related injury presentations are uncommon, severe injuries do occur. Patients presenting with severe horse-related injuries do not always activate a full trauma team response based on current trauma team activation criteria. These severe injury presentations are supported by a limited trauma team response, which activates on the mechanism of injury. The effectiveness of this as a contingency system needs to be evaluated. abstract_id: PUBMED:28268253 Paediatric horse-related trauma. Aim: This retrospective cohort study reported on the epidemiology of horse-related injuries for patients presenting to the only tertiary paediatric trauma hospital in Queensland. The secondary outcome was to examine the use of helmets and adult supervision. Traumatic brain injury (TBI) was examined in relation to helmet use. Morbidity and mortality were also recorded. Methods: Included were all patients presenting with any horse-related trauma to the Royal Children's Hospital in Brisbane from January 2008 to August 2014. Data were retrospectively collected on patient demographics, hospital length of stay (LOS), mechanism of injury (MOI), safety precautions taken, diagnoses and surgical procedures performed. Results: Included in the analysis were 187 incidents involving 171 patients. Most patients were aged 12-14 years (36.9%) and female (84.5%). The most common MOI were falls while riding horses (97.1%). Mild TBI (24.6%) and upper limb fractures (20.9%) were common injuries sustained. Patients who wore helmets had significantly reduced hospital LOS and severity of TBI when compared with those who did not wear helmets (P < 0.001 and P = 0.028, respectively). Morbidity was reported in 7.5% of patients. There were three deaths in Queensland. Conclusion: Helmet use is recommended for non-riders when handling horses, in addition to being a compulsory requirement whilst horse riding. Prompts in documentation may assist doctors to record the use of safety attire and adult supervision. This will allow future studies to further investigate these factors in relation to clinical outcomes. abstract_id: PUBMED:37160391 Comparison of injury patterns and interventions between adolescent, adult and paediatric trauma cases: a cross-sectional review of TARN data.
Objectives: The aim is to compare adolescent (10-24.99 years) trauma patterns and interventions to adult (≥25) and paediatric cases (<10) and to identify any transition points. Design And Setting: Data were collected from the Trauma and Audit Research Network (TARN) over a 10-year period. We conducted a retrospective cross-sectional analysis. Participants: After exclusions there were 505 162 TARN-eligible cases. Primary And Secondary Outcome Measures: To compare adolescent trauma patterns and interventions to those in paediatric and adult cohorts, and to identify transition points for mechanism of injury (MOI) and interventions by individual year over the adolescent age range (10-24.99). Results: Road traffic accidents are the most common MOI in the adolescent group, in contrast to both the paediatric and adult groups, where falls <2 m are most common. Violence-related injuries (shootings and stabbings) are more common in the adolescent group, 9.4% compared with 0.3% and 1.5% in the paediatric and adult groups, respectively. The adolescent grouping had the highest median Injury Severity Score (ISS) and the highest proportion of interventions. The proportion of cases due to stabbing peaked at age 17 (11.8%), becoming the second most common MOI. The median ISS peaked at 13 at age 18. The percentage of cases that fulfil the definition of polytrauma enters double figures (11.8%) at age 15, reaching a peak of 17.6% at age 18. The use of blood products within the first 6 hours remains around 2% (1.6%-2.8%) until age 15 (3.4%), increasing to 4.7% at age 16. Conclusions: Trauma patterns are more closely aligned between the adult and paediatric cohorts than with the adolescent cohort. The highest proportion of trauma interventions occurs in the adolescent population. Analysing the adolescent cohort by year of age identified some common points at which descriptors or outcomes altered in frequency, predominantly between the ages of 15 and 17 years. abstract_id: PUBMED:6971862 Brainstem auditory evoked response in the diagnosis of inner ear injury in the horse. Brainstem auditory evoked response (BAER) testing was done to evaluate inner ear/VIIIth cranial nerve (CN8) function in the horse. The BAER test consisted of stimulating the auditory system with clicks and recording far-field responses of the brainstem auditory components via cutaneous electrodes and a signal averaging system. The normal response was shown to be a series of waves occurring within the first 10 msec after the stimulus click. Functional loss of the auditory receptor organ (cochlea) or CN8 results in loss of the entire response on the side of the injury. Because of the anatomic relationships of the peripheral auditory and vestibular systems, trauma to one will injure the other. Therefore, auditory testing (BAER tests) may be used to advantage in the diagnosis of peripheral vestibular disease. The BAER test was used in a horse that had signs suggestive of vestibular dysfunction or a brain lesion. The test helped to demonstrate a unilateral inner ear/CN8 lesion and to discount the probability of a more central lesion. abstract_id: PUBMED:18696082 Analysis of horse-related injuries in children. Purpose: The purpose of the present study was to investigate factors affecting the nature, characteristics, severity and outcome of horseback and horse care injuries in paediatric patients and to create guidelines for injury prevention. Methods: Detailed clinical records of 265 children who sustained horse-riding-related injuries were analysed.
Questionnaires were mailed to provide follow-up information for patients who had been treated in either the Department of Paediatric Surgery in Pécs, Hungary, or the Department of Paediatric Surgery in Graz, Austria, between 1999 and 2003. Those 112 children (42%) who answered the questionnaire were included in the study and further analyses were performed. Results: The female-to-male ratio of the 112 patients was 101/11. Trauma that occurred during horseback riding accounted for 76.8% of all cases; these injuries represented more severe cases compared with those that happened while handling a horse (23.2%). The mechanism as well as the localisation of injury displayed a close association with age. Conclusions: Prevention strategies targeting horse-related injuries in children should appreciate the age-dependent nature of injury as well as the fact that injury severity is not necessarily associated with the experience of the rider. abstract_id: PUBMED:26696087 Paediatric trauma resuscitation: an update. Purpose: Paediatric trauma is the leading cause of mortality in children. Paediatric trauma resuscitation is the first and foremost step towards a successful treatment and subsequent recovery. Significant advances have taken place in recent years in this field of trauma care. Methods: In this narrative review, we attempt to summarise recent developments in the concepts of fluid resuscitation, massive transfusion, permissive resuscitation, management of coagulopathy and use of tranexamic acid, literature pertaining to implementation of transfusion protocols in the paediatric population, and education related to paediatric trauma resuscitation. Results/conclusions: The current evidence, although emerging, is still sparse, and high-quality studies are needed to shed more light on most of the above domains of resuscitation. abstract_id: PUBMED:33569097 Paediatric Ankle Fractures: Guidelines to Management. Introduction: Paediatric ankle fractures represent about 5% of all paediatric fractures. The ankle is the most common site of physeal injury in the lower limb, accounting for approximately 15-20% of all physeal injuries. This article reviews the literature on this common injury, which still has many controversial areas, and gives guidelines to management based on the existing evidence along with clinical experience gained from a Level I trauma center. Classification: The original Salter-Harris Classification with the additional types is a good system to guide management. The transitional fractures form a separate group with technically two broad types: biplane and triplane injuries. Though there are many sub-types in this group, with some popular eponymous fractures, the treatment principles remain the same. Management: A very low threshold for CT scan is recommended when there is a clinical suspicion of fracture with a negative radiograph or an intra-articular fracture in the radiograph, especially in the adolescent age group. CT scan helps in accurate quantification of the intra-articular displacement and also helps to comprehend the fracture geometry better. All intra-articular fractures with displacement > 2 mm need perfect anatomical reduction and stabilization. Assisted closed reduction and percutaneous fixation, along with arthrogram to confirm articular congruity, is acceptable as long as the reduction is perfect.
Irrespective of the method of treatment, in children with more than 2 years of growth remaining it is important to counsel regarding the high incidence of premature physeal closure and the need for regular follow-up. abstract_id: PUBMED:21820655 Epidemiology and pattern of paediatric and adolescent trauma deaths in a level 1 trauma centre in Benin City, Nigeria. Background: Trauma is a common cause of death amongst children/adolescents, and data on its epidemiology and pattern are crucial for policy formulation. The aim of this study was to determine the epidemiology and pattern of paediatric/adolescent trauma death in a Nigerian referral trauma centre. Methods: The clinical and autopsy data of all paediatric/adolescent trauma deaths at the University of Benin Teaching Hospital between 1999 and 2010 were analysed in a retrospective study. Results: Of 905 trauma-related deaths, 78 (9%) involved children/adolescents, who comprised 49 males and 29 females, with a male/female ratio of 1.7:1 and a mean age of 9.2 ± 5 years (range <1-18 years). The injury-arrival time varied from 1 h to 4 days (mean 18 h). Thirteen (17%) cases were dead on arrival (DOA), and the majority of the deaths occurred within the first week of admission. Road traffic accident (RTA), accounting for 61 (78%) cases, was the leading cause of trauma death. Other causes included burns, eight (10%); gunshot injury, five (6%); and stab and sport injuries, two (3%) cases each. Head injury, which occurred in 44 (56%) cases, was the commonest cause of death, followed by haemorrhagic shock in 25 (32%), hypovolaemic shock in five (6%), septic shock in three (4%) and spinal cord injury in one (1%). Conclusion: Head injury following RTA was the major cause of paediatric/adolescent trauma deaths. Increased road safety campaigns, appropriate injury control policies, legislation, enforcement, development of a high-quality trauma system, and emergency medical services are advocated. abstract_id: PUBMED:30154141 Paediatric traumatic cardiac arrest: the development of an algorithm to guide recognition, management and decisions to terminate resuscitation. Introduction: Paediatric traumatic cardiac arrest (TCA) is a high-acuity, low-frequency event. Traditionally, survival from TCA has been reported as low, with some believing resuscitation is futile. Within the adult population, there is growing evidence to suggest that with early and aggressive correction of reversible causes, survival from TCA may be comparable with that seen from medical out-of-hospital cardiac arrests. Key to this survival has been the adoption of a standardised approach to resuscitation. The aim of this study was, by a process of consensus, to develop an algorithm for the management of paediatric TCA for adoption in the UK. Methods: A modified consensus development meeting of UK experts involved in the management of paediatric TCA was held. Statements discussed at the meeting were drawn from those that did not reach consensus (positive/negative) from a linked three-round online Delphi study. 19 statements relating to the diagnosis, management and futility of paediatric TCA were initially discussed in small groups before each participant anonymously recorded their agreement with the statement using 'yes', 'no' or 'don't know'. In keeping with our Delphi study, consensus was set a priori at 70%. Statements reaching consensus were included in the proposed algorithm. Results: 41 participants attended the meeting.
Of the 19 statements discussed, 13 reached positive consensus and were included in the algorithm. A single statement regarding initial rescue breaths reached negative consensus and was excluded. Consensus was not reached for five statements, including the use of vasopressors and thoracotomy for haemorrhage control in blunt trauma. Conclusion: In an attempt to standardise our approach to the management of paediatric TCA and to improve outcomes, we present the first consensus-based algorithm specific to the paediatric population. While this algorithm was developed for adoption in the UK, it may be applicable to similar healthcare systems internationally. abstract_id: PUBMED:24360669 Analgesia in the management of paediatric and adolescent trauma during the resuscitative phase: the role of the pediatric trauma centre. Background: The objective of this study was to evaluate the use of analgesia in the resuscitative phase of severely injured children and adolescents. Methods: A retrospective cohort of paediatric (age <18 years), severely injured (ISS≥12) patients was identified from the London Health Sciences Centre's Trauma Registry from 2007 to 2010. Variables were compared between Analgesia and Non-analgesia groups with Pearson Chi-square and Mann-Whitney U tests. Resuscitative analgesia use was assessed through multivariable logistic regression controlling for age, gender, mechanism, arrival and Trauma Team Activation (TTA). Results: Analgesia was used in 32% of cases. Univariate analysis did not reveal any differences in gender, age, injury type, injury profile and arrival patterns. Significant differences were found, with analgesia used more frequently in patients injured in a motor vehicle collision (58% vs. 42%, p=0.026) and in those having parents in the resuscitation room (17% vs. 6%, p=0.01). Analgesia patients were more severely injured (median ISS 22 vs. 17, p=0.027) and had 2.25 times more TTA (39% vs. 17%). Logistic regression revealed that patients arriving directly to a trauma centre had a higher incidence of receiving analgesia (OR 2.01, 95% CI: 1.03-3.93), as did TTA (OR 2.18, 95% CI: 1.01-4.73) and having parents in the resuscitation room (3.56, 95% CI: 1.23-10.33). Narcotics were most commonly used (85%), followed by benzodiazepines (16%), with 66% given during the primary survey. Conclusion: Use of analgesia is important in the acute management of paediatric trauma. Direct presentation to a level I trauma centre, TTA and the presence of parents lead to higher appropriate use of analgesia in paediatric trauma resuscitation. Answer: The mechanism of injury in pediatric and adolescent horse-related injuries does appear to justify a trauma response, as severe injuries can occur from such incidents. A retrospective case analysis of horse-related injury presentations to the ED of Women's and Children's Hospital in Adelaide, Australia, found that while horse-related injury presentations were uncommon, severe injuries did occur. The study noted that patients presenting with severe horse-related injuries did not always activate a full trauma team response based on current trauma team activation criteria, suggesting that the mechanism of injury should be considered more carefully when deciding on the level of trauma response (PUBMED:18782209). Another study conducted at the Royal Children's Hospital in Brisbane, Australia, reported that the most common mechanism of injury (MOI) was falls while riding horses, which accounted for 97.1% of incidents.
The study also found that helmet use significantly reduced the hospital length of stay and severity of traumatic brain injury (TBI), indicating that the mechanism of injury in horse-related accidents can be severe enough to warrant the use of protective gear and potentially a trauma response (PUBMED:28268253). Furthermore, a cross-sectional review of TARN data comparing injury patterns and interventions between adolescent, adult, and pediatric trauma cases highlighted that trauma patterns and interventions are more closely aligned between adult and pediatric cohorts than adolescence, with the highest proportion of trauma interventions occurring in the adolescent population. This suggests that the adolescent age group may have distinct trauma response needs that should be considered when evaluating the mechanism of injury in horse-related accidents (PUBMED:37160391). In summary, the mechanism of injury in pediatric and adolescent horse-related accidents does justify a trauma response, as evidenced by the potential for severe injuries, the importance of protective gear like helmets, and the distinct trauma patterns observed in the adolescent population. These findings support the need for careful evaluation of the mechanism of injury when activating trauma team responses for horse-related injuries in these age groups.
Instruction: Are hospitals prepared to support newborn survival? Abstracts: abstract_id: PUBMED:19695001 Are hospitals prepared to support newborn survival? - An evaluation of eight first-referral level hospitals in Kenya. Objective: To assess the availability of resources that support the provision of basic neonatal care in eight first-referral level (district) hospitals in Kenya. Methods: We selected two hospitals each from four of Kenya's eight provinces with the aim of representing the diversity of this part of the health system in Kenya. We created a checklist of 53 indicator items necessary for providing essential basic care to newborns and assessed their availability at each of the eight hospitals by direct observation, and then compared our observations with the opinions of health workers providing care to newborns on recent availability for some items, using a self-administered structured questionnaire. Results: The hospitals surveyed were often unable to maintain a safe hygienic environment for patients and health care workers; staffing was insufficient and sometimes poorly organised to support the provision of care; some key equipment, laboratory tests, drugs and consumables were not available while patient management guidelines were missing in all sites. Conclusion: Hospitals appear relatively poorly prepared to fill their proposed role in ensuring newborn survival. More effective interventions are needed to improve them to meet the special needs of this at-risk group. abstract_id: PUBMED:34933993 Health System Redesign to Shift to Hospital Delivery for Maternal and Newborn Survival: Feasibility Assessment in Kakamega County, Kenya. Maternal and newborn health (MNH) service delivery redesign aims to improve maternal and newborn survival by shifting deliveries from poorly equipped primary care facilities to adequately prepared designated delivery hospitals. We assess the feasibility of such a model in Kakamega County, Kenya, by determining the capacity of hospitals to provide services under the redesigned model and the acceptability of the concept to providers and users. We find many existing system assets to implement redesign, including political will to improve MNH outcomes, a strong base of support among providers and users, and a good geographic spread of facilities to support implementation. There are nonetheless health workforce gaps, infrastructure deficits, and transportation challenges that would need to be addressed ahead of policy rollout. Implementing MNH redesign would require careful planning to limit unintended consequences and rigorous evaluation to assess impact and inform scale-up. abstract_id: PUBMED:27324266 A Safe Haven: Investigating Social-Support Figures as Prepared Safety Stimuli. Although fear-conditioning research has demonstrated that certain survival-threatening stimuli, namely prepared fear stimuli, are readily associated with fearful events, little research has explored whether a parallel category exists for safety stimuli. We examined whether social-support figures, who have typically benefited survival, can serve as prepared safety stimuli, a category that has not been explored previously. Across three experiments, we uncovered three key findings. First, social-support figures were less readily associated with fear than were strangers or neutral stimuli (in a retardation-of-acquisition test). 
Second, social-support stimuli inhibited conditional fear responses to other cues (in a summation test), and this inhibition continued even after the support stimulus was removed. Finally, these effects were not simply due to familiarity or reward because both familiar and rewarding stimuli were readily associated with fear, whereas social-support stimuli were not. These findings suggest that social-support figures are one category of prepared safety stimuli that may have long-lasting effects on fear-learning processes. abstract_id: PUBMED:35063185 Reclassifying the Unique Inhibitory Properties of Social Support Figures: A Roadmap for Exploring Prepared Fear Suppression. Recent work has revealed that social support cues are powerful inhibitors of the fear response. They are endowed with a unique combination of inhibitory properties, enabling them to both inhibit fear in the short term and reduce fear in the long term. While these findings had previously been thought to suggest that social support cues belong to a category of prepared safety stimuli, mounting evidence clearly shows that the mechanisms underlying safety signaling cannot account for the unique effects of social support cues. Here, we propose a reclassification of social support cues as members of a prepared fear suppressor category. We present an argument for the prepared fear suppressor classification, discuss potential mechanisms underlying the unique effects of prepared fear suppressors, and outline next steps to build an understanding of this category and its clinical implications. This review is meant to serve as a roadmap for exploring this novel category of prepared fear suppressors, whose never-before-seen range of inhibitory effects makes them an important and impactful discovery with implications for both fear learning theory and clinical application. abstract_id: PUBMED:37471522 Non-medical formula use in newborn infants still common at two Swedish hospitals after a breastfeeding support program. Aim: To evaluate the effectiveness of a breastfeeding support programme in reducing infant formula use and to investigate indications for formula in newborn infants in Sweden. Methods: A quasi-experimental study design was carried out. It included 255 mother-infant pairs in a control group, who received standard care, and 254 pairs in an intervention group, who took part in a breastfeeding support programme. Data were collected by reviewing patient records from two regional hospitals in Uppsala and Gotland, and recruitment took place between 2017 and 2019. Results: The median age of the mothers was 31 years (range 20-49) and the median gestational age of the infants was 39 + 6 weeks/days (range 37 + 0 to 42 + 4). The intervention did not reduce infant formula use. In total, 87/507 (17%) of the infants received formula. Among children receiving formula, 30/87 (34%) had a medical indication, whereas 57/87 (66%) had no medical indication. Main reasons for medically indicated formula use were hypoglycaemia, 13/30 (43%), and weight loss, 13/30 (43%). Main reasons for non-medical use were mothers'/parents' wishes, 25/57 (44%), and infants' dissatisfaction, 11/57 (19%).
Objective: Newborn skincare influences levels of beneficial factors from vernix and vaginal secretions but also the emergence of potential skin pathogens. However, evidence-based national guidelines for newborn skincare do not exist, and actual hospital practices for newborn skincare have not been described. In this study, we test the hypothesis that US maternity hospitals follow differing policies with regard to newborn skincare. Methods: A 16-question survey querying skin care practices was distributed to nursery medical directors at the 109 US hospital members of the Better Outcomes through Research for Newborns network. Data from free text responses were coded by 2 study personnel. Survey responses were analyzed by using descriptive statistics and compared by region of the United States. Results: Delaying the first newborn bath by at least 6 hours is a practice followed by 87% of US hospitals surveyed. Discharging newborns without a bath was reported in 10% of hospitals and was more common for newborns born in nonacademic centers and on the West Coast. Procedures and products used for newborn skincare varied significantly. Parental education on tub immersion and soap use was also inconsistent and potentially contradictory between providers. Evidence cited by hospitals in forming their policies is scant. Conclusion: In this study, we identify similar and strikingly different newborn skincare policies across a national network of US maternity hospitals. Research is needed to identify effects of differing skincare routines on skin integrity, infection rates, and childhood health outcomes to improve the evidence base for the care of newborn skin. abstract_id: PUBMED:10843024 Newborn hearing testing at Wisconsin hospitals: a review of the need for universal screening. Every year, an estimated 200 babies are born in Wisconsin with hearing impairment. Early identification of hearing loss is associated with better language development. This study describes the prevalence of newborn hearing screening at Wisconsin hospitals. Ninety-six percent of all 107 acute care hospitals with newborn delivery services responded to a mailed survey. In 1997, 10% of newborns in Wisconsin had their hearing tested. In 1997 and 1998, two and then seven hospitals, respectively, established universal hearing screening programs. Nurses performed hearing tests on most newborns using automated auditory brainstem response equipment. Universal newborn hearing screening can be done by nurses or other personnel at community hospitals prior to discharge. abstract_id: PUBMED:29286931 Newborn drug testing practices in Iowa birthing hospitals. Objectives: Federal law mandates states to have policies and procedures to identify newborns exposed to maternal substance use during pregnancy. National guidelines for newborn drug testing are lacking; therefore, procedures are variable and determined by state law and local practices. In Iowa, maternal substance use during pregnancy is considered child abuse and must be reported. The objective of this study was to identify newborn drug testing policies and procedures among birthing hospitals in Iowa. Methods: This was a cross-sectional survey of all birthing hospitals in Iowa identified via the Statewide Perinatal Care Program. An electronic survey was sent to the representative at each affiliated hospital. Results: Sixty-nine of 76 hospitals completed the survey for a 90.8% response rate.
Newborn drug testing is ordered in 97.1% of responding hospitals, with most testing 25% or less of newborns annually. The majority utilized a risk assessment tool (89.6%), although many (62.7%) also allowed for provider discretion. No hospital performed universal testing of all newborns. 86.6% of hospitals reported all positive newborn drug test results, including illicit and/or prescription drugs, to child protective services. 35.0% of hospitals notified mothers of the report, and 45.5% offered substance abuse services and/or treatment to the mothers. Conclusions: Most Iowa birthing hospitals perform newborn drug testing and report all positive test results to child protective services. The majority use risk assessment tools. Maternal notification practices and referral for substance use disorder treatment are suboptimal and represent an area for future improvement. abstract_id: PUBMED:6525213 Care of low-birthweight and sick newborn infants in community hospitals. Effect of an education program. We studied the effect of an education program on newborn care practices, documented by chart review, in nine community hospitals in central Pennsylvania. Following a visit to each hospital, specific recommendations were made and courses were offered at the regional center. A follow-up review of newborn charts showed significant improvement in newborn care. This kind of educational program, particularly if directed at identified problems in individual hospitals, appears to be effective. In addition, chart reviews can provide the information necessary for assessing both care practices and the effect of an educational program. abstract_id: PUBMED:25472504 Methicillin-resistant Staphylococcus aureus in raw meats and prepared foods in public hospitals in Salvador, Bahia, Brazil. This study investigated the presence of methicillin-resistant Staphylococcus aureus (MRSA) in raw meat and fish and foods prepared from them for patient consumption in public hospitals in Salvador, Bahia, in northeastern Brazil. A total of 114 samples of raw meat and fish (chicken, n = 30; beef, n = 30; pork, n = 24; and fish, n = 30) and 63 samples of prepared foods (made with chicken, n = 15; beef, n = 15; pork, n = 15; and fish, n = 18) were collected from the kitchens of 10 different hospitals. Of the 114 investigated raw meat and fish samples, 28.1% were positive for MRSA, which comprised 23.3% beef, 23.3% chicken, 37.5% pork, and 30% fish samples. Of the prepared foods, 9.5% were positive for MRSA, which comprised 5.6% chicken products, 6.7% pork products, and 22.2% fish products. MRSA contamination was not detected in prepared beef dishes. A statistical analysis showed no association between the presence of MRSA and the type of raw food (P > 0.05). The high prevalence of MRSA among the raw foods tested and the presence of the microorganism in prepared foods emphasize the necessity of enforcing hygienic practices within hospital kitchens. Answer: Based on the provided abstracts, it appears that hospitals may face challenges in being fully prepared to support newborn survival. An evaluation of eight first-referral level hospitals in Kenya revealed that these hospitals often struggled to maintain a safe hygienic environment, had insufficient staffing, lacked key equipment, laboratory tests, drugs, and consumables, and were missing patient management guidelines, indicating that they were relatively poorly prepared to meet the special needs of newborns (PUBMED:19695001).
Similarly, a feasibility assessment in Kakamega County, Kenya, identified existing system assets but also highlighted health workforce gaps, infrastructure deficits, and transportation challenges that would need to be addressed to improve maternal and newborn survival (PUBMED:34933993). In the United States, a study found that newborn skincare policies varied significantly across maternity hospitals, with no evidence-based national guidelines in place, suggesting a lack of uniform preparedness in this aspect of newborn care (PUBMED:34462323). Additionally, a review of newborn hearing testing in Wisconsin hospitals indicated that only a small percentage of newborns had their hearing tested, although universal screening is feasible and can be performed by nurses or other personnel (PUBMED:10843024). In Iowa, while most birthing hospitals performed newborn drug testing, the practices and procedures were variable, and maternal notification practices and referral for substance use disorder treatment were identified as areas for improvement (PUBMED:29286931). Furthermore, a study on the care of low-birthweight and sick newborn infants in community hospitals in Pennsylvania showed that an education program could significantly improve newborn care practices (PUBMED:6525213). However, a study in Salvador, Bahia, Brazil, found a high prevalence of methicillin-resistant Staphylococcus aureus (MRSA) in raw meats and prepared foods in hospital kitchens, emphasizing the need for better hygienic practices (PUBMED:25472504). Overall, while there are efforts and assets in place, the evidence suggests that hospitals may not be fully prepared to support newborn survival, and there are several areas where improvements are needed to ensure the health and safety of newborns.
Instruction: Still under the microscope: can a surgical aptitude test predict otolaryngology resident performance? Abstracts: abstract_id: PUBMED:24945869 Still under the microscope: can a surgical aptitude test predict otolaryngology resident performance? Objectives/hypothesis: Application to otolaryngology-head and neck surgery residency is highly competitive, and the interview process strives to select qualified applicants with a high aptitude for the specialty. Commonly employed criteria for applicant selection have failed to show correlation with proficiency during residency training. We evaluate the correlation between the results of a surgical aptitude test administered to otolaryngology resident applicants and their performance during residency. Study Design: Retrospective study at an academic otolaryngology-head and neck surgery residency program. Methods: Between 2007 and 2013, 224 resident applicants participated in a previously described surgical aptitude test administered at a microvascular surgical station. The composite score and attitudinal scores for 24 consecutive residents who matched at our institution were recorded, and their residency performance was analyzed by faculty survey on a five-point scale. The composite and attitudinal scores were analyzed for correlation with residency performance score by regression analysis. Results: Twenty-four residents were evaluated for overall quality as a clinician by eight faculty members who were blinded to the results of surgical aptitude testing. The results of these surveys showed good inter-rater reliability. Both the overall aptitude test scores and the subset attitudinal score showed reliability in predicting performance during residency training. Conclusions: The goal of the residency selection process is to evaluate the candidate's potential for success in residency and beyond. The results of this study suggest that a simple-to-administer clinical skills test may have predictive value for success in residency and clinician quality. Level Of Evidence: 4. abstract_id: PUBMED:3773617 Surgical aptitude test for otolaryngology-head and neck surgery resident applicants. The search for a convenient, rapid manual dexterity examination, which could be used at the time of the otolaryngology resident applicant's interview, led us to assess the following areas: purposeful hand direction, depth perception, tactile discrimination, finger pressure coordination, finger visual tracking of moving objects, spatial visualization, finger strength, speed of movements, fine control precision, finger dexterity, steadiness during movement, steadiness without movement, and neatness. With the astronomical increase in applicants to a limited number of positions in the field of otolaryngology-head and neck surgery, the application of an accurate, efficient manual dexterity assessment at the time of applicant interview is desirable. Presently, dental school applicants are screened by a series of manual dexterity tests. To develop our screening test, we integrated manual performance tests currently in use by private industry, occupational therapists, and dental school admission committees. Thus, applicants who are able to master fine microsurgical and macrosurgical techniques common in our specialty will be preferentially selected. abstract_id: PUBMED:29287874 The impact of a resident-run review curriculum and USMLE scores on the Otolaryngology in-service exam. 
Objective: Describe the association of USMLE Step 1 scores and the institution of a dedicated board review curriculum with resident performance on the Otolaryngology training examination. Study Design: Retrospective cross-sectional study. Methods: We reviewed American Board of Otolaryngology Training Examination (OTE) scores for an otolaryngology residency program between 2005 and 2016. USMLE Step 1 scores were collected. In 2011, a resident-run OTE review curriculum was instituted with the goal of improving test preparation. Scores were compared before and after curriculum institution. Linear regression was performed to identify predictors of OTE scores. Results: 47 residents were evaluated, 24 before and 23 after instituting the curriculum. There was a moderate correlation between USMLE step 1 scores and OTE scores for all years. For PGY-2 residents, mean OTE scores improved from the 25th percentile to the 41st percentile after institution of the review curriculum (p = 0.05). PGY 3-5 residents demonstrated no significant improvement. On multivariate linear regression, after controlling for USMLE step 1 scores, a dedicated board review curriculum predicted a 23-point percentile improvement in OTE scores for PGY-2 residents (p = 0.003). For other post-graduate years, the review curriculum did not predict score improvement. Conclusion: USMLE step 1 scores are moderately correlated with OTE performance. A dedicated OTE review curriculum may improve OTE scores for PGY-2 residents, but such a curriculum may have less benefit for intermediate- and senior-level residents. Level Of Evidence: 4. abstract_id: PUBMED:25093659 Improving applicant selection: identifying qualities of the unsuccessful otolaryngology resident. Objectives/hypothesis: To identify the prevalence and management of problematic residents. Additionally, we hope to identify the factors associated with successful remediation of unsuccessful otolaryngology residents. Study Design: Self-reported Internet and paper-based survey. Methods: An anonymous survey was distributed to 152 current and former program directors (PDs) in 2012. The factors associated with unsuccessful otolaryngology residents and those associated with the successful remediation of problematic residents were investigated. An unsuccessful resident is defined as one who quit or was removed from the program for any reason, or one whose actions resulted in criminal action or citation against their medical license after graduation from residency. Remediation is defined as an individualized program implemented to correct documented weaknesses. Results: The overall response rate was 26% (40 PDs). Seventy-three unsuccessful or problematic residents were identified. Sixty-six problematic or unsuccessful residents were identified during residency, with 58 of 66 (88%) undergoing remediation. Thirty-one (47%) residents did not graduate. The most commonly identified factors of an unsuccessful resident were: change in specialty (21.5%), interpersonal and communication skills with health professionals (13.9%), and clinical judgment (10.1%). Characteristics of those residents who underwent successful remediation include: poor performance on the in-training examination (17%, P < .01) and inefficient use of time (11.4%, P = .02). Conclusions: A large proportion of otolaryngology PDs in this sample identified at least one unsuccessful resident. Improved methods of applicant screening may assist in optimizing otolaryngology resident selection.
abstract_id: PUBMED:34352495 Radiology resident selection factors predict resident performance. Purpose: To determine selection factors that predict radiology resident performance. Methods: 59 consecutive radiology residents from 2002 to 2015 were ranked on performance during residency. Correlations and multiple regression analyses were performed to predict resident performance from the following selection factors: United States Medical Licensing Exam (USMLE) Step 1 score, medical school rank, Alpha Omega Alpha (AOA) membership, honors in clinical rotations, Medical Student Performance Evaluation (MSPE), and interview score. Results were compared against predictions from Match rank position. Results: Five selection factors showed significant or marginally significant correlations with resident performance (r = 0.2 to 0.3). The interview score was not significantly correlated. A multiple regression model comprised of the USMLE Step 1 score, medical school rank, AOA membership, and interview score predicted resident performance, with an adjusted R2 of 0.19. The interview score was included in the model but did not achieve statistical significance. Match rank did not predict resident performance, with an R2 of 0.01. Conclusions: A multiple regression model comprised of the USMLE Step 1 score, medical school rank, and AOA membership predicted radiology resident performance and may assist with resident selection. abstract_id: PUBMED:30325511 Junior otolaryngology resident in-service exams predict written board exam passage. Objective: The purpose of this study was to assess the association between the Otolaryngology Training Examination (OTE) taken during residency and the passage rate of first-time examinees on the American Board of Otolaryngology-Head and Neck Surgery Written Qualifying Examination (WQE). Methods: Retrospective cohort study using a de-identified database containing information on examinees who took the WQE in 2007 through 2014, and examinees who took the OTE exam 2005 through 2014. A total of 2,214 otolaryngology residents took the WQE for the first time in 2007 through 2014 after taking the OTE during residency training. Data were analyzed using one-way frequencies and table analyses. Logistic regression was used to model the relationship between the pass/fail WQE result and the OTE stanine. Data transformations were used to analyze WQE passage as a function of OTE scores. Results: There is a significant relationship between OTE score and passage of the WQE on the first attempt, evident even for junior residents. The probability of passing the WQE on the first attempt is 97% if the resident scores in the top six stanines of the OTE during residency training compared to 71% to 79% if the examinee scores in the bottom three stanines. Conclusion: There is a significant relationship between performance on the OTE and passing the WQE on the first attempt. Because this relationship is evident even in the first years of residency, it allows early identification of those with a higher chance of failing the WQE so that extra efforts can be undertaken to prepare for the WQE. Level Of Evidence: 3 Laryngoscope, 129:124-128, 2019. abstract_id: PUBMED:3560177 Use of a medical reasoning aptitude test to help predict performance in medical school. A medical reasoning aptitude test (MRAT) was designed to assess aptitude for clinical problem-solving in medical school applicants. 
The purpose of the study reported here was to determine whether the information provided by this test, when used in conjunction with college grade-point averages (GPAs) and scores on the Medical College Admission Test (MCAT), would improve the prediction of medical school performance. Specifically, the authors investigated the incremental predictive value of the MRAT relative to students' overall performance in medical school and, more specifically, to their knowledge and clinical reasoning during preclinical years and clinical performance during the clerkship year. Overall, it was found that, with the exception of the first year, the addition of MRAT scores to the GPAs and MCAT scores increased precision in identifying students who performed poorly or exceptionally well in the second year and the clinical clerkship year. This last finding is especially useful, since few other tests have provided that information. On the basis of the preliminary findings, the authors propose further use and validation of the MRAT. abstract_id: PUBMED:30151970 Mobile app technology is associated with improved otolaryngology resident in-service performance. Objective: Otolaryngology residents face time-management challenges between clinical duties and self-directed learning. Mobile devices provide a new medium for education that could aid with educational efficiency. The objective of this study was to investigate whether spaced repetition mobile app use of an otolaryngology question bank was associated with improving factual knowledge as measured by in-service exam performance. Methods: For approximately 6 months prior to the 2016 in-service examination, 12 otolaryngology residents at a single institution were provided access to a spaced-repetition app with a preloaded licensed otolaryngology review question bank. The number of questions answered, as well as the time spent on the app, were electronically tracked. The associations of app usage on in-service results were analyzed using an autoregressive model adjusted for prior historical Otolaryngology Trainee Examination performance and postgraduate year. Results: Eleven residents used the mobile app and were included in the analysis for an average of 304 minutes (range: 3 to 1,020) and reviewed on average 679 questions (range: 6 to 1,934). Controlling for residency year and prior performance, app usage was associated with an improvement by 2.92 percentage points (pp) (95% confidence interval [CI]: 0.14, 5.70) on overall in-service score. Allergy and pediatric subsection scores improved by 11.3 pp (95% CI: 3.8, 18.8) and 15.2 pp. (95% CI: 8.9, 21.5), respectively. Increased app use was associated with a score improvement by 0.008 pp per minute of use (95% CI: 0.004, 0.012). Conclusion: Use of mobile spaced-repetition technology can aid resident factual knowledge retention measured by improved in-service exam performance. Level Of Evidence: 4 Laryngoscope, 129:E15-E20, 2019. abstract_id: PUBMED:2231237 Does test anxiety bias scholastic aptitude test performance by gender and sociocultural group? This study examines the commonly held contention that test anxiety may serve as a source of bias in the scholastic aptitude test performance of gender and ethnic minority groups. In addition, this study examines sex and sociocultural group differences in the level and pattern of test anxiety among Israeli college students. 
The sample was composed of 163 male and 198 female students sitting for scholastic aptitude tests routinely administered to all student applicants as part of their college admissions procedures. About 67% were of Western cultural extraction whereas the remainder were of Eastern extraction. Significant differences in test anxiety scores for males and females were observed, with greater sex group differentiation on the Emotionality than on the Worry scale. Test anxiety scores were not discernible by ethnicity or social class. Furthermore, test anxiety was not differentially related to aptitude test scores by sex or sociocultural group membership. Thus, this study lends little evidence to the common contention that test anxiety differentially debilitates the aptitude test scores of females and ethnic minority student candidates. abstract_id: PUBMED:34564821 The innate aptitude's effect on the surgical task performance: a systematic review. Surgery is known to be a craft profession requiring individuals with specific innate aptitude for manipulative skills, and visuospatial and psychomotor abilities. The present-day selection process of surgical trainees does not include aptitude testing for the psychomotor and manual manipulative skills of candidates for required abilities. We aimed to scrutinize the significance of innate aptitudes in surgical practice and impact of training on skills by systematically reviewing their significance on the surgical task performance. A systematic review was performed in compliance with PRISMA guidelines. An initial search was carried out on PubMed/Medline for English language articles published over 20 years from January 2001 to January 2021. Search strategy and terms to be used included 'aptitude for surgery', 'innate aptitude and surgical skills', 'manipulative abilities and surgery', and 'psychomotor skills and surgery'. MERSQI score was applied to assess the quality of quantitatively researched citations. The results of the present searches provided a total of 1142 studies. Twenty-one studies met the inclusion criteria out of which six citations reached high quality and rejected our three null hypotheses. Consequently, the results specified that not all medical students can reach proficiency in skills necessary for pursuing a career in surgery; moreover, playing video games and/or musical instruments does not promote skills for surgery, and finally, there may be a valid test with predictive value for novices aspiring for a surgical career. MERSQI mean score was 11.07 (SD = 0.98; range 9.25-12.75). The significant findings indicated that medical students with low innate aptitude cannot reach skills necessary for a competent career in surgery. Training does not compensate for pictorial-skill deficiency, and a skill is needed in laparoscopy. Video-gaming and musical instrument playing did not significantly promote aptitude for microsurgery. The space-relation test has predictive value for a good laparoscopic surgical virtual-reality performance. The selection process for candidates suitable for a career in surgery requires performance in a simulated surgical environment. Answer: The study described in abstract PUBMED:24945869 aimed to evaluate whether a surgical aptitude test could predict performance during otolaryngology residency. The study was retrospective and involved 224 resident applicants who participated in a surgical aptitude test at a microvascular surgical station.
The performance of 24 residents who matched at the institution was analyzed using faculty surveys, and both the overall aptitude test scores and the subset attitudinal score showed reliability in predicting residency training performance. The study concluded that a clinical skills test may have predictive value for success in residency and clinician quality. In contrast, the study in abstract PUBMED:3773617 does not provide direct evidence of the predictive value of a surgical aptitude test but discusses the development of such a test for otolaryngology-head and neck surgery resident applicants. The test aimed to assess various manual dexterity skills relevant to the specialty, with the goal of preferentially selecting applicants capable of mastering microsurgical and macrosurgical techniques. The other abstracts provided discuss various factors that may predict resident performance, such as USMLE scores (PUBMED:29287874, PUBMED:34352495, PUBMED:30325511), a resident-run review curriculum (PUBMED:29287874), characteristics of unsuccessful residents (PUBMED:25093659), and the use of mobile app technology (PUBMED:30151970). However, they do not directly address the predictive value of a surgical aptitude test for otolaryngology residents. In summary, the evidence from abstract PUBMED:24945869 suggests that a surgical aptitude test can be predictive of otolaryngology resident performance, although the level of evidence is 4, indicating that the results should be interpreted with caution due to the retrospective study design and potential for bias.
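A brief supplementary note on the regression statistics quoted above (an editorial aid, not part of any cited study): the coefficient of determination reported for the radiology selection-factor model is defined as

\[ R^{2} = 1 - \frac{\sum_{i}(y_{i}-\hat{y}_{i})^{2}}{\sum_{i}(y_{i}-\bar{y})^{2}} \]

so the adjusted R2 of 0.19 for the model combining USMLE Step 1 score, medical school rank, and AOA membership (PUBMED:34352495) means that roughly 19% of the variance in resident performance rankings was accounted for by those selection factors, whereas the R2 of 0.01 for Match rank position means that rank order explained essentially none of it.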
Instruction: Is there a relationship between low vitamin D and rotaviral diarrhea? Abstracts: abstract_id: PUBMED:26287796 Is there a relationship between low vitamin D and rotaviral diarrhea? Background: For children under 5 years of age, 1 700 000 000 episodes of diarrhea are seen worldwide, and death occurs in 700 000 of these cases due to diarrhea. Rotavirus is an important cause of diarrhea in this age group, and many studies have shown that vitamin D plays a pivotal role in the immune system, as well as in antimicrobial peptide gene expression. In addition, lower vitamin D has been correlated with higher rates of infectious diseases such as respiratory tract infection, tuberculosis, and viral infection. Methods: Seventy patients with rotaviral diarrhea and 67 healthy patients were enrolled in this study. Serum 25-hydroxy vitamin D(3) (25(OH)D(3)), parathormone, calcium, phosphate, alkaline phosphatase, complete blood count parameters, and C-reactive protein were compared between pre-school children hospitalized due to rotaviral diarrhea and healthy children. Additionally, birthweight, feeding habits in the first 6 months of life, vitamin D and multivitamin supplements, and rotaviral vaccinations were also evaluated in each group. Results: There were no differences between the groups with regard to gender and age, but 25(OH)D(3) was significantly different: 14.6 ± 8.7 ng/mL in the rotaviral diarrhea patients versus 29.06 ± 6.51 ng/mL in the healthy controls (P < 0.001), and serum 25(OH)D(3) <20 ng/mL (OR, 6.3; 95%CI: 3.638-10.909; P < 0.001) was associated with rotaviral diarrhea. Conclusions: Low vitamin D is associated with rotaviral diarrhea. This is the first study in the literature to show this, and this result needs to be repeated in larger controlled clinical studies. abstract_id: PUBMED:27601995 Shikonin Inhibits Intestinal Calcium-Activated Chloride Channels and Prevents Rotaviral Diarrhea. Secretory diarrhea remains a global health burden and causes major mortality in children. There has been some focus on antidiarrheal therapies that may reduce fluid losses and intestinal motility in diarrheal diseases. In the present study, we identified shikonin as an inhibitor of TMEM16A chloride channel activity using a cell-based fluorescence-quenching assay. The IC50 value of shikonin was 6.5 μM. Short-circuit current measurements demonstrated that shikonin inhibited Eact-induced Cl- current in a dose-dependent manner, with an IC50 value of 1.5 μM. Short-circuit current measurement showed that shikonin exhibited inhibitory effect against CCh-induced Cl- currents in mouse colonic epithelia but did not affect cytoplasmic Ca2+ concentration as well as the other major enterocyte chloride channel conductance regulator. Characterization study found that shikonin inhibited basolateral K+ channel activity without affecting Na+/K+-ATPase activities. In vivo studies revealed that shikonin significantly delayed intestinal motility in mice and reduced stool water content in a neonatal mice model of rotaviral diarrhea without affecting the viral infection process in vivo. Taken together, the results suggested that shikonin inhibited enterocyte calcium-activated chloride channels, the inhibitory effect was partially through inhibition of basolateral K+ channel activity, and shikonin could be a lead compound in the treatment of rotaviral secretory diarrhea.
abstract_id: PUBMED:27279991 Differences in the Biodiversity of the Fecal Microbiota of Infants With Rotaviral Diarrhea and Healthy Infants. Background: Rotaviral diarrhea (RD) has been associated with the biodiversity of the fecal microbiota in infants; however, the differences in the biodiversity of the fecal microbiota between infants with RD and healthy (H) infants have not been clearly elucidated. Objectives: This study aimed to reveal the changes in the biodiversity of the fecal microbiota of infants with RD. Patients And Methods: For this study, 30 fecal samples from 15 RD infants and 15 H infants were collected. The biodiversity of the fecal microbiota from the two groups was compared via polymerase chain reaction-denaturing gradient gel electrophoresis (PCR-DGGE) and gene sequencing. Results: The Shannon-Weaver index showed that the biodiversity of the fecal microbiota from the RD infants was significantly lower (P < 0.05) than that from the H infants. All fifteen RD infants were grouped into one cluster and were separated from the H infants by the unweighted pair-group method with arithmetic average (UPGMA) clustering algorithm. In addition, when compared with the healthy infants, the communities of the dominant microbes, Lactobacillus and Bifidobacterium, in the fecal microbiota from the RD infants have obviously changed. Conclusions: With regard to improving the understanding of the differences in the biodiversity of the fecal microbiota between RD infants and H infants, the findings of this study can provide a possible basis to reveal the relationship between RD and intestinal microbiota. abstract_id: PUBMED:16897656 Quantitative evaluation of rotaviral antigenemia in children with acute rotaviral diarrhea. Background: Rotaviral antigen and RNA have recently been identified in the serum of patients with rotaviral gastroenteritis, but the roles they play in disease remain undetermined. Methods: Rotaviral antigen and RNA were quantified by enzyme-linked immunosorbent assay and by quantitative reverse-transcription polymerase chain reaction in stool and serum specimens from children with rotaviral diarrhea (n=102), children with nonrotaviral diarrhea (n=40), and nondiarrheal control children (n=30). Results: Rotaviral antigenemia was detected in 64%, 3%, and 0% of the children with rotaviral diarrhea, the children with nonrotaviral diarrhea, and the nondiarrheal control children, respectively. The level of rotaviral antigen in serum was approximately 2 × 10²-fold lower than that in stool, and a moderate correlation was observed between the 2 levels. Rotaviral RNA was detected in 93% of the antigen-positive serum specimens. The median number of RNA copies in serum was approximately 1 × 10⁵-fold lower than that in stool, and no correlation was observed between the 2 levels. Serum levels of both antigen and RNA were inversely associated with baseline titers of rotaviral serum immunoglobulin G (P < .01). Antigenemia was also associated with G1 serotype. Conclusions: Rotaviral antigenemia and viremia were common in children with rotaviral diarrhea, but antigen and RNA levels in serum were substantially lower than those in stool. Antigenemia was associated with infection with G1 strains and with low baseline titers of rotaviral serum antibody. abstract_id: PUBMED:37625605 Loureirin C extracted from Dracaena cochinchinensis S.C. Chen prevents rotaviral diarrhea in mice by inhibiting the intestinal Ca2+-activated Cl- channels.
Ethnopharmacological Relevance: Resina Draconis (RD) is the red resin of Dracaena cochinchinensis (Lour.) S.C. Chen and most used as a hemostatic drug in traditional Chinese medicine. Recent studies have reported that RD has a therapeutic effect on gastrointestinal diseases. Loureirin A, B, and C (LA, LB, and LC) are dihydrochalcone compounds isolated from RD. Aim Of The Study: Dehydration is the primary cause of death in rotaviral diarrhea. Inhibition of Ca2+-activated Cl- channels (CaCCs)-mediated Cl- secretion significantly reduced fluid secretion in rotaviral diarrhea. RD was used to treat digestive diseases such as diarrhea and abdominal pain; however, the pharmacological mechanism remains unclear. This study investigated the effects of RD and loureirin on intestinal Cl- channels and their therapeutic effects on rotavirus-induced diarrhea, aiming to reveal RD's molecular basis, targets, and mechanisms for treating rotaviral diarrhea. Materials And Methods: Cell-based fluorescence quenching assays were used to examine the effect of RD and loureirin on Cl- channels activity. Electrophysiological properties were tested using short-circuit current experiments in epithelial cells or freshly isolated mouse intestinal tissue. Fecal water content, intestinal peristalsis rate, and smooth muscle contraction were measured in neonatal mice infected with SA-11 rotavirus before and after LC treatment or adult mice. Results: RD, LA, LB, and LC inhibited CaCCs-mediated Cl- current in HT-29 cells and colonic epithelium. The inhibitory effect of LC on CaCCs was primarily on the apical side in epithelial cells, which may be partially produced by affecting cytoplasmic Ca2+ levels. LC significantly inhibited TMEM16A-mediated Cl- current. Characterization studies revealed that LC inhibited basolateral K+ channel activity without affecting Na+/K+-ATPase activity in the colonic epithelium. Although LC activated the cystic fibrosis transmembrane regulator in epithelial cells, its effect was not apparent in colonic epithelium. In vivo, LC significantly reduced the fecal water content, intestinal peristalsis rate, and smooth muscle contraction of mice infected with rotavirus. Conclusion: RD and its active compound LC inhibit intestinal CaCCs activity, which might mediate the anti-rotaviral diarrheal effect of RD. abstract_id: PUBMED:12150054 Rotaviral vaccines Rotaviral infections are frequent cause of the severe diarrhea in children. The only way to eliminate rotaviral infections is safe and effective vaccine. Among various rotaviral vaccines elaborated in many countries, the human-rhesus reassortant vaccine (RRV-TV) was chosen for routine immunization of infants. However, post-licensure surveillance of a rotavirus vaccine showed an increased risk of a serious adverse event following vaccination--intussusception. That forced the Centers for Disease Control and Prevention to withdraw the license of RRV-TV. In spite of this defeat, works for receiving safe and effective rotaviral vaccine still last. abstract_id: PUBMED:15810506 Epidemiology of rotaviral infections among children in Poland In Poland there is no obligation of reporting and registration of the rotaviral infections in children. Therefore, data concerning number and incidence of rotaviral infections is unknown. Most of works devoted to rotaviral infections describes clinical course. 
In many works, Polish authors carry out the analysis of per cent participation of rotaviral infections in global number of children hospitalized due to diarrhea in Poland. The rotaviral part of diarrhea is 20%-80% of all diarrheas in Polish children. Mostly, rotaviral diarrheas take 40% to more than 50% of all diarrheas in children. Rotavirus is also accountable for rising of nosocomial infections. Summarizing, there is conclusion, that is necessary to implement the registration of rotaviral infections in Poland. abstract_id: PUBMED:3037675 Epidemiology of rotaviral infection in adults. Although classic rotaviral gastroenteritis occurs in children between the ages of six and 24 months, infection with rotavirus is common in all age groups, including adults. Virtually all adults have been infected, as is demonstrated by the presence of serum antibodies, but previous infection does not protect against new infection with the same or a different serotype. Rotaviral infection of adults is seen in five settings: secondary contacts from pediatric cases, with variable attack rates in adults; waterborne outbreaks, which are often characterized by higher attack rates in adults than in children; travelers' diarrhea; epidemic spread in isolated or closed populations, often in the absence of contact with children; and endemic infections, which may account for 5%-10% of sporadic cases of diarrhea in adults. Frequent asymptomatic infections with rotavirus occur, and they may be important in the epidemiology of the disease. Although rotaviral infections in adults tend to be milder than those in children, death due to rotaviral infection in adults have been reported. abstract_id: PUBMED:31018530 The Anti-Rotaviral Activity of Low Molecular Weight and Non-Proteinaceous Substance from Bifidobacterium longum BORI Cell Extract. Rotavirus infection is the most common diarrheal disease worldwide in children under five years of age, and it often results in death. In recent years, research on the relationship between rotavirus and probiotics has shown that probiotics are effective against diarrhea. A clinical trial has reported that Bifidobacterium longum BORI reduced diarrhea induced by rotavirus. The present work investigated the anti-rotaviral effect of B. longum BORI by cytopathic effect observation and real time cell analyses. Our study found that B. longum BORI showed strong anti-rotaviral effect when incubated with MA104 cells prior to viral infection, suggesting that the probiotic does in fact interfere with the interaction of viruses and host cells. It is believed that the efficacy is due to low-molecular weight and non-protein components derived from B. longum BORI. This discovery can help broaden the industrial application of B. longum BORI, which has been proven to be a safe and effective probiotic. abstract_id: PUBMED:6324227 Effects of folic acid malnutrition on rotaviral infection in mice. A study was undertaken to determine if dietary deficiencies of folic acid would influence rotaviral diarrheal disease in infant mice. Female mice were fed diets containing essentially no folic acid, 25% of a normal quantity of folic acid, or a normally recommended quantity of folic acid, beginning at time of breeding and continuing through periods of gestation and lactation. Two-day-old infants from these dams were exposed to purified murine rotavirus or to sterile virus diluent and the severity of the rotaviral infection monitored. 
Infants from the low folic acid group had significantly lower folate levels in their livers, indicating a deficiency was achieved, and developed more severe disease manifestations than those infants from the dams receiving the normal folic acid levels in their diet. The infection enhancement was seen as increased incidences of diarrhea and a significantly greater number of mice exhibiting high intestinal rotaviral antigen titers. Serum rotavirus antibody titers were below detectable levels in a significant number of these same infants. Answer: Yes, there is a relationship between low vitamin D and rotaviral diarrhea. A study involving 70 patients with rotaviral diarrhea and 67 healthy patients found that serum 25-hydroxy vitamin D(3) (25(OH)D(3)) levels were significantly lower in patients with rotaviral diarrhea compared to healthy controls (14.6 ± 8.7 ng/mL versus 29.06 ± 6.51 ng/mL, respectively). Furthermore, serum 25(OH)D(3) levels below 20 ng/mL were associated with rotaviral diarrhea (OR, 6.3; 95%CI: 3.638-10.909; P < 0.001). This suggests that low vitamin D is associated with an increased risk of rotaviral diarrhea, and this was the first study to show this relationship, indicating that further larger controlled clinical studies are needed to confirm these results (PUBMED:26287796).
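As a supplementary sketch of the statistic used in this answer (illustrative only, not an analysis from the cited study), the odds ratio for an exposure such as serum 25(OH)D(3) <20 ng/mL comes from a 2 × 2 exposure-by-outcome table:

\[ \mathrm{OR} = \frac{a \cdot d}{b \cdot c} \]

where a and b are the exposed (vitamin D-deficient) children with and without rotaviral diarrhea and c and d are the corresponding unexposed counts. The reported OR of 6.3 therefore indicates roughly six-fold higher odds of rotaviral diarrhea among vitamin D-deficient children, and the 95% confidence interval of 3.638-10.909 is interpreted as statistically significant because it lies entirely above 1.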
Instruction: Does the size of intracranial aneurysms change with intracranial pressure? Abstracts: abstract_id: PUBMED:9576252 Does the size of intracranial aneurysms change with intracranial pressure? Observations based on color "power" transcranial Doppler ultrasound. Object: The authors sought to determine whether the increased pulsatility of aneurysms, compared with normal intracranial arteries, on color "power" transcranial Doppler (TCD) ultrasound was due to a true change in aneurysm size and whether aneurysm dimensions change with intracranial pressure (ICP). Methods: The authors studied nine patients who had suffered recent subarachnoid hemorrhages complicated by hydrocephalus requiring intraventricular cerebrospinal fluid drainage, in whom the presence of an aneurysm was confirmed on angiographic examination. Color "power" TCD studies of the intracranial arteries and aneurysm were obtained through the temporal bone window before and after insertion of the ventricular drain and then at different known ICPs. Of the nine patients studied, four were examined both before and after insertion of a ventricular drain. At high ICPs, aneurysms appeared very "pulsatile" and the maximum cross-sectional area was small, whereas at low ICPs, aneurysms appeared larger and were much less pulsatile. The normal arteries did not change significantly in terms of pulsatility or maximum cross-sectional area at different levels of ICP. Conclusions: The change in aneurysm size visualized with the aid of color power TCD is likely to be real. Aneurysm dimensions vary with ICP levels; the lesions are larger and less pulsatile at low ICPs and smaller but more pulsatile at high ICPs. abstract_id: PUBMED:34425691 Impact of cerebral aneurysm size on distal intracranial hemodynamics and changes following flow diversion. Background: The impact of cerebral aneurysm size on distal intracranial hemodynamics such as arterial pressure and Pulsatility Index is not completely understood, either before or after flow diversion. Objective: The aim of the study is to assess the impact of aneurysm size on distal Pulsatility Index and pressure before and after flow diversion. Methods: From December 2015, prospective measurement of middle cerebral artery pressure and Pulsatility Index was performed in consecutive patients with unruptured cerebral aneurysms in the cavernous to communicating segments of the internal carotid artery, which were treated with single flow diversion. Pressure and Pulsatility Index were recorded at the M1-segment ipsilateral to the cerebral aneurysm. Ratio of middle cerebral artery to radial arterial pressure (pressure ratio) was calculated to control for variations in systemic blood pressure. Correlations between aneurysm size and pressure ratio and Pulsatility Index were assessed before and after treatment. Results: A total of 28 aneurysms were treated. The mean aneurysm size was 7.2 mm. Aneurysm size correlated linearly with systolic pressure ratio (1% pressure ratio increase per mm aneurysm size increase, P = 0.002, r2 = 0.33), mean pressure ratio (0.6% per mm, P = 0.03, r2 = 0.17) and Pulsatility Index (5% Pulsatility Index increase per mm, P = 0.003, r2 = 0.43). After flow diversion, aneurysm size preserved a linear correlation with the systolic pressure ratio (1% per mm, P = 0.004, r2 = 0.28), but not with the mean pressure ratio (0.4% per mm, P = 0.15, r2 < 0.1) or Pulsatility Index (0.3% per mm, P = 0.78, r2 < 0.1).
Conclusion: Aneurysm size affects distal hemodynamics: patients with larger aneurysms have increased systolic and mean pressure ratio, and increased Pulsatility Index. After flow diversion, mean pressure ratio and Pulsatility Index no longer associate with the aneurysm size, suggesting an effect of the flow diversion also on distal intracranial hemodynamics. abstract_id: PUBMED:7179102 Changes of intracranial pressure during the rupture of intracranial aneurysms: prognosis for survival. The intracranial pressure at the time of rebleeding from an intracranial aneurysm was recorded in 4 patients who were undergoing conservative treatment pending clinical improvement. Two died of severe hemorrhages and the remaining two recovered. In the deceased patients, repeated rapid increases in the intracranial pressure in a short time caused by rupture of the intracranial aneurysm were observed. In the surviving patients, the intracranial pressure increased only once, owing to the rupture of the intracranial aneurysm. We suggest that sudden death following the rupture of an intracranial aneurysm may result from frequent, repeated bleeding causing an intracerebral hematoma or severe brain swelling, or both. abstract_id: PUBMED:25520265 Intracranial pseudoaneurysm after intracranial pressure monitor placement. Traumatic intracranial pseudoaneurysms are a rare but severe complication following arterial injury. Pseudoaneurysm formation can occur secondary to blunt or penetrating trauma or iatrogenic injury. We report a case of traumatic pseudoaneurysm secondary to placement of an intracranial pressure (ICP) monitor. A 27-year-old man was involved in a motorcycle accident resulting in multiple intracranial hemorrhages. The patient underwent craniectomy and placement of an ICP monitor. 17 days later he developed dilation of his left pupil, with imaging demonstrating a new hemorrhage in the vicinity of the previous ICP monitor. A cerebral angiogram confirmed a left-sided distal M4 pseudoaneurysm which was treated by n-butyl cyanoacrylate embolization. Intracranial pseudoaneurysm formation following neurosurgical procedures is uncommon. Delayed intracranial hemorrhage in a region of prior intracranial manipulation, even following a procedure as 'routine' as placement of an ICP monitor, should raise the suspicion for this rare but potentially lethal complication. abstract_id: PUBMED:920303 Intracranial pulse pressure dynamics in patients with intracranial hypertension. The pulsative inflow of blood to the brain causes fluctuations in the cerebral blood volume, and this is considered to be the main cause of the pulsations of the intracranial pressure (ICP). This statement allows discrimination between two different factors that influence the pulse amplitude of the ICP: 1. The pulsatile pattern of the cerebral arterial blood flow. 2. The slope of the intracranial pressure-volume curve. In order to clarify the relative contributions of the two factors we have developed a simulation model of the intracranial pressure-volume curve and the cerebral blood flow. Measurements from clinical practice and data from other authors can be interpreted with reference to this model. The increase in the pulse amplitude due to a moderate increase in the ICP is mainly explained by the decrease of the intracranial compliance. However, it is recognized that, when the ICP approaches the arterial blood pressure, the ICP amplitude increases disproportionally with increasing ICP. 
In this situation the transmural pressure of the cerebral arteries approaches zero, and the compliance relevant to the arterial flow system is no longer that of the vessel wall but the compliance of the craniospinal compartment. abstract_id: PUBMED:7070614 Furosemide in the intraoperative reduction of intracranial pressure in the patient with subarachnoid hemorrhage. The effect of furosemide in the intraoperative reduction of intracranial pressure was measured in 25 patients undergoing the operative repair of a ruptured intracranial aneurysm. Seven patients with similar intracranial lesions served as controls. A single bolus of 80 mg of furosemide was administered intravenously after the induction of anesthesia, and sequential measurements were made of intracranial pressure, mean arterial pressure, and arterial blood gases. A mean decrease of intracranial pressure of 56% was measured in the furosemide-treated patients, whereas the control patients demonstrated a mean decline of subarachnoid pressures of 18%. These changes are significant at the P less than 0.005 confidence level, whereas changes in mean arterial pressure, mean arterial pCO2, and base line arterial pCO2 were statistically insignificant. This study suggests that intravenous furosemide is a quick, dependable, and effective mechanism for the intraoperative reduction of intracranial pressure in the postsubarachnoid hemorrhage aneurysm patient. abstract_id: PUBMED:6777465 The treatment of raised intracranial pressure following aneurysm surgery. The effect of intravenous mannitol infusion and withdrawal of cerebrospinal fluid on the intracranial pressure and clinical state was studied in 26 patients with raised intracranial pressure after direct surgery for ruptured aneurysm. Each method decreased the mean intracranial pressure by about 60% of the pre-treatment level. The maximal decrease following mannitol occurred after 60-90 minutes and generally lasted between three and four hours. The effects of mannitol did not decrease when repeated infusions were necessary. Rebound increases in the intracranial pressure following infusion were not observed. Withdrawal of cerebrospinal fluid lowered the intracranial pressure immediately and the effect persisted for approximately 60 minutes. This could be repeated as often as necessary and was without systemic disturbance, although a patent intraventricular catheter was necessary. The two methods could be used simultaneously. abstract_id: PUBMED:8109761 Succinylcholine does not change intracranial pressure, cerebral blood flow velocity, or the electroencephalogram in patients with neurologic injury. The effect of succinylcholine (SCh) on intracranial pressure (ICP) was studied in 10 mechanically ventilated patients (Glasgow coma scale score 3-10, median 6) being treated for increased ICP in an intensive care unit. Mean arterial blood pressure (MAP), ICP, processed electroencephalogram (EEG), and mean middle cerebral artery blood flow velocity (V mca) were monitored. Baseline measurements after saline injection were obtained for 5 min. SCh (1 mg/kg) was administered intravenously and the above variables were monitored for 15 min. Neither saline nor SCh cause any significant change in cerebral perfusion pressure, MAP, V mca, EEG, or ICP. We conclude that in brain-injured patients, SCh did not alter cerebral blood flow velocity, cortical electrical activity, or ICP. 
abstract_id: PUBMED:3185847 Monitoring of intracranial pressure during the postoperative period of aneurysms. The authors present the results of the postoperative I.C.P. records in 35 patients. The results emphasize the relationship between the clinical grading and the degree of the intracranial pressure: 4/7 grades III (Hunt and Hess grading) and all the grades IV (4/4) had an abnormal intracranial pressure. The location of the aneurysm does not clearly appear as a significant factor of increased intracranial pressure in the postoperative period. abstract_id: PUBMED:35088709 Aneurysm size and blood pressure severity in patients with intracranial aneurysms and sleep apnea. Study Objectives: Obstructive sleep apnea is a risk factor for hypertension. Hypertension is associated with aneurysm formation, growth, and rupture of intracranial aneurysm (IA). Retrospectively, symptoms of obstructive sleep apnea are more prevalent in patients with IAs. Studies investigating the prevalence and effect of objectively measured sleep apnea in these patients are sparse. We report on the baseline data of an ongoing prospective study. We cross-sectionally analyzed whether obstructive sleep apnea was associated with larger IAs and increased need for antihypertensive medications in a population of patients with unruptured IA. Methods: 130 adult (≥ 18 years) patients with unruptured IAs were recruited. Patients with ischemic stroke or intracranial hemorrhage within 3 months prior to screening were excluded. We assessed obstructive sleep apnea by full-night respiratory polygraphy. Aneurysm size and antihypertensive medication (as a surrogate parameter for the severity of hypertension) were compared between patients with and without obstructive sleep apnea (apnea-hypopnea index >5 events/h). Aneurysm growth and rupture rate were retrospectively analyzed. Results: 101 patients completed the study protocol. Obstructive sleep apnea was diagnosed in 68.0% (17) of male and 34.2% (26) of female participants and associated with more severe hypertension (1.536 ± 0.2 vs 0.74 ± 0.1 drugs; P = .01) and larger aneurysms (6.9 ± 1.0 vs 3.8 ± 0.5 mm; P = .01). Conclusions: Patients with obstructive sleep apnea had more antihypertensive medication and larger IAs, probably due to accelerated aneurysm growth. Sleep apnea should be considered in patients with IAs. More research is needed to investigate the effects of sleep apnea on IAs and aneurysm outcome. Clinical Trial Registration: Registry: ClinicalTrials.gov; Name: Incidence and Effects of Sleep Apnea on Intracerebral Aneurysms-IESA Study; URL: https://clinicaltrials.gov/ct2/show/NCT02880059; Identifier: NCT02880059. Citation: Zaremba S, Albus L, Hadjiathanasiou A, Vatter H, Wüllner U, Güresir E. Aneurysm size and blood pressure severity in patients with intracranial aneurysms and sleep apnea. J Clin Sleep Med. 2022;18(6):1539-1545. Answer: Yes, the size of intracranial aneurysms does change with intracranial pressure (ICP). A study observed that at high ICPs, aneurysms appeared very "pulsatile" and the maximum cross-sectional area was small, whereas at low ICPs, aneurysms appeared larger and were much less pulsatile. This suggests that aneurysm dimensions vary with ICP levels; they are larger and less pulsatile at low ICPs and smaller but more pulsatile at high ICPs (PUBMED:9576252).
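A simple pressure-balance sketch, offered as supplementary reasoning rather than as a finding of the cited studies, is consistent with this answer: the distending (transmural) pressure across the aneurysm wall is approximately

\[ P_{tm} = P_{intraluminal} - P_{ICP} \]

so raising ICP lowers the net pressure holding the sac open, the maximum cross-sectional area falls, and a given systolic-diastolic pressure swing then produces a proportionally larger change in size, matching the greater pulsatility seen at high ICP (PUBMED:9576252) and the transmural-pressure argument in the pulse-dynamics model (PUBMED:920303).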
Instruction: Admission after the gold interval in acute calculous cholecystitis: Should we really cool it off? Abstracts: abstract_id: PUBMED:26742919 Admission after the gold interval in acute calculous cholecystitis: Should we really cool it off? Purpose: The aim of this study was to compare early and delayed cholecystectomy for the treatment of acute calculous cholecystitis (ACC). Materials And Methods: The medical records of patients who were diagnosed with ACC by combined clinical and radiological examination were evaluated retrospectively. The patients were divided into two non-randomized groups according to the duration between the onset of symptoms and cholecystectomy. Group 1 included the patients who underwent cholecystectomy within the first 72 h after the onset of symptoms, and Group 2 included those who underwent cholecystectomy beyond the 72nd hour after the onset of symptoms. Results: We reviewed records for 203 patients. There were 109 patients in Group 1 and 74 patients in Group 2. Access-related complications occurred in four patients. One patient in Group 1 and two patients in Group 2 had trocar site bleeding. In one patient in Group 1, liver trauma occurred. Two patients in Group 1 had bile duct injuries: a Type D injury according to the Strasberg classification in one patient and an E2 injury in the other. Conclusion: Early cholecystectomy in acute cholecystitis with biliary stones could be performed regardless of time with similar complication, mortality and conversion rates. abstract_id: PUBMED:29046935 Admission after the gold interval in acute calculous cholecystitis: Should we really cool it off? abstract_id: PUBMED:30463770 Percutaneous Cholecystostomy in Acute Cholecystitis-Predictors of Recurrence and Interval Cholecystectomy. Background: A percutaneous cholecystostomy (PC) tube is a preferred option in acute cholecystitis for patients who are high risk for cholecystectomy (CCY). There are no evidence-based guidelines for patient care after PC. We identified the predictors of disease recurrence and successful interval CCY. Methods: A retrospective review of 145 PC patients between 2008 and 2016 at a tertiary hospital was performed. Primary outcomes included mortality, readmissions, hospital and intensive care unit length of stay (LOS), disease recurrence, and interval CCY. Results: There were 96 (67%) calculous and 47 (33%) acalculous cholecystitis cases. Seventy-two (49%) had chronic and 73 (51%) had acute prohibitive risks as an indication for PC. There were 54 (37%) periprocedural complications, which most commonly were dislodgements. Twenty-six (18%) patients had a recurrence at a median time of 65 days. Calculous cholecystitis (odds ratio [OR] 3.44, P = 0.038) and purulence in the gallbladder (OR 3.77, P = 0.009) were predictors for recurrence. Forty-one (28%) patients underwent interval CCY. Patients with acute illness were likely to undergo interval CCY (OR 6.67, P = 0.0002). Patients with acalculous cholecystitis had longer hospital LOS (16 versus 8 days) and intensive care unit LOS (2 versus 0 days), and higher readmission rates (OR 2.42, P = 0.02). Thirty-day mortality after PC placement was 9%. Patients receiving interval CCY were noted to have increased survival compared to PC alone. However, this should not be attributed to interval CCY alone in the absence of randomization in this study. Conclusions: Calculous cholecystitis and purulence in the gallbladder are independent predictors of acute cholecystitis recurrence. Acute illness is a strong predictor of successful interval CCY.
The association of interval CCY and prolonged survival in patients with PC as noted in this study should be further assessed in future prospective randomized trials. abstract_id: PUBMED:23025944 Admission variables predictive of gangrenous cholecystitis. The objective of the present study was to identify admission clinical factors associated with gangrenous cholecystitis (GC) and factors associated with conversion to open cholecystectomy. We retrospectively evaluated 391 patients over a 17-month period who underwent urgent laparoscopic cholecystectomy for a diagnosis of acute cholecystitis. Eighty-nine patients with pathologically proven GC were compared with 302 patients without GC. On multivariable logistic regression, predictors of GC included male gender, white blood cell count greater than 14,000/mm³, heart rate greater than 90 beats per minute, and sodium 135 mg/dL or less. Conversion rate to open cholecystectomy was 7.9 per cent overall, 4 per cent for non-GC, and 19 per cent for GC (odds ratio, 0.2; 95% confidence interval, 0.1 to 0.4; P < 0.00001). Conversion was predicted by increasing number of days to surgery, total bilirubin, and white blood cell count. Complication rate was higher in the GC group (10.1% vs 3.6% in the acute cholecystitis group, P = 0.01). The increased rate of conversion observed with surgery delay suggests that early laparoscopic cholecystectomy may be preferable in most patients. abstract_id: PUBMED:18092660 Timing of laparoscopic cholecystectomy for acute cholecystitis: evidence to support a proposal for an early interval surgery. We evaluated the safety and feasibility of delayed urgent laparoscopic cholecystectomy (LC) performed beyond 72 hours to overcome the logistical difficulties in performing early urgent LC within 72 hours of admission with acute cholecystitis (AC), and to avoid earlier readmission with recurrent AC in patients awaiting delayed interval surgery. Patients admitted with AC were scheduled for urgent LC. Patients who underwent early urgent LC were compared with those who had delayed urgent surgery. Fifty consecutive patients underwent urgent LC for AC within 2 weeks of admission. There were no conversions and no bile duct injuries. Delayed surgery (n=36) neither prolonged operating time (90 vs. 85 minutes), nor increased operative morbidity (9.7% vs. 7.7%) or mortality (2.4% vs. 7.7%) compared with early surgery (n=14). Although delayed surgery was associated with shorter postoperative hospital stay (1 vs. 2 days, P = 0.029), it prolonged total hospital stay (9 vs. 5 days, P < 0.0001). Delay of LC beyond 72 hours neither increases operative difficulty nor prolongs recovery. It might be more cost effective to schedule patients who could not undergo early urgent LC but are responding to conservative treatment for an early interval LC within 2 weeks of presentation with AC. abstract_id: PUBMED:12541036 Urgent versus interval laparoscopic cholecystectomy for acute cholecystitis: a comparative study. Background/purpose: The surgical management of patients presenting with acute cholecystitis remains controversial. The aim of this study was to evaluate the safety and feasibility of urgent laparoscopic cholecystectomy (LC) during the "index" (acute) admission with acute cholecystitis, and to compare that with a policy of interval LC. Methods: Between October 2000 and October 2001, 50 patients who had suffered with acute cholecystitis underwent LC.
Thirty-three patients underwent surgery during the index admission (group I), of whom 11 patients had surgery within 96 h of admission. Seventeen patients were referred by colleagues to outpatients for, and underwent, an interval LC (group II). Results: All operations were completed laparoscopically. There was no difference between the groups in the operating time (median [interquartile range]: 78 [61-124] versus 93 [53-128] min) or postoperative hospital stay (median, 1 day). The delay in performing an urgent LC beyond 96 h did not affect the operating time or postoperative stay but significantly increased the total hospital stay (median [interquartile range]: 5 (5-8) versus 13 [8-17] days; P = 0.001). Conclusions: Laparoscopic cholecystectomy during the index admission with acute cholecystitis can be performed safely and successfully. Earlier surgery has a beneficial impact for patients and the National Health Service. abstract_id: PUBMED:23271092 No need to wait: an analysis of the timing of cholecystectomy during admission for acute cholecystitis using the American College of Surgeons National Surgical Quality Improvement Program database. Background: The objective of our analysis was to determine the optimal timing of cholecystectomy during admission for acute cholecystitis. Methods: All patients from the American College of Surgeons National Surgical Quality Improvement Program Participant User Files from 2005 through 2010 who underwent emergency cholecystectomy within 7 days of hospital admission for acute cholecystitis were included for analysis. The association between timing of cholecystectomy and postoperative outcomes was determined using multivariate logistic regression analyses after adjustment for patient demographics, acute and chronic comorbid medical conditions, preoperative sepsis classification, American Society of Anesthesiologists physical status classification, and preoperative liver function tests. Results: A total of 5,268 patients were included for analysis. The timing of operation was day of admission for 49.7% of these patients, 1 day after admission for 33.4%, 2 days after admission for 9.5%, 3 days after admission for 3.9%, and 4 days to 7 days after admission for 3.6%. Multivariate logistic regression analyses revealed no significant association between timing of operation and 30-day postoperative mortality or overall morbidity. Patients who underwent operation later in the course of admission were more likely to require an open procedure and sustained significantly longer postoperative and overall lengths of hospitalization. Similar findings were demonstrated for a subgroup of patients who exhibited characteristics that placed them at higher risk for surgical intervention. Conclusion: Immediate cholecystectomy is preferred for patients who require hospitalization for acute cholecystitis. Level Of Evidence: Economic/decision analysis, level III. abstract_id: PUBMED:25450622 Analysis of actual healthcare costs of early versus interval cholecystectomy in acute cholecystitis. Background: Healthcare cost modeling have favored early (ELC) over interval laparoscopic cholecystectomy (ILC) for acute cholecystitis (AC). However, actual costs of treatment have never been studied. The aim of the present study was to compare actual hospital costs involved in ELC and ILC in patients with AC. Methods: Retrospective study of patients who underwent laparoscopic cholecystectomy for AC was conducted. Demographic, clinical, operative data and costs were extracted and analyzed. 
Results: Between 2011 and 2013, 201 patients had laparoscopic surgery for AC at Tan Tock Seng Hospital, Singapore. One hundred and thirty-four (67%) patients underwent ELC (≤7 days of presentation, within index admission). Median total length of stay (LOS) was 4.6 and 6.8 days for ELC and ILC groups, respectively (P = 0.006). Patients who had ELC also had a significantly lower total number of admissions (P < 0.001). The median (IQR) total inpatient costs were €4.4 × 10³ (3.6-5.6) and €5.5 × 10³ (4.0-7.5) for ELC and ILC patients, respectively (P < 0.007). Costs associated with investigations were significantly higher in the ILC group (P = 0.039), of which serological costs made the most difference (P < 0.005). The ward costs were also significantly higher in the ILC group. Conclusion: The cost differences reflect the significantly increased total LOS and repeat presentations associated with ILC. Therefore, ELC should be the preferred management strategy for AC. abstract_id: PUBMED:29680327 Clinical characteristics of patients with newly developed acute cholecystitis after admission to the intensive care unit. Introduction: Critical care patients have many risk factors for acute cholecystitis (AC). However, less data are available regarding newly developed AC in critically ill patients. Objectives: To investigate the clinical features of AC occurring in critically ill patients after admission to an intensive care unit (ICU). Methods: We performed a retrospective cohort study from January 2006 to August 2016 at a tertiary care university hospital. We included patients diagnosed with AC with or without gallstones after ICU admission. All cases of AC were confirmed by gastroenterologists or general surgeons. We excluded patients with AC diagnosed before or at the time of ICU admission. Results: A total of 38 patients were diagnosed with AC after ICU admission between January 2006 and August 2016. Seventeen (44.7%) had acute acalculous cholecystitis, while 21 (55.3%) had acute calculous cholecystitis. The median age was 73 years (interquartile range = 63-81 years), and 22 (57.9%) patients were male. The most common reason for ICU admission was pneumonia or sepsis. The median interval from ICU admission to diagnosis of AC was 11 days (interquartile range = 4.8-22.8 days). Before AC diagnosis, almost 90% of patients used total parenteral nutrition, 68% used opioids, 76% were mechanically ventilated, and 42% received vasoactive drugs. More than half of patients underwent cholecystectomy, and all surgically resected gallbladders had pathology results for cholecystitis. Gangrenous cholecystitis was observed in five patients with acute calculous cholecystitis. The overall mortality was 42.1%, and 1/3 of these deaths were directly associated with AC. The average length of stay in the ICU and hospital was 26.5 and 44.5 days, respectively. Conclusion: The development of AC in the ICU should be carefully monitored, especially in patients who have been infected and admitted to the ICU for more than 10 days. Proper diagnosis and treatment at a critical time could be lifesaving. abstract_id: PUBMED:32595365 Is Interval Cholecystectomy Necessary After Percutaneous Cholecystostomy in High-Risk Acute Cholecystitis Patients? Objectives: Percutaneous cholecystostomy (PC) for acute cholecystitis (AC) is frequently performed in high-risk surgical patients as an alternative treatment modality. However, debate remains over whether or not an interval cholecystectomy for these patients should be performed.
The aim of this study was to investigate the outcomes of PC in high-risk surgical patients with AC. Methods: Between September 2013 and June 2016, 27 of 952 patients with AC were treated with PC. The data collection included demographic variables, including comorbidities, the timing of the PC, the length of the hospital stay, the follow-up period, the complications related to PC, and readmission to hospital. Results: There were 16 female and 11 male patients, with a mean age of 73±12.4 years (range: 49-97 years). Comorbid diseases included ischemic heart disease (n=6), diabetes mellitus (n=5), chronic obstructive pulmonary disease (n=6), and others (n=10). The mean timing of PC was 2.2±1.4 days (range: 1-3 days). The mean length of hospital stay was 9.6±2.1 days (range: 7-14 days), and the catheter was removed after the first month. The mean follow-up period after the PC catheter removal was 19.6±8.6 months (range: 10-38 months). Only 6 patients (22.2 %) were readmitted to the hospital. Cholecystectomy was performed in 4 cases, and 2 responded to medical treatment. Conclusion: Despite ongoing controversy about the management of AC in high-risk surgical patients, PC is an adequate and safely applicable procedure in this group of patients. However, an interval cholecystectomy should be considered in persistent cases, which account for a small percentage. Longer-term follow-up studies with a larger sample size are needed to support our results. Answer: The question of whether to delay cholecystectomy after the "gold interval" in acute calculous cholecystitis (ACC) is addressed by several studies. The "gold interval" typically refers to the first 72 hours after the onset of symptoms, during which early cholecystectomy is often recommended. One study suggests that early cholecystectomy can be performed regardless of the time elapsed since the onset of symptoms, with similar complication, mortality, and conversion rates compared to delayed cholecystectomy (PUBMED:26742919). This implies that even if the gold interval has passed, proceeding with surgery may still be a viable option. Another study indicates that percutaneous cholecystostomy (PC) is an alternative for high-risk patients who cannot undergo cholecystectomy immediately. However, it also notes that calculous cholecystitis and purulence in the gallbladder are predictors of disease recurrence, and acute illness is a strong predictor of successful interval cholecystectomy. This suggests that while PC can be a temporary solution, interval cholecystectomy may still be necessary for long-term resolution (PUBMED:30463770). Further research shows that delaying surgery increases the likelihood of conversion to open cholecystectomy and that early laparoscopic cholecystectomy may be preferable in most patients to avoid complications such as gangrenous cholecystitis (PUBMED:23025944). Additional studies support the idea of early cholecystectomy, indicating that it does not increase operative difficulty or prolong recovery, and may be more cost-effective due to shorter total hospital stays and fewer readmissions (PUBMED:18092660, PUBMED:25450622). Moreover, immediate cholecystectomy is preferred for patients hospitalized for acute cholecystitis, as it does not increase postoperative mortality or morbidity and reduces the length of hospitalization (PUBMED:23271092). 
In conclusion, the evidence suggests that while cooling off (delaying cholecystectomy) after the gold interval in ACC is an option, especially for high-risk patients, early cholecystectomy remains a viable and often preferred approach due to its association with fewer complications, shorter hospital stays, and lower costs.
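A quick supplementary arithmetic check on the cost claim, derived only from the medians reported in PUBMED:25450622 rather than from any new analysis: the median total inpatient costs were €4.4 × 10³ for early and €5.5 × 10³ for interval cholecystectomy, a relative difference of

\[ \frac{5.5 - 4.4}{5.5} \times 100\% = 20\% \]

that is, early laparoscopic cholecystectomy was associated with a median saving of about €1.1 × 10³ per patient, roughly a fifth of the interval-surgery cost, in keeping with the shorter total length of stay (4.6 versus 6.8 days) in that series.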
Instruction: Analysis of tumor morphology in metastatic colorectal cancer: does this classification have any clinical significance? Abstracts: abstract_id: PUBMED:28107874 Correlation of RUNX3 expression with microvessel density in colorectal adenocarcinoma tissues and clinical significance. Objective: To study the expression of RUNX3 in colorectal adenocarcinoma tissues and its correlation with microvessel density (MVD), and investigate the clinical pathological prognostic significance of RUNX3 and MVD in patients with colorectal cancer. Methods: The expression value of RUNX3 and MVD in 70 colorectal adenocarcinoma tissue specimens were detected by the immunohistochemical staining technique. The correlation between their expression and the clinicopathologic features was also investigated. Results: The expression value of RUNX3 and the positive rates of RUNX3 in colorectal adenocarcinoma tissues were 3.25 ± 1.14 and 25.71% (18/70). The expression value of MVD in colorectal adenocarcinoma tissues was 13.14 ± 3.23. Expression of RUNX3 and MVD value were correlated with CEA, serosal invasion, liver metastasis, lymph node metastasis, and TNM stage (P < 0.01). The expression value of RUNX3 had negative correlations with that of MVD. Conclusions: The high expression of RUNX3 could inhibit tumor microvessel formation and thereby exert a negative regulatory effect on invasion and distant metastasis. abstract_id: PUBMED:31966520 Correlation of HER2 and FOXM1 in human colorectal carcinoma and its clinical significance. Background: The human epidermal growth factor receptor 2 (HER2) and transcription factor forkhead box protein M1 (FOXM1) are proto-oncogenes and are expressed in various kinds of human solid malignancies. However, the clinical significance of HER2 and FOXM1 in colorectal cancer (CRC) remains controversial, and there is no related report of the connection between HER2 and FOXM1 in colorectal cancer. This study aims to investigate the clinical values based on the connection of HER2 with FOXM1 in colorectal cancer. Material/methods: We retrospectively investigated HER2 and FOXM1 expressions of 130 paraffin-embedded CRC and their adjacent paraneoplastic tissues through immunohistochemical assay. Western blot and RT-PCR assays were applied to measure the relative expressions of HER2 and FOXM1 in mRNA and the protein levels of 30 fresh CRC and the adjacent paraneoplastic tissues. Results: HER2 and FOXM1 expressions were significantly higher in colorectal cancer than those in paraneoplastic tissues (P < 0.001). Similarly, the relative expression levels of HER2 and FOXM1 in mRNA and protein were also significantly higher in CRC tissues than those in paraneoplastic tissues (P < 0.05). The HER2 expression was closely correlated with tumor size, degree of differentiation, presence of vascular invasion, lymph node metastasis, distant metastases, advanced TNM stage, and prognosis (P < 0.05). The FOXM1 expression was related to tumor invasion, vascular invasion, lymph node metastasis, distant metastases, TNM stage, and prognosis (P < 0.05). Besides, FOXM1 was an independent prognostic factor in CRC. There was a significant correlation between FOXM1 and HER2 expressions in colorectal cancer (r = 0.335; P < 0.01). Conclusions: The overexpression of HER2 and FOXM1 proteins in colorectal cancer correlates with their clinicopathological characteristics and prognosis. HER2 and FOXM1 are important diagnostic markers for colorectal cancer.
Furthermore, FOXM1 may be a potential target for therapy especially in HER2-targeted therapy-resistant cancers. abstract_id: PUBMED:33116821 Expression of Zinc Finger and BTB Domain-Containing 4 in Colorectal Cancer and Its Clinical Significance. Background: Zinc finger and BTB domain-containing 4 (ZBTB4), which is a transcriptional regulator, has been identified as a tumor suppressor in several human carcinomas. So far, however, the expression of ZBTB4 and its possible clinical significance in colorectal cancer (CRC) remain unknown. Materials And Methods: The mRNA and protein expressions of ZBTB4 in five CRC cell lines were respectively detected by performing qRT-PCR and Western Blotting. ZBTB4 expression in colorectal tissue specimens was determined, and subsequently its relationship with clinical prognosis was examined. Results: The mRNA and protein expressions of ZBTB4 were significantly decreased in all the five CRC cell lines compared with normal colonic epithelial cells. Consistent with the cell data, immunohistochemical results showed that as compared with the normal colorectal tissue samples, ZBTB4 protein expression was clearly lower in the CRC tissue samples, especially in CRC patients with liver metastasis. In addition, low-expressed ZBTB4 was found associated with tumor metastasis stage (P=0.0003) and level of carcinoembryonic antigen (CEA) (P=0.0004). The overall survival (OS) and recurrence-free survival (RFS) in the ZBTB4-low group were significantly lower than those in the ZBTB4-high group (P=0.0007 and P=0.0077). Conclusion: The current findings showed that patients with high-expressed ZBTB4 in CRC tissues may develop a better prognosis, and ZBTB4 could serve as a potential therapeutic target for CRC treatment. abstract_id: PUBMED:34604029 The Clinical Significance of Mesenteric Lymphocytes in Human Colorectal Cancer. Objective: The mesentery is a potential site of residual tumor in patients with colorectal cancer (CRC). However, the mesenteric immune microenvironment remains unclear. In this study, we investigated the immune landscape of the mesentery, particularly the role of lymphocytes and its association with the clinicopathological characteristics of CRC. Methods: Flow cytometry was used to detect lymphocytes in the paired mesenteric tissue specimens adjacent to the colorectal tumors and normal mesenteric tissue specimens 10 cm away from the colorectal tumor edge and preoperative peripheral blood samples obtained from patients with CRC who underwent surgery. T-distributed stochastic neighbor embedding was utilized to analyze multiparameter flow cytometry data. Multiplex immunohistochemistry was performed to evaluate T cells subsets in the paired mesentery adjacent to the colorectal tumors and normal mesentery. The Fisher's exact test and non-parametric Wilcoxon's matched-pairs tests were used for statistical analysis. The non-parametric Mann-Whitney U test was used to determine associations between percentage data and clinical parameters of patients with CRC. Results: We found that immune cells in the normal mesentery were mainly of lymphoid lineage. Compared with peripheral blood, the normal mesentery showed decreased NK cells and the CD4/CD8 ratio and increased CD3+ CD56+, memory CD4+ T, memory CD8+ T, CD4+ tissue-resident memory T (TRM), and CD8+ TRM cells. Compared with the normal mesentery, the mesentery adjacent to the colorectal tumor showed increased B and regulatory T cells and decreased NK, CD3+ CD56+, CD4+ TRM, and CD8+ TRM cells. 
Moreover, memory CD8+ T cells and plasmablasts are negatively correlated with the depth of invasion of CRC. Increased memory CD4+ T cells are associated with distant metastasis of CRC and high preoperative serum carcinoembryonic antigen levels. Conclusion: The mesentery shows a specific immune microenvironment, which differs from that observed in peripheral blood. CRC can alter the mesenteric immune response to promote tumor progression. abstract_id: PUBMED:36426404 A multicenter study of clinical impact of variant of uncertain significance reclassification in breast, ovarian and colorectal cancer susceptibility genes. Background: Clinical interpretation of genetic test results is complicated by variants of uncertain significance (VUS) that have an unknown impact on health but can be clarified through reclassification. There is little empirical evidence regarding VUS reclassification in oncology care settings, including the prevalence and outcomes of reclassification, and racial/ethnic differences. Methods: This was a retrospective analysis of persons with and without a personal history of cancer carrying VUS (with or without an accompanying pathogenic or likely pathogenic [P/LP] variant) in breast, ovarian, and colorectal cancer predisposition genes seen at four cancer care settings (in Texas, Florida, Ohio, and New Jersey) between 2013 and 2019. Results: In 2715 individuals included in the study, 3261 VUS and 313 P/LP variants were reported; 8.1% of all individuals with VUS experienced reclassifications and rates varied significantly among cancer care settings from 4.81% to 20.19% (overall p < 0.001). Compared to their prevalence in the overall sample, reclassification rates for Black individuals were higher (13.6% vs. 19.0%), whereas the rates for Asian individuals were lower (6.3% vs. 3.5%) and rates for White and Hispanic individuals were proportional. Two-year prevalence of VUS reclassification remained steady between 2014 and 2019. Overall, 11.3% of all reclassified VUS resulted in clinically actionable findings and 4.6% subsequently changed individuals' clinical managements. Conclusions: The findings from this large multisite study suggest that VUS reclassification alters clinical management, has implications for precision cancer prevention, and highlights the need for implementing practices and solutions for efficiently returning reinterpreted genetic test results. abstract_id: PUBMED:26893713 Expression and clinical significance of Sirt1 in colorectal cancer. The objective of the present study was to examine the expression of Silent information regulator 1 (Sirt1) in colorectal cancer and peritumoral normal mucosa tissue, and therefore analyze the role and molecular mechanism of Sirt1 in the pathogenesis of colorectal cancer. Colorectal cancer tissue specimens were employed as the experimental group, and adjacent normal mucosa tissues >5 cm from tumor lesions were used as the control group. The expression of Sirt1 was detected by the immunohistochemical streptavidin peroxidase detection method in paraffin-embedded sections, whilst Sirt1 protein expression was examined by western blot analysis in the fresh tissues. Sirt1 protein was primarily expressed in the nuclei of the tumor cells, and positive staining was brownish-yellow in color. The relative expression quantities of Sirt1 in the peritumoral normal rectal mucosa and rectal carcinoma were 1.15 and 2.62, and the differences between the two groups were statistically significant (P<0.05).
The expression level of Sirt1 in colorectal carcinoma was significantly associated with the depth of tumor invasion, differentiation and tumor size (P<0.05). Sirt1 expression was also found to be associated with tumor tissue type, lymph node metastasis, Duke's stage and patient age. These characteristics combined may therefore be used as markers for the early diagnosis of colorectal cancer pathogenesis. abstract_id: PUBMED:30115537 Expression of TUSC3 and its prognostic significance in colorectal cancer. Background: Colorectal cancer (CRC) is one of the most common cancers worldwide. Tumor suppressor candidate 3 (TUSC3) has been reported be associated with embryogenesis and metabolism. The aim of this study is to investigate the expression of TUSC3 in CRC tissues, and to evaluate the clinical pathological characters and prognostic significance. Method: First, we performed a bioinformatics analysis by using Oncomine and COEXPEDIA databases. Gene Set Enrichment Analysis (GSEA) was performed using TCGA data set. Then, the protein expression level of TUSC3 was detected by immunohistochemistry in 230 pairs of primary colorectal cancer and corresponding non-tumor tissues. Result: We investigated Oncomine databases and found that TUSC3 mRNA expression was significantly higher in CRC tissues compared with normal tissues. The immunohistochemistry results demonstrated that TUSC3 was overexpressed in the CRC tissues. Furthermore, TUSC3 overexpression was associated with T stage, lymph node metastasis, and distant metastasis. TUSC3 overexpression was associated with worse overall survival for CRC, and retained significance as an independent prognostic factor for CRC. Bioinformatics analysis indicated that TUSC3 expression was associated with epithelial-mesenchymal transition signaling pathway and TUSC3 co-expression genes were obtained from COEXPEDIA. Conclusion: TUSC3 may act as an oncogene in the progression of colorectal cancer. Moreover, TUSC3 has potential to be used as prognostic markers or therapeutic targets in CRC. abstract_id: PUBMED:28365877 Elucidating the clinical significance of two PMS2 missense variants coexisting in a family fulfilling hereditary cancer criteria. The clinical spectrum of germline mismatch repair (MMR) gene variants continues increasing, encompassing Lynch syndrome, Constitutional MMR Deficiency (CMMRD), and the recently reported MSH3-associated polyposis. Genetic diagnosis of these hereditary cancer syndromes is often hampered by the presence of variants of unknown significance (VUS) and overlapping phenotypes. Two PMS2 VUS, c.2149G>A (p.V717M) and c.2444C>T (p.S815L), were identified in trans in one individual diagnosed with early-onset colorectal cancer (CRC) who belonged to a family fulfilling clinical criteria for hereditary cancer. Clinico-pathological data, multifactorial likelihood calculations and functional analyses were used to refine their clinical significance. Likelihood analysis based on cosegregation and tumor data classified the c.2444C>T variant as pathogenic, which was supported by impaired MMR activity associated with diminished protein expression in functional assays. Conversely, the c.2149G>A variant displayed MMR proficiency and protein stability. These results, in addition to the conserved PMS2 expression in normal tissues and the absence of germline microsatellite instability (gMSI) in the biallelic carrier ruled out a CMMRD diagnosis.
The use of comprehensive strategies, including functional and clinico-pathological information, is mandatory to improve the clinical interpretation of naturally occurring MMR variants. This is critical for appropriate clinical management of cancer syndromes associated to MMR gene mutations. abstract_id: PUBMED:27540303 The association of HMGB1 expression with clinicopathological significance and prognosis in Asian patients with colorectal carcinoma: a meta-analysis and literature review. Background: The association of high mobility group box 1 (HMGB1) expression with clinicopathological significance and prognosis in Asian patients with colorectal carcinoma (CRC) remains controversial. The purpose of this study was to conduct a meta-analysis and literature review to identify the role of HMGB1 in the development and prognosis of CRC in Asians. Methods: All eligible studies regarding the association between HMGB1 expression in tissue with clinicopathological significance and prognosis in Asian patients with CRC published up to January 2015 were identified by searching PubMed, Web of Science, Chinese National Knowledge Infrastructure, and WanFang database. Analysis of pooled data was performed, while odds ratio (OR) or hazard ratio with 95% confidence interval (CI) was calculated and summarized to evaluate the strength of this association in fixed- or random-effects model. Results: The expression level of HMGB1 in CRC tissues was much higher than normal colorectal tissues (OR =27.35, 95% CI 9.32-80.26, P<0.0001) and para-tumor colorectal tissues (OR =10.06, 95% CI 4.61-21.95, P<0.0001). There was no relation between the HMGB1 expression and sex, age, clinical T stage, tumor size, and location (colon or rectum cancer). However, a significant relation was detected between the HMGB1 expression and clinical stage (American Joint Committee on Cancer 7), lymph node metastasis, distant metastasis, tumor invasion depth, and differentiation rate (P=0.002, P≤0.0001, P<0.0001, P<0.0001, and P=0.007, respectively). Patients with higher HMGB1 expression had shorter overall survival time, whereas patients with lower level of HMGB1 had better survival (hazard ratio =1.40, 95% CI 0.98-1.82, P<0.0001). Conclusion: In this meta-analysis, our results illustrated the significant relationship of HMGB1 protein overexpression in tissues with clinicopathological characteristics and prognosis of CRC. Thus, HMGB1 may be a promising marker in predicting the clinical outcome of patients with CRC. However, more well-designed studies of large sample size are warranted to validate the findings of current study. abstract_id: PUBMED:30555263 The clinical significance of CCBE1 expression in human colorectal cancer. Purpose: The identification and discovery of prognostic markers for colorectal cancer (CRC) are of great clinical significance. CCBE1 is expressed in various tumors and its expression correlates with lymphangiogenesis and angiogenesis. However, the association between CCBE1 expression and CRC outcome has not been reported. The aim of this study was to investigate clinical significance of CCBE1 expression in CRC. Patients And Methods: CCBE1 expression was examined in 30 pairs of fresh CRC tissues and compared with adjacent normal (AN) tissues using quantitative real-time PCR (qRT-PCR), Western blotting and immunohistochemistry (IHC) staining.
Tissue microarray immunohistochemical staining was used to study the CCBE1 expression characteristics of 204 CRC patient samples collected from January 2002 to December 2007, and the relationship of CCBE1 with clinicopathological features and prognosis of CRC was analyzed. Results: CCBE1 was highly expressed in CRC tissues compared with matched AN tissues (P=0.001). Moreover, high expression of CCBE1 was significantly associated with tumor differentiation, lymph node metastasis, vascular invasion, liver metastasis and TNM stage in CRC patients (P≤0.01). Kaplan-Meier survival analysis revealed that high CCBE1 expression, poor tumor differentiation, lymph node metastasis and vascular invasion were significantly associated (all P<0.001) with poor prognosis for patients. Furthermore, univariate and multivariate Cox analysis revealed that high CCBE1 expression, poor tumor differentiation, lymph node metastasis and vascular invasion were independent risk factors for both overall survival (OS) and disease-free survival (DFS) of CRC patients (all P<0.05). OS and DFS of 267 CRC patients from The Cancer Genome Atlas (TCGA) database showed the same trend (log-rank P=6e-04, HR [high] =2.4; log-rank P=0.0081, HR [high] =1.9). Conclusion: High levels of CCBE1 contribute to the aggressiveness and poor prognosis of CRC. CCBE1 can serve as a novel potential biomarker to predict CRC patients' prognosis. Answer: The provided abstracts do not directly address the question of whether the analysis of tumor morphology in metastatic colorectal cancer has clinical significance. However, several abstracts discuss the clinical significance of various molecular and cellular markers in colorectal cancer (CRC) and their association with clinicopathological features and prognosis, which may indirectly relate to the broader question of tumor characteristics and their clinical relevance. For instance, the expression of RUNX3 in colorectal adenocarcinoma tissues was found to have a negative correlation with microvessel density (MVD) and was associated with clinicopathological features such as CEA, serosal invasion, liver metastasis, lymph node metastasis, and TNM stage (PUBMED:28107874). Similarly, the overexpression of HER2 and FOXM1 in CRC was correlated with tumor size, degree of differentiation, vascular invasion, lymph node metastasis, distant metastases, advanced TNM stage, and prognosis (PUBMED:31966520). The expression of ZBTB4 was significantly decreased in CRC tissues, particularly in patients with liver metastasis, and was associated with tumor metastasis stage and CEA levels, affecting overall survival (OS) and recurrence-free survival (RFS) (PUBMED:33116821). The study of mesenteric lymphocytes in CRC revealed that the mesenteric immune microenvironment is altered by the tumor, which can promote tumor progression (PUBMED:34604029). The reclassification of variants of uncertain significance (VUS) in cancer susceptibility genes, including those related to CRC, was shown to alter clinical management and has implications for precision cancer prevention (PUBMED:36426404). The expression of Sirt1 was associated with the depth of tumor invasion, differentiation, and tumor size in CRC (PUBMED:26893713). TUSC3 overexpression was linked to worse overall survival and was suggested as a potential prognostic marker or therapeutic target in CRC (PUBMED:30115537).
In summary, while the abstracts do not directly discuss the clinical significance of tumor morphology analysis in metastatic CRC, they do highlight the importance of various molecular and cellular markers in understanding the clinicopathological features and prognosis of CRC, which could be considered part of a broader assessment of tumor characteristics in the clinical setting.
Instruction: Does microvascularization of the footprint play a role in rotator cuff healing of the shoulder? Abstracts: abstract_id: PUBMED:26116206 Does microvascularization of the footprint play a role in rotator cuff healing of the shoulder? Background: The aim of the study was to evaluate the relationship between bone microvascularization of the footprint and tendon integrity after rotator cuff repair of the shoulder. Methods: Forty-eight patients (mean age, 59 years; ±7.9) with a chronic rotator cuff tear underwent a tendon repair with a single-row technique and were studied prospectively. A core obtained from the footprint during the procedure allowed determination of the bone's microvascularization with an immunohistochemistry technique using anti-CD34 antibodies. Clinical evaluation was performed at a minimum of 12-month follow-up, and rotator cuff integrity was assessed with ultrasound according to Sugaya's classification. Results: At a mean follow-up of 13 months, the Constant score improved from 40 to 75 points; American Shoulder and Elbow Surgeons score, from 59 to 89 points; and subjective shoulder value, from 38% to 83% (P < .001). Ultrasound identified 18 patients with Sugaya type I healing, 27 patients with type II, and 3 patients with type IV. No patients showed Sugaya type III or V repairs. The rate of microvascularization of the footprint was 15.6%, 13.9%, and 4.2% for type I, II, and IV tendon integrity, respectively (I vs. II, P = .22; II vs. IV, P = .02; I vs. IV, P = .0022). Patients with a history of corticosteroid injection had a lower rate of microvascularization than the others (10.3% vs. 16.2%; P = .03). Conclusions: Even if overall satisfactory clinical outcomes are achieved after a rotator cuff repair, bone microvascularization of the footprint plays a role in rotator cuff healing. A lower rate of microvessels decreases the tendon integrity and healing potential after repair. abstract_id: PUBMED:36157086 Does Complete Footprint Coverage Affect Outcomes After Conventional Arthroscopic Repair of Large-Sized Rotator Cuff Tears? Background: In large-sized rotator cuff tears, tendon repair with incomplete footprint coverage is performed frequently as a way of tension-free or low-tension repair. Purpose: To compare clinical outcomes after arthroscopic repair of large-sized rotator cuff tears between patients with complete versus incomplete footprint coverage. Study Design: Cohort study; Level of evidence, 3. Methods: Among 297 patients who underwent arthroscopic surgery for a large-sized rotator cuff tear, we selected 58 patients (<50% coverage; mean age, 63.34 ± 6.8 years; 34 men and 24 women) with incomplete footprint coverage. Using propensity score matching, another 58 patients with complete footprint coverage (mean age, 63.4 ± 8.03 years; 34 men and 24 women) were selected after 1:1 matching for age, sex, and tear size-the main demographic and prognostic factors of outcomes after rotator cuff repair. Clinical outcomes were compared on magnetic resonance imaging or ultrasonography at minimum of 6 months postoperatively, and functional outcomes were compared using range of motion and pain visual analog scale; American Shoulder and Elbow Surgeons; Constant; University of California, Los Angeles; and Simple Shoulder Test scores at a minimum of 2 years postoperatively.
Results: A total of 18 patients in the incomplete footprint coverage group (31.0%) and 20 patients in the complete footprint coverage group (34.5%) showed healing failure, with no significant difference between groups (P = .843). In addition, there were no differences in functional outcomes between groups (P > .05 for all). Conclusion: Whether the rotator cuff footprint was completely covered did not affect clinical outcomes in conventional arthroscopic repair of large-sized rotator cuff tears. abstract_id: PUBMED:38385225 Effect of stump-preserving repair on rotator cuff healing and shoulder function for degenerative total rotator cuff tears. Objective: To analyze the effect of stump-preserving repair on rotator cuff healing and shoulder function for degenerative total rotator cuff tears. Methods: The clinical data of 152 patients with degenerative total rotator cuff tears, who underwent arthroscopic repair between April 2019 and May 2022, were retrospectively analyzed. There were 76 males and 76 females with an average age of 55.4 years (range, 24-78 years). MRI was performed at 6 months postoperatively to evaluate the rotator cuff healing according to the Sugaya classification. Pre- and intra-operative related factors were included for univariate analysis, including age (≥60 years/<60 years), gender (male/female), passive activity disorder (yes/no), disease duration (≤3 months/>3 months), stump-preserving repair (yes/no), use of suture bridge technique (yes/no), shoulder joint abduction angle at knotting (<45°/≥45°), acromioplasty (yes/no), glucocorticoid injection (yes/no), time for patients to start postoperative passive exercise (≤2 weeks/>2 weeks), and time for patients to start postoperative active exercise (≤3 months/>3 months). The influencing factors of tendon healing were screened; further logistic regression was used to conduct multivariate analysis to screen for risk factors. Two sets of data were balanced by propensity score matching. The American Shoulder and Elbow Surgeons (ASES) score and Constant-Murley score of shoulder joint function at 6 and 12 months postoperatively, as well as rotator cuff healing rate at 6 months postoperatively, were compared between groups based on whether or not stump-preserving repair was used. Results: All patients were followed up 12-33 months (mean, 23.8 months). MRI at 25-31 weeks postoperatively showed 121 cases of rotator cuff healing and 31 cases of non-healing. Univariate analysis showed that the disease duration, stump-preserving repair, shoulder joint abduction angle at knotting, and the time for patients to start postoperative active exercise were the influencing factors of rotator cuff healing (P<0.05). Multivariate analysis showed that non-stump-preserving repair, shoulder abduction angle more than 45° at knotting, and the time to start active exercise within 3 months postoperatively were risk factors affecting rotator cuff healing (P<0.05). A total of 51 pairs of cases were matched based on the grouping criteria of whether the disease duration exceeded 3 months, whether the shoulder abduction angle at knotting exceeded 45°, and whether the time to start postoperative active exercise exceeded 3 months. The rotator cuff healing rate, ASES score, and Constant-Murley score of the preserving repair group at 6 months postoperatively were superior to those of the non-preserving repair group, and the differences were significant (P<0.05).
There was no significant difference in ASES score and Constant-Murley score between the two groups at 12 months postoperatively (P>0.05). Conclusion: For degenerative total rotator cuff tears, the stump-preserving repair can shorten the healing time and promote the shoulder function recovery, but has no significant effect on shoulder function at 1 year postoperatively. abstract_id: PUBMED:32700497 Progress on improving tendon-to-bone healing for the enthesis of rotator cuff. Rotator cuff repair is a common treatment for rotator cuff tear, which could effectively relieve shoulder pain and improve shoulder movement, and the incidence of rotator cuff retear after rotator cuff repair is still high. The main reason is poor tendon-bone healing in rotator cuff enthesis after rotator cuff repair and could not recover the original histological structure and biomechanical properties. Therefore, the key to solve the problem is how to effectively improve the healing of tendon bone at the end of rotator cuff. With the in-depth study of rotator cuff enthesis, various treatments have made great progress on improving tendon to bone healing of rotator cuff. Our study will discuss the research progress on tendon to bone healing of rotator cuff in recent years from three following aspects to provide some guidance for the clinical treatment of rotator cuff tear: the factors affecting the tendon to bone healing of rotator cuff, the recovery of tendon to bone interface promoting the tendon to bone healing of rotator cuff and the application of tissue engineering in tendon to bone healing. abstract_id: PUBMED:37949395 Estimation of the carbon footprint of arthroscopic rotator cuff repairs in France. Aim: The main objective of this study was to estimate the carbon impact of arthroscopic rotator cuff repairs in France. The secondary objective was to assess the effectiveness of the following measures in reducing the carbon footprint associated with this technique: outpatient treatment, arthroscopic water filtration, surgery under locoregional anesthesia. Hypothesis: The hypothesis was that the carbon footprint could be significantly improved with the implementation of these three procedures. Methods: A continuous series of 26 patients who underwent surgery for a rotator cuff tear involving only one tendon between November 2020 and April 2021 were included. The evaluation protocol consisted of three parts: 1/ use of volatile anesthetic agents; 2/ electrical consumption linked to the procedure; 3/ emissions related to patient and staff travel, delivery of implants and waste management. Another series of 26 patients operated between November 2018 and April 2019 who had none of these three factors were matched. Results: The carbon impact of arthroscopic repair of the rotator cuff was estimated at 334.61 ± 18.82 kgCO2eq. The implementation of the three methods for improvement made it possible to significantly reduce emissions by 40.9 ± 1.71 kgCO2eq (12.2%) (p<0.001). Conclusion: Performing surgery under locoregional anesthesia, on an outpatient basis with water purification, reduces the carbon impact of arthroscopic rotator cuff repair by more than 12%. Level Of Evidence: III, retrospective case control. abstract_id: PUBMED:33964431 Nontendinous healing after repairing of retracted rotator cuff tear: an imaging study.
Background: Follow-up magnetic resonance imaging (MRI) after rotator cuff repair can sometimes demonstrate healing with nontendinous tissue that extends from the footprint to the retracted tendon end, which is inferred as fibrous tissue formation. The aim was to investigate this particular finding and its significance. Methods: There were 494 eligible cases of healed supero-posterior medium-sized to massive rotator cuff repairs, after the exclusion of retears. A retrospective review was performed for the 3 groups that were divided according to their MRI appearances of healing: type I described the direct healing of the tendon to the footprint, whereas type II demonstrated the distinctive continuity of nontendinous tissue from the footprint to the retracted tendinous portion, and type III also showed a similar appearance but with obvious thinning of the tissue, without any evidence of defect confirmed on the routine outpatient ultrasonograph. Results: Only 108 of 494 patients (21.9%) demonstrated type I healing, whereas the signs of nontendinous healing were evident for the rest, with the 116 patients (23.5%) being classified as type III with attenuation. Comparing the preoperative tendon retraction, 34.8% and 37.2% of the Patte stages 2 and 3, respectively, resulted in type III healing, which were significantly higher compared with that of stage 1 (15.3%, P < .001). Type III healing had the highest average preoperative Goutallier grades. The average postoperative visual analog scale and the American Shoulder and Elbow Surgeons (ASES) scores have improved significantly for all 3 groups (P < .05), with the ASES score being 86.1±15.9 for type I, 93.7±36.1 for type II, and 87.8±15.1 for type III without significant differences between the groups (P = .3). Conclusions: Only a fifth of the rotator cuff repairs led to a direct healing to the footprint, and the rest healed with MRI appearance of nontendinous tissue formation bridging the retracted tendinous portion and the footprint. These MRI appearances did not represent the true tendinous tissue formation between the torn end of the tendon and the bone after healing. Such appearances did not seem to affect the clinical outcomes. abstract_id: PUBMED:23760681 Microfractures at the rotator cuff footprint: a randomised controlled study. Purpose: Microfractures at the footprint may be a potential additional source of growth factor and enhance the tendon healing at the bone-tendon junction when repairing rotator cuff tears. Methods: Fifty-seven patients who underwent shoulder arthroscopy for repair of complete rotator cuff tears were randomly divided into two groups, using a block randomisation procedure. Patients underwent microfracture at the footprint in the treatment group. The patients in the control group (n = 29) did not receive that treatment. All patients had the same post-operative rehabilitation protocol. Results: The two groups were homogeneous. There was a significant improvement from baseline to the last minimum follow-up of two years. At three months from the index procedure, visual analogue scale (VAS), range of motion (ROM) and University of California at Los Angeles (UCLA) and Constant scores were significantly better in group 1 than in group 2 (P < .05). At the last follow-up (minimum two years), clinical and functional outcomes were further improved in both the groups but inter-group differences were not significant. No technique-related complications were recorded.
Conclusions: Microfractures at the footprint are simple, safe, inexpensive and effective at producing less pain in the short term in patients who undergo rotator cuff repair, but at two years they do not result in significantly different outcomes, either clinically or at imaging, compared to traditional rotator cuff repair. abstract_id: PUBMED:14608329 Double-row arthroscopic rotator cuff repair: re-establishing the footprint of the rotator cuff. Recently, there has been an increased interest in the normal anatomy of the rotator cuff footprint and the re-establishment of the footprint during rotator cuff repair. Single-row suture anchor techniques have been criticized because of their inability to restore the normal medial-to-lateral width of the rotator cuff footprint. In this report, the authors describe a double-row technique for rotator cuff repair that re-establishes the normal rotator cuff footprint, increases the contact area for healing, and may potentially improve clinical results. abstract_id: PUBMED:34890864 Weight-bearing shoulder and rotator cuff tear. Two percent of the general population are wheelchair-dependent. The shoulder takes on the weight-bearing locomotor function, and tends with age to develop degenerative pathologies, notably in the rotator cuff. The association between weight-bearing shoulder and rotator cuff tear raises several questions: what are the mechanisms by which wheelchair propulsion and transfer overload the shoulder, and what specificities do the lesions display? They occur in younger patients than in the rest of the population, after about 15 years' fairly constant wheelchair use. As well as the classical supraspinatus damage, an anterior cuff extension is the most frequent case. Is there a particular clinical presentation of cuff tear in this population? As the shoulder cannot be functionally protected, pain is constant and asymptomatic cuff tear is rare. Any pain after 12 years' wheelchair use requires morphologic exploration. How does treatment strategy differ for the weight-bearing shoulder? More than the classic quest for compensation, in the weight-bearing shoulder, the rotator cuff should be protected against any local aggression by systematic surgical recalibration of the subacromial space, and all lesions should be repaired, to avoid extension. Are functional results to be expected to be poorer in this population? On condition that the suture is protected by postponing any transfer beyond 4 months and that the shoulder is protected on a daily basis, analytic results and healing rates are comparable to those in the general population. However, general complications and skin complications are more frequent and require postoperative care in a specialized center. abstract_id: PUBMED:33967091 Animal models for study on rotator cuff injury. Rotator cuff injuries are the most common cause of shoulder pain and dysfunction. Ideal animal shoulder models should have similar shoulder anatomy and function as human, and are able to replicate the microenvironment change after tendon injury. At present, a variety of animal models including rat, mouse, rabbit, sheep, canine, bovine, and primate have been used to study the mechanism of rotator cuff injury, effects of different repair techniques, and factors affecting tendon to bone healing. Although large animal models are more anatomically similar to humans, small animal models are more convenient in revealing the biological mechanism of rotator cuff injury and healing. 
Choosing appropriate animal models based on research objectives and establishing new small animal models play a critical role in revealing the mechanism of rotator cuff diseases and developing novel treating strategies. Answer: Yes, microvascularization of the footprint does play a role in rotator cuff healing of the shoulder. A study that evaluated the relationship between bone microvascularization of the footprint and tendon integrity after rotator cuff repair found that a lower rate of microvessels in the footprint was associated with decreased tendon integrity and healing potential after repair. The study involved 48 patients who underwent tendon repair with a single-row technique, and bone microvascularization was determined using an immunohistochemistry technique with anti-CD34 antibodies. The results showed that the rate of microvascularization was 15.6%, 13.9%, and 4.2% for Sugaya type I, II, and IV tendon integrity, respectively, indicating that higher microvascularization was associated with better tendon healing (Sugaya type I and II) compared to lower microvascularization (Sugaya type IV). Additionally, patients with a history of corticosteroid injection had a lower rate of microvascularization than others (PUBMED:26116206).
Instruction: Adolescent physical activity and perceived competence: does change in activity level impact self-perception? Abstracts: abstract_id: PUBMED:17448405 Adolescent physical activity and perceived competence: does change in activity level impact self-perception? Purpose: To determine whether change in physical activity level impacts adolescents' self-perceptions. Methods: Using questionnaire responses from the Growing Up Today Study (GUTS) in 1997 and 1999, we evaluated data from 5260 girls and 3410 boys. Physical activity changes were compared to changes in perceived competence in three domains (social, athletic, and scholastic) as well as in global self-worth. Analyses controlled for sibling clustering as well as for potential confounders, including body mass index, cigarette smoking, baseline activity levels, and baseline self-perception scores. Results: For girls and boys, increase in physical activity was positively associated with change in social and athletic (p < .0001), but not scholastic or global, self-perception. Compared to those with little or no change in activity, those who increased physical activity were more likely to have increased self-perception measures. Girls who increased physical activity by 5 or more hours/week were at least 33% more likely to have increased social self-perception, and at least 44% more likely to have increased athletic self-perception. In boys, those who increased activity by 10 or more hours/week were 45% more likely to have increased social self-perception. The reverse was also true; for both girls and boys, those with decreased physical activity were more likely to have decreased self-perception scores. Conclusions: This research indicates that increased physical activity has a positive impact on athletic and social self-perception in girls and boys. abstract_id: PUBMED:16707779 Perceptions of physical and social environment variables and self-efficacy as correlates of self-reported physical activity among adolescent girls. Objective: This cross-sectional study examined the direct and indirect effects of perceived equipment accessibility, neighborhood safety, and social support on self-reported physical activity among older adolescent girls. Methods: Adolescent girls (n = 1,655) who were in the 12th grade completed a battery of questionnaires that included self-report measures of the perceived physical environment, social support, barriers self-efficacy, and physical activity. Results: Perceived neighborhood safety did not exhibit direct or indirect effects on self-reported physical activity. Perceived equipment accessibility exhibited an indirect effect on self-reported physical activity that was accounted for by barriers self-efficacy. Perceived social support exhibited direct and indirect effects on self-reported physical activity; the indirect effect was accounted for by barriers self-efficacy. The relationships were independent of parental education and body mass index (BMI). Conclusions: Perceived physical environmental factors indirectly influenced self-reported physical activity, and perceived social environmental factors both directly and indirectly influenced self-reported physical activity in this sample of older adolescent girls. abstract_id: PUBMED:38315900 Relationship Between Physical Activity Level and Stress Perception: Exploring Factors During COVID-19 Pandemic.
Background: After the difficulty of the pandemic process, managing the long-term effects that may occur after the coronavirus disease 2019 (COVID-19) is among the biggest concerns in the present era. This study aimed to explore factors affecting the physical activity level and investigate the relationship between physical activity level and stress perception of university students during the COVID-19 pandemic. Study Design: A cross-sectional study. Methods: Study data were collected via online survey, and volunteer participants completed the survey through the survey link between October 21 and December 31, 2021. Physical activity level was evaluated by "The International Physical Activity Questionnaire-Short Form" (IPAQ-SF), and stress level was evaluated by "Perceived Stress Scale" (PSS). The participants were asked questions about barriers and motivators for physical activity before and during the pandemic. Results: The study included 444 participants (81.3% female and 18.7% male) with an average age of 21±2.95 years. The results showed a negative-significant linear relationship between perceived stress score and total physical activity, vigorous-intensity physical activity, and walking scores (r=-0.157, P<0.01; r=-0.16, P<0.01; r=-0.13, P<0.05 respectively). During the pandemic, the perception of insufficient finance became less important as a barrier (P=0.029), and healthcare professional (HCP) recommendation became more important as a motivator for physical activity than the pre-pandemic conditions (P=0.035). Conclusion: The findings indicated that it is possible to reduce the level of perceived stress by increasing the level of physical activity. Current research will be a key for increasing and maintaining physical activity and reducing perceived stress. abstract_id: PUBMED:35162206 Level of Physical Activity and Its Relationship to Self-Perceived Physical Fitness in Peruvian Adolescents. Background: Physical activity and physical fitness play an important role in adolescence. Both are considered to be indicators of the current and future health status of young adults. The main objective of this article was to report the normative values of the Physical Activity Questionnaire for Adolescents (PAQ-A) and the International Fitness Scale (IFIS) instruments in Peruvian school adolescents. Methods: A sample of 1229 participants (622 girls and 607 boys) aged between 12 and 17 years was used. The type of study was descriptive-comparative. All measures used were obtained by means of self-administered instruments. The PAQ-A was used to assess the level of physical activity and the IFIS to assess the self-perceived physical fitness level of the adolescents. Results: It was observed that the PAQ-A questionnaire results obtained from the total sample was 2.34; significantly higher for boys (2.41) compared with girls (2.27). For the IFIS, the total score was 3.07, with boys obtaining 3.13 and girls 2.97. Conclusions: It was concluded that there was a direct relationship between the level of PA and self-perceived PF in Peruvian adolescents. Furthermore, adolescent boys were more physically active than girls and they had a better self-perceived PF with the exception of flexibility. Finally, there was a higher weight category involved at the lower level of PA.
The present article aimed to test a predictive model based on children's perception of autonomy support exercised by their physical education teachers in establishing a state of high motivational quality, which in turn leads to greater intention and physical activity. Participants were 502 elementary school students aged 9 to 11 years (52.59% males (9.47 ± 0.53 years old) and 47.41% females (9.54 ± 0.53 years old)), who completed a self-reported questionnaire of perceived autonomy support from physical education teachers, self-determination, intention to be physically active, and physical activity. Results showed that perceived autonomy support was positively related to autonomous motivation, which in turn was associated with intention and physical activity. In addition, the model did not present variations with respect to gender and confirmed the indirect effects of autonomous motivation on intention and practice of physical activity. Finally, we conclude on the importance of an adequate perception of autonomy support in students by their teachers for the establishment of an appropriate motivational state that could promote greater intention and physical activity. Thus, the motivational state acquired in physical education classes can be translated into intention and practice of physical activity in different contexts of students' daily lives. abstract_id: PUBMED:26671711 Perceived physical competence towards physical activity, and motivation and enjoyment in physical education as longitudinal predictors of adolescents' self-reported physical activity. Objectives: The aim of the study was to investigate if adolescents' perceived physical competence towards physical activity (PA), and autonomous motivation and enjoyment in physical education (PE) during early adolescence can predict amount and intensity of self-reported physical activity six years later. Design: This study utilized a 6-year longitudinal data set collected within Finnish school settings. Students responded to questionnaires measuring their perceived physical competence towards physical activity, and autonomous motivation and enjoyment in PE during their first year at middle school (Grade 7), and their PA engagement during their last year in high school (Grade 12). Methods: A sample of 333 students (200 girls, 133 boys; M age=12.41, years, SD=.27) participated in the study. Perceived physical competence in physical activity was assessed by the sport competence dimension of the Physical Self-Perception Profile, autonomous motivation in PE was assessed by the Sport Motivation Scale and enjoyment in PE by the Sport Enjoyment Scale. Students' self-reported metabolic equivalent (MET) and PA intensity (light [LPA], moderate [MPA], vigorous [VPA]) was calculated from the short form of International Physical Activity Questionnaire. Results: Perceived physical competence towards physical activity significantly predicted total METs (β=.28), MPA (β=.18) and VPA (β=.29) six years later. Autonomous motivation and enjoyment in PE at Grade 7, however, were not significant predictors of later PA. Conclusions: The results of this study support the proposition that self-perception of an individual's abilities arising from interactions with the environment related to PA during early puberty has an influential effect on later PA behaviour. abstract_id: PUBMED:33924507 The Relationship between Physical Activity, Self-Perceived Health, and Cognitive Function in Older Adults. 
There are consistent associations between physical activity and self-perceived health. However, it is not clear whether associations between self-perceived health and participation in physical activity could be accounted for by associations with cognitive function. In the present study, we examined whether associations between physical activity and cognitive functioning could explain the variability between physical activity and self-perceived health. A sample of 204 older adults performed three cognitive tests selected from the Vienna test system battery: The Determination, Cognitrone, and Visual Memory tests. These tests measure general processing speed, attention, and visual memory, respectively. Participants also completed the 12-item Short Form Health Questionnaire SF-12 to measure perceived health, and the Physical Activity Survey for the Elderly to measure physical activity. Linear regressions and the PROCESS macro for SPSS were used to test our hypotheses. Consistent with our hypotheses, processing speed accounted for significant variance in the relationship between physical activity and self-perceived health. This suggests that cognitive processing speed might be an indirect path by which physical activity relates to enhanced health perceptions. The results demonstrate that associations between physical activity and self-perceptions of health are related to a fundamental cognitive process. abstract_id: PUBMED:32721187 Body weight perception and physical activity among young adults: Analysis from the national longitudinal study of adolescent to adult health. Objective: To identify the young adults' perception of their weight and examine the associations between young adults' weight perception and physical activities. Participants: A total of 4,882 participants' responses were derived from the Add Health, Wave III. Method: Chi-square tests were used to examine the weight perception and multinomial logistic regression used to assess the association between weight perception and physical activities. Results: Perceived weight and weight management among male participants were significantly different from female participants (p<.001, p<.001, respectively). Young adults with self-perception of being "slightly overweight" and "very overweight" were more likely of participating in physical activities such as gymnastic/weight lift/strength (OR = 0.91, p <.001 and OR = 0.77, p<.001) and walking (OR = 1.08, p<.001 and OR = 1.13, p<.001). Conclusion: The study showed that physical activities significantly increased the probability of accurate perceived weight. Steps should be taken to implement interventions that increase accurate weight perception and promote physical activity among young adults. abstract_id: PUBMED:31404568 Perception of parents' physical activity as a positive model on physical activity of adolescents. Physical activity (PA) promotes an adequate level of health in adolescents. The PA behaviors moderation is led by the adolescents' perception of their parents. We suppose that higher adolescents' perception of their parents' physical activity influences the physical activity spent in adolescents. The objective of this study was to determine the association between adolescents' perception of their parents' PA practice with adolescent's PA in a sample of public schools in the Morelos state of Mexico. We analyzed the information of 4625 adolescents from the baseline of a cohort study of students in public schools in Mexico.
We evaluated adolescents' perceptions of their parents PA and PA practice of the adolescents. A logistic regression analysis stratified by sex was carried out to evaluate the association between adolescents' perceptions of their parents PA, and adolescents' PA. We found that the PA of female adolescents was associated with their perceptions of both parents' PA (OR = 2.48; IC95% 1.84-3.35). Adolescents' perceptions of their mother's PA was associated with their PA (daughters OR = 1.69; IC95% 1.33-2.15; sons OR = 1.87 IC95% 1.41-2.45). In conclusion, adolescents' perceptions of their parents' PA practice were associated with higher adolescents' PA. Higher adolescents' perception of their parents PA may play an important role as a positive model for the adolescent's PA practice (p < 0.001). Finally, parental PA and the perceptions that adolescents have of their parents' PA must be considered in devising general PA programs. abstract_id: PUBMED:12831253 Relations among physical activity, physical fitness, and self-perceived fitness in Hong Kong adolescents. The relationships between physical fitness, activity, and self-perceived fitness in adolescents in Hong Kong were investigated. Regular physical activity patterns, self-perceptions of fitness, and health-related physical fitness were measured in 201 secondary school students (M age = 13.8 yr.). The battery of fitness tests measured cardiovascular capacity, flexibility, muscular strength, muscular endurance, and body fat composition. The boys had a higher self-perception of their own fitness and exercised more outside school than the girls. Participation in physical activity was related to cardiovascular capacity, muscular strength, and body composition. Linear regression analysis indicated that exercise activity and body composition explained a portion of variance in the subjects' self-perceived fitness. These findings highlight the connections between physical activity, fitness, and self-perception of fitness and consequently the need for understanding the concept for better health-promotion strategies. Answer: Yes, changes in physical activity levels do impact self-perception among adolescents. Research has shown that for both girls and boys, an increase in physical activity is positively associated with changes in social and athletic self-perception, although not necessarily with scholastic or global self-perception (PUBMED:17448405). Girls who increased their physical activity by 5 or more hours per week were at least 33% more likely to have increased social self-perception and at least 44% more likely to have increased athletic self-perception. Similarly, boys who increased their activity by 10 or more hours per week were 45% more likely to have increased social self-perception. Conversely, those with decreased physical activity were more likely to have decreased self-perception scores (PUBMED:17448405). Additionally, perceived physical competence towards physical activity has been shown to significantly predict the total metabolic equivalents (METs), moderate physical activity (MPA), and vigorous physical activity (VPA) six years later, indicating that self-perception of abilities related to physical activity during early puberty has an influential effect on later physical activity behavior (PUBMED:26671711).
Furthermore, there is evidence that adolescents' perception of their parents' physical activity can influence their own physical activity levels, suggesting that higher perception of parental physical activity serves as a positive model for adolescents (PUBMED:31404568). In summary, changes in physical activity levels among adolescents are linked to changes in their self-perception, particularly in social and athletic domains, and these changes can have long-term effects on their physical activity behaviors. Additionally, the perception of parental physical activity also plays a role in shaping adolescents' physical activity levels and self-perception.
Instruction: Unstable angina in octogenarians: is an invasive approach feasible and effective? Abstracts: abstract_id: PUBMED:11412773 Unstable angina in octogenarians: is an invasive approach feasible and effective? Background And Objective: Octogenarian patients with unstable angina are usually managed more conservatively despite having a worse prognosis. Studies of balloon angioplasty in the elderly have demonstrated a higher incidence of adverse events but the new advances (mainly stenting) have improved the results. We evaluated the efficacy and safety of an invasive approach in octogenarians with unstable angina. Patients And Method: From January 1996 to October 1999, 100 patients at least 80 years old with unstable angina were admitted to our unit and among these, 74 (74%) underwent percutaneous revascularization. We evaluated immediate results, in-hospital events and clinical follow-up. Results: A total of 145 lesions were treated in 74 patients. The stent implantation rate was 79%. The success rate was 92%. Two patients died during hospitalization due to cardiac causes and 1 patient had a non-Q infarction. At follow-up, 24 ± 12 months (range: 4-50 months), 14 patients died (19.4%). New revascularization was performed in 10 patients (13.5%), 9 with PTCA and 1 with surgery. The survival rate free of death and infarction in the first year was 89.2%. At the end of follow-up 58 patients were alive (78.4%), 45 asymptomatic and 13 had stable angina, class I or II. Conclusions: The results of stent implantation in octogenarians were good with a 92% procedural success. Ninety-six percent of patients were free of death and infarction during hospitalization and 78.4% of the patients remained alive, most of them asymptomatic at the end of follow-up. abstract_id: PUBMED:15789393 Outpatient coronary angioplasty: feasible and safe. This study tested the safety and feasibility of coronary angioplasty on an outpatient basis. The purpose of this approach includes cost-effectiveness and patient comfort. Six hundred forty-four patients were randomized to either transradial or transfemoral PTCA using 6 Fr equipment. Patients were triaged to outpatient management based on a predefined set of predictors of an adverse outcome in the first 24 hr after initially successful coronary angioplasty. Three hundred seventy-five patients (58%) were discharged 4-6 hr after PTCA; 42% stayed in hospital overnight. In the outpatient group, one adverse event occurred (subacute stent thrombosis 7 hr postdischarge, nonfatal myocardial infarction). There were no major vascular complications. In the hospital group, 19 patients (7%) sustained an adverse cardiac event in the first 24 hr; 1 patient died. Patients treated via the femoral route had more (minor) bleeding complications (19 patients; 6%); in 17 of these, this was the sole reason that discharge was delayed. PTCA on an outpatient basis, performed via the radial or the femoral artery with low-profile equipment, is safe and feasible in a considerable part of a routine PTCA population. A larger proportion of transradial patients can be discharged due to a reduction in (minor) bleeding complications.
Conventional radial access (cRA) for percutaneous coronary intervention (PCI) has become the current standard due to low bleeding complications, although recently, distal radial access (dRA) has attracted attention as an alternative. Here, the usefulness of dRA is shown in a case in whom neither side could be used for cRA. The patient was a woman in her 70s diagnosed with unstable angina pectoris at another hospital. Although ad hoc PCI was attempted via her right forearm radial artery, her hemodynamics deteriorated and the procedure was abandoned. After an intra-aortic balloon pumping device was inserted via the left femoral approach and hemostasis was established with a dedicated device, the patient was transferred to our hospital. Her right radial artery was being used for hemostasis and her left radial artery was poorly palpable. Because her right distal radial artery was palpable, access via that location was attempted after confirming sufficient blood vessel diameter and blood flow by ultrasound. A 6Fr sheath was inserted and PCI was safely accomplished. Hemostasis on dRA was completed without complications using a hemostasis device. Thus, dRA may be an option as an alternative access site in an emergency. Learning objective: Radial access is recommended by the guidelines for coronary intervention in order to prevent access site complications. We report a case of unstable angina successfully treated for a left main lesion via the distal radial artery during hemostasis of the forearm radial artery on the same side. Distal radial access, little considered for catheterization, may be an alternative option when conventional radial access cannot be employed. It is important to share this conclusion with all physicians. abstract_id: PUBMED:10652676 Outpatient coronary stenting: femoral approach with vascular sealing. Miniaturized devices and pressures for increased patient convenience and lowered cost have shortened length of stay for coronary interventions. A cohort of 60 patients was recruited to assess the feasibility of outpatient stenting with vascular sealing. Patients with stable and unstable angina or myocardial infarction > 24 hours were considered for this strategy. Mean time to hemostasis, ambulation and discharge were 6.1, 256 and 296 minutes, respectively, for the 6F group, and 11.0, 351 and 489 minutes for the 7 to 8F group. No acute procedural complications occurred, and there were no ischemic complications at 24 hours or 1 month. There was 1 pseudoaneurysm requiring surgical correction, but no other access site complication requiring treatment. The cost saved using the 6F approach is estimated at $478 and using the 8F approach, $437. Outpatient stenting using vascular sealing is feasible and safe, and may lead to significant nationwide cost reductions in the range of $40,000,000 yearly. abstract_id: PUBMED:31043983 Very distal transradial approach (VITRO) for coronary interventions. Introduction: Radial access is a standard approach for coronary interventions. However, it carries some risk of local or long-term complications such as hematoma or radial artery occlusion. Aim: To assess the feasibility of a very distal left and right transradial approach (VITRO) for coronary interventions. Material And Methods: Three hundred and twenty consecutive patients were submitted to diagnostic or therapeutic coronary interventions. In 102 patients the distal radial artery was not palpable or the pulse was too weak.
In 218 selected patients (142 male, 76 female, age: 69 ± 11 years) we decided to perform a distal transradial approach. Results: The VITRO access was suitable in 195 patients with a success rate of 89.4%. In 9 patients arterial puncture failed, while in 14 others despite successful arterial puncture the wire could not be advanced towards the forearm part of the radial artery. Not only elective diagnostic angiographies were performed with VITRO but also urgent ad hoc coronary interventions in subjects with unstable angina or NSTEMI (48 subjects; 24.3%). Moreover, this very distal approach allowed 11 rotablations and 11 FFR/iFR examinations to be performed in 22 patients. No major bleeding requiring prolonged hospital stay, surgery or transfusion occurred. One patient on oral anticoagulation with DAPT had conservatively managed minor forearm bleeding. Conclusions: Very distal radial artery access is feasible, safe and comfortable in 60% of patients referred for elective or urgent coronary artery angiography, or coronary interventions. abstract_id: PUBMED:22899624 Transradial retrograde approach rescuing iatrogenic long spiral dissection during chronic total occlusion intervention. Percutaneous coronary intervention (PCI) of chronic total occlusion (CTO) is one of the greatest challenges in coronary interventions. A retrograde approach via the collateral channel has been recently proposed to improve the success rate of PCI in CTO lesions of the coronary arteries. We describe an accidental complication encountered during transradial PCI to recanalize right coronary artery CTO in a patient with unstable angina. A long spiral dissection has been created by antegrade wiring and extended from the ostium all the way down to mid RCA segment. Subsequent attempts with antegrade wiring into the true lumen were unsuccessful. Ad-hoc retrograde recanalization has been employed to rescue the vessel via septal collateral from left anterior descending artery. Retrograde wiring and dilatation were performed followed by successful antegrade wiring into the true lumen under IVUS guidance, which revealed significant intramural hematoma extending distally to the posterolateral branch. Bailout stenting was achieved with sealing of the multiple entry and exit sites created by the spiral dissection and complete coverage of the intramural hematoma. This report highlights the role of the retrograde approach as a rescue option in the setting of complicated antegrade approach and to improve the success rate of CTO-PCI. Moreover, IVUS was a valuable tool to confirm the true lumen course of the successful wire and to guide the stenting procedure. abstract_id: PUBMED:12655961 Effective relief of acute coronary syndrome. Effective and early pain relief remains a clinical priority; presently, no one agent offers the ideal solution to controlling the pain of ACS. The early use of opiates, betablockers and nitrates, as well as reassuring patients, have vital roles to play in providing effective analgesia. abstract_id: PUBMED:9814803 Transmyocardial holmium laser revascularization: feasibility of a thoracoscopic approach. Objective: Creation of transmyocardial channels from the epicardium to the left ventricular cavity with the use of a laser is a modern approach in the treatment of patients with chronic ischemic heart disease unsuitable for coronary angioplasty or bypass grafting.
We present the results of transmyocardial laser revascularization (TMLR) with a holmium laser as sole therapy in 22 patients operated on between November 1995 and February 1997. Methods: There were five females (23%) and 17 males (77%), with a mean age of 67+/-7 years (range 53-74 years). Previous myocardial revascularization had been performed in 77% of the patients. Pre-operatively, 12 patients (55%) were in angina class III and ten (45%) in class IV (mean 3.5+/-0.5); unstable angina was present in seven patients (32%). In 20 patients, TMLR was performed through a limited thoracotomy, while in two a thoracoscopic approach was used. Each patient received a mean of 33+/-8 channels in 27+/-13 min, while total operation lasted 130+/-28 min. Results: There were no hospital deaths and no major post-operative complications. Mean hospital stay was 7+/-3 days; the two patients undergoing thoracoscopic TMLR were discharged after 4 and 5 days, respectively. Two deaths were observed after 40 days and 4 months after TMLR, due to stroke and myocardial infarction. Mean follow-up of current survivors is 10+/-6 months (range 3-15 months), with seven patients followed for over 12 months. At last follow-up, mean angina class is 1.9+/-0.6 (P < 0.001). A significant increase in exercise tolerance and a reduction of the number of hospitalizations for angina were also observed. However, no significant changes in myocardial perfusion were observed. Conclusions: The present study demonstrates that: (1) TMLR with a holmium laser yields clinical improvement in the majority of patients with severe angina unsuitable for conventional surgical treatment, (2) gratifying results in terms of improved anginal status and exercise tolerance are achieved, despite the lack of significant changes in myocardial perfusion at early follow-up and (3) TMLR through a thoracoscopic approach is a feasible procedure. abstract_id: PUBMED:10731376 Is a more efficient operative strategy feasible for the emergency management of the patient with acute chest pain? Patients with acute chest pain are a common problem and a difficult challenge for clinicians. In the United States more than 5 million patients are examined in the emergency department on a yearly basis, at a cost of 6 billion dollars. In the CHEPER registry the prevalence of patients with chest pain in the Emergency Department was 5.3%. Similarly, in 1997 at our institution the prevalence was 4.8%. Only 50% of the patients are subsequently found to have cardiac ischemia as the cause of their symptoms and 50-60% of them showed a non-diagnostic electrocardiogram (ECG). Twenty-five to 50% of chest pain patients are not appropriately admitted to the hospital and despite this conservative approach, acute myocardial infarction is misdiagnosed in up to 8% of patients with acute chest pain who are released from the emergency department without further evaluation, accounting for approximately 20% of emergency department malpractice in the United States. Important diagnostic information is covered by the patient's medical history, physical examination, and ECG, but often this approach is inadequate for a definitive diagnosis. Creatine kinase (CK) and CK isoenzyme--cardiac muscle subunit (CK-MB)--are traditionally obtained in the emergency department in patients admitted for suspected acute coronary syndrome. Mass measurements of CK-MB have improved sensitivity and specificity, and to date this is the gold standard test for diagnosis of acute myocardial infarction.
CK-MB, however, is not a perfect marker because it is not totally cardiac specific and does not identify patients with unstable angina and minimal myocardial damage. There are no controlled clinical impact trials showing that these tests are effective in deciding whether to discharge or to appropriately admit the patient with suspected acute coronary syndrome. Relevant investigative interest has recently been focused on new markers for myocardial injury, including myoglobin, cardiac troponins T and I. Myoglobin, a sensitive but not specific marker for cardiac damage, increases earlier than CK-MB and cardiac troponins. It should be used early after symptom onset and in conjunction with a more specific marker of myocardial damage. Cardiac troponins T and I are highly specific markers for cardiac damage, rise parallel to CK-MB and remain elevated longer, up to 5 to 9 days. They are useful for detection of less severe degrees of myocardial injury, which may occur in several patients with unstable angina who are at higher risk of cardiac events. Recent studies suggest that cardiac troponins have good diagnostic performance and prognostic value in the heterogeneous population of patients seen in the Emergency Department with acute chest pain. Despite these promising data, several analytical and interpretative problems in the routine use of cardiac troponins must be solved. Incremental value of echocardiography in acute chest pain patients is still uncertain. Echocardiography can be recommended as an adjunctive test if readily available during acute chest pain or prolonged pain, especially in patients without previous myocardial infarction. Rest myocardial radionuclide imaging has been studied in the emergency department setting and although the overall diagnostic performance and prognostic value of sestamibi has been found to be promising, it is not suitable, in our country, for extensive clinical use. ECG exercise stress test in the emergency department population has been shown to be safe and it has a good negative predictive value for cardiac events. It should be recommended that any institution identify specific and shared protocol and strategies for management of patients with chest pain. These should include basal clinical evaluation, serial ECG and the use of specific and sensitive myocardial markers. Adjunctive tests, such as echocardiography, nuclear studies and stress tests should be employed when indicated taking into account local facilities. abstract_id: PUBMED:11206016 An ischemia-guided approach for risk stratification in patients with acute coronary syndromes. The optimal management approach for patients with non-ST-segment elevation acute coronary syndromes continues to be an issue of debate. An ischemia-guided strategy appears to be effective as an alternative to either a very conservative "wait-and-see" approach or a very aggressive routine revascularization approach. The need for another approach is supported by the lack of conclusive evidence-based results favoring an early routine invasive treatment strategy. In the Thrombolysis in Myocardial Infarction (TIMI) IIIB trial, there were no differences in the incidence of death or myocardial infarction (MI) between patients treated with an early invasive approach and those treated with a conservative approach to treatment. 
Significantly worse outcomes were shown in patients assigned to an early invasive strategy in the Veterans Affairs Non-Q-Wave Infarction Strategies in Hospital (VANQWISH) trial at 1-year follow-up (111 clinical events in the invasive group vs 85 in the conservative group; p = 0.05). Registry information, including that from the Organization to Assess Strategies for Ischemic Syndromes (OASIS), which included approximately 8,000 patients with unstable angina or suspected MI, has even suggested an excess hazard with a routine invasive approach. Patients with non-ST-segment elevation MI observed in the Global Use of Strategies to Open Occluded Coronary Arteries in Acute Coronary Syndromes (GUSTO)-IIB and Platelet IIb/IIIa in Unstable Angina: Receptor Suppression Using Integrilin Therapy (PURSUIT) trials also fared better with an ischemia-guided strategy. Even the recent FRagmin and Fast Revascularization during InStability in Coronary artery disease (FRISC II) trial investigators had to be very selective relative to eliminating high-risk patients in the first week and treating with intense anti-ischemic therapy and 5-7 days of low-molecular-weight heparin therapy to show an advantage for assigned revascularization. A careful clinical evaluation with attention to early risk stratification is essential in the ischemia-guided approach. The Braunwald classification for unstable angina helps identify independent clinical predictors of a poor outcome; high risk is clearly associated with Braunwald class III and type C. Electrocardiographic and biochemical markers for myocardial necrosis (cardiac troponin T or I) are important tools for assessing the presence and degree of ischemia and associated risk for adverse outcome. Noninvasive evaluation of left ventricular ejection fraction is essential for identifying those at high risk due to impaired contractile function. When these conventional markers do not provide conclusive information, noninvasive stress testing is most helpful to further identify those at highest risk for revascularization. Answer: The invasive approach for treating unstable angina in octogenarians appears to be both feasible and effective. A study that evaluated the efficacy and safety of an invasive approach in octogenarians with unstable angina found that percutaneous revascularization was performed in 74% of the patients, with a high stent implantation rate of 79%. The procedural success rate was 92%, and during hospitalization, 96% of patients were free of death and infarction. At the end of the follow-up period, which averaged 24 months, 78.4% of the patients were alive, with most of them being asymptomatic. These results suggest that stent implantation in octogenarians can be successful and can lead to a high survival rate free of death and infarction in the first year (PUBMED:11412773). Furthermore, other studies have shown that outpatient coronary angioplasty is feasible and safe, which could also be relevant for octogenarian patients. One study randomized patients to either transradial or transfemoral percutaneous transluminal coronary angioplasty (PTCA) and found that 58% of patients could be discharged 4-6 hours after the procedure, with a low incidence of adverse events (PUBMED:15789393). Another study demonstrated the feasibility of a very distal transradial approach (VITRO) for coronary interventions, which could be an alternative access site in emergencies when conventional radial access cannot be employed (PUBMED:31043983). 
In summary, the evidence suggests that an invasive approach, including percutaneous revascularization and stent implantation, is a viable and effective option for managing unstable angina in octogenarian patients, with good procedural success and survival rates. Outpatient procedures and alternative access sites like the distal radial approach may also be considered, depending on the patient's condition and the availability of resources.
Instruction: Is it Trauma- or Fantasy-based? Abstracts: abstract_id: PUBMED:8804517 Unconscious fantasy in psychotherapy. Recent decades have seen the growth of an extensive literature reassessing the nature and role of unconscious fantasy. While this literature represents a wide range of psychodynamic theories, some consistent themes have emerged: A comprehension of unconscious fantasy as a pervasive molder of all perception and meaning; a recognition of the stabilizing, anxiety-damping role of unconscious fantasy; a valuation of symptoms as the maladaptive intrusion of unconscious fantasy into everyday life; and a view of progress in psychotherapy as entailing modifications in the structure and/or penetrance of dominant fantasies. The broad-based interest in unconscious fantasy reflects a wide consensus that a focus on unconscious fantasy promises to establish a more solid epistemological foundation for psychodynamic theory, to relate more immediately psychodynamic theory to clinical observation, including providing more incisive and comprehensive insights into motivation, and to yield a more profound understanding of the nature of the therapeutic process and the dynamics of psychological growth. While there is some consensus on the ineluctable molding of perception and memory by unconscious fantasy, questions remain concerning the relationship of an event to its psychological integration, most notably questions regarding to what extent particular traumas yield distinctive patterns of fantasy and pathology. Further answers to these questions will provide deeper insights into how fantasy shapes our path through the world and how that path may be redirected by psychotherapy. abstract_id: PUBMED:32583866 Psychopathological significance of fantasy proneness as measured by the Creative Experiences Questionnaire: a meta-analysis Background: Fantasy proneness has been linked to dissociative symptoms and adverse childhood experiences. AIM: To review and meta-analyze the empirical literature on fantasy proneness (as indexed by the Creative Experiences Questionnaire) that appeared between 2000 and 2018. METHOD: We searched Google Scholar to identify relevant papers and subjected them to inspection. In doing so, we specifically looked at correlations between fantasy proneness, on the one hand, and dissociative symptoms, magical ideation, depression, anxiety, trauma, and susceptibility to false memories, on the other hand. Correlations were weighted using the Hunter-Schmidt approach. RESULTS: We identified 97 studies that together included 16,999 research participants. Fantasy proneness strongly correlated with both dissociative symptoms and magical ideation. The association of fantasy proneness with depression and anomalistic experiences was moderate. Its association with trauma and anxiety was small, albeit significant, and much the same was true for false memories. CONCLUSION: The psychopathological relevance of fantasy proneness is broader than just dissociative symptoms. The modest correlation between trauma and fantasy proneness suggests that, apart from trauma, other causal antecedents of fantasy proneness exist. What fantasy proneness, dissociation, and magical ideation have in common is that they are manifestations of apophenia, i.e., the tendency to overinterpret reality. abstract_id: PUBMED:24678953 Is the dissociative adult suggestible? A test of the trauma and fantasy models of dissociation.
Psychologists have long assumed a connection between traumatic experience and psychological dissociation. This hypothesis is referred to as the trauma model of dissociation. In the past decade, a series of papers have been published that question this traditional causal link, proposing an alternative fantasy model of dissociation. In the present research, the relationship among dissociation, suggestibility, and fantasy proneness was examined. Suggestibility was measured through the Gudjonsson Scale of Interrogative Suggestibility (GSS) as well as an autobiographically based version of this measure based on the events of September 11, 2001. Consistent with prior research and with the trauma model, dissociation correlated positively with trauma severity (r = .32, p < .01) and fantasy proneness (r = .60, p < .01). Inconsistent with the fantasy model, dissociation did not correlate with the neutral form of the GSS and correlated negatively (r = -.24, p < .05) with the trauma-focused form of this suggestibility measure. Although some participants did become quite emotional during the procedure, the risk/benefit ratio was perceived by almost all participants to be positive, with more reactive individuals evaluating the procedure more positively. The results consistently support the trauma model of dissociation and fail to support the fantasy model of dissociation. abstract_id: PUBMED:7429734 The psychodynamics of a beating fantasy. A case history of a woman is presented whose primary conscious masturbatory fantasy from age three until age forty was of her father beating her. Late in her lengthy treatment, the conscious fantasy was seen to screen off the underlying, unconscious fantasy of being beaten by her mother. Early traumata, surgery, and primal scene exposures led to profound separation-individuation problems, heightened preoedipal and oedipal castration anxiety, and a faulty genital schematization. Compulsive masturbation, duplicating genital arousal initiated by the mother's early enema assaults and later by primal scene exposures, allowed neutralization of the intense aggression aroused by separation and castration traumata. Preoedipal fantasies screened off from conscious awareness by the oedipal ones were influenced by the primal scene exposures. These consisted of the ideas of being hurt (castrated) and beaten by the father. In the lengthy therapy with this patient an erotized transference developed in which, again, a libidinal gratification was used defensively to maintain the threatened object relationship. The interaction between beating fantasies and reality events is also discussed. abstract_id: PUBMED:22409505 Evaluation of the evidence for the trauma and fantasy models of dissociation. The relationship between a reported history of trauma and dissociative symptoms has been explained in 2 conflicting ways. Pathological dissociation has been conceptualized as a response to antecedent traumatic stress and/or severe psychological adversity. Others have proposed that dissociation makes individuals prone to fantasy, thereby engendering confabulated memories of trauma. We examine data related to a series of 8 contrasting predictions based on the trauma model and the fantasy model of dissociation. In keeping with the trauma model, the relationship between trauma and dissociation was consistent and moderate in strength, and remained significant when objective measures of trauma were used.
Dissociation was temporally related to trauma and trauma treatment, and was predictive of trauma history when fantasy proneness was controlled. Dissociation was not reliably associated with suggestibility, nor was there evidence for the fantasy model prediction of greater inaccuracy of recovered memory. Instead, dissociation was positively related to a history of trauma memory recovery and negatively related to the more general measures of narrative cohesion. Research also supports the trauma theory of dissociation as a regulatory response to fear or other extreme emotion with measurable biological correlates. We conclude, on the basis of evidence related to these 8 predictions, that there is strong empirical support for the hypothesis that trauma causes dissociation, and that dissociation remains related to trauma history when fantasy proneness is controlled. We find little support for the hypothesis that the dissociation-trauma relationship is due to fantasy proneness or confabulated memories of trauma. abstract_id: PUBMED:12901171 A modest proposal: (re)defining unconscious fantasy. The author argues that unconscious fantasy, properly defined, necessarily represents the three-dimensional intersection of wishful thinking (fantasy), veridical perception of the environment (reality), and the naive cognition of childhood. It is proposed that, although attachment theory developed out of the intent to capture the unalloyed reality of dyadic experience, that experience is inextricably entangled with the other two components, wishes and naive cognition, and furthermore, that the behavior of children in the attachment paradigm can only be accounted for by positing the existence of underlying unconscious fantasies. In making these arguments, the author also addresses the development of unconscious fantasies and their relationship to compromise formations and trauma. abstract_id: PUBMED:11341065 The 'Medea fantasy'. An unconscious determinant of psychogenic sterility. The author begins by pointing out that myths have always been powerful vehicles for the projection of ubiquitous unconscious fantasies. Having noted the importance of certain male protagonists of the Greek myths in Freud's theories, she observes that their female counterparts exert an equal fascination and suggests that the Medea myth as recounted by Euripides can be invoked to elucidate a central unconscious fantasy found to underlie the psychogenic frigidity and sterility of several of her female patients. The manifestation of this 'Medea fantasy' is illustrated by a clinical account in which a dream is analysed. The author next summarises the Medea story as told by Euripides and attempts a psychoanalytic interpretation of it. She draws attention to features of the 'unconscious truth' inherent in the myth that were shared by all the members of her group of patients. A case history then shows how the progressive understanding and working through of the Medea fantasy led to a change in the analysand's experience of femininity and enabled her to have children. It is postulated that both early infantile sexual fantasies and repressed memories of early object-relations traumas such as maternal depression combine with ubiquitous bodily fantasies to produce the unconscious Medea fantasy. abstract_id: PUBMED:25762692 Unconscious fantasy as a special class of mental representation: a contribution to a model of mind. 
Philosophers of mind and cognitive psychologists have proposed that "mind" consists of myriad mental representations, namely, conscious and unconscious representations of belief/desire intentions. It is argued here that unconscious fantasies constitute a subset of the domain of mental representations, those concerned with conflicting wishes, affects, and defensive maneuvers. This proposal anchors the unconscious fantasy construct in a model of mind that accords with contemporary academic views in cognitive and developmental psychology and philosophy of mind, thus allowing psychoanalysts to enter into dialogue with those disciplines. Given this formulation, unconscious fantasy might well serve as a theoretical construct that applies to a large group of theories that share certain criteria regarding mentation. An analyst would then be at liberty to commingle insights from a menu of different theories without committing metatheoretical malpractice, resulting in a principled version of theoretical pluralism. Published case material from Kleinian, close process monitoring, and self psychological perspectives demonstrates how this redefined unconscious fantasy construct can encompass two major types of interventions that analysts make: content and process interpretations. abstract_id: PUBMED:8182250 The reconstruction of early childhood trauma: fantasy, reality, and verification. The contrasting perspectives of historical (actual, material) truth as opposed to narrative (intrapsychic) truth in psychoanalytic reconstruction parallel the controversial shift in Freud's thinking from a traumatic theory of neurosogenesis to a greater emphasis on fantasy and psychic reality. Freud and analysts since have sought to tease apart psychic and material reality as they dovetail in the transference. However, published cases in which a patient's memory of a childhood trauma turned out to be a verifiable fantasy or false belief and not an actuality appear to be remarkably scarce. This paper describes a case in which the patient indicated that in early childhood she had been subjected to antimasturbatory measures and finally to clitoridectomy that had ongoing traumatic effects. The case strikingly illustrates how a false memory of shock trauma may represent effects of strain traumatization interacting with fantasy and the state of drives and ego development in early childhood. It also highlights issues in the areas of trauma and memory, historical and intrapsychic truth, and the theory and technique of psychoanalytic reconstruction. Although it may not always be possible in the psychoanalytic situation to differentiate actual and plausible past events, in some cases the distinction between these perspectives on truth has clinical relevance that is more than academic. abstract_id: PUBMED:7381170 Paranoia and beating fantasy: an inquiry into the psychoanalytic theory of paranoia. Psychoanalytic contributions to the theory of paranoia are reviewed. As a primary problem underlying paranoia, this paper emphasizes the importance of beating fantasy associated with a fragile personality structure rather than the theory of repressed homosexuality. Castration anxiety and homosexual conflict may precipitate paranoia, but all levels of psychosexual development and their corresponding danger situations contribute to the transformed fantasy of persecution and punishment; early infantile narcissism, aggression, and sadomasochism are especially important. 
Severe preoedipal disturbance has contributed to deficient ego development and oedipal resolution. The failure to negotiate separation-individuation is associated with narcissistic arrest and impaired object relations and testing of reality. Ego integration, identity and sexual identity are unstable. Attempts at undoing, repair, or revenge of traumata and narcissistic injury and the maintenance of ego stability and (narcissistic) object relationship are more fundamental than homosexual object choice or the negative Oedipus complex in the understanding of paranoid psychopathology. Answer: The question of whether dissociation and related psychopathological symptoms are primarily trauma-based or fantasy-based has been a subject of debate in the field of psychology. The trauma model posits that dissociation is a response to traumatic experiences, while the fantasy model suggests that dissociation is related to an individual's propensity for fantasy, which may lead to confabulated memories of trauma. The evidence from the provided abstracts leans towards the trauma model. For instance, research has shown that dissociation correlates positively with trauma severity and fantasy proneness, but does not correlate with suggestibility, which would be expected if the fantasy model were correct (PUBMED:24678953). Additionally, dissociation remains significantly related to trauma history even when fantasy proneness is controlled, suggesting that trauma is a causative factor (PUBMED:22409505). Furthermore, the trauma model is supported by findings that dissociation is temporally related to trauma and trauma treatment, and is predictive of trauma history (PUBMED:22409505). This model also aligns with the understanding that pathological dissociation is a regulatory response to extreme emotion with measurable biological correlates (PUBMED:22409505). On the other hand, the fantasy model's predictions, such as a strong association between dissociation and suggestibility or greater inaccuracy of recovered memories, have not been consistently supported by research (PUBMED:22409505). In conclusion, while there is recognition that fantasy proneness is associated with various psychopathological symptoms and can be influenced by factors other than trauma (PUBMED:32583866), the preponderance of evidence from the provided abstracts supports the trauma model of dissociation over the fantasy model. This suggests that trauma plays a more significant role in the development of dissociative symptoms and related psychopathology than fantasy proneness alone.
Instruction: Can memory and executive functions in patients with obsessive-compulsive disorder predict outcome of cognitive behavioural therapy? Abstracts: abstract_id: PUBMED:26329120 Can memory and executive functions in patients with obsessive-compulsive disorder predict outcome of cognitive behavioural therapy? Background: Most studies find that patients with obsessive-compulsive disorder (OCD) have impaired memory and executive functions. Cognitive behavioural therapy (CBT) is the recommended psychotherapeutic treatment of patients with OCD. We hypothesized that impairments in memory and executive functions would predict poor outcome of CBT. Aim: To investigate whether memory and executive functions in patients with OCD could predict outcome of CBT. Methods: We assessed 39 patients with OCD before CBT with neuropsychological tests of memory and executive functions, the Hamilton Depression Rating Scale, and the Global Assessment of Functioning Scale. Furthermore, we assessed severity of OCD symptoms before and after CBT using the Yale-Brown Obsessive Compulsive Scale. Results: There were no statistically significant differences between recovered (41%) and non-recovered patients (59%) on any neuropsychological test variables or on any baseline demographic variables. Furthermore, change in OCD symptoms was not predicted by neuropsychological test performances or baseline severity of OCD symptoms. The only statistically significant finding was that non-recovered patients had lower social functioning before CBT than recovered patients (p = 0.018, d = 0.797). Conclusions: Memory and executive functions in patients with OCD could not predict outcome of CBT, but level of social functioning may be a predictor of CBT outcome. Some of the main clinical implications are that we cannot use memory and executive functions, or baseline severity of OCD symptoms to determine which patients should be offered CBT. abstract_id: PUBMED:25420427 Are there reliable changes in memory and executive functions after cognitive behavioural therapy in patients with obsessive-compulsive disorder? Introduction: Patients with obsessive-compulsive disorder (OCD) have impaired memory and executive functions, but it is unclear whether these functions improve after cognitive behavioural therapy (CBT) of OCD symptoms. The primary aim of this study was to investigate whether memory and executive functions change after CBT in patients with OCD. Methods: We assessed 39 patients with OCD before and after CBT with neuropsychological tests of memory and executive functions. To correct for practice effects, 39 healthy controls (HCs) were assessed at two parallel time intervals with the neuropsychological tests. Results: There were no changes in memory and executive functions after CBT in patients with OCD when results were corrected for practice effects. Patients performed worse on a test of visuospatial memory and organisational skills (Rey complex figure test [RCFT]) compared to HCs both before and after CBT (ps = .002-.036). Conclusions: The finding of persistent poor RCFT performances indicates that patients with OCD have impaired visuospatial memory and organisational skills that may be trait-related rather than state-dependent. These impairments may need to be considered in treatment. Our findings underline the importance of correcting for practice effects when investigating changes in cognitive functions. 
abstract_id: PUBMED:23607228 Intact short-term memory and impaired executive functions in obsessive compulsive disorder. Background And Purpose: Previous neuropsychological studies produced inconsistent results with tasks tapping short-term verbal and visual-spatial memory and executive functions in obsessive compulsive disorder (OCD). The aim of this study was to investigate the presence of deficits in these cognitive domains. A further goal was to describe the distribution of patients in different impairment ranges for all functions, and clarify the relationship between symptom severity and cognitive impairments. Methods: Thirty patients with OCD (DSM-IV) and 30 healthy volunteers were compared using well-known neuropsychological tasks. We assessed short-term verbal memory with the Digit Span Forward and Digit Span Backward Tasks, short-term visual-spatial memory with the Corsi Block Tapping Task, while we measured the level of executive functions with the Stroop Task and the Wisconsin Card Sorting Test (WCST). Results: Compared with a matched healthy control group, the performance of OCD patients was in the impaired range only in the two executive tasks. We found significant positive correlations between the Y-BOCS (Yale-Brown Obsessive Compulsive Scale) total scores and the number of perseverative responses (r(28) = 0.409, p < 0.05) and perseverative errors (r(28) = 0.385, p < 0.05) in the WCST. Conclusion: Our results gave evidence that executive functions are impaired while short-term memory is intact in OCD. This is in line with the neuropsychological model of OCD, in which deficits of cognitive and behavioral inhibition are responsible for the main cognitive findings of this disorder, most notably the deficit in set shifting and prepotent response inhibition. abstract_id: PUBMED:34512409 Executive Functioning in Chinese Patients With Obsessive Compulsive Disorder. Introduction: Studies have shown that patients with obsessive compulsive disorder (OCD) often perform more poorly than healthy control (HC) participants on cognitive tasks involving executive functions. Most studies, however, have been performed in Western countries and societies, making it uncertain whether impaired executive functions can also be observed among non-Western patients with OCD. To address this gap in the literature, we evaluated several executive functions in Chinese patients with OCD and HCs. Methods: Participants consisted of 46 Chinese patients with OCD (25 men, 21 women), ranging in age from 19 to 56 years, and 45 matched HCs without any self-reported lifetime psychiatric disorder. They all lived in Shanghai or the surrounding area. Five tests of the Cambridge Neuropsychological Test Automated Battery (CANTAB) were used to evaluate several executive functions (response inhibition, spatial working memory, planning, and cognitive flexibility) along with testing basic learning and visual recognition memory. Statistical tests using a Bonferroni-corrected significance level of p = 0.003 were performed to assess overall patient-control group differences in cognitive performance. Additionally, we explored performance differences between patients classified as having either relatively mild symptoms or severe symptoms based on the individual total scores on the Yale-Brown Obsessive-Compulsive Scale. Results: There were no significant performance differences between patients with OCD and HCs in any of the cognitive tests.
Similarly, cognitive performance of patients with relatively mild OCD symptoms did not differ significantly from that of patients with severe symptoms. Conclusions: These results do not seem to support the view that impaired executive functioning represents a basic cognitive and pathophysiological feature of Chinese patients with OCD. However, due to study limitations, additional research is required before this conclusion can be well accepted. abstract_id: PUBMED:24674961 Memory and executive functions in patients with obsessive-compulsive disorder. Objective: We investigated whether patients with obsessive-compulsive disorder have poorer memory and executive functions than healthy controls. Background: The relatively inconsistent previous findings on this question reflect a lack of well-matched control groups, the inclusion of patients with comorbidity, and the use of noncomparable neuropsychological tests to assess memory and executive functions. Methods: We used well-accepted neuropsychological tests of memory and executive functions to assess 42 patients who had obsessive-compulsive disorder without comorbidity, and 42 healthy controls. We matched the patients and controls pairwise by sex, age, and years of education. Results: The patients performed significantly worse than the controls on the Rey Complex Figure Test, which assesses visuospatial memory and organizational skills. This group difference remained after we controlled for age, education, intelligence, and severity of depressive symptoms. Conclusions: The findings indicate that patients with obsessive-compulsive disorder may have impaired visuospatial memory and organizational skills, and these impairments should be considered in treatment. Trial Registration: ClinicalTrials.gov NCT00792038. abstract_id: PUBMED:31339342 Are claims of non-inferiority of Internet and computer-based cognitive-behavioural therapy compared with in-person cognitive-behavioural therapy for adults with anxiety disorders supported by the evidence from head-to-head randomised controlled trials? A systematic review. Objective: This review examines the evidence from head-to-head randomised controlled trials addressing whether the efficacy of cognitive-behavioural therapy for anxiety disorders, obsessive-compulsive disorder and post-traumatic stress disorders in adults delivered by computer or online (computer- and Internet-delivered cognitive-behavioural therapy) is not inferior to in-person cognitive-behavioural therapy for reducing levels of symptoms and producing clinically significant gains at post-treatment and at follow-up. A supplementary aim is to examine the evidence for severity as a moderator of the relative efficacy of computer- and Internet-delivered cognitive-behavioural therapy and in-person cognitive-behavioural therapy. Method: PubMed, PsycINFO, Embase and Cochrane database of randomised trials were searched for randomised controlled trials of cognitive-behavioural therapy for these disorders with at least an in-person cognitive-behavioural therapy and Internet or computer cognitive-behavioural therapy arm. Results: A total of 14 randomised controlled trials (9 Internet, 5 computer) of cognitive-behavioural therapy for social anxiety disorder, panic disorder and specific phobia and 3 reports of effect moderators were included. One study showed a low risk of bias when assessed against risk of bias criteria for non-inferiority trials. The remaining studies were assessed as high or unclear risk of bias. 
One study found that Internet-delivered cognitive-behavioural therapy was superior and non-inferior at post-treatment and follow-up to group in-person cognitive-behavioural therapy for social anxiety disorder. One study of Internet-delivered cognitive-behavioural therapy for panic disorder showed non-inferiority to individual in-person cognitive-behavioural therapy for responder status at post-treatment and one of Internet cognitive-behavioural therapy for panic disorder for symptom severity at follow-up. Other comparisons (22 Internet, 13 computer) and for estimates pooled for Internet cognitive-behavioural therapy for social anxiety disorder, Internet cognitive-behavioural therapy for panic disorder and computer-delivered cognitive-behavioural therapy studies did not support non-inferiority. Evidence of effect moderation by severity and co-morbidity was mixed. Conclusion: There is limited evidence from randomised controlled trials which supports claims that computer- or Internet-delivered cognitive-behavioural therapy for anxiety disorders is not inferior to in-person delivery. Randomised controlled trials properly designed to test non-inferiority are needed before conclusions about the relative benefits of in-person and Internet- and computer-delivered cognitive-behavioural therapy can be made. Prospero: CRD420180961655-6. abstract_id: PUBMED:38083976 A new perspective for evaluating the efficacy of tACS and tDCS in improving executive functions: A combined tES and fNIRS study. Background: Executive function enhancement is considered necessary for improving the quality of life of patients with neurological or psychiatric disorders, such as attention-deficit/hyperactivity disorder, obsessive-compulsive disorder and Alzheimer's disease. Transcranial electrical stimulation (tES) has been shown to have some beneficial effects on executive functioning, but the quantification of these improvements remains controversial. We aimed to explore the potential beneficial effects on executive functioning induced by the use of transcranial alternating current stimulation (tACS)/transcranial direct current stimulation (tDCS) on the right inferior frontal gyrus (IFG) and the accompanying brain function variations in the resting state. Methods: We recruited 229 healthy adults to participate in Experiments 1 (105 participants) and 2 (124 participants). The participants in each experiment were randomly divided into tACS, tDCS, and sham groups. The participants completed cognitive tasks to assess behavior related to three core components of executive functions. Functional near-infrared spectroscopy (fNIRS) was used to monitor the hemodynamic changes in crucial cortical regions in the resting state. Results: Inhibition and cognitive flexibility (excluding working memory) were significantly increased after tACS/tDCS, but there were no significant behavioral differences between the tACS and tDCS groups. fNIRS revealed that tDCS induced decreases in the functional connectivity (increased neural efficiency) of the relevant cortices. Conclusions: Enhancement of executive function was observed after tES, and the beneficial effects of tACS/tDCS may need to be precisely evaluated via brain imaging indicators at rest. tDCS revealed better neural benefits than tACS during the stimulation phase. These findings might provide new insights for selecting intervention methods in future studies and for evaluating the clinical efficacy of tES. 
abstract_id: PUBMED:28881220 Executive function predicts cognitive-behavioral therapy response in childhood obsessive-compulsive disorder. Cognitive-behavioral therapy (CBT) is considered first-line treatment for childhood obsessive-compulsive disorder (OCD). Despite CBT's efficacy, too many children and adolescents do not fully respond to treatment, making the identification of predictors of treatment response highly relevant. Executive functions (EF) have been suggested to constitute such predictors, but studies with pediatric samples are scarce. In the present study, we investigated latent level EF test performance and ratings of daily life EF behavior as predictors of CBT response in pediatric OCD. We further examined the stability of EF from pre-to post-treatment and the association of EF changes with OCD severity change. EF test performance significantly predicted exposure-based CBT outcome. Patients with better EF test performance had significantly elevated risk of non-response relative to patients with poorer performance. Daily life EF behavior in OCD probands improved after treatment relative to controls. The findings suggest that EF performance impacts CBT outcome, and that exposure-based CBT is well-suited for children and adolescents with OCD and poorer EF test performance. This study supports the relevance of EF in CBT for childhood OCD and denotes a possible need for development of enhanced treatments for children and adolescents with OCD and superior EF performance. abstract_id: PUBMED:20703394 Neuropsychological disposition and its impact on the executive functions and cognitive style in patients with obsessive-compulsive disorder. Background: Recent brain imaging and electrophysiological studies have consistently shown dysfunction of the fronto-striatal thalamic pathways in patients with obsessive-compulsive disorder (OCD). Aim: To study the relationship of neuropsychological disposition with the executive functions and cognitive style in patients with OCD. Methods: Twenty OCD patients (14 males, 6 females) and 20 normal control subjects, matched for all relevant variables including age, sex and education, were studied. Neuropsychological disposition was assessed on the Adult Neuropsychological Questionnaire (ANQ), the executive functions were assessed through Wisconsin Card Sorting Test (WCST), and the cognitive style was assessed by employing the Embedded Figure Test (EFT). Results: Subcortical-cerebellar-spinal domain of ANQ was found to be associated with cognitive style and executive functions. Conclusion: The impairment of executive functions and poor activation of specific neurological circuitry in OCD patients affirms the neurobiological basis of the disorder. abstract_id: PUBMED:33890432 The story of memory and executive functions in obsessive-compulsive disorder: a case-control study. Objective: Neuropsychological findings in obsessive-compulsive disorder (OCD) are mainly clustered around the role of memory and executive functions. However, outcomes vary across different OCD populations. In addition, the extent to which each of these factors can distinguish patients with OCD (PwOCD) from healthy individuals remains uncertain and attracts great attention. The present study aims to investigate the above issues. Method: This was a cross-sectional study of 182 individuals (90 PwOCD and 92 matched healthy controls). 
After screening for inclusion and exclusion criteria, the participants were administered neuropsychological tests including the Wechsler Memory Scale-III (WMS-III), the Wisconsin Card Sorting Test (WCST), and the Stroop Color-Word Test (SCWT). Data were analyzed to test the study hypotheses using comparison of means and regression analysis methods. Results: The results showed that PwOCD had poorer performance than the control group in Immediate Memory, General Memory, and Working Memory and also according to response inhibition indexes. The results also showed that General Memory and Reaction Time2 from the SCWT index could be predictive variables for discriminating between PwOCD and controls. Conclusion: The findings of this study support the prior assumptions that PwOCD would have impaired memory dimensions and response inhibition, but did not support worse set-shifting performance. We also present an initial model for the predictive role of these neuropsychological variables in discriminating OCD from healthy individuals and increasing diagnostic accuracy. Answer: Memory and executive functions in patients with obsessive-compulsive disorder (OCD) do not appear to predict the outcome of cognitive behavioural therapy (CBT). A study assessing 39 patients with OCD before CBT with neuropsychological tests of memory and executive functions found no statistically significant differences between recovered and non-recovered patients on any neuropsychological test variables or baseline demographic variables. Furthermore, change in OCD symptoms was not predicted by neuropsychological test performances or baseline severity of OCD symptoms. The only significant finding was that non-recovered patients had lower social functioning before CBT than recovered patients, suggesting that level of social functioning may be a predictor of CBT outcome, rather than memory and executive functions (PUBMED:26329120). Additionally, another study investigating whether memory and executive functions change after CBT in patients with OCD found no changes in these cognitive functions after CBT when results were corrected for practice effects. Patients with OCD performed worse on a test of visuospatial memory and organizational skills compared to healthy controls both before and after CBT, indicating that these impairments may be trait-related rather than state-dependent (PUBMED:25420427). Moreover, a study examining executive functions as predictors of CBT response in childhood OCD found that EF test performance significantly predicted exposure-based CBT outcome, with patients who had better EF test performance being at elevated risk of non-response relative to patients with poorer performance (PUBMED:28881220). In summary, while some studies suggest that certain aspects of executive function may influence CBT outcomes in pediatric OCD, overall, memory and executive functions do not seem to be reliable predictors of CBT outcome in adult patients with OCD (PUBMED:26329120; PUBMED:25420427).
Instruction: Is adherence to drug treatment correlated with health-related quality of life? Abstracts: abstract_id: PUBMED:30724586 Health beliefs, attitudes, and health-related quality of life in persons with fibromyalgia: mediating role of treatment adherence. Fibromyalgia is a chronic illness characterized by pain and fatigue. Persons with fibromyalgia experience increased risk for poor mental and physical health-related quality of life, which may be dependent on multiple factors, including health beliefs, such as confidence in physicians and the health-care system, and health behaviors, such as treatment adherence. Respondents with fibromyalgia (n = 409) were recruited nationally, via support organizations, and completed self-report measures: Multidimensional Health Profile - Health Functioning Index (MHP-H), Short-Form-36 Health Survey (SF-36v2), and Medical Outcomes Study (MOS) Measure of Patient Adherence - General Adherence Items. In mediation models, belief in the healthcare system and health-care personnel, and health efficacy exerted an indirect effect through treatment adherence on mental and physical quality of life. Adaptive health beliefs and attitudes were related to greater treatment adherence and, in turn, to a better quality of life. Maladaptive health beliefs and mistrusting attitudes about physician-level and systemic-level healthcare provision are negatively related to both treatment adherence and consequent physical and mental health-related quality of life in persons with fibromyalgia. Future randomized controlled trials are needed to determine if therapeutic strategies to alter health values might improve adherence and self-rated health. abstract_id: PUBMED:14516172 Is adherence to drug treatment correlated with health-related quality of life? Background: Adherence to drug treatment and health-related quality of life (HRQL) are two distinct concepts. Generally one would expect a positive relationship between the two. Objectives: The purpose of this study was to assess the relationship between adherence and HRQL. Methods: HRQL was measured using the physical and mental summary measures of the RAND-12 (PHC-12, MHC-12), the SF-12 (PCS-12, MCS-12), HUI-2 and HUI-3. Adherence was assessed using Morisky's instrument. Three longitudinal datasets were used. One dataset included 100 hypertensive patients. Another dataset covered 199 high risk community-dwelling individuals. The third dataset consisted of 365 elderly patients. Spearman's correlation coefficients were used to assess association. Subgroup analyses by type of medication and inter-temporal analyses were also performed. Results: Correlation between adherence and PHC-12 ranged from 0.08 (p = 0.26) to 0.22 (p < 0.01). Correlations between adherence and MHC-12 ranged from 0.11 (p = 0.11) to 0.15 (p < 0.01). Similar results were observed using HUI-2, HUI-3, and SF-12 as well as by type of medication and in the lagged analyses. Conclusions: Correlations between HRQL and adherence were positive but typically weak or negligible in magnitude. abstract_id: PUBMED:29665241 The effects of medication adherence and health literacy on health-related quality of life in older people with hypertension. Aim And Objective: This study investigated the effects of medication adherence and health literacy on health-related quality of life in vulnerable older people with hypertension.
Background: Health literacy is particularly critical for providing accurate information regarding correct medication intake to improve medication adherence. Additionally, health-related quality of life is directly related to impairment from chronic disease. Design: A descriptive, cross-sectional study design was used. The participants were 160 low-income older people with hypertension registered in 16 public health centres in Busan, South Korea. They had received "visiting nursing services" for at least 6 months until the latest date and had belonged to the priority group of visiting nursing services. Methods: The Morisky Medication Adherence Scale, the Newest Vital Signs and EuroQol-5 Dimensions were used to assess medication adherence, health literacy and health-related quality of life, respectively. Results: Medication adherence and health literacy were significantly associated with health-related quality of life in vulnerable older people with hypertension, although exercise and subjective health were more significant factors affecting health-related quality of life than medical adherence and health literacy. Level of education, monthly income and employment status were not associated with health-related quality of life. Conclusions: To effectively promote health-related quality of life in this population, medication adherence and health literacy of patients should be considered when developing health interventions, including subjective health and exercise. Implications For Practice: To effectively promote health-related quality of life in vulnerable older people, medication adherence and health literacy of patients should be assessed, and patient-centred intervention strategies that consider their individual differences should be developed. abstract_id: PUBMED:19920952 Association between perceived treatment adherence and health-related quality of life in children with juvenile idiopathic arthritis: perspectives of both parents and children. Objective: The aim of the study was to examine the relationship between perceived treatment adherence and health-related quality of life (HRQOL) in children with arthritis, from both parent and child perspectives. Methods: Patients and their parents, who attended the juvenile idiopathic arthritis (JIA) clinic at the Montreal Children's Hospital, completed the Juvenile Arthritis Quality of Life Questionnaire (JAQQ), and either the Child Adherence Report Questionnaire (CARQ) or the Parent Adherence Report Questionnaire (PARQ). Linear regression models examined the associations between perceived treatment adherence and HRQOL while adjusting for age, severity, duration of the disease, and complexity of the medical regimen. Results: Perceived adherence to medications was associated with a better HRQOL total score from the children's perspective (beta = -0.02, 95% confidence interval [CI] = -0.03, -0.004), particularly with respect to gross motor (beta = -0.03, 95% CI = -0.05, -0.01) and psychosocial functions (beta = -0.03, 95% CI = -0.04, -0.01). According to parents, perceived adherence to exercises was associated with fewer symptoms (beta = -0.01, 95% CI = -0.03, 0.000) and better psychosocial functioning (beta = -0.01, 95% CI = -0.03, -0.002). Conclusion: Perceived adherence to medications is associated with an improved HRQOL according to children. According to parents, adherence to exercises may be associated with an improved HRQOL. 
abstract_id: PUBMED:33427481 Treatment adherence mediates the relationship between HIV-related stigma and health-related quality of life. There is a strong association between HIV-related stigma (HS) and health-related quality of life among people living with HIV/AIDS, yet few studies, to date, have examined mediators of this link. This study examined the mediating role of antiretroviral medication adherence (MA) on the association between HS and HRQoL. Respondents were 969 people living with HIV/AIDS (PLWH) (628 males and 341 females, mean age = 35.55 years), conveniently drawn from four hospital facilities in Enugu and Ebonyi States of Nigeria. Data were collected using HIV Stigma Scale, Morisky Medical Adherence Scale, and Patient Reported Outcome Quality of Life-HIV. Data were analysed using the Hayes PROCESS macro for SPSS which uses a regression-based, path-analytical framework. Results showed that HS across dimensions (personalised stigma, disclosure concern, concern about people's attitude and negative self-image) were negatively associated with MA (r = -.16; p < .001, r = -.13; p < .00, r = -.22; p < .001 and r = -.21; p < .001, respectively) and with HRQoL. MA was positively associated with HRQoL (r = .24; p < .001). MA mediated the HS-HRQoL association. These findings suggest that stigma associated with HIV/AIDS is associated with poor adherence to active antiretroviral therapy (ART) treatment regimen, which can result in poor health and treatment outcome among PLWH. The findings may be helpful in improving HRQoL among PLWH. abstract_id: PUBMED:34392415 Predictors of financial toxicity and its associations with health-related quality of life and treatment non-adherence in Turkish cancer patients. Purpose: This study aims to determine the financial toxicity (FT) level in cancer patients, identify the risk factors associated with this level, and reveal the effect of this level on patient outcomes (health-related quality of life (HRQoL) and treatment non-adherence). Methods: The data of 316 cancer patients, who were receiving inpatient treatment in an oncology hospital affiliated to a public university in Ankara, Turkey, were ≥ 18 years old, and were receiving chemotherapy for at least 3 months, were evaluated. The data were collected through a face-to-face interview. FT was measured with the COmprehensive Score for financial Toxicity (COST) Measure (v2), HRQoL was measured with the Functional Assessment of Cancer Therapy-General (FACT-G) Scale (v4), and treatment adherence was measured using a questionnaire created by conducting a literature review. Results: Patients were found to experience FT above the moderate level (mean ± SD, 21.85 ± 12.02; median value, 24.0). Younger age, being married, low education level, low monthly household income, and receiving social/economic support for treatment were revealed to be the determinants of high FT. Moreover, as the FT score decreased (i.e., as the felt FT level increased), the HRQoL was observed to decrease, and treatment non-adherence increased. Conclusion: The results indicate that the FT problem is also valid for cancer patients in Turkey despite the system of general health insurance. The FT was found to result from socio-economic characteristics rather than the disease and treatment-related characteristics. The significant associations revealed between FT and patient outcomes emphasize the importance of reducing the FT in cancer patients.
abstract_id: PUBMED:29300427 Relationship between Adherence to the Mediterranean Diet and Health-Related Quality of Life and Life Satisfaction among Older Adults. Objectives: Evaluate the relationship between adherence to the Mediterranean diet and health-related quality of life and degree of life satisfaction among older adults. Design And Setting: Cross-sectional descriptive study. Participants: A total of 351 people older than 60 years participated in the study. Measurements: The Mediterranean Diet Score (MDS) was calculated to assess the degree of adherence to the MD. MD adherence was related to health-related quality of life using the Short Form Health Survey (SF-12) questionnaire, to life satisfaction using the Satisfaction with Life Scale (SWLS), and to sociodemographic, clinical and lifestyle variables. Multiple logistic regression models were used to analyse this relationship. Results: Mediterranean diet adherence was related to health-related quality of life. Participants with better adherence to the MD were more physically active (p=0.01) and had better health-related quality of life (p<0.05) and lower consumption of alcoholic beverages (p=0.04). The age-adjusted model showed a significant association between the MD and mental function for both sexes and with physical function only for men. The fully adjusted model showed a direct relationship between the MD and life satisfaction of women (p>0.05) but not for that of men (p=0.31). Conclusions: The adherence to the MD is directly associated with the self-perceived physical and mental function of both sexes and with the life satisfaction of women. Further studies in older adult populations should be performed to obtain conclusive results on the MD effect on health-related quality of life, including wellness indicators. abstract_id: PUBMED:28867260 Screening for ADHD in adults with cystic fibrosis: Prevalence, health-related quality of life, and adherence. Background: International guidelines recommend depression and anxiety screening in individuals with cystic fibrosis (CF), but Attention-Deficit Hyperactivity Disorder (ADHD) remains understudied. Methods: Adults with CF (n=53) were screened using the Adult ADHD Self-Report Scale-v1.1 Symptom Checklist (ASRS-v1.1), Cystic Fibrosis Questionnaire-Revised (CFQ-R), and a self-report measure of treatment adherence. Results: Elevated ADHD symptoms on the ASRS-v1.1 screener were reported by 15% of participants. Self-reported adherence, Body Mass Index in kg/m2 (BMI), and Forced Expiratory Volume in 1 Second, Percent Predicted (FEV1%pred) did not differ between participants with vs. without elevated ADHD scores. Three CFQ-R scales, Physical Functioning, Role Functioning, and Respiratory Symptoms, were significantly lower in participants with elevated ADHD screens (unadjusted p<0.05). This difference remained statistically significant for the Role Functioning and Respiratory Symptoms scales following correction for multiple comparisons. Conclusions: The highly specific screening tool ASRS-v1.1 can ascertain previously undetected ADHD symptoms in adults with CF. ADHD was substantially more prevalent than expected in this population. Elevated ASRS-v1.1 screens correlated with poorer Health-Related Quality of Life (HRQoL) in some domains, but not with BMI, FEV1%pred, or self-reported CF treatment adherence. Additional research will elucidate the impact of ADHD and its treatment on HRQoL, CF self-care and health outcomes.
abstract_id: PUBMED:35682087 Treatment Adherence and Health-Related Quality of Life in Patients with Hemophilia in Hong Kong. Background: This study aims to identify factors affecting health-related quality of life (HRQoL) in Chinese patients with hemophilia in Hong Kong, and to examine the association between treatment adherence and HRQoL outcomes. Methods: Patients with hemophilia A or B from a non-governmental organization reported their HRQoL and treatment adherence to prophylactic therapy using validated tools. Univariate tests and multivariable regression analysis were used to compare differences in outcomes across clinically relevant subgroups. Results: Fifty-six patients were recruited (mean age 30.4 [17.4] years; majority hemophilia A: 75%; moderate-to-severe severity: 88%). Patients who received prophylactic treatment reported fewer work/school problems (25.8 [18.9] versus 51.5 [26.3]; p = 0.001) than those who received on-demand therapy. The multivariable model showed that older age (B = 0.42, 95% CI = 0.093−0.75) and living in public housing (B = 10.24, 95% CI = 0.70−19.77) were associated with worse HRQoL. Older age was associated with treatment non-adherence (r = 0.66, p < 0.0001). Patients with poor adherence tended to report worse functioning in sports/leisure (r = 0.31, p = 0.033). Conclusions: Our results suggest that patients who were older, had lower education attainment and received on-demand treatment had poorer perception of their health. Improving adherence may lead to better HRQoL. Future work includes evaluating the occupational needs prospectively in this population. abstract_id: PUBMED:33928481 Association between reported medication adherence and health-related quality of life in adult patients with haemophilia. Background Medication adherence is an important issue, not just health-related, for patients with haemophilia. Poor medication adherence to long-term therapies limits the potential of effective treatments to improve patients' health-related quality of life. Objective The aim of this study was to investigate the association of reported medication adherence and health-related quality of life in patients with haemophilia. Setting Data were collected from patients at University Hospital Centre Zagreb, Croatia and at University Medical Centre Ljubljana, Slovenia. Method Adult male patients with severe or moderate haemophilia receiving prophylactic treatment were eligible for the study. Main outcome measure Implementation phase of medication adherence was assessed with the self-reported VERITAS-Pro instrument and health-related quality of life with SF-36v2. Results A total of 82 participants were included in the study (median age was 44.50, range 18-73 years). The majority of our participants reported being adherent to medication (83%). Participants showed better health in the mental health domains and Mental Component Summary than in the physical health domains and Physical Component Summary. After controlling for demographic, socioeconomic and clinical predictors, better reported medication adherence explained an additional 4-6% of better health variance in Bodily Pain and Social Functioning domains and Mental Component Summary. Conclusion We found that reported medication adherence can contribute to better health-related quality of life in patients with haemophilia. Since life with a chronic condition is demanding, it is an important finding that medication adherence to replacement therapy can improve life conditions for patients with haemophilia.
Answer: Yes, adherence to drug treatment is correlated with health-related quality of life (HRQoL). Several studies have found a positive relationship between treatment adherence and HRQoL across different patient populations. For instance, in persons with fibromyalgia, adaptive health beliefs and attitudes were related to greater treatment adherence and, in turn, to a better quality of life (PUBMED:30724586). Similarly, in older people with hypertension, medication adherence and health literacy were significantly associated with HRQoL (PUBMED:29665241). In children with juvenile idiopathic arthritis, perceived treatment adherence was associated with a better HRQoL from both the children's and parents' perspectives (PUBMED:19920952). Among people living with HIV/AIDS, antiretroviral medication adherence mediated the relationship between HIV-related stigma and HRQoL (PUBMED:33427481). Furthermore, in Turkish cancer patients, as financial toxicity increased, HRQoL decreased, and treatment non-adherence increased, suggesting that adherence to treatment is associated with better HRQoL outcomes (PUBMED:34392415). The relationship between adherence to the Mediterranean diet, which can be considered a form of treatment adherence in the context of lifestyle interventions, and HRQoL was also observed, with better adherence being associated with better self-perceived physical and mental function (PUBMED:29300427). In patients with hemophilia, those who were more adherent to prophylactic therapy reported fewer work/school problems, indicating a better HRQoL (PUBMED:35682087). Additionally, in adult patients with haemophilia, better reported medication adherence was associated with better HRQoL (PUBMED:33928481). However, it is important to note that while the correlations between HRQoL and adherence were positive, they were typically weak or negligible in magnitude in some studies (PUBMED:14516172). This suggests that while there is a correlation, the strength of the relationship can vary and may be influenced by other factors. Overall, the evidence supports the notion that adherence to drug treatment is correlated with HRQoL, but the extent of this relationship can differ depending on the specific conditions and populations studied.
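One of the abstracts above (PUBMED:14516172) quantifies the adherence-HRQoL relationship with Spearman's correlation coefficients. Below is a minimal, illustrative sketch of that kind of analysis in Python; the variable names and scores are hypothetical and are not taken from any of the cited studies.

```python
# Illustrative only: synthetic adherence and HRQoL scores, not study data.
from scipy.stats import spearmanr

# Hypothetical per-patient values: a Morisky-style adherence score (0-4)
# and a physical health composite (higher = better HRQoL).
adherence = [4, 3, 4, 2, 1, 4, 3, 2, 4, 3, 1, 4, 2, 3, 4]
phc_12    = [52, 48, 55, 40, 38, 50, 47, 41, 58, 45, 36, 53, 42, 49, 51]

rho, p_value = spearmanr(adherence, phc_12)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A small positive rho with a marginal p-value would mirror the weak,
# often negligible correlations reported in PUBMED:14516172.
```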
Instruction: Straight versus modiolar hugging electrodes: does one perform better than the other? Abstracts: abstract_id: PUBMED:29102129 An overview of cochlear implant electrode array designs. Cochlear implant electrode arrays are designed with specific characteristics that allow for the preservation of intra-cochlear structures during the insertion process, as well as during explantation. Straight lateral wall (LW) electrode arrays and pre-curved modiolar hugging (MH) electrode arrays are the two types that are commercially available. There is also a third type of electrode array, called the mid-scala (MS) array, which is positioned in the middle of the scala tympani (ST) and is usually considered an MH type of electrode. Different lengths of straight LW electrode arrays are currently available which allow for insertion across a range of different sized cochleae; however, due to manufacturing limitations, pre-curved MH electrodes are generally only available to cover the basal turn of the cochlea, while the spiral ganglion cells are distributed in the Rosenthal's canal that extends into 1.75 turns of the cochlea. Both straight LW and pre-curved MH electrodes can cause a certain degree of intra-cochlear trauma, but pre-curved MH electrodes tend to deviate into the scala vestibuli from the scala tympani more often than the straight LW electrodes, resulting in damage to the osseous spiral lamina/spiral ligament which could initiate new bone formation and eventually affect the cochlear implant users' hearing performance. Structural damage to the cochlea could also affect the vestibular function. With pre-curved MH electrodes, higher degrees of trauma are related to the fixed curling geometry of the electrode in relation to the variable coiling pattern of individual cochleae, the orientation of the electrode contacts in relation to the modiolus wall, and how effectively the stylet was handled by the surgeon during the procedure. Wire management, metal density, and the shore hardness of the silicone elastomer all contribute to the stiffness/flexibility of the electrode. It is important to acknowledge the impact of bringing the stimulating contacts closer to the modiolus wall with an MH electrode type in terms of the resultant damage to intra-cochlear structures. The presence of malformed cochleae should be identified and appropriate electrodes should be chosen for each specific cochlea, irrespective of the cochlear implant brand. In order to utilize drug therapy, the cochlea should be free from any trauma. abstract_id: PUBMED:34718854 Tight modiolar proximity and feasibility of slim modiolar cochlear implant electrode array insertion in diverse etiologies of hearing loss. Purpose: To report on our experience with the slim modiolar electrode (SME), especially focusing on a wide range of etiologies including inner ear anomalies, tumors, ossifications, and even revision surgeries. Methods: All the cochlear implantation cases performed from June 2018 to September 2019 by a single surgeon were prospectively recruited. The molecular/radiological etiology of hearing loss, intraoperative outcomes, and radiographic studies of cases where the SME was implanted were reviewed to evaluate the compatibility of the SME for the wide range of etiologies. For cases where the SME replaced the other electrode as a revision, audiologic assessment was also made. Results: Among the 99 ears implanted during the study period, the SME was successfully implanted in 86 ears.
These SME cases comprised inner ear anomaly/cochlear nerve deficiency (n = 21) including cochlear hypoplasia type IV with the modiolus, intracochlear schwannoma (n = 1), far advanced otosclerosis (n = 1) and 7 revision cases. The SME was successfully used in 7 revision surgeries to replace the existing electrode. Shorter spiral diameter and decreased intracochlear position index for SME were found compared with their previous electrodes. Four out of the 6 patients who received revision implantation showed better speech perception after their surgeries. Conclusion: The SME can be implanted in any case unless the integrity of the modiolus is totally compromised. Due to its slim design and tight modiolar-hugging feature, good functional outcome can also be anticipated. Additionally, it is suitable for revision surgeries, possibly allowing better hearing outcomes which may be attributed to its closer proximity to the modiolus. abstract_id: PUBMED:26863644 Dispersed Hydrogel Actuator for Modiolar Hugging Cochlear Implant Electrode Arrays. Objective: Cochlear implants are used in patients with profound deafness due to degeneration of hair cells inside the cochlea. As an improvement for commercially available CIs with straight electrode arrays, it is desired that the electrode array hugs toward the nerve cells located at the central axis of the cochlea in order to reduce the stimulation distance. Therefore, we present a hydrogel-based actuated electrode shaft, which should bend itself when exposed to saline solution (simulating the intracochlear liquid perilymph). Methods: In vitro tests with the electrode arrays were performed in a cochlea model. Different quantities and grain sizes of the hydrogel-building polymer (polyacrylamide) were used to study the self-bending effect. Furthermore, material interfaces between the components were evaluated with scanning electron microscopy, fluorescence microscopy, and an adapted tape test according to DIN EN ISO 2409. Results: Self-bending of the electrode array was observed in the cochlea model and a maximum number of 3.1 turns was achieved. No delamination between the components could be observed. Conclusion: After insertion, we expect that the electrode shaft moves into a patient-individual perimodiolar position without losing its functionality because of a delamination of the components. Significance: A modiolar hugging electrode design is presented, which may improve hearing restoration with cochlear implants due to an easier insertion technique and a patient-individualized hugging to the modiolus. abstract_id: PUBMED:34399646 Speech Recognition Outcomes in Adults With Slim Straight and Slim Modiolar Cochlear Implant Electrode Arrays. Objective: To compare differences in audiologic outcomes between slim modiolar electrode (SME) CI532 and slim lateral wall electrode (SLW) CI522 cochlear implant recipients. Study Design: Retrospective cohort study. Setting: Tertiary academic hospital. Methods: Comparison of postoperative AzBio sentence scores in quiet (percentage correct) in adult cochlear implant recipients with SME or SLW matched for preoperative AzBio sentence scores in quiet and aided and unaided pure tone average. Results: Patients with SLW (n = 52) and patients with SME (n = 37) had a similar mean (SD) age (62.0 [18.2] vs 62.6 [14.6] years, respectively), mean preoperative aided pure tone average (55.9 [20.4] vs 58.1 [16.4] dB; P = .59), and mean AzBio score (percentage correct, 11.1% [13.3%] vs 8.0% [11.5%]; P = .25).
At last follow-up (SLW vs SME, 9.0 [2.9] vs 9.9 [2.6] months), postoperative mean AzBio scores in quiet were not significantly different (percentage correct, 70.8% [21.3%] vs 65.6% [24.5%]; P = .29), and data log usage was similar (12.9 [4.0] vs 11.3 [4.1] hours; P = .07). In patients with preoperative AzBio <10% correct, the 6-month mean AzBio scores were significantly better with SLW than SME (percentage correct, 70.6% [22.9%] vs 53.9% [30.3%]; P = .02). The intraoperative tip rollover rate was 8% for SME and 0% for SLW. Conclusions: Cochlear implantation with SLW and SME provides comparable improvement in audiologic functioning. SME does not exhibit superior speech recognition outcomes when compared with SLW. abstract_id: PUBMED:25415467 Straight versus modiolar hugging electrodes: does one perform better than the other? Objective: To determine differences in speech perception outcomes for patients who received a CI422 and a Contour cochlear implant. Study Design: Retrospective case review. Setting: Tertiary referral center. Patients: Thirty-two adults who underwent cochlear implantation. Intervention: Cochlear implantation using a CI422 or Contour device. Main Outcome Measure: Bamford-Kowal-Bench (BKB) speech perception scores at 3 and 9 months after activation. Results: The mean BKB scores at 3 months for the CI422 device were 86.0% in quiet and 55.1% in noise. This compares with 86.0% in quiet and 62.3% in noise for the Contour device. At 9 months, the mean BKB scores were 85.9% in quiet and 67.1% in noise for the CI422 and 90.1% in quiet and 77.6% in noise for the Contour device. There was no statistically significant difference (p > 0.05) between speech perception outcomes at 3 or 9 months. Conclusion: This study suggests that the CI422 and Contour electrodes both improve speech perception outcomes postoperatively, and there does not appear to be any significant difference in outcome between the two types of devices. abstract_id: PUBMED:30295044 Possibilities for residual hearing preservation with Nucleus CI532 Slim Modiolar electrode array. Case report. During the rehabilitation of hearing-impaired patients, the preservation of residual acoustic hearing following cochlear implantation by minimizing the implantation trauma allows for improved hearing performance. To achieve this, minimally invasive, soft surgery methods and thinner, atraumatic electrodes were required. In our present study, we reported a case where a Cochlear® Nucleus CI532 Slim Modiolar electrode was implanted in a patient with residual hearing. Our aim was to study the possible preservation of postoperative acoustic residual hearing by audiological monitoring. Since childhood, due to her congenital hearing loss, the patient had been wearing a conventional, airborne hearing correction device on both ears. Six months before cochlear implantation, we measured progression of the hearing loss on both sides, so we decided to perform cochlear implantation. The patient had residual hearing in both ears prior to surgery; thus, the Cochlear® Nucleus CI532 Slim Modiolar Implant was used. The minimally invasive surgery was performed on the patient's right ear through the round window approach. Compared to the preoperative hearing threshold (average 85 dBHL), in the 4th postoperative week an initial hearing threshold progression of 20-25 dBHL was observed between 0.25 and 1.0 kHz, and of 5-10 dBHL between 2.0 and 4.0 kHz.
The hearing threshold measured in the 6th month showed a slight progression in the range above 1 kHz, but improved by the 12th month to the results achieved at the 4th week. The effects of cochlear implantation on residual hearing have been studied in numerous studies, in which several key surgical and technical factors have been identified. Nucleus CI532 is a Slim Modiolar electrode profile that is close to the modiolus, so it is expected to have a lower endocochlear hydrodynamic load since it lies in the covering of the osseous spiral lamina, thus less influencing the dynamics of the basilar membrane. However, the perimodiolar location of the electrode array allows the adjacent nerve elements of the spiral ganglion to be stimulated with a lower electrical intensity and a reduced surface that may be neuroprotective. Preservation of acoustic residual hearing following cochlear implantation improves the patient's speech perception and the sound localization skills, particularly in difficult circumstances. Long-term residual hearing preservation may also be of great importance in the subsequent feasibility for regenerative procedures and drug treatments. Orv Hetil. 2018; 159(41): 1680-1688. abstract_id: PUBMED:35151931 Natural course of residual hearing preservation with a slim, modiolar cochlear implant electrode array. Purpose: Understanding residual hearing preservation and its natural course following cochlear implantation is important for developing rehabilitation strategies for hearing loss. However, non-uniform evaluation criteria and varying surgical skills pose challenges in fair comparison of the effect of different electrodes on residual hearing preservation. We compared the effect of a slim modiolar electrode (SME) and a slim straight electrode (SSE), implanted by a single surgeon, on progression of residual hearing using different parameters, based on cross-sectional and longitudinal audiological analyses. Methods: Patients with preoperative low-frequency pure-tone average (LFPTA) ≤85 dB at 250 and 500 Hz and who underwent minimally traumatic surgical techniques were included. The progression of residual hearing using threshold shifts, hearing preservation rate according to the HEARRING classification, and maintenance of functional low-frequency hearing potentially qualifying for a hybrid stimulation was analyzed up to five time points throughout the 1-year follow-up period. Results: Threshold shifts and hearing preservation rates according to the HEARRING classification of the electrodes were comparable from 3 months through 12 months postoperatively. Maintenance of functional low-frequency hearing, required for the usage of a hybrid stimulation, was similar for both electrodes. A substantial proportion of implantees with SME use a hybrid stimulation, resulting in long-term use. However, a difference in the pattern of postoperative residual hearing preservation between the two electrodes is possible, probably due to differences in their physical characteristics and location. Specifically, correlation analysis showed that significantly less tight modiolar proximity negatively affected residual hearing preservation, albeit only at 3 months postoperatively, among patients with the SME. Conclusion: Collectively, both SME and SSE implantation showed favorable residual hearing preservation.
Our findings further refine the recently proposed hearing preservation with the SME and suggest that the physical characteristics and location of electrodes, in terms of electrode-to-modiolus distance, could affect loss of acoustic hearing in various ways. abstract_id: PUBMED:35566935 Medical-Grade Silicone Rubber-Hydrogel-Composites for Modiolar Hugging Cochlear Implants. The gold standard for the partial restoration of sensorineural hearing loss is cochlear implant surgery, which restores patients' speech comprehension. The remaining limitations, e.g., music perception, are partly due to a gap between cochlear implant electrodes and the auditory nerve cells in the modiolus of the inner ear. Reducing this gap will most likely lead to improved cochlear implant performance. To achieve this, a bending or curling mechanism in the electrode array is discussed. We propose a silicone rubber-hydrogel actuator where the hydrogel forms a percolating network in the dorsal silicone rubber compartment of the electrode array to exert bending forces at low volume swelling ratios. A material study of suitable polymers (medical-grade PDMS and hydrogels), including parametrized bending curvature measurements, is presented. The curvature radii measured meet the anatomical needs for positioning electrodes very closely to the modiolus. Besides stage-one biocompatibility according to ISO 10993-5, we also developed and validated a simplified mathematical model for designing hydrogel-actuated CI with modiolar hugging functionality. abstract_id: PUBMED:25614265 Straight vs. modiolar hugging electrodes--does one perform better than the other? N/A abstract_id: PUBMED:34106549 A new method to determine the optimal orientation of Slim Modiolar cochlear implant electrode array insertion. Background And Purpose: Our goal was to determine the optimal orientation of insertion of the Slim Modiolar electrode and develop an easy-to-use method to aid implantation surgery. In some instances, the electrode arrays cannot be inserted in their full length. This can lead to buckling, interscalar dislocation or tip fold-over. In our opinion, one of the possible reasons of tip fold-over is unfavourable orientation of the electrode array. Our goal was to determine the optimal orientation of the Slim Modiolar electrode array relative to clear surgical landmarks and present our method in one specified case. Methods: For the measurement, we used the preoperative CT scan of one of our cochlear implant patients. These images were processed by an open source and free image visualization software: 3D Slicer. In the first step we marked the tip of the incus short process and then created the cochlear view. On this view we drew two straight lines: the first line represented the insertion guide of the cochlear implant and the second line was the orientation marker (winglet). We determined the angle enclosed by winglet and the line between the tip of the incus short process and the cross-section of previously created two lines. For the calculation we used a self-made python code. Results: The result of our algorithm for the angle was 46.6055°. To validate this result, we segmented, from the CT scan, the auditory ossicles and the membranaceous labyrinth. From this segmentation we generated a 3D reconstruction. On the 3D view, we can see the position of the previous lines relative to the anatomical structures. After this we rotated the 3D model together with the lines so that the insertion guide forms a dot. 
In this view, the angle was measured with ImageJ and the result was 46.599°. Conclusion: We found that our method is easy, fast, and time-efficient. The surgery can be planned individually for each patient, based on their routine preoperative CT scan of the temporal bone, and the implantation procedure can be made safer. In the future we plan to use this method for all cochlear implantation surgeries, where the Slim Modiolar electrode is used. Answer: The comparison between straight and modiolar hugging electrodes in cochlear implants has been a subject of research, with studies examining their impact on speech perception outcomes and the preservation of residual hearing. One study found no statistically significant difference in speech perception outcomes between patients who received a straight electrode (CI422) and those who received a modiolar hugging electrode (Contour) at 3 or 9 months postoperatively. The mean Bamford-Kowal-Bench (BKB) scores for both devices were similar in quiet and in noise, suggesting that both types of electrodes can improve speech perception outcomes postoperatively without any significant difference in performance (PUBMED:25415467). Another study comparing slim modiolar electrode (SME) CI532 and slim lateral wall electrode (SLW) CI522 cochlear implant recipients also found that postoperative speech recognition outcomes were comparable between the two groups. However, in patients with very low preoperative speech recognition scores, the SLW group showed significantly better outcomes at 6 months post-implantation. The intraoperative tip rollover rate was higher for SME compared to SLW, indicating a potential difference in surgical experience with the two types of electrodes (PUBMED:34399646). Regarding residual hearing preservation, a study reported that the slim modiolar electrode (SME) could be successfully implanted in various cases, including those with inner ear anomalies, tumors, ossifications, and even in revision surgeries. The SME's slim design and tight modiolar-hugging feature were suggested to potentially allow for better hearing outcomes due to its closer proximity to the modiolus (PUBMED:34718854). Another study on the Nucleus CI532 Slim Modiolar electrode array indicated that its perimodiolar location allows for stimulation of adjacent nerve elements of the spiral ganglion with lower electrical intensity, which may be neuroprotective and contribute to the preservation of residual hearing (PUBMED:30295044). In summary, both straight and modiolar hugging electrodes have been shown to improve speech perception outcomes, and there is no clear evidence that one consistently outperforms the other. The choice between the two may depend on individual patient anatomy, the specific goals of the implantation (such as residual hearing preservation), and the surgeon's experience with the electrode type.
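The orientation-planning method described above (PUBMED:34106549) reduces to measuring the angle between two lines drawn on the cochlear view: the insertion guide/winglet construct and the line running to the tip of the incus short process. The sketch below shows how such an angle could be computed from 2D landmark coordinates; the point names, values, and helper function are hypothetical illustrations, not the authors' actual script or patient data.

```python
# Illustrative only: hypothetical 2D landmark coordinates (e.g., in mm)
# picked on a cochlear-view slice; not the authors' code or data.
import math

def angle_between(vertex, point_a, point_b):
    """Angle in degrees at 'vertex' enclosed by the rays toward point_a and point_b."""
    v1 = (point_a[0] - vertex[0], point_a[1] - vertex[1])
    v2 = (point_b[0] - vertex[0], point_b[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))

line_crossing = (12.0, 8.5)   # intersection of the insertion guide and winglet lines
incus_tip     = (18.5, 13.0)  # tip of the incus short process
winglet_point = (15.0, 14.5)  # a second point along the winglet orientation marker

print(f"Enclosed angle: {angle_between(line_crossing, incus_tip, winglet_point):.1f} degrees")
```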
Instruction: Oculomotricity in childhood: is the normal range the same as in adults? Abstracts: abstract_id: PUBMED:16612532 Oculomotricity in childhood: is the normal range the same as in adults? Unlabelled: The study of oculomotricity is done by the evaluation of three systems: saccade eye movements (MOS), optokinetic nystagmus (NO) and smooth pursuit eye movement (MORL). The joint action of these three systems allows the visual field's establishment in different movement situations. Aim: To compare the values of oculomotricity in normal adults and children in order to determine whether it is viable to use adult normality parameters when interpreting children's exams. Study Design: Clinical with transversal cohort. Material And Method: We studied MOS, NO and MORL in 50 normal children and in 35 adults and the results were compared by the Student's t test. Results: The data analysis showed a significant difference between children and adults (significance level α = 0.05). Discussion: In the literature we have found evidence that myelinization of the vestibular pathways happens at about 16 weeks and that of the pyramidal tracts at 24 months; maturation of oculomotricity is completed at this time. Other papers describe the importance of these tests in the diagnosis of neurological diseases and visual alterations and as predictors of the risk of schizophrenia development, but they do not report the normal range in children. In our study we found increased latency of MOS, increase in gain of NO, and reduction in gain and increase in the distortion of MORL in children if compared to adults, which is in accordance with the literature. These alterations can be explained by the low attention during the tests and the immaturity of ocular movements' control in children. Conclusion: Therefore, the establishment of a parameter of normality for oculomotricity in childhood is necessary for the correct analysis of the oculography to avoid misinterpretation of the exam. abstract_id: PUBMED:23113020 The Study of Total IgE Reference Range in Healthy Adults in Tehran, Iran. Background: IgE is an antibody class that is regarded as an important factor in the pathogenesis of allergic diseases, asthma, and immune responses to parasitic infection, and it could be responsible for the late-phase allergic response. The objective of this study was to evaluate total IgE in healthy Iranian adults, establish a reference range of total IgE, and assess the helpfulness of this value in the clinical diagnosis of atopic and allergic diseases. Method: Three hundred sixty-six healthy adults from blood transfusion volunteers (18 to 60 years) were selected in this study. A specific questionnaire (including demographic factors, smoking status and …) was filled out for each person. Also, we evaluated the effect of race and education on total IgE. These adults had no history of allergic disease. The total serum IgE level, using a commercial enzyme immunoassay, and CBC (eosinophil count) were determined in them. Results: Mean age was 37.32 ± 10.93 yrs and 219 cases were males and 147 females. The geometric mean of total IgE was 20.84 IU/ml (2-373 IU/ml) (95% percentile = 250) (95% confidence interval = 46.27-62.70). No difference was observed between the mean IgE log in males and females (P = NS), but the mean total IgE log in females was higher than in males. Conclusion: The normal range of serum total IgE obtained in this study could be helpful, as a reference range in healthy Iranian adults, for the diagnosis of IgE-dependent allergic disease.
abstract_id: PUBMED:18229796 Can oculomotricity be altered in patients with tinnitus only? A preliminary study. The study of oculomotricity is performed by evaluating three systems: saccadic ocular movements (SOMs), optokinetic nystagmus (OKN), and smooth pursuit eye movements (SPEMs). Our aim was to study oculomotricity in patients with a complaint of only tinnitus and to compare it with the value of our control group. We studied the SOMs, OKN, and SPEMs in 25 patients complaining only about tinnitus and in 35 normal adults and compared the results. The data analysis showed a significant difference in the value of the SOMs and SPEMs between the two groups. Sensorineural tinnitus can originate in the organ of Corti, in the cochlear nerve, or in the auditory pathways of the central nervous system. The auditory cortex connects with visual areas and with the superior colliculus. The latter structure is involved in the origin of SOMs and OKN. In our study, we found an increased delay in saccadic tests. In the SPEMs, we observed an increase in the degree of distortion, and a reduction in the gain. This outcome is in accordance with the literature. However, we detected a few alterations in the OKN, and this finding is in partial agreement with the studies analyzed. Alterations in oculomotricity can indicate involvement of the central nervous system in patients with a complaint of only tinnitus. abstract_id: PUBMED:19893944 Oculomotricity parameters in digital nystagmography among children with and without learning disorders. Unlabelled: The saccadic pathway involves numerous regions of the brain cortex, the cerebellum and the brainstem. Saccadic movement latency, velocity and precision parameters assess the efficacy of central nervous system (CNS) control over rapid eye movements. Very few disorders which alter the CNS are missed when these parameters are carefully measured using a computer. Pendular tracking assesses the integrity of the oculomotor system in controlling slow eye movements - vulnerable to CNS and vestibular system dysfunctions. Optokinetic nystagmus represents a stereoceptive response which compensates environment movements by psycho-optical inputs. Aims: to compare the oculomotricity values found in children with and without learning complaints. Materials And Methods: prospective study. We included in the study 28 children of both genders, within the age range between 8 and 12 years, with learning disorders (study group) and 15 without (control group). We carried out the fixed and randomized saccadic movement tests, pendular tracking study and optokinetic nystagmus. Results: There was a statistically significant difference between the groups concerning the randomized saccadic movement velocity parameters and in the pendular tracking test. Conclusion: The children with learning disorders presented alterations in some oculomotricity tests when compared to children without complaints. abstract_id: PUBMED:36553993 Kinematic Strategies for Sustainable Well-Being in Aging Adults Influenced by Footwear and Ground Surface. Falls are an inescapable problem influencing the health and threatening the safety of older adults. Exploring the kinematic strategies of aging adults can help reduce the risk of falls. 
To study kinematic strategies of aging adults in response to footwear (flat shoes, toe spring shoes, rocker sole shoes) and ground surfaces (level ground, grassland and rock road), a 3D motion capture system and subjective stability evaluation, with 14 female participants, were performed. Results indicated that footwear and ground surfaces significantly impacted joint dynamics during walking. Compared with young adults, aging adults tended to adopt a more conservative walking pattern. Wearing different shoes on the three ground surfaces mainly reduced the ROM (range of motion) of the ankle (p < 0.05). By analyzing the objective and subjective results, rocker sole shoes gave aging adults a stronger sense of instability, so they actively controlled the movement of the ankle joint. When walking on grassland and rock road, aging adults adjusted the movements of hip, knee and ankle joints to maintain gait stability. Aging adults are recommended to strengthen flexibility training of the ankle joint, perform hip adduction and abduction exercises, and wear rocker sole shoes to improve their balance ability and sustainable well-being. abstract_id: PUBMED:30351201 An Age-Specific Serum Thyrotropin Reference Range for the Diagnosis of Thyroid Diseases in Older Adults: A Cross-Sectional Survey in China. Background: The fact that serum thyrotropin (TSH) levels increase with age may influence the diagnosis of thyroid diseases in older adults. This study aimed to establish an age-specific serum TSH reference range, examine the prevalence of thyroid diseases in older adults ≥65 years, and analyze the risk factors. Methods: A cross-sectional study of adult populations in 10 cities in China was conducted from 2010 to 2011. A total of 15,008 subjects were randomly selected and completed the present study. Urinary iodine concentration, serum TSH, thyroid peroxidase antibody (TPOAb), and thyroglobulin antibody (TgAb) titers were measured. Thyroid ultrasonography and questionnaires were completed by all the subjects. When the TSH level was abnormal, free thyroxine and/or free triiodothyronine levels were measured. Results: When the reference range of the general population was used, the prevalence rates of overt hypothyroidism (Ohypo) and subclinical hypothyroidism (Shypo) in older adults ≥65 years were significantly higher than those in younger adults <65 years (2.09% vs. 0.80% and 19.87% vs. 16.23%, respectively; p < 0.001). Positive TPOAb and positive TgAb were associated with the prevalence of Shypo in older adults. An age-specific serum TSH reference range was formulated according to guidelines set forth by the National Academy of Clinical Biochemistry. Both the median and upper limit values of serum TSH in older adults were higher than those in younger adults (2.58 [0.75-8.86] mIU/L vs. 2.38 [0.76-6.57] mIU/L; p < 0.001). Using the age-specific serum TSH reference range, the prevalence of Shypo in older adults was 3.3%, which was significantly lower than the prevalence based on the reference range of the general population (3.3% vs. 19.87%). The prevalence rates of Ohypo, overt hyperthyroidism (Ohyper), and subclinical hyperthyroidism (Shyper) did not change much (Ohypo: 1.6% vs. 2.09%; Ohyper: 0.7% vs. 0.52%; and Shyper: 3.8% vs. 0.73%). Positive TPOAb, but not positive TgAb, was also associated with the prevalence of Shypo as diagnosed with the age-specific serum TSH reference range.
Conclusion: The serum TSH level increases with age, which may represent a normal compensatory phenomenon in older adults ≥65 years. To prevent misdiagnosis and mistreatment, the use of an age-specific serum TSH reference range is recommended in older adults for the diagnosis of thyroid diseases. abstract_id: PUBMED:38123512 Long-term physical activity time-in-target range in young adults with cardiovascular events in later life. Aims: Achieving at least 150 min per week of moderate-to-vigorous physical activity (PA) is a 'Class I, A level' recommendation for the primary prevention of cardiovascular disease. However, long-term PA is a complex behaviour that varies over the lifetime, which was insufficiently reflected by the current studies. This study used time-in-target range (TTR) to measure the long-term PA level during young adulthood and investigated its relationship with cardiovascular events in later life. Methods And Results: Participants in the Coronary Artery Risk Development in Young Adults study were recruited (n = 2902) and allocated into four groups by PA TTR: <25% (n = 1028), 25 to <50% (n = 444), 50 to <75% (n = 424), 75 to 100% (n = 1006). TTR was estimated with linear interpolation across the first 15 years. The primary outcome was a composite of cardiovascular events. The mean (SD) age after the exposure period was 40.3 (3.6) years. After a median follow-up for an additional 18.9 years, the participants with a TTR of at least 75% had a 40% lower risk of the primary outcome (HR: 0.60; 95%CI: 0.38 to 0.95) compared with the lowest TTR group. Each 1-SD increase in TTR was also significantly associated with a 21% decreased risk of the primary outcome (HR: 0.79; 95%CI: 0.65-0.97). Conclusion: Increasing PA is essential in young adulthood. In young adults, maintaining long-term guidelines-recommended PA levels may help to lower the risk of cardiovascular events in later life. Maintaining the guidelines-recommended PA level for at least 75% of time across young adulthood may be preferable. abstract_id: PUBMED:36358352 Eye-Movement Deficits in Seniors with Hearing Aids: Cognitive and Multisensory Implications. In recent years, there has been a growing body of literature highlighting the relationship between presbycusis and consequences in areas other than hearing. In particular, presbycusis is linked to depression, dementia, and cognitive decline. Among this literature, the effect of hearing aids, currently the most common method of treating presbycusis, is also a growing research topic. This pilot study aims to explore the effects of hearing aids on the cognitive and multisensory consequences of presbycusis. To that purpose, saccade and vergence eye movements were studied, towards visual and audiovisual targets, of a presbycusis population wearing hearing aids for an average of two years. These measurements were done whether or not participants were wearing their hearing aids. Eye-movement characteristics, particularly latencies (the reaction time taken to initiate an eye movement), allow one to measure attentional and multisensory characteristics. Previous studies showed that presbycusis was linked with an increase of saccade latencies and an improvement in audiovisual interaction capacities, i.e., latencies for audiovisual targets are shorter than those for visual targets. Eye movements are measured and analyzed with REMOBI and AIDEAL technologies.
Results show a shortening, with hearing aids, of right saccade latencies to visual targets, suggesting an increase in attention and/or engagement. Yet, saccade latencies are not shorter for audiovisual vs. visual targets alone, neither when wearing hearing aids, nor without. Moreover, convergence latencies are particularly slow for any type of target and with or without hearing aids. The results suggest deficits for audiovisual interactions and the initiation of convergences in that population. These deficits could be part of the factors triggering the need to wear hearing aids. These results therefore show interesting relationships between hearing-aid wearing in a presbycusis population and oculomotricity and invite further research in this area. abstract_id: PUBMED:36872788 Reduced Range of Gait Speed: A Parkinson's Disease-Specific Symptom? Reduced range of gait speed (RGS) may lead to decreased environmental adaptability in persons with Parkinson's disease (PwPD). Therefore, lab-measured gait speed, step time, and step length during slow, preferred, and fast walking were assessed in 24 PwPD, 19 stroke patients, and 19 older adults and compared with 31 young adults. Only PwPD, but not the other groups, showed significantly reduced RGS compared to young adults, driven by step time in the low and step length in the high gait speed range. These results suggest that reduced RGS may occur as a PD-specific symptom, and different gait components seem to contribute. abstract_id: PUBMED:34538183 Testing a Middle -Range Theory of Heart Failure Self-Care. A middle-range theory of heart failure self-care, derived from the self-care deficit theory of nursing, was tested among 175 Arab American older adults with heart failure. The middle-range theory achieved good statistical fit, but not all hypothesized relationships were supported. Specifically, conceptualizing basic conditioning factors as a single latent variable was not supported. However, individual factors of depression, social support, and time living with heart failure had a direct effect on both self-care agency and quality of life. Understanding predictors and outcomes of self-care within a theoretical framework is essential in caring for patients with heart failure. Answer: No, the normal range of oculomotricity in childhood is not the same as in adults. A study comparing the value of oculomotricity in normal adults and children found significant differences between the two groups, indicating that it is not viable to use the same parameters of normality for adults when interpreting children's exams (PUBMED:16612532). The study observed increased latency of saccade eye movements, an increase in gain of optokinetic nystagmus, and a reduction in gain and increase in the distortion of smooth pursuit eye movement in children compared to adults. These differences were attributed to the lower attention span during tests and the immaturity of ocular movements' control in children. Therefore, establishing a parameter of normality specific to childhood is necessary for the correct analysis of oculography to avoid misinterpretation of the exam (PUBMED:16612532).
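The physical-activity abstract above (PUBMED:38123512) summarizes long-term adherence to activity guidelines as a time-in-target range (TTR) estimated with linear interpolation between exam visits. The sketch below illustrates that idea in Python; the visit times, activity values, and 150 min/week threshold are invented for illustration and do not reproduce the study's analysis.

```python
# Illustrative only: TTR as the fraction of follow-up time during which the
# linearly interpolated activity level stays at or above a guideline threshold.
import numpy as np

def time_in_target_range(times, values, threshold, n_grid=10_000):
    """Approximate the fraction of [times[0], times[-1]] with value >= threshold."""
    grid = np.linspace(times[0], times[-1], n_grid)
    interpolated = np.interp(grid, times, values)  # piecewise-linear interpolation
    return float(np.mean(interpolated >= threshold))

visit_years   = [0, 2, 5, 7, 10, 15]            # hypothetical exam years
mvpa_min_week = [180, 140, 160, 200, 120, 170]  # hypothetical minutes/week of activity

print(f"TTR = {time_in_target_range(visit_years, mvpa_min_week, 150):.0%}")
```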
Instruction: Are injuries spatially related? Abstracts: abstract_id: PUBMED:37121451 Spatially resolved metabolomics combined with bioactivity analyses to evaluate the pharmacological properties of two Radix Puerariae species. Ethnopharmacological Relevance: P. lobata and P. thomsonii are medicinal plants with similar pharmacological functions but different therapeutic effects. A novel method is presented herein to investigate metabolites in terms of their distribution, qualification, and quantification, which is necessary to elucidate the different therapeutic effects of the two Puerariae species. Aim Of The Study: The aim of the present study was to perform spatially resolved metabolomics combined with bioactivity analyses to systematically compare the metabolite differences in P. lobata and P. thomsonii by distribution, qualification, quantification, and biological activity to evaluate their pharmacological properties. Materials And Methods: Air flow-assisted desorption electrospray ionization-mass spectrometry imaging (AFADESI-MSI) was performed to characterize the differences in the metabolite distributions of P. lobata and P. thomsonii. Further qualitative and quantitative analyses of the differential metabolites were performed using liquid chromatography-mass spectrometry (LC-MS). Biological activities correlated with the differences in the metabolites were validated by MTT assays. Results: Some metabolites showed complementary distributions between the phloem and xylem in the two species: saccharide, vitamin, and inosine levels were higher in the phloem of P. thomsonii but higher in the xylem of P. lobata. The 3'-hydroxyl puerarin level was higher in the xylem of P. thomsonii but higher in the phloem of P. lobata. Qualitative and quantitative analyses of the metabolites revealed a total of 52 key differential metabolites. MTT assays showed that daidzein, daidzin, puerarin, ononin, genistin, formononetin, 3'-hydroxy puerarin, 3'-methoxy puerarin, mirificin, and 3'-methoxy daidzin exerted protective effects on H9c2 cells against hypoxia/reoxygenation injury. P. lobata extracts exhibited a significantly better protective efficacy than P. thomsonii extracts. Conclusions: In this study, AFADESI-MSI combined with LC-MS and biological activities comprehensively elucidated metabolite differences in the distribution, qualification, quantification, and pharmacological properties of P. lobata and P. thomsonii. The results of this study could provide a novel strategy for species identification and quality assessment of similar Chinese herbal medicines.
Here, we describe the use of three-dimensional printing techniques to create customizable arrays of poly-lactic acid (PLA) fibers that can be incorporated into a collagen scaffold under development for tendon bone junction repair. Notably, we use additive manufacturing concepts to generate arrays of spatially-graded fibers from biodegradable PLA that are incorporated into collagen scaffolds to create a collagen-PLA composite. We demonstrate the ability to tune the mechanical performance of the fiber-scaffold composite at the bulk scale. We also demonstrate the incorporation of spatially-heterogeneous fiber designs to establish non-uniform local mechanical performance of the composite biomaterial under tensile load, a critical element in the design of multi-compartment biomaterials for tendon-to-bone regeneration applications. Together, this work highlights the capacity to use multi-scale composite biomaterials to control local and bulk mechanical properties, and provides key insights into design elements under consideration for mechanically competent, multi-tissue regeneration platforms. abstract_id: PUBMED:38103677 Spatially-dependent model for rods and cones in the retina. We develop a mathematical model for photoreceptors in the retina. We focus on rod and cone outer segment dynamics and interactions with a nutrient source associated with the retinal pigment epithelium cells. Rod and cone densities (number per unit area of retinal surface) are known to have significant spatial dependence in the retina with cones located primarily near the fovea and the rods located primarily away from the fovea. Our model accounts for this spatial dependence of the rod and cone photoreceptor density as well as for the possibility of nutrient diffusion. We present equilibrium and dynamic solutions, discuss their relation to existing models, and estimate model parameters through comparisons with available experimental measurements of both spatial and temporal photoreceptor characteristics. Our model compares well with existing data on spatially-dependent regrowth of photoreceptor outer segments in the macular region of Rhesus Monkeys. Our predictions are also consistent with existing data on the spatial dependence of photoreceptor outer segment length near the fovea in healthy human subjects. We focus primarily on the healthy eye but our model could be the basis for future efforts designed to explore various retinal pathologies, eye-related injuries, and treatments of these conditions. abstract_id: PUBMED:27704406 White matter abnormalities are associated with overall cognitive status in blast-related mTBI. Blast-related mild traumatic brain injury (mTBI) is a common injury of the Iraq and Afghanistan Wars. Research has suggested that blast-related mTBI is associated with chronic white matter abnormalities, which in turn are associated with impairment in neurocognitive function. However, findings are inconsistent as to which domains of cognition are affected by TBI-related white matter disruption. Recent evidence that white matter abnormalities associated with blast-related mTBI are spatially variable raises the possibility that the associated cognitive impairment is also heterogeneous. Thus, the goals of this study were to examine (1) whether mTBI-related white matter abnormalities are associated with overall cognitive status and (2) whether white matter abnormalities provide a mechanism by which mTBI influences cognition. 
Ninety-six Operation Enduring Freedom/Operation Iraqi Freedom (OEF/OEF) veterans were assigned to one of three groups: no-TBI, mTBI without loss of consciousness (LOC) (mTBI-LOC), and mTBI with LOC (mTBI + LOC). Participants were given a battery of neuropsychological tests that were selected for their sensitivity to mTBI. Results showed that number of white matter abnormalities was associated with the odds of having clinically significant cognitive impairment. A mediation analysis revealed that mTBI + LOC was indirectly associated with cognitive impairment through its effect on white matter integrity. These results suggest that cognitive difficulties in blast-related mTBI can be linked to injury-induced neural changes when taking into account the variability of injury as well as the heterogeneity in cognitive deficits across individuals. abstract_id: PUBMED:36469281 Sports-Related Shoulder Injuries Among Female Athletes. Purpose Of Review: The objectives of this review are to explore the recent literature evaluating sports-related shoulder injuries among female athletes. Recent Findings: Recent literature has highlighted sex-related differences in injury trends and patterns among athletes. Increased participation of women in both recreational and professional sports has resulted in increased exposure to injury. While men experience greater rates of shoulder injury overall, women tend to experience more overuse-related injuries. Evidence also suggests women are more susceptible to shoulder laxity and rotator cuff tears. In comparison to their male counterparts, women note poorer function, increased pain, and decreased activity level following shoulder injuries. Women may also be more likely to experience worse outcomes following surgical intervention. Sex-related differences in injury patterns and outcomes results from a combination of molecular and environmental influences, including hormone pathways, shoulder morphology, and differing rates of participation in, and athletic regulations among, certain sports. Sex-related differences occur in how athletes sustain, experience, and recover from sports-related injuries. A comprehensive understanding of sex-related injuries enhances clinical decision making, treatment, and recovery. Further research is needed to clarify sex as an independent variable when evaluating sports-related shoulder injuries. abstract_id: PUBMED:38146581 Sports-Related Injuries in Adolescent Athletes: A Systematic Review. Sports participation among adolescents is increasing, offering numerous health benefits and exposing them to the risk of sports-related injuries. This paper aims to understand that the prevalence and risk factors associated with these injuries are crucial for effective injury prevention and the overall well-being of adolescent athletes. This systematic review synthesizes the existing literature on sports-related injuries in adolescent athletes. A comprehensive search was conducted, yielding 11 relevant studies. The studies were analyzed to determine the prevalence of injuries and identify associated risk factors. A qualitative synthesis of the findings was performed. The included studies collectively highlight the significant burden of sports-related injuries among adolescent athletes, with prevalence rates ranging from 34.1% to 65%. Specific risk factors associated with these injuries include body mass index (BMI), physical activity patterns, age, gender, sport type, previous injuries, and training practices. 
Obese adolescents, those engaged in excessive weekly practice hours, younger athletes, and females were found to be at higher risk. Certain sports, such as soccer and football, exhibited higher injury rates. Sports-related injuries in adolescent athletes are a multifaceted issue influenced by various factors. Tailored injury prevention strategies are essential, considering the specific needs of adolescent athletes in different sports and age groups. Interventions should encompass physical and educational components, emphasizing proper warm-ups, protective equipment use, and injury prevention education. Longitudinal studies and standardized injury reporting systems are needed to monitor injury trends and evaluate prevention strategies effectively. This systematic review contributes to our understanding of sports-related injuries in adolescent athletes and underscores the importance of evidence-based injury prevention efforts. abstract_id: PUBMED:31565200 Health-related quality of life and related characteristics of persons with spinal cord injury in Nigeria. Background: Spinal cord injury (SCI) is impairment of the spinal cord resulting in numerous health problems that considerably affect the quality of life (QOL) of the patients. Moreover, a number of sociodemographic and clinical characteristics may influence the persons' health-related quality of life (HRQOL). However, there is limited information on the HRQOL and related characteristics among affected persons living in Nigeria. This study explores the HRQOL and related characteristics of persons with SCI in Kano, Northwestern Nigeria. Methods: A prospective cross-sectional survey of 41 subjects with SCI and 40 age and gender matched healthy subjects was conducted from January to December 2016. Subjects' sociodemographic and clinical characteristics and HRQOL (using the SF-36 questionnaire) were collected and analyzed. Results: The majority of the subjects were men in both the SCI (85.4%) and healthy (82.5%) groups. The mean injury duration was 28.4 ± 20.2 months. Road traffic accident (46.3%) was the leading cause of injury with paraplegia (70.7%) being the most frequent level of injury. A greater number of the subjects (43.9%) had a complete impairment. Subjects with SCI had significantly lower HRQOL in the domains of general health, physical functioning, bodily pain, social functioning, role-emotional, and mental health compared to healthy controls. Gender, level of injury, and severity of injury were commonly found to be related to lower HRQOL scores. Conclusion: Persons with SCI from Kano, Northwestern Nigeria have lower HRQOL across various domains compared to healthy controls. Common factors related to lower HRQOL scores were gender, level of injury, and severity of injury. There is a need for optimal rehabilitation for persons with SCI in Kano, Northwestern Nigeria. abstract_id: PUBMED:32111566 A conceptual model and detailed framework for stress-related, strain-related, and overuse athletic injury. A multitude of athletic injuries occur when the various tissues that make up the human body experience stresses and strains that exceed their material strength. The precise amount of stress and strain that any given tissue can withstand is determined by the mechanical properties and resultant strength of that particular tissue. These mechanical properties are directly determined by an individual's physiology and acute regulation of these properties. 
A number of theoretical frameworks for athletic injury occurrence have been proposed, however, a detailed conceptual framework for injury aetiology that considers the interplay between the physiological and mechanical factors and outlines the causal pathways to tissue damage and injury is needed. This will guide injury research towards a more thorough investigation of causal mechanisms and understanding of risk factors. Further, it is important to take into account the considerable differences in loading patterns which can result in varying injury outcomes such as acute stress-related, strain-related, or overuse injury. Within this article a simplified conceptual model of athletic injury is proposed along with a detailed, evidence-informed, conceptual framework for athletic injury aetiology that focuses on stress-related, strain-related, and overuse injury. abstract_id: PUBMED:20303187 Spatially restricting gene expression by local translation at synapses. mRNA localization and regulated translation provide a means of spatially restricting gene expression within each of the thousands of subcellular compartments made by a neuron, thereby vastly increasing the computational capacity of the brain. Recent studies reveal that local translation is regulated by stimuli that trigger neurite outgrowth and/or collapse, axon guidance, synapse formation, pruning, activity-dependent synaptic plasticity, and injury-induced axonal regeneration. Impairments in the local regulation of translation result in aberrant signaling, physiology and morphology of neurons, and are linked to neurological disorders. This review highlights current advances in understanding how mRNA translation is repressed during transport and how local translation is activated by stimuli. We address the function of local translation in the context of fragile X mental retardation. abstract_id: PUBMED:37888327 A Spatially Guided Machine-Learning Method to Classify and Quantify Glomerular Patterns of Injury in Histology Images. Introduction The diagnosis of glomerular diseases is primarily based on visual assessment of histologic patterns. Semi-quantitative scoring of active and chronic lesions is often required to assess individual characteristics of the disease. Reproducibility of the visual scoring systems remains debatable, while digital and machine-learning technologies present opportunities to detect, classify and quantify glomerular lesions, also considering their inter- and intraglomerular heterogeneity. Materials And Methods: We performed a cross-validated comparison of three modifications of a convolutional neural network (CNN)-based approach for recognition and intraglomerular quantification of nine main glomerular patterns of injury. Reference values provided by two nephropathologists were used for validation. For each glomerular image, visual attention heatmaps were generated with a probability of class attribution for further intraglomerular quantification. The quality of classifier-produced heatmaps was evaluated by intersection over union metrics (IoU) between predicted and ground truth localization heatmaps. Results: A proposed spatially guided modification of the CNN classifier achieved the highest glomerular pattern classification accuracies, with area under curve (AUC) values up to 0.981. 
With regards to heatmap overlap area and intraglomerular pattern quantification, the spatially guided classifier achieved a significantly higher generalized mean IoU value compared to single-multiclass and multiple-binary classifiers. Conclusions: We propose a spatially guided CNN classifier that in our experiments reveals the potential to achieve high accuracy for the localization of intraglomerular patterns. Answer: Yes, injuries can be spatially related. The concept of spatial relation in the context of injuries refers to the idea that the location, distribution, and extent of injuries within a particular tissue, organ, or body part can be influenced by various factors and can have implications for treatment and recovery. Several of the provided abstracts discuss the spatial aspects of injuries in different contexts: - In the study of Radix Puerariae species, spatially resolved metabolomics combined with bioactivity analyses were used to evaluate the pharmacological properties of the plants, which could be relevant to understanding how different compounds are distributed within the plant tissues and how this might relate to their therapeutic effects on injuries or other health conditions (PUBMED:37121451). - The research on modifying the strength and strain concentration profile within collagen scaffolds using customizable arrays of poly-lactic acid fibers highlights the importance of spatially-graded biomaterials for tissue engineering, particularly for tendon-to-bone junction repair, where local mechanical behavior is crucial (PUBMED:27829193). - The spatially-dependent model for rods and cones in the retina takes into account the spatial distribution of these photoreceptors and their interactions with nutrients, which could be relevant to understanding retinal injuries and pathologies (PUBMED:38103677). - The study on white matter abnormalities in blast-related mild traumatic brain injury (mTBI) suggests that the spatial variability of white matter disruptions is associated with heterogeneous cognitive impairments following the injury (PUBMED:27704406). - The review on sports-related shoulder injuries among female athletes discusses how sex-related differences in shoulder morphology and other factors can influence the spatial patterns of injury and recovery outcomes (PUBMED:36469281). - The systematic review on sports-related injuries in adolescent athletes identifies various risk factors, including the type of sport and training practices, which can influence the spatial distribution of injuries across different body parts (PUBMED:38146581). - The study on health-related quality of life in persons with spinal cord injury in Nigeria mentions that the level and severity of injury, which have spatial dimensions, are related to lower quality of life scores (PUBMED:31565200). - The conceptual model for stress-related, strain-related, and overuse athletic injury considers the spatial distribution of stresses and strains that tissues experience, which can lead to different types of injuries (PUBMED:32111566).
Instruction: Blunt abdominal trauma with small bowel injury: are isolated lesions riskier than associated lesions? Abstracts: abstract_id: PUBMED:8376136 Lesions of the small intestine and colon in blunt injuries of the abdomen Between 1976 and 1991, we observed lesions of the small bowel or colon in 39 patients having sustained blunt abdominal trauma. 70% of the patients presented with concomitant injuries. Except for 3 cases, all the patients presented with abdominal pain on admission. All the patients were operated on. The delay between admission and operation varied between a few minutes and 48 hours. Indication was hemoperitoneum, peritonitis or progressive abdominal pain. Overall morbidity is high, often related to associated disease. 4 patients died (mortality 10%), including 2 patients with isolated intestinal trauma who were operated on after 20 and 36 hours. Due to the lack of specific laboratory or X-ray test, we suggest a high index of suspicion for bowel lesions in blunt abdominal trauma, especially in unconscious patients. Close observation is mandatory. Indication for laparotomy must not be delayed if any doubt exists regarding the integrity of hollow viscus. abstract_id: PUBMED:18372966 Blunt abdominal trauma with small bowel injury: are isolated lesions riskier than associated lesions? Purpose: The objective of this study was to compare patients with "isolated" blunt small bowel injury (SBI) to patients with multiple intra-abdominal injuries and analyze whether delayed laparotomy affected outcome. Methods: Medical records of patients that suffered a blunt SBI between 1994 and 2005 were reviewed. The patients were divided into two groups: those with isolated SBI and those with other associated intra-abdominal injuries ("non-isolated"). The method of diagnosis, time to operation, small bowel Organ Injury Scale (OIS) assessment (grade &gt;or= 2), injury severity score (ISS), morbidity, and mortality were analyzed. Results: A total of 90 patients met the inclusion criteria, including 62 (68.9%) isolated cases and 28 (31.1%) non-isolated cases. Isolated cases required more supplementary diagnostic methods than the non-isolated cases. Non-isolated cases had a shorter diagnosis to treatment period (p &lt; .01) and a higher ISS (mean 22.5 vs. 17.2 in "isolated" group). Morbidity (51.6% and 53.6%) and mortality (16.1% and 28.6%) did not differ significantly between the isolated and non-isolated groups. Delays in diagnosis were common in the isolated group, but this did not affect outcome. Patients with associated injuries, and higher ISS, had higher mortality. Conclusions: The presence of associated intra-abdominal injuries significantly affected the presentation and time to diagnosis of patients with SBI, but not morbidity or mortality. Delayed surgical treatment in the isolated cases was not associated with an increased incidence of complications. Patients inflicted with more severe associated injuries were less likely to survive the trauma. abstract_id: PUBMED:8071404 Intestinal-mesenteric lesions of closed abdominal traumas A retrospective series of 24 intestinomesenteric lesions after blunt abdominal trauma were investigated to determine the distribution of the lesions as a function of the type of accident and to establish diagnostic and therapeutic patterns based on observed post-operative results. The chi squared test was used for statistical analysis. There were 20 men and 4 women, mean age 39 +/- 15 years (range 17-74). 
Trauma was caused by a traffic accident in 16 cases (67%), including 12 victims under 40 (p < 0.01) and 14 drivers (87.5%), 12 of whom wore a safety belt. Clinical findings included haemodynamic instability or generalized abdominal guarding. Pneumoperitoneum was seen on only one of the 21 (5%) admission X-rays. Abdominal echography was often ordered (16 times). The mean interval between trauma and surgical intervention was 39 hours. Injury involved the mesentery (n = 15), the small intestine (n = 14), the colon (n = 7) and the colic mesenteriolum (n = 4). Lesions of the colon were observed more often in automobile accidents (p < 0.05). Injury to other abdominal organs was also observed in 12 patients: liver (n = 6), spleen (n = 5) and pancreas (n = 5). The thorax was involved in 14 patients, including 11 under 40 (p < 0.01). Nine intestinal resections/anastomoses, 9 enterorrhaphies (including 4 colic injuries) and 1 Hartmann procedure were performed. Two patients died (8.3%) and follow-up was complicated in 14 (including the 2 deaths). Wearing a safety belt did not affect the number of intestinomesenteric lesions; in subjects wearing a safety belt, there were more injuries of the thorax but less neurological involvement. (ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:1340003 Exclusive jejunal and ileal lesions due to blunt trauma. The exclusive jejunal and ileal lesion due to blunt trauma is a rare and potentially lethal condition. The small intestine is the most damaged organ in penetrating abdominal injuries, although its isolated injury in blunt abdominal trauma is rare and difficult to diagnose. There are no characteristic signs or symptoms of exclusive jejunal and ileal lesions due to abdominal contusion, which results in high morbidity and mortality rates because diagnosis is often made late despite the advanced auxiliary diagnostic methods available. Considering these facts, the aim of the present work is to study the prognosis of patients suffering from this type of injury due to abdominal trauma, taking into consideration the time elapsed between the trauma and its diagnosis and the importance of its early surgical correction. abstract_id: PUBMED:28209449 Maxillofacial and concomitant serious injuries: An eight-year single center experience. Purpose: Maxillofacial injuries are frequently associated with multiple trauma and can lead to poor functional and aesthetic outcomes. The severity of maxillofacial injuries may be considerable and can divert clinicians' attention from other concomitant injuries that are less evident but potentially life-threatening. The aim of this study was to identify the concomitant injuries in patients referred to the Emergency Department (ED) of the University Hospital of Messina (North-East Sicily, Italy) for maxillofacial traumas. Methods: We retrospectively evaluated data of 240,833 patients admitted to the ED of the University Hospital of Messina from January 2008 to December 2015 because of maxillofacial injuries leading to hospitalization and surgical treatment. Patients who primarily received care at other institutions, pediatric trauma patients and adult patients who were transferred in accordance with pre-existing agreements in case of paucity of beds were excluded. Finally, we included 447 (0.2%) patients over the 8 years. Data were evaluated with emphasis on epidemiology (age, gender, mechanism of trauma), primary survey abnormalities and pattern of trauma.
Results: The most frequent cause of maxillofacial trauma was road accidents (319 patients, 71.4%), among which motorcycle ones were prevalent. The maxillofacial injured who presented major lesions were 98 patients and minor lesions occurred in 349 patients; 443 (99.1%) patients underwent maxillofacial surgery, immediate or delayed depending on the severity of concomitant injuries (χ2 = 557.2, p &lt; 0.0001). Five concomitant neglected lesions were found to be associated with severe maxillofacial traumas (χ2 = 17.13, p &lt; 0.0001 vs minor lesions). All of the neglected lesions occurred in paucisymptomatic patients who showed painless abdomen, no hemodynamic instability, no signs of hematoma of anterior and posterior abdominal wall or other suspicious clinical signs and symptoms. Conclusion: Among the patients admitted firstly in other surgical wards different from the Maxillofacial Surgery Unit, diagnosis was more difficult, especially for blunt abdominal traumas, in which patients showed only vague and nonspecific symptoms concealing serious and life-threatening injuries. We recommend the routine use of whole body CT scan, when the maxillofacial injuries appear prevalent, mainly in patients affected by maxillofacial major lesions. abstract_id: PUBMED:8186798 Lesions of the renal pedicle in blunt abdominal trauma in children. 7 cases The frequency of urinary tract trauma in children is difficult to estimate. Renal pedicle lesions are estimated to represent 10% of all forms of urinary tract trauma. These lesions are difficult to diagnose and may be completely missed. Over a 12 year period, we have treated 7 children with a renal pedicle injury. Haematuria was immediate in 3 cases, delayed in 2 cases and absent in 2 cases. The diagnosis is based on intravenous urography or postcontrast computed tomography. Treatment consisted of disobstruction with intimal suture with a good result, but impaired renal function, an upper pole nephrectomy, functional aorto-renal bypass graft but with altered renal function, total nephrectomy and 3 cases of surveillance with one non-functioning kidney. None of these children are hypertensive. The authors discuss the diagnostic approach, the contribution of imaging and the therapeutic options and results. abstract_id: PUBMED:2964833 Scintigraphic evaluation of traumatic splenic lesions in children. Ninety-eight children with recent blunt abdominal trauma which initially evoked clinical suspicion of splenic injury were examined with colloid scintigraphy of the spleen and the liver using multiple imaging views and with abdominal survey. Nineteen children were, in addition, examined with tomographic scintigraphy. The clinical findings and the course of the disorder were reanalyzed. Scintigraphy indicated splenic injury in 56 children and hepatic injury in another 5 children. The left lateral and the left oblique were the optimum imaging views for detecting splenic ruptures. Tomographic scintigraphy did not improve the diagnostic yield. Abdominal survey failed to indicate almost every second case of splenic rupture and provided no additional information of significance. The clinical review agreed with the scintigraphic diagnosis of splenic lesions but, in addition, it suggested possible splenic lesions in another 10 children with normal scintigraphy. This discrepancy cannot be explained as surgery was not employed; the occurrence of splenic lesions too small to become detectable at scintigraphy or to provoke clinically evident symptoms may be supposed. 
abstract_id: PUBMED:19106841 Abdominal trauma: spectrum of lesions Abdominal injuries are more often observed in the setting of polytrauma. Detection and accurate description of hemoperitoneum, solid organ injuries (contusion, hematoma, laceration, fracture), bowel perforation and arterial bleeding allow optimal multidisciplinary management. Mesenteric and bowel injuries are often difficult to diagnose and may be masked by the presence of more frequent injuries (spleen, liver). MDCT is the gold standard imaging modality for severe blunt abdominal trauma. abstract_id: PUBMED:8292437 Extrahepatic biliary lesions in abdominal traumatism The extrahepatic biliary tree traumatisms are rare and hard to diagnose; 15 of 864 patients with surgery treatment for abdominal trauma for over 19 years (1974-1992) were studied retrospectively. Twelve blunt abdominal traumas (9 traffic accident and 3 falls down) and 3 open traumas (stabbing in two and gunshot wounds in one case). Eleven had gallbladder injury, two gallbladder and biliary duct and two exclusively biliary duct. Only one case was an isolated lesion, in the rest of cases, intra or extra abdominal lesions, occasionally both, were seen. The diagnostic method, associated lesions, morbi-mortality are reviewed. Surgical treatment was cholecystectomy for gallbladder injury and direct suture with T-tube (Kehr) for biliary duct injury. abstract_id: PUBMED:30446947 Ultrasonography of the pediatric spleen: a pictorial essay. In infants and children, the spleen is involved in many pathological processes, whether those processes are isolated or related to systemic diseases. Pathology of the pediatric spleen includes congenital anomalies, splenomegaly, trauma, focal lesions, infarction, and tumors. Ultrasonography (US) is a widely available, fast, noninvasive imaging technique to assess the size, shape, and position of the spleen, as well as to define splenic echotexture. US is capable of screening for splenic disorders without the risk of ionizing radiation; it is the initial imaging examination performed to evaluate suspected splenic pathology, providing clinicians with helpful decisional support. US plays an important role in the detection of even very small amounts of hemoperitoneum, a herald of significant abdominal organ injury, in pediatric blunt abdominal trauma. Moreover, contrast-enhanced US may allow early detection of splenic injuries, ideally minimizing children's risk from radiation exposure. This pictorial essay illustrates the normal ultrasound appearance of the pediatric spleen and the sonographic findings which may guide clinicians to a correct diagnosis of pathologic conditions. Answer: The question of whether isolated lesions of the small bowel due to blunt abdominal trauma are riskier than those with associated lesions has been investigated in the literature. According to a study that compared patients with isolated blunt small bowel injury (SBI) to those with multiple intra-abdominal injuries, the presence of associated intra-abdominal injuries significantly affected the presentation and time to diagnosis of patients with SBI. However, it did not affect morbidity or mortality. Delays in diagnosis were common in the isolated group, but this did not affect the outcome. Patients with more severe associated injuries and higher injury severity scores (ISS) had higher mortality, suggesting that the severity of associated injuries rather than the isolation of the SBI itself is a more critical factor in patient outcomes (PUBMED:18372966). 
Another study emphasized that a high index of suspicion for bowel lesions should be maintained in blunt abdominal trauma, especially in unconscious patients; close observation is mandatory, and the indication for laparotomy must not be delayed if any doubt exists regarding the integrity of a hollow viscus. That study reported high overall morbidity, often related to associated disease, and a mortality rate of 10%, including patients with isolated intestinal trauma who were operated on after significant delays (PUBMED:8376136). These findings indicate that while isolated SBI can be associated with delays in diagnosis, the risk associated with isolated lesions is not necessarily higher than that of non-isolated lesions when it comes to morbidity and mortality. The critical factor appears to be the timely diagnosis and treatment of the injury, as well as the severity of any associated injuries.
Instruction: Aortic Dissection Limited to Abdominal Aorta: an Underrecognized Entity? Abstracts: abstract_id: PUBMED:26324032 Aortic Dissection Limited to Abdominal Aorta: an Underrecognized Entity? Background: Aortic dissection limited to the abdominal aorta is a rare clinical entity with non-specific clinical features. Optimal treatment, as well as natural history and progression of the disease, remain unclear. In 1992 we -reported 5 cases of isolated abdominal aortic dissection (IAAD) and in the present paper we update our series with 5 additional patients. A concise literature review is also provided. Methods: Between 1992 and 2014, we diagnosed 5 patients with IAAD (4 men, mean age 60.6 years, range 45-77). No patient presented with acute onset of symptoms. One patient was diagnosed with a periumbilical bruit, and diagnosis was made with magnetic resonance (MR)-angiography. Other diagnoses were incidental findings on computed tomographic (CT) scanning. Dissection was located infrarenally in four cases and at the celiac trunk in one case. Results: All cases were treated conservatively with hypertension control and close follow-up. Follow-up period ranged from 10 months to 20 years and was performed yearly by CT- or MR-angiography and blood pressure monitoring. All patients remained symptom-free, all dissection lengths remained stable. Slowly increasing post-dissection aneurysmal dilatation was encountered in two patients. We combined results of these five new patients with five previously diagnosed and reported patients at our center. Treatment was surgical in only one out of 10 patients. There was no disease-related mortality during follow-up. Conclusions: Based on our case series, IAAD remains a rare clinical condition with relative benign clinical course. Treatment was almost exclusively conservative. Recent publications state IAAD might be underrecognized and under-diagnosed compared to thoracic aortic dissections. abstract_id: PUBMED:15776306 Limited dissection of abdominal aortic aneurysm in a patient with multiple myeloma. Dissection limited to the abdominal aorta contributes 4% of all aortic dissections, and inflammatory injury of the aortic media is one of factors associated with dissection. In multiple myeloma, leukocytoclastic vasculitis of the skin has been known. We describe limited dissection of an abdominal aortic aneurysm with dense lymphocyte infiltration in a 62-year-old man with multiple myeloma. Although it is unclear whether the lymphocyte infiltration in the aortic wall, which was denser than that of atherosclerotic aneurysm, was associated with multiple myeloma, the excessive aortic wall inflammation may have somewhat influenced aneurysm formation or aortic dissection. abstract_id: PUBMED:15915419 Dissecting abdominal aortic aneurysm in a young man: an uncommon presentation of abdominal pain. Aortic dissection results from an intimal tear that allows blood to penetrate the wall of the aorta. Patients, typically males over the age of fifty with a history of hypertension, usually present with sudden onset of severe chest pain. An unusual presentation of a dissecting aortic aneurysm as a cause of abdominal pain in a 32-year-old male is discussed. Although a relatively uncommon cause of abdominal pain, it is important to keep this entity on the list of differential diagnoses and adequately rule it out before discharge from the emergency department. abstract_id: PUBMED:34458033 Identification and Management of Abdominal Aortic Dissection With Concurrent Aneurysm. 
The incidence of acute aortic dissection ranges from 2.6 to 3.5 per 100,000 people per year. An abdominal aortic dissection is known to be the rarest of all types of aortic dissection, with high morbidity and mortality rates. In this case report, we are hoping to shed light on this unusual entity, its etiology, and management options. abstract_id: PUBMED:24611358 Spontaneous isolated dissection of the abdominal aorta. Isolated spontaneous dissection of the abdominal aorta is a rare entity, and only a few cases have been reported in the literature to date. A 42-year-old male was admitted to the hospital with mild pain in the lower abdomen and back that had begun seven days prior to admission, together with the sudden onset of ischemic symptoms of the left leg (ischemic ulcers of the calf, a gangrenous toe and a pale foot). The patient denied any trauma, had no history of hypertension, and was an active cigarette smoker. MSCT and digital subtraction angiography showed a dissection of the abdominal aorta approximately two centimeters below the origin of the inferior mesenteric artery, extending into the left common iliac artery, with no sign of aneurysmal dilatation of the abdominal aorta. Emergent surgery was performed with aorto-bi-iliac bypass graft interposition, amputation of the left toe and necrectomy of the left calf. The postoperative course and local vascular status were satisfactory. Even though it is a rare entity, isolated abdominal aortic dissection accounts for approximately 2-4% of all aortic dissections. Current therapeutic options include endovascular repair, open surgery or conservative treatment. abstract_id: PUBMED:8368094 Echo Doppler in abdominal aortic dissection. The authors present a case of abdominal aortic dissection (type B). The diagnosis was made by Doppler ultrasound and computerized tomography (CT). The use of Doppler ultrasound in the diagnosis and assessment of abdominal aortic dissection and visceral arterial involvement is discussed. Doppler ultrasound is also a useful imaging technique in follow-up evaluation if conservative treatment is employed. abstract_id: PUBMED:19136531 Spontaneous isolated dissection of the abdominal aorta: single-center experience. Isolated abdominal aortic dissection is a rare clinical entity. We report a single-center experience with six patients with acute isolated abdominal aortic dissection managed during a period of seven years. The patients' median age was 62 years and they were predominantly male. All patients presented acutely with abdominal pain, one with hypovolemic shock due to aortic rupture, two with acute lower limb ischemia and one with buttock necrosis. Interestingly, one major symptom was astasia, observed in four patients, which resolved postoperatively. Computed tomography scan of the thoracic and abdominal aorta was the main diagnostic tool in all patients. Treatment consisted of open repair in three cases and endovascular repair with stent-graft deployment in the remaining three patients. One patient died postoperatively from multiorgan failure. Although the natural history and treatment strategies of isolated abdominal aortic dissection have not been well defined, accurate clinical diagnosis and prompt management are essential to prevent adverse complications. Moreover, sudden development of astasia should alert the physician to this pathologic entity. abstract_id: PUBMED:10597006 Acute dissection of the abdominal aorta. A 64-year-old man presented with sudden lower abdominal pain and diffuse lumbago.
He was diagnosed as having primary dissection of the abdominal aorta. Entry closure and aneurysmal wall plication was performed, and the subsequent course was satisfactory. Surgical intervention is recommended for patients with abdominal aortic dissection in the infrarenal segment, where the extent of dissection is limited and access is comparatively easy. Enhanced computed tomography is useful both in diagnosis and follow-up of this aortic disease. abstract_id: PUBMED:21772903 Acute spontaneous isolated dissection of abdominal aorta. Aortic dissection occurs when the layers of the aorta separate as a result of extra luminal cavity of blood through an intimal tear. Dissection limited to the abdominal aorta is rare. Unfortunately, the appropriate management of dissecting aneurysm of abdominal aorta is not documented yet. A 43 years old man was admitted to Al-zahra hospital in Isfahan with sudden onset of periumbilical abdominal pain. CT scan confirmed infrarenal dissection of abdominal aorta. Performing laparotomy, aorta was repaired using bifurcate collagen-coated Dacron graft. Surgical intervention with synthetic graft is recommended in patients with dissecting aortic aneurysm of infrarenal segments where the extent of dissection is limited and accessible. abstract_id: PUBMED:11769157 Spontaneous dissection of the abdominal aorta. Spontaneous abdominal aortic dissection is a rare entity, often with a clinically unspecific presentation. The cause of the dissection is unclear. Angiography used to be the definitive diagnostic study, but today a correct diagnosis can be achieved with CT scanning and magnetic resonance angiography. The optimal form of management for the individual patient is not clearly established. Chronic dissections may best be managed conservatively, with close follow-up achieved with CT scanning and magnetic resonance. Acute and complicated dissections should be treated surgically with aortic and aortic branch replacement if it can be offered with low morbidity and mortality. In selected cases, resection of the ischemic organs may represent an alternative. Answer: Aortic dissection limited to the abdominal aorta is indeed an underrecognized entity. It is a rare clinical condition with non-specific clinical features, making diagnosis challenging. The optimal treatment, as well as the natural history and progression of the disease, remain unclear (PUBMED:26324032). This type of dissection contributes to only 4% of all aortic dissections (PUBMED:15776306), and it is known to be the rarest of all types of aortic dissection, with high morbidity and mortality rates (PUBMED:34458033). The condition can present in various ways, including as a cause of abdominal pain in younger individuals without a history of hypertension (PUBMED:15915419), or with symptoms such as mild pain in the lower abdomen and back, ischemic symptoms of the leg, or even as an incidental finding on imaging studies (PUBMED:24611358). Diagnosis can be made using Doppler ultrasound, computed tomography (CT), or magnetic resonance angiography (PUBMED:8368094). Management strategies for isolated abdominal aortic dissection have not been well defined, but they range from conservative treatment with close follow-up to surgical intervention, depending on the acuteness and complications of the dissection (PUBMED:19136531; PUBMED:21772903). Surgical intervention is recommended for patients with dissections in the infrarenal segment where access is comparatively easy (PUBMED:10597006). 
However, the appropriate management of dissecting aneurysms of the abdominal aorta has not yet been clearly established (PUBMED:21772903). In summary, isolated abdominal aortic dissection is underrecognized and under-diagnosed compared to thoracic aortic dissections. Due to its rarity and non-specific presentation, it requires a high index of suspicion for diagnosis. The management is variable and should be tailored to the individual patient's presentation and the extent of the dissection (PUBMED:26324032; PUBMED:11769157).
Instruction: Is It Beneficial to Remove the Pedicle Screw Instrument After Successful Posterior Fusion of Thoracolumbar Burst Fractures? Abstracts: abstract_id: PUBMED:23133723 Comparison of posterior fixation alone and supplementation with posterolateral fusion in thoracolumbar burst fractures. Objective: We compared the radiological and clinical outcomes between patients who underwent posterior fixation alone and supplemented with fusion following the onset of thoracolumbar burst fractures. In addition, we also evaluated the necessity of posterolateral fusion for patients treated with posterior pedicle screw fixation. Methods: From January 2007 to December 2009, 46 consecutive patients with thoracolumbar burst fracture were included in this study. On the basis of posterolateral fusion, we divided our patients into the non-fusion group and the fusion group. The radiological assessment was performed according to the Cobb's method, and results were obtained at immediately, 3, 6, 12 months after surgery. The clinical outcomes were evaluated using the modified Mcnab criteria at the final follow-up. Results: The demographic data and the mean follow-up period were similar between the two groups. Patients of both groups achieved satisfactory clinical outcomes. The mean loss of kyphosis correction showed that patients of both groups experienced loss of correction with no respect to whether they underwent the posterolateral fusion. There was no significant difference in the degree of loss of correction at any time points of the follow-up between the two groups. In addition, we also compared the effect of fixed levels (i.e., short versus long segment) on loss of correction between the two groups and there was no significant difference. There were no major complications postoperatively and during follow-up period. Conclusion: We suggest that posterolateral fusion may be unnecessary for patients with thoracolumbar burst fractures who underwent posterior pedicle screw fixation. abstract_id: PUBMED:26091158 Is It Beneficial to Remove the Pedicle Screw Instrument After Successful Posterior Fusion of Thoracolumbar Burst Fractures? Study Design: Case-control study. Objective: To investigate the clinical and radiological outcomes of pedicle screw removal after successful fusion of thoracolumbar burst fractures. Summary Of Background Data: Implant removal is a common procedure in orthopedic surgery, although the benefits of and indications for removal remain controversial. Previous studies on pedicle screw removal have reported conflicting outcomes, because the study subjects and surgical goals were heterogeneous in nature. Methods: We studied 45 consecutive patients who underwent implant removal and a control cohort of 45 age- and sex-matched patients who retained their spinal implants after successful posterior fusion of thoracolumbar burst fractures using pedicle screw instrument. In most cases, long-segment instrumentation with short-segment posterior fusion was performed. The mean elapsed period prior to implant removal after index fracture surgery was 18.3 ± 17.6 months. A visual analogue scale for back pain was applied, the Oswestry disability index calculated, and radiological parameters derived at the time of implant removal and 1 and 2 years postoperatively obtained. These data were compared with those of the control group evaluated at the same times after index fracture surgery. 
Results: Patient demographics, mechanisms of injury, fracture morphologies, and the outcomes of index fracture surgery were similar between the 2 groups. The mean visual analogue scale and Oswestry disability index scores were better at both the 1- and 2-year follow-ups in the implant removal group than in the control group (all P values = 0.000). The segmental motion angle of the implant removal group was 1.6° ± 1.5° at the time of implant removal, and increased significantly to 5.8° ± 3.9° at 1-year follow-up (P = 0.000), and was maintained at this level at the 2-year follow-up (5.9° ± 4.1°) (P = 0.000). Conclusion: In patients treated successfully for thoracolumbar burst fractures, pedicle screw removal is beneficial because it alleviates pain and disability. Restoration of the segmental motion angle after implant removal may contribute to the clinical improvement. Level Of Evidence: 3. abstract_id: PUBMED:35159967 Efficacy and Radiographic Analysis of Minimally Invasive Posterior Mono-Axial Pedicle Screw Fixation in Treating Thoracolumbar Burst Fractures. Background: The purpose of this study was to evaluate the effectiveness of minimally invasive posterior mono-axial pedicle screws fixation in the treatment of thoracolumbar burst fractures. Methods: In the present study, we analyzed 98 patients retrospectively who had thoracolumbar burst fractures without a neurological deficit. Patients were divided into two groups: mono-axial pedicle screw fixation group (n = 52) and poly-axial pedicle screw fixation group (n = 46). We collected clinical data (visual analog scale (VAS) score for back pain) and included radiographic measurements. Results: Sagittal index was significantly improved at postop and last follow-up in the mono group and the poly group. The mono group was better for reducing and maintaining anterior vertebral height. For the mono group, the mean postoperative regional kyphosis correction rate was 62.31%, and correction loss was 14.18% in late follow-up. For the poly group, the mean postoperative regional kyphosis correction rate was 52.17%, and correction loss was 33.42% in late follow-up. The mono-axial pedicle screw group had a good correction rate and reduced the risks of correction loss. The mean VAS scores for back pain improved by 2.4/2.5 and 3.8/4.2 for the mono and poly groups, respectively. There was no significant difference between groups. Conclusions: The mono-axial pedicle screw fixation was better for reducing and maintaining anterior vertebral height and regional kyphosis. Therefore, the mono-axial pedicle screw is a better optional instrumentation to treat thoracolumbar vertebral fractures. abstract_id: PUBMED:35086503 Vacuum phenomenon as a predictor of kyphosis after implant removal following posterior pedicle screw fixation without fusion for thoracolumbar burst fracture: a single-center retrospective study. Background: Posterior pedicle screw fixation without fusion has been commonly applied for thoracolumbar burst fracture. Implant removal is performed secondarily after bone union. However, the occurrence of secondary kyphosis has recently attracted attention. Secondary kyphosis results in poor clinical outcomes. The purpose of this was to determine predictors of kyphosis after implant removal following posterior pedicle screw fixation without fusion for thoracolumbar burst fracture. 
Methods: This retrospective study reviewed 59 consecutive patients with thoracolumbar burst fracture who underwent implant removal following posterior pedicle screw fixation without fusion. Inclusion criteria were non-osteoporotic fracture and T11-L3 burst fracture. Old age, sex, initial severe wedge deformity, initial severe kyphosis, and vacuum phenomenon were examined as factors potentially associated with final kyphotic deformity (defined as kyphotic angle greater than 25°) or loss of correction. Logistic regression analysis was performed using propensity score matching. Results: Among the 31 female and 28 male patients (mean age 38 years), final kyphotic deformity was found in 17 cases (29%). Multivariate analysis showed a significant association with the vacuum phenomenon. Loss of correction was found in 35 cases (59%) and showed a significant association with the vacuum phenomenon. There were no significant associations with other factors. Conclusions: The findings of this study suggest that the vacuum phenomenon before implant removal may be a predictor of secondary kyphosis of greater than 25° after implant removal following posterior pedicle screw fixation without fusion for thoracolumbar burst fracture, but that old age, sex, initial severe kyphosis, and initial severe wedge deformity may not be predictors. abstract_id: PUBMED:36983216 The Necessity of Implant Removal after Fixation of Thoracolumbar Burst Fractures-A Systematic Review. Background: Thoracolumbar burst fractures are a common traumatic vertebral fracture in the spine, and pedicle screw fixation has been widely performed as a safe and effective procedure. However, after the stabilization of the thoracolumbar burst fractures, whether or not to remove the pedicle screw implant remains controversial. This review aimed to assess the benefits and risks of pedicle screw instrument removal after fixation of thoracolumbar burst fractures. Methods: Data sources, including PubMed, EMBASE, Cochrane Library, Web of Science, Google Scholar, and Clinical trials.gov, were comprehensively searched. All types of human studies that reported the benefits and risks of implant removal after thoracolumbar burst fractures, were selected for inclusion. Clinical outcomes after implant removal were collected for further evaluation. Results: A total of 4051 papers were retrieved, of which 35 studies were eligible for inclusion in the review, including four case reports, four case series, and 27 observational studies. The possible risks of pedicle screw removal after fixation of thoracolumbar burst fractures include the progression of the kyphotic deformity and surgical complications (e.g., surgical site infection, neurovascular injury, worsening pain, revision surgery), while the potential benefits of pedicle screw removal mainly include improved segmental range of motion and alleviated pain and disability. Therefore, the potential benefits and possible risks should be weighed to support patient-specific clinical decision-making about the removal of pedicle screws after the successful fusion of thoracolumbar burst fractures. Conclusions: There was conflicting evidence regarding the benefits and harms of implant removal after successful fixation of thoracolumbar burst fractures, and the current literature does not support the general recommendation for removal of the pedicle screw instruments, which may expose the patients to unnecessary complications and costs. 
Both surgeons and patients should be aware of the indications and have appropriate expectations of the benefits and risks of implant removal. The decision to remove the implant or not should be made individually and cautiously by the surgeon in consultation with the patient. Further studies are warranted to clarify this issue. Level Of Evidence: level 1. abstract_id: PUBMED:27879575 Cost-Utility Analysis of Pedicle Screw Removal After Successful Posterior Instrumented Fusion in Thoracolumbar Burst Fractures. Study Design: A cost-utility analysis (CUA). Objective: The aim of this study was to determine the cost-effectiveness of pedicle screw removal after posterior fusion in thoracolumbar burst fractures. Summary Of Background Data: Pedicle screw instrumentation is a standard fixation method for unstable thoracolumbar burst fracture. However, removal of the pedicle screw after successful fusion remains controversial because the clinical benefits remain unclear. CUA can help clinicians make appropriate decisions about optimal health care for pedicle screw removal after successful fusion in thoracolumbar burst fractures. Methods: We conducted a single-center, retrospective, longitudinal matched-cohort study of prospectively collected outcomes. In total, 88 consecutive patients who had undergone pedicle screw instrumentation for thoracolumbar burst fracture with successful fusion confirmed by computed tomography (CT) were used in this study. In total, 45 patients wanted to undergo implant removal surgery (R group), and 43 decided not to remove the implant (NR group). A CUA was conducted from the health care perspective. The direct costs of health care were obtained from the medical bill of each patient. Changes in health-related quality of life (HRQoL) scores, validated by Short Form 6D, were used to calculate quality-adjusted life-years (QALYs). Total costs and gained QALY were calculated at 1 year (1 year) and 2 years (2 years) compared with baseline. Results are expressed as an incremental cost-effectiveness ratio (ICER). Different discount rates (0%, 3%, and 5%) were applied to both cost and QALY for sensitivity analysis. Results: Baseline patient variables were similar between the two groups (all P &gt; 0.05). The additional benefits of implant removal (0.201 QALY at 2 years) were achieved with additional costs ($2541 at 2 years), equating to an ICER of $12,641/QALY. On the basis of the different discount rates, the robustness of our study's results was also determined. Conclusion: Implant removal after successful fusion in a thoracolumbar burst fracture is cost-effective until postoperative year 2. Level Of Evidence: 3. abstract_id: PUBMED:29750137 Posterior-only Stabilization for Traumatic Thoracolumbar Burst Fractures. Background No consensus exists for the management of unstable thoracolumbar (TL) burst fractures. Surgical options include anterior, lateral, or posterior stabilization (or a combination), depending on the fracture. The potential benefits of anterior reconstruction come with increased operative time and associated morbidity. A posterior-only approach can offer stable correction without increased operative risks but may result in loss of kyphotic correction over time. Purpose To determine whether posterior-only stabilization is a viable treatment option for patients with traumatic TL fractures as opposed to anterior and combined approaches. 
Methods We performed a retrospective analysis of adult patients with TL burst fractures who underwent posterior-only surgical intervention from 2005 to 2015. Operations were performed at two levels above and below the fractured segment using pedicle screw-rod fixation constructs with autograft and allograft. All patients received TL bracing for at least three months. Patients lost to follow-up were excluded. Results Sixty-four consecutive patients with posterior-only stabilization were identified, with 18 lost to follow-up. Of the remaining 46 patients, 93% (n=43) were male and 7% (n=3) were female, with a mean age of 36.8 years. All patients were followed for 12 months. The mean time until the removal of the brace was 3.54 months. No patients required additional surgical intervention for spinal stabilization. Three patients experienced postoperative complications, all of which were related to infection. Conclusions Our data indicate that posterior-only stabilization for traumatic TL burst fractures is a durable and effective option in select patients. The approach offers surgical intervention with a decreased perioperative risk as well as reduced morbidity and mortality, with a minimal increase in the risk of kyphotic deformity. Further prospective studies are necessary to validate these findings clinically. abstract_id: PUBMED:25664090 Short segment screw fixation without fusion in treatment for unstable thoracolumbar burst fracture. This study aims to evaluate the clinical efficacy of short segment pedicle screw fixation without bone fusion for unstable thoracolumbar burst fracture. Nineteen patients younger than 40 years old with unstable thoracolumbar burst fractures were included. The surgical procedure included postural reduction for 3 days and screw fixation at one level above, one level below and at the fractured level itself. The implants were removed 12 months after the initial operation. Imaging and clinical findings were analyzed preoperatively, at 12 months after surgery (just before implant removal), and at six months after implant removal.
The purposes of this study were to evaluate the radiological complications of short-segment pedicle screw fixation in combination with transforaminal interbody fusion in the treatment of unstable thoracolumbar burst fractures. Methods: This retrospective case series study enrolled patients with isolated unstable thoracolumbar burst fractures, who were treated by posterior short fixation with TIF between January 2013 and January 2017. Patients were followed up for a minimum of one and half years. For evaluation of correction loss, % loss of anterior vertebral body height (%AVB), vertebral kyphotic angle (VA) and regional kyphotic angle (RA) were collected preoperatively, postoperatively and at the final follow-up. Hardware failure was assessed on radiological images at the last follow-up. Results: There were 36 patients who met the inclusion criteria with a mean follow-up duration of 53 months. The mean correction loss of %AVB, VA and RA were 10.2%, 2.9° and 5.6°, respectively. There were 6 patients (16.7%) with hardware failure at the final follow-up. Conclusion: Short-segment posterior pedicle screw fixation with TIF using bone chip grafts does not completely prevent hardware failure and progressive kyphosis in the treatment of unstable thoracolumbar burst fractures. abstract_id: PUBMED:28243383 Short Segment versus Long Segment Pedicle Screws Fixation in Management of Thoracolumbar Burst Fractures: Meta-Analysis. Posterior pedicle screw fixation has become a popular method for treating thoracolumbar burst fractures. However, it remains unclear whether additional fixation of more segments could improve clinical and radiological outcomes. This meta-analysis was performed to evaluate the effectiveness of fixation levels with pedicle screw fixation for thoracolumbar burst fractures. MEDLINE, EMBASE, the Cochrane Central Register of Controlled Trials, Springer, and Google Scholar were searched for relevant randomized and quasirandomized controlled trials that compared the clinical and radiological efficacy of short versus long segment for thoracolumbar burst fractures managed by posterior pedicle screw fixation. Risk of bias in included studies was assessed using the Cochrane Risk of Bias tool. Based on predefined inclusion criteria, Nine eligible trials with a total of 365 patients were included in this meta-analysis. Results were expressed as risk difference for dichotomous outcomes and standard mean difference for continuous outcomes with 95% confidence interval. Baseline characteristics were similar between the short and long segment fixation groups. No significant difference was identified between the two groups regarding radiological outcome, functional outcome, neurologic improvement, and implant failure rate. The results of this meta-analysis suggested that extension of fixation was not necessary when thoracolumbar burst fracture was treated by posterior pedicle screw fixation. More randomized controlled trials with high quality are still needed in the future. Answer: The question of whether it is beneficial to remove the pedicle screw instrument after successful posterior fusion of thoracolumbar burst fractures has been explored in several studies, with varying conclusions. One study found that pedicle screw removal after successful fusion of thoracolumbar burst fractures is beneficial because it alleviates pain and disability, and the restoration of the segmental motion angle after implant removal may contribute to the clinical improvement (PUBMED:26091158). 
Another study suggested that implant removal is cost-effective up to two years postoperatively, as it provides additional benefits in terms of quality-adjusted life-years (QALYs) with additional costs, resulting in an incremental cost-effectiveness ratio (ICER) of $12,641/QALY (PUBMED:27879575). However, there is conflicting evidence regarding the benefits and harms of implant removal. A systematic review highlighted that while there are potential benefits such as improved segmental range of motion and alleviated pain and disability, there are also possible risks including the progression of kyphotic deformity and surgical complications. The review concluded that the current literature does not support a general recommendation for the removal of pedicle screw instruments, which may expose patients to unnecessary complications and costs. The decision to remove the implant should be made individually and cautiously by the surgeon in consultation with the patient (PUBMED:36983216). Additionally, a study on posterior-only stabilization for traumatic thoracolumbar burst fractures indicated that posterior-only stabilization is a durable and effective option, offering surgical intervention with decreased perioperative risk and reduced morbidity and mortality, with a minimal increase in the risk of kyphotic deformity (PUBMED:29750137). In summary, while there may be benefits to removing the pedicle screw instrument after successful posterior fusion of thoracolumbar burst fractures, such as pain relief and improved mobility, the decision should be made on a case-by-case basis, considering the potential risks and the individual patient's circumstances. Further studies are warranted to clarify the issue and provide more definitive guidance.
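The cost-effectiveness argument above hinges on a single quantity, the incremental cost-effectiveness ratio (ICER). As a minimal illustration (not taken from the cited study, whose underlying cost and QALY inputs are not reported here), the sketch below shows how an ICER of roughly $12,641/QALY can arise from hypothetical incremental costs and QALY gains; the variable names and numbers are assumptions for illustration only.

# Hypothetical illustration of an incremental cost-effectiveness ratio (ICER).
# The cost and QALY figures below are invented for the example; only the
# resulting ratio is chosen to echo the $12,641/QALY reported in PUBMED:27879575.

def icer(delta_cost, delta_qaly):
    """ICER = incremental cost divided by incremental QALYs gained."""
    return delta_cost / delta_qaly

delta_cost = 1890.0   # assumed additional cost of implant removal (hypothetical)
delta_qaly = 0.1495   # assumed additional quality-adjusted life-years (hypothetical)

print(f"ICER = ${icer(delta_cost, delta_qaly):,.0f} per QALY gained")
# Prints roughly $12,642 per QALY gained, close to the published figure by construction.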
Instruction: Does endolymphatic sac decompression surgery prevent bilateral development of unilateral Ménière disease? Abstracts: abstract_id: PUBMED:24464868 Does endolymphatic sac decompression surgery prevent bilateral development of unilateral Ménière disease? Objectives/hypothesis: The aim of the study was to elucidate whether endolymphatic sac decompression surgery (ESDS) has the potential to prevent unilateral Ménière disease (MD) from becoming bilateral. Study Design: Prospective case-control study at tertiary referral center. Methods: Between 1996 and 2008, we performed a glycerol test (G-test) and electrocochleography (ECoG) on 237 patients with intractable unilateral MD. We performed ESDS on 179 patients (144 with no endolymphatic hydrops and 35 with silent endolymphatic hydrops in the contralateral ear). The other 58 patients (40 without endolymphatic hydrops and 18 with silent endolymphatic hydrops in the contralateral ear) were given available medical treatments. All underwent regular follow-up for at least 5 years. Results: Altogether, 22.4% (53 of 237) of patients with clinically diagnosed unilateral intractable MD had silent endolymphatic hydrops in the contralateral ear using G-test and ECoG. In the nonsurgical group, six of 40 patients with unilateral MD with no endolymphatic hydrops in the contralateral ear developed bilateral disease, whereas in the surgical group 12 of 144 patients did so (P = .231, Fisher test). In the nonsurgical group, nine of 18 patients with unilateral MD and silent endolymphatic hydrops developed the disease in the contralateral ear, whereas in the surgical group six of 35 patients developed bilateral disease (P = .022, Fisher test). Conclusions: The present findings suggest that ESDS may decrease the incidence of developing MD in silent endolymphatic hydropic contralateral ears diagnosed with G-test and ECoG within the first 5 postoperative years. abstract_id: PUBMED:31163517 The prognostic value of glycerol test of electrocochleography on the effects of endolymphatic sac decompression surgery in patients with unilateral Meniere's disease Objective: To determine whether the preoperative results of the glycerol test of electrocochleography and the hearing stage have prognostic value for the effects of endolymphatic sac decompression surgery in patients with unilateral Meniere's disease. Method: A retrospective study was conducted of 58 unilateral Meniere's disease patients who underwent endolymphatic sac decompression surgery. The correlation between the preoperative results of the glycerol test of electrocochleography, hearing stage and vertigo control was analyzed by the Kendall test. Result: Patients in the positive glycerol test of electrocochleography group had better vertigo control after endolymphatic sac decompression than the negative group (P=0.029). Preoperative hearing staging was significantly associated with vertigo control after endolymphatic sac decompression (P=0.028). The better the preoperative hearing, the higher the control rate of vertigo after endolymphatic sac decompression: the vertigo control rates were 91.7% for stage Ⅰ patients, 78.1% for stage Ⅱ patients, and 57.1% for stage Ⅲ patients, respectively. Conclusion: Our findings suggested that positive preoperative results of the glycerol test of electrocochleography and a better hearing stage may indicate a better effect of endolymphatic sac decompression surgery in patients with unilateral Meniere's disease.
abstract_id: PUBMED:34930474 A comparison of endolymphatic duct blockage, endolymphatic sac drainage and endolymphatic sac decompression surgery in reversing endolymphatic hydrops in Meniere's disease. Background: To explore the differences between endolymphatic duct blockage, endolymphatic sac drainage and endolymphatic sac decompression surgery in the reversal of endolymphatic hydrops (EH) in patients with intractable Meniere's disease (MD). Methods: A total of 27 MD patients receiving endolymphatic duct blockage surgery (n = 10), endolymphatic sac drainage surgery (n = 9) and endolymphatic sac decompression surgery (n = 8) underwent gadolinium-enhanced inner ear magnetic resonance imaging (MRI) scans prior to, 2 weeks after and at > 12 months following surgery. Results: In the group with endolymphatic duct blockage, the second MRI revealed no changes in EH, whereas the third MRI revealed a reversal of vestibular EH in 3 patients and a downgrading of cochlear hydrops in 2 of these 3 patients, who presented with an improvement in their hearing and complete control of vertigo. In the group with endolymphatic sac drainage, the second MRI showed a reversal of EH in 4 patients and no changes in EH in the remaining 5 patients, whereas the third MRI showed that those 4 patients who presented with a reversal of EH at the second MRI stage remained unchanged except for a recurrence of vestibular hydrops in 1 patient. All 4 patients exhibited complete control of vertigo, but hearing improved in 1, worsened in 1 and remained unchanged in 2. In the group with endolymphatic sac decompression, both the second and third MRI examinations revealed no reversal of EH. Conclusions: The present study has shown that both endolymphatic duct blockage surgery and endolymphatic sac drainage surgery have the potential to reduce EH in certain MD patients, but none of the patients receiving endolymphatic sac decompression surgery showed reversal of their EH. abstract_id: PUBMED:20359096 Long-term results of endolymphatic sac decompression and endolymphatic-mastoid shunting for Meniere's disease Objective: To investigate the efficacy of endolymphatic sac surgery for Meniere's disease, and compare the effects of endolymphatic sac decompression with endolymphatic-mastoid shunting. Method: Twelve patients (13 ears) undergoing endolymphatic-mastoid shunting and eleven patients (11 ears) undergoing endolymphatic sac decompression were retrospectively compared for hearing results and vertigo control rates. All of them had been followed up for more than two years after surgery. Result: According to the Chinese Meniere's disease diagnosis and curative effect standard evaluation criteria published in 2006, for the vertigo symptoms of the endolymphatic-mastoid shunting group, 9 cases (69.2%) achieved grade A (completely controlled) and 4 cases (30.8%) achieved grade B (fundamentally controlled). There were 8 cases (72.7%) with grade A, 2 cases (18.2%) with grade B and one case (9%) with grade C among the 11 patients who received endolymphatic sac decompression. There were no statistically significant differences in postoperative speech pure tone average or vertigo control rate between the two groups. Conclusion: Endolymphatic sac decompression and endolymphatic-mastoid shunting are effective treatments with few complications for intractable Meniere's disease. In particular, vertigo symptoms were significantly controlled. Patients with Meniere's disease in advanced clinical stages may also be relieved.
abstract_id: PUBMED:28382128 Endolymphatic Sac Surgery for Ménière's Disease - Current Opinion and Literature Review. Introduction The endolymphatic sac is thought to maintain the hydrostatic pressure and endolymph homeostasis for the inner ear, and its dysfunction may contribute to the pathophysiology of Ménière's disease. Throughout the years, different surgical procedures for intractable vertigo secondary to Ménière's disease have been described, and though many authors consider these procedures effective, there are some who question their long-term efficacy and even those who think that vertigo control is achieved more due to a placebo effect than because of the procedure itself. Objective To review the different surgical procedures performed in the endolymphatic sac for the treatment of Ménière's disease. Data Sources PubMed, MD consult and Ovid-SP databases. Data Synthesis We focus on describing the different surgical procedures performed in the endolymphatic sac, such as endolymphatic sac decompression, endolymphatic sac enhancement, endolymphatic sac shunting and endolymphatic duct blockage, their pitfalls and advantages, their results in vertigo control and the complication rates. The senior author also describes his experience after 30 years of performing endolymphatic sac surgery. Conclusions The endolymphatic sac surgery, with all its variants, is a good option for patients with incapacitating endolymphatic hydrops, providing a high percentage of vertigo control and hearing preservation. abstract_id: PUBMED:29871202 The surgical treatment discussion in intractable Meniere's disease: lateral and posterior semicircular canal plugging with endolymphatic sac decompression Objective: To introduce and discuss a surgical technique, i.e. lateral and posterior semicircular canal plugging with endolymphatic sac decompression, to treat intractable Meniere's disease. Method: Thirty-three cases of intractable Meniere's disease were enrolled. All operations were performed under general anesthesia. The endolymphatic sac was decompressed, and the bone of the lateral and posterior canals was drilled to create a fenestra, followed by soft-tissue plugging of the canal. Result: No case had facial palsy, total deafness, or vertigo immediately after surgery. During the follow-up period, the attacks of Meniere's disease were completely controlled, and hearing loss occurred in 6 cases one week after the operation. One case had sudden hearing loss with vertigo, resulting in deafness. Conclusion: The design of this surgical procedure was based on relieving endolymphatic pressure and controlling nerve impulse transmission. The surgical technique of lateral and posterior semicircular canal plugging with endolymphatic sac decompression was reliable. abstract_id: PUBMED:35222247 A Comparison of Local Endolymphatic Sac Decompression, Endolymphatic Mastoid Shunt, and Wide Endolymphatic Sac Decompression in the Treatment of Intractable Meniere's Disease: A Short-Term Follow-Up Investigation. Background: Meniere's disease (MD) is an inner ear disorder characterized by recurrent attacks of vertigo, low-frequency sensorineural hearing loss, tinnitus, and aural fullness. Endolymphatic sac surgery is an effective treatment for intractable MD that controls vertigo attacks without causing hearing loss. However, the methods and effects of endolymphatic sac surgery have been controversial for many years, and the relationship between the vertigo control rates of different endolymphatic sac surgery methods is not well-documented.
Objectives: This study compared the vertigo control rate, hearing outcome, and quality of life (QOL) among different endolymphatic sac surgical procedures, namely local endolymphatic sac decompression (LESD), endolymphatic sac mastoid shunt (ESMS), and wide endolymphatic sac decompression (WESD). Materials And Methods: We retrospectively analyzed the patients who underwent endolymphatic sac surgery from January 2008 to June 2019. The control rate of vertigo and QOL scores were compared after 2 years of follow-up. QOL was scored with the validated MD patient-oriented symptom-severity index (MDPOSI). The pure tone thresholds of all patients before and after the operation were also compared. Results: In total, 83 MD patients with complete follow-up data were included in the study, i.e., 20 patients with LESD, 28 patients with ESMS, and 35 patients with WESD. Results showed better vertigo control with WESD than in the other groups (70% with LESD, 71.4% with ESMS, and 88.6% with WESD). QOL improved significantly after surgery in all groups (QOL, preoperative vs. postoperative, 38.2 vs. 10.1 with LESD, 37.8 vs. 9.6 with ESMS, and 37.6 vs. 8.3 with WESD, respectively). After endolymphatic sac surgery, hearing was well preserved in the three groups [pure tone averages (PTAs), dB, preoperative vs. postoperative, 41.0 ± 19.3 vs. 40.8 ± 17.9 with LESD, 39.7 ± 16.4 vs. 40.8 ± 18.2 with ESMS, and 38.5 ± 18.7 vs. 36.6 ± 19.5 with WESD]. Conclusion: Wide endolymphatic sac decompression has a higher vertigo control rate, better improvement of QOL, and relatively higher hearing stability or improvement rate after surgery in patients with MD compared with LESD and ESMS. abstract_id: PUBMED:16094123 Endolymphatic sac decompression as a treatment for Meniere's disease. Objectives/hypothesis: Endolymphatic sac decompression is a surgical treatment option for patients with medically intractable Meniere's disease. However, effectiveness is debated because published data show great variability. Outcome-based research studies are useful in incorporating the patient's perspective on the success of treatment. To further assess the effectiveness of endolymphatic sac decompression, we performed a prospective study to examine both symptom-specific and general health outcomes. Study Design: Prospective, observational outcome study. Methods: Nineteen patients with endolymphatic sac decompression responded to symptom-specific questionnaires and the Medical Outcomes Short-Form 36 Health Survey (SF-36) before and after surgery. Follow-up ranged from 6 to 58 months with a mean duration of 50 months. Results: Overall measures of physical health were significantly improved following endolymphatic sac decompression (P = .04), whereas overall measures of mental health were unchanged (P = .74). Role Physical and Social Functioning scores were significantly improved following endolymphatic sac decompression (P = .04 and P = .03, respectively). Study patients scored significantly lower (P < .05) than SF-36 normative data in 6 of 10 categories before endolymphatic sac decompression, but patient scores were not significantly different from normal scores in all but one category (General Health) following endolymphatic sac decompression. The mean number of vertigo episodes was significantly reduced from an average of 8.3 times per month to an average of 2.6 times per month following endolymphatic sac decompression (P = .006).
Ninety-five percent of patients (18 of 19 patients) reported improvement in symptoms (frequency, duration, or intensity) of vertigo and 37% (7 of 19 patients) reported complete resolution of vertigo. Conclusion: Endolymphatic sac decompression significantly improved perception of physical health, as well as symptom-specific outcomes, in patients with medically intractable Meniere's disease. abstract_id: PUBMED:12707660 Endolymphatic sac-vein decompression for intractable Meniere's disease: long term treatment results. Objectives: We sought to determine the long-term efficacy of endolymphatic sac-vein decompression surgery on patients with classic Meniere's disease. Study Design And Setting: Using the 1995 American Academy of Otolaryngology-Head and Neck Surgery Committee on Hearing and Equilibrium criteria, starting stage, functional level, vertigo class, and hearing results were addressed. We studied 68 patients with classic Meniere's disease from a tertiary, private otology-neurotology practice. Patient data were gathered by retrospective chart review, questionnaire, and patient interview. All patients underwent endolymphatic sac-vein decompression with an average follow-up period of 55 months. Results: Median functional level before surgery was level 4, improving to level 2 after surgery. Eighty-one percent of patients showed improvement in functional level, 12% remained stable, and 7% declined. Long-term vertigo control was 47% in class A, 25% in class B, 9% in class C, 3% in class D, and 16% in class F. Twenty percent of patients were in hearing stage I Meniere's disease; 31%, stage II; 44%, stage III; and 5%, stage IV. Eighteen percent of patients showed improvement in hearing class, 64% were stable, and 18% declined. Conclusion: Endolymphatic sac-vein decompression surgery is a safe, nondestructive surgical option for Meniere's disease that offers durable control of vertigo and stabilization of hearing for the majority of symptomatic patients. Significance: The beneficial long-term outcome of the endolymphatic sac-vein decompression supports its continued use as a first-line treatment option in intractable Meniere's disease. abstract_id: PUBMED:36634488 Endolymphatic sac decompression effect on secondary symptoms of Meniere's disease. Objective: To evaluate if endolymphatic sac decompression (ESD) significantly improves secondary symptoms of Meniere's disease including tinnitus and aural fullness. Study Design: Survey study with retrospective chart review. Setting: Tertiary care center. Methods: Survey of adult patients with Meniere's disease that underwent primary ESD surgery from 2015 to 2020. Subjective reporting of pre- and postoperative aural fullness and tinnitus based on postoperative survey. Survey results and audiologic data of the patients that reported were compared pre- and postoperatively. Results: Statistical analysis was performed using weighted kappa statistics to examine the level of agreement. There was a value of 0.12 for pre- and postoperative aural fullness, indicating a difference in the two groups with 77 % having improvement and only 4 % having worsening. There was a value of 0.21 for pre- and postoperative tinnitus, demonstrating a lack of agreement with 58 % having improvement and 4 % having worsening. Overall, there was significant improvement in both tinnitus and aural fullness postoperatively. 
There was no significant difference in word recognition score, speech reception threshold, or pure tone average between the pre- and postoperative groups based on a paired t-test. Conclusions: There is a significant improvement in both aural fullness and tinnitus for patients undergoing ESD with no negative effect on audiologic status. ESD is a viable option for treatment of Meniere's disease with vertigo, aural fullness, and tinnitus relief. Future prospective studies are needed to further improve the evidence of ESD's effect on secondary symptoms of Meniere's disease. Answer: Endolymphatic sac decompression surgery (ESDS) has been studied to determine its potential in preventing the bilateral development of unilateral Ménière disease (MD). According to a prospective case-control study, ESDS may decrease the incidence of developing MD in silent endolymphatic hydropic contralateral ears diagnosed with a glycerol test (G-test) and electrocochleography (ECoG) within the first 5 postoperative years (PUBMED:24464868). This suggests that ESDS could have a role in preventing the progression of unilateral MD to bilateral MD, particularly in patients with silent endolymphatic hydrops in the contralateral ear. However, it is important to note that the effectiveness of ESDS in reversing endolymphatic hydrops (EH) in patients with intractable MD has been questioned. A study comparing different surgical approaches found that none of the patients receiving ESDS showed a reversal of their EH, which contrasts with the outcomes of endolymphatic duct blockage surgery and endolymphatic sac drainage surgery (PUBMED:34930474). This indicates that while ESDS may help prevent bilateral development in some cases, it may not be effective in reversing EH. Moreover, the long-term efficacy of ESDS has been supported by other studies, which have shown that it offers durable control of vertigo and stabilization of hearing for the majority of symptomatic patients with intractable MD (PUBMED:12707660). Additionally, ESDS has been associated with significant improvement in secondary symptoms of MD, such as tinnitus and aural fullness, without negatively affecting audiologic status (PUBMED:36634488). In summary, while there is evidence to suggest that ESDS may help prevent the bilateral development of unilateral MD, particularly in patients with silent endolymphatic hydrops, its effectiveness in reversing EH is not supported. The surgery has been shown to improve vertigo control and stabilize hearing in the long term, as well as improve secondary symptoms of MD.
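The key comparison behind the prevention claim in PUBMED:24464868 is a simple 2×2 contingency test: 9 of 18 non-surgical versus 6 of 35 surgical patients with silent contralateral hydrops went on to develop bilateral disease (reported P = .022, Fisher test). The sketch below shows how such a comparison is run; it uses scipy and the counts quoted in the abstract, and is an illustration of the test rather than a re-analysis of the study.

# Fisher's exact test on the contralateral-hydrops subgroup from PUBMED:24464868.
# Columns: non-surgical group vs ESDS group; rows: became bilateral vs stayed unilateral.
from scipy.stats import fisher_exact

table = [
    [9, 6],    # became bilateral: 9 of 18 non-surgical, 6 of 35 surgical
    [9, 29],   # stayed unilateral: remaining patients in each group
]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, two-sided p = {p_value:.3f}")
# The abstract reports P = .022 for this comparison.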
Instruction: Does regular zoledronic acid change the bone turnover of the jaw in men with metastatic prostate cancer: a possible clue to the pathogenesis of bisphosphonate related osteonecrosis of the jaw? Abstracts: abstract_id: PUBMED:24469255 Does regular zoledronic acid change the bone turnover of the jaw in men with metastatic prostate cancer: a possible clue to the pathogenesis of bisphosphonate related osteonecrosis of the jaw? Purpose: To find out whether the most popular pathogenesis hypothesis of the bisphosphonate (BP) related osteonecrosis of the jaw (BRONJ) is comprehensible: (1) is there a higher bone remodeling in the jaw compared with other skeletal sites? (2) Is the bone turnover (BT) of the jaw overly altered after BP intake? (3) Are there gender- or entity-specific differences in BT before and after BP intake? Methods: Bone scintigraphies of 42 patients with prostate cancer were retrospectively analyzed (n = 21 with BP intake; n = 21 no BP). All patients received bone scintigraphy prior to the therapy and in the course of the treatment (after 12 and 24 months). Data were quantitatively analyzed using six predetermined regions of interest and compared with a breast cancer cohort. Results: The mandible revealed a similar BT as the femur and a significant lower BT compared with the maxilla. All investigated bone regions showed no significant changes under BP administration. Inter-gender differences revealed significantly lower BT values for the prostate cancer compared with the female breast cancer cohort, changes over the course of time could not be found. Conclusions: The finding that the mandible revealed a significant lower BT than the maxilla and the fact that 2/3 of the BRONJ cases occur in the mandible are inconsistent with the investigated hypothesis. Furthermore, the BT in the jawbone is not overly suppressed by BP. Thus, it seems implausible that a high BT and its over-suppression play the key role in the pathomechanism of BRONJ. abstract_id: PUBMED:27501591 Osteonecrosis of the Jaw in Patients Receiving Bone-Targeted Therapies: An Overview--Part I. Urologic patients receiving bone-targeted therapies are at risk of developing osteonecrosis of the jaw (ONJ). ONJ has historically been associated with bisphosphonate therapy. More recently, RANK-Ligand inhibitors (denosumab) have also been used to reduce the risk of skeletal-related events in patients who have advanced cancers with bone metastases. More than 65% of men with metastatic prostate cancer and nearly 75% of women with metastatic breast cancer are affected by bone metastases. The literature has described ONJ associated with bisphosphonate therapy as bisphosphonate-related osteonecrosis of the jaw (BRONJ). However, with evidence also linking the use of RANK-Ligand inhibitors with osteonecrosis of the jaw, we advocate use of the term "anti-bone resorption therapy-related osteonecrosis of the jaw" (ABRT-ONJ). The term "medication-related osteonecrosis of the jaw" (MRONJ) is now becoming more widespread. There is not a universally accepted definition of ABRT-ONJ, which may have hindered recognition and reporting of the condition. In Part I of this article, a review of current knowledge around the etiology of ABRT-ONJ and incidence data are provided. In Part II, we provide an audit of ONJ in a nurse consultant-led bone support clinic. In the article, we refer to zoledronic acid because this is the bisphosphonate of choice for use in men with prostate cancer in the United Kingdom. 
abstract_id: PUBMED:31697256 Predictive value of the bone markers osteocalcin and C-terminal telopeptide for jaw osteonecrosis in high-risk prostate cancer patients on bisphosphonate therapy: A prospective study. Objective: To examine the predictive value of osteocalcin (OC) and C-terminal telopeptide (CTX) levels for jaw osteonecrosis in high-risk prostate cancer (PCa) patients taking bisphosphonates (BPs). Methods: Twenty-four patients were prospectively recruited in this study and followed from 2011 to 2015. All patients were diagnosed with metastatic PCa with secondary bone deposits and were on androgen deprivation therapy (ADT). All participants were started on 4mg of zoledronic acid intravenously every 4 weeks for two years. The patients were reviewed every three months with full blood count, blood biochemistry, PSA and measurement of OC and CTX. Patients also underwent dental/oral examination. OC and CTX levels in serum were calculated using the ELISA method. Results: A significant decrease in PSA levels was found (β=-0.06, SE=0.02, p=0.006). The levels of OC (β=-0.46, SE=0.14, p=0.001) and CTX (β=-0.01, SE=0.004, p=0.007) also decreased significantly during the two years of follow up. Osteonecrosis of the jaw was identified in three patients at two years. Patients with osteonecrosis also showed a decrease in OC and CTX levels. The mean OC reduction was 77.3% for patients with osteonecrosis and 12.6% for patients without osteonecrosis. The mean CTX reduction was 44.1% for patients with osteonecrosis and 9.62% for patients without osteonecrosis. Conclusion: Our study demonstrated no clear association between the levels of serum OC and CTX and bisphosphonate-related osteonecrosis of the jaw (BRONJ). To date, there is no clinically useful biomarker for the prediction of jaw osteonecrosis. More studies are needed using different bone turnover markers in order to identify patients at risk for BRONJ. abstract_id: PUBMED:27501592 Retrospective Audit: Does Prior Assessment by Oral and Maxillofacial Surgeons Reduce the Risk of Osteonecrosis of The Jaw in Patients Receiving Bone-Targeted Therapies for Metastatic Cancers to the Skeleton?--Part II. Men who receive bone-targeted therapy for metastatic prostate cancer are at increased risk of osteonecrosis of the jaw (ONJ). Development of ONJ has been associated with the administration of bone-targeted therapies in association with other risk factors. ONJ can be distressing for a patient because it can cause pain, risk of jaw fracture, body image disturbance, difficultly eating, and difficulty maintaining good oral hygiene. The aim of this article is to report results of an audit of prior assessment by oral and maxillofacial surgeons (OMFS) before initiation of bone-targeted therapies and whether it may reduce the risk of ONJ in patients receiving bone-targeted therapies for advanced cancers. abstract_id: PUBMED:24113758 Case of prostate cancer associated with bisphosphonate-related osteonecrosis of the jaw followed by retropharyngeal abscess A 67-year-old man with castration-resistant prostate cancer associated with multiple bone metastases had been treated with zoledronic acid and docetaxel. Although there was no evidence of damage around the right lower jaw bone, the patient complained of pain in May 2011, which worsened during the next 2 weeks and was followed by difficulty with breathing. 
Computed tomographic (CT) findings of the cervical area showed swelling of the cervical tissue with air and tightening of the trachea, suggesting cellulitis caused by gas gangrene. He was intubated and treated with antibiotics. On the 12th hospital day, CT scan revealed a pharyngeal abscess and we performed a drainage operation. On the 20th hospital day, bone of the intraoral lower jaw was exposed, revealing that the infection was caused by osteonecrosis. Additional CT findings showed the abscess extending to the mediastinum and multiple liver metastases. Although antibiotic therapy was continued, the patient died of liver failure on the 61st day. abstract_id: PUBMED:21989437 Bisphosphonate-induced osteonecrosis of the jaw in patients with bone metastatic, hormone-sensitive prostate cancer. Risk factors and prevention strategies. Aims And Background: Evidence from the literature suggests that osteonecrosis of the jaw is emerging as a serious complication of treatment with bisphosphonates for patients with advanced prostate cancer. Methods And Study Design: This study is a series of 60 patients with osseous metastases from prostate cancer under complete androgen deprivation therapy. All patients also received bisphosphonates intravenously every 3 to 4 weeks. Over a period of 3 and a half years, we recorded the incidence, presenting signs and symptoms of osteonecrosis of the jaw among those patients and the diagnostic workup required. Results: Nine of the 60 patients with metastatic prostate cancer were found to be affected with osteonecrosis of the jaw secondary to bisphosphonate administration at the Urology Department at the University Hospital of Alexandroupolis between January 2006 and August 2009. For diagnostic reasons, all 9 patients underwent computed tomography scan and magnetic resonance imaging of the maxillary region, as well as a three-phase whole body bone scan. CONCLUSIONS; There is evidence that administration of bisphosphonates in patients with advanced prostate cancer may increase the risk of osteonecrosis of the jaw. Guidelines regarding the diagnosis and management of those patients are needed. abstract_id: PUBMED:25128260 Auto-fluorescence of the bone and its use for delineation of bone necrosis. Bisphosphonate-related osteonecrosis of the jaw (BRONJ) is a challenging complication of long-term bisphosphonate treatment. Currently, the drawback in the surgical management of BRONJ is the difficulty distinguishing between viable and necrotic bone. Intraoperative bone fluorescence induced by tetracyclines has been shown to be a valuable aid to overcome this problem. In this technical note we report the finding that viable bone is auto-fluorescent using the VELscope Vx fluorescence lamp. Necrotic bone shows an altered fluorescence pattern (pale or no fluorescence). Thus it is suggested that auto-fluorescence of bone might be of similar use during the surgical therapy of BRONJ. abstract_id: PUBMED:19531404 Intravenous bisphosphonate-related osteonecrosis of the jaw: bone scintigraphy as an early indicator. Purpose: Bisphosphonate-related osteonecrosis of the jaw (BRONJ) is recognized as a serious complication among patients receiving bisphosphonate therapy. However, methods for early detection and identification of patients at risk for osteonecrosis of the jaw (ONJ) need further investigation. 
The purpose of this study was to characterize BRONJ among patients receiving intravenous bisphosphonates and to examine bone scintigraphy findings that preceded manifestation of frank ONJ. Materials And Methods: We identified all known cases of BRONJ (defined according to 2006 American Association of Oral and Maxillofacial Surgeons criteria) diagnosed between January 2004 and September 2008 among patients who received intravenous bisphosphonate therapy (IVBP). The medical records were abstracted, and the clinical and radiographic features of BRONJ and relevant comorbidities were characterized. Technetium Tc 99 bone scintigrams were systematically reviewed among the subset of patients who received these imaging studies for oncologic care and imaging findings were correlated with the temporal development of ONJ. Results: We identified 59 cases of intravenous BRONJ (median age, 61.4 +/- 10.7 years; 57.6% female), of whom 44.1% had breast cancer, 33.9% had multiple myeloma, and the remainder had metastatic prostate cancer (15.3%) or other cancers (6.8%). One third (32.2%) of the cohort was diabetic. In addition to IVBP, the vast majority (86.4%) had also received prior systemic glucocorticoid therapy. The median cumulative number of IVBP doses was 25 (interquartile range, 16-39) at the time of BRONJ diagnosis. Half of the patients had prior invasive dental procedures; ONJ developed spontaneously in 27.1%, and in the remainder ONJ developed in the setting of periodontal disease (10.1%) or local trauma (8.4%). Most patients presented with painful stage 2 disease involving the mandible (75%), and Actinomyces was present in more than 77% of available histologic specimens. During the median follow-up of 1.5 years, 15 patients (25.4%) regressed to a less severe stage, with healing in 6 patients; 16 (27.1%) worsened; and the remainder stayed within the same stage, but in almost half of these patients, the extent of involvement progressed. Of the 38 patients who had 99Tc bone scintigraphy, 35 had bone scans before development of BRONJ, and among these patients, 23 (67.5%) had positive tracer uptake in areas that subsequently developed BRONJ. Conclusions: In this study bone scintigraphy showed positive tracer uptake before the development of BRONJ in almost 66% of patients who had these scans before clinical evidence of frank osteonecrosis. BRONJ subsequently developed in the areas identified on scintigraphy in these patients. Further studies should explore the role of bone scintigraphy in the detection of early subclinical BRONJ. abstract_id: PUBMED:32886176 Antiresorptive agent-related osteonecrosis of the jaw in prostate cancer patients with bone metastasis treated with bone-modifying agents. Introduction: The incidence rate and risk factors of antiresorptive agent-related osteonecrosis of the jaw (ARONJ) in prostate cancer patients with bone metastasis are not clear. Materials And Methods: We retrospectively reviewed patients' records of prostate cancer patients with bone metastasis who were treated with zoledronic acid or denosumab between 1/Dec/2008 and 31/Mar/2019. ARONJ-free survival rate was analyzed with Kaplan-Meier analysis, and risk factors for ARONJ were analyzed with Cox proportional hazard model. Results: We identified 124 and 67 patients treated with zoledronic acid and denosumab, respectively. Seventy-six patients were hormone sensitive, and 115 patients were castration resistant when they started bone-modifying agents (BMA). 
Twenty-eight patients developed ARONJ during the observation period (median: 23 months, range 1-130 months). The number of BMA doses ranged from 3 to 69 (median: 21.5). The 2-year ARONJ-free survival rate was 91.1%, and the 5-year ARONJ-free survival rate was 72.5%. There was no significant difference in the incidence rate of ARONJ between zoledronic acid and denosumab. However, multivariate analysis revealed that use of denosumab (hazard ratio [HR] 3.67, 95% confidence interval [CI] 1.01-13.31; p = 0.0484), serum calcium < 9.2 mg/dL (HR 3.16, 95% CI 1.10-9.13; p = 0.033), and concomitant or prior use of chemotherapeutic agents (HR 4.71, 95% CI 1.51-14.71; p = 0.0076) were independent risk factors for the development of ARONJ. Conclusion: Almost one-quarter of patients had a risk of developing ARONJ within 5 years after starting BMA. Low serum calcium, use of chemotherapeutic agents, and use of denosumab might contribute to the development of ARONJ. abstract_id: PUBMED:27380219 Prevention and management of osteonecrosis of the jaw secondary to bone-targeted therapy in patients with kidney cancer. Purpose Of Review: The aim of this review is the prevention and management of osteonecrosis of the jaw (ONJ) secondary to bone-targeted therapy in patients with kidney cancer. Recent Findings: Patients with kidney cancer treated with zoledronate suffered from ONJ earlier compared with patients with breast cancer or multiple myeloma; among men, ONJ occurred at 24 months of zoledronic acid treatment in more than 80% of the patients, and much earlier with respect to patients with prostate cancer or multiple myeloma. Protective factors against ONJ may include sequential prescription of different bisphosphonates and female sex. Less data are available on ONJ secondary to denosumab administration in patients with kidney cancer. Summary: Bone metastases, developing in about 30% of the patients with metastatic renal cell carcinoma, are typically osteolytic on imaging and cause significant morbidity and poor quality of life. The incidence of skeletal-related events has been reported to reach 3.38 per year in such patients. To decrease the incidence of ONJ, a maxillofacial examination must be performed in all patients before treatment with bisphosphonates, in particular in patients with metastatic renal cell carcinoma treated with sunitinib alone or in association with zoledronate. The management of ONJ should consider a conservative approach. Answer: Regular zoledronic acid does not appear to significantly change the bone turnover (BT) of the jaw in men with metastatic prostate cancer, according to a study that investigated this as a possible clue to the pathogenesis of bisphosphonate-related osteonecrosis of the jaw (BRONJ) (PUBMED:24469255). The study found that the mandible had a significantly lower BT than the maxilla and that all investigated bone regions showed no significant changes under bisphosphonate (BP) administration. Additionally, the study noted that two-thirds of BRONJ cases occur in the mandible, which is inconsistent with the hypothesis that high BT and its over-suppression play a key role in the pathomechanism of BRONJ. Furthermore, the study revealed that there were significantly lower BT values for the prostate cancer cohort compared with a female breast cancer cohort, and changes over time could not be found. This suggests that the pathogenesis of BRONJ may not be solely related to alterations in bone turnover due to BP intake, at least in the population of men with metastatic prostate cancer.
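The ARONJ-free survival rates quoted in PUBMED:32886176 (91.1% at 2 years, 72.5% at 5 years) come from Kaplan-Meier estimation. The sketch below implements the estimator directly on a small set of invented follow-up times so the mechanics are visible; the follow-up data are the assumption here, not the method, and tied event times are ignored for simplicity.

# Minimal Kaplan-Meier estimator for "ARONJ-free survival"-style data.
# Times are months of follow-up; event=1 means ARONJ developed, 0 means censored.
# The example data are invented purely to illustrate the calculation
# (assumes distinct event times for simplicity).
import numpy as np

times  = np.array([6, 12, 14, 23, 30, 36, 48, 60, 72, 90])
events = np.array([0,  1,  0,  1,  0,  1,  0,  1,  0,  0])

order = np.argsort(times)
times, events = times[order], events[order]

survival = 1.0
n_at_risk = len(times)
for t, d in zip(times, events):
    if d == 1:                      # the curve steps down only at observed events
        survival *= (n_at_risk - 1) / n_at_risk
    print(f"month {t:3d}: estimated ARONJ-free survival = {survival:.3f}")
    n_at_risk -= 1                  # events and censorings both leave the risk set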
Instruction: Do computer generated ECG reports improve interpretation by accident and emergency senior house officers? Abstracts: abstract_id: PUBMED:11423597 Do computer generated ECG reports improve interpretation by accident and emergency senior house officers? Objectives: To determine whether access to a computer generated electrocardiogram (ECG) report can reduce errors of interpretation by senior house officers (SHOs) in an accident and emergency department. Methods: Ten SHOs were asked to interpret 50 ECGs each: 25 with computer generated reports, 25 without. Their answers, and the computer generated reports, were compared with a "gold standard" produced by two experienced clinicians. The primary outcome measure was the proportion of major errors of interpretation. Results: The computer reading system made two major errors (4%, 95% confidence interval (CI) 1.1% to 13.5%) compared with the gold standard. Access to the computer report did not significantly reduce major errors among SHOs (46 (18.4%) with report v 56 (22.4%) without, odds ratio 0.64, 95% CI 0.36 to 1.14, p=0.13) or improve the proportion completely correct (104 (41.6%) with report v 91 (36.4%) without, odds ratio 1.43, 95% CI 0.88 to 2.33, p=0.15). Conclusions: SHOs have a high error rate when interpreting ECGs, which is not significantly reduced by access to a computer generated report. Junior doctors should continue to seek expert senior help when they have to interpret a difficult ECG. abstract_id: PUBMED:1888402 Missed myocardial ischaemia in the accident & emergency department: E.C.G. a need for audit? Accident & Emergency Department Senior House Officers rely heavily on their ECG interpretation skills in the diagnosis and management of patients with chest pain. This prospective double-blind study was designed to test the accuracy with which Accident & Emergency Senior House Officers interpret ECGs, by comparing their interpretation with that of a Consultant Cardiologist. ECGs from 279 of 314 consecutive patients with chest pain were analysed. Ninety per cent of normal electrocardiographs and 57% of abnormal ECGs were correctly interpreted. Despite the inaccurate interpretation of 43% of abnormal ECGs, 96.5% of the patients in the study were considered to have been managed correctly. Audit of all ECGs recorded in the Accident & Emergency Department should be undertaken by someone with experience of ECG interpretation. New A&E staff should receive training in the interpretation of ECGs. abstract_id: PUBMED:9116547 Senior house officers' work related stressors, psychological distress, and confidence in performing clinical tasks in accident and emergency: a questionnaire study. Objective: To investigate the relation between accident and emergency senior house officers' psychological distress and confidence in performing clinical tasks and to describe work related stressors. Design: Questionnaire survey with data collected at four points during senior house officers' six month attachment to accident and emergency departments. Subjects: 171 newly appointed accident and emergency senior house officers from 27 hospitals in the South Thames region. Main Outcome Measures: Psychological distress measured with a 25 item questionnaire; confidence in performing a range of 35 clinical and practical activities (visual analogue scales); reported consultation stress factors, other work related stressors, and personal stressors.
Results: Overall confidence scores in carrying out a range of clinical and practical activities increased significantly between the end of the first and the end of the fourth month (Z = -6.05, P < 0.001). Senior house officers with higher psychological distress scores at the end of their first and fourth month had significantly lower confidence scores (Z = -3.20, P < 0.001; Z = -1.90, P < 0.05). Senior house officers with lower increases in confidence between the first and fourth month had significantly higher distress than those with greater increases (Z = -2.62, P < 0.001). Factors identified as causing stress during consultations included difficulties with communication, certain clinical presentations, and department organisational factors (particularly the intensity of workload). Conclusions: Psychological distress is linked to confidence in senior house officers. This supports the need to monitor and build confidence in senior house officers and to address work related stressors. Additional communication skills training needs to be considered. abstract_id: PUBMED:7746770 Improving the interpretation of electrocardiographs in an accident and emergency department. In this study we have examined the ability of senior house officers in the Accident and Emergency (A&E) Department to interpret electrocardiographs (ECGs) and tested the value of a single seminar and guidelines on interpretation. We prospectively audited ECG interpretation taken from notes over a two-month period and repeated this audit following a single seminar by a consultant cardiologist and after the issue of guidelines. A formal test of interpretation of a set of 20 ECGs by senior house officers in A&E was also carried out. 245 case notes were reviewed and in one third the ECG was interpreted incorrectly by senior house officers in A&E but incorrect clinical management followed in only 3.2% of cases. Following the intervention, 242 case notes were reviewed and serious misinterpretations were halved as was the number of patients mismanaged as a result (1.7%). In conclusion, formal training in ECG interpretation can reduce serious errors. abstract_id: PUBMED:8731707 Do accident and emergency senior house officers know the British guidelines on the management of acute asthma? Avoidable deaths from asthma continue, even in hospital. Since the management of acute severe asthma is often initiated in the Accident and Emergency department, it is crucial that staff there have adequate knowledge. An anonymous questionnaire, containing items based on chart 6 of the UK guidelines, was completed by 66 Accident and Emergency Senior House Officers from the Yorkshire region. The study aim was to establish these doctors' levels of knowledge about the recommended management of acute asthma in Accident and Emergency. The median score was 10 (out of a possible 24) and the interquartile range 8-13. Further efforts are required to implement these guidelines so that the best patient outcomes can be achieved. abstract_id: PUBMED:12748136 Stress and coping in accident and emergency senior house officers. Objectives: To identify levels of psychological distress in accident and emergency (A&E) senior house officers (SHOs). Methods: Questionnaire survey given to SHOs at training sessions. Measures: General Health Questionnaire (GHQ) and the Brief COPE. Results: Over half of the 37 respondents (51%; 95% CI: 36% to 67%) scored over the threshold for psychological distress on the GHQ.
This is considerably higher than for other groups of doctors and for other professional groups. Correlational analysis revealed that the coping style Venting was significantly related to greater anxiety (r=0.34; p<0.05) and depression (r=0.33; p<0.05), while the coping style Active was significantly related to lower anxiety (r=-0.38; p<0.05), somatic complaints (r=-0.46; p<0.001) and years since qualification (r=0.40; p<0.05). Conclusions: Replicating findings from a study of stress in A&E consultants, this study shows higher levels of overt psychological distress among A&E SHOs than among other groups of doctors. An intervention to improve coping strategies may be useful for this group of doctors. abstract_id: PUBMED:21373298 ECG interpretation skills of South African Emergency Medicine residents. Background: The use and interpretation of electrocardiograms (ECGs) are widely accepted as an essential core skill in Emergency Medicine. It is imperative that emergency physicians are expert in ECG interpretation when they exit their training programme. Aim: It is unknown whether South African Emergency Medicine trainees are getting the necessary skills in ECG interpretation during the training programme. Currently there are no clear criteria to assess emergency physicians' competency in ECG interpretation in South Africa. Methods: A prospective cross-sectional study of Emergency Medicine residents and recently qualified emergency physicians was conducted between August 2008 and February 2009 using a focused questionnaire. Results: At the time of the study, there were 55 eligible trainees in South Africa. A total of 55 assessments were distributed; 50 were returned (91%) and 49 were fully completed (89%). In this study, we found the overall average score of ECG interpretation was 46.4% [95% confidence interval (CI) 41.5-51.2%]. The junior group had an overall average of 42.2% (95% CI 36.9-47.5%), whereas the senior group managed 52.5% (95% CI 43.4-61.5%). Conclusion: In this prospective cross-sectional study of Emergency Medicine residents and recently qualified emergency physicians, we found that there was improvement in the interpretation of ECGs with increased seniority. There exists, however, a low level of accuracy for many of the critical ECG diagnoses. The average score of 46.4% obtained in this study is lower than the scores obtained by other international studies from countries where Emergency Medicine is a well-established speciality. abstract_id: PUBMED:8775954 Senior house officers' work activities in an accident and emergency department. A study of senior house officers' daytime work activities was conducted in the accident and emergency department of the Cardiff Royal Infirmary. Activities were timed every 15 seconds by a single observer and were subdivided into working periods, meal periods and periods when no patient was waiting to be seen. A total of 96.1 h was analysed. The SHOs were found to be working for 78% of their time, of which 18.5% was taken up by non-doctor tasks. The SHOs took an average of 10.4 min for each walking wounded patient, 10.6 min for each paediatric patient and 27.3 min for a trolley patient. Each patient required an average of 14.2 min for assessment and treatment, which meant that 3.3 patients were seen each hour. However, considering that SHOs worked for 78% of their time, the actual rate of work was 4.2 patients for each working hour. The implications for staffing are discussed.
abstract_id: PUBMED:12421780 Dental knowledge of accident and emergency senior house officers. Objectives: To determine the dental knowledge of accident and emergency (A&E) senior house officers (SHOs). Method: A telephone survey of A&E departments in England with more than 30 000 new attendances per year was conducted between November 1998 and July 1999. The questionnaire covered basic dental knowledge as well as scenarios of some common dental problems encountered in the A&E department. Results: Most SHOs in this study saw between one and five dental problems each week. Of the 102 SHOs in this survey, 52% had no previous training in examination of the mouth. Only 52% knew the approximate date of eruption of a permanent upper central incisor. In the treatment of post-extraction bleeding, 36% knew the first line of treatment. When presented with a scenario of a patient with a dental infection, only 29% gave the optimal empirical treatment. Conclusions: There is a need for better dental education among medical personnel. Guidelines or algorithms for the management of some common dental problems would also be useful as well as standard written advice sheets for patients with dental problems. abstract_id: PUBMED:8646050 Cost effectiveness of treating primary care patients in accident and emergency: a comparison between general practitioners, senior house officers, and registrars. Objectives: To compare outcome and costs of general practitioners, senior house officers, and registrars treating patients who attended an accident and emergency department with problems assessed at triage as being of primary care type. Design: Prospective intervention study which was later costed. Setting: Inner city accident and emergency department in south east London. Subjects: 4641 patients presenting with primary care problems: 1702 were seen by general practitioners, 2382 by senior house officers, and 557 by registrars. Main Outcome Measures: Satisfaction and outcome assessed in a subsample of 565 patients 7-10 days after hospital attendance and aggregate costs of hospital care provided. Results: Most patients expressed high levels of satisfaction with clinical assessment (430/562 (77%)), treatment (418/557 (75%)), and consulting doctor's manner (434/492 (88%)). Patients' reported outcome and use of general practice in the 7-10 days after attendance were similar: 206/241 (85%), 224/263 (85%), and 52/59 (88%) of those seen by general practitioners, senior house officers, and registrars respectively were fully recovered or improving (chi2 = 0.35, P = 0.840), while 48/240 (20%), 48/268 (18%), and 12/57 (21%) respectively consulted a general practitioner or practice nurse (chi2 = 0.51, P = 0.774). Excluding costs of admissions, the average costs per case were 19.30 pounds, 17.97 pounds, and 11.70 pounds for senior house officers, registrars, and general practitioners respectively. With cost of admissions included, these costs were 58.25 pounds, 44.68 pounds, and 32.30 pounds respectively. Conclusion: Management of patients with primary care needs in an accident and emergency department by general practitioners reduced costs with no apparent detrimental effect on outcome. These results support a new role for general practitioners. Answer: No, computer generated ECG reports do not significantly improve the interpretation of ECGs by accident and emergency senior house officers (SHOs).
A study aimed at determining the impact of computer generated ECG reports on SHOs' interpretation errors found that access to the computer report did not significantly reduce major errors among SHOs. The study involved SHOs interpreting ECGs with and without computer generated reports and comparing their answers to a "gold standard" produced by experienced clinicians. The computer reading system itself made a small percentage of major errors compared to the gold standard, but the SHOs' error rate remained high and was not significantly reduced by the computer generated reports. The study concluded that junior doctors should continue to seek expert senior help when interpreting difficult ECGs (PUBMED:11423597).
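For reference, the headline figures in PUBMED:11423597 (46 of 250 interpretations with major errors with the computer report versus 56 of 250 without) can be arranged in a 2×2 table and summarised with an odds ratio and Wald 95% confidence interval, as sketched below. This is a crude, unadjusted calculation for illustration only; the published estimate (odds ratio 0.64, 95% CI 0.36 to 1.14) will differ, presumably because the trial's analysis handled the repeated interpretations contributed by each SHO or applied other adjustments.

# Crude odds ratio and Wald 95% CI for the 2x2 table implied by PUBMED:11423597.
# 10 SHOs x 25 ECGs per arm = 250 interpretations per arm.
import math

a, b = 46, 250 - 46   # with computer report: major errors / no major errors
c, d = 56, 250 - 56   # without report:        major errors / no major errors

or_crude = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(or_crude) - 1.96 * se_log_or)
hi = math.exp(math.log(or_crude) + 1.96 * se_log_or)

print(f"crude OR = {or_crude:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
# The paper reports OR 0.64 (95% CI 0.36 to 1.14); the difference from this crude
# figure reflects whatever adjustment the published analysis applied.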
Instruction: Does normalisation improve the diagnostic performance of apparent diffusion coefficient values for prostate cancer assessment? Abstracts: abstract_id: PUBMED:26126712 Does normalisation improve the diagnostic performance of apparent diffusion coefficient values for prostate cancer assessment? A blinded independent-observer evaluation. Aim: To evaluate the performance of normalised apparent diffusion coefficient (ADC) values for prostate cancer assessment when performed by independent observers blinded to histopathology findings. Materials And Methods: Fifty-eight patients undergoing 3 T phased-array coil magnetic resonance imaging (MRI) including diffusion-weighted imaging (DWI; maximal b-value 1000 s/mm²) before prostatectomy were included. Two radiologists independently evaluated the images, unaware of the histopathology findings. Regions of interest (ROIs) were drawn within areas showing visually low ADC within the peripheral zone (PZ) and transition zone (TZ) bilaterally. ROIs were also placed within regions in both lobes not suspicious for tumour, allowing computation of normalised ADC (nADC) ratios between suspicious and non-suspicious regions. The diagnostic performance of ADC and nADC were compared. Results: For PZ tumour detection, ADC achieved significantly higher area under the receiver operating characteristic curve (AUC; p=0.026) and specificity (p=0.021) than nADC for reader 1, and significantly higher AUC (p=0.025) than nADC for reader 2. For TZ tumour detection, nADC achieved significantly higher specificity (p=0.003) and accuracy (p=0.004) than ADC for reader 2. For PZ Gleason score >3+3 tumour detection, ADC achieved significantly higher AUC (p=0.003) and specificity (p=0.005) than nADC for reader 1, and significantly higher AUC (p=0.023) than nADC for reader 2. For TZ Gleason score >3+3 tumour detection, ADC achieved significantly higher specificity (p=0.019) than nADC for reader 1. Conclusion: In contrast to prior studies performing unblinded evaluations, ADC was observed to outperform nADC overall for two independent observers blinded to the histopathology findings. Therefore, although strategies to improve the utility of ADC measurements in prostate cancer assessment merit continued investigation, caution is warranted when applying normalisation to improve diagnostic performance in clinical practice. abstract_id: PUBMED:29045185 Apparent Diffusion Coefficient Values of Prostate Cancer: Comparison of 2D and 3D ROIs. Objective: The purpose of this study was to compare the reproducibility and diagnostic performance of 2D and 3D ROIs for prostate apparent diffusion coefficient (ADC) measurements. Materials And Methods: The study included 56 patients with prostate cancer undergoing 3-T MRI including DWI (b = 50 and 1000 s/mm²) before radical prostatectomy. Histologic findings from prostatectomy specimens were reviewed to denote each patient's dominant tumor and a benign region with visually decreased ADC. Three readers independently measured the ADCs of both areas using an ROI placed on a single slice through the lesion (2D) and an ROI encompassing all slices through the lesion (3D). Readers repeated measurements after 3 weeks. Assessment included Bland-Altman analysis (coefficient of repeatability [CR] in which lower values indicated higher reliability) and ROC analysis. Results: For intrareader variability, the CRs across readers for all ROIs were 9.9% for 2D and 9.3% for 3D. For tumor ROIs the CRs were 10.6% for 2D and 9.6% for 3D.
For interreader variability, the CRs across readers for all ROIs were 17.1% for 2D and 20.5% for 3D and for tumor ROIs were 17.9% for 2D and 22.2% for 3D. For combined reader data, the AUCs for benign and malignant findings were 0.77 for 2D and 0.78 for 3D (p = 0.146). For differentiating Gleason score (GS) 3 + 3 from GS > 3 + 3 tumors, the AUCs were 0.92 for 2D and 0.92 for 3D ROIs (p = 0.649). For differentiating GS ≤ 3 + 4 from GS ≥ 4 + 3 tumors, the AUCs were 0.70 for 2D and 0.67 for 3D ROIs (p = 0.004). Conclusion: Use of a 3D ROI did not improve intrareader or interreader reproducibility or diagnostic performance compared with use of a 2D ROI for prostate ADC measurements. Interreader reproducibility of 2D ROIs was suboptimal nonetheless. abstract_id: PUBMED:26204306 Anterior Prostate Cancer: Diagnostic Performance of T2-Weighted MRI and an Apparent Diffusion Coefficient Map. Objective: Diagnosis of anterior prostate cancer is challenging. The purpose of this study was to evaluate the diagnostic performance of T2-weighted imaging and an apparent diffusion coefficient (ADC) map in the detection of anterior prostate cancer and to compare that with the diagnostic performance in the detection of posterior prostate cancer. Materials And Methods: We retrospectively reviewed the records of 87 patients who underwent 3-T MRI that included T2-weighted imaging and diffusion-weighted imaging before radical prostatectomy. The prostate gland was divided into anterior and posterior segments, and the radiologists interpreted two protocols (T2-weighted imaging alone vs T2-weighted imaging and an ADC map) and sorted the confidence levels for the presence of prostate cancer into five grades. ROC analysis was performed to evaluate the diagnostic performance of each protocol for the detection of anterior and posterior prostate cancers. We also assessed the relative fractions of sensitivity and specificity between anterior and posterior prostate cancers. Additionally, the ADCs of noncancerous anterior fibromuscular stroma were measured and compared with the ADCs of anterior prostate cancers. Results: The AUCs with T2-weighted imaging alone and with T2-weighted imaging and an ADC map were 0.75 and 0.88 for anterior prostate cancer, respectively, and were 0.70 and 0.81 for posterior prostate cancer. The sensitivity for detecting anterior prostate cancer was 90% and was significantly higher than that for detecting posterior prostate cancer in the protocol using T2-weighted imaging and an ADC map (p = 0.003) when scores of 3-5 were considered as positive for prostate cancer. The ADC was significantly lower in anterior prostate cancer (mean, 0.80 × 10(-3) mm(2)/s) than in noncancerous anterior fibromuscular stroma (1.13 × 10(-3) mm(2)/s) (p < 0.001). Conclusion: The protocol using T2-weighted imaging and an ADC map showed higher accuracy for the detection of anterior prostate cancer than for the detection of posterior prostate cancer. abstract_id: PUBMED:31114355 Apparent diffusion coefficient in extraprostatic extension of prostate cancer: a systematic review and diagnostic meta-analysis. Objective: To evaluate the diagnostic performance of apparent diffusion coefficient (ADC) for local staging of prostate cancer. Methods: Databases of Web of Science, MEDLINE (Ovid and PubMed), Cochrane Library, EMBASE, and Google Scholar were searched up to May 31, 2018, with language restricted to English.
All studies concerning multiparametric magnetic resonance imaging (mpMRI) with ADC for detection of extracapsular extension (ECE, T3a) and/or extraprostatic extension (EPE, overall stage of T3) were identified by two reviewers independently, and quality of included studies was evaluated using the Quality Assessment of Diagnostic Accuracy Studies-2 tool. True positive, false positive, false negative and true negative of each study were extracted to reconstruct the 2×2 tables for evaluating diagnostic accuracy. Summary estimates of sensitivity, specificity, and corresponding 95% CIs were calculated with bivariate model and hierarchical summary receiver operating characteristic model, then presented in forest plots. Multiple subgroup analyses and meta-regression were performed, and publication bias was evaluated with Deeks funnel. Results: A total of 18 studies were included, with 6 involving ECE and 12 for EPE. Pooled sensitivity was 80.5% (95% CI 76.5-83.9%) with specificity of 69.1% (95% CI 62.3-75.2%). Multiple subgroup analyses showed that if ADC and length of capsular contact are regarded as independent predictors, pooled sensitivity was 85% (95% CI 77-90%) and 81.1% (95% CI 76.0-85.3%), with specificity of 70.8% (95% CI 56.3-82.0%) and 66.6% (95% CI 57.6-74.5%), respectively. Meta-regression demonstrated that there was no substantially significant difference in types of coil, magnetic field strength (1.5T versus 3.0T), and analysis method (per-lesion versus per-patient). Conclusion: By introducing ADC to MRI, we could obtain favorable sensitivity for diagnostic performance of EPE, but with a little decreased specificity. abstract_id: PUBMED:29908349 MR diffusion kurtosis imaging for cancer diagnosis: A meta-analysis of the diagnostic accuracy of quantitative kurtosis value and diffusion coefficient. Purpose: To perform a meta-analysis for assessing the accuracy of diffusion kurtosis imaging (DKI)-derived quantitative parameters (kurtosis values, K; and corrected diffusion coefficients non-Gaussian bias, D) in separating malignant cancers from benign lesions. Methods: Relevant studies were searched in PubMed and Cochrane Library databases and were analyzed by Meta-DiSc software. Results: Fourteen eligible studies involving 1847 lesions in 1107 patients (895 were benign and 952 were malignant) were included. Pooled analysis showed the sensitivity, specificity, positive likelihood ratio (LR), and negative LR were respectively 0.83 (95% CI, 0.79-0.85), 0.83 (95% CI, 0.80-0.86), 4.61 (95% CI, 2.98-7.14), and 0.22 (95% CI, 0.18-0.28) for K, with the overall area under curve (AUC) of 0.89. The sensitivity, specificity, positive LR, and negative LR were 0.85 (95% CI, 0.80-0.88), 0.85 (95% CI, 0.79-0.89), 6.39 (95% CI, 3.14-12.99), and 0.18 (95% CI, 0.14-0.23) for D, with the overall AUC of 0.92. The sensitivity, specificity, positive LR, and negative LR for apparent diffusion coefficient (ADC) derived from standard diffusion-weighted imaging (DWI) were 0.82 (95% CI, 0.79-0.84), 0.85 (95% CI, 0.82-0.88), 4.75 (95% CI, 3.38-6.68), and 0.24 (95% CI, 0.19-0.29), with the overall AUC of 0.89. The superiority of D to K and ADC was also confirmed by the subgroup analysis of prostate cancer. Conclusion: Our findings suggest that DKI should be added to the routine imaging protocol for screening cancer, with the highest diagnostic accuracy of diffusion coefficients.
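The pooled estimates reported in the two meta-analyses above (sensitivity, specificity, positive and negative likelihood ratios, AUC) are all built from each primary study's 2×2 table of true and false positives and negatives. The sketch below shows how those per-study quantities are derived; the counts are invented for illustration and are not taken from any cited study.

```python
# Per-study diagnostic accuracy measures of the kind pooled in a meta-analysis.
# The counts are invented for illustration and do not come from any cited study.

def diagnostic_summary(tp, fp, fn, tn):
    """Sensitivity, specificity and likelihood ratios from a 2x2 table."""
    sensitivity = tp / (tp + fn)                   # diseased correctly detected
    specificity = tn / (tn + fp)                   # non-diseased correctly excluded
    positive_lr = sensitivity / (1 - specificity)  # LR+
    negative_lr = (1 - sensitivity) / specificity  # LR-
    return {"sensitivity": sensitivity, "specificity": specificity,
            "LR+": positive_lr, "LR-": negative_lr}

# Hypothetical study: 80/100 EPE-positive patients flagged, 30/100 EPE-negative patients flagged.
for name, value in diagnostic_summary(tp=80, fp=30, fn=20, tn=70).items():
    print(f"{name}: {value:.2f}")
```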
abstract_id: PUBMED:29939084 Quantitative evaluation of the relative apparent diffusion coefficient values on multiparametric magnetic resonance imaging to predict higher Gleason score prostate cancer. Introduction: Apparent diffusion coefficient (ADC) values on multiparametric magnetic resonance imaging (mpMRI) have been reported to correlate with high-Gleason score (GS) prostate cancer. However, the relative ADC values between tumor lesions and normal tissue have been suggested as more suitable than the absolute ADC values for evaluation of diffusion abnormalities, because absolute ADC values are susceptible to differences in scanners or scanner settings. The present study evaluated the usefulness of the relative assessment of ADC values between tumor lesions and normal tissue on preoperative mpMRI for the prediction of high-risk prostate cancer on radical prostatectomy specimens. Materials And Methods: A retrospective analysis of 48 men who underwent radical prostatectomy between January 2013 and December 2014 was conducted. MpMRI was performed with a 3.0-T scanner using b-values of 0 and 1500 s/mm2. ADC values of the tumor (ADCTUMOR) and normal prostate and the relative ADC tumor/normal ratio (ADCTNR) were evaluated by two radiologists. Results: The inter-rater reliability between two radiologists for ADCTUMOR measurement was high, with Pearson's r = 0.982. There was no difference in ADCTUMOR between GS ≤7 and GS ≥8. In contrast, ADCTNR was significantly lower in GS ≥8 than in GS ≤7. ROC curves of ADCTNR to predict higher GS (≥8) showed better classification performance (AUC = 0.8243, p = .0012 by radiologist A and AUC = 0.7961, p = .0031 by radiologist B) than of ADCTUMOR. Conclusions: The relative assessment of ADC values between tumor lesions and normal tissue could improve the detection rate of high-risk prostate cancers. abstract_id: PUBMED:36961616 Clinical value of minimum apparent diffusion coefficient for prediction of clinically significant prostate cancer in the transition zone. Background: This study investigated the association between apparent diffusion coefficients in Prostate Imaging Reporting and Data System 4/5 lesions and clinically significant prostate cancer in the transition zone. Methods: We included 102 patients who underwent transperineal cognitive fusion targeted biopsy for Prostate Imaging Reporting and Data System 4/5 lesions in the transition zone between 2016 and 2020. The association between apparent diffusion coefficients and prostate cancers in the transition zone was analyzed. Results: The detection rate of prostate cancer was 49% (50/102), including clinically significant prostate cancer in 37.3% (38/102) of patients. The minimum apparent diffusion coefficients in patients with clinically significant prostate cancer were 494.5 ± 133.6 µm2/s, which was significantly lower than 653.8 ± 172.5 µm2/s in patients with benign histology or clinically insignificant prostate cancer. Age, prostate volume, transition zone volume, and mean and minimum apparent diffusion coefficients were associated with clinically significant prostate cancer. Multivariate analysis demonstrated that only the minimum apparent diffusion coefficient value (odds ratio: 0.994; p < 0.001) was an independent predictor of clinically significant prostate cancer. When the cutoff value of the minimum apparent diffusion coefficient was less than 595 µm2/s, indicating the presence of prostate cancer in the transition zone, the detection rate increased to 59.2% (29/49) in this cohort.
Conclusion: The minimum apparent diffusion coefficient provided additional value to indicate the presence of clinically significant prostate cancer in the transition zone. It may help consider the need for subsequent biopsies in patients with Prostate Imaging Reporting and Data System 4/5 lesions and an initial negative targeted biopsy. abstract_id: PUBMED:38123889 Short repetition time diffusion-weighted imaging improves visualization of prostate cancer. Purpose: This study aimed to assess whether short repetition time (TR) diffusion-weighted imaging (DWI) could improve diffusion contrast in patients with prostate cancer (PCa) compared with long TR (conventional) reference standard DWI. Materials And Methods: Our Institutional Review Board approved this retrospective study and waived the need for informed consent. Twenty-five patients with suspected PCa underwent multiparametric magnetic resonance imaging (mp-MRI) using a 3.0-T system. DWI was performed with TR of 1850 ms (short) and 6000 ms (long) with b-values of 0, 1000, and 2000 s/mm2. Signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), visual score, apparent diffusion coefficient (ADC), and diagnostic performance were compared between short and long TR DWI for both b-values. The statistical tests included paired t-test for SNR and CNR; Wilcoxon signed-rank test for VA; Pearson's correlation and Bland-Altman plot analysis for ADC; and McNemar test and receiver operating characteristic analysis and Delong test for diagnostic performance. Results: Regarding b1000, CNR and visual score were significantly higher in short TR compared with long TR (P = .003 and P = .002, respectively), without significant difference in SNR (P = .21). Considering b2000, there was no significant difference in visual score between short and long TR (P = .07). However, SNR and CNR in long TR were higher (P = .01 and P = .04, respectively). ADC showed significant correlations, without apparent bias for ADC between short and long TR for both b-values. For diagnostic performance of DWI between short and long TR for both b-values, one out of five readers noted a significant difference, with the short TR for both b-values demonstrating superior performance. Conclusions: Our data showed that the short TR DWI1000 may provide better image quality than did the long TR DWI1000 and may improve visualization and diagnostic performance of PCa for readers. abstract_id: PUBMED:37464833 Objective Value of the Apparent Diffusion Coefficient (ADC) Map from Ultrahigh b-value Diffusion-weighted Imaging (DWI) in 3T MRI could be a Non-invasive Specific Biomarker for Prostate Cancer. Objective: This article aims to explore the ADC value of ultrahigh b-value DWI and the diagnostic cutoff point in prostate cancer. Methods: A total of 78 patients were included in this study. T2 weighted imaging (T2WI), conventional diffusion-weighted imaging (DWI) (1000 s/mm2), and DWI with ultrahigh b-values of 2000 s/mm2 and 3000 s/mm2 were performed in each patient. With reference biopsy as the gold standard, the apparent diffusion coefficient (ADC)s of each b-value DWI image were analyzed. According to different b-value receiver operating characteristic (ROC) curves, the ADC diagnostic cutoff point for prostate cancer was determined. Results: A total of 154 lesions were identified as prostate cancer.
The ADC values for conventional DWI and ultrahigh b-value DWI with 2000 s/mm2 and 3000 s/mm2 were 1.097×10^-3 mm2/s (1.040-1.153), 0.809×10^-3 mm2/s (0.766-0.851) and 0.622×10^-3 mm2/s (0.591-0.652), respectively, in the peripheral zone and 1.085×10^-3 mm2/s (1.022-1.147), 0.815×10^-3 mm2/s (0.770-0.861) and 0.651×10^-3 mm2/s (0.617-0.685) in the transition zone. The areas under the curve (AUCs) of the ADC values from ultrahigh b-value DWI (2000 s/mm2 and 3000 s/mm2) were 0.824 and 0.852 in the peripheral zone and 0.905 for the ADC values from ultrahigh b-value DWI (3000 s/mm2) in the transition zone. In the peripheral zone, the ADC diagnostic cutoff values for prostate cancer were 0.75×10^-3 mm2/s and 0.685×10^-3 mm2/s in DWI at 2000 s/mm2 and 3000 s/mm2, respectively, and the diagnosis of transition zone cancer was 0.8×10^-3 mm2/s and 0.634×10^-3 mm2/s, respectively. Conclusion: The ADC values from ultrahigh b-value DWI demonstrated better consistency and diagnostic efficacy in the diagnosis of prostate cancer. abstract_id: PUBMED:27055916 Apparent diffusion coefficient values are superior to transrectal ultrasound-guided prostate biopsy for the assessment of prostate cancer aggressiveness. Background Few studies have focused on comparing the utility of diffusion-weighted imaging (DWI) and transrectal ultrasound (TRUS)-guided biopsy in predicting prostate cancer aggressiveness. Whether apparent diffusion coefficient (ADC) values can provide more information than TRUS-guided biopsy should be confirmed. Purpose To retrospectively assess the utility of ADC values in predicting prostate cancer aggressiveness, compared to the TRUS-guided prostate biopsy Gleason score (GS). Material and Methods The DW images of 54 patients with biopsy-proven prostate cancer were obtained using 1.5-T magnetic resonance (MR). The mean ADC values of cancerous areas and biopsy GS were correlated with prostatectomy GS and D'Amico clinical risk scores, respectively. Meanwhile, the utility of ADC values in identifying high-grade prostate cancer (with Gleason 4 and/or 5 components in prostatectomy) in patients with a biopsy GS ≤ 3 + 3 = 6 was also evaluated. Results A significant negative correlation was found between mean ADC values of cancerous areas and the prostatectomy GS (P < 0.001) and D'Amico clinical risk scores (P < 0.001). No significant correlation was found between biopsy GS and prostatectomy GS (P = 0.140) and D'Amico clinical risk scores (P = 0.342). Patients harboring Gleason 4 and/or 5 components in prostatectomy had significantly lower ADC values than those harboring no Gleason 4 and/or 5 components (P = 0.004). Conclusion The ADC values of cancerous areas in the prostate are a better indicator than the biopsy GS in predicting prostate cancer aggressiveness. Moreover, the use of ADC values can help identify the presence of high-grade tumor in patients with a Gleason score ≤ 3 + 3 = 6 during biopsy. Answer: Normalization of apparent diffusion coefficient (ADC) values for prostate cancer assessment does not consistently improve diagnostic performance according to the study by Rosenkrantz et al. (PUBMED:26126712). In this study, the diagnostic performance of normalized ADC (nADC) values was compared to non-normalized ADC values by two independent radiologists who were blinded to histopathology findings.
The results showed that for peripheral zone (PZ) tumor detection, ADC achieved a significantly higher area under the receiver operating characteristic curve (AUC) than nADC for both readers, and significantly higher specificity for one of the two readers. Similarly, for PZ Gleason score >3+3 tumor detection, ADC achieved significantly higher AUC than nADC for both readers, with the specificity advantage again reaching significance for only one reader. However, for transition zone (TZ) tumor detection, nADC achieved significantly higher specificity and accuracy than ADC for one of the readers. Overall, the study concluded that ADC outperformed nADC for the independent observers in the study, suggesting that caution is warranted when applying normalization to improve diagnostic performance in clinical practice (PUBMED:26126712). This finding is consistent with the broader context of ADC value use in prostate cancer assessment, where various studies have explored different methods to enhance the diagnostic accuracy of ADC measurements. For instance, studies have compared the reproducibility and diagnostic performance of 2D and 3D regions of interest (ROIs) for ADC measurements (PUBMED:29045185), evaluated the diagnostic performance of T2-weighted imaging and ADC maps for anterior prostate cancer (PUBMED:26204306), and assessed the diagnostic performance of ADC for local staging of prostate cancer (PUBMED:31114355). Additionally, research has been conducted on the diagnostic accuracy of diffusion kurtosis imaging (DKI) parameters (PUBMED:29908349), the predictive value of relative ADC values for higher Gleason score prostate cancer (PUBMED:29939084), and the clinical value of minimum ADC for predicting clinically significant prostate cancer in the transition zone (PUBMED:36961616). Other studies have also looked into the use of short repetition time DWI to improve visualization of prostate cancer (PUBMED:38123889), the objective value of ADC maps from ultrahigh b-value DWI as a biomarker for prostate cancer (PUBMED:37464833), and the superiority of ADC values over transrectal ultrasound-guided prostate biopsy for assessing prostate cancer aggressiveness (PUBMED:27055916). In summary, while normalization of ADC values has been considered as a potential method to improve diagnostic performance for prostate cancer assessment, the evidence does not consistently support its superiority over non-normalized ADC values.
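For concreteness, the normalisation evaluated in PUBMED:26126712 amounts to dividing the ADC of a suspicious region of interest by the ADC of non-suspicious tissue in the same gland. The sketch below only illustrates that computation; both ADC values are hypothetical.

```python
# Absolute vs. normalised ADC (nADC) as described in PUBMED:26126712.
# Both ADC values are hypothetical, in units of 1e-3 mm^2/s.

def normalised_adc(adc_lesion, adc_reference):
    """nADC: ratio of the suspicious-ROI ADC to a benign reference-ROI ADC."""
    return adc_lesion / adc_reference

lesion_adc = 0.80      # visually low-ADC focus (suspected tumour)
reference_adc = 1.40   # non-suspicious tissue elsewhere in the gland

print(f"absolute ADC: {lesion_adc:.2f} x10^-3 mm^2/s")
print(f"normalised ADC (nADC): {normalised_adc(lesion_adc, reference_adc):.2f}")
```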
Instruction: Are cerebral aneurysms atherosclerotic? Abstracts: abstract_id: PUBMED:31228701 Predictors of Atherosclerotic Change in Unruptured Intracranial Aneurysms and Parent Arteries During Clipping. Objectives: To identify predictors of atherosclerotic change in aneurysms and parent arteries, and to retrospectively analyze outcomes from clipped aneurysms that showed atherosclerotic changes. Methods: Between May 2017 and April 2018, we collected a total of 151 clipping cases and reviewed records of operation videos to classify atherosclerosis by location (dome, neck of aneurysm, or parent artery). To identify predictors of atherosclerotic change in aneurysms, we analyzed baseline demographic characteristics, preoperative images, and Framingham Risk Scores (FRS). We also analyzed incomplete clipping cases according to atherosclerosis presence and location. Result: This study cohort included 110 women (mean age, 59.3 ± 7.1 years) and 41 men (mean age, 55.9 ± 9.6 years). Atherosclerotic change was seen in 77 cases. FRS, diabetes mellitus, and aneurysm size were identified as independent risk factors for atherosclerotic change in multivariate logistic regression analysis. There were 11 incomplete clipping cases (7.2%). Among the 30 cases with atherosclerotic change in the neck were 10 cases of incomplete clipping (P < 0.001). Conclusions: FRS, diabetes mellitus, and aneurysm size as predictors of atherosclerosis in patients undergoing aneurysm surgery can help guide surgical decisions and performance. abstract_id: PUBMED:36212635 Intervals of endovascular treatment for coincidental non-adjacent unruptured aneurysms in patients with symptomatic intracranial atherosclerotic stenosis. Background And Purpose: To explore the safety of endovascular therapy for concomitant non-adjacent unruptured intracranial aneurysms (UIAs) incidentally found in patients with severe symptomatic intracranial atherosclerotic stenosis, treated either at the same session or at different sessions. Methods: Patients treated between January 2019 and December 2020 were retrospectively reviewed at our institution. Patients with a concomitant non-adjacent incidental UIA and severe symptomatic intracranial atherosclerotic stenosis, who underwent endovascular treatment for both lesions, were included. They were divided into two groups according to the interval (the aneurysm was treated at the same session as the stenosis or at separate sessions). The demographics, procedure details, complications, and clinical outcomes were compared between groups. Results: A total of 22 patients were involved. In total, ten patients underwent endovascular treatment for UIA and stenosis at one session and 12 patients at separate sessions. In total, three (13.6%) patients experienced procedure-related complications, including 2 (20%) in the one-session group and 1 (8.3%) in the separate-sessions group. Follow-up (range 6-12, mean = 8.5 months) results showed good clinical outcomes in all the patients. There was no statistically significant difference in complication rate or unfavorable clinical outcome between groups. Conclusions: Non-adjacent concomitant UIA and severe symptomatic intracranial atherosclerotic stenosis will not pose additional endovascular treatment risks. Both simultaneous endovascular management and short intervals between separated procedures are technically feasible and safe.
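The first abstract above (PUBMED:31228701) identifies its predictors with multivariable logistic regression, in which the log-odds of atherosclerotic change are modelled as a linear combination of covariates such as Framingham Risk Score, diabetes mellitus and aneurysm size. The sketch below shows the general form of such a model; all coefficients are invented for illustration and are not reported by the study.

```python
import math

# General form of a multivariable logistic model for atherosclerotic change.
# The coefficients are invented; the cited study reports only that FRS, diabetes
# and aneurysm size were independent risk factors, not a fitted equation.

def predicted_probability(frs, diabetes, size_mm,
                          b0=-4.0, b_frs=0.10, b_dm=0.8, b_size=0.15):
    """Logistic regression: the log-odds are linear in the covariates."""
    log_odds = b0 + b_frs * frs + b_dm * diabetes + b_size * size_mm
    return 1.0 / (1.0 + math.exp(-log_odds))

# Hypothetical patient: FRS of 15, diabetic, 7 mm aneurysm.
print(f"illustrative predicted probability: {predicted_probability(15, 1, 7):.2f}")
```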
abstract_id: PUBMED:31140180 Intra-Aneurysmal Rupture of an Atheroma during the Clipping of Large Atherosclerotic Aneurysm of the Middle Cerebral Artery: A Case Report with Video Demonstration. Ischemic complications including silent or symptomatic events are known to occur during the clipping of intracranial aneurysms, although at a relatively lower rate than during endovascular treatment. An atherosclerotic or calcified neck is one of the major contributors to postoperative ischemic sequelae from a surgically treated aneurysm. Atherosclerotic changes in intracranial vessels or within an aneurysm wall or neck area are often seen during surgery. However, we were not previously able to detect any showering of atheromatous emboli during temporary or permanent clipping procedures. We describe a case of an intra-aneurysmal rupture of a squeezed atheroma observed after permanent clipping of an atherosclerotic large middle cerebral artery aneurysm. After tentative clipping to treat the severe atherosclerotic aneurysm in this patient, we applied supplementary clipping to the atherosclerotic area of the aneurysmal sac. The resulting squeezing of the intra-aneurysmal atheroma caused a leakage into the subadventitial layer of the aneurysmal sac. We also discuss the potential ischemic complications of aneurysmal clipping surgery. We conclude that the surgical techniques used to treat these specific aneurysms require circumspect planning through a review of preoperative images. abstract_id: PUBMED:26384767 Ischemic Event and Risk Factors of Embolic Stroke in Atherosclerotic Cerebral Aneurysm Patients Treated with a New Clipping Technique. Background: We would like to know the exact rate of ischemic event and the risk factors associated with embolic stroke by treatment of atherosclerotic cerebral aneurysm with a new technique. Methods: This is a retrospective cohort study in patients diagnosed as unruptured atherosclerotic cerebral aneurysm who underwent microsurgical clipping between January 2012 and August 2014. All intraoperative video recordings were reviewed and chosen in patients who were identified as atherosclerotic lesion on the dome, neck, or parent artery of the aneurysm. The demographic, radiographic, operative, and postoperative data were collected. The primary end point was the incidence of cerebral infarction and the neurologic outcomes using the discharge modified Rankin Scale (mRS) score at postoperation time. A statistical analysis of the factors associated with embolic stroke was done by Fisher exact and Wilcoxon rank-sum tests. The individual surgical technique was demonstrated as an illustration for use as a guide. Results: Among 103 atherosclerotic cerebral aneurysms, only 3 patients (2.9%) were associated with postoperative cerebral infarction and 1 of them had permanent neurologic deficit. A good mRS score (0-2) was found in 99% of patients and only 1% had a fair mRS score (3-4). A factor associated with ischemic outcome was duration of operation (P = .046). The differences in the atherosclerotic location showed no statistical significance. Conclusions: We found a very low incidence of embolic infarction after clipping of an atherosclerotic cerebral aneurysm with our new technique. Only duration of the operation time was a dependent risk factor for embolic infarction. abstract_id: PUBMED:25156785 The impact of atherosclerotic factors on cerebral aneurysm is location dependent: aneurysms in stroke patients and healthy controls. 
Previous studies have indicated that cerebrovascular diseases (CVDs) seem to increase the occurrence of unruptured intracranial aneurysms (UIAs). However, this may be explained by the fact that CVDs and UIAs share common risk factors, such as hypertension (HT) and smoking. To clarify the impact of atherosclerotic risk factors on cerebral aneurysmal formation, we explored the incidence of UIAs and their locations in healthy controls and patients with CVD, who frequently have atherosclerotic risk factors. This study included 283 consecutive asymptomatic healthy adults and 173 acute stroke patients, from patients diagnosed with acute cerebral hemorrhage or cerebral infarction and admitted to our hospital. The incidence, maximum diameter, and location of UIAs were evaluated, and we also investigated the following factors: age, gender, current smoking, HT, diabetes mellitus (DM), and dyslipidemia. UIAs were found in 19 of the total 456 subjects (4.2%), 11 of 283 healthy subjects (3.9%), and 8 of 173 stroke patients (4.6%). These differences were not statistically significant. The incidence of middle cerebral artery (MCA) aneurysms was significantly higher in the CVD patients than in the healthy controls (P = .03), and the incidence of paraclinoid aneurysms was significantly higher in the healthy controls than in the CVD patients (P = .03). Moreover, higher incidences of HT and CVD in the MCA aneurysms than in the other locations of UIAs were observed. These results indicate that the impact of atherosclerotic factors on cerebral aneurysmal formation depends on their location and that there is a stronger impact on MCA aneurysms than on paraclinoid aneurysms. abstract_id: PUBMED:29325952 Endovascular Treatment of Intracranial Aneurysms Concomitant with Severe Adjacent Atherosclerotic Stenosis. Purpose: To investigate the effect and safety of endovascular treatment of intracranial aneurysms concomitant with severe adjacent atherosclerotic stenosis. Materials And Methods: Twenty-six patients with aneurysms and adjacent stenosis were prospectively enrolled. The characteristics of the aneurysm, parent artery, atherosclerotic stenosis and endovascular treatment methods were analyzed. Results: All aneurysms were successfully embolized (100%), with stent-assisted coiling in 14 (53.8%) cases, coiling alone in 10 (38.5%), double microcatheter coiling in 1 (3.8%), and balloon-assisted coiling in the remaining 1 (3.8%). Immediately after embolization, complete occlusion was achieved in 10 cases (38.5%), nearly complete occlusion in 6 (23.1%) and non-complete occlusion in 10 (38.5%). Ten aneurysms were type I and were managed with coiling alone in 8 cases and stent-assisted coiling in the remaining 2 cases, with complete occlusion achieved in 6 cases (60%), nearly complete in 2 (20%), and noncomplete in the other 2 (20%). Sixteen aneurysms were type II and treated with stent-assisted coiling in 12 cases (75%), single coiling in 2 (12.5%), double microcatheters in 1 (6.3%), and balloon-assisted coiling in the remaining aneurysm (6.3%). Aneurysm occlusion was complete in 4 cases (25%), nearly complete in 4 (25%), and noncomplete in the other 8 (50%). Clinical follow-up of 2 months to 5 years (mean 26 ± 11 months) demonstrated no rebleeding, with the modified Rankin scale score of 0-2 in 20 patients, 3 in 4, and 6 in the remaining 2.
Conclusions: Intracranial aneurysms concomitant with severe adjacent atherosclerotic stenosis can be successfully treated endovascularly, and careful evaluation of the characteristics of the aneurysm, parent artery, stenosis and collateral circulation can help reduce complications. abstract_id: PUBMED:16987162 Matrix metalloproteinases 2 and 9 in human atherosclerotic and non-atherosclerotic cerebral aneurysms. Matrix metalloproteinases 2 and 9 (MMP 2 and -9) have been implicated in the pathogenesis of atherosclerosis and aneurysm formation. The goal of the study was to establish the role of these metalloproteinases in both human atherosclerotic and non-atherosclerotic cerebral aneurysms. Eleven cerebral aneurysms (four atherosclerotic, seven non-atherosclerotic) were immunohistochemically stained for MMP 2 and -9. As controls, atherosclerotic and normal Circle of Willis arteries were similarly immunostained. All specimens were retrieved at autopsy and were paraffin-embedded. In order to evaluate the real MMP 2 and -9 activities, gelatin zymography was also performed in only two available specimens of non-atherosclerotic intracranial aneurysms, because of the relative unavailability of fresh intracranial aneurysm tissue (i.e. reluctance to excise the aneurysm fundus at surgery). Our data establish that MMP 2 and -9 were expressed minimally or not at all in normal Circle of Willis arteries but were strongly expressed in medial smooth muscle cells of atherosclerotic Circle of Willis arteries. In the aneurysm group, both MMP 2 and -9 were strongly expressed in the atherosclerotic aneurysms, but MMP 2 alone was detected in the non-atherosclerotic aneurysms. Zymography revealed a weak enzyme activity correlating to MMP 9 standard recombinant protein. MMP 2 activity was not demonstrated in either specimen. This study shows that the expression of MMP 2 and -9 is associated with atherosclerosis, be it in aneurysmal or non-aneurysmal cerebral vessels but MMP 2 appears to be specifically expressed in aneurysms devoid of atherosclerosis perhaps suggesting a pathogenic role for MMP 2 in the alteration of the extracellular matrix of cerebral arteries during aneurysm formation. abstract_id: PUBMED:31627063 Relationship between cerebrovascular atherosclerotic stenosis and rupture risk of unruptured intracranial aneurysm: A single-center retrospective study. Objectives: Cerebrovascular atherosclerotic stenosis (CAS) and intracranial aneurysm (IA) have a common underlying arterial pathology and common risk factors, but the clinical significance of CAS in IA rupture (IAR) is unclear. This study aimed to investigate the effect of CAS on the risk of IAR. Patients And Methods: A total of 336 patients with 507 saccular IAs admitted at our center were included. Univariable and multivariable logistic regression analyses were performed to determine the association between IAR and the angiographic variables for CAS. We also explored the differences in CAS in patients aged <65 and ≥65 years. Results: In all the patient groups, moderate (50%-70%) cerebrovascular stenosis was significantly associated with IAR (odds ratio [OR], 3.4; 95% confidence interval [CI], 1.8-6.5). Single cerebral artery stenosis was also significantly associated with IAR (OR, 2.3; 95% CI, 1.3-3.9), and intracranial stenosis may be a risk factor for IAR (OR, 1.8; 95% CI, 1.0-3.2).
In addition, IAs with lobulation may be at a higher risk for rupture than IAs with regular shape (OR, 2.6; 95% CI, 1.1-5.8; P = 0.026), although the same was not true of aneurysms with a daughter sac (OR, 1.8; 95% CI, 0.9-3.7; P = 0.098). Bifurcation location (OR, 2.4; 95% CI, 1.5-3.8; P < 0.001) was significantly associated with aneurysmal rupture. For the patient subgroup aged <65 years, rupture risk was higher for aneurysms with moderate stenosis (OR, 3.4; 95% CI, 1.8-6.5). For patients aged ≥65 years, single-artery stenosis (OR, 1.9; 95% CI, 1.2-3.0) was statistically associated with IAR. Conclusions: We observed substantial differences in the severity of atherosclerotic stenosis, parent-artery stenosis, number of stenotic arteries, and intracranial/extracranial stenosis as indicators between ruptured and unruptured aneurysms. CAS is significantly associated with the risk of intracranial aneurysm rupture, whether in patients aged ≥65 years or <65 years. These findings indicate the clinical significance of CAS in IAR. abstract_id: PUBMED:23466011 Smooth muscle cells of human intracranial aneurysms assume phenotypic features similar to those of the atherosclerotic plaque. Objectives: Characterize the phenotypic features of smooth muscle cells (SMCs) in the wall of human saccular intracranial aneurysms (sIAs). Methods And Results: We investigated by means of immunohistochemistry the expression of the cytoskeletal differentiation markers α-smooth muscle actin (α-SMA), smooth muscle myosin heavy chains (SMMHCs), and smoothelin in 26 sIAs and 15 nonaneurysmal cerebral arteries. In addition, S100A4, a recently identified marker of dedifferentiated SMCs in atherosclerotic plaques, was also investigated. Six sIAs and 5 nonaneurysmal arteries were used for morphometric analysis. sIAs displayed a significant medial atrophy compared with nonaneurysmal cerebral arteries; moreover, sIA SMCs showed marked decrease of α-SMA and SMMHCs expression and disappearance of smoothelin. Unexpectedly, S100A4 was strongly up-regulated in media SMCs of sIAs. Conclusions: In sIAs, media SMCs acquire a dedifferentiated phenotype and show de novo expression of S100A4, characteristic features of atherosclerotic plaque SMCs. abstract_id: PUBMED:27703491 Computational Hemodynamic Analysis for the Diagnosis of Atherosclerotic Changes in Intracranial Aneurysms: A Proof-of-Concept Study Using 3 Cases Harboring Atherosclerotic and Nonatherosclerotic Aneurysms Simultaneously. This was a proof-of-concept computational fluid dynamics (CFD) study designed to identify atherosclerotic changes in intracranial aneurysms. We selected 3 patients with multiple unruptured aneurysms including at least one with atherosclerotic changes and investigated whether an image-based CFD study could provide useful information for discriminating the atherosclerotic aneurysms. Patient-specific geometries were constructed from three-dimensional data obtained using rotational angiography. Transient simulations were conducted under patient-specific inlet flow rates measured by phase-contrast magnetic resonance velocimetry. In the postanalyses, we calculated time-averaged wall shear stress (WSS), oscillatory shear index, and relative residence time (RRT). The volume of blood flow entering aneurysms through the neck and the mean velocity of blood flow inside aneurysms were examined. We applied the age-of-fluid method to quantitatively assess the residence of blood inside aneurysms.
Atherosclerotic changes coincided with regions exposed to disturbed blood flow, as indicated by low WSS and long RRT. Blood entered aneurysms in phase with inlet flow rates. The mean velocities of blood inside atherosclerotic aneurysms were lower than those inside nonatherosclerotic aneurysms. Blood in atherosclerotic aneurysms was older than that in nonatherosclerotic aneurysms, especially near the wall. This proof-of-concept study demonstrated that CFD analysis provided detailed information on the exchange and residence of blood that is useful for the diagnosis of atherosclerotic changes in intracranial aneurysms. Answer: Cerebral aneurysms can be associated with atherosclerotic changes, but not all cerebral aneurysms are atherosclerotic. Atherosclerosis is a condition characterized by the buildup of plaque in the arteries, which can lead to vessel wall weakening and aneurysm formation. Several studies have investigated the relationship between atherosclerosis and cerebral aneurysms. For instance, one study found that atherosclerotic changes were often seen in intracranial vessels or within an aneurysm wall or neck area during surgery (PUBMED:31140180). Another study identified that the expression of matrix metalloproteinases 2 and 9, which are associated with atherosclerosis, was strong in atherosclerotic aneurysms (PUBMED:16987162). Additionally, a study that characterized the phenotypic features of smooth muscle cells in the wall of human saccular intracranial aneurysms found that these cells acquire a dedifferentiated phenotype and show de novo expression of S100A4, which are characteristic features of atherosclerotic plaque smooth muscle cells (PUBMED:23466011). However, the impact of atherosclerotic factors on cerebral aneurysmal formation appears to be location-dependent. For example, the incidence of middle cerebral artery aneurysms was significantly higher in patients with cerebrovascular diseases than in healthy controls, suggesting a stronger impact of atherosclerotic factors on aneurysms in certain locations (PUBMED:25156785). Furthermore, studies have also shown that atherosclerotic changes can be predictors of outcomes in aneurysm surgery. Factors such as the Framingham Risk Score, diabetes mellitus, and aneurysm size have been identified as independent risk factors for atherosclerotic change in patients undergoing aneurysm surgery (PUBMED:31228701). In summary, while atherosclerosis can be associated with cerebral aneurysms and may influence their formation and surgical outcomes, not all cerebral aneurysms are necessarily atherosclerotic. The relationship between atherosclerosis and cerebral aneurysms is complex and may vary depending on the location and characteristics of the aneurysm.
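The indices used in the CFD proof-of-concept study above (PUBMED:27703491) follow widely used definitions: the oscillatory shear index (OSI) measures how much the wall shear stress changes direction over the cardiac cycle, and the relative residence time (RRT) combines OSI with the time-averaged shear magnitude. A one-dimensional sketch on a synthetic wall-shear-stress trace (values invented, not taken from the cited simulations):

```python
# Common definitions of TAWSS, OSI and RRT, evaluated on a synthetic, signed
# wall-shear-stress trace at one wall point over one cardiac cycle.
# The waveform is invented; in practice these indices come from the CFD solution.

def hemodynamic_indices(wss_samples, dt):
    period = dt * len(wss_samples)
    mean_vector = abs(sum(wss_samples)) * dt / period                 # |time average of tau|
    mean_magnitude = sum(abs(w) for w in wss_samples) * dt / period   # TAWSS
    osi = 0.5 * (1.0 - mean_vector / mean_magnitude)                  # 0 (unidirectional) to 0.5 (purely oscillatory)
    rrt = 1.0 / ((1.0 - 2.0 * osi) * mean_magnitude)                  # long RRT marks stagnant, disturbed flow
    return mean_magnitude, osi, rrt

samples = [1.2, 0.8, 0.3, -0.2, -0.4, 0.1, 0.6, 1.0]  # Pa, synthetic cycle
tawss, osi, rrt = hemodynamic_indices(samples, dt=0.1)
print(f"TAWSS = {tawss:.2f} Pa, OSI = {osi:.2f}, RRT = {rrt:.2f} 1/Pa")
```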
Instruction: Total fertilization failure: is it the end of the story? Abstracts: abstract_id: PUBMED:24962788 Total fertilization failure: is it the end of the story? Purpose: To study parameters that could predict in-vitro fertilization (IVF) success in patients who experienced total fertilization failure (TFF) with intracytoplasmic sperm injection (ICSI) in their previous cycles. Methods: Cycle characteristics of patients with TFF (Group I, n = 136 cycles), cycles resulting in embryo transfer (ET) following TFF (Group II, n = 36 cycles) and recurrent TFF (Group III, n = 25 cycles) were studied retrospectively. Demographic features and cycle characteristics of the three groups were compared. Results: Follicle count measuring 15-17 mm was significantly higher in group II when compared to group I (p = 0.02). Total number of retrieved oocytes and mature oocytes were significantly higher in group II when compared to groups I and III (p = 0.001). Estradiol level at oocyte pick up (OPU) day was significantly higher in group II when compared to group I (p = 0.02). When the characteristics of ET cycles and preceding TFF cycles of the same patient were compared, total number of retrieved oocytes (5.11 ± 0.72 (95% CI 3.69-6.52) vs. 11.44 ± 1.60 (95% CI 5.29-17.59)) and mature oocytes (3.26 ± 3.66 (95% CI 2.04-4.47) vs. 6.92 ± 5.61 (95% CI 5.09-8.75)) were found to be significantly lower in TFF cycles (p = 0.001). Five biochemical and 5 clinical pregnancies occurred while only 2 healthy babies were born, corresponding to a live birth rate of 5.5%. Conclusions: Increasing the number of retrieved and mature oocytes may increase the success of fertilization in patients with a history of previous failed fertilization. However, live birth rate is still low in embryo transfer cycles. abstract_id: PUBMED:25587256 Analysis of 232 total fertilization failure cycles during intracytoplasmic sperm injection. Background: The intracytoplasmic sperm injection procedure ending with total fertilization failure is a very distressful event for both the clinician and the patient. Objective: The aim of this study was to identify independent factors which could be used to identify total fertilization failure before the day of intracytoplasmic sperm injection. Materials And Methods: This was a retrospective study of 232 patients who were admitted to a tertiary-care hospital IVF Unit and showed total fertilization failure during intracytoplasmic sperm injection cycles. To sort out the intertwined effects of female age, basal FSH, sperm quality, antral follicle count, starting dose of gonadotrophin, sperm extraction technique, cycle length, >14 mm follicle number, oocyte number after oocyte pick up, estradiol and progesterone level on the day of hCG and the MI, MII and GV oocyte number on the fertilization, multiple logistic regression analysis was used. Results: The total fertilization failure rate was 6% and the recurrence rate was 23%. The original model illustrated that the presence of GV oocytes, total oocyte number less than six, <2000 pg/mL E2 concentration on the day of hCG and testicular sperm extraction increases the total fertilization failure risk. Conclusion: It is very difficult to predict total fertilization failure. Sometimes even with one good quality oocyte and sperm and in the case of globozoospermia fertilization can be achieved. Not only azoospermia but also low oocyte numbers increase the chance of total fertilization failure even after intracytoplasmic sperm injection.
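The risk-factor analysis in PUBMED:25587256 reports adjusted effects from multiple logistic regression; the underlying building block is the odds ratio with its confidence interval. A sketch of the crude (unadjusted) version on invented counts, shown only to make the arithmetic explicit:

```python
import math

# Crude odds ratio with a 95% Wald confidence interval from a 2x2 exposure table.
# Counts are invented and unrelated to the cited cohorts; the studies above report
# adjusted (multivariable) estimates rather than this crude form.

def odds_ratio_ci(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls, z=1.96):
    or_ = (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)
    se_log_or = math.sqrt(1 / exposed_cases + 1 / exposed_controls
                          + 1 / unexposed_cases + 1 / unexposed_controls)
    low = math.exp(math.log(or_) - z * se_log_or)
    high = math.exp(math.log(or_) + z * se_log_or)
    return or_, low, high

# Hypothetical: TFF in 12/40 cycles with fewer than six oocytes vs. 18/198 cycles with six or more.
or_, low, high = odds_ratio_ci(exposed_cases=12, exposed_controls=28,
                               unexposed_cases=18, unexposed_controls=180)
print(f"OR = {or_:.2f} (95% CI {low:.2f}-{high:.2f})")
```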
abstract_id: PUBMED:37382786 Development and validation of a conventional in vitro total fertilization failure prediction model. Background: Conventional total fertilization failure (TFF) is a challenging problem for clinicians. The predictive model developed in this study aims to predict the individual probability of conventional in vitro total fertilization failure. Methods: The prediction model was developed based on 1635 patients who underwent first-attempt in vitro fertilization (IVF) cycles from January 2018 to January 2020. Total fertilization failure and normal fertilization occurred in 218 and 1417 cycles, respectively. Multivariate logistic regression analyses were used to develop the prediction model. Performance of our model was evaluated using calibration (Hosmer-Lemeshow test) and discrimination (area under the receiver operating characteristic curve [AUC]). Results: Thirteen risk factors for TFF were included in the prediction model, as follows: female age; female body mass index; infertility duration; number of oocytes retrieved; stimulation protocol; infertility etiology; infertility diagnosis; male age; sperm concentration; total sperm motility; normal sperm morphology percentage; swim-up sperm motility; and swim-up sperm concentration. The AUC of our model was 0.815 (95% CI: 0.783-0.846), indicating satisfactory discrimination performance. Conclusion: Considering female and male factors (especially sperm parameters), we established a model that predicts the probability of TFF in conventional IVF procedures that will be helpful in the laboratory supporting IVF to facilitate physicians in determining optimal treatment. abstract_id: PUBMED:29021901 ICSI diagnostic: a way to prevent total fertilization failure after 4 unsuccessful IUI. Background: The aim of this retrospective study is to investigate the relevance of dividing oocytes and using some for traditional in vitro fertilization (IVF) and others for intracytoplasmic sperm injection (ICSI) as of the first IVF cycle in patients with unexplained infertility who have undergone 4 intrauterine insemination (IUI) cycles which produced no pregnancies. Methods: This retrospective study includes patients with unexplained infertility who have failed to become pregnant, after 4 IUI, despite normal semen parameters after sperm capacitation. These women were treated in our assisted fertilization program from 2008 until 2015. We analysed the first cycles of women in whom more than 4 oocyte cumulus complexes (OCC) were retrieved and single embryo transfer was performed. Results: Dividing oocytes between two fertilization techniques reduces the rate of total fertilization failure during the first IVF cycle. No statistical difference was observed for the 2 pronuclei (PN) rate between the two techniques. On the other hand, we observed a significantly lower rate of 3 PN, 1 PN, 0 PN with ICSI in comparison with conventional fertilization. Conclusions: Splitting the oocytes between classical IVF and ICSI increases the chance of embryo transfer on a first IVF cycle after 4 unsuccessful IUI cycles. This half-and-half policy reduces the risk, for the infertile couple, of facing total failure of fertilization and also can provide useful information for the next attempts. abstract_id: PUBMED:25101018 More than pretty pictures? How illustrations affect parent-child story reading and children's story recall.
Previous research showed that story illustrations fail to enhance young preschoolers' memories when they accompany a pre-recorded story (e.g., Greenhoot and Semb, 2008). In this study we tested whether young children might benefit from illustrations in a more interactive story-reading context. For instance, illustrations might influence parent-child reading interactions, and thus children's story comprehension and recall. Twenty-six 3.5- to 4.5-year-olds and their primary caregivers were randomly assigned to an Illustrated or Non-Illustrated story-reading condition, and parents were instructed to "read or tell the story" as they normally would read with their child. Children recalled the story after a distracter and again after 1 week. Analyses of the story-reading interactions showed that the illustrations prompted more interactive story reading and more parent and child behaviors known to predict improved literacy outcomes. Furthermore, in the first memory interview, children in the Illustrated condition recalled more story events than those in the Non-Illustrated condition. Story reading measures predicted recall, but did not completely account for picture effects. These results suggest that illustrations enhance young preschoolers' story recall in an interactive story reading context, perhaps because the joint attention established in this context supports children's processing of the illustrations. abstract_id: PUBMED:36568932 Total fertilization failure with in vitro fertilization-intracytoplasmic sperm injection related to WEE2 mutation highlights emerging importance of genetic causes of in vitro fertilization failure. Objective: To report a unique case of total fertilization failure (TFF) after in vitro fertilization with intracytoplasmic sperm injection related to homozygous WEE2 gene mutation and summarize the current literature and management of TFF. Design: Case report. Setting: Academic fertility center. Patients: A 25-year-old woman and her 35-year-old partner with a history of near-complete fertilization failure after 2 cycles of in vitro fertilization/intracytoplasmic sperm injection. Interventions: Consultation with medical and commercial genetic testing for WEE2, PLCZ1, and TLE6. Main Outcome Measures: Oocyte fertilization. Results: The patient was homozygous for WEE2 pathogenic variant impacting oocyte activation and resulting in infertility. Conclusions: In the setting of TFF, early consideration should be given to genetic testing to assist couples in clinical decision-making and help limit the financial and emotional burden associated with unsuccessful fertility intervention. abstract_id: PUBMED:35321222 Finding the Story. Story is the oldest known way of sharing knowledge and information and engages us in our collective humanity. In research settings, story brings meaning to complex ideas, making them feel palpable and connects us with our audience. Historically, the disciplines that take a research interest in the importance of narrative have been largely in fields like the philosophy of science/medicine, medical humanities, and sociology though story is "always already" a part of scientific research. Humanities have gained traction in medical and science education, and researchers are seeking such curricula to communicate more effectively with the public and their students. We believe that story is an effective tool to enable CTS investigators to be effective educators and communicators of translational science. 
Story-based interdisciplinary pedagogy emphasizes an approach encouraging clinical researchers to keep the human story as the driving force of research design, dissemination, and application of research to diverse audiences. In this article, we provide backgrounds on successful programs that have used story in science communication and education as well as a tool researchers can use to incorporate the structure of story into their own work. abstract_id: PUBMED:24790158 Human sperm devoid of germinal angiotensin-converting enzyme is responsible for total fertilization failure and lower fertilization rates by conventional in vitro fertilization. In conventional in vitro fertilization (IVF), complete failure of fertilization occurs in 5% to 15% of treatments. Although the causes may be unclear, sperm defects appear to be the major contributor. However, a convincing test is not yet available that can predict the risk of fertilization failure. In this study, we found that germinal angiotensin-converting enzyme (gACE) (also called testicular ACE) was undetectable in sperm from patients who had total fertilization failure (TFF) and lower fertilization rates (LFRs) by IVF based on Western blot and indirect immunofluorescence analyses. Additionally, almost all of the patients without gACE on sperm (23 of 25) manifested a TT genotype of the rs4316 single-nucleotide polymorphism of ACE. Overall, our results indicate that the absence of gACE expression is responsible for TFF and LFRs by IVF. The rs4316 polymorphism of ACE might be associated with infertility in those patients. We conclude that sperm lacking gACE may be recognized before commencing IVF and that the patients may be directed instead to consider intracytoplasmic sperm injection. abstract_id: PUBMED:29577757 Association between sperm mitochondrial ND2 gene variants and total fertilization failure. The objective of this study was to explore the association of sperm mitochondrial ND2 (MT-ND2) gene variants with total fertilization failure (TFF). A retrospective comparative study of 246 cases of fresh in vitro fertilization (IVF) cycles or half-intracytoplasmic sperm injection cycles in the Han Chinese population was performed from July 2011 to May 2017. A total of 59 cases undergoing TFF, and 187 control cases with normal fertilization (fertilization rates >50%) were included. The sperm mitochondrial genovariation was determined using nested sequencing. A total of 32 homoplasmic variants and 47 heteroplasmic variants of MT-ND2 gene were observed in this study. There were no significant differences in the frequencies of the 32 homoplasmic variants of MT-ND2 gene between the TFF and control groups. A total of 53 pair-wise comparisons were performed, and the general characteristics of the IVF failure and control subjects were adjusted in logistic models. Data suggested that there were no significant differences in the frequencies of point 4914, 5320, and 5426 heteroplasmic variants of MT-ND2 gene between the TFF and control groups. In addition, no significant difference was observed in the frequency of mtDNA haplogroup D or haplogroup G between the IVF failure group and the normal fertilization group. This study suggests that the MT-ND2 gene variants might not be associated with TFF.
Abbreviations: ATP: adenosine triphosphate; dNTP: deoxy-ribonucleoside triphosphate; FADH2: flavin adenine dinucleotide; FDR: false discovery rate; FSH: follicle-stimulating hormone; IVF: in vitro fertilization; LH: luteinizing hormone; MTATP6: mitochondrially encoded ATP synthase 6; MTCYB: mitochondrially encoded cytochrome b; mtDNA: mitochondrial DNA; MT-ND2: mitochondrial ND2; NADH: nicotinamide adenine dinucleotide; ND2: NADH dehydrogenase subunit 2; OXPHOS: oxidative phosphorylation; PCR: polymerase chain reaction; SNPs: single nucleotide polymorphisms; TFF: total fertilization failure. abstract_id: PUBMED:34368125 A Novel Assisted Oocyte Activation Method Improves Fertilization in Patients With Recurrent Fertilization Failure. Total fertilization failure (TFF) occurs in 1-3% of total intracytoplasmic sperm injection (ICSI) cycles and can reoccur in subsequent cycles. Despite the high success rate with the application of assisted oocyte activation (AOA), there is still a small number of couples who cannot obtain fertilized eggs after conventional calcium (Ca2+) ionophores-based ICSI-AOA. Six couples experiencing repeated TFF or low fertilization (<10%) after ICSI and conventional ICSI-AOA were enrolled in this study.
For example, a homozygous WEE2 gene mutation has been linked to TFF due to its impact on oocyte activation (PUBMED:36568932). Furthermore, the absence of germinal angiotensin-converting enzyme (gACE) in sperm has been implicated in TFF and lower fertilization rates in conventional IVF (PUBMED:24790158). While some studies have explored the association between sperm mitochondrial gene variants and TFF, no significant associations have been found (PUBMED:29577757). However, novel assisted oocyte activation (AOA) methods, such as a combination of cycloheximide (CHX) and ionomycin, have shown promise in improving fertilization rates in patients with recurrent TFF (PUBMED:34368125). In conclusion, while TFF is a challenging and distressing event for couples undergoing fertility treatments, it is not the definitive end. Ongoing research and new techniques continue to improve the understanding and management of TFF, offering hope for better outcomes in future IVF cycles.
Instruction: An anatomy-based lumped parameter model of cerebrospinal venous circulation: can an extracranial anatomical change impact intracranial hemodynamics? Abstracts: abstract_id: PUBMED:26099795 An anatomy-based lumped parameter model of cerebrospinal venous circulation: can an extracranial anatomical change impact intracranial hemodynamics? Background: The relationship between extracranial venous system abnormalities and central nervous system disorders has been recently theorized. In this paper we delve into this hypothesis by modeling the venous drainage in brain and spinal column areas and simulating the intracranial flow changes due to extracranial morphological stenoses. Methods: A lumped parameter model of the cerebro-spinal venous drainage was created based on anatomical knowledge and vessels diameters and lengths taken from literature. Each vein was modeled as a hydraulic resistance, calculated through Poiseuille's law. The inputs of the model were arterial flow rates of the intracranial, vertebral and lumbar districts. The effects of the obstruction of the main venous outflows were simulated. A database comprising 112 Multiple Sclerosis patients (Male/Female = 42/70; median age ± standard deviation = 43.7 ± 10.5 years) was retrospectively analyzed. Results: The flow rate of the main veins estimated with the model was similar to the measures of 21 healthy controls (Male/Female = 10/11; mean age ± standard deviation = 31 ± 11 years), obtained with a 1.5 T Magnetic Resonance scanner. The intracranial reflux topography predicted with the model in cases of internal jugular vein diameter reduction was similar to those observed in the patients with internal jugular vein obstacles. Conclusions: The proposed model can predict physiological and pathological behaviors with good fidelity. Despite the simplifications introduced in cerebrospinal venous circulation modeling, the key anatomical feature of the lumped parameter model allowed for a detailed analysis of the consequences of extracranial venous impairments on intracranial pressure and hemodynamics. abstract_id: PUBMED:28664122 Numerical reproduction of hemodynamics change by acupuncture on Taichong (LR-3) based on the lumped-parameter approximation model of the systemic arteries. Background: The aim of this study was to develop a mathematical model of blood flow in the systemic circulation to emulate the change in hemodynamics by acupuncture therapy to elucidate the mechanism of the therapy. For this purpose, as a first step, a simple model of arterial blood flow was presented to reproduce previously reported change in the blood flow volume by the acupuncture needle stimulation of Taichong (LR-3). Methods: This model was based on the lumped-parameter approximation of arterial blood flow together with linear resistance of peripheral circulation. It has been reported that blood flow in the left arm was enhanced after the stimulation, yielding the peripheral vascular resistance-regulated blood flow dominated by the sympathetic nervous system. In addition to the peripheral resistance, another parameter that possibly regulates the blood flow is the cross-sectional area of the vessel. These two factors were changed to numerically examine their contributions to the blood flow based on the hypothesis that they could be changed by the stimulation. The numerical result was compared with the experimental result to confirm the validity of the hypothesis that the blood flow in the arm is regulated by the peripheral resistance. 
Results: This model is extremely simple and the physical parameters introduced for the simulation were gleaned from different reports in the literature. It was demonstrated, however, that regulation of the peripheral resistance rather than of the cross-sectional area could reproduce the experimentally observed change in the blood flow. Moreover, the relationship between the changes in the flow volume and the systemic vascular resistance quantitatively matched the experimental data. Conclusion: The present model has the potential to emulate hemodynamic change by acupuncture therapy by incorporating the physiological correlation between stimulation of an acupoint and regulation of parameters that affect the hemodynamics. abstract_id: PUBMED:29270856 Lumped parameter model for hemodynamic simulation of congenital heart diseases. The recent development of computer technology has made it possible to simulate the hemodynamics of congenital heart diseases on a desktop computer. However, multi-scale modeling of the cardiovascular system based on computed tomographic and magnetic resonance images still requires long simulation times. The lumped parameter model is potentially beneficial for real-time bedside simulation of congenital heart diseases. In this review, we introduce the basics of the lumped parameter model (time-varying elastance chamber model combined with modified Windkessel vasculature model) and illustrate its usage in hemodynamic simulation of congenital heart diseases using examples such as hypoplastic left heart syndrome and Fontan circulation. We also discuss the advantages of the lumped parameter model and the problems for clinical use. abstract_id: PUBMED:34758345 Parameter estimation for closed-loop lumped parameter models of the systemic circulation using synthetic data. Physics-based models can be applied to describe mechanisms in both health and disease, which has the potential to accelerate the development of personalized medicine. The aim of this study was to investigate the feasibility of personalizing a model of systemic hemodynamics by estimating model parameters. We investigated the feasibility of estimating model parameters for a closed-loop lumped parameter model of the left heart and systemic circulation using the step-wise subset reduction method. This proceeded by first investigating the structural identifiability of the model parameters. Secondly, we performed sensitivity analysis to determine which parameters were most influential on the most relevant model outputs. Finally, we constructed a sequence of progressively smaller subsets including parameters based on their ranking by model output influence. The model was then optimized to data for each set of parameters to evaluate how well the parameters could be estimated for each subset. The subsequent results allowed assessment of how different data sets and noise affected the parameter estimates. In the noiseless case, all parameters could be calibrated to less than 10⁻³% error using time series data, while errors using clinical index data could reach over 100%. With 5% normally distributed noise the accuracy was limited to within 10% error for the five most sensitive parameters, while the four least sensitive parameters were unreliably estimated for waveform data. The three least sensitive parameters were particularly challenging to estimate so these should be prioritized for measurement.
Cost functions based on time series, such as pressure waveforms, were found to give better parameter estimates than cost functions based on standard indices used in clinical assessment of the cardiovascular system, for example stroke volume (SV) and pulse pressure (PP). Averaged parameter estimate errors were reduced by several orders of magnitude by choosing waveforms for noiseless synthetic data. Also, when measurement data were noisy, the parameter estimation procedure based on continuous waveforms was more accurate than that based on clinical indices. By application of the stepwise subset reduction method we demonstrated that the addition of venous pressure to the cost function, or conversely fixing the systemic venous compliance parameter at an accurate value, improved all parameter estimates, especially the diastolic filling parameters, which have the least influence on the aortic pressure. abstract_id: PUBMED:34198422 An integrated lumped-parameter model of the cardiovascular system for the simulation of acute ischemic stroke: description of instantaneous changes in hemodynamics. Acute Ischemic Stroke (AIS) is defined as the acute condition of occlusion of a cerebral artery and is often caused by a Hypertensive Condition (HC). Due to its sudden occurrence, AIS is not observable at the moment it occurs, thus information about instantaneous changes in hemodynamics is limited. This study aimed to propose an integrated Lumped Parameter (LP) model of the cardiovascular system to simulate an AIS and describe instantaneous changes in hemodynamics. In the integrated LP model of the cardiovascular system, heart chambers have been modelled with elastance systems with controlled pressure inputs; heart valves have been modelled with static open/closed pressure-controlled valves; eventually, the vasculature has been modelled with resistor-inductor-capacitor (RLC) direct circuits and linked to the rest of the system through a series connection. After simulating physiological conditions, HC has been simulated by changing pressure inputs and constant RLC parameters. Then, AIS occurring in arteries of different sizes have been simulated by considering time-dependent RLC parameters due to the elimination from the model of the occluding artery; instantaneous changes in hemodynamics have been evaluated by Systemic Arteriolar Flow (Qa) and Systemic Arteriolar Pressure (Pa) drop with respect to those measured in HC. Occlusion of arteries of different sizes led to an average Qa drop of 0.38 ml/s per cardiac cycle (with minimum and maximum values of 0.04 ml/s and 1.93 ml/s) and an average Pa drop of 0.39 mmHg (with minimum and maximum values of 0.04 mmHg and 1.98 mmHg). In conclusion, hemodynamic variations due to AIS are very small with respect to HC. A direct relation between the inverse of the length of the artery in which the occlusion occurs and the hemodynamic variations has been highlighted; this may make it possible to link the severity of AIS to the length of the involved artery. abstract_id: PUBMED:38277058 Interrelation Between Cerebrospinal Fluid Pressure, Intracranial Morphology and Venous Hemodynamics Studied by 4D Flow MRI. Purpose: To quantify the effects of CSF pressure alterations on intracranial venous morphology and hemodynamics in idiopathic intracranial hypertension (IIH) and spontaneous intracranial hypotension (SIH) and assess reversibility when the underlying cause is resolved.
Methods: We prospectively examined venous volume, intracranial venous blood flow and velocity, including optic nerve sheath diameter (ONSD) as a noninvasive surrogate of CSF pressure changes in 11 patients with IIH, 11 age-matched and sex-matched healthy controls and 9 SIH patients, before and after neurosurgical closure of spinal dural leaks. We applied multiparametric MRI including 4D flow MRI, time-of-flight (TOF) and T2-weighted half-Fourier acquisition single-shot turbo-spin echo (HASTE). Results: Sinus volume overlapped between groups at baseline but decreased after treatment of intracranial hypotension (p = 0.067) along with a significant increase of ONSD (p = 0.003). Blood flow in the middle and dorsal superior sagittal sinus was remarkably lower in patients with higher CSF pressure (i.e., IIH versus controls and SIH after CSF leak closure) but blood flow velocity was comparable cross-sectionally between groups and longitudinally in SIH. Conclusion: We were able to demonstrate the interaction of CSF pressure, venous volumetry, venous hemodynamics and ONSD using multiparametric brain MRI. Closure of CSF leaks in SIH patients resulted in symptoms suggestive of increased intracranial pressure and caused a subsequent decrease of intracranial venous volume and of blood flow within the superior sagittal sinus while ONSD increased. In contrast, blood flow parameters from 4D flow MRI did not discriminate IIH, SIH and controls as hemodynamics at baseline overlapped at most vessel cross-sections. abstract_id: PUBMED:32899110 Broadband Lumped-Element Parameter Extraction Method of Two-Port 3D MEMS In-Chip Solenoid Inductors Based on a Physics-Based Equivalent Circuit Model. Integrated 2D spiral inductors possess low inductance per unit area, which limits their application range. However, the state of investigation into the lumped-element parameter extraction method for integrated 3D in-chip multi-turn solenoid inductors, which possess higher inductance per unit area, is inadequate. This type of inductor can thus not be incorporated into fast computer-aided design (CAD)-assisted circuit design. In this study, we propose a broadband two-port physics-based equivalent circuit model for 3D microelectromechanical system (MEMS) in-chip solenoid inductors that are embedded in silicon substrates. The circuit model was composed of lumped elements with specific physical meanings and incorporated complicated parasitics resulting from eddy currents, skin effects, and proximity effects. Based on this model, we presented a lumped-element parameter extraction method using the electronic design automation software package, Agilent Advanced Design System (ADS). This method proved to be consistent with the results of two-port testing at low to self-resonant frequencies and could thus be used in CAD-assisted circuit design. The lumped element value variations were analyzed based on the physical meaning of the elements with respect to variations in structures and the substrate resistivity of inductors. This provided a novel perspective in terms of the design of integrated in-chip solenoid inductors. abstract_id: PUBMED:36978759 Integrated Thermofluid Lumped Parameter Model for Analyzing Hemodynamics in Human Fatigue State. It is well known that driving while fatigued is dangerous and can lead to serious traffic accidents. However, there is a lack of studies on the mechanism of fatigue. 
This paper sought to infer changes in the cardiovascular system through hand and head skin temperature peripheral factors via an integrated lumped parameter model. A multi-layer inner structure with variable blood perfusion was used to construct a full-body thermal model. The cardiovascular system model provided blood perfusion using lumped parameters. The peripheral resistance and heart rate in the cardiovascular system model were adjusted to match the experimental temperatures of the head and hands obtained from induced fatigue experiments. The simulation results showed that the heart rate and blood pressure decreased, and the peripheral skin resistance of the hands and head increased after fatigue. A decrease in heart rate and an increase in peripheral resistance affect the magnitude of blood flow to the periphery of the body, leading to a decrease in skin temperature during fatigue. The present integrated model elucidates a key effect of human fatigue on the cardiovascular system, which is expected to help improve the accuracy of fatigue monitoring systems. abstract_id: PUBMED:17355060 A new lumped-parameter model of cerebrospinal hydrodynamics during the cardiac cycle in healthy volunteers. Our knowledge of cerebrospinal fluid (CSF) hydrodynamics has been considerably improved with the recent introduction of phase-contrast magnetic resonance imaging (phase-contrast MRI), which can provide CSF and blood flow measurements throughout the cardiac cycle. Key temporal and amplitude parameters can be calculated at different sites to elucidate the role played by the various CSF compartments during vascular brain expansion. Most of the models reported in the literature do not take into account CSF oscillation during the cardiac cycle and its kinetic energy impact on the brain. We propose a new lumped-parameter compartmental model of CSF and blood flows in healthy subjects during the cardiac cycle. The system was divided into five submodels representing arterial blood, venous blood, ventricular CSF, cranial subarachnoid space, and spinal subarachnoid space. These submodels are connected by resistances and compliances. The model developed was used to reproduce certain functional characteristics observed in seven healthy volunteers, such as the distribution (amplitude and phase shift) of arterial, venous, and CSF flows. The results show a good agreement between measured and simulated intracranial CSF and blood flows. abstract_id: PUBMED:36326994 Sensitivity Analysis of a Mathematical Model Simulating the Post-Hepatectomy Hemodynamics Response. Recently a lumped-parameter model of the cardiovascular system was proposed to simulate the hemodynamics response to partial hepatectomy and evaluate the risk of portal hypertension (PHT) due to this surgery. Model parameters are tuned based on each patient data. This work focuses on a global sensitivity analysis (SA) study of such model to better understand the main drivers of the clinical outputs of interest. The analysis suggests which parameters should be considered patient-specific and which can be assumed constant without losing in accuracy in the predictions. While performing the SA, model outputs need to be constrained to physiological ranges. An innovative approach exploits the features of the polynomial chaos expansion method to reduce the overall computational cost. The computed results give new insights on how to improve the calibration of some model parameters. 
Moreover the final parameter distributions enable the creation of a virtual population available for future works. Although this work is focused on partial hepatectomy, the pipeline can be applied to other cardiovascular hemodynamics models to gain insights for patient-specific parameterization and to define a physiologically relevant virtual population. Answer: Yes, an extracranial anatomical change can impact intracranial hemodynamics. This is supported by a study that created an anatomy-based lumped parameter model of cerebrospinal venous drainage, which was able to predict physiological and pathological behaviors with good fidelity. The model demonstrated that extracranial venous impairments, such as the obstruction of the main venous outflows, could have consequences on intracranial pressure and hemodynamics. Specifically, the model predicted intracranial reflux topography in cases of internal jugular vein diameter reduction, which was similar to those observed in patients with internal jugular vein obstacles (PUBMED:26099795).
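As an illustrative aside to the record above: the modeling approach described in PUBMED:26099795 treats each vein as a Poiseuille-law hydraulic resistance driven by prescribed arterial inflows. The short Python sketch below applies that idea to a deliberately toy two-branch outflow (one jugular-like path and one collateral path). All numbers in it, including the viscosity, vessel lengths and diameters, and the inflow rate, are assumed round values chosen only for illustration; they are not taken from the cited paper or from any patient data, and the sketch is not the authors' model.

```python
import math

MU = 3.5e-3  # blood dynamic viscosity in Pa*s (assumed value)

def poiseuille_resistance(length_m, diameter_m):
    """Hydraulic resistance of a cylindrical vessel: R = 128*mu*L / (pi*d^4)."""
    return 128.0 * MU * length_m / (math.pi * diameter_m ** 4)

def parallel(*resistances):
    """Equivalent resistance of venous branches draining in parallel."""
    return 1.0 / sum(1.0 / r for r in resistances)

# One collateral (vertebral-plexus-like) escape route with assumed dimensions.
r_collateral = poiseuille_resistance(length_m=0.25, diameter_m=0.004)

# Total cerebral venous outflow, roughly 700 ml/min, converted to m^3/s (assumed).
q_in = 700e-6 / 60.0

for label, jugular_diameter in [("baseline", 0.010), ("50% jugular narrowing", 0.005)]:
    r_jugular = poiseuille_resistance(length_m=0.15, diameter_m=jugular_diameter)
    r_total = parallel(r_jugular, r_collateral)
    dp = q_in * r_total                 # pressure drop across the outflow network
    q_jugular = dp / r_jugular          # flow kept by the jugular branch
    q_collateral = dp / r_collateral    # flow rerouted through the collateral
    print(f"{label}: jugular {q_jugular / q_in:.0%}, "
          f"collateral {q_collateral / q_in:.0%}, pressure drop {dp:.0f} Pa")
```

Even in this toy version, halving the jugular diameter shifts a substantial share of the outflow to the higher-resistance collateral branch and raises the upstream pressure drop by roughly an order of magnitude, which is the qualitative behavior the lumped parameter study exploits when relating extracranial stenoses to intracranial hemodynamics.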
Instruction: Are barriers in accessing health services in the Roma population associated with worse health status among Roma? Abstracts: abstract_id: PUBMED:23546390 Are barriers in accessing health services in the Roma population associated with worse health status among Roma? Objectives: The health of Roma has been found to be poorer than that of the majority population. The aim of this study was to explore the differences between Roma and non-Roma regarding perceived barriers in accessing health services. Furthermore, we aimed to assess the association between self-rated health status and Roma ethnicity and explore to what degree barriers in accessing health services explain this association. Methods: We used data from the cross-sectional HepaMeta study conducted in 2011 in Slovakia. The final sample comprised 452 Roma (mean age 34.7; 35.2 % men) and 403 (mean age 33.5; 45.9 % men) non-Roma respondents. Results: Roma ethnicity was found to be significantly associated with poorer self-rated health status. A considerable part of this association can be explained by barriers in accessing health services as perceived by Roma. Conclusions: Worse health in Roma is partially mediated by worse access to health services, apart from a large educational gap between Roma living in settlements and the majority population. Interventions should focus not only on health literacy among Roma but also on the health care system and health care professionals. abstract_id: PUBMED:37210718 Health status and cardiovascular risk of Roma and non-Roma population in underprivileged settlements Introduction: The "Taking the screening tests in place" program offers cardiovascular screening activities to the inhabitants of underprivileged settlements. Objective: Evaluation of the health status and cardiovascular risk of the Roma and non-Roma population in underprivileged settlements. Method: Information was collected about the demography, lifestyle, current illnesses, access to healthcare and the quality of patient information. General health check (body weight, height, blood pressure, blood sugar, ankle-brachial index) and cardiovascular examination were performed. Data were analyzed in Roma and non-Roma groups with Pearson's chi-squared test. Results: 3649 people participated in the study (851 [23%] men, 2798 [77%] women), 16% (598) of the investigated population belonged to the Roma population. The mean age of men in the general population was 58 years and of women 55 years, in the Roma population 48 years and 47 years. People in the Roma population smoked more often (men 45%, women 64%) than people in the general population (both sexes 30%). In the Roma population, the consumption of sugary soft drinks at least four times a week (men 55% vs. women 43%) and the BMI (men 30 vs. 29, woman 29 vs. 28) were significantly higher. In the Roma population, 31% of the men and 13% of the women thought that their health status was bad, while in the general population it was 17% of men and 8% of women. Incidence of COPD (18% vs. 9%), coronary disease (18% vs. 13%), peripheral artery disease (13% vs. 9%) was significantly higher in the Roma population among women. Conclusion: In the investigated population, the Roma inhabitants were significantly younger, smoked more, they were more obese, the chronic diseases were more common among them, and they considered their health status worse than the general population. Orv Hetil. 2023; 164(20): 792-799. 
abstract_id: PUBMED:24704770 Public health issues of the Roma and non-Roma unemployed in the Ózd microregion of Hungary Introduction: In their previous studies authors of the present work showed that public health situation and socioeconomic position of unemployed Roma persons in the Ozd microregion were well below the average of the whole Hungarian population. Aim: To continue these previous studies, the authors wanted to determine whether the greater proportion of the unemployed Roma persons in the Ózd microregion compared to the country average could contribute to the worse public health situation and the poor hygienic situation of the living environment of unemployed persons hardly or not suitable for learning. Method: Data from 400 unemployed Roma (96 males, 97 females) and caucasian non-Roma subjects (114 males, 93 females) obtained in 2012 and 2013 using self-completed and interview questionnaires were analysed. In addition, occupational medical examination methods were applied and the results were analysed. Results: It was found that all studied parameters (public health and epidemiological safety, in-door living environmental hygiene, conditions for learning) indicated significantly disadvantaged situation of the Roma compared to the non-Roma unemployed persons. There was a clear relationship between these examined parameters and other factors adversely influencing the quality of life of Roma persons (unemployment, deep poverty, lower level of education, shorter life span). Finally, the results showed that the quality of life conditions of the lowest 1/10, 1/5 of the non-Roma unemployed persons were comparable to those found in the average of Roma unemployed persons. Conclusions: The authors conclude that i) the greater proportion of unemployed Roma persons in the Ozd microregion contributes to the worse public health, epidemiological safety and the worse living conditions of unemployed persons in this regions of the country; ii) It would be essential to provide Roma persons with conditions appropriate for learning, which could enable them to get qualification necessary for employment. abstract_id: PUBMED:32252719 Foot health among the Roma population of southern Spain according to the foot health status questionnaire. Background: Foot health of the Roma population is a challenge for the health professionals where this minority is significant, as is the case in Spain. At present, little is known about foot health of the Roma population and their knowledge would promote the training of these professionals at the community level. Foot pain is common and a reason for consulting podiatry services. The purpose of this study was to determine foot health among the Roma population according to the Foot Health Status Questionnaire. Method: An observational, cross-sectional and quantitative study conducted at the Roma population living in Spain in 2018. Self-reported data and the Foot Health Status Questionnaire were recorded. Examining the general health and foot health (foot pain, foot function, footwear and general health) and general (general health, social capacity, physical activity and vigour). This questionnaire is recommended as a valid and reliable patient-reported outcome. The obtained scores were compared. Results: A sample made up of 624 men and women from the Roma population took part in this study. 45% were Roma men and 55% Roma women. 
In the first section of the FHSQ, a lower score of values was recorded in the footwear domain (62.5) and in the general foot health domain (60). Gypsy women obtained lower scores in all the domains. In the second section, lower scores were obtained in the vigour (56) domain and in the general health (60) domain. A large effect size (r-Rosenthal) was found by gender in the footwear domain (0.334) and in the vigour domain (0.195). Roma women showed higher values in cardiac disorders, serious illnesses, doctor visits and foot problems. 67.8% reported that they had never been assisted by a podiatrist. Conclusions: The studied Roma population has foot health problems, and these are more pronounced among women. They show lower values in the footwear and vigour domains. More professional training is required for health workers in this field to avoid cultural diversity stereotypes. abstract_id: PUBMED:32211359 Unmet Health Needs of Roma Women in the Two Biggest Roma Communities in the Republic of Srpska, Bosnia and Herzegovina. Background: Reasons for unmet health needs vary from individual to contextual determinants but are defined as the difference between needed health service and services actually received. Roma experience elevated health issues and challenging social conditions. Objective: The aim of this study was to explore the unmet health needs and potential risk factors among Roma women living in the two biggest Roma communities in the Republic of Srpska. Method: We conducted a health assessment of 183 adult Roma women in the Republic of Srpska. Unmet health needs were observed as the absence of needed medical supervision, despite having chronic conditions. We used logistic regression to assess the degree to which unmet health needs were related to the social determinants and the health status of Roma women. Results: The majority of Roma women were married or were in an unofficial relationship (55.2%), were without schooling (62.8%), and were unemployed (88.5%). The results showed that 94.0% had health insurance, had a health card, and were registered with a family medicine doctor. Sixty percent reported having a chronic disease; however, 68.2% reported that their chronic disease was not medically supervised. Roma women that had less education, those who were unemployed, and those who were divorced or widowed women were more likely to have unmet health needs. Conclusion: Roma women in Bijeljina and Prijedor have unmet health needs due to the circumstances they live in despite the fact that majority of them have health insurance and universal health access is legally guaranteed. abstract_id: PUBMED:26926238 Accessible health care for Roma: a gypsy's tale a qualitative in-depth study of access to health care for Roma in Ghent. Background: In general, vulnerable populations experience more problems in accessing health care. This also applies to the Roma-population. In the City of Ghent, Belgium, a relatively large group of Roma resides more or less permanently. The aim of this study is to explore the barriers this population encounters in their search for care. Methods: In this qualitative study using in-depth interviews the barriers to health care for the Roma in Ghent are explored. We interviewed 12 Roma and 13 professionals (volunteers, health care providers,...) who had regular contact with the Roma-population in Ghent. For both groups purposive sampling was used to achieve maximal variation regarding gender, age, nationality and legal status. 
Results: The Roma-population in Ghent encounters various barriers in their search for care. Financial constraints, not being able to reach health care and having problems to get through the complexity of the system are some of the most critical problems. Another important finding is the crucial role of trust between patient and care provider in the care-giving process. Conclusion: Roma share several barriers with other minority groups, such as: financial constraints, mobility issues and not knowing the language. However, more distinctive for this group is the lack of trust in care providers and health care in general. As a result, restraint and lack of communication form serious barriers for both patient and provider in their interaction. In order to ensure equitable access for Roma, more emphasis should be on establishing a relationship of mutual respect and understanding. abstract_id: PUBMED:12455143 Health status of the roma population in Hungary The status and problems of the roma (gipsy) population have been in the forefront in Hungary and have called for numerous benevolent interventions. Successful planning and implementation of programs aimed at the improvement of their health status must be based on solid facts regarding their problems and the causes behind. The authors give a literature review on research papers discussing the health (disease) status of the Hungarian roma population published between 1980 and 2001. They give a summary on the demography of gypsies, an overview of publications on pregnancy, delivery and infant mortality, on adult morbidity and mortality, on genetic investigations among roma people, as well as on their health behaviour and relations with the health care system, and finally, they give a brief overview of their socio-economic status. The authors sum up the major difficulties of research aimed at roma people, express their concern regarding health research papers published on gypsies; and outline their recommendation on the future direction of research on the health of the roma population. abstract_id: PUBMED:11556609 Health needs of the Roma population in the Czech and Slovak Republics. In the growing literature on the human rights of Roma people in Central Europe, their relatively poor health status is often mentioned. However, little concrete information exists about the contemporary health status of the Roma in this region. We sought information on the health of the Roma in two of countries with significant Roma minorities, the Czech and Slovak Republics, by means of systematic searches for literature on the health of Roma people published in Czech or Slovak or by authors from the two countries. Published research on health of the Roma population is sparse. The topics that have received attention suggest a focus on concepts of contagion or social Darwinism, indicating a greater concern with the health needs of the majority populations with which they live. What limited evidence exists indicates that the health needs of the Roma population are considerable. With very few exceptions, the health status of Roma is worse than that of non-Roma population in both countries. The burden of communicable disease among Roma is high and diseases associated with poor hygiene seem to be particularly important. Evidence on health care suggests poor communication between Roma and health workers and low uptake of preventative care. The health needs of Roma lack visibility, not only because of the absence of research but also the absence of advocacy on their behalf. 
Since 1989, Czech and Slovak researchers have largely turned away from health research on particular ethnic groups. This probably reflects a growing sensitivity about stigmatising Roma, but it also makes it difficult to know how their circumstances might be improved. There is a need for further research into the health of Roma people with particular emphasis on non-communicable disease and for interventions that would improve their health. abstract_id: PUBMED:37769608 Mental health and access to care among the Roma population in Europe: A scoping review. The Roma are Europe's largest ethnic minority group, and often face discrimination and social exclusion. Social strife and lack of access to healthcare are associated with increased symptoms of psychopathology. We aimed to review evidence on mental health outcomes and on access to mental healthcare among the Roma population in Europe. We systematically searched five databases (PsycINFO, Global Health, Social Policy and Practice, Web of Science and PubMed) and conducted a grey literature search in August 2020. We identified 133 studies, 26 of which were included for final analysis. We present the results using a narrative synthesis. The available literature indicates a relatively high prevalence of anxiety, depression and substance abuse among Roma, and females seem to be more affected than males. Roma children exhibit more externalizing and internalizing disorders when compared with non-Roma children. Mental health and perceived well-being among the Roma population are strongly linked to social determinants of health such as housing or economic income. Access to mental healthcare is limited for Roma people because of several barriers pertaining to language, lack of information regarding available services, and the insurance and economic status of Roma people. Roma people report mainly negative experiences with mental health services, including a lack of understanding from healthcare providers, and instances of racism and discrimination. There is a need for more research on mental health and access to healthcare in Roma people. Future studies should be participatory in order to provide guidelines for mental healthcare that addresses the needs of the Roma population. abstract_id: PUBMED:23678843 Social exclusion as a determining health factor of the Roma population. Introduction: Poverty, social inequality and marginalization of certain population groups cause the exclusion of these groups from the society and the benefits provided by the social relations of the community. Goal: The goal of this study was to evaluate the characteristics of social exclusion of the Roma population in the local community and to determine how this exclusion affects the health status of the population. Material And Methods: The study included a survey, medical examination and laboratory testing of full blood, blood glucose and markers of hepatitis B in 612 Roma. The data obtained are compared to those of the general population of the local community. Results And Conclusion: The social exclusion of the Roma population in the local community causes its significantly worse health status. Infectious diseases caused by poor hygienic conditions in Roma settlements and chronic diseases caused by stress, inadequate nutrition and poor housing conditions occur 5-20 times more frequently than in the general population.
Studies have found that Roma ethnicity is significantly associated with poorer self-rated health status, and a considerable part of this association can be explained by barriers in accessing health services as perceived by Roma (PUBMED:23546390). Additionally, the Roma population has been found to have a higher prevalence of chronic diseases such as COPD, coronary disease, and peripheral artery disease, and they consider their health status worse than the general population (PUBMED:37210718). The public health situation and socioeconomic position of unemployed Roma persons are well below the average of the whole Hungarian population, with a clear relationship between examined parameters (public health and epidemiological safety, in-door living environmental hygiene, conditions for learning) and other factors adversely influencing the quality of life of Roma persons (PUBMED:24704770). Furthermore, foot health problems are more pronounced among Roma women, indicating a need for more professional training for health workers to avoid cultural diversity stereotypes (PUBMED:32252719). Roma women in the Republic of Srpska have unmet health needs despite the majority having health insurance and universal health access being legally guaranteed, with less education, unemployment, and being divorced or widowed as factors more likely to lead to unmet health needs (PUBMED:32211359). The Roma population in Ghent encounters various barriers in their search for care, including financial constraints, mobility issues, and a lack of trust in care providers (PUBMED:26926238). Overall, the health status of Roma is worse than that of the non-Roma population, with a high burden of communicable disease and low uptake of preventative care (PUBMED:11556609). Mental health outcomes among the Roma population are also concerning, with a relatively high prevalence of anxiety, depression, and substance abuse, and limited access to mental healthcare due to barriers such as language, lack of information, and economic status (PUBMED:37769608). Social exclusion of the Roma population in the local community significantly worsens their health status, with infectious and chronic diseases occurring more frequently than in the general population (PUBMED:23678843).
Instruction: Does previous fundoplication alter the surgical approach to esophageal adenocarcinoma? Abstracts: abstract_id: PUBMED:18774726 Does previous fundoplication alter the surgical approach to esophageal adenocarcinoma? Objective: The primary aim of this study was to test the widespread assumption that the viability of the gastric fundus is compromised by fundoplication, thereby limiting the use of stomach to reconstruct the upper gastrointestinal tract after esophageal resection. Methods: Between February 1991 and February 2006, a consecutive series of 142 patients with esophageal adenocarcinoma (EADC) underwent esophageal resection. To reconstruct the upper gastrointestinal tract, all patients had a narrow gastric tube (greater curvature of stomach based on the right gastroepiploic artery) transposed through the posterior mediastinum to the left neck where an anastomosis to the cervical esophagus was performed. From a prospective database, 15 patients were identified to have undergone an 'open' fundoplication (transabdominal Nissen, n=11; transthoracic Belsey, n=4) from 12 to 23 years earlier. Outcomes were compared between patients with EADC who had undergone previous fundoplication, and patients with EADC who never had antireflux surgery. Results: Gastric transposition and cervical esophagogastrostomy were technically feasible in all patients. No significant differences in outcome were found between patient groups. Gastric necrosis developed in only one patient, who had not undergone previous fundoplication. Anastomotic leak rates after esophageal resection and reconstruction were not statistically different based on whether patients had undergone previous fundoplication (2/15, 13.3%) or not (16/127, 12.6%; p=0.99). Conclusions: With careful attention to surgical technique, previous fundoplication does not preclude the use of stomach to reconstruct the foregut after esophageal resection, refuting the notion that previous antireflux surgery is a relative contraindication to, or alters the approach to esophageal cancer surgery. abstract_id: PUBMED:21546721 Impact of laparoscopic Nissen fundoplication on non-complicated Barrett's esophagus. Background/aim: Laparoscopic fundoplication can alter the natural course of Barrett's esophagus (BE). This study was undertaken to assess this role in patients with non-complicated BE. Materials And Methods: From October 2004 to October 2009, 43 patients with BE (32 men and 11 women) underwent laparoscopic Nissen fundoplication surgery in the Department of Surgery at Minia University Hospital. The median age of these patients was 46 years (range: 22-68 years). Patients with high-grade dysplasia, invasive cancer, or previous antireflux surgery were excluded. All 43 patients had gastroesophageal reflux symptoms. Heartburn was present in all patients, regurgitation in 41 (95.3%), dysphagia in 8 (18.6%), retrosternal pain in 30 (69.8%), upper gastrointestinal hemorrhage in 6 (13.9%), and respiratory symptoms in 19 (44.2%). Nissen fundoplication was performed in all patients. Thirty-four patients (79.1%) had concomitant hiatal hernia and nine patients (20.9%) had low-grade dysplasia. Results: The median follow-up period was 25.6 months. There was significant improvement of symptoms after surgery (P<0.05). Eight (18.6%) of those with short-segment BE had total regression and four (9.3%) of those with long-segment BE had a decrease in total length.
Among the nine patients with preoperative low-grade dysplasia, dysplasia disappeared in seven, remained unchanged in one, and progressed to in situ adenocarcinoma in one patient. Conclusions: Laparoscopic fundoplication succeeded in controlling symptoms but had an unpredictable effect on dysplasia and regression of BE. Laparoscopic fundoplication does not eliminate the risk of developing esophageal adenocarcinoma and therefore, endoscopic follow-up should be continued in these patients. abstract_id: PUBMED:32311280 Laparoscopic Nissen Fundoplication: How I Do It? Surgical treatment of gastroesophageal reflux disease (GERD) aims to control patients' symptoms, improve patients' quality of life, and prevent GERD-related complications (bleeding, esophageal stenosis, Barrett's esophagus, and/or esophageal adenocarcinoma). Careful patient selection and a properly executed operation are key for the success of the procedure. We aimed to describe the operative technique of a laparoscopic Nissen fundoplication, stressing the critical surgical steps we believe should be respected to obtain good surgical outcomes. abstract_id: PUBMED:12704882 Laparoscopic fundoplication--what is proven? Laparoscopic fundoplication has become a popular procedure but several questions remain unresolved. Although randomised trials suggest that antireflux surgery is a good alternative to continuous medical therapy, the selection of patients for surgery remains crucial to achieve success. There is currently no proof that antireflux surgery will reduce the risk for developing esophageal adenocarcinoma. Complications, side effects and failures appear to be higher after a laparoscopic approach as compared to conventional open surgery, particularly in inexperienced hands. Partial fundoplications and tailored approaches have so far not shown convincingly better results than those that can be achieved by a short and floppy 360° fundoplication. abstract_id: PUBMED:12688140 Laparoscopic fundoplication in the treatment of Barrett esophagus Unlabelled: Barrett's esophagus (BE), a serious complication of gastroesophageal reflux disease (GERD), is discussed. BE has been defined as the complete intestinal metaplasia of the distal esophagus. The most serious complication of BE is esophageal adenocarcinoma. We present our results with the group of patients with GERD from the years 1998-2000. We prospectively followed 67 patients with GERD (group A) and 8 patients with GERD/BE (group B). All patients underwent laparoscopic fundoplication. The average length of the Barrett's segment was 4.3 cm. Average time of the surgery was 75 min. Nissen fundoplication was used in seven cases in group B; in one case we used Rossetti laparoscopic fundoplication. The postoperative endoscopic controls were performed at two months after surgery and then every year. Results: In one case we observed complete reepithelization with mixed spinocellular and columnar components. We did not observe a single case of histological deterioration. There was no difference between the two groups regarding subjective complaints. Six patients from group B reported complete disappearance of pyrosis. Two patients reported significant improvement, with very rare pyrosis or dysphagia. Our experience (together with the literature) proved that surgical antireflux therapy is very safe, very effective and with a very long-lasting effect. The effectiveness of surgical therapy is increased by laparoscopy.
Described laparoscopic approaches can lead to the reepithelization of Barrett's metaplasia or, at least, they can diminish the symptoms without any long-term medication. Surveillance endoscopy and biopsy are strictly recommended in all patients with BE and also in the patients with BE after antireflux surgery. The intervals depend on the grade of dysplasia in metaplastic epithelium. abstract_id: PUBMED:25097689 The effect of laparoscopic fundoplication in therapy of Barrett's esophagus. Introduction: Barrett's esophagus is the most significant precancer of the esophagus. Its malignization gives rise to most adenocarcinomas of the esophagus. Therefore selection of adequate therapy for this precancerous condition is of the utmost importance. Aim: The authors of the work addressed the question of whether effective therapy of reflux disease alone may halt the process of malignization of Barrett's mucosa or even cause its regression. Material And Methods: The analyzed set comprised 50 patients with Barrett's esophagus, who in 48 cases underwent laparoscopic fundoplication and in two cases underwent an indirect antireflux procedure in the form of gastric resection with a Roux-en-Y gastrojejunal anastomosis. The effect of the procedure was evaluated by comparing preoperative and postoperative endoscopic examinations, as well as histological analysis by biopsy taken from Barrett's mucosa. Results: In 19 patients (38%), Barrett's mucosa was not detected postoperatively. An improved finding in terms of disappearance of mucosal dysplasia was found in 8 (16%) patients. Findings remained unchanged in 18 (36%) patients. In 5 (10%) patients progression of the disease was discovered. Conclusions: A surgical antireflux procedure, primarily in the form of laparoscopic fundoplication, is considered an effective method for treating Barrett's esophagus up to the stage of mild dysplasia. If this therapy is unsuccessful, the method of choice is local therapy, either an endoscopic mucosectomy or radiofrequency ablation. abstract_id: PUBMED:33347088 Mind the Gap: Current Treatment Alternatives for GERD Patients Failing Medical Treatment and Not Ready for a Fundoplication. Background: Gastroesophageal reflux disease is associated with Barrett esophagus, esophageal adenocarcinoma, and significantly impacts quality of life. Medical management is the first line therapy with surgical fundoplication as an alternative therapy. However, a small portion of patients who fail medical therapy are referred for surgical consultation. This creates a "gap" in therapy for those patients dissatisfied with medical therapy but are not getting referred for surgical consultation. Three procedures have been designed to address these patients. These include radiofrequency ablation (RFA) of the lower esophageal sphincter, transoral incisionless fundoplication (TIF), and magnetic sphincter augmentation. Materials And Methods: A PubMed literature review was conducted of all publications for RFA, TIF, and MSA. Four most common endpoints for the 3 procedures were compared at different intervals of follow-up. These include percent of patients off proton pump inhibitors (PPIs), GERD-HRQL score, DeMeester score, and percent of time with pH <4. A second query was performed for patients treated with PPI and fundoplications to match the same 4 endpoints as a control.
Results: Variable freedom from PPI was reported at 1 year for RFA with a weighted mean of 62%, TIF with a weighted mean of 61%, MSA with a weighted mean of 85%, and fundoplications with a weighted mean of 84%. All procedures including PPIs improved quality-of-life scores but were not equal. Fundoplication had the best improvement followed by MSA, TIF, RFA, and PPI, respectively. DeMeester scores are variable after all procedures and PPIs. All MSA studies showed normalization of pH, whereas only 4 of 17 RFA studies and 3 of 11 TIF studies reported normalization of pH. Conclusions: Our literature review compares 3 rival procedures to treat "gap" patients for gastroesophageal reflux disease with 4 common endpoints. Magnetic sphincter augmentation appears to have the most reproducible and linear outcomes but is the most invasive of the 3 procedures. MSA outcomes most closely mirror those of fundoplication. abstract_id: PUBMED:36413570 Sleeve-Dor Fundoplication - An Innovative Surgical Technique to Avoid the Epidemic Long Term de Novo Gastroesophageal Reflux and Barrett's Esophagus After Sleeve Gastrectomy for Obesity. Background: Sleeve gastrectomy (SG) in the current literature has shown an increased risk of "de novo" gastroesophageal reflux disease (GERD) and an increased risk for Barrett's esophagus in longer follow-up series, with a possibility of esophageal adenocarcinoma in this population. Adding primarily an anterior Dor fundoplication to SG (Sleeve-Dor) may protect the patient in the future and can potentially avoid these chronic complications for patients with obesity. Methodology: A standard SG is performed laparoscopically, and a small redundancy of the fundus is maintained as a wrap, and this is fixed to the right crura without dissection of the anatomy of the hiatus. The resulting anterior 180° Dor fundoplication is usually sufficient to relieve or to avoid reflux symptoms. Discussion: Based on our preliminary and literature experiences, the SG with anterior Dor fundoplication (Sleeve-Dor) procedure could provide a favorable safety profile, satisfactory reflux control and good bariatric outcomes. The complication rate is lower than that published for the Nissen Sleeve or Sleeve-Rossetti technique, with no leaks or major complications recorded to date. The Sleeve-Dor procedure may be a potential primary and standard surgery for morbidly obese patients, especially for patients with preoperative GERD symptoms without major findings at endoscopy.
Aim: Retrospective analysis of long-term treatment outcomes in patients with BE subjected to laparoscopic Nissen fundoplication. Material And Methods: The group included 42 BE patients, amongst them 30 men and 12 women. Initially, all the patients were treated conservatively for at least 1 year. The subgroup with dysplasia was subjected to preoperative argon plasma coagulation (APC). From 1 year after surgery (laparoscopic Nissen fundoplication), control biopsy specimens were obtained from the gastroesophageal junction of all the patients. Results: None of the patients showed the development of esophageal adenocarcinoma during the follow-up period. Furthermore, no cases of dysplasia progression or de novo development of dysplasia were observed in the analyzed group. In the initial 12-24 months after surgery, complete regression of metaplasia was documented in 7 (31.8%) patients from group A, and a reduction in the area of Barrett's metaplasia was observed in another 7 patients (31.8%). Throughout the period of this study, persistent planoepithelial re-epithelialization was observed in 14 (70%) group B patients, i.e. in individuals with baseline dysplasia subjected to preoperative argon plasma ablation. In the remaining patients of this group, the developed changes of BE character were less advanced than at baseline. Conclusions: Our opinion is that laparoscopic Nissen fundoplication, as a result of high effectiveness, represents the method of choice in the treatment of BE in the case of patients who were qualified for surgery. abstract_id: PUBMED:21110164 Esophageal adenocarcinoma after Nissen's fundoplication for Barrett's esophagus: report of a case. It remains unclear whether surgical treatment of gastro-esophageal reflux disease (GERD) and Barrett's esophagus (BE) decreases the long-term risk of lower esophagus malignancy; yet, proposed reductions in Barrett's epithelial transformation have been used as a rationale for antireflux surgery. We report the case of a 63-year-old woman with a 40-year history of GERD, whose symptoms returned after a Nissen fundoplication. A gastroscopy done 3 years later revealed BE. Despite close surveillance, adenocarcinoma of the lower esophagus was diagnosed 8 years after the original surgery. Thus far, there is insufficient evidence to convince practitioners that surgery should be the first line of treatment to prevent malignant change in BE. Further well-standardized, prospective trials are required. Our case demonstrates that antireflux surgery in a patient with GERD and BE cannot be relied upon to prevent adenocarcinoma. Answer: Previous fundoplication does not necessarily alter the surgical approach to esophageal adenocarcinoma. A study conducted between February 1991 and February 2006 on 142 patients with esophageal adenocarcinoma (EADC) who underwent esophageal resection found that gastric transposition and cervical esophagogastrostomy were technically feasible in all patients, including those who had undergone previous fundoplication. The outcomes between patients with EADC who had undergone previous fundoplication and those who had not were compared, and no significant differences were found. Specifically, gastric necrosis developed in only one patient who had not undergone previous fundoplication, and anastomotic leak rates after esophageal resection and reconstruction were not statistically different based on whether patients had undergone previous fundoplication or not. 
The study concluded that with careful attention to surgical technique, previous fundoplication does not preclude the use of the stomach to reconstruct the foregut after esophageal resection, refuting the notion that previous antireflux surgery is a relative contraindication to, or alters the approach to, esophageal cancer surgery (PUBMED:18774726).
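A small worked example of the statistical comparison cited in this answer: the leak rates reported in PUBMED:18774726 were 2/15 (13.3%) with prior fundoplication versus 16/127 (12.6%) without, with p = 0.99. The abstract does not state which test produced that p-value, so the sketch below simply re-runs the comparison with a two-sided Fisher exact test implemented from scratch; the resulting p-value (close to 1.0) may differ slightly from the published figure, but it illustrates why the difference is regarded as non-significant.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]."""
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d

    def p_table(x):
        # Hypergeometric probability of a table whose top-left cell equals x.
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    # Sum the probabilities of all tables at least as extreme as the observed one.
    return sum(p for x in range(lo, hi + 1)
               if (p := p_table(x)) <= p_obs * (1 + 1e-12))

# Anastomotic leaks vs. no leaks, as reported in the abstract (PUBMED:18774726).
a, b = 2, 13      # prior fundoplication: 2 leaks out of 15 patients
c, d = 16, 111    # no prior fundoplication: 16 leaks out of 127 patients

print(f"leak rate with prior fundoplication:    {a / (a + b):.1%}")
print(f"leak rate without prior fundoplication: {c / (c + d):.1%}")
print(f"two-sided Fisher exact p-value:         {fisher_exact_two_sided(a, b, c, d):.2f}")
```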
Instruction: Can total cardiac calcium predict the coronary calcium score? Abstracts: abstract_id: PUBMED:19615766 Can total cardiac calcium predict the coronary calcium score? Background: Mitral annular calcification (MAC) shares the same risk factors as atherosclerosis and is associated with coronary artery disease as well as cardiovascular events. However, sensitivity and positive predictive value are low. We hypothesized that a global echocardiographic calcium score would better predict coronary atherosclerotic burden, as assessed by coronary artery calcium score (CAC), than MAC alone. Methods: An echocardiographic score was devised to measure global cardiac calcification in a semi-quantitative manner; this included calcification in the aortic valve and root, the mitral valve and annulus, and the sub-mitral apparatus. This score, and a simplified version, were compared with a similar calcification score by CT scan, as well as the CAC. Results: There was a good correlation between the two global calcification scores; the echocardiographic score also correlated with CAC. Using CAC >400 as a measure of severe coronary atherosclerosis, an echocardiographic score ≥5 had a positive predictive value of 60%. Importantly, the simplified score performed equally well (≥3 had a positive predictive value of 62%). Conclusions: Global cardiac calcification, assessed by CT scan or echocardiography, correlates with the extent of coronary calcium. A semi-quantitative calcium score can be easily applied during routine echocardiographic interpretation and can alert the reader to the possibility of severe coronary atherosclerosis. abstract_id: PUBMED:33614311 Coronary Artery Calcium Score: Assessment of SYNTAX Score and Prediction of Coronary Artery Disease. Background With the advent of modern imaging technologies, non-invasive assessment of the coronary system is not only possible but its complexity and plaque burden can be quantified. This study aims to determine whether calcium score on computed tomography coronary angiography (CTCA) can be associated with the complexity of coronary artery disease (CAD), which is determined by the SYNTAX score on coronary angiography, as well as to determine which cut-off value of coronary artery calcium (CAC) score can predict severe CAD in our population. Methodology This was a cross-sectional study conducted at the Rawalpindi Institute of Cardiology, Pakistan from January 2019 to March 2020. The calcium score of all patients with low-to-intermediate pretest probability of CAD was calculated on CTCA. All patients who had significant disease on CTCA were subjected to conventional coronary angiography and SYNTAX score was calculated, which was later used to determine the association between calcium and SYNTAX score. Results A total of 90 patients were included in the study. CAC and SYNTAX score were found to be positively correlated (Pearson coefficient [r] = 0.354; p = 0.001). The total CAC score with a cut-off value of 212 recognized patients with the SYNTAX score of >27. The sensitivity was 66.7% and specificity was 70.5% with an area under the curve of 0.743. The mean calcium score of patients in our study group was 223, with the maximum score of 1,216 and the minimum score of zero. Conclusion A CAC score greater than 212 is associated with a high SYNTAX score indicating complex disease. Only age is an independent predictor of calcium score. abstract_id: PUBMED:29383526 Leveraging the coronary calcium scan beyond the coronary calcium score.
Non-contrast cardiac computed tomography in order to obtain the coronary artery calcium score has become an established diagnostic procedure in the clinical setting, and is commonly employed in clinical and population-based research. This state-of-the-art review paper highlights the potential gain in information that can be obtained from the non-contrast coronary calcium scans without any necessary modifications to the scan protocol. This includes markers of cardio-metabolic health, such as the amount of epicardial fat and liver fat, but also markers of general health including bone density and lung density. Finally, this paper addresses the importance of incidental findings and of radiation exposure accompanying imaging with non-contrast cardiac computed tomography. Despite the fact that coronary calcium scan protocols have been optimized for the visualization of coronary calcification in terms of image quality and radiation exposure, it is important for radiologists, cardiologists and medical specialists in the field of preventive medicine to acknowledge that numerous additional markers of cardio-metabolic health and general health can be readily identified on a coronary calcium scan. Key Points: • The coronary artery calcium score substantially increased the use of cardiac CT. • Cardio-metabolic and general health markers may be derived without changes to the scan protocol. • Those include epicardial fat, aortic valve calcifications, liver fat, bone density, and lung density. • Clinicians must be aware of this potential additional yield from non-contrast cardiac CT. abstract_id: PUBMED:37660746 Interpreting the Coronary Artery Calcium Score - Critical Information for the Practicing Physician. Coronary artery calcium scanning is a routine test for assessing the severity of atherosclerosis in asymptomatic individuals. This inexpensive, noninvasive test quantifies the calcium deposition in the 4 principal coronary arteries. Correct interpretation is important to the physician (for recommending therapy) and to the patient (for determining his or her lifetime risk of a cardiovascular event). A score of 0 indicates that a cardiovascular event is extremely unlikely in the next 5 years. In contrast, a score greater than 0 portends a coronary event. The higher the score, the greater the risk. Both the arterial location of the calcium and the number of coronary arteries involved alter the interpretation of the calcium score. At any given age, females have significantly lower scores than males. One-third of individuals with scores greater than 1000 will have a cardiovascular event within 3 years. For all elevated calcium scores, aggressive treatment is warranted, including significant lifestyle changes and medications to reduce low-density lipoprotein cholesterol. Understanding the importance of the coronary artery calcium score will result in improved therapy and patient compliance. abstract_id: PUBMED:23341839 A head-to-head comparison of the coronary calcium score by computed tomography with myocardial perfusion imaging in predicting coronary artery disease. Objectives: The coronary artery calcium (CAC) score has been shown to predict future cardiac events. However, the extent to which a CAC score adds value to the diagnostic performance of myocardial perfusion imaging (MPI) by single photon emission computed tomography (SPECT) is unclear. The purpose of this study is to investigate the correlation between CAC score and SPECT in patients with suspected coronary artery disease.
Methods: A retrospective review of the CAC scores by use of the Agatston calcium scoring method and cardiac SPECT diagnostic reports was conducted in 48 patients, who underwent both coronary computed tomography (CT) and SPECT examinations due to suspected coronary artery disease. A Pearson correlation test was used to determine the relation between CAC scores and MPI-SPECT assessments with regard to the evaluation of the extent of disease. Results: Forty-seven percent of the patients had CAC scores more than 100, while 42% of these patients demonstrated abnormal, or probably abnormal, MPI-SPECT. Of the 23% of patients with a zero CAC score, only 7% had normal MPI-SPECT findings. No significant correlation was found between the CAC scores and MPI-SPECT assessments (r value ranged from 0.012 to 0.080), regardless of the degree of coronary calcification. Conclusions: There is a lack of correlation between the CAC scores and the MPI-SPECT findings in the assessment of the extent of coronary artery disease. CAC scores and MPI-SPECT should be considered complementary approaches in the evaluation of patients with suspected coronary artery disease. abstract_id: PUBMED:34979296 The ratio of the max-to-mean coronary artery calcium score in the most calcified vessel is associated with the presence of coronary artery disease. Purpose: This study aimed to clarify the relationship between the severity of the calcium burden in the most calcified coronary vessel and coronary artery disease (CAD). Method: Of 2150 patients, 376 examined by both coronary computed tomographic angiography and invasive coronary angiography (ICA) within 30 days at Sun Yat-sen Memorial Hospital between March 2011 and July 2020 were included. Three coronary artery calcium scores (CACSs), including the Agatston score, volume score, and mass score, and other clinical variables were recorded. The ratio of max-to-mean CACS in the most calcified vessel (CACSmax:mean) was defined as the CACS in the most calcified vessel/average CACS of the four major epicardial coronary arteries. Logistic regression and least absolute shrinkage and selection operator (LASSO) analyses were performed to assess the relationship between CACSmax:mean and CAD. Results: CACSmax:mean was higher in 81.1% of subjects diagnosed with CAD than in subjects without CAD. In multivariate logistic regression analysis, CACSmax:mean determined by the Agatston score, volumetric score, and mass score was associated with CAD. In LASSO analysis, Agatston scoremax:mean (not the total Agatston score or other CACSmax:mean) had the strongest correlation with CAD (β = 0.125). AUCs in the training set and the validation set were 0.811 and 0.789, respectively. Increased age, diabetes and hypertension correlated with higher Agatston scoremax:mean. Conclusions: In addition to total CACS, CACSmax:mean may be a novel diagnostic parameter for CAD, showing the calcium burden severity. abstract_id: PUBMED:34495426 Cardiac calcium score in systemic sclerosis. Cardiac coronary Ca score (CCS), and extra coronary Ca score (ECCS) estimation in asymptomatic systemic sclerosis (SSc) patients and their relation to different disease and patients' variables. The CCS and ECCS were estimated in asymptomatic 20 SSc patients compared to 20 age and sex-matched healthy control using non-contrast cardiac computed tomography. All were applied for cardiac history taking, examination, echocardiography, body mass index (BMI), complete blood picture, erythrocyte sedimentation rate, and lipid profile estimation. 
The SSc patients were 11 females and 9 males with a mean age of (42.55 ± 9.145) and mean disease duration (12.9 ± 6.774). CCS was reported in 9 (45%) SSc cases and 2 (10%) of the control; (p = 0.013) and was significantly greater in SSc patients (58.4 ± 175.443) than in the control group (0.7 ± 2.25); (p = 0.01). The ECCS was significantly higher in SSc cases (194.45 ± 586.511) than control group (2.8 ± 7.8); (p = 0.001) and reported in 16 (80%) SSc cases and 3 (15%) of controls; (p = 0.000). Limited scleroderma cases had higher scores than diffuse type. Patients with total ca score (> 100) were older (p = 0.016), had longer disease duration (p = 0.001) and greater BMI (p = 0.002). Significant correlation was found between the log-transformed CCS and disease duration, age, BMI, left ventricular mass, and mass index. Systemic sclerosis patients are at increased risk of subclinical cardiovascular disease determined by cardiac Ca scoring as a noninvasive and reliable method. Extra coronary calcification may be an earlier indicator for this. Disease duration is a determinant risk factor for cardiac calcification in SSc. Key Points • Although the association between interleukin-6 (IL-6) promoter polymorphism and rheumatic arthritis (RA) has been discussed in the previous meta-analysis, their conclusions are inconsistent. • Systemic sclerosis patients are at high risk of accelerated atherosclerosis and cardiovascular diseases. Coronary atherosclerosis was previously estimated in SSc patients through coronary angiography. A novel method of assessing coronary artery disease is the coronary calcium score, as determined by multidetector computed tomography, it measures coronary artery calcification that occurs in atherosclerotic plaque. In this study, the cardiac coronary and extra coronary Ca score were evaluated in relation to disease characteristics in asymptomatic SSC patients for early detection of coronary artery disease. abstract_id: PUBMED:28028450 SYNTAX Score in Patients with High Computed Tomography Coronary Calcium Score. Objectives: To study the conventional coronary angiogram (CA) findings in patients with high coronary calcium on multidetector computed tomogram. Materials And Methods: Fifty patients with coronary calcium high enough in its extent and location to interfere with the interpretation of a contrast-filled coronary artery for a significant lesion were studied with conventional CA. Framingham risk score (FRS), computed tomography (CT) coronary calcium score (CCS), and SYNTAX score (SS) from the CA were calculated by separate investigators who were blinded to other scores. Effectively, 250 coronary arteries (left main, left anterior descending, left circumflex, and right coronary artery and posterior descending artery in each subject) with calcium scores were studied for lesions on CA. Results: Thirty-five subjects had high FRS, 10 had intermediate FRS, and 5 had low FRS. Eight subjects of 25 (32%) with CCS between 350 and 1000 had no significant coronary artery disease (CAD). Overall, the CCS and the SS had a strong agreement with each other (r = 0.68, P < 0.01) that persisted in those with very high scores >1000 (r = 0.55, P < 0.01, n = 30), but only a nonsignificant weak correlation with scores between 350 and 1000 (r = 0.1, P = 0.62, n = 20). Individual vessel calcium scores correlated strongly for the presence of any lesion (r = 0.52, P < 0.01) in the same artery but only weakly for a significant lesion (r = 0.29, P = 0.05).
Conclusion: High CT CCS in this cohort of intermediate to high (Framingham score) risk patients correlated strongly with the subject's global burden of the CAD as derived by the SS, more so for subjects with very high scores. Similarly, CCS correlated strongly with the presence of any lesion but only weakly for a significant stenosis; also, about one-third of patients with CCS between 350 and 1000 may not have significant disease on conventional CA. abstract_id: PUBMED:23806196 Accuracy and clinical outcomes of computed tomography coronary angiography in the presence of a high coronary calcium score. Background: A high coronary calcium burden may adversely affect image quality of CT coronary angiography (CTCA). The ability to rule out clinically significant disease in this setting is uncertain. Methods: We examined CTCA findings in patients with a calcium score of >600. Utilising a search of death notices, structured patient interview and medical records, downstream investigations, cardiovascular events, revascularisation and mortality were recorded. Results: Sixty patients with a calcium score >600 had CTCA performed on the same day. Coronary disease findings were: mild 28%, moderate 33%, severe 32% and non-diagnostic 7%. During a median 1.75-year follow-up, 31 (53%) of patients underwent further assessment for coronary disease, eight patients (13%) underwent revascularisation and there were two non-cardiovascular deaths and one cardiovascular death. No patient with mild or moderate disease at CTCA had subsequently demonstrated ischaemia, was deemed to require PCI or suffered cardiac mortality. The negative predictive value of CTCA for subsequent PCI and all-cause mortality was 97% (100% for cardiac mortality only). The positive predictive value of CTCA for revascularisation or CV death was 42%. Conclusion: In patients with an elevated coronary calcium score, a negative CTCA implies an excellent short-term outcome and appears to exclude clinically significant coronary disease. abstract_id: PUBMED:34770366 Deep-Learning-Based Coronary Artery Calcium Detection from CT Image. One of the most common methods for diagnosing coronary artery disease is the use of the coronary artery calcium score CT. However, the current diagnostic method using the coronary artery calcium score CT requires a considerable time, because the radiologist must manually check the CT images one-by-one, and check the exact range. In this paper, three CNN models are applied for 1200 normal cardiovascular CT images, and 1200 CT images in which calcium is present in the cardiovascular system. We conduct the experimental test by classifying the CT image data into the original coronary artery calcium score CT images containing the entire rib cage, the cardiac segmented images that cut out only the heart region, and cardiac cropped images that are created by using the cardiac images that are segmented into nine sub-parts and enlarged. As a result of the experimental test to determine the presence of calcium in a given CT image using Inception Resnet v2, VGG, and Resnet 50 models, the highest accuracy of 98.52% was obtained when cardiac cropped image data was applied using the Resnet 50 model. Therefore, in this paper, it is expected that through further research, both the simple presence of calcium and the automation of the calcium analysis score for each coronary artery calcium score CT will become possible. Answer: Yes, total cardiac calcium can predict the coronary calcium score (CAC).
Studies have shown that global cardiac calcification, assessed by CT scan or echocardiography, correlates with the extent of coronary calcium. A semi-quantitative calcium score, which includes calcification in the aortic valve and root, the mitral valve and annulus, and the sub-mitral apparatus, can be easily applied during routine echocardiographic interpretation and can alert the reader to the possibility of severe coronary atherosclerosis. An echocardiographic score ≥5 had a positive predictive value of 60% for indicating severe coronary atherosclerosis when using CAC >400 as a measure (PUBMED:19615766). Additionally, a coronary artery calcium (CAC) score greater than 212 is associated with a high SYNTAX score, indicating complex disease, and age is an independent predictor of calcium score (PUBMED:33614311). The coronary artery calcium score has also been used to predict future cardiac events and is considered complementary to myocardial perfusion imaging in the evaluation of patients with suspected coronary artery disease (PUBMED:23341839). Moreover, the ratio of the max-to-mean coronary artery calcium score in the most calcified vessel is associated with the presence of coronary artery disease (CAD), suggesting that in addition to total CACS, this ratio may be a novel diagnostic parameter for CAD, showing the severity of the calcium burden (PUBMED:34979296). In systemic sclerosis patients, cardiac coronary and extra coronary calcium scores are higher compared to controls, indicating an increased risk of subclinical cardiovascular disease (PUBMED:34495426). High computed tomography coronary calcium scores have been found to correlate strongly with the global burden of CAD as derived by the SYNTAX score, especially in subjects with very high scores (PUBMED:28028450). In summary, total cardiac calcium, as measured by various scoring methods, can predict the coronary calcium score and is indicative of the presence and severity of coronary artery disease.
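To make the threshold-based screening metrics cited above concrete (for example, the positive predictive value of an echocardiographic score ≥5 against CAC >400, or the sensitivity and specificity of a CAC cutoff of 212 against a SYNTAX score >27), the following minimal Python sketch shows how sensitivity, specificity, and positive predictive value are derived once a score and a binary reference standard are available. The data, cutoff, and reference definition below are synthetic and purely illustrative; they are not taken from any of the cited studies.

```python
# Hedged sketch: threshold-based diagnostic metrics for a calcium-type score.
# All values are synthetic; nothing here reproduces the cited studies' data.
import numpy as np

def cutoff_metrics(score, disease, cutoff):
    """Sensitivity, specificity and PPV of the rule `score >= cutoff` against `disease`."""
    score = np.asarray(score, dtype=float)
    disease = np.asarray(disease, dtype=bool)
    test_pos = score >= cutoff
    tp = np.sum(test_pos & disease)
    fp = np.sum(test_pos & ~disease)
    fn = np.sum(~test_pos & disease)
    tn = np.sum(~test_pos & ~disease)
    return tp / (tp + fn), tn / (tn + fp), tp / (tp + fp)

rng = np.random.default_rng(0)
echo_score = rng.integers(0, 10, size=200).astype(float)     # hypothetical global calcium score
severe_cad = (echo_score + rng.normal(0, 2, size=200)) > 5   # hypothetical reference (e.g., CAC > 400)
sens, spec, ppv = cutoff_metrics(echo_score, severe_cad, cutoff=5)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f}")
```

The same bookkeeping underlies a reported area under the ROC curve: the cutoff is swept across the observed score range and sensitivity is plotted against 1 - specificity.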
Instruction: Indurometer vs. Tonometer: Is the Indurometer Currently Able to Replace and Improve Upon the Tonometer? Abstracts: abstract_id: PUBMED:26091409 Indurometer vs. Tonometer: Is the Indurometer Currently Able to Replace and Improve Upon the Tonometer? Background: The Indurometer is a tool designed by the Flinders Biomedical Engineering Department to replace the Tissue Tonometer. It measures the resistance to an applied force to quantify the amount of fibrosis present within the epifascial compartment of tissue. The aim of the study was to compare the current model of the Indurometer with the Tonometer to determine the level of variability and repeatability between the two devices and how the variability differs when compared to an individual's ISL stage. Methods And Results: Data were gathered on a total of 180 participants with unilateral arm lymphedema as part of an international multicenter trial. For each participant three repeat measurements were obtained with the Indurometer and Tonometer at the anterior mid forearm and anterior mid upper arm. The Indurometer gave lower measurement values than the Tonometer. The Tonometer was found to be less variable than the Indurometer when measurements were taken from the anterior mid upper arm and the anterior mid forearm site. The Tonometer values were significantly less variable than the Indurometer values in ISL stages 1 and 2. No significant variability was found in stage 3. Conclusion: The Indurometer is easier to use as compared to the Tonometer. Modification of the Indurometer needs to take place in order to improve its variability before it can be considered as a replacement for the Tonometer in the assessment of lymphedema. The lack of understanding of the ISL classification system is a significant issue. abstract_id: PUBMED:29540041 Diagnostic accuracy of the iCare rebound tonometer compared to the Perkins applanation tonometer in assessing intraocular pressure in rural patients. Background: Vision health is recognized as a critical unmet need in North America. The ocular morbidity associated with glaucoma results from increased intraocular pressure (IOP) and early detection is crucial for the management of glaucoma. Our objective was to find a diagnostically accurate screening tool for intraocular hypertension that can be used in rural communities. We sought to validate the diagnostic accuracy of the iCare rebound tonometer against the gold standard Perkins applanation tonometer (PAT) in measuring IOP. Methods: Patients from two rural communities in Ontario, Canada visiting their optometrists for routine appointments had their IOP measured by a non-contact tonometer (NCT), an iCare rebound tonometer, and a Perkins applanation tonometer (PAT). Values of sensitivity, specificity, and likelihood ratios for a positive and negative result were calculated for the iCare and the NCT. Results: Complete data were collected from 209 patients. Overall, the iCare tonometer had high levels of validity, as compared to the gold standard PAT. The iCare tonometer displayed excellent sensitivity of 98.3% (90-99%, 95% CI) and excellent negative likelihood ratio of 0.024 (0.0088-0.066, 95% CI), which is useful for ruling out intraocular hypertension. Conclusions: The iCare tonometer is a reasonably valid tool for detecting elevated IOP. Its ease of use, simplicity, and accessibility make it a good screening tool to improve eye health in rural areas.
abstract_id: PUBMED:30364813 Comparison of Intraocular Pressure Measurement with Schiotz Tonometer and Tono-Pen Vet Tonometer in Healthy Dogs. Introduction: Measurement of intraocular pressure (IOP) in dogs has high diagnostic value because of the possibility of detecting ocular and systemic diseases. Various types of tonometers are available for this measurement in small animal practice. The aim of the study was to compare the IOP values measured with Schiotz and Tono-Pen Vet tonometers in healthy dogs. Clinical diagnostic usefulness of both models was also evaluated. Material And Methods: The examination was performed in 62 eyes in 31 clinically healthy dogs of different races, gender, and ages. Results: The values for intraocular pressure obtained with Schiotz tonometer were in the range of 12 to 24 mmHg, with the mean of 16.3 ± 2.1 mmHg. The intraocular pressure measured with Tono-Pen Vet tonometer was in the range of 11-25 mmHg, with a mean of 18.1 ± 3.8 mmHg. The mean results of measurements taken using the two tonometers differed statistically significantly, the difference being 1.79 mmHg and the higher values being read from the Tono-Pen Vet tonometer. Conclusion: Correlation coefficients calculated for the results obtained in the right and left eyes using two tonometers indicated highly correlative relationships between the results. The study shows that both tonometers can be advantageously used in clinical practice to measure intraocular pressure in dogs. abstract_id: PUBMED:30632407 Self-measurement with Icare HOME tonometer, patients' feasibility and acceptability. Purpose: To evaluate and compare the accuracy of self-measurement of intraocular pressure using Icare Home rebound tonometer with Goldmann applanation tonometer and assess acceptability of self-tonometry in patients with glaucoma and ocular hypertension. Methods: In the study, 117 subjects were trained to use Icare Home for self-measurement. Icare Home tonometer readings were compared with Goldmann applanation tonometer, including one eye per patient. Agreement between the two methods of measurement was evaluated by Bland and Altman analysis. Questionnaire was used to evaluate patients' perception of self-tonometry. Results: One hundred and three out of 117 patients (88%) were able to measure their own intraocular pressure and 96 (82%) fulfilled the requirements for certification. The mean (SD) difference Goldmann applanation tonometer minus Icare Home was 1.2 (2.4) mmHg (95% limits of agreement, -3.4 to 5.9 mmHg). The magnitude of bias between the two methods depended on central corneal thickness, with greater bias at central corneal thickness <500 µm. In 65 out of 96 subjects (67.7%), Icare Home results were within 2 mmHg of the Goldmann applanation tonometer. Seventy-three out of 93 (78.5%) felt that self-tonometry was easy to use and 75 patients (80.6%) responded that they would use the device at home. Conclusion: Icare Home tonometry tends to slightly underestimate intraocular pressure compared to Goldmann applanation tonometer. Most patients were able to perform self-tonometry and found it acceptable for home use. Measurements using rebound self-tonometry could improve the quality of intraocular pressure data and optimize treatment regimen. abstract_id: PUBMED:23293511 Comparative evaluation of intraocular pressure with an air-puff tonometer versus a Goldmann applanation tonometer.
Purpose: Tonometry, or measurement of intraocular pressure (IOP), is one of the most important examination procedures in ophthalmic clinics, and IOP is an important parameter in the diagnosis of glaucoma. Because there are numerous types of tonometer available, it is important to evaluate the differences in readings between different tonometers. Goldmann applanation tonometers (GATs) and noncontact air-puff tonometers (APTs) are largely available in ophthalmic clinics. The purpose of this study was to evaluate the role of AP tonometer by comparing the measurements of IOP made using this device with those made using a GAT. Patients And Methods: This study involved 196 eyes from 98 study participants, all of whom were patients attending an ophthalmic outpatient clinic. Each patient's IOP was measured using both Goldmann applanation tonometry and AP tonometry, and the difference in readings between the two methods was calculated. Results: The mean IOP as measured by GAT was 13.06 ± 4.774 mmHg, while that as measured by AP tonometer was 15.91 ± 6.955 mmHg. The mean difference between the two methods of measurement was 2.72 ± 2.34 mmHg. The readings obtained by AP tonometer were higher than those obtained by GAT in 74% of patients, and this difference was most obvious when the GAT measurement of IOP exceeded 24 mmHg. No statistically significant variation in IOP was noted between the devices when the patients' age, sex, and laterality (right and left eyes) were considered. Conclusion: There is a significant difference in the measurement of IOP between GATs and AP tonometers. Goldmann applanation tonometry remains the most suitable and reliable method for measuring IOP. Because measurements of IOP by AP tonometer are usually higher than those obtained by GAT regardless of the patient's age, sex, or laterality of eyes, AP tonometry is a suitable method for community or mass screenings of IOP. abstract_id: PUBMED:9932318 Can the Tono-Pen replace the Schiøtz tonometer in general practice? Objective: To study if the Tono-Pen, a hand held digital tonometer, can replace Schiøtz tonometry in general practice. Design: Tonometry with Tono-Pen and Schiøtz tonometer were compared. Setting: Group practice with two general practitioners (GPs). Patients: 48 consecutive patients over 40 years of age. Main Outcome Measures: The mean difference between intraocular pressure (IOP) measured with the two methods with 95% confidence intervals and 95% limits of agreement between them. Results: One observer found a mean difference between methods of 0.3 mmHg and 95% limits of agreement of +/- 4 mmHg. The other observer, with a different Schiøtz tonometer, had a mean difference of approximately -2 mmHg and 95% limits of agreement from -8 to +4 mmHg. Conclusion: We consider the Tono-Pen to be an alternative to Schiøtz tonometry. The maximum mean difference between the methods for one GP, 1.9 mmHg, was within clinically acceptable limits. Possible reasons for the different agreement for the two observers are discussed. abstract_id: PUBMED:37203058 Assessment of intraocular pressure measurement between Goldmann applanation tonometer, rebound tonometer, non-contact tonometer, and its correlation with central corneal thickness.
Methods: This was a prospective, cross-sectional, observational study to which patients above 18 years of age were enrolled. A total of 400 eyes of 200 non-glaucomatous patients underwent IOP recordings using the GAT, NCT, and RBT, and CCT was also noted. Informed consent of the patients was taken. The IOP readings taken via the three methods were compared and correlated with CCT. Paired t test was used to compare the two devices. Simple and multivariate linear regression analyses were used to study the relationship between factors. A P value less than 0.05 was considered significant. Correlation was determined using the Pearson correlation coefficient, and a Bland-Altman graph was plotted. Results: Mean IOP measured by the NCT was 15.65 ± 2.80 mmHg, by the RBT was 14.23 ± 3.05 mmHg, and by the GAT was 14.69 ± 2.97 mmHg. The mean CCT was 510.61 ± 33.83 microns. The difference between mean IOP recorded by the NCT and that by the RBT was 1.41 ± 2.39 mmHg, between the NCT and GAT was 0.95 ± 2.03 mmHg, and between the GAT and RBT was 0.45 ± 2.22 mmHg. The difference between the IOP values was statistically significant (P < 0.005). All tonometers showed a statistically significant correlation with CCT, but it was observed that the NCT had a stronger correlation (0.4037). Conclusion: The IOP readings taken by all the three methods were comparable; however, RBT values were closer to GAT values. CCT did influence the IOP values, and this should be kept in mind while evaluating. abstract_id: PUBMED:32690372 Comparison of intraocular pressure measured using the new icare 200™ rebound tonometer and the Perkins™ applanation tonometer in healthy subjects and in patients with primary congenital glaucoma. Objective: To compare intraocular pressure (IOP) measurements obtained using the Icare 200™ (IC200) rebound tonometer and the hand-held version of the Goldmann Applanation Tonometer (Perkins™ tonometer, GAT) in patients with primary congenital glaucoma (PCG) and in healthy subjects. Material And Methods: A total of 42 eyes of healthy subjects (G1) and 40 patients with PCG (G2) were analysed. The following clinical data were collected: gender, age, Cup/Disc ratio, central corneal thickness (CCT). IOP was determined in the examination room using the IC200 and GAT tonometers, in the same order. Agreement between both tonometers was determined using the intraclass correlation coefficient (ICC) and Bland-Altman plot. A linear regression analysis was used to establish whether the IOP was affected by the studied variables. Results: Mean IOP between both tonometers (IC200 minus GAT) was: G1=15.91 (2.57) mmHg vs. 15.06 (2.12) mmHg (mean difference, MD=0.84 (0.50) mmHg; P<.101) and G2=20.10 (6.37) vs. 19.12 (5.62) (MD=0.98 (1.36); P=.474). Excellent agreement was found between IC200 and GAT in both groups (ICC=G1: 0.875 (95% CI; 0.768-0.933; P<.001); G2: 0.924 (95% CI; 0.852-0.961; P<.001), and there was a statistically significant correlation between the IOP difference measured with IC200 and GAT and CCT in G1 (B=0.021; 95% CI; 0.005-0.037; P=.008), but was not statistically significant in G2. Conclusion: There was excellent agreement between the IC200 and GAT tonometers, both in healthy subjects and PCG, with a trend to overestimate IOP when measured with IC200. There was no influence by CCT on IOP measurements in patients with PCG.
Purpose: To determine the prevalence of disposable tonometer versus non-disposable tonometer use in the UK and to determine methods of decontamination and frequency of replacement of prisms. A total of 137 ophthalmology departments were interviewed by telephone using a structured questionnaire. The main outcomes measured were: types of tonometer prisms used in clinic (disposable, non-disposable and/or other); average disposable prisms used per clinic session; average lifespan of non-disposable prisms; and prism preference by glaucoma and other teams within each department. A cost and benefit analysis was then performed on the data acquired. Results: One hundred and fifty-five departments were identified for the survey. Of these, 137 (88.3%) responded. Eighty-one departments (59.1%) used Tonosafe prisms alone, whereas 22 departments (16.1%) used Goldmann non-disposable prisms exclusively. Thirty-five departments (64%) on average have a change rate of 26.5% per year (range: 0-100, median: 20) attributed to damage, loss or theft. Sixteen departments (29%) reported that prisms were used until damaged or lost. Four departments (7%) were uncertain of their prism usage and could not provide further information. Conclusions: The majority of eye departments in the UK opt for disposable prisms. This survey shows the perceived cost-effectiveness of disposable prisms is overestimated when the true cost of disinfection and damage is taken into account. Significant cost savings coupled with the low risk of infectivity (if decontaminated properly) should prompt clinicians and ophthalmic departments worldwide to reconsider the use of non-disposable prisms. abstract_id: PUBMED:32971622 Agreement and repeatability of Icare ic100 tonometer. Purpose: To find the agreement and repeatability of Icare ic100 tonometer. Methods: We included 150 subjects above the age of 18 years for this cross-sectional, multicenter study with intraocular pressure (IOP) ≥7 mmHg. After the initial ophthalmic examination, two masked examiners took five IOP measurements using three different instruments; Icare ic100, Icare TA01i, and Goldmann applanation tonometer (GAT) in only one eye of the participants. Comparison of agreement of IOP using different instruments was quantified with intraclass correlation coefficient (ICC) using the two-way random effects models of absolute agreement and Cronbach's alpha. The test-retest variability of the instruments was assessed by deriving repeatability coefficient (RC) and coefficient of variation (CV). Results: Agreement between the tonometers across the different IOP groups had no statistically significant difference in their mean IOP. Icare ic100 was found to have good reliability across all IOP groups (ICC value >0.78) when compared with Icare TA01i. In comparison with GAT, Icare ic100 showed good reliability across all IOP groups (ICC >0.87) except the >16 to <23 mmHg group where it showed moderate reliability (ICC = 0.52). Icare ic100 showed good repeatability with RC and CV of 2.67 and 4.89, respectively. Conclusion: Icare ic100 rebound tonometer can measure IOP with relatively small measurement error and can provide a reliable and repeatable reading in comparison with GAT across a wide pressure range without hampering corneal health. Answer: The Indurometer, designed to measure the resistance to an applied force and quantify the amount of fibrosis in tissue, was compared to the Tonometer in a study involving 180 participants with unilateral arm lymphedema.
The study aimed to determine the level of variability and repeatability between the two devices and how this variability differed across individual ISL stages. The results showed that the Indurometer gave lower measurement values than the Tonometer and was easier to use. However, the Tonometer was less variable in its measurements, particularly in ISL stages 1 and 2. No significant variability was found in stage 3. The conclusion was that the Indurometer requires modification to improve its variability before it can be considered a replacement for the Tonometer in assessing lymphedema. The study also highlighted a significant issue with the lack of understanding of the ISL classification system (PUBMED:26091409). In the context of this study, the Indurometer is not yet able to replace the Tonometer due to its higher variability in measurements. Further improvements are needed for the Indurometer to match the reliability of the Tonometer in clinical assessments.
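Several of the device-comparison abstracts above report agreement as a mean difference (bias) with 95% limits of agreement (Bland-Altman analysis), and repeatability as variation across repeated readings on the same subject. The Python sketch below illustrates both calculations on synthetic paired readings; the device labels, units, and numbers are hypothetical and do not reproduce any of the cited data, and the limits of agreement assume approximately normally distributed differences.

```python
# Hedged sketch: agreement (Bland-Altman) and repeatability for two devices.
import numpy as np

def bland_altman(a, b):
    """Bias (mean difference a - b) and 95% limits of agreement for paired readings."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def within_subject_cv(repeats):
    """Mean within-subject coefficient of variation; rows = subjects, columns = repeats."""
    repeats = np.asarray(repeats, float)
    return float(np.mean(repeats.std(axis=1, ddof=1) / repeats.mean(axis=1)))

rng = np.random.default_rng(1)
reference = rng.normal(15, 3, 60)                    # e.g., applanation tonometer readings (mmHg)
test_device = reference + rng.normal(1.0, 2.0, 60)   # e.g., rebound tonometer readings (mmHg)
bias, loa = bland_altman(test_device, reference)
print(f"bias = {bias:.2f} mmHg, limits of agreement {loa[0]:.2f} to {loa[1]:.2f} mmHg")

triplicates = rng.normal(20, 2, size=(30, 3))        # e.g., three repeat readings per participant
print(f"within-subject CV = {within_subject_cv(triplicates):.3f}")
```

A small bias with narrow limits of agreement supports interchangeability of two devices, while a low within-subject coefficient of variation is the kind of repeatability the Indurometer would need to demonstrate before it could replace the Tonometer.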
Instruction: Infant social attention: an endophenotype of ASD-related traits? Abstracts: abstract_id: PUBMED:27861851 Infant social attention: an endophenotype of ASD-related traits? Background: As a neurodevelopmental disorder, symptoms of ASD likely emerge from a complex interaction between preexisting genetic vulnerabilities and the child's environment. One way to understand causal paths to ASD is to identify dimensional ASD-related traits that vary in the general population and that predispose individuals with other risk factors toward ASD. Moving beyond behavioral traits to explore underlying neurocognitive processes may further constrain the underlying genetics. Endophenotypes are quantitative, heritable, trait-related differences that are generally assessed with laboratory-based methods, can be identified in the general population, and may be more closely tied to particular causal chains that have a more restricted set of genetic roots. The most fruitful endophenotypes may be those observed in infancy, prior to the emergence of behavioral symptoms that they are hypothesized to cause. Social motivation is an ASD-related trait that is highly heritable. In this study, we investigate whether infant endophenotypes of social attention relate to familial risk for lower social motivation in the general population. Methods: We examined whether infant social attention (measured using habituation, EEG power, and event-related potential tasks previously used in infants/toddlers with ASD) varies quantitatively with parental social motivation in 117 six-month-old and 106 twelve-month-old typically developing infants assessed cross-sectionally. To assess heritable aspects of social motivation, primary caregiver biological parents completed two self-report measures of social avoidance and discomfort that have shown high heritability in previous work. Results: Parents with higher social discomfort and avoidance had infants who showed shorter looks to faces but not objects; reduced theta power during naturalistic social attention; and smaller P400 responses to faces versus objects. Conclusions: Early reductions in social attention are continuously related to lower parental social motivation. Alterations in social attention may be infant endophenotypes of social motivation traits related to ASD. abstract_id: PUBMED:34500376 Social cognition and self-reported ASD traits in young adults who have reported psychotic experiences: A population-based, longitudinal study. Objective: Social cognition is considered a trait marker of psychosis, and has rarely been investigated in young adults who have reported psychotic experiences (PE). The aim of the present study was to explore whether social cognition, self-reported Autism Spectrum Disorder (ASD) traits and functioning were associated with PE. Methods: Participants were invited to take part in a 10-year follow-up study (mean age 20.9 years), of whom 103 participants returned, including 41 who had reported PE either past or current. Social cognition (theory of mind, social perception and locus of control) was assessed and a self-report measure of ASD traits was administered. Psychopathology and global functioning were assessed using the SCID-5. Analyses investigated group differences between PE and controls, and the association between social cognition and functioning in the PE group. A mediation analysis investigated if the association between PE and social cognition was explained by ASD traits. 
Results: The PE group had poorer theory of mind scores (F = 4.22, p = .043), specifically for neutral and negative stimuli, and an external locus of control (F = 5.73, p = .019) in young adulthood. The PE group had a significantly greater number of self-reported ASD traits than the controls (χ2(2) = 10.65). External locus of control had a significant negative correlation with current role functioning in the PE group. The association between PE and social cognition abilities was not mediated by ASD traits. Conclusion: Psychotic experiences are associated with poorer theory of mind and an external locus of control, as well as proportionally more self-reported ASD traits, in young adulthood. abstract_id: PUBMED:29752661 Tracking the Influence of Autistic Traits on Competencies Among School Aged Children with Subthreshold Autistic Traits: A Longitudinal Study. This study aims to further explore the implications of autism spectrum disorder (ASD) symptoms for children who do not meet full diagnostic criteria. More specific characterization of how challenges present relative to traits of ASD such as social responsiveness is vital to developing an understanding of what competency and mental health difficulties these impairments are related to, and if they persist over time. Assessments of autistic traits, clinical symptomatology, and competency were used to quantify the relation of these domains cross-sectionally and across time. Social Responsiveness Scale (SRS) scores significantly contributed to a teacher-report Happy scale from the Teacher's Report Form and a parent-report Social scale from the Child Behavior Checklist. No significant longitudinal models emerged. Splitting the SRS scores into three severity classes revealed that impaired social responsiveness is significantly related to competency, unlike average or below average deficits. Implications of subthreshold ASD traits on competency outcomes are discussed. abstract_id: PUBMED:28000078 Autistic Traits and Symptoms of Social Anxiety are Differentially Related to Attention to Others' Eyes in Social Anxiety Disorder. Autism spectrum disorder (ASD) and social anxiety disorder (SAD) have partly overlapping symptoms. Gaze avoidance has been linked to both SAD and ASD, but little is known about differences in social attention between the two conditions. We studied eye movements in a group of treatment-seeking adolescents with SAD (N = 25), assessing SAD and ASD dimensionally. The results indicated a double dissociation between two measures of social attention and the two symptom dimensions. Controlling for social anxiety, elevated autistic traits were associated with delayed orienting to eyes presented among distractors. In contrast, elevated social anxiety levels were associated with faster orienting away from the eyes, when controlling for autistic traits. This distinction deepens our understanding of ASD and SAD. abstract_id: PUBMED:28244271 Parent-delivered early intervention in infants at risk for ASD: Effects on electrophysiological and habituation measures of social attention. Prospective longitudinal studies of infants with older siblings with autism spectrum disorder (ASD) have indicated that differences in the neurocognitive systems underlying social attention may emerge prior to the child meeting ASD diagnostic criteria. Thus, targeting social attention with early intervention might have the potential to alter developmental trajectories for infants at high risk for ASD.
Electrophysiological and habituation measures of social attention were collected at 6, 12, and 18 months in a group of high-risk infant siblings of children with ASD (N = 33). Between 9 and 11 months of age, infant siblings received a parent-delivered intervention, promoting first relationships (PFR), (n = 19) or on-going assessment without intervention (n = 14). PFR has been previously shown to increase parental responsivity to infant social communicative cues and infant contingent responding. Compared to infants who only received assessment and monitoring, infants who received the intervention showed improvements in neurocognitive metrics of social attention, as reflected in a greater reduction in habituation times to face versus object stimuli between 6 and 12 months, maintained at 18 months; a greater increase in frontal EEG theta power between 6 and 12 months; and a more comparable P400 response to faces and objects at 12 months. The high-risk infants who received the intervention showed a pattern of responses that appeared closer to the normative responses of two groups of age-matched low-risk control participants. Though replication is necessary, these results suggest that early parent-mediated intervention has the potential to impact the brain systems underpinning social attention in infants at familial risk for ASD. Autism Res 2017, 10: 961-972. © 2017 International Society for Autism Research, Wiley Periodicals, Inc. abstract_id: PUBMED:37847979 Cortical responses to social stimuli in infants at elevated likelihood of ASD and/or ADHD: A prospective cross-condition fNIRS study. Autism spectrum disorders (ASD) and attention-deficit hyperactivity disorder (ADHD) are highly prevalent neurodevelopmental conditions that often co-occur and present both common and distinct neurodevelopmental profiles. Studying the developmental pathways leading to the emergence of ASD and/or ADHD symptomatology is crucial in understanding neurodiversity and discovering the mechanisms that underpin it. This study used functional near-infrared spectroscopy (fNIRS) to investigate differences in cortical specialization to social stimuli between 4- to 6-month-old infants at typical and elevated likelihood of ASD and/or ADHD. Results showed that infants at both elevated likelihood of ASD and ADHD had reduced selectivity to vocal sounds in left middle and superior temporal gyrus. Furthermore, infants at elevated likelihood of ASD showed attenuated responses to visual social stimuli in several cortical regions compared to infants at typical likelihood. Individual brain responses to visual social stimuli were associated with later autism traits, but not ADHD traits. These outcomes support our previous observations showing atypical social brain responses in infants at elevated likelihood of ASD and align with later atypical brain responses to social stimuli observed in children and adults with ASD. These findings highlight the importance of characterizing antecedent biomarkers of atypicalities in processing socially relevant information that might contribute to both phenotypic overlap and divergence across ASD and ADHD conditions and their association with the later emergence of behavioural symptoms. abstract_id: PUBMED:33582879 Dysfunctions in Infants' Statistical Learning are Related to Parental Autistic Traits. 
Statistical learning refers to the ability to extract the statistical relations embedded in a sequence, and it plays a crucial role in the development of communicative and social skills that are impacted in the Autism Spectrum Disorder (ASD). Here, we investigated the relationship between infants' SL ability and autistic traits in their parents. Using a visual habituation task, we tested infant offspring of adults (non-diagnosed) who show high (HAT infants) versus low (LAT infants) autistic traits. Results demonstrated that LAT infants learned the statistical structure embedded in a visual sequence, while HAT infants failed. Moreover, infants' SL ability was related to autistic traits in their parents, further suggesting that early dysfunctions in SL might contribute to variabilities in ASD symptoms. abstract_id: PUBMED:30632286 Social complexity and the early social environment affect visual social attention to faces. Diminished attention to socially relevant information appears to be an early emerging risk factor associated with autism spectrum disorders (ASD). However, inconsistencies across studies suggest that atypicalities in visual social attention in infants at high-risk for ASD during the first postnatal year may be subtle and more apparent under certain contexts. Here we explore factors that may moderate developmental trajectories in attention to faces, including the social complexity of the dynamic visual stimuli used to measure visual social attention and the early social environment of the infant as indexed by parental affectedness of ASD-related traits. Across infants at both high (HR) and low risk for ASD, attention to faces increased during the first postnatal year, with overall greater attention being allocated to schematic faces in the simpler video stimulus. Moreover, greater parental affectedness of ASD-related traits was associated with reduced developmental gains in attention to faces. For HR infants, greater attention to faces was positively associated with social communicative competence, including better joint attention skills and lower social impairments. Altogether, our findings highlight the importance of considering developmental level when selecting stimuli to longitudinally examine visual social attention, and the clinical relevance of including measures of infant's social environment in understanding early markers of ASD risk. Autism Res 2019, 12: 445-457 © 2019 International Society for Autism Research, Wiley Periodicals, Inc. LAY SUMMARY: Attention to faces is an important means for infants to learn about the social world. The complexity of the social scene and an infant's early social environment both affect the amount of time infants at high- and low-risk for ASD look at faces during the first postnatal year. For infants at high-risk for ASD, greater attention to faces was associated with better social skills. Understanding an infant's social environment may have a positive impact on social communicative development. abstract_id: PUBMED:31351456 Infant regulatory function acts as a protective factor for later traits of autism spectrum disorder and attention deficit/hyperactivity disorder but not callous unemotional traits. Background: Reduced executive functions (EF) are commonly associated with developmental conditions (e.g., autism spectrum disorder, ASD; attention deficit/hyperactivity disorder, ADHD), although EF seems to be typical in children with callous unemotional (CU) traits. 
Regulatory function (RF) is a proposed infant precursor that maps onto factors driving later EF. Here, we first test whether RF is specifically and negatively associated with ASD and ADHD traits, but not CU traits. Second, we test whether RF can act as a protective factor, by moderating the association between infant markers and subsequent ASD and ADHD traits. Methods: Participants were 79 infants at high (N = 42) and low (N = 37) familial risk for ASD. Data come from the 14-month infant visit (Autism Observational Scale for Infants; AOSI; activity level and RF from the Infant Behavior Questionnaire; IBQ) and the 7-year visit (ASD traits: Social Responsiveness Scale, SRS; ADHD traits: Conners 3, CU traits: Inventory of Callous Unemotional Traits). Results: Infant RF was negatively associated with later traits of ASD (B = - 0.5, p = 0.01) and ADHD inattention (B = - 0.24, p = 0.02) but not hyperactivity (B = - 0.25, p = 0.10) or CU traits (B = 0.02, p = 0.86). RF moderated the association between infant AOSI score and ASD traits, with a significant effect in those with low RF (B = 0.10, p = 0.006), not high RF (B = 0.01, p = 0.78). Similarly, for ADHD, infant activity level was associated with later ADHD inattention in those with low (B = 0.17, p = 0.04) but not high RF (B = 0.07, p = 0.48). For ADHD hyperactivity symptoms, activity level was predictive at both high and low levels of RF. Conclusions: Strong RF may allow children to compensate for other atypicalities, thus attenuating the association between infant markers and later disorder traits. Whilst infant RF was associated with both ASD and ADHD inattention traits, there was no association with ADHD hyperactivity or CU traits. This suggests that any protective effect may not be universal and emphasises the need for a better understanding of the underlying moderating mechanisms. abstract_id: PUBMED:24057094 Social smiling and its components in high-risk infant siblings without later ASD symptomatology. Impaired affective expression, including social smiling, is common in children with autism spectrum disorder (ASD), and may represent an early marker for ASD in their infant siblings (Sibs-ASD). Social smiling and its component behaviors (eye contact and non-social smiling) were examined at 15 months in Sibs-ASD who demonstrated later ASD symptomatology (Sibs-ASD/AS), those who did not (Sibs-ASD/NS), and low-risk controls (Sibs-TD). Both Sibs-ASD subgroups demonstrated lower levels of social smiling than Sibs-TD, suggesting that early social smiling may reflect elevated genetic vulnerability rather than a specific marker for ASD. Only the Sibs-ASD/AS demonstrated less eye contact and non-social smiling than Sibs-TD, suggesting that different processes, threshold effects, or protective factors may underlie social smiling development in the two Sibs-ASD subgroups. Answer: The concept of infant social attention as an endophenotype of ASD-related traits is supported by several studies that have investigated the relationship between early social attention and later ASD traits or familial risk factors for ASD. One study found that parents with higher social discomfort and avoidance had infants who showed shorter looks to faces, reduced theta power during naturalistic social attention, and smaller P400 responses to faces versus objects, suggesting that alterations in social attention may be infant endophenotypes of social motivation traits related to ASD (PUBMED:27861851).
Another study reported that infants at elevated likelihood of ASD showed attenuated responses to visual social stimuli in several cortical regions compared to infants at typical likelihood, with individual brain responses to visual social stimuli associated with later autism traits (PUBMED:37847979). Furthermore, dysfunctions in infants' statistical learning, which plays a crucial role in the development of communicative and social skills, were related to parental autistic traits, indicating that early dysfunctions in statistical learning might contribute to variabilities in ASD symptoms (PUBMED:33582879). Additionally, diminished attention to socially relevant information, such as faces, has been identified as an early emerging risk factor associated with ASD, with greater parental affectedness of ASD-related traits associated with reduced developmental gains in attention to faces (PUBMED:30632286). Moreover, an intervention study showed that parent-delivered early intervention in infants at risk for ASD led to improvements in electrophysiological and habituation measures of social attention, suggesting the potential to impact the brain systems underpinning social attention in infants at familial risk for ASD (PUBMED:28244271). In summary, these findings support the notion that infant social attention may serve as an endophenotype of ASD-related traits, with early alterations in social attention potentially predisposing individuals to ASD when combined with other risk factors.
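The analyses summarized in this answer relate a continuous parental trait measure to infant attention measures, and, in the regulatory-function study, test moderation through an interaction term. The Python sketch below shows those two analysis patterns in their simplest form; every variable name, coefficient, and data point is hypothetical and generated for illustration only.

```python
# Hedged sketch: correlation and a simple moderation (interaction) model.
# Synthetic data only; not taken from any of the cited studies.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 117
parent_social_avoidance = rng.normal(0, 1, n)                                      # standardized questionnaire score
infant_look_to_faces = 8 - 0.6 * parent_social_avoidance + rng.normal(0, 1.5, n)   # look duration in seconds (synthetic)

# Simple association, as in the endophenotype analyses
r, p = stats.pearsonr(parent_social_avoidance, infant_look_to_faces)
print(f"r = {r:.2f}, p = {p:.3f}")

# Moderation: does regulatory function (rf) change the slope of an early marker on later traits?
rf = rng.normal(0, 1, n)
early_marker = rng.normal(0, 1, n)
later_traits = 0.3 * early_marker - 0.2 * rf - 0.25 * early_marker * rf + rng.normal(0, 1, n)
X = np.column_stack([np.ones(n), early_marker, rf, early_marker * rf])
beta, *_ = np.linalg.lstsq(X, later_traits, rcond=None)
print(f"interaction coefficient ~ {beta[3]:.2f}")   # a non-zero estimate suggests moderation
```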
Instruction: Is the ratio of transmitral peak E-wave velocity to color flow propagation velocity useful for evaluating the severity of heart failure in atrial fibrillation? Abstracts: abstract_id: PUBMED:15564696 Is the ratio of transmitral peak E-wave velocity to color flow propagation velocity useful for evaluating the severity of heart failure in atrial fibrillation? Background: Although analysis of the transmitral inflow (TMF) pattern is widely used for evaluating left ventricular diastolic function and provides valuable information for the management of heart failure (HF) in sinus rhythm, its utility in patients with atrial fibrillation (AF) has not been established. The aim of this study was to investigate the relationship between the ratio of transmitral peak E-wave velocity to flow propagation velocity (E/Vp) obtained by a newly developed dual Doppler system and the plasma B-type natriuretic peptide (BNP) concentration or pulmonary capillary wedge pressure (PCWP) for evaluating the severity of heart failure with AF. Methods And Results: In 68 patients with AF, the E/Vp was compared with plasma BNP concentration and PCWP. A cutoff value of ≥1.7 for E/Vp predicted a plasma BNP concentration of ≥200 pg/ml, with 80% sensitivity and 84% specificity. Only E/Vp was found to be independently significant by stepwise multilinear regression analysis (r=0.40, p=0.01). PCWP values had good correlation with E/Vp (r=0.63, p<0.01) and were significantly higher in the group with E/Vp ≥1.7 (16+/-6 mmHg vs 11+/-4 mmHg, p<0.05). Conclusion: The Doppler-derived index of E/Vp correlated well with the neurohormonal and hemodynamic parameters, and was useful for evaluating the severity of heart failure with AF. abstract_id: PUBMED:28785715 Echocardiographic diastolic function assessment is of modest utility in patients with persistent and longstanding persistent atrial fibrillation. Background: Detection of concurrent diastolic dysfunction (DD) may be beneficial in patients with persistent and longstanding persistent atrial fibrillation (AF). The role of transthoracic echocardiography (TTE) in assessing DD in patients with AF has not been well characterized. We sought to determine the utility of TTE in detecting elevated left atrial pressure (LAP) in patients with persistent and longstanding persistent non-valvular AF using directly measured LAP as the reference standard. Methods: We retrospectively studied 157 patients with persistent AF and preserved left ventricular ejection fraction who underwent pulmonary vein isolation (PVI). LAP was determined in conjunction with trans-septal puncture at the time of catheter ablation. TTE was performed 1 day after PVI and included two dimensional, pulse wave spectral Doppler and tissue Doppler assessments. Results: The clinical parameter that strongly correlated with elevated LAP is longstanding persistent AF. The four strongest TTE parameters identified to moderately correlate with LAP include 1. left atrial minimum volume (LAVmin), 2. peak velocity of early mitral diastolic inflow velocity (E), 3. pulmonary vein systolic flow velocity (PVS), and 4. ratio of early diastolic transmitral inflow velocity to mitral annular velocity at the lateral site (E/E' lateral). Conclusion: Accurate assessment of diastolic dysfunction in patients with persistent and longstanding persistent AF is difficult using TTE. A combination of LAVmin, PVS, and E might be helpful to determine elevated LAP.
abstract_id: PUBMED:36647000 Automated 4D flow cardiac MRI pipeline to derive peak mitral inflow diastolic velocities using short-axis cine stack: two centre validation study against echocardiographic pulse-wave Doppler. Background: Measurement of peak velocities is important in the evaluation of heart failure. This study compared the performance of automated 4D flow cardiac MRI (CMR) with traditional transthoracic Doppler echocardiography (TTE) for the measurement of mitral inflow peak diastolic velocities. Methods: Patients with Doppler echocardiography and 4D flow cardiac magnetic resonance data were included retrospectively. An established automated technique was used to segment the left ventricular transvalvular flow using short-axis cine stack of images. Peak mitral E-wave and peak mitral A-wave velocities were automatically derived using in-plane velocity maps of transvalvular flow. Additionally, we checked the agreement between peak mitral E-wave velocity derived by 4D flow CMR and Doppler echocardiography in patients with sinus rhythm and atrial fibrillation (AF) separately. Results: Forty-eight patients were included (median age 69 years, IQR 63 to 76; 46% female). Data were split into three groups according to heart rhythm. The median peak E-wave mitral inflow velocity by automated 4D flow CMR was comparable with Doppler echocardiography in all patients (0.90 ± 0.43 m/s vs 0.94 ± 0.48 m/s, P = 0.132), sinus rhythm-only group (0.88 ± 0.35 m/s vs 0.86 ± 0.38 m/s, P = 0.54) and in AF-only group (1.33 ± 0.56 m/s vs 1.18 ± 0.47 m/s, P = 0.06). Peak A-wave mitral inflow velocity results had no significant difference between Doppler TTE and automated 4D flow CMR (0.81 ± 0.44 m/s vs 0.81 ± 0.53 m/s, P = 0.09) in all patients and sinus rhythm-only groups. Automated 4D flow CMR showed a significant correlation with TTE for measurement of peak E-wave in all patients group (r = 0.73, P < 0.001) and peak A-wave velocities (r = 0.88, P < 0.001). Moreover, there was a significant correlation between automated 4D flow CMR and TTE for peak-E wave velocity in sinus rhythm-only patients (r = 0.68, P < 0.001) and AF-only patients (r = 0.81, P = 0.014). Excellent intra- and inter-observer variability was demonstrated for both parameters. Conclusion: Automated dynamic peak mitral inflow diastolic velocity tracing using 4D flow CMR is comparable to Doppler echocardiography and has excellent repeatability for clinical use. However, 4D flow CMR can potentially underestimate peak velocity in patients with AF. abstract_id: PUBMED:36340256 Prognostic Impact of Echocardiographic Congestion Grade in HFpEF With and Without Atrial Fibrillation. Background: Atrial fibrillation (AF) is common in heart failure with preserved ejection fraction (HFpEF). Objectives: This study aimed to investigate the prognostic value of echocardiographic markers of congestion that can be applied to patients with HFpEF both with and without AF. Methods: We conducted a multicenter study of 505 patients with HFpEF admitted to hospitals for acute decompensated heart failure. The ratio of early diastolic transmitral flow velocity to mitral annulus velocity (E/e'), the tricuspid regurgitation peak velocity, and the collapsibility of the inferior vena cava were obtained at discharge. Congestion was determined by echocardiography if any one of E/e' ≥14 (E/e' ≥11 for AF), tricuspid regurgitation peak velocity ≥2.8 m/s, or inferior vena cava collapsibility <50% was positive.
We classified patients into grade A, grade B, and grade C according to the number of positive congestion indices. The primary endpoint was the composite of cardiovascular death and heart failure hospitalization. Results: During the follow-up period (median: 373 days), 162 (32%) patients experienced the primary endpoint. Grade C patients had a higher risk for the primary endpoint than grade A (HR: 2.98; 95% CI: 1.97-4.52) and grade B patients (HR: 1.92; 95% CI: 1.29-2.86) (log-rank P < 0.0001). Echocardiographic congestion grade improved the predictive value when added to the age, sex, New York Heart Association functional class, and N-terminal pro-B-type natriuretic peptide, not only in sinus rhythm (Uno C-statistic: 0.670 vs 0.655) but in AF (Uno C-statistic: 0.667 vs 0.639). Conclusions: Echocardiographic congestion grade has prognostic value in patients with HFpEF with and without AF. abstract_id: PUBMED:27278798 Is the E/Vp index useful for evaluating prognosis in chronic heart failure with atrial fibrillation? A pilot study. Background: The ratio of transmitral peak E wave velocity to color flow propagation velocity (E/Vp index) has proved to be a significant predictor of prognosis in cardiac diseases with sinus rhythm. However, its usefulness in patients with atrial fibrillation (AF) and heart failure has not yet been established. The aim of this study was to determine the feasibility of using the E/Vp index for the prediction of mortality and heart failure hospitalization in this group. Methods: We studied 66 ambulatory patients with stable congestive heart failure (CHF) functional class I-III and AF. Patients were divided into group A and B according to an E/Vp index <1.5 and ≥1.5, respectively. Results: During follow-up (average 430 days) events were more common in group B (75 vs. 17%, log rank test; hazard ratio (HR) = 6.8). By means of multivariate logistic regression analysis, E/Vp proved to be an independent predictor of events (p = 0.0012). Conclusions: In our patients with stable CHF and AF the E/Vp index is a significant predictor of clinical outcome. abstract_id: PUBMED:17390200 Simultaneous echocardiographic evaluation of coronary flow velocity reserve and aortic distensibility indices in hypertension. The coronary flow velocity reserve (CFVR), a measure of endothelial function in coronary circulation, can be measured semi-invasively in the left anterior descending coronary artery by using stress transesophageal echocardiography (TEE). During the same stress TEE, aortic distensibility indices (elastic modulus [E(p)] and Young's circumferential static elastic modulus [E(s)]) can be assessed. The purpose of the present study was to examine whether stress TEE is a valuable method for parallel evaluation of CFVR, E(p), and E(s) in patients with hypertension. A total of 38 patients with chest pain but with a negative coronary angiogram were enrolled into the present study, which included examination of the presence or absence of hypertension. Significant coronary stenosis was considered present in the event of a luminal diameter reduction of >50% on use of the "worst view method." Patients with significant coronary or valvular heart disease, atrial fibrillation, heart failure, unstable angina pectoris, acute myocardial infarction, or diabetes mellitus were excluded from the study. Stress TEE was performed in each case to evaluate CFVR, E(p), and E(s).
Coronary flow velocity reserve and diastolic coronary flow velocities measured at peak stress were decreased in hypertensive patients as compared with normotensive subjects. E(p) and E(s) were significantly increased in hypertensive patients. In conclusion, it can be stated that stress TEE is a useful tool for the simultaneous evaluation of the CFVR, E(p), and E(s) in hypertension. The CFVR and aortic distensibility are decreased in hypertension. abstract_id: PUBMED:18773996 Prognostic value of pulse-wave tissue Doppler parameters in patients with systolic heart failure. The aim was to study the prognostic value of left ventricular (LV) function using pulse-wave tissue Doppler imaging (TDI) in an ordinary population with heart failure (HF). One hundred fifty-six patients hospitalized for HF and LV ejection fraction ≤40% were examined using conventional echocardiography and pulse-wave TDI for the assessment of longitudinal LV function. Mitral annular systolic and early diastolic (e') velocities were recorded from a mean of 4 annular sites from the apical 2- and 4-chamber views. Noninvasive LV filling pressure was calculated from the ratio between transmitral early inflow velocity (E) and e'. All patients were followed up for 2 years, and data from the National Registry of Deaths were collected. Mean LV ejection fraction was 24.7 +/- 7.2%. TDI recordings showed a mean mitral annular systolic velocity of 5.0 +/- 1.0 cm/s and e' velocity of 6.2 +/- 1.9 cm/s. E/e' ratio was 14.1 +/- 4.8. Thirty patients (19%) had atrial fibrillation. During follow-up, 27 patients (17%) died of a cardiovascular cause. Multivariate analysis showed that only E/e' ratio and age were predictors of cardiovascular mortality. A cut-off value for E/e' ratio >13 had sensitivity of 84% and specificity of 45% to identify patients who died within 2 years of cardiac reasons. In conclusion, in the acute stage of HF, E/e' ratio is a strong independent predictor of long-term cardiovascular mortality in an ordinary population with HF and systolic dysfunction. Systolic and diastolic velocities had no independent prognostic value. abstract_id: PUBMED:12714150 Doppler transmitral flow indexes and risk of atrial fibrillation (the Framingham Heart Study). Atrial fibrillation (AF) is characterized by structural remodeling and atrial systolic failure. It is unclear if atrial filling abnormalities precede the onset of AF. We evaluated 942 Framingham Study subjects (587 women; mean age 75 years) who underwent Doppler echocardiographic evaluation at a routine examination and who did not have a history of AF. We used multivariable Cox regression models (stratified by gender and prevalent cardiovascular disease) to examine the relations of Doppler transmitral flow indexes (ratio of the velocity-time integrals of the early [E] and late [A] diastolic filling waves [VTI E/A], a correlate of atrial conduit function; E-wave deceleration time; the atrial filling fraction, an index of atrial systolic function; and peak A wave velocity) to the incidence of AF. At follow-up (mean 7 years), 85 subjects (41 women) developed AF. In models adjusting for established risk factors for AF (including left atrial size) at baseline, and for heart failure and myocardial infarction on follow-up, a 1 SD increment in VTI E/A was associated with a 28% increase in risk of AF (hazards ratio 1.28, 95% confidence interval 1.02 to 1.59).
A 1 SD decrease in the atrial filling fraction was associated with a 28% higher risk of AF (hazards ratio 1.28, 95% confidence interval 0.98 to 1.67). There was a U-shaped relation between peak A-wave velocity and risk of AF. Thus, in our elderly community-based sample, increased VTI E/A and a low atrial filling fraction were markers of increased risk of AF, suggesting that altered atrial filling may antedate AF. abstract_id: PUBMED:28929616 Noninvasive assessment of left ventricular end-diastolic pressure by deceleration time of early diastolic mitral annular velocity in patients with heart failure. Objective: To examine the diagnostic ability of the deceleration time of early mitral annular velocity (e'DT) as determined by tissue Doppler velocity image, a method for assessing LV filling pressure. Backgrounds: Estimation of LV filling pressure by Doppler echocardiography requires a combination of various parameters. Therefore, there remains a need for a simple index in LV filling pressure estimation. The e' is known to be reduced and delayed with increased LV filling pressure during development of heart failure. Thus, we hypothesized that e'DT would be shortened as LV filling pressure is increased. Methods: Simultaneous LV end-diastolic pressure (LVEDP) measurement and Doppler echocardiography were performed in 94 patients who were admitted to our hospital for heart failure. Exclusion criteria were atrial fibrillation, mitral valve surgery, and acute coronary syndrome. Results: The e'DT in 31 patients with LVEDP >16 mm Hg (68±13 ms) was significantly shorter than that in 63 patients with LVEDP ≤16 mm Hg (103±27 ms). Both e'DT and early transmitral flow velocity (E)/e' were significantly correlated with LVEDP. In 30 patients with 10-14 E/e', the correlation with e'DT remained significant, while that with E/e' did not. The area under the ROC curve for prediction of LVEDP >16 mm Hg for e'DT was greater than that for E/e' (0.91 vs 0.74, P=.046). Conclusion: The e'DT is useful to assess LV filling pressure, especially in 10-14 E/e'. This simple tissue Doppler index may be a potential parameter for easily distinguishing between mild and severe heart failures. abstract_id: PUBMED:26053342 Utility of Isovolumic Contraction Peak Velocity for Evaluation of Adult Patient Status after Transcatheter Closure of Atrial Septal Defect. Background: Transcatheter closure is a well-established treatment for patients with atrial septal defect (ASD), but long-term outcome prognostic factors for adults have not been fully identified yet. Methods: Forty-nine consecutive patients (age 57 ± 17 years, 59% female), who underwent transcatheter closure of ASD, were the subjects of this study. Transthoracic echocardiography was performed before and midterm after the procedure (6 ± 1 months). Isovolumic contraction peak velocity (IVV) was measured at the lateral site of the tricuspid annulus using spectral tissue Doppler imaging, and ΔIVV was determined as the absolute change at midterm follow-up. Long-term unfavorable outcome events, tracked for 19 ± 9 months, were prespecified as primary end points comprising newly developed atrial fibrillation, cerebral infarction, and heart failure. Results: Symptomatic improvement, defined as an improvement in New York Heart Association functional class by one grade or more at midterm after the procedure, was observed in 24 patients (49%), and the remaining 25 (51%) were classified as not symptomatically improved.
ΔIVV was significantly larger for patients with symptomatic improvement than for those without (from 11.5 ± 4.3 cm/s to 14.2 ± 3.7 cm/s vs. from 11.8 ± 4.1 cm/s to 12.5 ± 2.9 cm/s; P = 0.045). An important finding of the multivariate Cox proportional-hazards analysis was that only ΔIVV was independently associated with cardiovascular events (HR: 0.701; 95% CI 0.537-0.916; P = 0.01). Kaplan-Meier analysis showed that more patients with enhanced ΔIVV presented with favorable long-term outcome than those with diminished ΔIVV (log-rank P = 0.0001). Conclusions: IVV, which is a less volume-sensitive parameter, can be useful for comprehensive evaluation of ASD patients referred for transcatheter closure. Answer: Yes, the ratio of transmitral peak E-wave velocity to color flow propagation velocity (E/Vp) is useful for evaluating the severity of heart failure in atrial fibrillation (AF). A study found that the E/Vp ratio correlated well with neurohormonal and hemodynamic parameters in patients with AF. Specifically, a cutoff value of ≥1.7 for E/Vp predicted a plasma B-type natriuretic peptide (BNP) concentration of ≥200 pg/ml with 80% sensitivity and 84% specificity. Additionally, pulmonary capillary wedge pressure (PCWP) values had a good correlation with E/Vp, and PCWP was significantly higher in the group with E/Vp ≥1.7. This suggests that the E/Vp ratio is a useful Doppler-derived index for evaluating the severity of heart failure with AF (PUBMED:15564696). Moreover, another study indicated that the E/Vp index is a significant predictor of clinical outcomes in patients with stable congestive heart failure (CHF) and AF. Patients with an E/Vp index ≥1.5 experienced more events compared to those with an E/Vp index <1.5, and E/Vp proved to be an independent predictor of events (PUBMED:27278798). These findings support the utility of the E/Vp ratio as a noninvasive tool for assessing the severity and prognosis of heart failure in patients with atrial fibrillation.
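As an illustrative aside (not drawn from the cited studies): the E/Vp answer above rests on evaluating a diagnostic cutoff (E/Vp ≥1.7) against a reference standard (BNP ≥200 pg/ml) in terms of sensitivity and specificity. The short Python sketch below shows how those two figures are computed from paired measurements; the function name and all patient values are invented for demonstration only.

def sensitivity_specificity(pairs, evp_cutoff=1.7, bnp_cutoff=200.0):
    # pairs: iterable of (E/Vp, BNP in pg/ml) measurements, one tuple per patient
    tp = fp = tn = fn = 0
    for evp, bnp in pairs:
        test_positive = evp >= evp_cutoff      # index test at the proposed cutoff
        disease_present = bnp >= bnp_cutoff    # reference standard: elevated BNP
        if test_positive and disease_present:
            tp += 1
        elif test_positive:
            fp += 1
        elif disease_present:
            fn += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

# Hypothetical example values only:
patients = [(2.1, 350), (1.2, 90), (1.9, 240), (1.5, 180), (2.4, 410), (1.1, 60)]
print(sensitivity_specificity(patients))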
Instruction: Can a rural community-based work-related activity program make a difference for chronic pain-disabled injured workers? Abstracts: abstract_id: PUBMED:17542788 Can a rural community-based work-related activity program make a difference for chronic pain-disabled injured workers? Objective: To assess the effectiveness of a clinical guidelines-informed multidisciplinary work-related activity program, and to improve the physical, psychological and occupational functioning of chronic pain-disabled injured workers. Design: An uncontrolled, repeated-measures, pilot study was conducted. Setting: The intervention was delivered in a community setting in regional New South Wales. Participants: Participants (n = 30), mean age of 41 years, had a compensable musculoskeletal injury: 60% were male, 63% had back injuries; the mean time off work was 13 months. Intervention: A cognitive-behavioural, interdisciplinary intervention was delivered using a multi-contributor provider model (a clinical psychologist and physiotherapist from separate practices, working in liaison with the participant's occupational rehabilitation provider and treating doctor). Groups of six participants attended for one half day per week for six weeks. Main Outcome Measures: The outcome measures included: physical functioning, pain intensity and psychological variables, which were assessed pre- and post-program; and medical certification and work participation, which was recorded pre-program and at six-month follow up. Results: Significant gains were made in pain intensity, physical and psychological functioning, and medical certification. The mean effect size of the intervention was medium to large (d = 0.70). There was no significant change in employment status at six-month follow up. Conclusions: The results of this pilot study suggest that independent, rural or community-based practitioners, working collaboratively using an integrated treatment program, can produce positive outcomes for pain-disabled injured workers, and achieve results similar to those reported by metropolitan-based pain clinics. abstract_id: PUBMED:23618646 Mental health status of Ontario injured workers with permanent impairments. Objective: Each year, approximately 31,000 Canadian injured worker claimants are certified to have permanent impairments associated with the initial workplace incident. Permanent impairments are characterized by ongoing pain, and limitations in physical function and activity participation--all predisposing factors to mental health problems. Here we examine the post-accident mental health status of a sample of Ontario injured workers with permanent impairments. Methods: We analyze data from a cross-sectional telephone survey of 494 injured workers. Mental health status is examined using nine dichotomous diagnostic, symptomatic and functional mental health indicators identified by survey respondents as non-present, or having pre- or post-injury onset, and the CES-D. We describe the relationship of these indicators and work injury, demographic and socio-economic factors. Results: Post-injury onset mental health problems are elevated compared to pre-injury onset in seven of nine indicators. Diagnosed depression, medication abuse, inability to concentrate, and sleep problems are elevated compared to general Canadian population prevalence. Diagnosed depression is elevated compared to populations with pain and chronic health conditions. Higher education and pre-injury income are associated with lower depressive symptoms. 
Men and older individuals are less likely to report a diagnosis of depression. Older individuals are less likely to report concentration problems. Conclusion: These data paint a troubling mental health picture among injured workers with permanent impairments. Implications for return-to-work and income recovery, health service access and rehabilitation within and outside the compensation system are discussed. abstract_id: PUBMED:26759531 Adding Genetic Testing to Evidence-Based Guidelines to Determine the Safest and Most Effective Chronic Pain Treatment for Injured Workers. Published guidelines for treating injured workers include the need for personalized treatment to manage chronic pain symptoms and increase functional status. However, they often fail to clarify how to objectively personalize these treatments. Further, certain patients need analgesic relief beyond the short term. In these cases, it is not sufficient or reasonable to utilize the typical broad protocol-based justifications for reduction of opioids and other medications in a haphazard manner based purely on poor response, without attempting to elucidate possible pharmacogenetic reasons for this. These guidelines acknowledge the problem of substance abuse and set forth methods for treatment and prevention. Although it has been established in the scientific community that an individual's experience of pain and likelihood for addiction both have genetic components, genetic testing is not routinely included as part of the overall treatment plan for injured workers with chronic pain. Because decisions in cases of workplace injury should be based on scientific evidence, genetic testing results can add some objective information to the existing subjective and objective clinical data; help ascertain the efficacy and potential for toxicity of treatment; and therefore provide more information for accurate clinical decisions. We propose the addition of genetic testing to consensus guidelines for treating injured workers in order to improve patients' functional status, increase productivity, improve safety of prescribing, decrease the likelihood of substance abuse, and save on overall healthcare costs. abstract_id: PUBMED:26231959 Guideline for Prescribing Opioids to Treat Pain in Injured Workers. Recently, there has been a dramatic increase in the use of opioids to treat chronic noncancer pain. Opioids are also being prescribed in stronger potencies and larger doses for musculoskeletal injuries. In some cases, the use of opioids for work-related injuries may actually increase the likelihood of disability. Chronic opioid use is associated with increased risk for overdose morbidity and mortality and other nonfatal adverse outcomes. The risk of dependence and addiction is much more common than previously thought. This guideline provides recommendations for prudent opioid prescribing and addresses issues critical to the care and rehabilitation of injured workers. abstract_id: PUBMED:32651725 Workforce Reintegration After Work-Related Permanent Impairment: A Look at the First Year After Workers' Compensation Claim Closure. Purpose The purpose of this study was to descriptively quantify experiences of injured workers with permanent impairment during their first year of work reintegration. Methods A representative survey was conducted to characterize health, disability, pain, employment, reinjury, and economic outcomes for 598 workers with permanent impairment who had returned to work during the year after workers' compensation claim closure. 
Survey responses were summarized by degree of whole body impairment (< 10% vs. ≥ 10%). Results Injured workers who had returned to work reported that permanent impairment made it difficult to get a job (47%) and to keep their job (58%). A year after claim closure, 66% reported moderate to very severe pain; 40% reported pain interference with work. About 13% reported new work injuries; over half thought permanent impairment increased their reinjury risk. Asked to compare current to pre-injury work status, workers with a higher degree of impairment more frequently reported working fewer hours (OR 1.60; 95% CI 1.06, 2.42), earning less (OR 1.56; 95% CI 1.04, 2.36), and being at higher risk of losing their current job due to their impairment (OR 1.66; 95% CI 1.01, 2.71). Conclusions Injured workers with permanent impairment face long-term challenges related to health limitations, chronic pain, work reintegration, and economic impacts. Workers with a higher degree of impairment more frequently reported several economic and job security challenges. Developing workplace and workers' compensation-based interventions that reduce return-to-work interruption and reinjury for workers with permanent impairment should be prioritized as an important public health and societal goal. abstract_id: PUBMED:35708644 Education and Consultation Needs of Social Workers in Practice with Rural Older Adults. The aging population and shortage of primary care physicians lead to increasing gaps in access to rural geriatric healthcare. Of concern is the lack of access to geriatric expertise, leading to adverse effects on rural older adults' health outcomes and quality of life. The Geriatric Rural Extension of Expertise through Telegeriatric Services (also known as GREETS) project surveyed rural physical and behavioral healthcare practitioners to identify gaps in geriatric competencies in the rural workforce. Using the Qualtrics platform, a survey was distributed to professional membership lists throughout the State of Michigan. A total of 106 responses were completed, of which 50 were from respondents who identified their profession as a social worker. As would be expected based on social workers' scope of practice and the settings in which they provide services, social worker respondents noted a higher need than the other practitioner respondents for education related to (a) managing chronic pain, (b) managing care of patients with multiple chronic conditions, (c) having serious illness conversations, (d) diagnosing dementia, and (e) discussing advance care planning. Having identified these needs provides a basis for identifying and implementing training and resources for social work and other disciplines involved in geriatric care and services. abstract_id: PUBMED:34425422 Impacts on patient-centered outcomes of a chronic pain self-management program in a rural community: A feasibility study. This study explored if a self-management training program was feasible for predominantly older rural Latino adults with chronic pain who had limited access to non-pharmacologically based pain treatment. Physical therapy doctoral students delivered the six-week low-literacy low-cost patient-centered program. The intervention was feasible to the participants (n=38) who showed improvement in a majority of the eight outcome measures at 6-week posttest and three measures at 18-week follow-up.
The changes in pain severity, pain interference and pain-related physical functions reached minimally clinically important difference at follow-up. A randomized controlled trial with long-term follow-up is needed to test the program effectiveness in partnership with community health centers to increase access to pain management in rural communities. abstract_id: PUBMED:32568155 Developing guidelines to support injured workers who live and work with chronic pain. Background: Living and working with chronic pain requires persons to alter lifestyles and have the knowledge as well as support to manage unforeseen challenges. Knowledge for persons living with pain who want to participate in meaningful paid and unpaid work is not easily accessible. While there is literature on chronic pain management, work transitions and return to work, less emphasis has been placed on the complexity of living and working with chronic pain. The Creating a Way Forward Project was envisioned to address this gap and to identify the informational needs of workers with pain, health/helping professionals (workers' advisors, return to work specialists, legal representatives), and stakeholders. The overarching aim of the project was to use evidence and experiential knowledge to inform the development of a foundation for educational guides and toolkits to support workers with pain to achieve their outcomes for remaining at work. Methods: Phase one of the project involved a scoping review of chronic pain and work. Phase two involved stakeholder consultations, a focus group and knowledge integration of the literature and experiential insights. Knowledge synthesis drew on a Template Analysis of multiple sources of data. Results: Knowledge domains and key components were identified for persons with pain and for the health/helping professions. Conclusion: These domains reflect a foundation for knowledge in practical training and the development of curriculum for education in self-management program and in inter professional health profession education. These knowledge domains provide a basis for future research in integrated approaches and knowledge use toward improving transitions for persons living with chronic pain who want to participate in productive paid and unpaid work. Ongoing research in knowledge domains that health providers and persons with pain need will expand the potential for improving health outcomes in living with and managing pain. abstract_id: PUBMED:37996721 Workers' Worries, Pain, Psychosocial Factors, and Margin of Manoeuvre, in Relation to Outcomes in a Return-to-Work Program: An Exploratory Study. Purpose: To explore the intensity and variation of workers' worries, pain, psychosocial factors, and margin of manoeuvre before and after a return-to-work program, and identified the psychosocial factors associated with non-return to work at the end of the rehabilitation program. Methods: A pre-post study design was used. A convenience sample of 80 workers starting a return-to-work program and having a compensated musculoskeletal injury that caused an absence of more than three months from their regular work was recruited. Data were collected at baseline and at the end of the rehabilitation program on the nature of the worries and maintenance factors defined in Dugas' generalized anxiety and worry model, using validated questionnaires. The margin of manoeuvre was assessed by the treating occupational therapist. 
A series of descriptive analyses were performed, as well as Generalized Estimating Equations analyses. Results: Workers' worries were work-related or disability-related 83% of the time at baseline. These worries were essentially based on the situation then occurring at work 90% of the time. For the Generalized Estimating Equations analyses on work status, the final model was significant, explaining 54% of the variance in non-return to work (Pseudo R2 = 0.54; p = 0.0001). Workers were 8.52 times less likely to return to work when the margin of manoeuvre was insufficient, and twice as likely not to return to work in the presence of intense worry. Worries were significantly associated with insufficient margin of manoeuvre. Conclusion: A strong association between workers' lack of margin of manoeuvre at work and their worries about their return to work, and poor work outcomes, supports the importance of the worker-environment interaction in rehabilitation programs. abstract_id: PUBMED:17955346 Rehabilitation of injured workers with chronic pain: a stage of change phenomenon. Objective: This study was designed to explore the stages of change involved in return to work (RTW) for a group of workers who had been suffering from chronic pain. Method: The participants were 67 (mean age = 42.67) injured workers who participated in a six-week RTW program. Assessments of the participants on various measures were conducted before and during the program, and at the end of the follow-up period (3 months post-program). Results: The overall RTW rate of the participants was 65.7%. Most importantly, the participants' RTW outcomes were found to be significantly predicted by their readiness for RTW (action) at the baseline. In the third week, their RTW was significantly predicted by their confidence in returning to work and their readiness for RTW (action). By the sixth week, their RTW outcomes were predicted by their readiness for RTW (pre-contemplation). Conclusion: The results enable us to better understand the stage-of-change factors that might influence injured workers' resumption of a productive work role and the rehabilitation process of RTW. Answer: Yes, a rural community-based work-related activity program can make a difference for chronic pain-disabled injured workers. A pilot study conducted in a community setting in regional New South Wales demonstrated that an interdisciplinary intervention informed by clinical guidelines could produce significant gains in pain intensity, physical and psychological functioning, and medical certification for chronic pain-disabled injured workers. The intervention, which was cognitive-behavioral and delivered by a multi-contributor provider model, showed a medium to large mean effect size (d = 0.70). However, there was no significant change in employment status at the six-month follow-up (PUBMED:17542788). This suggests that while such programs can improve various aspects of functioning among injured workers, translating these improvements into increased employment may require additional strategies or longer-term support.
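As an illustrative aside (not drawn from the cited studies): the pilot study summarized in the answer above reports its treatment effect as a Cohen's d of 0.70. The Python sketch below shows the pooled-standard-deviation form of that calculation applied to baseline and post-program scores; the score values are hypothetical and the pooled form is only one common convention for pre/post data.

from statistics import mean, stdev

def cohens_d(pre, post):
    # Cohen's d with a pooled standard deviation across the two score sets.
    pooled_var = (((len(pre) - 1) * stdev(pre) ** 2 +
                   (len(post) - 1) * stdev(post) ** 2) /
                  (len(pre) + len(post) - 2))
    return (mean(post) - mean(pre)) / pooled_var ** 0.5

pre_scores = [42, 50, 38, 55, 47, 40]    # hypothetical baseline functioning scores
post_scores = [47, 54, 42, 59, 51, 45]   # hypothetical post-program scores
print(round(cohens_d(pre_scores, post_scores), 2))   # prints roughly 0.68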
Instruction: Isolated fractures of the greater tuberosity in proximal humerus: does the direction of displacement influence functional outcome? Abstracts: abstract_id: PUBMED:24684914 Surgical management of isolated greater tuberosity fractures of the proximal humerus. Because the greater tuberosity is the insertion site of the posterior superior rotator cuff, fractures can have a substantial impact on functional outcome. Isolated fractures should not inadvertently be trivialized. Thorough patient evaluation is required to make an appropriate treatment decision. In most cases surgical management is considered when there is displacement of 5 mm or greater. Although reduction of displaced greater tuberosity fractures has traditionally been performed with open techniques, arthroscopic techniques are now available. The most reliable techniques of fixation of the greater tuberosity incorporate the rotator cuff tendon bone junction rather than direct bone-to-bone fixation. abstract_id: PUBMED:24458167 Isolated fractures of the greater tuberosity in proximal humerus: does the direction of displacement influence functional outcome? An analysis of displacement in greater tuberosity fractures. Background And Aim Of The Work: The treatment of isolated fractures of the greater tuberosity of the proximal humerus is still debated, especially in the case of minimally displaced fractures. Differently from other proximal humerus fractures, this kind of injury frequently affects young patients with high functional demands. The aim of this study is to assess the outcome of patients treated conservatively for isolated greater tuberosity fractures, comparing the clinical results in patients with different direction of fracture displacement. Methods: 38 patients with isolated fracture of the greater tuberosity participated in the study and were evaluated at 17 months (10-26) of follow-up using the Constant Murley Score and DASH Score. Patients were divided into groups with different direction of displacement (10 no displacement, 13 postero-superior, 6 antero-superior and 9 antero-inferior displacement). Results: Patients with undisplaced fractures had a mean Constant Murley Score of 82 and a mean DASH Score of 15; those with postero-superior displacement had a mean Constant Murley Score of 61 and a mean DASH Score of 43; for antero-superior displacement the mean Constant Murley Score was 76 and the mean DASH Score was 18; for the antero-inferiorly displaced the mean Constant Murley Score was 83 and the DASH Score 16. Conclusion: Data suggest that postero-superior displacement leads to poorer outcomes than fractures that are undisplaced or displaced in other directions when treated conservatively. abstract_id: PUBMED:29486672 Reverse shoulder arthroplasty for the treatment of acute complex proximal humeral fractures: Influence of greater tuberosity healing on the functional outcomes. Purpose: To investigate the influence of greater tuberosity healing on the functional outcomes of reverse shoulder arthroplasty (RSA) for the treatment of acute complex proximal humeral fractures (PHFs), and to investigate the influence of patient- and surgery-related factors in the healing of the greater tuberosity. Methods: Retrospective study including 41 consecutive PHFs treated using RSA with minimum 2-year follow-up. In all the cases, tuberosities were reattached with a standardized technique. All the patients were assessed at the last follow-up with constant score.
Body mass index, surgery delay, comorbidities, polyethylene size, glenosphere size, overhanging of glenosphere, and scapular notch were recorded, and their influence in final constant score and in greater tuberosity healing was analyzed. Results: Mean final constant score was of 60.7 points (standard deviation (SD) = 9.9). Greater tuberosity healed in proper position in 68% of the cases. There were no significant differences in constant score between patients with (mean = 61; SD = 9.5) and without (mean = 61; SD = 11.3) the healing of greater tuberosity. All patients scored above 90° in forward elevation. Scapular notch was reported in 14.6% of the cases. Age significantly affected the constant score ( p = 0.008). Comorbidities significantly interfered with greater tuberosity healing ( p = 0.03). There was one reoperation after dislocation. Conclusion: In spite of expecting good functional outcome with low complication rate after RSA for acute PHFs, the influence of greater tuberosity healing on shoulder function could not be demonstrated. The presence of comorbidities, but not age or gender, negatively influenced the healing of the greater tuberosity. abstract_id: PUBMED:29453500 Isolated greater tuberosity fractures of the proximal humerus: anatomy, injury patterns, multimodality imaging, and approach to management. The greater tuberosity is an important anatomic structure and its integrity is important for shoulder abduction and external rotation. Isolated fractures of the greater tuberosity are often subtle and may not be detected on initial radiographs. Clinically, these patients display symptoms which mimic a full thickness rotator cuff tear. It is important to differentiate these two entities, as their treatment is different (typically nonsurgical management for minimally displaced fractures versus rotator cuff repair for acute full thickness rotator cuff tears). When greater tuberosity fractures are significantly displaced and allowed to heal without anatomic reduction, they can lead to impingement. This article will review greater tuberosity anatomy and function, as well as the clinical presentation and multimodality imaging findings of greater tuberosity fractures. Imaging optimization, pitfalls, and clinical management of these fractures will also be discussed. abstract_id: PUBMED:32933866 Outcomes of surgical fixation of greater tuberosity fractures: A systematic review. Background: Isolated greater tuberosity fractures account for up to a fifth of all proximal humeral fractures. There have been several retrospective cohort studies and case series reporting outcomes after treatment of this pathology. This study aims to report on the clinical outcomes of surgically treated isolated greater tuberosity fractures, as well as diagnostic workup and complications associated with fracture fixation. Methods: A systematic review was performed under PRISMA guidelines to identify studies that reported the results or clinical outcomes of isolated greater tuberosity fracture. The searches were performed using MEDLINE through PubMed, the Elsevier Embase database, and the Cochrane Database of Systematic Reviews. Results: Sixteen studies met inclusion criteria comprising 345 patients and 345 shoulders. The mean age was 52.9 years and mean follow-up was 3.4 months. The mean postoperative American Shoulder and Elbow Surgeon Score, the most frequently utilized patient reported outcome measure across studies, was 90.1% of ideal maximum. 
All studies used standard shoulder radiographs in their initial workup and most commonly referred to a minimum of 5 mm displacement as an indication for surgery. Fifty-five percent of patients were treated using open fixation and 35.9% with arthroscopic fixation. Ninety-three percent of patients were able to return to work. A total of fifty-two (15.1%) complications were reported in the included studies. Conclusions: The current literature describes overall satisfactory functional outcomes and minimal occupational morbidity following either open or arthroscopic fixation of isolated greater tuberosity fractures despite a notable rate of complications. Level Of Evidence: IV, systematic review. abstract_id: PUBMED:35611289 Combined Greater and Lesser Tuberosity Fracture of Proximal Humerus with Intact Humeral Head and Shaft - A Case Report. Introduction: Combined fracture of greater and lesser tuberosity is an extremely rare injury. Such fracture pattern has not been described in the Neer's, AO Muller, and Jakob's classification system. Case Presentation: We report a unique fracture pattern consisting of combined greater and lesser tuberosity of the proximal humerus (PH) with intact humeral head and shaft in a 32-year-old female. Routine radiographs and computed tomography scan with three-dimensional reconstruction were advised for better understanding of the fracture pattern, later managed by open reduction and fixation with PH locking plate. The patient has excellent clinical, radiological, and functional outcome at the end of 2 years follow-up. Conclusion: This is one among the few case reports highlighting this unique variant of PH fracture, not much has been described in the literature about this injury. Anatomic reconstruction of the fractured tuberosities combined with stable construct by PH locking plate facilitates optimum bone healing and aggressive shoulder rehabilitation. abstract_id: PUBMED:30805412 Quantitative 3-dimensional Computerized Tomography Modeling of Isolated Greater Tuberosity Fractures with and without Shoulder Dislocation. Background: The aim of this study was to assess differences in fracture morphology and displacement between isolated greater tuberosity (GT) fractures (i.e. fractures of the greater tuberosity without other fractures of the proximal humerus) with and without shoulder dislocation utilizing quantitative 3-dimensional CT imaging. Methods: Thirty-four CT-scans of isolated greater tuberosity fractures were measured with 3-dimensional modeling. Twenty patients (59%) had concomitant dislocation of the shoulder that was reduced prior to CT-scanning. We measured: degree and direction of GT displacement, size of the main fracture fragment, the number of fracture fragments, and overlap of the GT fracture fragment over the intact proximal humerus. Results: We found: (1) more overlap (over the intact humerus) in patients without concomitant shoulder dislocation as compared to those with shoulder dislocation (P=0.03), (2) there was a trend towards greater magnitude of displacement between those without (mean 19 mm) and those with (mean 11 mm) a concomitant shoulder dislocation (P=0.07), and (3) fractures were comparable in direction of displacement (P=0.50) and size of the fracture fragment (P=0.53). Conclusion: We found substantial variation in degree and direction of displacement of GT fracture fragments. Variation in degree of overlap and displacement is partially explained by concomitant shoulder dislocation.
abstract_id: PUBMED:36819442 Proximal Humerus Fracture/Dislocation: Look for the Greater Tuberosity. The shoulder is the commonest major joint involved in dislocations. These are often associated with fractures of the surgical neck and/or of the greater tuberosity of the proximal humerus. A good functional recovery is associated with a successful union of the tuberosity fragment, as this carries the insertion of the superior and posterior rotator cuff tendons. A 29-year-old male patient presented to our Emergency Department (ED) after a fall off his motorbike, resulting in a left shoulder fracture dislocation and an axillary nerve injury. His shoulder was reduced under sedation in the ED, with post-reduction radiographs demonstrating a seemingly satisfactory fracture position. Later on, a computerized tomography (CT) scan was arranged which actually confirmed significant displacement of his greater tuberosity, which was not picked up on initial post-reduction radiographs. As a result of identifying the displacement, surgical fixation with a locking plate and suture construct was undertaken. This case demonstrates the ease with which greater tuberosity fractures can mistakenly be presumed as reduced on post-reduction films, whilst in fact they can be significantly displaced. This risk is especially great when only one radiographic view is obtained. The sign of the 'disappearing tuberosity' on a plain radiograph should prompt the clinician to seek further imaging by way of CT, to uncover the true position of the greater tuberosity. abstract_id: PUBMED:27090009 Reverse total shoulder arthroplasty for acute head-splitting, 3- and 4-part fractures of the proximal humerus in the elderly. Background: Anatomic reduction and stable internal fixation of complex proximal humeral fractures in the elderly is challenging. Secondary displacement, screw perforation, and humeral head necrosis are common complications. The outcome of hemiarthroplasty is unpredictable and strongly dependent on the uncertain healing of the greater tuberosity. This multicenter study retrospectively analyzes the midterm results of primary reverse total shoulder arthroplasty for the treatment of acute, complex fractures of the humerus in an elderly population. Methods: Fifty-two shoulders in 51 patients with a mean age of 77 years treated with reverse total shoulder arthroplasty for an acute, complex fracture of the proximal humerus were clinically and radiographically analyzed after a mean follow-up period of 35 months (range, 12-90 months). Results: There were no intraoperative complications. Revision surgery was performed in 4 shoulders. At final follow-up, the absolute and relative Constant scores averaged 62 points (range, 21-83 points) and 86% (range, 30%-100%), respectively, with a mean Subjective Shoulder Value of 83% (range, 30%-100%). Of the patients, 92% rated the treatment outcome as excellent or good. Patients with a resected or secondarily displaced greater tuberosity had an inferior clinical outcome to those with a healed greater tuberosity. Conclusion: The midterm clinical results are predictably good, with low complication rates and a rapid postoperative recovery of painfree everyday function. If secondary displacement of the greater tuberosity occurs, revision surgery may warrant consideration in view of potential improvement of ultimate outcome. abstract_id: PUBMED:27660656 Fractures of the greater tuberosity of the humerus: a study of associated rotator cuff injury and atrophy. 
Background: This is a retrospective prognostic study on soft tissue injury following isolated greater tuberosity (GT) fractures of the proximal humerus with respect to the relationship between rotator cuff tears and GT displacement. Methods: Forty-three patients with isolated GT fractures were recruited and evaluated with a standardized interview and physical examination, quality of life and shoulder function questionnaires (Western Ontario Rotator Cuff Index, SF-12 Version 2, Constant, Quick-Disabilities of the Arm, Shoulder and Hand, Visual Analogue Scale), standard shoulder radiographs and an ultrasound. The main outcome measurements were: incidence of rotator cuff tears and atrophy, biceps pathology and sub-acromial impingement; superior displacement of the GT fragment; and questionnaire scores. Results: Mean age was 57 years (31 years to 90 years) with a follow-up of 2.4 years (0.8 years to 6.8 years). In total, 16% had a full rotator cuff tear and 57% showed subacromial impingement on ultrasound. Full rotator cuff tears and supraspinatus fatty atrophy significantly correlated with decreased function and abduction strength. Significant atrophy (>50%) of the supraspinatus and infraspinatus, without a rotator cuff tear, was correlated with the worst function in the presence of a residual displacement of the greater tuberosity at the last follow-up (7 mm). Conclusions: Residual displacement, full rotator cuff tear and muscle atrophy are associated with the worst outcomes. Soft tissue imaging could benefit patients with an unfavourable outcome after a GT fracture to treat soft tissue injury. Answer: Yes, the direction of displacement in isolated fractures of the greater tuberosity in the proximal humerus does influence functional outcome. A study that assessed the outcome of patients treated conservatively for isolated greater tuberosity fractures found that postero-superior displacement leads to poorer outcomes than fractures that are undisplaced or displaced in other directions when treated conservatively. Specifically, patients with postero-superior displacement had a lower mean Constant Murley Score and a higher mean DASH Score, indicating worse function and higher disability, respectively, compared to those with no displacement or antero-superior and antero-inferior displacement (PUBMED:24458167).
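As an illustrative aside (not drawn from the cited studies): the comparison behind that answer is a simple grouping of outcome scores by direction of displacement. The Python sketch below groups hypothetical Constant Murley Scores by displacement direction and prints each group's mean; the values are invented and do not reproduce the data of PUBMED:24458167.

from collections import defaultdict
from statistics import mean

# (displacement direction, Constant Murley Score) -- hypothetical values
scores = [
    ("none", 84), ("none", 80), ("none", 82),
    ("postero-superior", 60), ("postero-superior", 62), ("postero-superior", 61),
    ("antero-superior", 75), ("antero-superior", 77),
    ("antero-inferior", 82), ("antero-inferior", 84),
]

by_direction = defaultdict(list)
for direction, score in scores:
    by_direction[direction].append(score)

for direction, values in sorted(by_direction.items()):
    print(f"{direction}: mean Constant Murley Score = {mean(values):.1f}")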
Instruction: Does halothane interfere with the release, action, or stability of endothelium-derived relaxing factor/nitric oxide? Abstracts: abstract_id: PUBMED:8311324 Does halothane interfere with the release, action, or stability of endothelium-derived relaxing factor/nitric oxide? Background: Halothane attenuates endothelium-dependent relaxation. To differentiate halothane's effect on endothelium-derived relaxing factor/nitric oxide (EDRF/NO) production from its effect on nitric oxide action on vascular smooth muscle, halothane's effect on endothelium-dependent relaxation was studied in a bioassay system. Methods: Indomethacin-treated, bovine aortic endothelial cells (BAEC) grown on microcarrier beads, continuously perfused by oxygenated and carbonated (95% O2, 5% CO2) Krebs-Ringer solution served as nitric oxide donors while an isolated denuded rabbit aortic ring directly superfused by the effluent of the BAEC and precontracted with phenylephrine was used to detect EDRF/NO release. The effect of basal and bradykinin-stimulated EDRF release on the tension of the vascular ring was measured. In the bioassay, it was possible to treat either the vascular denuded ring alone or the vascular ring plus the BAEC with halothane by adding it to the perfusate either upstream or downstream from the BAEC. Halothane (final concentration 2.2%) was added to the perfusate at these two positions, and its effect on the relaxation induced by EDRF/NO was determined. In some experiments, the preparations were treated with hemoglobin or L-monomethyl-L-arginine to attenuate the relaxation induced by the EDRF/NO pathway. Finally, halothane's effect on vascular relaxation induced by an increasing concentration of sodium nitroprusside was measured. Halothane's concentration in the perfusate was determined by gas chromatography using electron capture for anesthetic measurement. Results: EDRF/NO released by the BAEC was responsible for the relaxation of the vascular ring. Halothane added to the perfusate potentiated the tension induced by phenylephrine (7.1 +/- 1.89%) and attenuated the relaxation induced by the release of EDRF/NO. This effect was reversible after discontinuation of halothane. Halothane's effect was present even when the anesthetic was added to the perfusate downstream to the perfusion of the endothelial cells. Halothane had no effect on the vascular relaxation induced by sodium nitroprusside. Conclusions: The authors' data demonstrate that halothane does not interfere with endothelial cell release of EDRF/NO and its smooth muscle cell relaxation but seems to modify either EDRF/NO half-life or its activated redox form. abstract_id: PUBMED:7544960 Inhalation anesthetics inhibit the release of endothelium-derived hyperpolarizing factor in the rabbit carotid artery. Background: Inhalation anesthetics may interfere with the synthesis or action of endothelium-derived vasoactive factors. We investigated the effects of desflurane, enflurane, halothane, isoflurane, and sevoflurane on the release of nitric oxide and endothelium-derived hyperpolarizing factor (EDHF) in the isolated endothelium-intact carotid artery of the rabbit. Methods: Isolated segments of the carotid artery were suspended in Krebs-Henseleit solution (37 degrees C) and preconstricted with phenylephrine (1 microM). 
Relaxations caused by acetylcholine (ACh) (0.03-10 microM) or sodium nitroprusside (0.01-10 microM) were compared in the presence or absence of the nitric oxide synthase inhibitor NG-nitro-L-arginine (0.1 mM) in segments exposed to desflurane (8%), enflurane (2-4%), halothane (2-3.5%), isoflurane (2-4%), or sevoflurane (2%) as well as in NG-nitro-L- arginine-treated segments exposed to enflurane (2%) in combination with the KCa(+)-channel blocker tetrabutylammonium (0.3 mM) or the cytochrome P450 inhibitor clotrimazole (3 microM). Results: Desflurane, enflurane, and sevoflurane selectively inhibited the ACh-induced release of EDHF. Halothane and isoflurane also weakly affected the nitric oxide-mediated relaxant response to ACh. The inhibitory effect of these two anesthetics on EDHF release was concentration-dependent. Relaxations induced by sodium nitroprusside were not inhibited by any of the anesthetics tested. Three structurally unrelated cytochrome P450 inhibitors clotrimazole (0.1 mM), metyrapone (1 mM), and SKF525a (proadifen, 0.1 mM) abolished the EDHF-mediated relaxation elicited by ACh. The pharmacologic profile of the inhibitory effect of enflurane on the release of EDHF closely resembled that of clotrimazole but not that of tetrabutylammonium. Moreover, all anesthetics inhibited the cytochrome P450-catalyzed O-dealkylation of 7-ethoxycoumarin by rabbit liver microsomes in a concentration-dependent manner. Conclusions: Inhalation anesthetics significantly attenuate the EDHF-mediated relaxant response to ACh in the rabbit carotid artery. This effect appears to be attributable to inhibition of the cytochrome P450-dependent synthesis of EDHF by the endothelium. abstract_id: PUBMED:2393128 Endothelium-derived relaxing factor is not responsible for inhibition of hypoxic pulmonary vasoconstriction by inhalational anesthetics. Inhalational anesthetics inhibit hypoxic pulmonary vasoconstriction (HPV). One mechanism suggested for this action is stimulation of release of endothelium-derived relaxing factor. The present study has tested this hypothesis. These studies were performed in 66 ventilated and perfused isolated rat lungs. There were three study protocols. Study 1 examined the effect of HPV of the inhibition of soluble guanylate cyclase by methylene blue (MB). In the presence or absence of MB, the lungs constricted to hypoxia with pulmonary artery pressure increases of 8.6 +/- 0.2 cmH2O and 11.5 +/- 0.4 cmH2O, respectively, and halothane, enflurane, and isoflurane caused a reversible 50% decrease in the pulmonary pressor response, but acetylcholine (ACh) was vasodilatory in the saline group and vasoconstrictor in the MB group. In Study II a dose-response curve was established for the potent stimulator (Sin 1) of the enzyme guanylate cyclase. In the presence of MB the dose-response curve for Sin 1 was shifted to the right with an increase in the ED50 for Sin 1 from 44 microM for the control to 85 microM for the MB group. In Study III, baseline pulmonary artery pressure was increased with U46619, and the hypoxic pressor response was increased (28.9 +/- 2.5 cmH2O), but halothane again caused a 50% decrease (11.0 +/- 1.8 cmH2O) in the response to hypoxia. In summary, when soluble guanylate cyclase activity is inhibited by MB, the inhibition of hypoxic pulmonary vasoconstriction by halothane, isoflurane, or enflurane was unaltered, and release of endothelium-derived relaxing factor (EDRF) is therefore not an essential mechanism underlying this action. 
abstract_id: PUBMED:3257364 Attenuation of endothelium-mediated vasodilation by halothane. To determine whether halothane alters endothelium-mediated vasodilation of vascular smooth muscle, isolated ring preparations of rabbit aorta and canine femoral and carotid arteries were suspended for isometric tension recordings in Krebs-Ringer bicarbonate solution at 37 degrees C. Acetylcholine and bradykinin have been shown to relax these norepinephrine contracted arteries via an endothelium-dependent process. In this study, these relaxations were reversibly and significantly attenuated by 2% halothane. However, halothane did not affect relaxations caused by nitroglycerin, which, in these vessels, acts by an endothelium independent mechanism. These results suggest that halothane is not interfering with cyclic guanylate-monophosphate mediated relaxation of vascular smooth muscle, but may interfere with the synthesis, release, or transport of the endothelium-derived relaxing factor. In addition, during contractions evoked by norepinephrine, halothane caused significant decreases in tension in both the canine carotid and rabbit aortic preparations, but increased tension in the femoral artery rings. These effects were not altered by mechanical removal of the endothelium. These results suggest a direct action of halothane on the vascular smooth muscle, which can result in either an increase or decrease in tension, depending on the specific vessel. In addition to its direct vascular effect, this study suggests a new action of halothane; it interferes with endothelium-derived relaxing factor-mediated relaxation of vascular smooth muscle. This action may contribute in part to the vascular alterations seen clinically during administration of halothane. abstract_id: PUBMED:9241340 Lidocaine-induced hemodynamic effects are enhanced by the inhibition of endothelium-derived relaxing factor in dogs. Background: Lidocaine has been shown to have direct vasoconstrictive effects at low concentrations. Since lidocaine inhibits endothelium-dependent vasodilation in vitro, the vasoconstrictor effect of lidocaine may be due to inhibition of endothelium-derived relaxing factor(EDRF/NO). Therefore, the current study was designed to determine the effects of NG-nitro-L-arginine (L-NNA), a potent inhibitor of nitric oxide synthase, on systemic and pulmonary hemodynamics during lidocaine infusion. Methods: Systemic and pulmonary hemodynamic effects of lidocaine infusion, 1 mg.kg-1.min-1, for 10 min were measured in dogs anesthetized with 1% halothane in oxygen. Dogs were studied twice with an interval of 1 week in a cross-over study, and were assigned to one of two groups that received saline or L-NNA intravenously in group 1 (n = 8), or L-NNA or L-NNA + L-arginine which reverses the nitric oxide synthesis inhibitor effect of L-NNA, intravenously in group 2 (n = 8) prior to lidocaine infusion. The free serum concentration of and protein-binding ratio for lidocaine were measured. Results: With saline pretreatment in group 1, lidocaine infusion significantly decreased cardiac index (CI) and significantly increased mean pulmonary arterial pressure (MPAP), pulmonary arterial occlusion pressure (PAOP), systemic vascular resistance (SVR) and pulmonary vascular resistance (PVR). L-NNA pre-treatment in group 1 significantly enhanced the changes in CI, MPAP, PAOP, SVR and PVR during lidocaine infusion. In group 2, L-arginine infusion partially offset the additive effects of L-NNA to the depressive effects of lidocaine. 
There were no significant differences in free serum concentration of or protein-binding ratio for lidocaine among the groups. Conclusion: In contrast to the in vitro study, the vasoconstrictor effect of lidocaine is enhanced when the capacity for compensatory vasodilation, including the EDRF/NO pathway, is exhausted in halothane-anesthetized dogs. abstract_id: PUBMED:7832295 Halothane impairs the hemodynamic influence of endothelium-derived nitric oxide. Background: The endogenous vasodilator endothelium-derived nitric oxide (EDNO) contributes to the regulation of vascular tone and organ perfusion. It has been suggested that some volatile anesthetics may diminish the influence of EDNO and thereby decrease regional blood flow. Methods: Radioactive microspheres were used to determine regional hemodynamics in rats. The authors tested the hypothesis that halothane inhibits EDNO and, therefore, should diminish the response to nitric oxide synthesis inhibition by NW-nitro-L-arginine methyl ester (L-NAME) compared with either conscious or barbiturate-anesthetized rats. Results: NW-nitro-L-arginine methyl ester decreased blood flow to the brain by 23% (P < 0.005) in conscious rats to a level similar to that seen with either anesthetic agent. In both conscious and barbiturate-anesthetized rats, L-NAME increased blood pressure (BP) by 24 +/- 2 (P < 0.001) and 20 +/- 1 (P < 0.001) mmHg and total peripheral resistance (TPR) by 132% (P < 0.001) and 105% (P < 0.001), respectively. In contrast, during halothane anesthesia, both the pressor response (only 7 +/- 1 mmHg) and the increase in TPR (only 22%) were greatly diminished (P < 0.001). NW-nitro-L-arginine methyl ester decreased cardiac output (CO) by 47% (P < 0.001) and heart rate (HR) by 28% (P < 0.001) in conscious rats. In barbiturate-anesthetized rats, L-NAME decreased CO by 38% (P < 0.005) and HR by 13% (P < 0.001). In halothane-anesthetized rats, L-NAME changed neither CO nor HR. Thus halothane anesthesia largely eliminated the systemic response to EDNO synthesis inhibition. In conscious rats, L-NAME decreased blood flow to the heart (30%) and kidneys (47%). In barbiturate-anesthetized rats, L-NAME did not alter blood flow to the heart but decreased renal blood flow by 35% (P < 0.005). In halothane-anesthetized rats, L-NAME did not alter blood flow to either the heart or the kidneys. Overall, halothane blunted or blocked the systemic and regional hemodynamic responses to EDNO synthesis inhibition seen in conscious and barbiturate-anesthetized rats. Conclusions: Halothane anesthesia greatly diminished or eliminated all systemic and regional hemodynamic responses to L-NAME. These data indicate that halothane anesthesia inhibits EDNO-mediated regulation of systemic and organ hemodynamics. abstract_id: PUBMED:8624010 Halothane and isoflurane inhibit vasodilation due to constitutive but not inducible nitric oxide synthase. Implications for the site of anesthetic inhibition of the nitric oxide/guanylyl cyclase signaling pathway. Background: Inhalational anesthetics inhibit the nitric oxide-guanylyl cyclase signaling pathway, but the site of this inhibition is not yet clear.
This study was designed to test the hypothesis that receptor activation or downstream signaling events leading to nitric oxide synthase activation are important sites for this inhibition by comparing the effect of anesthetics on vasodilation caused by the calcium-dependent constitutive endothelial nitric oxide synthase versus the calcium-independent inducible nitric oxide synthase. Methods: Endothelium-intact or -denuded rat thoracic aorta rings preincubated with or without lipopolysaccharide were mounted for isometric tension measurement, constricted with phenylephrine, then relaxed with methacholine in the presence or absence of halothane (1-3%) or isoflurane (1-3%). The cyclic guanosine 3,5-monophosphate content in the endothelium-denuded rings preincubated with or without lipopolysaccharide in the presence or absence of 3% halothane or 3% isoflurane was quantified by radioimmunoassay. The activity of partially purified inducible nitric oxide synthase from activated mouse macrophage was assayed in the presence or absence of halothane (1-4%) or isoflurane (1-5%) by the conversion of 3H-L-arginine to 3H-L-citrulline. Results: Halothane and isoflurane inhibited methacholine-stimulated, nitric oxide-mediated vasorelaxation in endothelium-intact aortic rings. Neither halothane nor isoflurane affected the vasorelaxation caused by basal endothelial nitric oxide synthase or inducible nitric oxide synthase activity. Neither anesthetic altered the cyclic guanosine 3,5-monophosphate increase caused by inducible nitric oxide synthase in the lipopolysaccharide-treated rings. Conclusions: The results demonstrated that halothane and isoflurane inhibit only receptor/calcium-activated nitric oxide synthase action and that direct inhibition of nitric oxide synthase, soluble guanylyl cyclase, or an interaction with nitric oxide are not responsible for anesthetic inhibition of endothelium-dependent vasorelaxation. abstract_id: PUBMED:1632532 Halothane and isoflurane inhibit endothelium-dependent relaxation elicited by acetylcholine. The purpose of this study was to determine whether volatile anesthetics modify the release of endothelium-derived relaxing factor. We examined the effects of halothane and isoflurane on endothelium-dependent relaxation and 3',5'-cyclic guanosine monophosphate formation elicited by acetylcholine and ionophore A23187 in isolated rat aorta. Halothane and isoflurane (1%-2%) significantly attenuated acetylcholine-induced relaxation of the phenylephrine-contracted aorta but had no significant effect on relaxation induced by A23187, nitroprusside, and nitroglycerin. Basal and A23187 (10(-7) M)-stimulated levels of 3',5'-cyclic guanosine monophosphate were slightly lowered by halothane and isoflurane (2%). In contrast, the increase of 3',5'-cyclic guanosine monophosphate elicited by acetylcholine (10(-5) M) was significantly attenuated by halothane (2%) and abolished by isoflurane (2%). These findings indicate that halothane and isoflurane strongly inhibit the release of endothelium-derived relaxing factor elicited by acetylcholine. abstract_id: PUBMED:7574063 Halothane and isoflurane inhibit endothelium-derived relaxing factor-dependent cyclic guanosine monophosphate accumulation in endothelial cell-vascular smooth muscle co-cultures independent of an effect on guanylyl cyclase activation. Background: Interaction of inhalational anesthetics with the nitric oxide signaling pathway and the mechanism of such effects are controversial. 
The aim of this study was to clarify the sites and mechanism of inhalational anesthetic interaction with the vascular nitric oxide and guanylyl cyclase signaling pathway. Methods: To specifically study the mechanism of anesthetic interaction with the nitric oxide-guanylyl cyclase pathway, cultured vascular smooth muscle and endothelial cell-vascular smooth muscle (EC-VSM) co-culture models were chosen. Monolayer cultures of VSM with or without cultured endothelial cells grown on microcarrier beads were preequilibrated with anesthetic and stimulated with agonists. The effect of inhalational anesthetics on cyclic guanosine monophosphate (GMP) content of unstimulated VSM and of VSM in which soluble guanylyl cyclase had been activated by the endothelium-independent nitrovasodilators, sodium nitroprusside, nitroglycerin, or nitric oxide was determined. Experiments were also performed to assess the effect of inhalational anesthetics on unstimulated endothelial cell-vascular smooth muscle co-cultures and on co-cultures in which nitric oxide synthase and subsequent cyclic GMP production had been activated by the receptor-mediated agonists bradykinin and adenosine triphosphate and by the non-receptor-mediated calcium ionophore A23187. Results: Increasing concentrations of halothane and isoflurane from 0.5 to 5% had no effect on basal cyclic GMP concentrations in cultured VSM alone or in endothelial cell-vascular smooth muscle co-cultures, and had no effect on sodium nitroprusside, nitroglycerin, or nitric oxide stimulated cyclic GMP accumulation in cultured VSM. In agonist-stimulated co-cultures, however, halothane and isoflurane significantly (P &lt; 0.05) inhibited increases in cyclic GMP concentration in response to both receptor- and non-receptor-mediated nitric oxide synthase activating agents. Conclusions: Inhalational anesthetics do not stimulate or inhibit basal cyclic GMP production in co-cultures or VSM, suggesting that inhalational anesthetics do not activate soluble or particulate guanylyl cyclase and do not activate nitric oxide synthase. Inhalational anesthetics do not inhibit nitrovasodilator-induced cyclic GMP formation, suggesting a lack of interference with soluble guanylyl cyclase activation. Inhalational anesthetics inhibit both agonist and calcium ionophore-stimulated nitric oxide-dependent cyclic GMP accumulation in endothelial cell-vascular smooth muscle co-cultures. Consistent with previous vascular ring studies, anesthetics appear to inhibit nitric oxide-guanylyl cyclase signaling distal to receptor activation in the endothelial cell and proximal to nitric oxide activation of guanylyl cyclase. abstract_id: PUBMED:7504535 Halothane vasodilation and nitric oxide in rat pial vessels. We investigated whether halothane (HAL), administered via cerebral cortical suffusion at concentrations of 1, 2, and 3%, could induce cerebral microvascular dilatation in vivo and whether the vasodilatory response was dependent on nitric oxide (NO) synthesis. The studies were performed using N2O/fentanyl-anesthetized, paralyzed, and mechanically ventilated rats. A closed cranial window and an intravital microscopy technique were employed. This system permitted the controlled delivery of various vasoactive agents in an artificial cerebrospinal fluid (aCSF) solution and the measurement of diameters of pial arterioles and venules. 
Each experiment included evaluations of (a) the direct smooth muscle relaxing action of NO, using sodium nitroprusside (SNP), and (b) the capacity for generation and release of endogenous NO, using adenosine diphosphate (ADP). Following confirmation of an intact NO-relaxing and generating capacity, HAL (in aCSF) was suffused at increasing concentrations. Nitric oxide synthase (NOS) inhibition was established with topical nitro-L-arginine (L-NA) or its methyl ester (L-NAME) and the above sequence repeated. The results for rats treated with L-NA (n = 5) or L-NAME (n = 5) were analyzed separately and as a combined group. No significant differences in vascular responses were observed when comparing the two groups. Initially, both SNP and ADP produced significant diameter increases (all groupings) in arterioles (14-28% change) and venules (14-25% change). For all groups, suffusions of 1 to 3% HAL produced arteriolar dilation, ranging from a 10 to 25% increase over baseline diameter. A statistically significant dose dependency was only observed with the combined data.(ABSTRACT TRUNCATED AT 250 WORDS) Answer: Halothane does not interfere with the release of endothelium-derived relaxing factor/nitric oxide (EDRF/NO) from endothelial cells, nor does it affect the relaxation of smooth muscle cells by EDRF/NO. However, it seems to modify either the half-life of EDRF/NO or its activated redox form, as evidenced by the fact that halothane potentiated the tension induced by phenylephrine and attenuated the relaxation induced by the release of EDRF/NO. This effect was reversible after discontinuation of halothane and was present even when halothane was added downstream to the perfusion of the endothelial cells. Halothane had no effect on vascular relaxation induced by sodium nitroprusside, which suggests that its effect is specific to EDRF/NO and not to other vasodilators (PUBMED:8311324). Additionally, halothane has been shown to attenuate endothelium-mediated vasodilation, suggesting that it may interfere with the synthesis, release, or transport of the endothelium-derived relaxing factor (PUBMED:3257364). Halothane and isoflurane have also been reported to inhibit endothelium-dependent relaxation elicited by acetylcholine, indicating that these anesthetics strongly inhibit the release of EDRF/NO elicited by acetylcholine (PUBMED:1632532). Furthermore, halothane has been found to impair the hemodynamic influence of endothelium-derived nitric oxide (EDNO), as it greatly diminished or eliminated all systemic and regional hemodynamic responses to nitric oxide synthesis inhibition by NW-nitro-L-arginine methyl ester (L-NAME) in rats (PUBMED:7832295). In summary, while halothane does not appear to interfere with the release of EDRF/NO from endothelial cells or its action on smooth muscle cells, it does seem to affect the stability or half-life of EDRF/NO, as well as potentially interfering with the synthesis, release, or transport of EDRF/NO, particularly in response to acetylcholine.
Instruction: Validation of an acute ischemic stroke model: does diffusion-weighted imaging lesion volume offer a clinically significant improvement in prediction of outcome? Abstracts: abstract_id: PUBMED:17446421 Validation of an acute ischemic stroke model: does diffusion-weighted imaging lesion volume offer a clinically significant improvement in prediction of outcome? Background And Purpose: Prediction models for ischemic stroke outcome have the potential to contribute prognostic information in the clinical and/or research setting. The importance of diffusion-weighted magnetic resonance imaging (DWI) in the prediction of clinical outcome, however, is unclear. The purpose of this study was to combine acute clinical data and DWI lesion volume for ischemic stroke patients to determine whether DWI improves the prediction of clinical outcome. Methods: Patients (N=382) with baseline DWI data from the Glycine Antagonist In Neuroprotection and citicoline (010 and 018) trials were used to develop the prediction models by multivariable logistic regression. Data from prospectively collected patients (N=266) from the Acute Stroke Accurate Prediction Study were used to externally validate the model equations. The models predicted either full recovery or nursing home-level disability/death, as defined by the National Institutes of Health Stroke Scale, Barthel Index, or modified Rankin Scale. Results: The full-recovery models with DWI lesion volume had areas under the receiver operating characteristic curves (AUCs) of 0.799 to 0.821, and those without DWI lesion volume had AUCs of 0.758 to 0.798. The nursing home-level disability/death models with DWI had AUCs of 0.832 to 0.882, and those without DWI had AUCs of 0.827 to 0.867. All models had mean absolute errors ≤0.4 for calibration. Conclusions: All 12 models had excellent discrimination and calibration, with 8 of 12 meeting prespecified performance criteria (AUC ≥0.8, mean absolute error ≤0.4). Although DWI lesion volume significantly increased model explanatory power, the magnitude of increase was not large enough to be clinically important. abstract_id: PUBMED:23760215 Impact of diffusion-weighted imaging lesion volume on the success of endovascular reperfusion therapy. Background And Purpose: Diffusion-weighted imaging (DWI) lesion volume is associated with poor outcome after thrombolysis, and it is unclear whether endovascular therapies are beneficial for large DWI lesion. Our aim was to assess the impact of pretreatment DWI lesion volume on outcomes after endovascular therapy, with a special emphasis on patients with complete recanalization. Methods: We analyzed data collected between April 2007 and November 2011 in a prospective clinical registry. All acute ischemic stroke patients with complete occlusion of internal carotid artery or middle cerebral artery treated by endovascular therapy were included. DWI lesion volumes were measured by the RAPID software. Favorable outcome was defined by modified Rankin Scale of 0 to 2 at 90 days. Results: A total of 139 acute ischemic stroke patients were included. Median DWI lesion volume was 14 cc (interquartile range, 5-43) after a median onset time to imaging of 110 minutes (interquartile range, 77-178). Higher volume was associated with less favorable outcome (adjusted odds ratio, 0.55; 95% confidence interval, 0.31-0.96). A complete recanalization was achieved in 65 (47%) patients after a median onset time of 238 minutes (interquartile range, 206-285).
After adjustment for volume, complete recanalization was associated with more favorable outcome (adjusted odds ratio, 6.32; 95% confidence interval, 2.90-13.78). After stratification of volume by tertiles, complete recanalization was similarly associated with favorable outcome in the upper 2 tertiles (P<0.005). Conclusions: Our results emphasize the importance of initial DWI volume and recanalization on clinical outcome after endovascular treatment. Large DWI lesions may still benefit from recanalization in selected patients. abstract_id: PUBMED:37332178 A differential detailed diffusion-weighted imaging-ASPECTS for cerebral infarct volume measurement and outcome prediction. Background: Diffusion-weighted imaging-Alberta Stroke Program Early Computed Tomography Score (DWI-ASPECTS) has been used to estimate infarct core volume in acute stroke. However, the same and indiscriminate score deduction for punctate or confluent DWI high-intensity lesion might lead to variation in performance. Aims: To develop and evaluate a differential detailed DWI-ASPECTS method in comparison with the conventional DWI-ASPECTS in core infarct volume measurement and clinical outcome prediction. Methods: We retrospectively recruited patients with acute ischemic stroke (AIS) treated with endovascular treatment between April 2013 and October 2019. In differential detailed DWI-ASPECTS, restricted diffusion lesion that was punctate or less than half of a cortical region (M1-M6) would not lead to subtraction of point. A favorable outcome was modified Rankin Scale score ⩽2 at 90 days after stroke onset. Results: Among 298 AIS patients, mean age was 75 years (interquartile range (IQR) 67-82), and 194 patients (65%) were males. Mean infarct core volume was 11 mL (IQR 3-37). Overall, the score by detailed DWI-ASPECTS was significantly higher than conventional DWI-ASPECTS (8 (7-9) vs. 7 (5-9); P < 0.01). The detailed DWI-ASPECTS resulted in a higher correlation coefficient (r) for core infarct volume estimation than the conventional DWI-ASPECTS (r = 0.832 vs. 0.773; P < 0.01). Upon re-classification of those scored ⩽6 in conventional DWI-ASPECTS (n = 134) by detailed DWI-ASPECTS, the rate of favorable outcome in patients with detailed DWI-ASPECTS >6 was significantly higher than those with ⩽6 (29 (48%) vs. 14 (19%); P < 0.01). Conclusions: Detailed DWI-ASPECTS appeared to provide a more accurate infarct core volume measurement and clinical outcome correlation than conventional DWI-ASPECTS among AIS patients treated with endovascular therapy. abstract_id: PUBMED:16525124 MR diffusion-weighted imaging and outcome prediction after ischemic stroke. Background: MR diffusion-weighted imaging (DWI) shows acute ischemic lesions early after stroke so it might improve outcome prediction and reduce sample sizes in stroke treatment trials. Previous studies of DWI and outcome produced conflicting results. Objective: To determine whether DWI lesion characteristics independently predict outcome in a broad range of patients with acute stroke. Methods: The authors recruited hospital-admitted patients with all severities of suspected stroke, assessed stroke severity on the NIH Stroke Scale (NIHSS), performed early brain DWI, and assessed outcome at 3 months (modified Rankin Scale). Clinical data and DWI lesion parameters were evaluated in a logistic regression model to identify independent predictors of outcome at 3 months and a previously described "Three-Item Scale" (including DWI) was tested for outcome prediction.
Results: Among 82 patients (mean NIHSS 7.1 [+/-6.3 SD]), the only independent outcome predictors were age and stroke severity. Neither DWI lesion volume nor apparent diffusion coefficient nor the previously described Three-Item Scale predicted outcome independently. Comparison with previous studies suggested that DWI may predict outcome only in patients with more severe cortical ischemic strokes. Conclusions: Across a broad range of stroke severities, diffusion-weighted imaging (DWI) did not predict outcome beyond that of key clinical variables. Thus, DWI is unlikely to reduce sample sizes in acute stroke trials assessing functional outcome, especially where estimated treatment effects are modest. abstract_id: PUBMED:26860664 Cortical vessel sign on susceptibility weighted imaging reveals clinically relevant hypoperfusion in internal carotid artery stenosis. Background And Purpose: Internal carotid artery (ICA) stenosis can lead to cerebral hypoperfusion and is a common cause of stroke. As susceptibility weighted imaging (SWI) has been used for penumbra imaging in acute ischemic stroke, we aimed at analyzing hypoperfusion using SWI in patients with ICA stenosis. Material And Methods: Clinical characteristics, asymmetric cortical vessel sign (more and/or larger, hypointense asymmetric cortical vessels) on SWI, Doppler sonography results and diffusion weighted imaging (DWI) lesion volume were retrospectively analyzed in patients with ICA stenosis. In a subgroup of patients, volume of prolonged time to peak and volume of prolonged time to peak of the residue curve (Tmax) were measured as reference standard. Outcome was assessed as modified Rankin score at discharge. Results: 104 patients were included. Median age was 72 and median degree of stenosis 70% according to NASCET. 13% had a asymmetric cortical vessel sign. These patients had a higher degree of stenosis (80% vs. 70%, p=0.004), were more often symptomatic (93% vs. 61%, p=0.020) and had higher DWI volume (7.3ml vs. 0.2ml, p=0.011). Specificity for the prediction of DWI lesions was 86%. Also, patients with asymmetric cortical vessel sign had lower rates of favorable outcome (mRS=0-2; 57% vs. 82%, p=0.033) and volumes of Tmax≥4s, ≥6s, ≥8s, ≥10s and TTP≥2s, ≥4s, ≥6s were significantly higher. In multivariate analysis, asymmetric cortical vessel sign was an independent negative predictor of favorable outcome (mRS 0-2; OR 0.184; CI [0.039; 0.875] p=0.033). Conclusion: In patients with ICA stenosis, asymmetric cortical vessel sign is a sign of clinically relevant hypoperfusion. abstract_id: PUBMED:37741168 Stroke lesion size - Still a useful biomarker for stroke severity and outcome in times of high-dimensional models. Background: The volumetric size of a brain lesion is a frequently used stroke biomarker. It stands out among most imaging biomarkers for being a one-dimensional variable that is applicable in simple statistical models. In times of machine learning algorithms, the question arises of whether such a simple variable is still useful, or whether high-dimensional models on spatial lesion information are superior. Methods: We included 753 first-ever anterior circulation ischemic stroke patients (age 68.4±15.2 years; NIHSS at 24 h 4.4±5.1; modified Rankin Scale (mRS) at 3-months median[IQR] 1[0.75;3]) and traced lesions on diffusion-weighted MRI. 
In an out-of-sample model validation scheme, we predicted stroke severity as measured by NIHSS 24 h and functional stroke outcome as measured by mRS at 3 months either from spatial lesion features or lesion size. Results: For stroke severity, the best regression model based on lesion size performed significantly above chance (p < 0.0001) with R2 = 0.322, but models with spatial lesion features performed significantly better with R2 = 0.363 (t(752) = 2.889; p = 0.004). For stroke outcome, the best classification model based on lesion size again performed significantly above chance (p < 0.0001) with an accuracy of 62.8%, which was not different from the best model with spatial lesion features (62.6%, p = 0.80). With smaller training data sets of only 150 or 50 patients, the performance of high-dimensional models with spatial lesion features decreased up to the point of being equivalent or even inferior to models trained on lesion size. The combination of lesion size and spatial lesion features in one model did not improve predictions. Conclusions: Lesion size is a decent biomarker for stroke outcome and severity that is slightly inferior to spatial lesion features but is particularly suited in studies with small samples. When low-dimensional models are desired, lesion size provides a viable proxy biomarker for spatial lesion features, whereas high-precision prediction models in personalised prognostic medicine should operate with high-dimensional spatial imaging features in large samples. abstract_id: PUBMED:38277008 Modeling diffusion-weighted imaging lesion expansion between 2 and 24 h after endovascular thrombectomy in acute ischemic stroke. Purpose: Diffusion-weighted imaging (DWI) lesion expansion after endovascular thrombectomy (EVT) is not well characterized. We used serial diffusion-weighted magnetic resonance imaging (MRI) to measure lesion expansion between 2 and 24 h after EVT. Methods: In this single-center observational analysis of patients with acute ischemic stroke due to large vessel occlusion, DWI was performed post-EVT (< 2 h after closure) and 24-h later. DWI lesion expansion was evaluated using multivariate generalized linear mixed modeling with various clinical moderators. Results: We included 151 patients, of which 133 (88%) had DWI lesion expansion, defined as a positive change in lesion volume between 2 and 24 h. In an unadjusted analysis, median baseline DWI lesion volume immediately post-EVT was 15.0 mL (IQR: 6.6-36.8) and median DWI lesion volume 24 h post-EVT was 20.8 mL (IQR: 9.4-66.6), representing a median change of 6.1 mL (IQR: 1.5-17.7), or a 39% increase. There were no significant associations among univariable models of lesion expansion. Adjusted models of DWI lesion expansion demonstrated that relative lesion expansion (defined as final/initial DWI lesion volume) was consistent across eTICI scores (0-2a, 0.52%; 2b, 0.49%; 2c-3, 0.42%, p = 0.69). For every 1 mL increase in lesion volume, there was 2% odds of an increase in 90-day mRS (OR: 1.021, 95%CI [1.009, 1.034], p < 0.001). Conclusion: We observed substantial lesion expansion post-EVT whereby relative lesion expansion was consistent across eTICI categories, and greater absolute lesion expansion was associated with worse clinical outcome. Our findings suggest that alternate endpoints for cerebroprotectant trials may be feasible. abstract_id: PUBMED:11062281 Is early ischemic lesion volume on diffusion-weighted imaging an independent predictor of stroke outcome? A multivariable analysis.
Background And Purpose: The heterogeneity of stroke makes outcome prediction difficult. Neuroimaging parameters may improve the predictive value of clinical measures such as the National Institutes of Health Stroke Scale (NIHSS). We investigated whether the volume of early ischemic brain lesions assessed with diffusion-weighted imaging (DWI) was an independent predictor of functional outcome. Methods: We retrospectively selected patients with nonlacunar ischemic stroke in the anterior circulation from 4 prospective Stanford Stroke Center studies evaluating early MRI. The baseline NIHSS score and ischemic stroke risk factors were assessed. A DWI MRI was performed within 48 hours of symptom onset. Clinical characteristics and early lesion volume on DWI were compared between patients with an independent outcome (Barthel Index score ≥85) and a dependent outcome (Barthel Index score <85) at 1 month. A logistic regression model was performed with factors that were significantly different between the 2 groups in univariate analysis. Results: Sixty-three patients fulfilled the entry criteria. One month after symptom onset, 24 patients had a Barthel Index score <85 and 39 had a Barthel Index score ≥85. In univariate analysis, patients with independent outcome were younger, had lower baseline NIHSS scores, and had smaller lesion volumes on DWI. In a logistic regression model, DWI volume was an independent predictor of outcome, together with age and NIHSS score, after correction for imbalances in the delay between symptom onset and MRI. Conclusions: DWI lesion volume measured within 48 hours of symptom onset is an independent risk factor for functional independence. This finding could have implications for the design of acute stroke trials. abstract_id: PUBMED:31641377 Outcome prediction for patients with anterior circulation acute ischemic stroke following endovascular treatment: A single-center study. Previous studies have identified various factors associated with the outcomes of acute ischemic stroke (AIS) but considered only 1 or 2 predictive factors. The present study aimed to use outcome-related factors derived from biochemical, imaging and clinical data to establish a logistic regression model that can predict the outcome of patients with AIS following endovascular treatment (EVT). The data of 118 patients with anterior circulation AIS (ACAIS) who underwent EVT between October 2014 and August 2018 were retrospectively analyzed. The patients were divided into 2 groups based on the modified Rankin Scale score at three months after surgery, where 0-2 points were considered to indicate a favorable outcome and 3-6 points were considered a poor outcome. Non-conditional logistic stepwise regression was used to identify independent variables that were significantly associated with patient outcome, which were subsequently used to establish a predictive statistical model; a receiver operating characteristic (ROC) curve was used to show the performance of the statistical model and to analyze the specific association between each factor and outcome. Among the 118 patients, 47 (39.83%) exhibited a good and 71 (60.17%) exhibited a poor outcome. Multivariate analysis revealed that the predictive model was statistically significant (χ2=78.92; P<0.001), and that the predictive accuracy of the model was 83.1%, which was higher compared with that obtained using only a single factor.
ROC curve analysis showed that the area under the curve of the statistical model was 0.823, and the analysis of diagnostic thresholds for prognostic factors indicated that age, diffusion-weighted imaging lesion volume, glucose on admission, National Institutes of Health Stroke Scale score on admission and hypersensitive C-reactive protein were valuable predictive factors for the outcome of EVT (P<0.05). In conclusion, a predictive model based on non-conditional logistic stepwise regression analysis was able to predict the outcome of EVT for patients with ACAIS. abstract_id: PUBMED:21415399 Pretreatment diffusion-weighted imaging lesion volume predicts favorable outcome after intravenous thrombolysis with tissue-type plasminogen activator in acute ischemic stroke. Background And Purpose: Stroke magnetic resonance imaging with perfusion and diffusion weighting has shown its potential to select patients likely to benefit from intravenous thrombolysis with tissue-type plasminogen activator (IV-tPA). We aimed to determine the predictors of favorable outcome in magnetic resonance imaging-selected, acute stroke patients treated with IV-tPA. Methods: We analyzed the data of acute ischemic stroke patients from a prospective, multicenter, observational study of magnetic resonance imaging-based IV-tPA treatment initiated ≤6 hours from symptom onset. Neurologic deficit on admission was assessed by the National Institutes of Health Stroke Scale. Clinical outcome was assessed after 90 days according to the modified Rankin Scale. Favorable outcome was defined as a modified Rankin Scale score of 0 to 1. Patients were compared regarding baseline parameters. Multivariate regression analysis was used to identify predictors of favorable outcome. Results: Of 174 patients, 83 (48%) reached a favorable outcome. They were younger (median age, 62 versus 67 years; P=0.001), had a lower National Institutes of Health Stroke Scale score on admission (median, 11 versus 15; P<0.001), and had smaller diffusion-weighted imaging lesions (median, 12.9 versus 20 mL; P=0.001). Perfusion-weighted imaging lesion volumes and onset-to-treatment time were comparable between the groups. Age (P=0.017), National Institutes of Health Stroke Scale score on admission (P<0.001), and diffusion-weighted imaging lesion volume (P=0.047) were identified as independent predictors of favorable outcome. Conclusions: A lower age, lower National Institutes of Health Stroke Scale score on admission, and smaller pretreatment diffusion-weighted imaging lesion volume were found to be associated with a favorable outcome after treatment with IV-tPA. Pretreatment perfusion lesion volume and onset-to-treatment time were not associated with outcome when patients were selected for IV-tPA by magnetic resonance imaging within 6 hours of symptom onset. Answer: The validation of an acute ischemic stroke model incorporating diffusion-weighted imaging (DWI) lesion volume was explored in a study by analyzing patients from the Glycine Antagonist In Neuroprotection and citicoline trials, as well as prospectively collected patients from the Acute Stroke Accurate Prediction Study. The models aimed to predict either full recovery or nursing home-level disability/death using clinical data and DWI lesion volume. The results showed that models including DWI lesion volume had areas under the receiver operating characteristic curves (AUCs) ranging from 0.799 to 0.882, indicating excellent discrimination and calibration.
However, the increase in explanatory power provided by DWI lesion volume was not large enough to be considered clinically important, as all models performed well even without the inclusion of DWI lesion volume (AUCs of 0.758 to 0.867) (PUBMED:17446421). In contrast, another study assessing the impact of pretreatment DWI lesion volume on outcomes after endovascular therapy found that higher DWI lesion volumes were associated with less favorable outcomes. However, complete recanalization was associated with more favorable outcomes, even in patients with large DWI lesions, suggesting that DWI lesion volume is important and that patients with large lesions may still benefit from recanalization (PUBMED:23760215). A different approach, the differential detailed DWI-ASPECTS, was developed to provide a more accurate estimation of infarct core volume and clinical outcome correlation. This method showed a higher correlation with core infarct volume estimation and a significant difference in the rate of favorable outcomes when compared to the conventional DWI-ASPECTS, indicating its potential for better outcome prediction in acute ischemic stroke patients treated with endovascular therapy (PUBMED:37332178). Overall, while DWI lesion volume does contribute to the prediction models and can be associated with outcomes, its addition to acute ischemic stroke models does not always result in a clinically significant improvement in the prediction of outcomes, as the magnitude of its impact varies across studies and may depend on the context in which it is used.
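The abstracts above all describe the same basic workflow: fit a multivariable logistic regression on clinical predictors with or without DWI lesion volume, then judge the added value of the imaging variable by the area under the ROC curve on held-out patients. The sketch below illustrates that comparison on synthetic data only; the cohort size, feature names, and coefficients are assumptions chosen for illustration and are not data or code from the cited studies.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600

# Synthetic predictors: age (years), baseline NIHSS, DWI lesion volume (mL)
age = rng.normal(70, 12, n)
nihss = rng.integers(0, 25, n).astype(float)
dwi_volume = rng.lognormal(mean=2.5, sigma=1.0, size=n)

# Synthetic outcome: risk of poor outcome rises with age, NIHSS and lesion volume
logit = -9.0 + 0.05 * age + 0.20 * nihss + 0.01 * dwi_volume
poor_outcome = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X_clin = np.column_stack([age, nihss])              # clinical-only model
X_full = np.column_stack([age, nihss, dwi_volume])  # clinical + DWI volume

Xc_tr, Xc_te, Xf_tr, Xf_te, y_tr, y_te = train_test_split(
    X_clin, X_full, poor_outcome, test_size=0.3, random_state=0)

for name, (X_tr, X_te) in {"clinical only": (Xc_tr, Xc_te),
                           "clinical + DWI volume": (Xf_tr, Xf_te)}.items():
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")

A head-to-head AUC comparison of this kind is what the validation study reports: both models discriminate well, and the question of clinical significance turns on how small the AUC gain from adding lesion volume is.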
Instruction: Does a PEEK Femoral TKA Implant Preserve Intact Femoral Surface Strains Compared With CoCr? Abstracts: abstract_id: PUBMED:27020431 Does a PEEK Femoral TKA Implant Preserve Intact Femoral Surface Strains Compared With CoCr? A Preliminary Laboratory Study. Background: Both the material and geometry of a total knee arthroplasty (TKA) component influence the induced periprosthetic bone strain field. Strain, a measure of the local relative deformation in a structure, corresponds to the mechanical stimulus that governs bone remodeling and is therefore a useful in vitro biomechanical measure for assessing the response of bone to new implant designs and materials. A polyetheretherketone (PEEK) femoral implant has the potential to promote bone strains closer to that of natural bone as a result of its low elastic modulus compared with cobalt-chromium (CoCr). Questions/purposes: In the present study, we used a Digital Image Correlation (DIC) technique to answer the following question: Does a PEEK TKA femoral component induce a more physiologically normal bone strain distribution than a CoCr component? To achieve this, a DIC test protocol was developed for periprosthetic bone strain assessment using an analog model; the protocol aimed to minimize errors in strain assessment through the selection of appropriate analysis parameters. Methods: Three synthetic bone femurs were used in this experiment. One was implanted with a CoCr femoral component and one with a PEEK femoral component. The third (unimplanted) femur was intact and used as the physiological reference (control) model. All models were subjected to standing loads on the corresponding polyethylene (ultrahigh-molecular-weight polyethylene) tibial component, and speckle image data were acquired for surface strain analysis using DIC in six repeat tests. The strain in 16 regions of interest on the lateral surface of each of the implanted bone models was plotted for comparison with the corresponding strains in the intact case. A Wilcoxon signed-rank test was used to test for difference at the 5% significance level. Results: Surface analog bone strain after CoCr implantation indicated strain shielding (R2 = 0.6178 with slope, β = 0.4314) and was lower than the intact case (p = 0.014). The strain after implantation with the PEEK implant deviated less from the intact case (R2 = 0.7972 with slope β = 0.939) with no difference (p = 0.231). Conclusions: The strain shielding observed with the contemporary CoCr implant, consistent with clinical bone mineral density change data reported by others, may be reduced by using a PEEK implant. Clinical Relevance: This bone analog in vitro study suggests that a PEEK femoral component could transfer more physiologically normal bone strains with a potentially reduced stress shielding effect, which may improve long-term bone preservation. Additional studies including paired cadaver tests are necessary to test the hypothesis further. abstract_id: PUBMED:27426028 Corrosion Damage and Wear Mechanisms in Long-Term Retrieved CoCr Femoral Components for Total Knee Arthroplasty. Background: Metal debris and ion release has raised concerns in joint arthroplasty. The purpose of this study was to characterize the sources of metallic ions and particulate debris released from long-term (in vivo &gt;15 years) total knee arthroplasty femoral components. Methods: A total of 52 CoCr femoral condyles were identified as having been implanted for more than 15 years. 
The femoral components were examined for incidence of 5 types of damage (metal-on-metal wear due to historical polyethylene insert failure, mechanically assisted crevice corrosion at taper interfaces, cement interface corrosion, third-body abrasive wear, and inflammatory cell-induced corrosion [ICIC]). Third-body abrasive wear was evaluated using the Hood method for polyethylene components and a similar method quantifying surface damage of the femoral condyle was used. The total area damaged by ICIC was quantified using digital photogrammetry. Results: Surface damage associated with corrosion and/or CoCr debris release was identified in 51 (98%) CoCr femoral components. Five types of damage were identified: 98% of femoral components exhibited third-body abrasive wear (mostly observed as scratching, n = 51/52), 29% of femoral components exhibited ICIC damage (n = 15/52), 41% exhibited cement interface damage (n = 11/27), 17% exhibited metal-on-metal wear after wear-through of the polyethylene insert (n = 9/52), and 50% of the modular femoral components exhibited mechanically assisted crevice corrosion taper damage (n = 2/4). The total ICIC-damaged area was an average of 0.11 ± 0.12 mm2 (range: 0.01-0.46 mm2). Conclusion: Although implant damage in total knee arthroplasty is typically reported with regard to the polyethylene insert, the results of this study demonstrate that abrasive and corrosive damage occurs on the CoCr femoral condyle in vivo. abstract_id: PUBMED:33189013 Stress and strain distribution in femoral heads for hip resurfacing arthroplasty with different materials: A finite element analysis. Femoral bone loss due to stress and strain shielding is a common problem in hip resurfacing arthroplasty (HRA), which arises from the different stiffness of implant materials and the adjacent bone. Usually, the implants used in HRA are made of cobalt-chromium alloy (CoCr). As a novel concept, implants may also be made of ceramics, whose stiffness exceeds that of the adjacent bone by a multiple. Therefore, this computational study aimed to evaluate whether poly (ether-ether-ketone) (PEEK) or a hybrid material with a PEEK body and ceramic surface made of alumina toughened zirconia (ATZ) might be more suitable implant alternatives for HRA, as they can avoid stress and strain shielding. A reconstructed model of a human femur with an HRA implant was simulated, whereby the material of the HRA was varied between CoCr, ATZ, zirconia toughened alumina (ZTA), PEEK, and a hybrid PEEK-ATZ material. The implant fixation method also varied (cemented or cementless). The simulated models were compared with an intact model to analyze stress and strain distribution in the femoral head and neck. The strain distribution was evaluated at a total of 30,344 (cemented HRA) and 63,531 (uncemented HRA) nodes in the femoral head and neck region and divided into different strain regions (&lt;400 µm/m: atrophy; 400-3000 μm/m: bone preserving and building; 3000-20,000 μm/m: yielding and &gt;20,000 μm/m fracture). In addition, the mechanical stability of the implants was evaluated. When the material of the HRA implant was simulated as metal or ceramic while evaluating the strains, it was seen that around 22-26% of the analyzed nodes in the femoral head and neck were in an atrophic region, 47-51% were in a preserving or building region, and 27-28% were in a yielding region. 
In the case of PEEK implant, less than 0.5% of the analyzed nodes were in an atrophic region, 66-69% in a preserving or building region, and 31-34% in a yielding region. The fixation technique also had a small influence. When a hybrid HRA was simulated, the strains at the analyzed nodes depended on the thickness of the ceramic material. In conclusion, the material of the HRA implant was crucial in terms of stress and strain distribution in the adjacent bone. HRA made of PEEK or a hybrid material leads to decisively reduced stress and strain alteration compared to stiffer materials such as CoCr, ATZ, and ZTA. This confirms the potential for reduction in stress and strain shielding in the femoral head with the use of a hybrid material with a PEEK body for HRA. abstract_id: PUBMED:35462265 A FE study on the effect of interference fit and coefficient of friction on the micromotions and interface gaps of a cementless PEEK femoral component. The use of a more compliant material, such as polyetheretherketone (PEEK), for a cementless femoral component is a potential solution to prevent aseptic loosening caused by peri-prosthetic stress-shielding. Long-term fixation of a cementless femoral component is achieved by a proper primary fixation of the bone-implant interface, which is influenced by the interference fit and frictional properties of the implant surface. This computational study investigates the sensitivity of micromotions and interface gaps of a cementless PEEK femoral component to the interference fit and coefficient of friction. 24 finite element models of the femur and femoral component were created with variations in implant material, interference fit and coefficient of friction. Peak loads of a jogging activity were applied on the models. Micromotions and interface gaps were both sensitive to the interference fit, coefficient of friction and implant material. Besides the implant material, the micromotions and interface gaps of the implant were most sensitive to the interference fit. Compared to the cobalt-chrome (CoCr) femoral component, the PEEK femoral component generated higher micromotions and interface gaps when equal interference fit and friction values were applied. However, increasing the interference fit and friction of the PEEK component resulted in micromotion values comparable with the CoCr component. This result leads to possibilities using cementless PEEK femoral components. abstract_id: PUBMED:38463834 Decreased stress shielding with poly-ether-ether-ketone tibial implant for total knee arthroplasty - A preliminary study using finite element analysis. In total knee arthroplasty (TKA), the mechanical mismatch between cobalt-chromium (CoCr) alloy tibial implant and bone has been implicated in stress shielding and subsequent implant failure and bone resorption. This study investigates the biomechanical advantages of poly-ether-ether-ketone (PEEK) tibial implant, which exhibit properties analogous to those of the surrounding bone. A finite element analysis (FEA) was employed to assess and compare the biomechanical performances of PEEK and CoCr tibial implants in patients with and without osteoporosis. Four FEA models were constructed with PEEK and CoCr alloy implants in normal and osteoporotic tibias. Based on previous literature and our clinical experience, stresses measurements were taken at 16 points on the tibial plateau and 8 points on the two surfaces which were 10 mm and 20 mm apart from the tibial plateau, with specific regions quantified for stress shielding. 
The results showed significant differences in stress distribution between PEEK and CoCr implants. The PEEK implants exhibited higher equivalent stresses on the tibial plateau in all models (normal bone: 0.22 ± 0.07 MPa vs. 0.13 ± 0.06 MPa, p &lt; 0.01; osteoporotic bone: 0.39 ± 0.06 MPa vs. 0.17 ± 0.07 MPa, p &lt; 0.01). In non-osteoporotic models, the mean equivalent stresses on proximal tibial surfaces were similarly elevated for PEEK implants (0.29 ± 0.13 MPa vs. 0.21 ± 0.08 MPa, p = 0.02). The CoCr implants demonstrated more stress shielding across all measured regions (tibial plateau: 23.47% vs. 2.73%; surface 1: 15.93% vs. 1.37%; surface 2: 10.71% vs. 6.56%). These disparities were even more pronounced in osteoporotic models in the CoCr group (tibial plateau: 32.50% vs. 8.36%). The maximum equivalent stresses on the tibial plateau further supported this trend (normal bone: 1.02 MPa vs. 0.52 MPa; osteoporotic bone: 1.43 MPa vs. 0.67 MPa). These data confirm the hypothesis that a PEEK tibial implant can reduce peri-prosthetic stress shielding, suggesting that PEEK implants have the capability to distribute loads more uniformly and maintain a closer approximation to physiological conditions. abstract_id: PUBMED:32722599 The Effects of Cyclic Loading and Motion on the Implant-Cement Interface and Cement Mantle of PEEK and Cobalt-Chromium Femoral Total Knee Arthroplasty Implants: A Preliminary Study. This study investigated the fixation of a cemented PEEK femoral TKA component. PEEK and CoCr implants were subjected to a walking gait cycle for 10 million cycles (MC), 100,000 cycles or 0 cycles (unloaded control). A method was developed to assess the fixation at the cement-implant interface, which exposed the implants to a fluorescent penetrant dye solution. The lateral condyles of the implants were then sectioned and viewed under fluorescence to investigate bonding at the cement-implant interface and cracking of the cement mantle. When tested for 100,000 cycles, debonding of the cement-implant interface occurred in both PEEK (61%) and CoCr (13%) implants. When the duration of testing was extended (10 MC), the percentage debonding was further increased for both materials to 88% and 61% for PEEK and CoCr, respectively. The unloaded PEEK specimens were 79% debonded, which suggests that, when PEEK femoral components are cemented, complete bonding may never occur. Analysis of cracks in the cement mantle showed an absence of full-thickness cracks in the unloaded control group. For the 100,000-cycle samples, on average, 1.3 and 0.7 cracks were observed for PEEK and CoCr specimens, respectively. After 10 MC, these increased to 24 for PEEK and 19 for CoCr. This was a preliminary study with a limited number of samples investigated, but shows that, after 10 MC under a walking gait, substantial debonding was visible for both PEEK and CoCr implants at the cement-implant interface and no significant difference in the number of cement cracks was found between the two materials. abstract_id: PUBMED:28711589 Fixation strength of a polyetheretherketone femoral component in total knee arthroplasty. Introduction: Introducing polyetheretherketone (PEEK) polymer as a material for femoral components in total knee arthroplasty (TKA) could potentially lead to a reduction of the cemented fixation strength. A PEEK implant is more likely to deform under high loads, rendering geometrical locking features less effective. Fixation strength may be enhanced by adding more undercuts or specific surface treatments. 
The aim of this study is to measure the initial fixation strength and investigate the associated failure patterns of three different iterations of PEEK-OPTIMA® implants compared with a Cobalt-Chromium (CoCr) component. Methods: Femoral components were cemented onto trabecular bone analogue foam blocks and preconditioned with 86,400 cycles of compressive loading (2600 N-260 N at 1 Hz). They were then extracted while the force was measured and the initial failure mechanism was recorded. Four groups were compared: CoCr, regular PEEK, PEEK with an enhanced cement-bonding surface and the latter with additional surface primer. Results: The mean pull-off forces for the four groups were 3814 N, 688 N, 2525 N and 2552 N, respectively. The initial failure patterns for groups 1, 3 and 4 were the same; posterior condylar foam fracture and cement-bone debonding. Implants from group 2 failed at the cement-implant interface. Conclusions: This study has shown that a PEEK-OPTIMA® femoral TKA component with enhanced macro- and microtexture is able to replicate the main failure mechanism of a conventional CoCr femoral implant. The fixation strength is lower than for a CoCr implant, but substantially higher than loads occurring under in-vivo conditions. abstract_id: PUBMED:37629896 The Effects of Different Femoral Component Materials on Bone and Implant Response in Total Knee Arthroplasty: A Finite Element Analysis. Due to the high stiffness of the biomaterials used in total knee arthroplasty, stress shielding can lead to decreased periprosthetic bone mineral density and bone resorption. As different materials and 3D-printed highly porous surfaces are available for knee femoral components from the industry nowadays, this study aimed to compare the effects of two same-design cruciate-retaining femoral components, made with CoCr and titanium alloy, respectively, on periprosthetic bone stresses through a finite element model of the implanted knee in order to evaluate the induced stress shielding. Moreover, the effect of the cementless highly porous surface of the titanium implant was analyzed in comparison to the cemented interface of the CoCr implant. The von Mises stresses were analyzed in different periprosthetic regions of interest of the femur with different configurations and knee flexion angles. The titanium component induced higher bone stresses in comparison with the CoCr component, mostly in the medial compartment at higher knee flexion angles; therefore, the CoCr component led to more stress shielding. The model was revealed to be effective in describing the effects of different femoral component materials on bone stress, highlighting how a cementless, highly porous titanium femoral component might lead to less stress shielding in comparison to a cemented CoCr implant with significant clinical relevance and reduced bone resorption after total knee arthroplasty. abstract_id: PUBMED:33578052 Decreased stress shielding with a PEEK femoral total knee prosthesis measured in validated computational models. Due to their high stiffness, metal femoral implants in total knee arthroplasty may cause stress shielding of the peri-prosthetic bone, which can lead to loss of bone stock. Using a polymer (PEEK) femoral implant reduces the stiffness mismatch between implant and bone, and therefore has the potential to decrease strain shielding. The goal of the current study was to evaluate this potential benefit of PEEK femoral components in cadaveric experiments. 
Cadaveric femurs were loaded in a materials testing device, while a 3-D digital image correlation set-up captured strains on the surface of the intact femurs and femurs implanted with PEEK and CoCr components. These experimental results were used to validate specimen-specific finite element models, which subsequently were used to assess the effect of metal and PEEK femoral components on the bone strain energy density. The finite element models showed strain maps that were highly comparable to the experimental measurements. The PEEK implant increased strain energy density, relative to the preoperative bone and compared to CoCr. This was most pronounced in the regions directly under the implant and near load contact sites. These data confirm the hypothesis that a PEEK femoral implant can reduce peri-prosthetic stress shielding. abstract_id: PUBMED:34405258 Preoperative radiographic parameters in the case of using a narrow-version femoral implant in total knee arthroplasty. Background: Recently, total knee arthroplasty (TKA) designs that allow the use of narrow-version femoral implants have been introduced to avoid femoral overhang. The purpose of this study was to investigate the frequency of the use of narrow-version femoral implants and identify the difference in radiographic parameters between using a narrow-version femoral implant and a standard-version femoral implant in TKA. Methods: A retrospective study was conducted on 504 primary TKAs using a TKA system (Anthem or Persona) that allowed narrow-version femoral implants. Anteroposterior (AP) dimension, mediolateral (ML) dimension, and modified aspect percentage ratio (ML/AP dimension) of the distal femur in preoperative radiographs were compared between a standard-version group (n = 275) and a narrow-version group (n = 229). A cut-off value of a modified aspect percentage ratio indicating the need for a narrow-version femoral implant was determined using the receiver operating characteristic (ROC) curve. Results: Mean ML dimension was 80.9 ± 6.1 mm in the standard-version group and 77.3 ± 4.4 mm in the narrow-version group (p < 0.001). Mean modified aspect percentage ratio was 138.8 ± 8.1% in the standard-version group and 131.7 ± 6.3% in the narrow-version group (p < 0.001). The optimum cut-off point of the modified aspect percentage ratio for narrow-version femoral implants was 135.4% (sensitivity: 72.0%; specificity: 66.7%) for Anthem and 133.3% (sensitivity: 75.9%, specificity: 76.4%) for Persona. Conclusion: In the narrow-version femoral implant group, the ML dimension and the mean modified aspect percentage ratio were smaller than in the standard-version femoral implant group. A smaller modified aspect percentage ratio of the distal femur in preoperative radiographs could predict the need for narrow-version femoral implants in TKA, with suggested cut-off points of 135.4% for the Anthem TKA design and 133.3% for the Persona TKA design. These radiographic parameters are cost-effective and easily applicable for planning a TKA. Answer: Yes, a PEEK femoral TKA implant does preserve intact femoral surface strains compared with CoCr.
A study using Digital Image Correlation (DIC) technique found that the strain after implantation with a PEEK implant deviated less from the intact case (R2 = 0.7972 with slope β = 0.939) with no significant difference (p = 0.231), whereas the surface analog bone strain after CoCr implantation indicated strain shielding (R2 = 0.6178 with slope, β = 0.4314) and was lower than the intact case (p = 0.014) (PUBMED:27020431). This suggests that a PEEK femoral component could transfer more physiologically normal bone strains with a potentially reduced stress shielding effect, which may improve long-term bone preservation.
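The Digital Image Correlation comparison summarized in this answer is reported as a regression of implanted-case strains on intact-case strains over matched regions of interest (slope and R²), together with a Wilcoxon signed-rank test. The sketch below shows one way such numbers could be computed; the microstrain values are invented for illustration and are not the data or analysis code of the cited study.

import numpy as np
from scipy import stats

# Invented microstrain values for 16 matched regions of interest
intact = np.array([420, 510, 380, 610, 450, 700, 530, 480,
                   390, 560, 620, 410, 500, 470, 650, 540], dtype=float)

rng = np.random.default_rng(1)
peek = intact * 0.94 + rng.normal(0, 25, intact.size)  # hypothetical PEEK case
cocr = intact * 0.43 + rng.normal(0, 25, intact.size)  # hypothetical CoCr case

for label, implanted in (("PEEK", peek), ("CoCr", cocr)):
    fit = stats.linregress(intact, implanted)   # slope near 1 = strains preserved
    w, p = stats.wilcoxon(implanted, intact)    # paired test across regions
    print(f"{label}: slope = {fit.slope:.2f}, "
          f"R^2 = {fit.rvalue ** 2:.2f}, Wilcoxon p = {p:.3f}")

A slope near 1 with no significant paired difference is read as preserved load transfer, whereas a slope well below 1 with significantly lower strains than the intact femur indicates strain shielding.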
Instruction: Does location of rotation center in artificial disc affect cervical biomechanics? Abstracts: abstract_id: PUBMED:25868102 Does location of rotation center in artificial disc affect cervical biomechanics? Study Design: A 3-dimensional finite element investigation. Objective: To compare the biomechanical performances of different rotation centers (RCs) in the prevalent artificial cervical discs. Summary Of Background Data: Various configurations are applied in artificial discs. Design parameters may influence the biomechanics of implanted spine. The RC is a primary variation in the popular artificial discs. Methods: Implantation of 5 prostheses was simulated at C5-C6 on the basis of a validated finite element cervical model (C3-C7). The prostheses included ball-in-socket design with a fixed RC located on the inferior endplate (BS-FI) and on the superior endplate (BS-FS), with a mobile RC at the inferior endplate (BS-MI), dual articulation with a mobile RC between the endplates (DA-M), and sliding articulation with various RCs (SA-V). The spinal motions in flexion and extension served as a displacement loading at the C3 vertebrae. Results: Total disc replacements reduced extension moment. The ball-in-socket designs required less flexion moment, whereas the flexion stiffness of the spines with DA-M and SA-V was similar to that of the healthy model. The contributions of the implanted level to the global motions increased in the total disc replacements, except in the SA-V and DA-M models (in flexion). Ball-in-socket designs produced severe stress distributions in facet cartilage, whereas DA-M and SA-V produced more severe stress distribution on the bone-implant interface. Conclusion: Cervical stability was extremely affected in extension and partially affected in flexion by total disc replacement. With the prostheses with mobile RC, cervical curvature was readjusted under a low follower load. The SA-V and BS-FS designs exhibited better performances in the entire segmental stiffness and in the stability of the operative level than the BS-MI and BS-FI designs in flexion. The 5 designs demonstrated varying advantages relative to the stress distribution in the facet cartilages and on the bone-implant interface. Level Of Evidence: 5. abstract_id: PUBMED:25694869 Biomechanics of a fixed-center of rotation cervical intervertebral disc prosthesis. Background: Past in vitro experiments studying artificial discs have focused on range of motion. It is also important to understand how artificial discs affect other biomechanical parameters, especially alterations to kinematics. The purpose of this in vitro investigation was to quantify how disc replacement with a ball-and-socket disc arthroplasty device (ProDisc-C; Synthes, West Chester, Pennsylvania) alters biomechanics of the spine relative to the normal condition (positive control) and simulated fusion (negative control). Methods: Specimens were tested in multiple planes by use of pure moments under load control and again in displacement control during flexion-extension with a constant 70-N compressive follower load. Optical markers measured 3-dimensional vertebral motion, and a strain gauge array measured C4-5 facet loads. Results: Range of motion and lax zone after disc replacement were not significantly different from normal values except during lateral bending, whereas plating significantly reduced motion in all loading modes (P &lt; .002). 
Plating but not disc replacement shifted the location of the axis of rotation anteriorly relative to the intact condition (P < 0.01). Coupled axial rotation per degree of lateral bending was 25% ± 48% greater than normal after artificial disc replacement (P = .05) but 37% ± 38% less than normal after plating (P = .002). Coupled lateral bending per degree of axial rotation was 37% ± 21% less than normal after disc replacement (P < .001) and 41% ± 36% less than normal after plating (P = .001). Facet loads did not change significantly relative to normal after anterior plating or arthroplasty, except that facet loads were decreased during flexion in both conditions (P < .03). Conclusions: In all parameters studied, deviations from normal biomechanics were less substantial after artificial disc placement than after anterior plating. abstract_id: PUBMED:20149505 The influence of the axial, antero-posterior and lateral positions of the center of rotation of a ball-and-socket disc prosthesis on the cervical spine biomechanics. Background: Previous studies documented the importance of the positioning and the design parameters of the prosthesis in determining the biomechanics of the implanted spine. However, a comprehensive biomechanical evaluation of the significance of these parameters is still lacking. Therefore, the paper is aimed at the quantification of their influence on the flexibility of the implanted spine and the force transmitted through the facet joints. Methods: A finite element model of the C5-C6 spine unit including a ball-and-socket disc prosthesis was built. Three probabilistic variables were considered: the axial, antero-posterior and lateral positions of the center of rotation. Randomized input parameters were generated with the Monte Carlo method. Pure moments of 1.6 Nm in flexion, extension, lateral bending and axial rotation were imposed on the upper endplate of C5; 100 simulations were conducted for each of the considered loading conditions. Findings: Axial position of the center of rotation influenced the spine flexibility in all loading conditions and the facet force in extension, lateral bending and axial rotation. The antero-posterior position was found to influence the spine flexibility in flexion and extension, and the facet force in lateral bending and axial rotation. The lateral position was not significant. Interpretation: The effects of the positioning of a cervical disc prosthesis were estimated. A wide range of mechanical behaviors can be obtained by the manufacturers by appropriately manipulating the position of the center of rotation. A proper positioning of the artificial disc during the surgery, in particular in the antero-posterior direction, was found to be of critical importance. abstract_id: PUBMED:29462729 The Influence of Artificial Cervical Disc Prosthesis Height on the Cervical Biomechanics: A Finite Element Study. Objective: Artificial cervical disc replacement is expected to maintain normal cervical biomechanics. At present, the effect of the Prestige LP prosthesis height on cervical biomechanics has not been thoroughly studied. This finite element study of the cervical biomechanics aims to predict how the parameters, like range of motion (ROM), adjacent intradiscal pressure, facet joint force, and bone-implant interface stress, are affected by different heights of Prestige LP prostheses.
Methods: The finite element model of the intact cervical spine (C3-C7) was obtained from our previous study, and the model was altered to implant Prestige LP prostheses at the C5-C6 level. The effects of prosthesis heights of 5, 6, and 7 mm on ROM, adjacent intradiscal pressure, and facet joint force, as well as on the distribution of bone-implant interface stress, were examined. Results: ROM, adjacent intradiscal pressure, and facet joint force increased with the prosthesis height, whereas ROM and facet joint force decreased at C5-C6. The maximal stress on the inferior surface of the prostheses was greater than that on the superior surface, and the stresses increased with the prosthesis height. The biomechanical changes were slightly affected by the height of 5 and 6 mm prostheses, but were strongly affected by the 7-mm prosthesis. Conclusions: An appropriate height of the Prestige LP prosthesis can preserve normal ROM, adjacent intradiscal pressure, and facet joint force. Prostheses ≥2 mm higher than normal can lead to marked changes in the cervical biomechanics and bone-implant interface stress. abstract_id: PUBMED:28736356 Finite Element Analysis of Influence of Axial Position of Center of Rotation of a Cervical Total Disc Replacement on Biomechanical Parameters: Simulated 2-Level Replacement Based on a Validated Model. Background: Finite element models have been widely used to predict biomechanical parameters of the cervical spine. Previous studies investigated the influence of position of rotational centers of prostheses on cervical biomechanical parameters after 1-level total disc replacement. The purpose of this study was to explore the effects of axial position of rotational centers of prostheses on cervical biomechanics after 2-level total disc replacement. Methods: A validated finite element model of C3-C7 segments and 2 prostheses, including the rotational center located at the superior endplate (SE) and inferior endplate (IE), was developed. Four total disc replacement models were used: 1) IE inserted at C4-C5 disc space and IE inserted at C5-C6 disc space (IE-IE), 2) IE-SE, 3) SE-IE, and 4) SE-SE. All models were subjected to displacement control combined with a 50 N follower load to simulate flexion and extension motions in the sagittal plane. For each case, biomechanical parameters, including predicted moments, range of rotation at each level, facet joint stress, and von Mises stress on the ultra-high-molecular-weight polyethylene core of the prostheses, were calculated. Results: The SE-IE model resulted in significantly lower stress at the cartilage level during extension and at the ultra-high-molecular-weight polyethylene cores when compared with the SE-SE construct and did not generate hypermotion at the C4-C5 level compared with the IE-SE and IE-IE constructs. Conclusions: Based on the present analysis, the SE-IE construct is recommended for treating cervical disease at the C4-C6 level. This study may provide a useful model to inform clinical operations.
The purpose of this study was to investigate the effect of CDD on the ICR motion path of degenerated and adjacent segments. Method: A validated nonlinear three-dimensional finite element (FE) model of a healthy adult cervical spine was used. Progressive degeneration was simulated with six FE models by modifying intervertebral disc height and material properties, anterior osteophyte size, and degree of endplate sclerosis at the C5-C6 level. All models were subjected to a pure moment of 1 Nm and a compressive follower load of 73.6 N to simulate physical motion. ICR motion paths were compared among different models. Results: The normal FE model results were consistent with those of previous studies. In degenerative models, average ICR motion paths shifted significantly anterior at the degenerated segment (β = 0.27 mm; 95% CI: 0.22, 0.32) and posterior at the proximal adjacent segment (β = -0.09 mm; 95% CI: -0.15, -0.02) than those of the normal model. Conclusion: CDD significantly affected ICR motion paths at the degenerated and proximal adjacent segments. The changes at adjacent segments may be a result of compensatory mechanisms to maintain the balance of the cervical spine. Surgical treatment planning should take into account the restoration of ICR motion path to normal. These findings could provide a basis for prosthesis design and clinical practice. abstract_id: PUBMED:16135984 Kinematic analysis of the cervical spine following implantation of an artificial cervical disc. Study Design: Prospective cohort study. Objective: To assess the biomechanical profile of the cervical spine following cervical arthroplasty. Summary Of Background Data: Spinal arthroplasty offers the promise of maintaining functional spinal motion, thereby potentially avoiding adjacent segment disease. Disc replacement may become the next gold standard for the treatment of degenerative cervical spine disease, and must be studied rigorously to ensure in vivo efficacy and safety. Methods: A total of 20 patients underwent single or 2-level implantation of the Bryan artificial cervical disc (Medtronic Sofamor Danek, Memphis TN) for treatment of cervical degenerative disc disease producing radiculopathy and/or myelopathy. Lateral neutral, flexion, and extension cervical radiographs were obtained before surgery and at intervals up to 24 months after surgery. Kinematic parameters, including sagittal rotation, horizontal translation, change in disc height, and center of rotation (COR), were assessed for each spinal level using quantitative motion analysis software. Results: Motion was preserved in the operated spinal segments (mean range of motion 7.8 degrees) up to 24 months following surgery. The relative contribution of each spinal segment to overall spinal sagittal rotation differed depending on whether the disc was placed at C5-C6 or C6-C7. Overall cervical motion (C2-C7) was moderately but significantly increased during late follow-up. Sagittal rotation, anterior and posterior disc height, translation, and COR coordinates did not change significantly following surgery. The COR was most frequently located posterior and inferior to the center of the disc space. Conclusions: The Bryan artificial cervical disc provided in vivo functional spinal motion at the operated level, reproducing the preoperative kinematics of the spondylotic disc. abstract_id: PUBMED:34336799 Biomechanical Analysis of Cervical Artificial Disc Replacement Using Cervical Subtotal Discectomy Prosthesis. 
Background: Anterior cervical discectomy and fusion (ACDF) sacrifices segmental mobility, which can lead to the acceleration of adjacent segment degeneration. The challenge has promoted cervical artificial disc replacement (CADR) as a substitute for ACDF. However, CADR has revealed a series of new issues that are not found in ACDF, such as hypermobility, subsidence, and wear phenomenon. This study designed a cervical subtotal discectomy prosthesis (CSDP) consisting of a cervical disc prosthesis structure (CDP structure), cervical vertebra fixation structure (CVF structure), link structure, and locking screw, aiming to facilitate motion control and reduce subsidence. The aim of this study was to assess the biomechanics of the CSDP using finite element (FE) analysis, friction-wear test, and non-human primates implantation study. Study Design: For the FE analysis, based on an intact FE C2-C7 spinal model, a CSDP was implanted at C5-C6 to establish the CSDP FE model and compare it with the Prestige LP prosthesis (Medtronic Sofamor Danek, Minneapolis, MN, United States). The range of motion (ROM), bone-implant interface stress, and facet joint force were calculated under flexion extension, lateral bending, and axial rotation. In addition, CSDP was elevated 1 mm to mimic an improper implantation technique to analyze the biomechanics of CSDP errors in the FE model. Moreover, the friction-wear test was conducted in vitro to research CSDP durability and observe surface wear morphology and total wear volume. Finally, the CSDP was implanted into non-human primates, and its properties were evaluated and verified by radiology. Results: In the FE analysis, the ROM of the CSDP FE model was close to that of the intact FE model in the operative and adjacent segments. In the operative segment, the CSDP error FE model increased ROM in flexion extension, lateral bending, and axial rotation. The maximum stress in the CSDP FE model was similar to that of the intact FE model and was located in the peripheral cortical bone region. The facet joint force changes were minimal in extension, lateral bending, and axial rotation loads in CSDP. In the friction-wear test, after the 150-W movement simulation, both the CVF-link-junction and the CDP-link-junction had slight wear. In the CSDP non-human primate implantation study, no subsidence, dislocation, or loosening was observed. Conclusion: In the FE analysis, the biomechanical parameters of the CSDP FE model were relatively close to those of the intact FE model when compared with the Prestige LP FE model. In terms of CSDP error FE models, we demonstrated that the implantation position influences CSDP performance, such as ROM, bone-implant interface stress, and facet joint force. In addition, we performed a friction-wear test on the CSDP to prove its durability. Finally, CSDP studies with non-human primates have shown that the CSDP is effective. abstract_id: PUBMED:31940714 The Feasibility of Optimal Surgical Result Prediction according to the Center of Rotation Shift after Multilevel Cervical Total Disc Replacement. Study Design: Retrospective design/spine clinic and tertiary referral hospital. Purpose: This study investigated the association between shifted location of center of rotation (COR) and subsequent surgical outcomes and identified radiological parameters that corresponded to COR change after multilevel cervical total disc replacement (MCTDR). Overview Of Literature: Cervical total disc replacement (TDR) maintains normal cervical kinematics after surgery. 
However, there is a paucity of literature analyzing the relationship between radiological shifts of COR and subsequent clinical success, especially when it comes to the issue of MCTDR switch. Methods: This study included 24 consecutive patients treated with MCTDR following the diagnosis of multilevel cervical disc herniation or stenosis. Numeric rating scale, range of motion (ROM) at both C2-7 segment and TDR implanted levels, and location of COR at TDR level were evaluated pre- and post-MCTDR. These parameters were compared between patients who experienced successful and unsuccessful pain relief. Results: The inherent CORs relatively at ventrocranial coordinates demonstrated significant migrations to dorsocaudal location, more prominent shifts for the successful group, after MCTDR switch. The unsuccessful group showed markedly reduced C2-7 ROM and reduced angular improvement at C2-7 segment and MCTDR level compared with the successful group. Postoperative C2-7 ROM was related to postoperative COR along the x-axis. Conclusions: Aside from ROM preservation at both C2-7 and TDR levels, COR restoration from ventrocranial location close to normal coordinates by posterior and inferior shifts was marked as a clinical success after MCTDR. The COR position along the x-axis after MCTDR was an important factor to determine maintenance of C2-7 ROM. abstract_id: PUBMED:33369503 The discrepant clinical outcome predictions according to the differentiated centre of rotation shift after multilevel cervical total disc replacement. Purpose: Cervical total disc replacement (TDR) aims to maintain normal cervical kinematics after surgery. This study investigates the relation between shifted location of centre of rotation (COR) and subsequent surgical outcomes after multilevel cervical TDR (MCTDR) and identifies radiological parameter that corresponded to this change of COR after MCTDR. Methods: The study included a consecutive 24 patients treated with MCTDR following the diagnosis of multilevel cervical disc herniation or stenosis. Numeric Rating Scale (NRS), range of motion (ROM) at both C2-7 segment and TDR implanted levels, and location of COR at TDR level were evaluated at pre- and post-MCTDR. These parameters were compared between patients who experienced successful and unsuccessful pain relief. Results: The inherent CORs relatively at ventro-cranial coordinates have demonstrated significant migrations to dorso-caudal location, more prominent shifts for the successful group, after MCTDR switch. The unsuccessful group showed markedly reduced C2-7 ROM and reduced angular improvement at C2-7 as well as MCTDR level in comparison with the successful group. Postoperative C2-7 ROM was related to postoperative COR along the X axis. Conclusion: The determinant for clinical success after MCTDR, other than mere preservation of the ROM both at C2-7 and TDR levels, was restoration of COR from ventro-cranial location close to normal coordinates by posterior and inferior shifts. The position of COR along the X axis after MCTDR was important factor to determine maintenance of C2-7 ROM. Answer: Yes, the location of the rotation center (RC) in an artificial disc does affect cervical biomechanics. Studies have shown that the RC is a primary variation in popular artificial discs and that its position can significantly influence the biomechanics of the implanted spine (PUBMED:25868102). 
For instance, the axial position of the RC influences spine flexibility in all loading conditions and the facet force in extension, lateral bending, and axial rotation (PUBMED:20149505). The antero-posterior position of the RC affects spine flexibility in flexion and extension, as well as the facet force in lateral bending and axial rotation (PUBMED:20149505). Moreover, the biomechanical performances of different RCs in prevalent artificial cervical discs have been compared, showing that designs with mobile RCs allow cervical curvature to be readjusted under a low follower load (PUBMED:25868102). Furthermore, the biomechanical analysis of cervical artificial disc replacement using different prostheses, such as the Prestige LP prosthesis, has indicated that the height of the prosthesis can also affect cervical biomechanics, including range of motion (ROM), adjacent intradiscal pressure, facet joint force, and bone-implant interface stress (PUBMED:29462729). Additionally, the axial position of the RC of prostheses has been shown to affect biomechanical parameters after 2-level total disc replacement, with certain constructs, such as SE-IE (superior endplate-inferior endplate), resulting in lower stress at the cartilage level during extension and at the ultra-high-molecular-weight polyethylene cores compared to other constructs (PUBMED:28736356). The effect of cervical intervertebral disc degeneration on the motion path of the instantaneous center of rotation (ICR) at degenerated and adjacent segments has also been studied, revealing that cervical disc degeneration significantly affects ICR motion paths at the degenerated and proximal adjacent segments (PUBMED:33979732). Lastly, the feasibility of predicting optimal surgical results according to the COR shift after multilevel cervical total disc replacement has been investigated, showing that the restoration of COR from ventrocranial location close to normal coordinates by posterior and inferior shifts was marked as a clinical success after MCTDR (PUBMED:31940714, PUBMED:33369503). In summary, the location of the RC in an artificial disc is crucial in determining the biomechanics of the cervical spine following disc replacement. The position of the RC can influence the flexibility of the spine, the force transmitted through the facet joints, and the overall motion and stability of the cervical segments. Proper positioning of the artificial disc during surgery, particularly in the antero-posterior direction, is critical for achieving desired mechanical behavior and clinical outcomes (PUBMED:20149505). Additionally, the biomechanical analysis of a cervical artificial disc replacement using a cervical subtotal discectomy prosthesis (CSDP) has shown that the biomechanical parameters of the CSDP model were relatively close to those of the intact model, suggesting that the implantation position influences the performance of the prosthesis, such as ROM, bone-implant interface stress, and facet joint force (PUBMED:34336799). Moreover, the clinical success after multilevel cervical total disc replacement (MCTDR) is associated with the restoration of the COR from a ventrocranial location to a position closer to normal coordinates, as indicated by posterior and inferior shifts. The position of the COR along the x-axis after MCTDR is an important factor in maintaining the ROM of the C2-C7 segment (PUBMED:31940714, PUBMED:33369503). 
In conclusion, the location of the rotation center in an artificial disc has a significant impact on cervical biomechanics, affecting the motion, stability, and stress distribution in the cervical spine. Proper design and placement of the artificial disc are essential for preserving normal kinematics and ensuring successful clinical outcomes.
Instruction: Clinical and magnetic resonance imaging outcomes after surgical repair of complete proximal hamstring ruptures: does the tendon heal? Abstracts: abstract_id: PUBMED:31316242 Evaluation of the anterolateral ligament of the knee by magnetic resonance imaging in patients with chronic anterior cruciate ligament rupture. Objective: The purpose of this study was to evaluate the visibility of the anterolateral ligament (ALL) by magnetic resonance imaging (MRI) in patients with chronic anterior cruciate ligament (ACL) rupture. Materials And Methods: This retrospective case - control study compared 1.5 - T MRI scans for 50 patients with a chronic ACL rupture with those of a control group of 50 patients with an intact ACL. The ALL was evaluated in three portions: femoral, meniscal, and tibial. The status of each portion was classified as visualized or non-visualized. Two radiologists separately reviewed all the MRI scans to evaluate interobserver reliability. Results: At least one portion of the ALL was visualized in 100% of the control group and 72% of the chronic ACL rupture group. All three portions of the ALL were identified in 72% of the control group but only 10% of the chronic ACL rupture group. In both groups, the most commonly visualized portion was the meniscal portion and the least visualized was the tibial portion. In 18% of the chronic ACL rupture group, no portion of the ALL was visualized. Conclusions: The visibility of the ALL of the knee was significantly lower in patients with a chronic ACL rupture than in those with an intact one. abstract_id: PUBMED:31754746 Open repair of proximal hamstring tears Objective: Surgical refixation intends to restore the continuity of the hamstrings and anatomically reattach the torn tendons. Indications: In patients with 2‑tendon tear/ruptur with more than 2 cm retraction or with complete 3‑tendon tears, surgical fixation should be performed. Contraindications: Massive obesity, immobility. Surgical Technique: Surgery is performed via a transverse skin incision in the gluteal fold. Refixation of the torn tendons is achieved by means of suture anchors on the footprint of the ischial tuberosity. Postoperative Management: Postoperative treatment should be performed with a hip joint orthosis for 6 weeks, accompanied by physiotherapy. Results: Regarding surgical treatment, positive results are clearly described in the literature. Of the 31 patients who underwent surgery between 2010 and 2018, 90 % were satisfied with the surgical results 1 year postoperatively and 75 % reached their previous activity level. abstract_id: PUBMED:35941041 Perioperative lethality after endovascular and open repair of ruptured abdominal aortic aneurysms: An analysis of administrative data of the AOK health insurance fund Objective: In this paper we will report the perioperative outcome after endovascular (EVAR) and open (OAR) repair of ruptured abdominal aortic aneurysms (rAAA) in Germany based on data of the AOK health insurance fund. Methods: Anonymised data of all patients with rAAA (n = 3,227) who were treated from 01/01/2010 to 12/31/2016 were analysed, using SPSS 27 (IBM Deutschland GmbH, Ehningen, Germany). Results: 41.9% (1,353/3,227) of the patients were treated with EVAR and 58.1% (1,874/3,227) with OAR. Patients ≥80 years made up 38.4% for EVAR and 32.9% for OAR (p = 0.002). The proportion of patients undergoing surgery within 24 hours after admission was significantly higher for OAR (87.8%) than for EVAR (73.0%) (p = 0.000). 
The perioperative lethality rate for OAR was 42.4%, almost twice as high as that for EVAR at 21.3% (p = 0.000). Women had higher perioperative lethality rates for both EVAR (perioperative lethality 24.6%) and OAR (perioperative lethality 51.7%) compared to men with 20.6% (EVAR) and 40.2% (OAR), respectively. With EVAR, 35.8% of the patients showed a complication-free postoperative course; with OAR it was 17.7% (p = 0.000). Blood transfusions (whole blood, red cell concentrates, and autotransfusions) were administered in 57.6% of the patients with EVAR, but in 92.3% with OAR (p = 0.000). The highest perioperative lethality was found in EVAR and OAR patients who received both surgery within 24 hours after admission and blood transfusions (perioperative lethality EVAR 36.0%, OAR 46.0%; p = 0.000). In contrast, patients who did not require blood transfusions and were treated later than 24 hours after admission had the lowest perioperative lethality with 3.2% for EVAR vs. 5.4% for OAR (p = 0.623). Conclusion: The data confirm the observation that the perioperative mortality of rAAA patients is lower with EVAR than with OAR. However, strict attention must be paid to the time of the intervention. The low perioperative lethality of patients who were treated later than 24 hours after hospital admission and who did not require blood transfusions indicates that cases of symptomatic AAA without rupture have also been recorded in this administrative database under the diagnosis rAAA. One point of criticism is that the EVAR and OAR groups were deliberately not adjusted for baseline differences in order to better analyse the properties of the routine data, which introduces a considerable risk of bias from confounding variables into the statements of this work. abstract_id: PUBMED:37701320 Flexor Tendon Pulley Injuries: A Systematic Review of the Literature and Current Treatment Options. Closed injuries of the finger flexor pulley system are rare among the general population, and most of them occur during rock climbing. During the last few decades, scientific interest in this topic has increased. We conducted a systematic review of the literature according to the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines. The search was limited to the period from January 2000 to March 2022. PubMed and Scopus databases were investigated for full-text articles published in English, French, and Italian using the following MeSH terms: ([pulley rupture] OR [finger pulley lesion]) AND ([injur*] OR [ruptur*] OR [damage] OR [trauma*]). Initial screening identified 461 studies; after adding records identified through other sources and excluding duplicates, 172 were included. Finally, four clinical studies were included in the analysis. The methodological quality of the articles was evaluated through the methodological index for nonrandomized studies (MINORS) score. Our search identified four studies that enrolled a total of 189 patients, of whom 164 were male and 25 were female. We finally examined 154 patients with a total of 208 pulleys damaged. Except for the thumb, all fingers were involved. Depending on the type of flexor pulley injury, graded with the Schöffl classification, 69 patients underwent a surgical procedure, whereas 85 patients were treated conservatively. Closed finger pulley injuries occurred in both rock climbers and non-rock climbers. All patients had excellent results on the Buck-Gramcko score regardless of the return to sports activity.
Considering the overall outcomes of the reviewed articles, functional results were satisfactory in both conservative and surgical treatment. Moreover, in grade 3 and 4 injuries, surgical results were positive regardless of the specific technique used for finger pulley reconstruction. Only minor complications were reported. Closed flexor tendon pulley injuries require a careful clinical and imaging examination to confirm the diagnosis. In most cases, positive clinical results can be achieved with either conservative or surgical therapy. abstract_id: PUBMED:22999117 Radical cystectomy for urothelial bladder cancer: prognostic impact of lymph node metastasis and soft tissue surgical margins Purpose: To assess the association of soft tissue surgical margins (STSM) and/or lymph node metastasis (pN+) with characteristics and outcomes of patients treated with radical cystectomy (RC) for urothelial carcinoma of the bladder (UCB). Materials And Methods: We retrospectively collected the data of 242 patients treated with RC and pelvic lymphadenectomy for UCB between January 2005 and June 2009. Different parameters were studied: age, PSAt, pathological stage of cystectomy specimen (pT and pN), tumor grade, number (nb) of nodes (N) in lymphadenectomy, nb of metastatic nodes (nb N+), largest diameter of N+, ganglionic density, nb of N with capsular rupture, associated CIS, associated prostate cancer, follow-up, global and specific survival, date and etiology of death. Results: Positive STSM were identified in 22 patients (9.1%) and lymph node metastasis in 59 (24.4%). pN+ status was significantly associated with lower global (GS) and specific survival (SS) (P<0.003). The same was true for patients with positive STSM (R+), with actuarial 3-year GS and SS of 5% and 25%, respectively, versus 35% and 43.9% for patients without positive STSM (P<0.001). Conclusions: Positive soft tissue surgical margin and/or lymph node metastasis on the cystectomy specimen is a strong predictor of GS and SS in urothelial carcinoma of the bladder. The same holds for capsular rupture, ganglionic density greater than or equal to 0.10, and fewer than 14 nodes in the lymphadenectomy for pN+ patients. abstract_id: PUBMED:36647165 Single-port thoracoscopic removal of an azygos vein aneurysm: a case report and literature review. Background: Azygos vein aneurysms (AVAs) are extremely rare. The majority of patients have no obvious clinical symptoms, so they are found by physical examination or by chance. There is limited clinical treatment experience that can be referred to, and there are no clear guidelines or research evidence standardizing the surgical and interventional therapy. Here, we report a patient with idiopathic AVA whose three-dimensional reconstruction of the tumor was completed before surgery. On the basis of three-dimensional reconstruction, single-port thoracoscopic resection of the AVA was successfully completed and reported for the first time. The previously reported cases are summarized to provide guidance for the diagnosis and treatment of patients with AVAs. Case Presentation: A 56-year-old man was transferred to our hospital due to "dysphagia". The diagnosis of AVA was made after enhanced computed tomography, gastroscopy, fiberoptic bronchoscopy, and three-dimensional reconstruction. Congenital weakness or degenerative changes cause the vein walls to become extremely thin, so the AVA carried a risk of rupture. Furthermore, because the patient had symptoms of dysphagia, he received single-port thoracoscopic surgery. After the operation, his dysphagia disappeared.
The postoperative pathology confirmed hemangioma. The patient was discharged 3 days after surgery without any complications. Conclusions: AVAs are rare. Preoperative three-dimensional reconstruction can greatly help surgeons clarify the disease diagnosis, formulate the surgical plan, avoid damage to the surrounding vital organs, and reduce intraoperative bleeding. Thoracoscopic surgery to remove AVAs is difficult and has a high risk of bleeding, while more minimally invasive single-port thoracoscopic surgery is also safe and effective for the treatment of AVAs. abstract_id: PUBMED:2529709 The significance of vascular surgery treatment of complications following percutaneous transluminal angioplasty of the lower extremities In 147 patients suffering from arterial occlusive disease of the lower extremities 162 PTAs of the iliac, femoral and popliteal arteries were performed. Thirteen complications occurred in 12 patients, while surgical intervention was necessary in seven cases (4.3%). Hematomas at the site of the arterial puncture were the most frequent complications, requiring acute surgical intervention (n = 3). In two patients, acute vascular occlusion after PTA required bypass operation and thrombendarteriectomy, respectively. In one patient, presenting with an iliac stenosis, PTA was performed using a transaxillary approach and was complicated by an axillary hematoma due to the rupture of the arteria subscapularis. Furthermore, one patient with an extended hematoma of the abdominal wall required surgical intervention. The results presented indicate that PTA of the lower extremities for treatment of arterial occlusive disease is a procedure with low risk. Complications that require acute operative therapy can mostly be treated successfully by small surgical interventions. abstract_id: PUBMED:23588573 Muscle rupture after minimal trauma of the spastic muscle: three case reports of patients with spinal cord injury. Study Design: Retrospective study of three cases. Objectives: To report three cases of muscle rupture caused by minimal trauma in spinal cord injury (SCI) patients with severe spasticity and a literature review of the underlying mechanisms. Setting: Department of Physical and Rehabilitation Medicine, University Hospitals Leuven, Belgium Methods: Retrospective study of three cases of muscle ruptures in SCI patients with severe spasticity. All muscle lesions were diagnosed by ultrasound. Literature review (Pubmed) was performed to identify extrinsic and intrinsic risk factors. Results: According to the literature and our clinical findings, several structural and mechanical alterations of the spastic muscle in combination with specific stretching during therapy or a transfer can cause a muscle rupture after minimal trauma. Conclusion: To the authors' knowledge, this is the first report of muscle rupture due to spasticity in SCI patients. Altered mechanical properties of the spastic muscle in combination with extreme stretching may cause partial or complete rupture. Although this is a rare complication of spasticity, medical staff and therapists should be aware of the risk factors in order to prevent and quickly identify muscle lesions.
Widening of the lateral joint space of up to 4 mm could be demonstrated in the remaining patients. Greater instability must be regarded as a rupture of the Lig. calcaneocuboideum. abstract_id: PUBMED:3132784 Tension fixation of rupture of the symphysis with woven polydioxanone sutures In seven patients (mean age 22 years) with injuries of the pelvic ring, the rupture of the symphysis was treated by polydioxanone ligaments. Complete rupture of the iliosacral joint was treated twice by additional external fixation, and twice a compression screw osteosynthesis was applied. Incomplete iliosacral rupture in another three cases was not operated. Patients were treated functionally without signs of loosening or instability. At a control examination after 5 months all patients were free of symptoms in the area of the symphysis. Infection of the operation area did not occur. In our opinion the advantages of this method are the simple operative technique, reduced risk of infection and no need for removal of metallic implants. Our excellent initial experience with synthetic PDS ligaments in the operative treatment of symphysis rupture seems to justify its application for other injuries like iliosacral or acromioclavicular rupture. Answer: The provided abstracts do not directly address the specific question of whether the tendon heals after surgical repair of complete proximal hamstring ruptures, in terms of clinical and magnetic resonance imaging (MRI) outcomes. However, one abstract does provide some relevant information regarding the outcomes after surgical repair of proximal hamstring tears:
Instruction: International retrospective cohort study of neural tube defects in relation to folic acid recommendations: are the recommendations working? Abstracts: abstract_id: PUBMED:15722368 International retrospective cohort study of neural tube defects in relation to folic acid recommendations: are the recommendations working? Objectives: To evaluate the effectiveness of policies and recommendations on folic acid aimed at reducing the occurrence of neural tube defects. Design: Retrospective cohort study of births monitored by birth defect registries. Setting: 13 birth defects registries monitoring rates of neural tube defects from 1988 to 1998 in Norway, Finland, Northern Netherlands, England and Wales, Ireland, France (Paris, Strasbourg, and Central East), Hungary, Italy (Emilia Romagna and Campania), Portugal, and Israel. Cases of neural tube defects were ascertained among liveborn infants, stillbirths, and pregnancy terminations (where legal). Policies and recommendations were ascertained by interview and literature review. Main Outcome Measures: Incidences and trends in rates of neural tube defects before and after 1992 (the year of the first recommendations) and before and after the year of local recommendations (when applicable). Results: The issuing of recommendations on folic acid was followed by no detectable improvement in the trends of incidence of neural tube defects. Conclusions: Recommendations alone did not seem to influence trends in neural tube defects up to six years after the confirmation of the effectiveness of folic acid in clinical trials. New cases of neural tube defects preventable by folic acid continue to accumulate. A reasonable strategy would be to quickly integrate food fortification with fuller implementation of recommendations on supplements. abstract_id: PUBMED:38321376 Preconception folic acid supplementation for the prevention of birth defects: a prospective, population-based cohort study in mainland China. Background: Folic acid supplementation is recommended for reducing the risk of birth defects. We aimed to assess the protective association of periconception folic acid supplements with birth defects in real-world setting. Methods: This prospective, population-based cohort study utilized national preconception registered data of married Chinese couples planning a pregnancy within 6 months between 2010 and 2012 in Mainland China. Participated women are freely provided folic acid starting 3 months before conception till 3 months after conception. Birth defects were self-reported at 42 days postpartumn followup. R software (v4.0.2) was applied for statistical analyses. Results: Complete data of 567,547 couples with pregnancy outcomes and folic acid supplementation were extracted for final analysis. A total of 74.7% women were with folic acid supplementation, and 599 birth defects were self-reported. The odd of birth defects was lower among women taking folic acid compared to their counterparts not taking (0.102% vs 0.116%, P &lt; 0.001). In the multiple logistic regression analyses, the odd of birth defects was lower among couples with maternal folic acid supplementation (OR = 0.78, 95%CI: 0.66-0.95, P = 0.011), especially decreased odd of neural tube defects (NTDs) (OR = 0.56, 95%CI: 0.39-0.82, P = 0.003). This association was confirmed by 1:4 and 1:10 case control analysis. 
Odds of birth defects were significantly lower among women with folic acid supplementation more than 3 months before pregnancy (P < 0.001), and moreover, the odds of cleft (P = 0.007) and NTDs (P = 0.007) showed a notable decrease. Conclusion: This retrospective case cohort study provides programmatic evidence for public health strategy-making for reducing the risk of NTDs and clefts. abstract_id: PUBMED:32166991 Evaluating folic acid supplementation among Japanese pregnant women with dietary intake of folic acid lower than 480 µg per day: results from TMM BirThree Cohort Study. Objective: In Japan, supplementation with 400 µg of folic acid per day is recommended for women who are planning to get pregnant to decrease the risk of their babies getting neural tube defects (NTD). However, the proportion of women who have taken folic acid supplements before conception is low among Japanese pregnant women. In addition, the dietary intake of folic acid has not yet reached the government recommended dietary intake levels (480 µg per day). This study aimed to clarify the prevalence and determinants of adequate folic acid supplements among Japanese pregnant women with dietary folic acid intake lower than 480 µg per day. Methods: This cross-sectional study was a part of the Tohoku Medical Megabank Project Birth and Three-Generation Cohort Study. We collected information on folic acid supplements before conception, sociological/lifestyle characteristics, and food consumption. The primary outcome was the use of folic acid supplements (adequate or inadequate, based on the timing of initiation of folic acid supplements). Multiple logistic regression analysis was used to examine the association between sociological/lifestyle characteristics and the adequate intake of folic acid supplements. Results: Among the 11,562 pregnant women who took less than 480 µg of folic acid per day from food, the prevalence of adequate users was 18.0%. Pregnant women who reported adequate use of folic acid supplements were more likely to be older and educated, and to report higher household income and a history of fertility treatment. Conversely, they were less likely to be ever or current smokers and multipara. Conclusion: This study found that the prevalence of folic acid supplement use for the prevention of NTD among Japanese pregnant women was still low. abstract_id: PUBMED:12573374 Folic acid supplements during early pregnancy and likelihood of multiple births: a population-based cohort study. Background: Folic acid supplements are recommended for women of childbearing age to prevent neural tube defects in their offspring. Results of some studies, however, suggest an increase in multiple births associated with use of vitamin supplements that contain folic acid during pregnancy. Our aim was to assess this association. Methods: We used data from a population-based cohort study from which we assessed the occurrence of multiple births in women (n=242015) who had participated in a campaign to prevent neural tube defects with folic acid supplements (400 microg per day) in China. Folic acid use was ascertained before pregnancy outcome was known. We studied the relation between multiple births and any use of folic acid pills before or during early pregnancy; additionally, we investigated mechanisms by which folic acid could potentially affect the occurrence of multiple births by examining pill-taking at three time periods: before ovulation, around the time of fertilisation, and after conception.
Findings: 1496 (0.62%) multiple births occurred in a cohort of 242015 women who had registered with the study between October, 1993, and September, 1995, and who had a pregnancy not affected by a birth defect; the rate of multiple births in women who did and did not take folic acid before or during early pregnancy was 0.59% and 0.65%, respectively (rate ratio 0.91; 95% CI 0.82-1.00). Interpretation: Our findings suggest that consumption of folic acid supplements during pregnancy is not associated with an increased occurrence of multiple births. abstract_id: PUBMED:24603317 Periconceptional folic acid supplementation and the risk of preterm births in China: a large prospective cohort study. Background: Folic acid-containing multivitamins have been associated with a reduced risk of preterm birth. We examined whether periconceptional use of folic acid alone reduced this risk. Methods: Data were derived from a large population-based cohort study conducted in China to evaluate the prevention of neural tube defects with folic acid supplementation. The sample comprised 207 936 singleton live births delivered at gestational ages of 20-42 weeks to women from two provinces in southern China. Healthcare workers recorded folic acid intake prospectively each month. Gestational age calculation was based on the first day of the last menstrual period. Preterm births were categorized into three clinical subtypes: iatrogenic preterm birth, preterm premature rupture of membranes (PPROM) and spontaneous preterm birth. Logistic regression was used to evaluate the association between folic acid use and the risk of preterm birth, adjusting for potential confounders. Results: The incidence of preterm birth was significantly lower among folic acid users (5.28%) than among non-users (6.10%). Folic acid use showed a 14% risk reduction for preterm birth overall [adjusted risk ratio (RR) = 0.86, 95% confidence interval (CI) 0.82-0.90]. This association was strongest for spontaneous preterm birth (adjusted RR = 0.81, 95% CI 0.78-0.86) and was not significant for iatrogenic preterm birth (adjusted RR = 0.97, 95% CI 0.88-1.07) or PPROM (adjusted RR = 1.07, 95% CI 0.93-1.23). Conclusions: Daily intake of 400 μg folic acid alone during the periconceptional period was associated with a reduced risk of spontaneous preterm birth. abstract_id: PUBMED:28545876 Impact of Periconceptional Folic Acid Supplementation on Low Birth Weight and Small-for-Gestational-Age Infants in China: A Large Prospective Cohort Study. Objectives: To explore the effects of maternal folic acid supplementation alone during pregnancy on the incidence of low birth weight (LBW) and small-for-gestational-age (SGA) infant status. Study Design: Data were derived from a large population-based cohort study performed in China to evaluate the prevention of neural tube defects with folic acid supplementation. The sample comprised 200 589 singleton live births registered in 2 southern Chinese provinces by mothers for whom detailed information on folic acid use was available. Gestational age was calculated from the first day of the last menstrual period. LBW was defined as a birth weight <2500 g. Infants were considered SGA when the age-adjusted birth weight was below the 10th percentile as defined by a national survey performed in 1998. Logistic regression was used to estimate the effects of folic acid after adjusting for the principal potential confounders. Results: The overall incidence of LBW and SGA status was 2.18% and 5.82%, respectively.
The incidence of LBW and SGA status was 2.09% and 5.73% in women who took folic acid, and 2.27% and 5.90% in those who did not. The adjusted risk ratios associated with folic acid use were 0.85 (95% CI: 0.80-0.90) for LBW and 0.93 (95% CI: 0.89-0.96) for SGA status. Folic acid use during pregnancy appeared to be particularly important to prevent LBW and SGA status. Conclusions: A maternal daily intake of 400 µg folic acid alone significantly reduced the risks of infant LBW and SGA status. abstract_id: PUBMED:31109064 Social Inequalities in Prenatal Folic Acid Supplementation: Results from the ELFE Cohort. Most professional and international organizations recommend folic acid supplementation for women planning pregnancy. Various studies have shown high levels of non-compliance with this recommendation. This study aimed to identify sociodemographic characteristics related to this compliance. The analyses were based on 16,809 women from the French nationwide ELFE cohort (Etude Longitudinale Française depuis l'Enfance). Folic acid supplementation was assessed at delivery, and sociodemographic characteristics were collected at two months postpartum. The association between sociodemographic characteristics and compliance with recommendations on folic acid supplementation (no supplementation, periconceptional supplementation, and supplementation only after the periconceptional period) was examined using multivariate multinomial logistic regression. Only 26% of French women received folic acid supplementation during the periconceptional period, 10% of women received supplementation after the periconceptional period, and 64% received no supplementation. Young maternal age, low education level, low family income, multiparity, single parenthood, maternal unemployment, maternal overweight, and smoking during pregnancy were related to lower likelihood of folic acid supplementation during the periconceptional period compared to no supplementation. These associations were not explained by unplanned pregnancy. Immigrant and underweight women were more likely to receive folic acid supplementation after the periconceptional period. Our study confirms great social disparities in France regarding the compliance with the recommendations on folic acid supplementation. abstract_id: PUBMED:17029289 Trends of selected malformations in relation to folic acid recommendations and fortification: an international assessment. Background: Two crucial issues relative to the benefits and impact of folic acid in the prevention of birth defects are whether supplementation recommendations alone, without fortification, are effective in reducing the population-wide rates of neural tube defects (NTDs), and whether such policies can reduce the occurrence of other birth defects. Using data from 15 registries, we assessed rates and trends of 14 major defects, including NTDs, in areas with official recommendations or fortification to assess the effectiveness of recommendations and fortification on a wide range of major birth defects. Methods: We evaluated surveillance data through 2003 on major birth defects from population-based registries from Europe, North America, and Australia. All included ascertainment of pregnancy terminations (where legal). Trends before and after policies or fortification were assessed via Poisson regression and were compared via rate ratios. Results: Significant changes in trends were seen for NTDs in areas with fortification but not in areas with supplementation recommendations alone. 
For other major birth defects, there was an overall lack of major trend changes after recommendations or fortification. However, some significant declines were observed for select birth defects in individual areas. Conclusions: Recommendations alone remain an ineffective approach in translating the known protective effect of folic acid in population-wide decline in NTD rates. Fortification appears to be effective in reducing NTDs. The effect on other birth defects remains unclear. abstract_id: PUBMED:33081287 Prevention of Neural Tube Defects by Folic Acid Supplementation: A National Population-Based Study. Folic acid supplementation is recommended for neural tube defect prevention during pregnancy. We conducted an observational, retrospective national registry study to determine the rate of dispensing of periconceptional folic acid after prescription in a sample of French women representative of the general population. Our study population (n = 186,061) was a representative sample of the French population, recorded in the Health Data System database on pharmacy dispensing of medication and mandatory reporting of pregnancy. Between 2006 and 2016, 14.3% of pregnant women had a prescription for folic acid supplementation during the month preceding conception and for the first 12 weeks of pregnancy. Of these prescriptions, 30.9% were issued before the start of pregnancy. This percentage was lower for first pregnancies. The rate of pharmacy dispensing during the preconception period increased progressively from 3.8% to 8.3% between 2006 and 2016. In France, the rate of pharmacy dispensing of periconceptional folic acid after medical prescription is very low and does not follow international recommendations. It seems essential to implement awareness-raising policies targeting the general population and physicians regarding effective periconceptional supplementation, particularly starting in the preconception period. Clarification of international recommendations and fortification of flour could improve the efficacy of folate supplementation at population level. abstract_id: PUBMED:29900595 Preconception folic acid supplementation use and the occurrence of neural tube defects in Japan: A nationwide birth cohort study of the Japan Environment and Children's Study. We evaluated the relationship between preconception folic acid supplementation and the occurrence of neural tube defects (NTDs) in offspring, using data from the Japan Environment and Children's Study (a nationwide prospective birth cohort study) database. Of 92 269 participants with single pregnancies, 74 cases (offspring or fetuses) had NTDs, including 32 cases of spina bifida, 24 cases of anencephaly, and 19 cases of encephalocele. A total of 7634 participants (8.27%) used preconception folic acid supplementation, and of these, 621 (0.67%) also took in dietary folic acid at ≥480 μg/day. Multivariate logistic regression analyses demonstrated no association between preconception folic acid supplementation and NTDs in offspring or fetuses (odds ratio [OR] 0.622; 95% confidence interval [CI]: 0.226-1.713). Moreover, the participants who combined preconception folic acid supplement use with dietary folic acid intake ≥480 μg/day demonstrated no incidence of NTDs in offspring or fetuses. Our analysis is limited by the absence of the data on the daily amount of supplementary folic acid intake, requiring careful attention to the interpretation. 
Additional surveys are required in Japan to resolve those limitations for further comprehensive assessment. Answer: The effectiveness of folic acid recommendations in reducing the occurrence of neural tube defects (NTDs) has been evaluated in various international studies with mixed results. One study found that the issuance of recommendations on folic acid did not lead to a detectable improvement in the trends of incidence of NTDs, suggesting that recommendations alone were insufficient to influence trends in NTDs up to six years after the confirmation of the effectiveness of folic acid in clinical trials (PUBMED:15722368). This study concluded that a more effective strategy would be to quickly integrate food fortification with the fuller implementation of recommendations on supplements. In contrast, a large prospective cohort study in China showed that periconceptional folic acid supplementation was associated with a lower incidence of birth defects, including a notable decrease in the odds of NTDs (PUBMED:38321376). This suggests that in a real-world setting, folic acid supplementation can have a protective association with birth defects. Another study in Japan found that the prevalence of folic acid supplement use among pregnant women was low, despite recommendations for supplementation to prevent NTDs (PUBMED:32166991). This indicates that compliance with folic acid recommendations may be an issue in some populations. Further research from China indicated that folic acid supplementation during pregnancy was not associated with an increased occurrence of multiple births (PUBMED:12573374), and was associated with a reduced risk of spontaneous preterm birth (PUBMED:24603317), low birth weight, and small-for-gestational-age infants (PUBMED:28545876). A study in France highlighted social inequalities in prenatal folic acid supplementation, with various sociodemographic factors influencing compliance with recommendations (PUBMED:31109064). Another international assessment found that fortification, but not supplementation recommendations alone, was effective in reducing NTD rates (PUBMED:17029289). In France, the rate of pharmacy dispensing of periconceptional folic acid after medical prescription was very low, suggesting the need for improved awareness and possibly fortification policies (PUBMED:33081287). Lastly, a nationwide birth cohort study in Japan did not find an association between preconception folic acid supplementation and NTDs, although the study had limitations such as the absence of data on the daily amount of supplementary folic acid intake (PUBMED:29900595). In summary, while folic acid supplementation has been shown to be effective in reducing the risk of NTDs in some settings, the implementation and compliance with recommendations vary internationally. Fortification appears to be more effective than recommendations alone in reducing NTD rates.
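The folic acid studies summarised above report their main findings as odds ratios and risk ratios with 95% confidence intervals (for example, OR = 0.56, 95% CI 0.39-0.82 for NTDs in PUBMED:38321376). As a reading aid, the sketch below shows how such an estimate is obtained from a simple 2x2 table; the counts are invented for illustration and are not the data behind any of the cited results, which were additionally adjusted in multivariate logistic regression models.

```python
# Minimal sketch of how an odds ratio and its 95% confidence interval are computed
# from a 2x2 table. The counts below are invented for illustration only; they are
# NOT the data underlying the ORs quoted in the abstracts above.
import math

exposed_cases, exposed_noncases = 30, 42000        # e.g. NTD cases / unaffected, supplemented group
unexposed_cases, unexposed_noncases = 25, 14000    # e.g. NTD cases / unaffected, unsupplemented group

odds_ratio = (exposed_cases * unexposed_noncases) / (exposed_noncases * unexposed_cases)

# Woolf (log-normal) approximation for the 95% CI of the OR
se_log_or = math.sqrt(1/exposed_cases + 1/exposed_noncases + 1/unexposed_cases + 1/unexposed_noncases)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")
# An OR below 1 with a CI that excludes 1 (as for NTDs in PUBMED:38321376)
# indicates lower odds of the outcome in the supplemented group.
```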
Instruction: Clinical re-examination 10 or more years after polytrauma: is there a gender related difference? Abstracts: abstract_id: PUBMED:19996800 Clinical re-examination 10 or more years after polytrauma: is there a gender related difference? Background: Previous studies documented that premenopausal women better tolerate severe injuries than men in regards to mortality and intensive care complications. We test the hypothesis whether surviving women have a better long-term outcome than surviving men. Methods: We reassessed 637 polytrauma patients 10 or more years (mean 17 +/- 5 years) after trauma. Mental health was assessed by the posttraumatic stress diagnostic scale and the Hospital Anxiety and Depression Scale. The clinical outcome was assessed by standardized scores (Hannover Score for Polytrauma Outcome and short form-12). A physical examination was performed by an orthopedic surgeon. Results: We studied 479 men (M; 75.4%) and 158 women (F; 24.6%) aged M = 26.7 +/- 12.2 versus F = 25.9 +/- 13.0 years (p = 0.47). Injury Severity Score showed M = 20.4 +/- 9.9 versus F = 21.7 +/- 9.7 (p = 0.13). Women showed a higher rate of posttraumatic stress disorder (F = 14.5% vs. M = 6.2%; p = 0.035) and psychologic support (F = 28.0% vs. M = 15.0%; p < 0.001), longer duration of rehabilitation, and longer sick leave time. Quality-of-life was significantly lower in women (Short form-12 psychologic F = 48.6 +/- 10.8 vs. M = 50.8 +/- 9.4; p = 0.02), but the same rate of women (75.3%) and men (75.4%; p = 0.995) felt well rehabilitated. Conclusion: Late after polytrauma, women suffer more severe psychologic impairment than men who had similar injuries. This finding is independent of physical impairment and the subjective feeling of a good rehabilitation outcome. Clinically, women require special support even years after injury to improve their psychologic status. In the future, they might benefit from early concomitant psychologic treatment if mental problems are suspected. abstract_id: PUBMED:10851415 Is clinical examination an accurate indicator of raised intra-abdominal pressure in critically injured patients? Objectives: To determine the rate of elevated intra-abdominal pressure (IAP) and to evaluate the accuracy of clinical abdominal examination in the assessment of IAP in the critically injured trauma patient. Design: A prospective blinded study. Setting: The medical-surgical critical care unit of a university-affiliated regional adult trauma centre. Patients: Forty-two adult blunt trauma victims, who had a mean injury severity score of 36. Interventions: Urinary bladder pressure was measured daily and classified as normal (10 mm Hg or less), elevated (more than 10 mm Hg) or significantly elevated (more than 15 mm Hg). A blinded clinical assessment of abdominal pressure was concurrently performed and recorded as elevated or normal. Main Outcome Measures: The sensitivity, specificity and accuracy and the positive and negative predictive values of the 2 interventions in identifying elevated IAP. Results: Twenty-one patients (50%) had an elevated IAP at some point during the study. Of the 147 bladder pressure measurements done in these 42 patients, 47 (32%) were more than 10 mm Hg and 16 (11%) were more than 15 mm Hg. The sensitivity, specificity, positive predictive value, negative predictive value and accuracy of clinical abdominal examination for identifying elevated IAP were 40%, 94%, 76%, 77% and 77%, respectively.
Clinical abdominal examination had a sensitivity, specificity, positive predictive value, negative predictive value and accuracy of 56%, 87%, 35%, 94% and 84% respectively, for significantly elevated IAP. Conclusions: Urinary bladder pressure was commonly elevated among our population of critically injured adults. Compared with bladder pressure measurements, clinical abdominal assessment showed poor sensitivity and accuracy for elevated IAP. These findings suggest that more routine measurements of bladder pressure in patients at risk for intra-abdominal hypertension should be performed. abstract_id: PUBMED:37029436 Diagnostic accuracy of clinical examination to identify life- and limb-threatening injuries in trauma patients. Background: Timely and accurate identification of life- and limb-threatening injuries (LLTIs) is a fundamental objective of trauma care that directly informs triage and treatment decisions. However, the diagnostic accuracy of clinical examination to detect LLTIs is largely unknown, due to the risk of contamination from in-hospital diagnostics in existing studies. Our aim was to assess the diagnostic accuracy of initial clinical examination for detecting life- and limb-threatening injuries (LLTIs). Secondary aims were to identify factors associated with missed injury and overdiagnosis, and determine the impact of clinician uncertainty on diagnostic accuracy. Methods: Retrospective diagnostic accuracy study of consecutive adult (≥ 16 years) patients examined at the scene of injury by experienced trauma clinicians, and admitted to a Major Trauma Center between 01/01/2019 and 31/12/2020. Diagnoses of LLTIs made on contemporaneous clinical records were compared to hospital coded diagnoses. Diagnostic performance measures were calculated overall, and based on clinician uncertainty. Multivariate logistic regression analyses identified factors affecting missed injury and overdiagnosis. Results: Among 947 trauma patients, 821 were male (86.7%), median age was 31 years (range 16-89), 569 suffered blunt mechanisms (60.1%), and 522 (55.1%) sustained LLTIs. Overall, clinical examination had a moderate ability to detect LLTIs, which varied by body region: head (sensitivity 69.7%, positive predictive value (PPV) 59.1%), chest (sensitivity 58.7%, PPV 53.3%), abdomen (sensitivity 51.9%, PPV 30.7%), pelvis (sensitivity 23.5%, PPV 50.0%), and long bone fracture (sensitivity 69.9%, PPV 74.3%). Clinical examination poorly detected life-threatening thoracic (sensitivity 48.1%, PPV 13.0%) and abdominal (sensitivity 43.6%, PPV 20.0%) bleeding. Missed injury was more common in patients with polytrauma (OR 1.83, 95% CI 1.62-2.07) or shock (systolic blood pressure OR 0.993, 95% CI 0.988-0.998). Overdiagnosis was more common in shock (OR 0.991, 95% CI 0.986-0.995) or when clinicians were uncertain (OR 6.42, 95% CI 4.63-8.99). Uncertainty improved sensitivity but reduced PPV, impeding diagnostic precision. Conclusions: Clinical examination performed by experienced trauma clinicians has only a moderate ability to detect LLTIs. Clinicians must appreciate the limitations of clinical examination, and the impact of uncertainty, when making clinical decisions in trauma. This study provides impetus for diagnostic adjuncts and decision support systems in trauma. abstract_id: PUBMED:10437836 Role of clinical examination in screening for blunt cervical spine injury. 
Background: The purpose of this study was to evaluate the hypothesis that awake and alert blunt trauma patients with Glasgow Coma Scores of 14 or 15 (regardless of blood ethanol level or other injuries sustained) can be effectively evaluated with clinical examination without radiographic evaluation of the cervical spine. Study Design: During a 32-month period at an urban Level 1 Trauma Center, 2,176 consecutive blunt trauma patients who presented with Glasgow Coma Scores of 14 or 15 were prospectively evaluated by trauma resident housestaff. Housestaff performed physical examinations of the neck and questioned the patients for the presence of neck pain. Following study form documentation of the cervical neck examination, a lateral cervical spine x-ray was performed. Further studies such as swimmer's view and CAT scan were performed if the lateral x-ray could not completely evaluate C1 to C7. These further studies were considered part of the lateral cervical spine (c-spine) x-ray screen. Attending radiologists performed final x-ray interpretations. Results: The study consisted of 2,176 patients, 33 (1.6%) of whom were diagnosed with cervical spine injury. Of the 33 patients with cervical spine injury, 3 had negative clinical examinations (sensitivity, 91%). Lateral c-spine x-ray screen was negative in 1 of these 3 patients. The 2 patients with negative c-spine clinical examination but positive lateral c-spine x-ray screens were diagnosed with C2 spinous process fracture and C6-C7 body fractures. Thirteen patients with negative lateral c-spine screens (sensitivity, 61%) were diagnosed with cervical spine injury. We evaluated 463 patients with blood ethanol levels greater than 100 mg/dL, and 6 (1.3%) were diagnosed with c-spine injury. No injuries were missed on clinical examination in this subgroup with elevated blood ethanol levels. Conclusions: 1) Clinical examination of the neck can reliably rule out significant cervical spine injury in the awake and alert blunt trauma patient. Addition of lateral c-spine x-ray does not improve the sensitivity of clinical examination in the diagnosis of significant cervical spine injury. 2) Elevated ethanol level is not a contraindication to the use of clinical examination as the screening tool for cervical spine injury. Level of consciousness, as determined by Glasgow Coma Score, is a more effective criterion to dictate a screening method for cervical spine injury. abstract_id: PUBMED:3137851 Decision to hospitalize: objective diagnosis-related group criteria versus clinical judgment. Three hundred fifty patients in an observation unit attached to an emergency department received diagnostic workup of nine critical diagnostic syndromes (abdominal pain, flank pain, headache, possible cerebrovascular accident, chest pain, dizziness or syncope, head injury, seizure, multiple trauma). The decision to hospitalize for acute care after observation for 11.1 +/- 3.9 hours was examined. The objective diagnosis-related group (DRG) criteria for admission were compared retrospectively with the physician's clinical judgment of need for hospitalization. Clinical outcome was used to establish the correctness of the decision to hospitalize. Clinical judgment was compared with objective DRG criteria for reliability in predicting the presence of serious pathology necessitating acute care hospitalization; respective values were sensitivity, 100% vs 76%; specificity, 86% vs 80%; positive predictive value, 75% vs 62%; and negative predictive value, 100% vs 89%. 
The difference between the sensitivity of the two admission criteria was highly significant (P < 10^-8; chi-square = 26.12). We conclude that the physician's clinical judgment outperforms DRG objective criteria in identifying which patients with critical diagnostic syndromes need acute care hospitalization for emergency medical or surgical therapy. abstract_id: PUBMED:26151507 The validity of abdominal examination in blunt trauma patients with distracting injuries. Background: Many trauma care providers often disregard the abdominal clinical examination in the presence of extra-abdominal distracting injuries and mandate abdominal computed tomographic scan in these patients. Ignoring the clinical examination may incur undue expense and radiation exposure. The purpose of this study was to assess the efficacy of abdominal clinical examination in patients with distracting injuries. Methods: During a 1-year period, all awake and alert blunt trauma patients with Glasgow Coma Scale (GCS) score of 14 or 15 were entered into a prospective study. Abdominal clinical examination was performed and documented prospectively on all patients. Abdominal clinical examination included four-quadrant anterior abdominal palpation, flank palpation, lower thoracic palpation, pelvis examination, and palpation of the thoracolumbar spine. Following examination documentation, all patients underwent computed tomographic scan of the abdomen and pelvis with intravenous contrast. Results: A total of 803 patients were enrolled: 451 patients had distracting injuries, and 352 patients did not. Of the 352 patients without distracting injuries, 19 (5.4%) had intra-abdominal injuries, of whom 2 (10.5%) had negative clinical examination result. Of the 451 patients with distracting injuries, 48 (10.6%) were diagnosed with intra-abdominal injury, of whom 5 (10.4%) had negative clinical examination result. All five missed injuries in patients with distracting injuries were solid organ injuries, none of which required surgical intervention or blood transfusion. The sensitivity and negative predictive value of abdominal examination for patients with distracting injuries were 90.0% and 97.0%, respectively. The sensitivity and negative predictive value of abdominal examination for surgically significant and transfusion-requiring injuries were both 100%. Conclusion: Distracting injuries do not seem to diminish the efficacy of clinical abdominal examination for the diagnosis of clinically significant abdominal injury. These data suggest that clinical examination of the abdomen is valid in awake and alert blunt trauma patients, regardless of the presence of other injuries. Level Of Evidence: Diagnostic study, level III. abstract_id: PUBMED:30612277 Impact of seizure-related injuries on quality of life. Objective: Our aim is to assess the types and frequency of seizure-related injuries and to determine their effects on Quality of Life (QoL). Methods: Fifty-seven patients with epilepsy were included in our study. The demographic and clinical data of all the participants were recorded. All patients completed the Quality of Life in Epilepsy Inventory 89 (QOLIE-89). Injury types were classified as burns, head trauma, dental traumas, fractures, body injuries, penetrating traumas, road injuries, and drowning. Results: Forty-two patients had seizure-related physical injury history whereas 15 of them declared no history of injury.
Lower education levels and more frequent seizures were associated with higher seizure-related injury rates (p < 0.05). The most common types of seizure-related injuries were head trauma (22%) and fractures (17%). Fifty-seven (64%) of the injuries took place at home. There was no difference in QOLIE-89 scores between patients with or without seizure-related injury. Multiple injuries, admission to emergency, older than 20 years of the first seizure-related injury, and shorter than 10 years after last seizure-related injury negatively affect the QoL scores. Conclusion: Patients with epilepsy are likely to have seizure-related injuries which may be severe but do not affect the patients' QoL. Seizure-related injuries most commonly occur at home and therefore simple precautions (supervised bathing, using microwave ovens instead of classical stoves, avoiding electric irons and electric heaters, and sleeping close to the floor to avoid falling) taken to reduce the incidence of seizure-related injuries will help reduce hospitalizations and will also be cost-effective. abstract_id: PUBMED:10528594 Use of an Objective Structured Clinical Examination (OSCE) for the assessment of physician performance in the ultrasound evaluation of trauma. Background: A reliable means of assessing physician competency in performing ultrasound (US) is critical for training and credentialing. Objective Structured Clinical Examinations (OSCE) have been used successfully to assess clinical competency in other areas of surgical education but have not been applied previously to trauma ultrasound training. The objectives of this study were to assess physician performance in the focused abdominal sonography in trauma (FAST) examination by using a specifically designed OSCE, and to determine whether the OSCE detects differences in two determinants of competency (knowledge acquisition and clinical interpretation skills). Methods: Eighty-two physicians in surgery (n = 49) and emergency medicine (n = 33) at a Level I trauma center were evaluated. All participated in a FAST course consisting of didactic sessions on US physics, indications, and technique, FAST examination videos, and a hands-on session with human models. The OSCE consisted of two parts: written examination that assessed factual knowledge, and videotape of real-time US examinations that assessed interpretation skills. The OSCE was administered before and after the FAST course. Results: Significant improvements in postcourse OSCE scores were observed for factual knowledge (52.5 +/- 2.0 vs. 87.5 +/- 1.1, p < 0.001) and interpretation skills (27.2 +/- 1.4 vs. 62.9 +/- 1.3, p < 0.007). Scores for US interpretation were significantly lower than those for factual knowledge at both precourse (27.2 +/- 1.4 vs. 52.5 +/- 2.0, p < 0.001) and postcourse (62.9 +/- 1.3 vs. 87.5 +/- 1.1, p < 0.01). No performance differences were observed between surgeons and emergency medicine physicians and no effect of training level on test scores was observed. Conclusion: Knowledge acquisition and US interpretation skills can be assessed reliably with a specifically designed OSCE. Although both skills improved after participation in a FAST course, US interpretation scores were consistently lower than those for factual knowledge. This study supports the use of the objective structured clinical examination in both the design of ultrasound teaching programs and the assessment of physician competency.
abstract_id: PUBMED:11265900 The clinical presentation of pediatric pelvic fractures. Background: Few studies have addressed the presentation and clinical impact of pediatric pelvic fractures. We sought to describe pediatric blunt trauma patients with pelvic fracture (PF) and to evaluate the sensitivity and specificity of physical examination at presentation for diagnosis. Methods: Retrospective analysis of all PF and control (NPF) patients from our pediatric institution over an 8-year period. Results: A total of 174 patients (88 PF, 86 NPF) were included. Median patient age was 8 years (range, 3 months to 18 years), with 54% males. The most common mechanisms of injury for PF patients were automobile-related accidents (75%). There were 140 patients (87%) who were transported by air or ground medical services. At presentation, approximately 16% of PF patients had a Glasgow Coma score of <15, a mean Revised Trauma Score of 7.49, and a median Injury Severity Score (ISS) of 9. Thirty-one PF patients (35%) had an ISS of >15 indicating severe, multiple injuries. Sixty-eight PF patients (77%) had severe isolated injuries (Abbreviated Injury Scale 1990 value of >3); 11% of PF patients required transfusions, and 2% died. Fifteen PF patients (17%) had no pelvic ring disruption; 39 (43%) had a single pelvic ring fracture, 22 (25%) had two pelvic ring fractures, 2 (2%) had acetabular fractures, and 10 (11%) had a combination of pelvic fractures. An abnormal physical examination of the pelvis was noted in 81 patients with PF (92% sensitivity, 95% confidence interval [CI] = 0.89-0.95); 15 NPF patients had an abnormal examination (79% specificity, 95% CI = 0.74-0.84). The positive predictive value of the pelvis examination was 0.84, and the negative predictive value was 0.89. The most common abnormal pelvis examination finding was pelvic tenderness in 65 PF patients (73%). A total of seven PF patients had a normal examination of the pelvis; four had a depressed level of consciousness (defined as GCS <15), and six patients had a distracting injury. Conclusions: Pediatric blunt trauma patients with pelvic fracture represent a severely injured population but generally have lower transfusion rates and mortality than noted in adult studies. The pelvis examination appears to be sensitive and specific in this retrospective study. However, an altered level of consciousness and/or distracting injuries may affect examination sensitivity and specificity. Based on this retrospective study, we cannot advocate eliminating pelvic radiographs in the severely injured, blunt trauma patient. Prospective studies are recommended. abstract_id: PUBMED:10432092 The utility of physical examination in proximity penetrating extremity trauma. This study assessed the efficacy of physical examination as a screening modality for the diagnosis of surgically significant arterial injury in proximity penetrating extremity trauma (PPET). All cases of PPET were assessed and admitted per established protocol over a 30-month period from January 1, 1993, to June 30, 1995. No patients were excluded if other body regions were also injured. Landmarks defining upper extremity injuries were the deltopectoral groove to the wrist and for lower extremities from the inguinal ligament to the ankle.
Patients admitted with PPET fell into one of three categories: 1) no hard signs of vascular injury present-admitted for 24-hour observation; 2) presence of at least one hard sign of vascular injury-taken immediately to the operating room; 3) positive sign of arterial injury that requires angiography (i.e., diminished but appreciable pulse by physical examination or doppler, large nonexpanding hematoma, bilateral pulse deficit, no appreciable pulse with unreconstructable trajectory). Four hundred six patients with 489 injured extremities secondary to PPET were admitted over a 30-month period. Sixty-two extremities suffered multiple injuries. Of the extremities injured, 83 per cent were secondary to gunshot wounds, 12 per cent were attributed to stabs/lacerations, and 5 per cent were due to shotgun injuries. Four hundred twenty-one extremities with PPET fell into Group 1. There were four missed injuries (specificity, 99%) in this group (one ulnar artery, one radial artery, one posterior tibial artery, and one anterior tibial artery). Two of these injuries were considered surgically significant. None of the patients suffered limb or functional loss as a consequence of their missed arterial injury. Thirty-nine extremities were entered into Group 2, with two patients found to have no arterial injury. Twenty-nine extremities were placed into Group 3 with 10 (35%) found to have surgically significant injury on angiogram. The overall sensitivity and specificity for physical examination was 92 per cent and 95 per cent, respectively, for surgically significant injury. Physical examination is a highly sensitive and specific screening modality for the identification of surgically significant arterial injury in PPET. Patients who present with diminished, but appreciable, pulses by physical examination or doppler, large nonexpanding hematomas as the only sign, bilateral pulse deficits and nonappreciable pulses with unreconstructable trajectory benefit from further investigation with angiography. Answer: Yes, there is a gender-related difference in the long-term outcomes of polytrauma survivors when re-examined 10 or more years after the injury. A study that reassessed 637 polytrauma patients (479 men and 158 women) found that women exhibited a higher rate of posttraumatic stress disorder (PTSD) and required psychological support more frequently than men. Women also had longer durations of rehabilitation and sick leave. Despite similar physical impairments and a subjective feeling of good rehabilitation outcomes between genders, women reported a significantly lower quality of life in the psychological domain. These findings suggest that women may suffer more severe psychological impairment than men after experiencing similar injuries and highlight the need for special support for women even years after the injury to improve their psychological status. The study also suggests that early concomitant psychological treatment might be beneficial for women if mental problems are suspected (PUBMED:19996800).
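Several of the abstracts above (e.g. PUBMED:10851415, PUBMED:11265900, PUBMED:10432092) summarise the accuracy of clinical examination as sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV). As a minimal illustration of how such figures are derived from a 2x2 table, the following sketch uses hypothetical counts rather than the data of any cited study:

    # Hypothetical 2x2 counts for a screening test against a reference standard
    tp, fp = 40, 6    # test positive: true positives, false positives
    fn, tn = 60, 94   # test negative: false negatives, true negatives

    sensitivity = tp / (tp + fn)            # proportion of true cases detected
    specificity = tn / (tn + fp)            # proportion of non-cases correctly ruled out
    ppv = tp / (tp + fp)                    # probability of disease given a positive test
    npv = tn / (tn + fn)                    # probability of no disease given a negative test
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    print(sensitivity, specificity, ppv, npv, accuracy)  # 0.40, 0.94, ~0.87, ~0.61, 0.67

Because PPV and NPV depend on the prevalence of the condition in the studied population, they will generally differ from the values quoted in the abstracts even when sensitivity and specificity are similar.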
Instruction: Can 50 Hz magnetic fields alter iron metabolism and induce anaemia? Abstracts: abstract_id: PUBMED:11133049 Can 50 Hz magnetic fields alter iron metabolism and induce anaemia? Unlabelled: PURPOSE. Some changes in tissue iron concentration have been reported in animals exposed to electromagnetic fields. In other studies, variations in the haemoglobin level were occasionally observed. In the present experiment, the effects of exposing a rat to a 50 Hz magnetic field (MF) were therefore investigated for the possible induction of anaemia due to changes in iron metabolism. Materials And Methods: Male Brown Norway rats (n=225) were exposed to a sinusoidal 50 Hz MF of 500 microT for 15 weeks. Haematological parameters, differential bone marrow cell counts and sideroblasts were investigated. Blood parameters of iron metabolism were measured. Iron concentration and total iron content were also determined in the spleen and liver, to assess iron storage in these organs. Results: Significant differences between the exposed and control rat were only detected for iron storage in the spleen, and for the percentage of bone marrow cells of the red cell lineage. Conclusion: The changes observed were not associated with anaemia during the 15 weeks of MF exposure. However, the decrease in bone marrow cells of the red cell lineage and the changes in iron storage detected at the end of the experiment did not allow the possibility to be ruled out that exposure to 50 Hz MFs may induced delayed biological effects. abstract_id: PUBMED:24036104 The complex interplay of iron metabolism, reactive oxygen species, and reactive nitrogen species: insights into the potential of various iron therapies to induce oxidative and nitrosative stress. Production of minute concentrations of superoxide (O2(*-)) and nitrogen monoxide (nitric oxide, NO*) plays important roles in several aspects of cellular signaling and metabolic regulation. However, in an inflammatory environment, the concentrations of these radicals can drastically increase and the antioxidant defenses may become overwhelmed. Thus, biological damage may occur owing to redox imbalance-a condition called oxidative and/or nitrosative stress. A complex interplay exists between iron metabolism, O2(*-), hydrogen peroxide (H2O2), and NO*. Iron is involved in both the formation and the scavenging of these species. Iron deficiency (anemia) (ID(A)) is associated with oxidative stress, but its role in the induction of nitrosative stress is largely unclear. Moreover, oral as well as intravenous (iv) iron preparations used for the treatment of ID(A) may also induce oxidative and/or nitrosative stress. Oral administration of ferrous salts may lead to high transferrin saturation levels and, thus, formation of non-transferrin-bound iron, a potentially toxic form of iron with a propensity to induce oxidative stress. One of the factors that determine the likelihood of oxidative and nitrosative stress induced upon administration of an iv iron complex is the amount of labile (or weakly-bound) iron present in the complex. Stable dextran-based iron complexes used for iv therapy, although they contain only negligible amounts of labile iron, can induce oxidative and/or nitrosative stress through so far unknown mechanisms. 
In this review, after summarizing the main features of iron metabolism and its complex interplay with O2(*-), H2O2, NO*, and other more reactive compounds derived from these species, the potential of various iron therapies to induce oxidative and nitrosative stress is discussed and possible underlying mechanisms are proposed. Understanding the mechanisms, by which various iron formulations may induce oxidative and nitrosative stress, will help us develop better tolerated and more efficient therapies for various dysfunctions of iron metabolism. abstract_id: PUBMED:12430618 Diseases of iron metabolism. Diseases of iron metabolism are likely to be both more frequent than expected, and exhibit a wider range of clinic severity and effects. Some present without evidence of anemia. Unexplained diseases of end organs that are affected by iron (liver, heart, pancreas, kidney, adrenals, and cerebellum) should have an iron metabolism disorder considered. Review of the blood indices and serum iron and ferritin markers may alert the clinician to most disorders. Further research is likely to define the scope and approach to clinical diagnosis of the diseases of iron metabolism. abstract_id: PUBMED:36528614 Influence of magnetic nanoparticle biotransformation on contrasting efficiency and iron metabolism. Magnetic nanoparticles are widely used in biomedicine for MRI imaging and anemia treatment. The aging of these nanomaterials in vivo may lead to gradual diminishing of their contrast properties and inducing toxicity. Here, we describe observation of the full lifecycle of 40-nm magnetic particles from their injection to the complete degradation in vivo and associated impact on the organism. We found that in 2 h the nanoparticles were eliminated from the bloodstream, but their initial biodistribution changed over time. In 1 week, a major part of the nanoparticles was transferred to the liver and spleen, where they degraded with a half-life of 21 days. MRI and a magnetic spectral approach revealed preservation of contrast in these organs for more than 1 month. The particle degradation led to the increased number of red blood cells and blood hemoglobin level due to released iron without causing any toxicity in tissues. We also observed an increase in gene expression level of Fe-associated proteins such as transferrin, DMT1, and ferroportin in the liver in response to the iron particle degradation. A deeper understanding of the organism response to the particle degradation can bring new directions to the field of MRI contrast agent design. abstract_id: PUBMED:38412767 Hookworm infection as a model for deepen knowledge of iron metabolism and erythropoiesis in anemia. Over the years, there has been progress in understanding the molecular aspects of iron metabolism and erythropoiesis. However, despite research conducted both in laboratories and living organisms, there are still unanswered questions due to the complex nature of these fields. In this study we investigated the effects of hookworm infection on iron metabolism and how the hosts response to anemia is affected using hamsters infected with Ancylostoma ceylanicum as a model. Our data revealed interesting relationships between infection-induced anemia, erythropoiesis, iron metabolism, and immune modulation, such that the elevated production of erythropoietin (EPO) in renal tissue indicated intensified erythropoiesis in response to anemia. 
Additionally, the increased expression of the erythroferrone (ERFE) gene in the spleen suggested its involvement in iron regulation and erythropoiesis. Gene expression patterns of genes related to iron metabolism varied in different tissues, indicating tissue-specific adaptations to hypoxia. The modulation of pro-inflammatory and anti-inflammatory cytokines highlighted the delicate balance between immune response and erythropoiesis. Data derived from the investigation of changes induced in iron metabolism and stress erythropoiesis following anemia aid in our understanding of mechanisms related to blood spoliation and anemia, which could potentially be extrapolated or compared to other types or causes of anemia. These findings also contribute to our understanding of the pathophysiology of erythropoiesis in the context of blood loss. abstract_id: PUBMED:31387234 A Short Review of Iron Metabolism and Pathophysiology of Iron Disorders. Iron is a vital trace element for humans, as it plays a crucial role in oxygen transport, oxidative metabolism, cellular proliferation, and many catalytic reactions. To be beneficial, the amount of iron in the human body needs to be maintained within the ideal range. Iron metabolism is one of the most complex processes involving many organs and tissues, the interaction of which is critical for iron homeostasis. No active mechanism for iron excretion exists. Therefore, the amount of iron absorbed by the intestine is tightly controlled to balance the daily losses. The bone marrow is the prime iron consumer in the body, being the site for erythropoiesis, while the reticuloendothelial system is responsible for iron recycling through erythrocyte phagocytosis. The liver has important synthetic, storing, and regulatory functions in iron homeostasis. Among the numerous proteins involved in iron metabolism, hepcidin is a liver-derived peptide hormone, which is the master regulator of iron metabolism. This hormone acts in many target tissues and regulates systemic iron levels through a negative feedback mechanism. Hepcidin synthesis is controlled by several factors such as iron levels, anaemia, infection, inflammation, and erythropoietic activity. In addition to systemic control, iron balance mechanisms also exist at the cellular level and include the interaction between iron-regulatory proteins and iron-responsive elements. Genetic and acquired diseases of the tissues involved in iron metabolism cause a dysregulation of the iron cycle. Consequently, iron deficiency or excess can result, both of which have detrimental effects on the organism. abstract_id: PUBMED:11509103 Iron and the brain: neurotransmitter receptors and magnetic resonance spectroscopy. Earlier studies show that in iron deficiency with anaemia and in latent iron deficiency neurotransmitters are altered. The changes induced in the fetal brain are irreversible on rehabilitation. The important alterations in glutamate metabolism in latent iron deficiency stimulated studies on gamma aminobutyric acid and glutaminate receptors. It was observed that binding of 3H-muscimol at pH 7.5 and 1 mg protein/assay increased significantly in synaptic vesicular membranes and under similar conditions 3H-glutamate binding showed reduction. Thus iron deficiency played a role in both excitatory and inhibitory neurotransmitter receptors. 
To elucidate the role of body iron status on the brain, anaemic children with thalassemia and iron deficiency were subjected to 'magnetic resonance spectroscopy' of globus pallidus, caudate and dentate nuclei and there was no change in iron content. The concentrations of creatinine and aspartate increased, with lowering of choline content. The findings were similar in thalassemia as well as iron deficiency anaemia, suggesting that in anaemia changes operate through reduced oxygen availability. abstract_id: PUBMED:12475052 Nitric oxide and changes of iron metabolism in exercise. Accumulated data imply that exercise itself might not lead to a true iron deficiency or 'sport anaemia' in a healthy athlete who has adequate iron intake. The higher prevalence of iron deficiency anaemia in younger female athletes might not be due to exercise itself, but probably results from dietary choices, inadequate iron intake and menstruation. These factors can also induce iron deficiency or anaemia in the general population. However, exercise does affect iron metabolism, leading to low or sub-optimal iron status. The underlying mechanism is unknown. In this review, recent advances in the study of the effect of exercise on iron metabolism and nitric oxide, and the relationship between nitric oxide and iron status in exercise are discussed. A hypothesis that increased production of nitric oxide might contribute to sub-optimal iron status in exercise is proposed. abstract_id: PUBMED:38057018 One advantageous reflection of iron metabolism in context of normal physiology and pathological phases. Purpose (background): This review presents an update on iron metabolism in the context of normal physiology and pathological phases. Iron is one of the vital elements in humans and is incorporated into proteins as a component of heme (e.g. hemoglobin, myoglobin, cytochromes proteins, myeloperoxidase, nitric oxide synthetases), iron sulfur clusters (e.g. respiratory complexes I-III, coenzyme Q10, mitochondrial aconitase, DNA primase), or other functional groups (e.g. hypoxia inducible factor prolyl hydroxylases). All of these iron-containing proteins are needed for vital cellular and organismal functions, including oxygen transport, mitochondrial respiration, intermediary and xenobiotic metabolism, nucleic acid replication and repair, host defense, and cell signaling. Methods (metabolic Strategies): Cells have developed metabolic strategies to import and employ iron safely. The regulatory process of iron uptake, storage, intracellular trafficking and utilization is vital for the maintenance of cellular iron homeostasis. Cellular iron utilization and intracellular iron trafficking pathways are not well established, and very little is known about them. The predominant organs involved in the metabolism of iron are the intestine, liver, bone marrow and spleen. Iron is conserved, recycled and stored. Because of the reduced bioavailability of iron, humans have developed extremely efficient mechanisms for iron conservation. Notably, iron losses cannot be considerably increased through physiologic mechanisms, even if iron intake and stores become excessive. Loss of iron is balanced by intake from dietary sources. Results (outcomes): Numerous physiological abnormalities are associated with impaired iron metabolism. These abnormalities appear in the form of several diseases.
These include duodenal ulcer, inflammatory bowel disease, sideroblastic anaemia, congenital dyserythropoietic anemias and low-grade myelodysplastic syndromes. Hereditary hemochromatosis and anaemia are two chronic diseases that disturb iron metabolism in various tissues, including the spleen and the intestine. Impairment in hepatic hepcidin synthesis occurs in chronic liver disease arising from alcoholism or viral hepatitis. This condition leads to iron overload that can cause further hepatic damage. Iron has an important role in several infectious diseases, such as tuberculosis, malaria, trypanosomatid diseases and acquired immunodeficiency syndrome (AIDS). Iron is also associated with systemic lupus erythematosus [SLE], cancer, Alzheimer's disease (AD) and post-traumatic epilepsy. Conclusion: Recently, numerous research studies have been dedicated to the field of iron metabolism, but a number of burning questions are still waiting for an answer. Cellular iron utilization and intracellular iron trafficking pathways are not well established, and very little is known about them. Increased knowledge of the physiology of iron homeostasis will support understanding of the pathology of iron disorders and also provide a basis for advancing treatment. abstract_id: PUBMED:9066977 The role of nitric oxide in the regulation of cellular iron metabolism. Eukaryotic cellular iron homeostasis becomes impaired during inflammation, manifesting itself most dramatically as the anemia of chronic disease. This alteration in cellular iron metabolism is the result of a complex network of events, acting at the transcriptional and translational levels to alter the expression of proteins involved in the uptake, storage, and utilization of iron. With the discovery of nitric oxide (NO), its role in host defense, and its interactions with a number of different iron-containing proteins, investigators have begun unravelling the connection between iron metabolism and NO. Following a brief discussion of normal cellular iron metabolism, this review focuses on alterations in iron homeostasis observed during inflammation with an emphasis on the role of NO. A working model involving NO in the pathogenesis of the anemia of chronic disease is proposed. Answer: The exposure of rats to a 50 Hz magnetic field (MF) of 500 microT for 15 weeks resulted in significant differences in iron storage in the spleen and the percentage of bone marrow cells of the red cell lineage compared to control rats. However, these changes were not associated with anaemia during the 15 weeks of MF exposure. The study concluded that while the observed changes did not lead to anaemia within the experiment's timeframe, the possibility that exposure to 50 Hz MFs may induce delayed biological effects could not be ruled out (PUBMED:11133049). In general, iron metabolism is a complex process that involves the interplay of various radicals, including reactive oxygen species (ROS) and reactive nitrogen species (RNS), which can lead to oxidative and/or nitrosative stress. Iron deficiency (anemia) is associated with oxidative stress, but its role in inducing nitrosative stress is less clear. Iron therapies themselves, both oral and intravenous, may also induce oxidative and/or nitrosative stress, potentially affecting iron metabolism (PUBMED:24036104). Diseases of iron metabolism can present with a wide range of clinical severity and effects, sometimes without evidence of anemia.
These diseases can affect various organs that are involved in iron storage and metabolism, such as the liver, heart, pancreas, kidney, adrenals, and cerebellum (PUBMED:12430618). Therefore, while the specific study on 50 Hz MF exposure did not conclusively demonstrate the induction of anaemia, the broader context of iron metabolism suggests that factors which alter iron homeostasis or induce oxidative/nitrosative stress, potentially including MF exposure, could contribute to the development of anaemia under certain conditions or over extended periods. However, more research would be needed to establish a direct causal relationship between 50 Hz magnetic fields and the induction of anaemia.
Instruction: Should we place ureteral stents in retroperitoneal laparoscopic ureterolithotomy? Abstracts: abstract_id: PUBMED:25132944 Should we place ureteral stents in retroperitoneal laparoscopic ureterolithotomy?: Consideration of surgical techniques and complications. Purpose: There is some debate over the necessity of ureteral stenting after laparoscopic ureterolithotomy. We evaluated the need for ureteral stenting after retroperitoneal laparoscopic ureterolithotomy (RLU). Materials And Methods: Between January 2009 and January 2013, 41 patients underwent RLU to remove upper ureteral stones. The retroperitoneal approach was used in all patients by a single surgeon. A double J (D-J) stent was placed in the first 17 patients after the procedure but not in any of the next 24 patients. Results: The mean patient age, serum creatinine levels, and stone size were not significantly different between the two groups. The stone-free rate was 100%. The mean operative time was significantly shorter in the stentless group than in the stent group (59.48 minutes vs. 77.88 minutes, p<0.001). Parenteral analgesic use and anticholinergic medication use were observed in the stent group only. The blood loss, drain removal day, and hospital stay were not significantly different between the two groups. No other significant complications occurred during or after the operation in any patients. Conclusions: RLU is a safe and effective treatment modality for large impacted ureteral stones. In this study, D-J stent placement was not necessary after RLU. In the future, large-scale studies of RLU without D-J stenting, especially on the frequency of the development of complications according to the surgical technique, may be needed. abstract_id: PUBMED:31399878 Analysis of the clinical effect and long-term follow-up results of retroperitoneal laparoscopic ureterolithotomy in the treatment of complicated upper ureteral calculi (report of 206 cases followed for 10 years). Purpose: To summarize the experience of retroperitoneal laparoscopic ureterolithotomy (RLU) in the treatment of upper ureteral calculi and long-term follow-up results and compare them with ureteroscopic lithotomy (URL) and percutaneous nephrolithotomy (PCNL). Methods: The clinical data of 585 patients with complex upper ureteral calculi who met the inclusion criteria from January 2006 to December 2017 were retrospectively analyzed. There were 206 cases treated with RLU, 201 cases treated with URL and 178 cases treated with PCNL. The operation time, hospitalization time, stone clearance rate, incidence of postoperative complications and recurrence rate were observed and compared among the three groups. Results: Among 585 eligible patients, 206 cases were treated with RLU, 201 cases were treated with URL and 178 cases were treated with PCNL. The stone clearance rate (98.54%), postoperative complication types and rate (0.97%) in the RLU group were superior to those in the URL group and the PCNL group (P < 0.05). There was no significant difference in the recurrence rate (3.88%) between the RLU group and the other two groups (P > 0.05). Conclusion: Retroperitoneal laparoscopic ureterolithotomy has many advantages in the treatment of upper ureteral calculi, including a high stone clearance rate and fewer complications, especially ureteral stricture. Retroperitoneal laparoscopic ureterolithotomy is also safe to treat upper ureteral calculi accompanied by severe urinary tract infection.
abstract_id: PUBMED:25548791 Retroperitoneal laparoscopic ureterolithotomy for proximal ureteral calculi in selected patients. Objectives: To summarize our experience of retroperitoneal laparoscopic ureterolithotomy for ureteral calculi and evaluate the safety and efficiency of this procedure. Methods: We conducted a retrospective analysis of 197 patients with proximal ureteral calculi who accepted retroperitoneal laparoscopic ureterolithotomy from June 2005 to June 2014. Results: All procedures were performed successfully and the mean operating time and estimated blood loss were 87 min and 64 mL. The clearance rate was 98.5% and the rates of urine leak and ureteral stricture were 2.5% and 1.0%. Conclusions: Retroperitoneal laparoscopic ureterolithotomy is a safe and effective procedure for patients with complex stones or anatomic abnormalities, and, with experience of high volume series, it is also a reasonable choice as the primary treatment for such selected patients. abstract_id: PUBMED:24294547 Efficacy of retroperitoneal laparoscopic ureterolithotomy for the treatment of large proximal ureteric stones and its impact on renal function. Purpose: The purpose of this study was to evaluate the efficacy of retroperitoneal laparoscopic ureterolithotomy for the management of large proximal ureteric stones and the impact of this treatment on postoperative renal function. Methods: The data of 12 patients (7 men and 5 women; mean age, 68.5 ± 8.9 years) with large pyeloureteral junction (2 cases) and upper ureteral (10 cases) stones (25.3 ± 7.4 mm) that had undergone retroperitoneal laparoscopic ureterolithotomy were reviewed. Renal function was analyzed by the estimated glomerular filtration rate (eGFR) and renal scintigraphy using 99mTc-mercaptoacetyltriglycine (99mTc-MAG3) before and 3 months after surgery. Results: The mean operative time was 129.5 ± 21.4 minutes, with a mean blood loss of 64.4 ± 78.2 mL. The mean duration of hospital stay after surgery was 6.4 ± 2.7 days, and the mean duration of stenting was 7.2 ± 1.7 weeks. A stone clearance rate of 100% was achieved, and no patient developed ureteric stricture. 99mTc-MAG3 scintigraphy showed that laparoscopic removal of calculi did not affect renal function, but did improve ureteral occlusion. Conclusions: Retroperitoneal laparoscopic ureterolithotomy is a safe and effective treatment option for reducing ureteral obstruction in select patients with large proximal ureteric stones. abstract_id: PUBMED:15333221 Laparoscopic retroperitoneal ureterolithotomy: initial experience and review of literature. Purpose: To evaluate the place of retroperitoneal laparoscopic ureterolithotomy in the management of ureteral calculi through an initial experience and review of previously reported cases. Patients And Methods: Laparoscopic ureterolithotomy was attempted in six patients (five men and one woman) with an age of 31 to 56 years (mean 41.5 years) who had large and impacted stones in the upper ureter. The stone size ranged from 18 to 40 mm (mean 25.7 mm). The retroperitoneal route was used in all cases. Results: The stone was removed in all but one case. The operative time ranged from 120 to 180 minutes (mean 160 minutes). No intraoperative complications were encountered. Postoperative complications included prolonged urinary leak in one patient. On follow-up, all patients were stone free with decompression of the upper urinary tract. 
Conclusions: Laparoscopic ureterolithotomy is safe and effective and should be performed each time a ureterolithotomy is indicated. abstract_id: PUBMED:16253067 Innovative technique for ureteral stenting during retroperitoneal laparoscopic ureterolithotomy. Large calculi in the upper and mid ureter are best managed by laparoscopic ureterolithotomy. Stenting after laparoscopic ureterolithotomy is difficult because of the curls and pliability of the double-J stent. To make the procedure easy, we have devised a double-J stent, closed at both ends, that is threaded on a guidewire, the length of which is 1 cm greater than the stent. The guidewire is passed from the sideholes in the stent, leaving 1 or 2 cm of the guidewire outside. This guidewire makes the stent taut so that it is easily negotiated up and down the ureter. After the stone has been extracted by incising the ureter, the straightened stent on the guidewire is passed in the retroperitoneal space through a 5-mm port. The stent is passed down the ureter into the bladder first and then up the ureter to the renal pelvis. The portion of the guidewire lying outside the stent is positioned at the site of ureterolithotomy. The guidewire is pulled out with a grasper after insertion of stent, and the two ends of the stent coil up in the bladder and the kidney. This innovative technique with the modified stent and guidewire has made stenting during laparoscopic ureterolithotomy easy and saves precious time. abstract_id: PUBMED:29634145 Results of laparoscopic ureterolithotomy depending on the type of laparoscopic approach and location of the stone. Aim: To investigate the results of laparoscopic management of patients with ureterolithiasis, depending on the type of laparoscopic approach and location of the stone. Materials And Methods: This study is a retrospective analysis of 30 ureterolithiasis patients who underwent laparoscopic ureterolithotomy from 2010 to 2015. Patients were divided into four subgroups depending on the type of laparoscopic approach - transperitoneal (n=17) and retroperitoneal (n=13) and on location of stones - upper ureteral stone (n=20) and mid ureteral stone (n=10). The patients comprised 23 (76.7%) men and 7 (23.3%) women aged from 18 to 68 years (mean age 46.2+/-2.3 years). 15 patients had stones in the right and 15 in left ureters. The groups were comparable by sex, age, the severity of hydronephrosis, body mass index, duration of urolithiasis, comorbidities and previous surgeries (p>0.05). Results: There were no deaths, conversion to open surgery or intraoperative complications. The only statistically significant difference between transperitoneal and retroperitoneal approaches was the absence of postoperative complications (p<0.05). The duration of drainage and hospital stay was significantly shorter in patients with mid ureteral stone than in patients with upper ureteral stone (p<0.05). Conclusion: Surgical management of patients with ureterolithiasis of various locations using laparoscopic ureterolithotomy by different approaches showed positive results in all cases, thus indicating high clinical effectiveness of this method. abstract_id: PUBMED:27019576 Laparoscopic Ureterolithotomy. Background And Objective: The present study retrospectively analyzed the data of 213 patients who underwent laparoscopic ureterolithotomy.
Methods: We retrospectively analyzed the data of 213 patients, in whom we performed conventional laparoscopic ureterolithotomy between April 2006 and January 2015 based on the diagnosis of an upper or middle ureteral stone. Patients with large ureteral stones (>15 mm) or a history of failed shock-wave lithotripsy or ureteroscopy were included in the study. Although the retroperitoneal approach was preferred for 170 patients, the transperitoneal approach was used in the remaining 43 patients. Results: The mean patient age was 39.3 ± 12.0 years (range, 18-73). The study population was composed of 78 (36.6%) female and 135 (63.3%) male patients. The mean stone size was 19.7 ± 2.5 mm. The mean operative time was 80.9 ± 10.9 minutes, and the mean blood loss was 63.3 ± 12.7 mL. Intraoperative insertion of a double-J catheter was performed in 76 patients. The overall stone-free rate was 99%. No major complication was observed in any patient. However, conversion to open surgery was necessary in 1 patient. Conclusion: With high success and low complication rates, laparoscopic ureterolithotomy is an effective and reliable method that ensures quick recovery and may be the first treatment option for patients with large, impacted ureteral stones, as well as for those with a history of failed primary treatment. abstract_id: PUBMED:19811058 Experience and learning curve of retroperitoneal laparoscopic ureterolithotomy for upper ureteral calculi. Purpose: To summarize our experience and evaluate the learning curve of retroperitoneal laparoscopic ureterolithotomy of the upper ureter. Patients And Methods: Between May 2004 and May 2007, 40 patients underwent retroperitoneal laparoscopic ureterolithotomy of the upper ureter. We divided the first and last 20 patients into group I and group II. There was no statistical difference in stone size between groups. Operative time and complications were measured as a basis for the assessment of the learning curve. Results: In group I, the complication rate was 15% (3/20), including two patients whose procedure was converted to open surgery because of intraoperative bleeding, and one patient who experienced urine leakage because of a displaced Double-J ureteral stent. In group II, no postoperative complications occurred, while the mean operative time was significantly shorter compared with the earlier operations (65 vs 120 min). Conclusion: Retroperitoneal laparoscopic ureterolithotomy is safe and effective for large or impacted stones of the upper ureter. It is associated with a short learning curve in the setting of an active laparoscopic practice for selected patients. abstract_id: PUBMED:24571878 Retroperitoneal laparoendoscopic single-site ureterolithotomy versus conventional laparoscopic ureterolithotomy. Background: Laparoendoscopic single-site (LESS) surgery through the retroperitoneal approach has been seldom reported. We aimed to evaluate the outcomes of LESS and conventional laparoscopic surgery via the retroperitoneal approach in the management of large, impacted ureteral calculi. Methods: Between November 2011 and July 2013, retroperitoneal LESS ureterolithotomy was performed in 12 patients using a homemade single-port device comprising a surgical glove and several strips of tape. Another 16 patients underwent conventional retroperitoneal laparoscopic ureterolithotomy. We compared the operative time, complications, and surgical outcomes, retrospectively. Results: All procedures were completed without conversion to conventional laparoscopic or open surgery.
The operative time of the LESS group and of the conventional laparoscopic group were (125.3 ± 12.8) minutes and (116.9 ± 14.4) minutes, respectively (P = 0.119). The intraoperative blood loss was (42.9 ± 8.9) and (43.4 ± 14.7) ml, respectively (P = 0.914). Postoperative radiologic evaluation revealed that the stones had been removed completely. Cosmetic results were superior in the LESS group (P = 0.001). Conclusion: Retroperitoneal LESS ureterolithotomy using a homemade single-port device can be considered a feasible and safe alternative to conventional laparoscopic ureterolithotomy. Answer: The necessity of placing ureteral stents after retroperitoneal laparoscopic ureterolithotomy (RLU) is debated. In one study, a double J (D-J) stent was placed in the first 17 patients after RLU but not in the subsequent 24 patients. The results showed no significant difference in blood loss, drain removal day, or hospital stay between the two groups, and no significant complications occurred during or after the operation in any patients. The study concluded that D-J stent placement was not necessary after RLU, suggesting that stenting may be omitted in certain cases (PUBMED:25132944). Another study reported on an innovative technique for ureteral stenting during RLU, which involved a modified stent and guidewire that made stenting easier and saved time. This suggests that while stenting can be technically challenging, innovations can simplify the process (PUBMED:16253067). A retrospective analysis of 213 patients who underwent laparoscopic ureterolithotomy, with intraoperative insertion of a double-J catheter performed in 76 patients, reported a high success rate and low complication rate. This indicates that stenting can be part of a successful treatment protocol for some patients (PUBMED:27019576). In summary, while some studies suggest that stenting may not be necessary after RLU and can be safely omitted (PUBMED:25132944), others have utilized stenting as part of their operative procedure (PUBMED:16253067, PUBMED:27019576). The decision to place a ureteral stent should be based on individual patient factors, surgical outcomes, and surgeon preference. Further large-scale studies may be needed to determine the frequency of complications and the necessity of stenting according to different surgical techniques.
Instruction: Are women birthing in New South Wales hospitals satisfied with their care? Abstracts: abstract_id: PUBMED:25890303 Are women birthing in New South Wales hospitals satisfied with their care? Background: Surveys of satisfaction with maternity care have been conducted using overnight inpatient surveys and dedicated maternity surveys in a number of Australian settings, however none have been used to report on satisfaction with maternity care among women in New South Wales. The aims of this study were to investigate the association between: 1) parity (first and subsequent births) and patient experience of hospital care at birth, and 2) other patient, birth and hospital characteristics and experience of hospital care at birth. Methods: Data were from the New South Wales (NSW) Ministry of Health surveys of overnight hospital inpatients, including maternity patients, between 2007 and 2011. Questionnaires were mailed to a sample of patients three months after receiving inpatient services involving at least 1 night in a public hospital. Experience of care included 12 items grouped into: satisfaction with care, staff and information. Results were weighted to overall hospital facility populations and age-standardised. Frequencies and chi-square tests were used. Results: Analysis of responses from 5,367 obstetric patients revealed three quarters of women were satisfied with care provided in hospital. Compared with women who had previously given birth, first-time mothers were more likely to recommend their birth hospital to friends and family (60.5% versus 56.4%; P < 0.05), less likely to have experienced differing messages from staff (44.8% vs 59.4%; P < 0.001), and less likely to feel they had received sufficient information about feeding (58.8% vs 65.0%; P < 0.001) and caring for their babies (52.4% vs 65.2%; P < 0.001). While metropolitan women were more likely to rate their birth hospital positively (76.0% vs. 71.3%; P < 0.05) than their rural counterparts, rural women tended to rate the care they received (68.1% vs. 63.4%; P < 0.05), and doctors (70.7% vs 61.1%; P < 0.05) and nurses (73.5% vs. 66.9%; P < 0.001) more highly than metropolitan women. Conclusions: The overall picture of maternity care satisfaction in New South Wales is a positive one, with three quarters of women satisfied with care. Further resources could be dedicated to ensuring consistency and amount of information provided, particularly to first-time mothers. abstract_id: PUBMED:27381020 Local birthing services for rural women: Adaptation of a rural New South Wales maternity service. Objective: To describe the outcomes of a public hospital maternity unit in rural New South Wales (NSW) following the adaptation of the service from an obstetrician and general practitioner-obstetrician (GPO)-led birthing service to a low-risk midwifery group practice (MGP) model of care with a planned caesarean section service (PCS). Design: A retrospective descriptive study using quantitative methodology. Setting: Maternity unit in a small public hospital in rural New South Wales, Australia. Participants: Data were extracted from the ward-based birth register for 1172 births at the service between July 2007 and June 2012. Main Outcome Measures: Birth numbers, maternal characteristics, labour, birthing and neonatal outcomes. Results: There were 750 births over 29 months in GPO and 277 and 145 births over 31 months in MGP and PCS, respectively, totalling 422 births following the change in model of care.
The GPO had 553 (73.7%) vaginal births and 197 (26.3%) caesarean section (CS) births (139 planned and 58 unplanned). Normal vaginal births were almost universal in MGP (>99%, or 276 births). For normal vaginal births, more women in MGP had no analgesia (45.3% versus 25.1%) or non-invasive analgesia (47.9% versus 38.6%) and episiotomy was less common in MGP than GPO (1.9% versus 3.4%). Neonatal outcomes were similar for both groups with no difference between Apgar scores at 5 min, neonatal resuscitations or transfer to high-level special care nurseries. Conclusion: This study demonstrates how a rural maternity service maintained quality care outcomes for low-risk women following the adaptation from a GPO to an MGP service. abstract_id: PUBMED:32458415 Patterns of surgical care for women with ovarian cancer in New South Wales. Background: Little is known about the delivery of surgical services and outcomes for women with ovarian cancer across New South Wales (NSW). Aim: The study objective was to provide a descriptive analysis of the proportion of women who had surgery for ovarian cancer in NSW in specialist gynaecological oncology hospitals and compare outcomes for women attending specialist and non-specialist services in NSW. Materials And Methods: This study is a retrospective analysis of women with primary ovarian, fallopian tube or peritoneal cancer from 2009 to 2012. Data were analysed from the NSW Cancer Registry, NSW Admitted Patient Data Collection and Register of Births Deaths and Marriages. Treating hospitals were characterised as public specialist, public non-specialist and private. Morbidity and mortality outcomes are reported. Results: The study included 1106 women. Fifty-seven hospitals performed surgery: seven public specialist, 27 private and 23 public non-specialist hospitals. The highest proportion of surgery was performed in public specialist hospitals (61%). There was considerable variation in the utilisation of public specialist hospitals between local health districts. There was no significant difference in outcomes related to the type of hospital where surgery was performed. Conclusions: Although the majority of women are having surgery in a specialist gynaecological oncology public hospital across NSW, many are not. Women living in regional and remote NSW were less likely to have their surgery in a specialist hospital. This is the first step in understanding where women in NSW are currently receiving their surgical care, as well as the outcomes related to this. abstract_id: PUBMED:32795365 Hospital characteristics associated with low-value care in public hospitals in New South Wales, Australia. Background: Rates of low-value care vary between hospitals in New South Wales, Australia. Understanding factors associated with this variation will help in understanding the drivers of low-value care and in planning initiatives to reduce low-value care. Methods: For eight low-value procedures, we used Poisson regression of the number of low-value episodes at each hospital to assess the association between low-value care and hospital characteristics. We also used hierarchical clustering on the low-value procedures used and their rates at each hospital to try to identify groups of hospitals with higher or lower rates of low-value care across multiple procedures.
Results: Some hospital characteristics, such as hospital peer group and proportion of total episodes that involve the specific procedure, showed associations for some procedures, but none were consistent across all eight procedures. We clustered hospitals into five groups, but low-value care rates did not differ much between these groups. Conclusion: Available hospital variables show little association with rates of low-value care and no patterns across different low-value procedures. We need to investigate factors within hospitals, such as clinician knowledge and beliefs about low-value care. abstract_id: PUBMED:24521258 Hospital Dementia Services Project: aged care and dementia services in New South Wales hospitals. Aims: To describe the availability of aged care and dementia services in public hospitals in New South Wales (NSW), Australia in 2006/2007. Method: Hospitals were surveyed about issues relevant to dementia in acute care including the types of aged care wards and staff, policies, practices, clinics, community services and resources for dementia care. Results: Responses were obtained from 163 hospitals (82%); responding hospitals represented 93.4% of NSW hospital beds, 96.7% of multiday episodes of care. Hospitals that had a Specialist Mental Health Service for Older People (SMHSOP) and an Aged Care Service (ACS) had the highest levels of dementia-related services and policies. Few hospitals without ACS or SMHSOP had clinics for dementia assessment, secure beds for disturbed behaviour, or services to manage patients with dementia and medical or behavioural comorbidity in the community. Conclusion: Dementia services in NSW hospitals are closely aligned with ACS and SMHSOP, with limited dementia services in hospitals without ACS or SMHSOP. abstract_id: PUBMED:29165860 Rehabilitation and convalescent hospital stay in New South Wales: an analysis of 3,979 women aged 75. Objectives: With a rapidly ageing population, it is imperative to examine health service costs and plan appropriately for the future. This paper determines the factors related to extended hospital stay for 'Rehabilitation' or 'Convalescence', as defined by ICD-10 coding, in acute hospital settings for older women in New South Wales, Australia. Methods: Participants were from the 1921-26 cohort of the Australian Longitudinal Study on Women's Health. For this analysis, self-reported survey data were linked to the NSW Admitted Patient Data Collection and the National Death Index. Results: Of the 3,979 participants, 88% had a hospitalisation in the 13-year observation period, and 37% had either a rehabilitation or convalescence admission in an acute hospital setting. In the multivariate model, living in a regional or remote area was the only variable positively associated with having a rehabilitation or convalescence hospitalisation (AOR=1.58 [1.33, 1.87]). Conclusions: Area of residence is the determining factor for rehabilitation or convalescence hospital admissions. These long stay admissions are not necessarily inappropriate, but due to a lack of other non-acute care options. Implications for public health: Increased availability of rehabilitation and respite care in non-acute settings will not only improve older patient care, but will also reduce the burden on acute hospitals. abstract_id: PUBMED:19236293 Reforming New South Wales public hospitals: an assessment of the Garling inquiry. 
The final report of the Special Commission of Inquiry into Acute Care Services in NSW Public Hospitals was published on 27 November 2008. The report acknowledges the challenges facing the New South Wales health system, including increasing numbers of patients, rising treatment costs, workforce pressures, and the breakdown of working relations between clinicians and management. Many of Commissioner Garling's 139 recommendations formalise aspects of clinical care that should and could be happening now if the system were better managed, including better supervision and training of junior staff. Commissioner Garling recommends that change should be driven by clinicians "from the bottom up", but does not adequately describe how this should happen. Implementation of the report's recommendations will require strong leadership and continuing consultation with clinicians and the community. abstract_id: PUBMED:31227444 The cost of vaginal birth at home, in a birth centre or in a hospital setting in New South Wales: A micro-costing study. Background: Women want greater choice of place of birth in New South Wales, Australia. It is perceived to be more costly to health services for women with a healthy pregnancy to give birth at home or in a birth centre. It is not known how much it costs the health service to provide care for women planning to give birth in these settings. Aim: The aim of this study was to determine the direct cost of giving birth vaginally at home, in a birth centre or in a hospital for women at low risk of complications, in New South Wales. Methods: A micro-costing design was used. Observational (time and motion) and resource use data collection was undertaken to identify the staff time and resources required to provide care in a public hospital, birth centre or at home for women with a healthy pregnancy. Findings: The median costs of providing care for women who plan to give birth at home, in a birth centre and in a hospital were similar (AUD $2150.07, $2100.59 and $2097.30 respectively). Midwifery time was the largest contributor to the cost of birth at home, and overhead costs accounted for over half of the total cost of BC and hospital birth. The cost of consumables was low in all three settings. Conclusion: In this study, we have found there is little difference in the cost to the health service when a woman has an uncomplicated vaginal birth at home, in a birth centre or in a hospital setting. abstract_id: PUBMED:10178130 Total quality management in accredited New South Wales hospitals: a public/private comparison. Analysis of data collected in a 1994-95 survey of accredited New South Wales hospitals examined the adoption of key elements of total quality management practice in the public and private sectors. In a number of areas of practice widely considered to be central to a hospital's total quality management efforts, there was no statistically significant difference between the two sectors. Where differences existed, total quality management practices more likely to be adopted by public hospitals were limited in their scope and likely to be explained by structural peculiarities. In contrast, private hospitals were more likely to adopt practices more critical to the successful implementation of total quality management. abstract_id: PUBMED:34733582 Assessing Barriers to Implement Birth Kangaroo Care in Kentucky Birthing Hospitals. Introduction: Kentucky continues to have one of the lowest state breastfeeding rates in the country.
In 2014, the majority of the birthing hospitals in Kentucky implemented a practice change to the healthcare model known as Birth Kangaroo Care (BKC) as an effort to increase breastfeeding initiation. The goal of this study was to identify current practices and barriers to implementing BKC. Methods: An evaluation/surveillance study that incorporated an Internet survey to collect information about the practices and policies of BKC in birthing hospitals in Kentucky was completed. Findings: The response rate was 54% (n = 25). The birthing hospitals responders to the survey (84%) reported that a BKC policy was established after the educational intervention. Data identified two perceived barriers regarding uninterrupted BKC. One barrier was the interruption by family members to hold the newborn, and the second was a delay in BKC for medical evaluations of the baby by staff members. Conclusion: Breastfeeding rates after implementation of the BKC policy in Kentucky birthing hospitals showed a statistically significant (p = .02) improvement of "ever breastfed" infants. Answer: Yes, women birthing in New South Wales (NSW) hospitals are generally satisfied with their care. A study investigating patient experience of hospital care at birth in NSW revealed that three quarters of women were satisfied with the care provided in hospital. First-time mothers were more likely to recommend their birth hospital to friends and family compared to women who had previously given birth. However, first-time mothers were less likely to feel they had received sufficient information about feeding and caring for their babies. Satisfaction rates were higher among metropolitan women for their birth hospital, but rural women rated the care they received, and the doctors and nurses more highly than metropolitan women (PUBMED:25890303).
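The satisfaction comparisons cited in the answer above come from simple frequencies and chi-square tests, as described in the NSW survey's methods. The sketch below shows how one such comparison of proportions could be reproduced in Python; the group sizes and counts are hypothetical placeholders chosen only to match the reported percentages, not data from the study.

```python
# Illustrative chi-square test of two proportions (hypothetical counts,
# not the actual NSW maternity survey data).
from scipy.stats import chi2_contingency

# Rows: first-time mothers, mothers with previous births
# Columns: would recommend their birth hospital (yes / no)
table = [
    [605, 395],   # hypothetical: ~60.5% of 1,000 first-time mothers
    [564, 436],   # hypothetical: ~56.4% of 1,000 multiparous mothers
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
```

With real survey weights and age-standardisation the analysis would be more involved, but the underlying test of association is of this form.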
Instruction: Does admission during morning rounds increase the mortality of patients in the medical ICU? Abstracts: abstract_id: PUBMED:22556321 Does admission during morning rounds increase the mortality of patients in the medical ICU? Background: Early optimization of treatment is crucial when admitting patients to the ICU and could depend on the organization of the medical team. The aim of this retrospective observational study was to determine whether admissions during morning rounds are independently associated with hospital mortality in a medical ICU. Methods: The 3,540 patients admitted from May 2000 to April 2010 to the medical ICU of Sainte Marguerite Hospital in Marseille, France, were divided into two groups based on the time of admission. The non-morning rounds group was admitted between 1:00 PM and 7:59 AM, and the morning rounds group was admitted between 8:00 AM and 12:59 PM. Hospital mortality (crude and adjusted) was compared between the two groups. Results: The 583 patients (16.5%) admitted during morning rounds were older and sicker upon admission compared with those patients admitted during non-morning rounds. The crude hospital mortality was 35.2% (95% CI, 31.4-39.1) in the group of patients admitted during morning rounds and 28.0% (95% CI, 26.4-29.7) in the other group (P < .001). An admission during morning rounds was not independently associated with hospital death (adjusted hazard ratio, 1.10; 95% CI, 0.94-1.28; P = .24). Conclusions: Being admitted to the medical ICU during morning rounds is not associated with a poorer outcome than afternoon and night admissions. The conditions of the patients admitted during morning rounds were more severe, which underlines the importance of the ICU team’s availability during this time. Further studies are needed to evaluate if the presence of a specific medical team overnight in the wards would be able to improve patients’ outcome by preventing delayed ICU admission. abstract_id: PUBMED:19505985 Association between ICU admission during morning rounds and mortality. Background: No previous study has evaluated the association between admission to ICUs during round time and patient outcome. The objective of this study was to determine the association between round-time ICU admission and patient outcome. Methods: This retrospective study included 49,844 patients admitted from October 1994 to December 2007 to four ICUs (two surgical, one medical, and one multispecialty) of an academic medical center. Of these patients, 3,580 were admitted to the ICU during round time (8:00 am to 10:59 am) and 46,264 were admitted during nonround time (from 1:00 pm to 6:00 am). The medical ICU had 24-h/7-day per week intensivist coverage during the last 2 years of the study. We compared the baseline characteristics and outcome of patients admitted to the ICU between the two groups. Data were abstracted from the acute physiology and chronic health evaluation (APACHE) III database. Results: The round-time and non-round-time groups were similar in gender, ethnicity, and age. The predicted hospital mortality rate of the round time group was higher (17.4% vs 12.3% predicted, respectively; p < 0.001). The hospital length of stay was similar between the two groups. The round-time group had a higher hospital mortality rate (16.2% vs 8.8%, respectively; p < 0.001). Most of the round-time ICU admissions and deaths occurred in the medical ICU. Round-time admission was an independent risk factor for hospital death (odds ratio, 1.321; 95% CI, 1.178 to 1.481).
This independent association was present for the whole study period except for the last 2 years. Conclusions: Patients admitted to the ICU during morning rounds have higher severity of illness and mortality rates. abstract_id: PUBMED:24018942 Medical ICU admissions during weekday rounds are not associated with mortality: a single-center analysis. This study investigated whether intensive care unit (ICU) admissions to the research team's tertiary care academic hospital during morning rounds were associated with increased mortality. Discharge data were analyzed on 1912 patients admitted to the ICUs between July 2007 and June 2011. Measures included discharge disposition, time of admission to the ICU, source of admission, and expected mortality score. Descriptive statistics were generated to examine the proportion of subjects who died based on admission time to the ICU, and Pearson's χ² test was used to test the null hypothesis that mortality rates for admissions during rounds and those at other times of the day would be similar. No difference in mortality was detected between admissions during rounds and all other times, whether analyzed using a bivariate (P = .55) or multivariable (P = .78) analysis. In this study, mortality was associated with severity of illness and not associated with admission during morning rounds. abstract_id: PUBMED:22554100 Impact of intensive care unit admission during morning bedside rounds and mortality: a multi-center retrospective cohort study. Introduction: Recent data have suggested that patient admission during intensive care unit (ICU) morning bedside rounds is associated with less favorable outcome. We undertook the present study to explore the association between morning round-time ICU admissions and hospital mortality in a large Canadian health region. Methods: A multi-center retrospective cohort study was performed at five hospitals in Edmonton, Canada, between July 2002 and December 2009. Round-time ICU admission was defined as occurring between 8 and 11:59 a.m. Multivariable logistic regression analysis was used to explore the association between round-time admission and outcome. Results: Of 18,857 unique ICU admissions, 2,055 (10.9%) occurred during round time. Round-time admissions were more frequent in community hospitals compared with tertiary hospitals (12.0% vs. 10.5%; odds ratio [OR] 1.16; 95% CI, 1.05-1.29, P < 0.004) and from the ward compared with the emergency department (ED) or operating theater (17.5% vs. 9.2%; OR 2.1; 95% CI, 1.9-2.3, P < 0.0001). Round-time admissions were more often medical than surgical (12.6% vs. 6.6%; OR 2.06; 95% CI, 1.83-2.31, P < 0.0001), had more comorbid illness (11.9% vs. 10.5%; OR 1.15; 95% CI, 1.04-1.27, P < 0.008) and higher APACHE II score (22.2 vs. 21.3, P < 0.001), and were more likely to have a primary diagnosis of respiratory failure (37.0% vs. 31.3%, P < 0.001) or sepsis (11.1% vs. 9.0%, P = 0.002). Crude ICU mortality (15.3% vs. 11.6%; OR 1.38; 95% CI, 1.21-1.57, P < 0.0001) and hospital mortality (23.9% vs. 20.6%; OR 1.21; 95% CI, 1.09-1.35, P < 0.001) were higher for round-time compared with non-round-time admissions. In multi-variable analysis, round-time admission was associated with increased ICU mortality (OR 1.19, 95% CI, 1.03-1.38, P = 0.017) but was not significantly associated with hospital mortality (OR 1.02; 95% CI, 0.90-1.16, P = 0.700).
In the subgroup admitted from the ED, round-time admission showed significantly higher ICU mortality (OR 1.54; 95% CI, 1.21-1.95; P < 0.001) and a trend for higher hospital mortality (OR 1.22; 95% CI, 0.99-1.51, P = 0.057). Conclusions: Approximately 1 in 10 patients is admitted during morning rounds. These patients are more commonly admitted from the ward and are burdened by comorbidities, are non-operative, and have higher illness severity. These patients admitted during morning rounds have higher observed ICU mortality but no difference in hospital mortality. abstract_id: PUBMED:28578599 Influence of Chemotherapy Within 30 Days Before ICU Admission on Mortality in Critically Ill Medical Patients With Cancer. Background: The main objective was to determine whether the administration of chemotherapy (CT) during the month before intensive care unit (ICU) admission of medical patients with cancer influences the survival rate. The design was a single-institution observational cohort study in an ICU of a tertiary university hospital. Methods: Our cohort included 248 oncology patients admitted to the ICU from 2005 to 2014 due to nonsurgical problems. Seventy-six (30.6%) patients had received CT in the month before admission (CT group) and 172 did not receive CT (control group). The main outcome measures were ICU, hospital, 30-day, 90-day, and 1-year mortalities. We performed survival analysis using the Kaplan-Meier estimator, comparing both groups using the log-rank test, and multivariate analysis using Cox regression adjusted for gender, age, maximum Sequential Organ Failure Assessment (SOFA), and delta maximum SOFA to calculate the hazard ratios (HRs) and their respective 95% confidence intervals. This association was also evaluated by a graphic representation of survival. Results: The CT group presented an ICU mortality rate of 27.6% versus 25.5% in the control group. The multivariate analysis adjusted for age, sex, and delta maximum SOFA showed significant differences between the groups (HR: 2.12; P = .009). The hospital mortality rate was 55.3% in the CT group compared to 45.4% in the control group (adjusted HR: 1.81; P = .003). At 30 days, the mortality rate was 56.6% in the CT group compared to 46.5% in the control group (adjusted HR: 1.69; P = .008). Mortality at 90 days was 65.8% in the CT group versus 59.9% in the control group (adjusted HR: 1.47; P = .03). One-year mortality was also higher in the CT group (79% vs 72.7%, adjusted HR: 1.44; P = .02). Conclusion: The administration of CT in the month before ICU admission in patients with cancer was associated with higher mortality in the ICU, in the hospital, and 30 and 90 days after admission when adjusted for the increase in organ failure measured by delta maximum SOFA. We provide useful new information for decision-making about ICU management of patients with cancer. abstract_id: PUBMED:37900381 Cardiac Parameters Better Predict ICU Admission and Short-Term Mortality in Hospitalized Patients With COVID-19. Background COVID-19 is a multisystemic disease that affects many organs, and the use of some parameters is recommended both during hospitalization and follow-up. In this study, we investigated the relationship between blood (liver and kidney function tests, lactate, and D-dimer), infection (C-reactive protein (CRP), lymphocyte count, ferritin, and albumin), and cardiac (creatine kinase-myocardial band (CK-MB), troponin, and brain natriuretic peptide (BNP)) parameters with intensive care unit (ICU) admission and mortality.
Materials and methods Patients hospitalized in Erzurum City Hospital with the diagnosis of COVID-19 between April 2020 and November 2022 were included in this retrospective study. The patients' files and electronic media records were retrospectively reviewed, and the patients' anamnesis, physical examination, clinical findings, biochemical parameters, and treatment methods were recorded. The ICU needs of the patients and the treatment processes in intensive care were found in the in-hospital records. The hospital records and six-month mortality data were obtained retrospectively with the necessary permissions. Thus, blood parameters and their relation to each other in terms of prognosis were evaluated in determining the six-month mortality rates of the patients and estimating the need for ICU. Results A total of 5100 patients were included in the study. The mean age of patients with mortality was 74.2 ± 11.2 and that without mortality was 59.9 ± 15.7 (p < 0.001). In the mortality (+) group, 61.5% of patients were male, and in the mortality (-) group, 47.4% of the patients were male (p < 0.001). The mean age of patients with ICU admission was 69.6 ± 13.6 and without ICU admission was 60.3 ± 15.9 years (p < 0.001). In the ICU admission (+) group, 60.5% of patients were male; and in the ICU admission (-) group, 47.2% of patients were male (p < 0.001). Death and ICU admission were observed more frequently in elderly and male patients (p < 0.001 for both mortality and ICU admission). Blood parameters were evaluated both in the mortality and ICU groups, and organ function tests, blood count parameters, inflammatory markers, and cardiac parameters were significantly associated with poor outcomes. Cox regression analysis showed that lactate, albumin, Ln(troponin), and Ln(BNP) were independent predictors of mortality and ICU admission. Receiver operating characteristics (ROC) curve analysis showed that Ln(troponin) and Ln(BNP) levels predicted the development of mortality and ICU admission better than other parameters. Discussion COVID-19 can cause problems in different systems as a result of an inflammatory response, secreted cytokines, hypercoagulability, and direct tissue damage. When treating patients, a more appropriate approach is to evaluate different parameters together rather than focusing on a single parameter and deciding accordingly. However, evaluating alterations in many parameters in a disease that affects many systems is difficult and increases the risk of mistakes. Although each blood parameter separately is important, it was observed that the cardiac parameters troponin I and BNP have better predictive values than others in predicting the course and prognosis of COVID-19. Conclusion Blood parameters are used in COVID-19 diagnosis, treatment, and follow-up. Although it is not primarily a cardiac disease, cardiac markers can provide better results in showing the course and prognosis of COVID-19. abstract_id: PUBMED:35494832 Interpretable deep learning for the prediction of ICU admission likelihood and mortality of COVID-19 patients. The global healthcare system is being overburdened by an increasing number of COVID-19 patients. Physicians are having difficulty allocating resources and focusing their attention on high-risk patients, partly due to the difficulty in identifying high-risk patients early. COVID-19 hospitalizations require specialized treatment capabilities and can cause a burden on healthcare resources.
Estimating future hospitalization of COVID-19 patients is, therefore, crucial to saving lives. In this paper, an interpretable deep learning model is developed to predict intensive care unit (ICU) admission and mortality of COVID-19 patients. The study comprised patients from the Stony Brook University Hospital, with patient information such as demographics, comorbidities, symptoms, vital signs, and laboratory tests recorded. The top three predictors of ICU admission were ferritin, diarrhoea, and alanine aminotransferase, and the top predictors for mortality were COPD, ferritin, and myalgia. The proposed model predicted ICU admission with an AUC score of 88.3% and predicted mortality with an AUC score of 96.3%. The proposed model was evaluated against an existing model in the literature, which achieved an AUC of 72.8% in predicting ICU admission and an AUC of 84.4% in predicting mortality. It can clearly be seen that the model proposed in this paper shows superiority over existing models. The proposed model has the potential to provide tools to frontline doctors to help classify patients in time-bound and resource-limited scenarios. abstract_id: PUBMED:35743615 Usefulness of Hospital Admission Chest X-ray Score for Predicting Mortality and ICU Admission in COVID-19 Patients. We aimed to investigate the performance of a chest X-ray (CXR) scoring scale of lung injury in prediction of death and ICU admission among patients with COVID-19 during the 2021 peak pandemic in HCM City, Vietnam. CXR and clinical data were collected from Vinmec Central Park-hospitalized patients from July to September 2021. Three radiologists independently assessed the day-one CXR score consisting of both severity and extent of lung lesions (maximum score = 24). Among 219 included patients, 28 died and 34 were admitted to the ICU. There was a high consensus for CXR scoring among radiologists (κ = 0.90; CI95%: 0.89-0.92). CXR score was the strongest predictor of mortality (tdAUC 0.85 CI95% 0.69-1) within the first 3 weeks after admission. A multivariate model confirmed a significant effect of an increased CXR score on mortality risk (HR = 1.33, CI95%: 1.10 to 1.62). At a threshold of 16 points, the CXR score allowed for predicting in-hospital mortality and ICU admission with good sensitivity (0.82 (CI95%: 0.78 to 0.87) and 0.86 (CI95%: 0.81 to 0.90)) and specificity (0.89 (CI95%: 0.88 to 0.90) and 0.87 (CI95%: 0.86 to 0.89)), respectively, and can be used to identify high-risk patients in needy countries such as Vietnam. abstract_id: PUBMED:36704425 Scope and mortality of adult medical ICU patients in an Eastern Cape tertiary hospital. Background: The characteristics and mortality outcomes of patients admitted to South African intensive care units (ICUs) owing to medical conditions are unknown. Available literature is derived from studies based on data from high-income countries. Objectives: To determine ICU utilisation by medical patients and evaluate the scope of admissions and clinical associations with hospital mortality in ICU patients 12 years and older admitted to an Eastern Cape tertiary ICU, particularly in the subset with HIV disease. Methods: A retrospective descriptive one-year cohort study. Data were obtained from the LivAKI study database and demographic data, comorbidities, diagnosis, and mortality outcomes and associations were determined. Results: There were 261 (29.8%) medical ICU admissions. The mean age of the cohort was 40.2 years; 51.7% were female.
When compared with the surgical emergencies, the medical subgroup had higher sequential organ failure assessment (SOFA) scores (median score 5 v. 4, respectively) and simplified acute physiology score III (SAPS 3) scores (median 52.7 v. 48.5), a higher incidence of acute respiratory distress syndrome (ARDS) (7.7% v. 2.9%) and required more frequent dialysis (20.3% v. 5.5%). Of the medical admissions, sepsis accounted for 32.4% of admission diagnoses. The HIV seroprevalence rate was 34.0%; of these patients, 57.4% were on antiretroviral therapy. ICU and hospital mortality rates were 11.1% and 21.5% respectively, while only acute kidney injury (AKI) and sepsis were independently associated with mortality. The HIV-positive subgroup had a higher burden of tuberculosis (TB), higher admission SOFA and SAPS 3 scores and required more organ support. Conclusion: Among medical patients admitted to ICU, there was a high HIV seroprevalence with low uptake of antiretroviral therapy. Sepsis was the most frequently identified ICU admission diagnosis. Sepsis and AKI (not HIV) were independent predictors of mortality. Co-infection with HIV and TB was associated with increased mortality. Contributions Of The Study: The epidemiology and outcomes of adults who are critically ill from medical conditions in South African intensive care units were previously unknown but have been described in this study. The association of sepsis, TB, HIV and acute kidney injury with mortality is discussed. abstract_id: PUBMED:34290804 Prediction of ICU admission after orthopedic surgery in elderly patients. Objectives: Prediction of ICU admission after surgery is important for rational decision-making for different patients in clinical practice. Little information is available about the risk factors of postoperative ICU admission in elderly patients undergoing orthopedic surgery. This study aimed to identify risk factors and develop a predictive model for postoperative ICU admission in elderly patients undergoing orthopedic surgery. Methods: A total of 2826 cases of elderly patients receiving orthopedic surgery from October 2010 to September 2016 were retrospectively collected and analyzed. Logistic regression was used to evaluate the impacts of covariates. Support vector machine (SVM) was employed to develop a predictive model based on all pre-operative covariates and the demographic information. Results: There were 256 patients transferred to ICU after surgery. ASA III or IV and emergency surgery were found to be independent risk factors while neuraxial anesthesia and joint surgery were protective factors. In addition, an SVM-based predictive model was developed, which had a sensitivity of 90.99%, a specificity of 99.10% and an area under ROC curve of 0.9678. Conclusions: Our study revealed that emergency surgery, anesthesia method, surgery type and ASA grade were risk factors to predict postoperative ICU admission in elderly orthopedic patients. Answer: The question of whether admission during morning rounds increases the mortality of patients in the medical ICU has been explored in several studies, with varying results. One study conducted at Sainte Marguerite Hospital in Marseille, France, found that patients admitted during morning rounds were older and sicker upon admission compared to those admitted during non-morning rounds. However, after adjusting for confounding factors, admission during morning rounds was not independently associated with hospital death (PUBMED:22556321).
In contrast, another study that included patients from four ICUs of an academic medical center found that round-time admission was an independent risk factor for hospital death, with patients admitted during morning rounds having higher severity of illness and mortality rates (PUBMED:19505985). A single-center analysis at a tertiary care academic hospital showed no difference in mortality between admissions during rounds and other times of the day, suggesting that mortality was associated with severity of illness and not with admission during morning rounds (PUBMED:24018942). A multi-center retrospective cohort study in Edmonton, Canada, found that patients admitted during morning rounds had higher observed ICU mortality but no difference in hospital mortality when compared to non-round-time admissions (PUBMED:22554100). In summary, the evidence is mixed regarding the impact of morning round admissions on mortality in the medical ICU. Some studies suggest an association with higher mortality, while others do not find a significant difference after adjusting for confounding factors. It appears that the severity of illness at admission is a critical factor influencing mortality outcomes, rather than the timing of admission itself.
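Several of the studies summarized above report adjusted hazard ratios from Cox proportional hazards models, which is how an apparent round-time effect can shrink or disappear once severity of illness is taken into account. The sketch below shows one way such an adjusted estimate could be computed with the Python lifelines package; the data frame, column names, and covariates are hypothetical stand-ins, not the variables used in the cited studies.

```python
# Illustrative Cox model adjusting a round-time admission indicator for
# severity of illness (hypothetical data and column names).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "time_to_event": rng.exponential(30, n),        # days of follow-up
    "died": rng.integers(0, 2, n),                  # 1 = in-hospital death
    "round_time_admission": rng.integers(0, 2, n),  # 1 = admitted 8:00-11:59 am
    "severity_score": rng.normal(50, 15, n),        # e.g., an APACHE/SAPS-like score
    "age": rng.normal(65, 12, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_to_event", event_col="died")

# Adjusted hazard ratios: exp(coefficient) for each covariate.
print(np.exp(cph.params_))
```

Comparing the crude mortality difference with the hazard ratio for round_time_admission after severity_score and age enter the model illustrates the kind of adjustment the cited analyses performed.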
Instruction: Is "isolated home" hypertension as opposed to "isolated office" hypertension a sign of greater cardiovascular risk? Abstracts: abstract_id: PUBMED:11575977 Is "isolated home" hypertension as opposed to "isolated office" hypertension a sign of greater cardiovascular risk? Background: The SHEAF (Self-Measurement of Blood Pressure at Home in the Elderly: Assessment and Follow-up) study is an observational study (from February 1998 to early 2002) designed to determine whether home blood pressure (BP) measurement has a greater cardiovascular prognostic value than office BP measurement among elderly (&gt; or =60 years) French patients with hypertension. The objective of this present work is to describe the baseline characteristics of the treated patients in the SHEAF study from February 1998 to March 1999, placing special emphasis on "isolated office" and "isolated home" hypertension. Methods: Baseline office BP measurement was assessed using a mercury sphygmomanometer. Home BP measurement was performed over a 4-day period. A 140/90-mm Hg threshold was chosen to define office hypertension, and a 135/85-mm Hg threshold to define home hypertension. Results: Of the 5211 hypertensive patients in the SHEAF study with a valid home BP measurement, 4939 received treatment with at least 1 antihypertensive drug. Patients with isolated office hypertension represented 12.5% of this population, while patients with isolated home hypertension represented 10.8%. The characteristics of the patients with isolated office hypertension were similar to those of patients with controlled hypertension. However, patients with isolated office hypertension had fewer previous cardiovascular complications. In contrast, rates of cardiovascular risk factors and history of cardiovascular disease in patients with isolated home hypertension resembled those in patients with uncontrolled hypertension. Conclusions: This retrospective analysis suggests that patients with isolated home hypertension belong to a high-risk subgroup. The 3-year follow-up of these patients will provide prospective data about the cardiovascular prognosis of these subgroups. abstract_id: PUBMED:16623073 Isolated office hypertension: association with target organ damage and cardiovascular risk indices. Background: Isolated office hypertension (IOH) has been accepted as a benign condition by some researchers, whereas others believe that it is associated with cardiovascular abnormalities and increased cardiovascular risk. The aim of this present study was to evaluate the effects of IOH on target organ damage and cardiovascular risk indices. Methods: Arterial blood pressure (BP) measured in the office and by 24-hour ambulatory blood pressure measurement (ABMP), carotid intima-media thickness (CIMT), left ventricular mass index (LVMI), cardiothoracic index (CTI), duration of QTc, 24-hour microalbuminuria, fibrinogen, C-reactive protein (CRP), total cholesterol, low-density-lipoprotein (LDL) cholesterol, high-density-lipoprotein (HDL) cholesterol and triglyceride levels were evaluated. Thirty-three subjects with IOH (office BP &gt; or = 140/90 mmHg and daytime ambulatory BP &lt;135/85 mmHg), 17 patients with sustained hypertension (office BP &gt; or = 140/90 mmHg and daytime ambulatory BP &gt; or = 135/85 mmHg), and 17 normotensive control subjects were recruited in the study. The three groups were matched for age, sex and body mass index. 
Results: CIMT was greater in patients with IOH than in normotensive subjects, and it was significantly lower than that of sustained hypertension patients. Significantly higher LVMI was determined in subjects with IOH compared to normotensive subjects. CTI, QTc, microalbuminuria, fibrinogen, CRP, total cholesterol, LDL cholesterol, HDL cholesterol and triglyceride levels did not differ significantly among the three groups. Conclusions: IOH causes significant target organ damage and should not be regarded as a benign condition. abstract_id: PUBMED:28216288 Home, automated office, and conventional office blood pressure as predictors of cardiovascular risk. Automated office blood pressure (AOBP) has recently been shown to closely predict cardiovascular (CV) events in the elderly. Home blood pressure (HBP) has also been accepted as a valuable method in the prediction of CV disease. This study aimed to compare conventional office BP (OBP), HBP, and AOBP in order to evaluate their value in predicting CV events and deaths in hypertensives. We assessed 236 initially treatment naïve hypertensives, examined between 2009 and 2013. The end points were any CV and non-CV event including mortality, myocardial infarction, coronary heart disease, hospitalization for heart failure, severe arrhythmia, stroke, and intermittent claudication. We fitted proportional hazards models using the different modalities as predictors and evaluated their predictive performance using three metrics: time-dependent receiver operating characteristics curves, the Akaike's Information Criterion, and Harrell's C-index. After a mean follow-up of 7 years, 23 participants (39% women) had experienced ≥1 CV event. Conventional office systolic (hazard ratio [HR] per 1 mm Hg increase in BP, 1.028; 95% confidence interval [CI], 1.009-1.048), automated office systolic (HR per 1 mm Hg increase in BP, 1.031; 95% CI, 1.008-1.054), and home systolic (HR, 1.025; 95% CI, 1.003-1.047) were predictive of CV events. All systolic BP measurements were predictive after adjustment for other CV risk factors (P < .05). The predictive performance of the different modalities was similar. Conventional OBP was significantly higher than AOBP and average HBP. AOBP predicts CV events as well as OBP and HBP. It appears to be comparable to HBP in the assessment of CV risk, and therefore, its introduction into guidelines and clinical practice as the reference method for assessing BP in the office seems reasonable after verification of these findings by randomized trials. abstract_id: PUBMED:38063417 Predictive power of isolated high home systolic blood pressure for cardiovascular outcomes in individuals with type 2 diabetes mellitus: KAMOGAWA-HBP study. Aims/introduction: Isolated high home systolic blood pressure (IHHSBP) is a risk for cardiovascular disease (CVD). However, no study has shown an association between IHHSBP and CVD in diabetes. We examined the association between IHHSBP and CVD in type 2 diabetes. Materials And Methods: This retrospective cohort study included 1082 individuals with type 2 diabetes, aged 20 to 90 years, without a history of macrovascular complications. Home blood pressure (HBP) was measured three times every morning and evening for 14 days. Cox proportional hazards models were used to examine the relationship between IHHSBP and CVD incidence. Results: With the normal HBP group as the reference, the adjusted hazard ratio (HR) (95% confidence interval [CI]) for CVD was 1.58 (1.02-2.43) in the IHHSBP group.
Correcting for antihypertensive medication use did not change the HR. Based on sex, the adjusted HR (95% CI) for CVD was 1.25 (0.74-2.13) in males and 2.28 (1.01-5.15) in females. Conclusions: In individuals with type 2 diabetes, those with IHHSBP had a higher HR for cardiovascular disease than those with normal HBP. However, isolated high home diastolic blood pressure and high HBP were not. The association between IHHSBP and CVD was stronger in females than in males. abstract_id: PUBMED:24477096 Overall cardiovascular prognosis of isolated systolic hypertension, isolated diastolic hypertension and pulse pressure defined with home measurements: the Finn-home study. Objective: The overall cardiovascular prognosis of isolated systolic hypertension, isolated diastolic hypertension and pulse pressure defined with home blood pressure (BP) measurements remains unclear. Methods: A prospective nationwide study was initiated in 2000-2001 on 1924 randomly selected participants aged 44-74 years. We determined home and office BP at baseline and classified the individuals into four groups according to their home BP levels: normotension, isolated diastolic hypertension, isolated systolic hypertension and systolic-diastolic hypertension. The primary endpoint was incidence of a composite cardiovascular event. Results: After a median follow-up of 11.2 years, 236 individuals had suffered a cardiovascular event. In multivariable Cox proportional hazard models, the relative hazards and 95% confidence intervals (CIs) for cardiovascular events were significantly higher in participants with isolated diastolic hypertension (relative hazard 1.95; 95% CI, 1.06-3.57; P=0.03), isolated systolic hypertension (relative hazard 2.08; 95% CI, 1.42-3.05; P<0.001) and systolic-diastolic hypertension (relative hazard 2.79; 95% CI, 2.02-3.86; P<0.001) than in participants with normotension. Home (relative hazard 1.21; 95% CI, 1.05-1.40; P=0.009 per 10 mmHg increase), but not office (relative hazard 1.10; 95% CI, 1.00-1.21, P=0.06) pulse pressure, adjusted for mean arterial pressure, was an independent predictor of cardiovascular risk. Conclusion: Isolated diastolic and systolic hypertension defined with home measurements are associated with an increased cardiovascular risk. Close follow-up and possible treatment of these patients is therefore warranted. Home-measured pulse pressure is an independent predictor of cardiovascular events while office-measured pulse pressure is not, which fortifies the view that home BP provides more accurate risk prediction than office BP. abstract_id: PUBMED:16093909 Isolated uncontrolled hypertension at home and in the office among treated hypertensive patients from the J-HOME study. Objectives: To evaluate the current status of blood pressure (BP) control as measured at home and in the office, as well as to clarify and compare the prevalence and characteristics of isolated uncontrolled hypertension as measured at home (home hypertension) and in the office (office hypertension). Design: A cross-sectional study. Setting: Primary care offices in Japan. Participants: A sample of 3400 patients with essential hypertension (mean age, 66 years; males, 45%) receiving antihypertensive treatment. Results: Overall, the mean home systolic BP (SBP)/diastolic BP (DBP) was 140/82 mmHg, and the mean office SBP/DBP was 143/81 mmHg.
Of the 3400 subjects, 19% had controlled hypertension (home SBP/DBP < 135/85 mmHg and office SBP/DBP < 140/90 mmHg), 23% had isolated uncontrolled home hypertension (home SBP/DBP ≥ 135/85 mmHg and office SBP/DBP < 140/90 mmHg), 15% had isolated uncontrolled office hypertension (home SBP/DBP < 135/85 mmHg and office SBP/DBP ≥ 140/90 mmHg), and 43% had uncontrolled hypertension (home SBP/DBP ≥ 135/85 mmHg and office SBP/DBP ≥ 140/90 mmHg). Compared to controlled hypertension, factors associated with isolated uncontrolled home hypertension included obesity, relatively higher office SBP, habitual drinking, and the use of two or more prescribed antihypertensive drugs. Compared to uncontrolled hypertension, factors associated with isolated uncontrolled office hypertension included female gender, lower body mass index, and relatively lower office SBP. Conclusions: The use of all four, three of four, or all three predictive factors might be useful for the clinician to suspect isolated uncontrolled home or office hypertension. abstract_id: PUBMED:19924274 Is Peri-Operative Isolated Systolic Hypertension (ISH) a Cardiac Risk Factor? We are presenting a review of Isolated Systolic Hypertension (ISH) as a cardiovascular risk factor with emphasis on the perioperative period. Isolated systolic hypertension is associated with aging and is the most frequent subtype (65%) among patients with uncontrolled hypertension. ISH is strongly associated with increased risks of cardiac and cerebrovascular events exceeding those in comparably aged individuals with diastolic hypertension. Patients with ISH show an increase in left ventricular (LV) mass and an increase in the prevalence of left ventricular hypertrophy (LVH). These LV changes increase cardiovascular events and frequently lead to diastolic dysfunction (DD). Treatment to reduce elevated systolic blood pressure has been shown to reduce the risk of cardiovascular events. In the perioperative setting, essential hypertension has not been found to be a significant risk factor for cardiac complications. Most of the studies were based on the definition of essential hypertension and underpowered in sample size. The significance of perioperative ISH, however, is not well studied, partly due to its recognition only fairly recently as a cardiovascular risk factor in the non-surgical setting, and partly due to the evolving definition of ISH. Perioperative cardiac complications remain a significant problem to the healthcare system and to the patient. Although the incidence of perioperative cardiac complications is prominent in high-risk patients as defined by the Revised Cardiac Risk Index (RCRI), the bulk of the cardiac complications actually occur in the low-risk group. Currently, little understanding exists on the occurrence of perioperative cardiac complications in low-risk patients. A factor such as ISH, with its known pathophysiological changes, is a potential perioperative risk factor. We believe ISH is an under-recognized perioperative risk factor and deserves further study. Our research group has recently been funded by the Heart Stroke Foundation (HSF) to examine ISH as a perioperative risk factor (PROMISE Study). abstract_id: PUBMED:20385970 Home-measured blood pressure is a stronger predictor of cardiovascular risk than office blood pressure: the Finn-Home study. Previous studies with some limitations have provided equivocal results for the prognostic significance of home-measured blood pressure (BP).
We investigated whether home-measured BP is more strongly associated with cardiovascular events and total mortality than is office BP. A prospective nationwide study was initiated in 2000 to 2001 on 2081 randomly selected subjects aged 45 to 74 years. Home and office BP were determined at baseline along with other cardiovascular risk factors. The primary end point was incidence of a cardiovascular event (cardiovascular mortality, nonfatal myocardial infarction, nonfatal stroke, hospitalization for heart failure, percutaneous coronary intervention, or coronary artery bypass graft surgery). The secondary end point was total mortality. After a mean follow-up of 6.8 years, 162 subjects had experienced a cardiovascular event, and 118 subjects had died. In Cox proportional hazard models adjusted for other cardiovascular risk factors, office BP (systolic/diastolic hazard ratio [HR] per 10/5 mm Hg increase in BP, 1.13/1.13; systolic/diastolic 95% confidence interval [CI], 1.05 to 1.22/1.05 to 1.22) and home BP (HR, 1.23/1.18; 95% CI, 1.13 to 1.34/1.10 to 1.27) were predictive of cardiovascular events. However, when both BPs were simultaneously included in the models, only home BP (HR, 1.22/1.15; 95% CI, 1.09 to 1.37/1.05 to 1.26), not office BP (HR, 1.01/1.06; 95% CI, 0.92 to 1.12/0.97 to 1.16), was predictive of cardiovascular events. Systolic home BP was the sole predictor of total mortality (HR, 1.11; 95% CI, 1.01/1.23). Our findings suggest that home-measured BP is prognostically superior to office BP. On the basis of the results of this and previous studies, it can be concluded that home BP measurement offers specific advantages more than conventional office measurement. abstract_id: PUBMED:36890269 Impact of diabetic status and contribution of office and home blood pressure across diabetic status for cardiovascular disease: the J-HOP study. Few studies have investigated whether the prognostic power of home blood pressure (BP) for cardiovascular disease (CVD) events differs across subjects with different diabetic status. We used the dataset of the J-HOP (Japan Morning Surge-Home Blood Pressure) study, which enrolled patients having cardiovascular risks to investigate relationships between home BP and CVD events. We classified the patients as having diabetes mellitus (DM), prediabetes or normal glucose metabolism (NGM) as follows: for DM, a self-reported history of physician-diagnosed DM and/or use of DM medication, a fasting plasma glucose ≥126 mg/dL, a casual plasma glucose level ≥200 mg/dL or hemoglobin A1c (HbA1c) ≥6.5% (n = 1034); for prediabetes, HbA1c of 5.7-6.4% (n = 1167), and for NGM, those who remained (n = 2024). CVD outcome was defined as coronary artery disease, stroke or heart failure. During a median 6.2 ± 3.8 years of follow-up, 259 CVD events occurred. Analysis found both prediabetes (Unadjusted Hazard ratio [uHR], 1.43; 95% confidence interval [CI], 1.05-1.95), and DM (uHR, 2.13; 95% CI, 1.59-2.85) as risks of CVD compared to NGM. In DM, patients with a 10-mmHg elevation of office systolic BP (SBP) and morning home SBP had 16% and 14% higher risks for CVD events. In the prediabetes group, only an elevated morning home SBP conferred a risk of CVD events (uHR, 1.15; 95% CI, 1.00-1.31), but this association did not hold for the adjusted model. Like DM, prediabetes should be recognized as a risk for CVD events, albeit weakly. Elevated home BP contributes to increased CVD risk in diabetes. 
Our study demonstrated the impact of prediabetes and diabetes on CVD and the impact of office and home BP on CVD events in each group. abstract_id: PUBMED:16141627 Isolated home hypertension in the morning is associated with target organ damage in patients with type 2 diabetes. To investigate the relationship between the blood pressure control level and cardiovascular risk in type 2 diabetic patients, we evaluated home blood pressure, office blood pressure, biochemical data, and carotid echographic and echocardiographic findings in 148 patients with type 2 diabetes. According to the criteria for hypertension in the guidelines of the Japanese Society of Hypertension, we classified patients into a normotensive group with home systolic blood pressure in the morning (morning HSBP) <135 mmHg and office systolic blood pressure (OSBP) <140 mmHg, an office hypertension group with a morning HSBP <135 mmHg and OSBP ≥140 mmHg, an isolated home hypertension in the morning group with morning HSBP ≥135 mmHg and OSBP <140 mmHg, and a sustained hypertension group with morning HSBP ≥135 mmHg and OSBP ≥140 mmHg. In the isolated home hypertension in the morning group, the fasting insulin level, urinary albumin excretion, maximum carotid artery intima-media complex thickness, and left ventricular posterior wall thickness were significantly higher and the coefficient of variation for RR intervals was significantly lower than in the normotensive group. These results suggest that isolated home hypertension in the morning is a risk factor for target organ damage in type 2 diabetic patients. Answer: Yes, "isolated home" hypertension is suggested to be a sign of greater cardiovascular risk compared to "isolated office" hypertension. The SHEAF study indicates that patients with isolated home hypertension have characteristics and a history of cardiovascular disease that resemble those of patients with uncontrolled hypertension, suggesting they belong to a high-risk subgroup (PUBMED:11575977). Additionally, the KAMOGAWA-HBP study found that individuals with type 2 diabetes and isolated high home systolic blood pressure had a higher hazard ratio for cardiovascular disease than those with normal home blood pressure, particularly in females (PUBMED:38063417).
Instruction: Does partial occlusion promote normal binocular function? Abstracts: abstract_id: PUBMED:22956613 Does partial occlusion promote normal binocular function? Purpose: There is growing evidence that abnormal binocular interactions play a key role in the amblyopia syndrome and represent a viable target for treatment interventions. In this context the use of partial occlusion using optical devices such as Bangerter filters as an alternative to complete occlusion is of particular interest. The aims of this study were to understand why Bangerter filters do not result in improved binocular outcomes compared to complete occlusion, and to compare the effects of Bangerter filters, optical blur and neutral density (ND) filters on normal binocular function. Methods: The effects of four strengths of Bangerter filters (0.8, 0.6, 0.4, 0.2) on letter and vernier acuity, contrast sensitivity, stereoacuity, and interocular suppression were measured in 21 observers with normal vision. In a subset of 14 observers, the partial occlusion effects of Bangerter filters, ND filters and plus lenses on stereopsis and interocular suppression were compared. Results: Bangerter filters did not have graded effect on vision and induced significant disruption to binocular function. This disruption was greater than that of monocular defocus but weaker than that of ND filters. The effect of the Bangerter filters on stereopsis was more pronounced than their effect on monocular acuity, and the induced monocular acuity deficits did not predict the induced deficits in stereopsis. Conclusions: Bangerter filters appear to be particularly disruptive to binocular function. Other interventions, such as optical defocus and those employing computer generated dichoptic stimulus presentation, may be more appropriate than partial occlusion for targeting binocular function during amblyopia treatment. abstract_id: PUBMED:25352120 The effect of Bangerter filters on binocular function in observers with amblyopia. Purpose: We assessed whether partial occlusion of the nonamblyopic eye with Bangerter filters can immediately reduce suppression and promote binocular summation of contrast in observers with amblyopia. Methods: In Experiment 1, suppression was measured for 22 observers (mean age, 20 years; range, 14-32 years; 10 females) with strabismic or anisometropic amblyopia and 10 controls using our previously established "balance point" protocol. Measurements were made at baseline and with 0.6-, 0.4-, and 0.2-strength Bangerter filters placed over the nonamblyopic/dominant eye. In Experiment 2, psychophysical measurements of contrast sensitivity were made under binocular and monocular viewing conditions for 25 observers with anisometropic amblyopia (mean age, 17 years; range, 11-28 years; 14 females) and 22 controls (mean age, 24 years; range, 22-27; 12 female). Measurements were made at baseline, and with 0.4- and 0.2-strength Bangerter filters placed over the nonamblyopic/dominant eye. Binocular summation ratios (BSRs) were calculated at baseline and with Bangerter filters in place. Results: Experiment 1: Bangerter filters reduced suppression in observers with amblyopia and induced suppression in controls (P = 0.025). The 0.2-strength filter eliminated suppression in observers with amblyopia and this was not a visual acuity effect. 
Experiment 2: Bangerter filters were able to induce normal levels of binocular contrast summation in the group of observers with anisometropic amblyopia for a stimulus with a spatial frequency of 3 cycles per degree (cpd, P = 0.006). The filters reduced binocular summation in controls. Conclusions: Bangerter filters can immediately reduce suppression and promote binocular summation for mid/low spatial frequencies in observers with amblyopia. abstract_id: PUBMED:27046118 Binocular Rivalry Measured 2 Hours After Occlusion Therapy Predicts the Recovery Rate of the Amblyopic Eye in Anisometropic Children. Purpose: Recent studies on adults have shown that short-term monocular deprivation boosts the deprived eye signal in binocular rivalry, reflecting homeostatic plasticity. Here we investigate whether homeostatic plasticity is present also during occlusion therapy for moderate amblyopia. Methods: Binocular rivalry and visual acuity (using Snellen charts for children) were measured in 10 children (mean age 6.2 ± 1 years) with moderate anisometropic amblyopia before the beginning of treatment and at four intervals during occlusion therapy (2 hours, 1, 2, and 5 months). Visual stimuli were orthogonal gratings presented dichoptically through ferromagnetic goggles and children reported verbally visual rivalrous perception. Bangerter filters were applied on the spectacle lens over the best eye for occlusion therapy. Results: Two hours of occlusion therapy increased the nonamblyopic eye predominance over the amblyopic eye compared with pretreatment measurements, consistent with the results in adults. The boost of the nonamblyopic eye was still present after 1 month of treatment, steadily decreasing afterward to reach pretreatment levels after 2 months of continuous occlusion. Across subjects, the increase in nonamblyopic eye predominance observed after 2 hours of occlusion correlated (rho = -0.65, P = 0.04) with the visual acuity improvement of the amblyopic eye measured after 2 months of treatment. Conclusions: Homeostatic plasticity operates during occlusion therapy for moderate amblyopia and the increase in nonamblyopic eye dominance observed at the beginning of treatment correlates with the amblyopic eye recovery rate. These results suggest that binocular rivalry might be used to monitor visual cortical plasticity during occlusion therapy, although further investigations on larger clinical populations are needed to validate the predictive power of the technique. abstract_id: PUBMED:29562559 Occupational therapists' perspectives on binocular diplopia in neurorehabilitation: A national survey. Background: Oculomotor dysfunction affects a significant number of adults with neurological conditions and binocular diplopia is a common symptom which impacts an individual's ability to participate in meaningful daily activities. Occupational therapists use partial and complete occlusion to minimize binocular diplopia, however a review of the literature reflected a lack of standardized protocol for each intervention technique. The purpose of this study was to examine occupational therapists' perspectives on the use of partial and complete occlusion and the clinical reasoning process used. Methods: An electronic survey was distributed to occupational therapists working in a variety of practice settings. The survey contained questions relating to demographics, the selected occlusion technique, and clinical reasoning for that selection. 
Results: More than half of the 106 respondents used partial occlusion more frequently than complete occlusion. There was no correlation between respondent experience and self-report of competence in managing binocular diplopia. Respondents based their clinical reasoning on available evidence, client factors, and clinical expertise. Conclusion: Respondents offered conflicting perspectives on each occlusion technique. Future studies are required to examine which occlusion technique benefits clients. abstract_id: PUBMED:8962854 Binocular function clarified Normal binocular function is defined as the bifoveal connection of the central and peripheral structures of the visual system. Binocular function in strabismus is defined as connections, more or less strong and more or less labile, of the fovea of the fixating eye with the "pseudo-fovea" of the deviating eye including all central and peripheral structures, and this connection represents, in fact, a progress of adaptation of all parameters to the new conditions of vision. abstract_id: PUBMED:34290584 Longitudinal Rehabilitation of Binocular Function in Adolescent Intermittent Exotropia After Successful Corrective Surgery. Purpose: To study the longitudinal rehabilitation of binocular visual function in adolescent intermittent exotropia (IXT) after successful surgery and compare the results with those of a normal population. The role of binocular function in ocular alignment stability was also evaluated postoperatively. Methods: In this prospective study, 30 adolescents with IXT successfully corrected after 1 month were followed for 12 months, and 30 children with normal vision were enrolled as controls. Stereopsis, the fusional vergence amplitude, sensory fusion, and accommodative flexibility were measured to assess binocular function at baseline and 6 and 12 months postoperatively. The controls were tested once when they were enrolled in the study. Results: The deviation was -32.00 ± 8.60 prism diopters (PD) at distance fixation and -36.0 ± 9.10 PD at near fixation preoperatively with an average correction of 28.53 ± 3.79 PD and 30.67 ± 1.34 PD at 1 month postoperatively. Distance stereoacuity and near stereoacuity improved from 1 to 12 months postoperatively (p = 0.025 and p = 0.041, respectively). Compared with the controls, the fusional convergence reserve at distance (p = 0.025) and near (p = 0.033) fixations and fusion reserve ratio at distance (p = 0.000) and near (p = 0.000) fixations remained subnormal, whereas sensory fusion (p = 0.237), distance stereopsis (p = 0.120), and the fusional divergence amplitude at a distance (p = 0.168) were normal. However, no significant correlations were found between binocular functions at 1 month postoperatively and the postoperative drift. Conclusion: Binocular function significantly improved from before to after successful corrective surgery and continued to improve from 1 to 12 months postoperatively in adolescents with IXT. No significant correlations were found between binocular functions at 1 month postoperatively and ocular alignment stability. abstract_id: PUBMED:37227617 Binocular metamorphopsia in patients with branch retinal vein occlusion: a multi-center study. Purpose: The pathology of branch retinal vein occlusion (BRVO), a retinal circulatory disease, is related to monocular metamorphopsia-related vision impairment of the affected eyes, but the association of binocular metamorphopsia in such patients is unclear. 
This study aimed to examine the frequency of binocular metamorphopsia and its association with the clinical characteristics of patients with BRVO. Methods: A total of 87 patients who were treated for BRVO-associated macular edema (ME) were included in this study. At baseline and 1 and 3 months after the initiation of anti-vascular endothelial growth factor (VEGF) treatment, we quantified metamorphopsia in the affected eyes and binocular metamorphopsia using the M-CHARTS® diagnostic tool. Results: At baseline, 53 and 7 patients had metamorphopsia in the affected eyes and binocular metamorphopsia, respectively. Although the visual acuity improved significantly after the initiation of anti-VEGF treatment, the mean M-CHARTS score in the affected eyes did not change from the baseline score. At 3 months, 9 patients showed binocular metamorphopsia; it was significantly associated with metamorphopsia in the affected eyes with a 95% confidence interval of 0.021-0.122 (β = 0.306, p = 0.006). Conclusion: Metamorphopsia in the affected eyes can cause binocular metamorphopsia in patients with BRVO-ME. abstract_id: PUBMED:25572348 Binocular visual training to promote recovery from monocular deprivation. Abnormal early visual experience often leads to poor vision, a condition called amblyopia. Two recent approaches to treating amblyopia include binocular therapies and intensive visual training. These reflect the emerging view that amblyopia is a binocular deficit caused by increased neural noise and poor signal-in-noise integration. Most perceptual learning studies have used monocular training; however, a recent study has shown that binocular training is effective for improving acuity in adult human amblyopes. We used an animal model of amblyopia, based on monocular deprivation, to compare the effect of binocular training either during or after the critical period for ocular dominance plasticity (early binocular training vs. late binocular training). We used a high-contrast, orientation-in-noise stimulus to drive the visual cortex because neurophysiological findings suggest that binocular training may allow the nondeprived eye to teach the deprived eye's circuits to function. We found that both early and late binocular training promoted good visual recovery. Surprisingly, we found that monocular deprivation caused a permanent deficit in the vision of both eyes, which became evident only as a sleeper effect following many weeks of visual training. abstract_id: PUBMED:30779616 Assessment of Binocular Imbalance with an Augmented Virtual Reality Platform in a Normal Population. We examined and analyzed binocular imbalance in a group of randomly selected normal subjects with corrected visual acuity of 0.8 or more in both eyes without ocular diseases, by using augmented virtual reality system. This study will quantify the correlation between binocular imbalance in these normal subjects and various spatial frequencies and temporal frequencies in the integrated binocular stimulation mode. A retrospective study was carried out and data from 97 patients (51 males and 46 females) who were diagnosed with no eye diseases, and with corrected binocular visual acuity of 0.8 or more, were collected. Their binocular visual function, Hirschberg test refraction status, and binocular imbalance were assessed using the quantified visual perception stimulation mode in an augmented virtual reality platform. 
When normal subjects (with corrected binocular vision of 0.8 or more without ocular diseases) were assessed using the integrated binocular stimulation mode with different temporal and spatial frequencies, a gradual reduction of binocular imbalance could be observed when spatial frequencies were increased. When contrast balance was adjusted, binocular imbalance was more prevalent in the horizontal state compared with the vertical state. This was seen when patients were stimulated by different temporal frequencies in the virtual reality testing mode. 17.53% and 29.90% of participants reached complete binocular fusion under low temporal frequency stimulation and high temporal frequency stimulation, respectively, while binocular imbalance was found in 65.9% and 62.89% of participants who were stimulated, respectively, by low temporal frequency and high temporal frequency. In addition, 16.5% and 7.21% experienced severe binocular imbalance under low temporal and high temporal frequency stimulation, respectively. The integrated binocular stimulation mode under low temporal frequency was more sensitive in detecting binocular imbalance. The status of binocular balance in normal subjects was not the same when stimulated by the augmented virtual reality platform, as many showed varying degrees of binocular imbalance. The results from our clinical study demonstrate that integrated binocular stimulation with different spatial and temporal frequencies enables a more sensitive detection of binocular imbalance, making it a powerful new tool for the diagnosis and analysis of clinical binocular abnormalities. In addition, since more than half of the normal subjects (without eye diseases who has corrected vision of 0.8 or more) exhibited binocular imbalance in our study, we believe that this brief binocular imbalance is likely a transient physiological phenomenon, which is not pathological to visual acuity and function, and should only be seen as pathological when binocular imbalance is prolonged, causing functional deficits such as reading impairment. abstract_id: PUBMED:33273866 Partial Central Retinal Artery Occlusion: An Underrecognized Entity. Background: The objective of this publication is to report a case of an atypical partial central retinal artery occlusion (CRAO) with substantial visual recovery without treatment. Case Presentation: An 83-year-old woman without significant medical history with sudden unilateral visual loss presented with no known significant ophthalmological or medical history besides systemic arterial hypertension. Examination showed multiple cotton-wool spots in a peripapillary distribution, as well as a heterogenous pattern of grey translucency in the macula resulting in an indistinct cherry-red spot. Fluorescein angiography showed normal choroidal filling and an important delay of dye transit through the retinal circulation. Carotid Doppler echography showed a small endothelial atherosclerotic plaque without hemodynamic repercussion. A detailed history and further examination revealed no other systemic diseases except for moderate hypercholesterolemia. The patient was referred for management of her hypertension but otherwise did not undergo specific therapy for CRAO because of the delayed presentation. Four weeks after the initial visual loss, the patient showed resolution of the retinal findings and a surprising improvement to 20/50 visual acuity. Conclusion: This case highlights a rare subtype of central retinal artery occlusion. 
In this disease, partial occlusion reveals atypical signs including large cotton-wool spots as the predominant finding, making the initial diagnosis difficult. Visual recovery may be significant in partial CRAO, even without treatment. Answer: Partial occlusion does not consistently promote normal binocular function. While Bangerter filters, a form of partial occlusion, have been shown to reduce suppression and promote binocular summation for mid/low spatial frequencies in observers with amblyopia (PUBMED:25352120), they can also be disruptive to binocular function. In a study with observers with normal vision, Bangerter filters were found to disrupt binocular function more than monocular defocus but less than neutral density filters. The filters had a more pronounced effect on stereopsis than on monocular acuity, and the induced deficits in monocular acuity did not predict the deficits in stereopsis (PUBMED:22956613). Furthermore, occupational therapists have conflicting perspectives on the use of partial and complete occlusion for managing binocular diplopia, indicating a lack of consensus and standardized protocol for promoting normal binocular function through occlusion techniques (PUBMED:29562559). In the context of amblyopia treatment, binocular training has been shown to be effective for improving acuity, suggesting that amblyopia may be a binocular deficit that can be addressed through binocular therapies rather than partial occlusion (PUBMED:25572348). Additionally, while partial occlusion with Bangerter filters is used in occlusion therapy for moderate amblyopia, the homeostatic plasticity observed during this therapy and its correlation with the recovery rate of the amblyopic eye suggest that binocular rivalry might be a more useful tool to monitor visual cortical plasticity during treatment (PUBMED:27046118). In summary, while partial occlusion can have some positive effects on binocular function in certain contexts, it is not universally effective and can sometimes be disruptive. Other interventions, such as binocular training and therapies targeting binocular function, may be more appropriate for promoting normal binocular function (PUBMED:22956613, PUBMED:25352120, PUBMED:25572348).
Instruction: Is colorectal cancer screening necessary in the preoperative assessment of inguinal herniorrhaphy? Abstracts: abstract_id: PUBMED:11419848 Is colorectal cancer screening necessary in the preoperative assessment of inguinal herniorrhaphy? A case-control study. Objectives: The question of whether patients presenting for inguinal hernia repair require pre-operative assessment for colon cancer has remained unanswered. A case-control study is necessary to assess whether the prevalence of premalignant or malignant colonic lesions is higher in patients presenting with inguinal hernia compared to the general population. Methods: Between 1990-2000, 614 inguinal herniorrhaphies were performed at the Veterans Affairs Palo Alto Health Care System (VAPAHCS). We retrospectively studied the 149 (24%) patients from this group with no prior history of colonic polyps, malignancy, or gastrointestinal bleeding who had flexible sigmoidoscopy or colonoscopy performed during the peri-operative period. Comparison was made to 149 controls undergoing colonoscopy or sigmoidoscopy during the same time period for colon cancer (CRC) screening. Results: The mean (+/-SEM) patient age was 67 +/- 0.7 (range 31-92) yr in the hernia patients and 66 +/- 0.8 (range 46-93) in the control group (p = 0.7). Eighty-two of the inguinal hernia patients had screening procedures performed preoperatively with a mean time (+/-SEM) of 1.4 +/- 0.14 yr, while endoscopy was performed in the post-operative period for the remaining 67 patients (average time 2.7 +/- 0.2 yr, p < 0.001). More patients underwent colonoscopy in the control group compared to the hernia cohort (p = 0.004). Seven (5%) patients in the hernia group were found to have colorectal cancer compared to six (4%) in the control group (p = 0.8). Conclusions: This study does not support previously published findings that patients with inguinal hernias are more likely to have premalignant colonic lesions. Patients with inguinal hernias should undergo screening for colon cancer at the same rate as the general population. abstract_id: PUBMED:14997133 Colorectal cancer screening in patients presenting with an inguinal hernia: is it necessary? Background: It has been suggested that patients presenting with an inguinal hernia have an increased risk for colorectal cancer. Therefore, surgeons frequently request screening for colorectal cancer before surgery. The aim of this study was to assess the frequency of premalignant and malignant colonic lesions in a group of patients with an inguinal hernia, and to compare this with a control group of subjects undergoing screening colonoscopy. Methods: In a case-control study, 243 patients with an inguinal hernia and no history of colonic neoplasia or symptoms suggestive of colorectal cancer underwent perioperative colonoscopy. The patients were stratified into two age groups: less than 50 years old (Group I) and more than 50 years old (Group II). The colonoscopic findings were compared with findings in 534 asymptomatic control patients who underwent screening colonoscopy. Results: The mean age of patients (n=64) and control subjects (n=200) in Group I was similar, at 44 (3) years. The mean age of the patients (n=179) and control subjects (n=334) in Group II was, respectively, 70 (9) years and 64 (7) years (p<0.001). In Group I, no colorectal cancer was found in patients with inguinal hernia, and only one colorectal cancer was found among control subjects (p=0.571).
In Group II, a diagnosis of colorectal cancer was made in 7 patients (4%) with inguinal hernia as compared with 10 patients (3%) among the control subjects (p=0.769). In both groups, the size and the histopathologic type of the polyps were not significantly different. Conclusions: In otherwise asymptomatic patients, the presence of inguinal hernia is not associated with an increased risk for colorectal cancer. Therefore, the presence of an inguinal hernia alone does not justify screening colonoscopy before herniorrhaphy. abstract_id: PUBMED:23094899 Is a routine preoperative rectosigmoidoscopy necessary in patients with bilateral inguinal hernia? Objectives: The purpose of this prospective study was to evaluate the association between bilateral inguinal hernias and colorectal cancers. Purpose: Inguinal hernias are one of the most common subjects in surgical practice and have been known to be associated with some other pathologies since 1831. Although there are some series in literature reporting the association of colorectal cancers with inguinal hernias, it is still controversial to perform colorectal diagnostic tools in hernia patients. Colorectal cancers are particularly accused to be in association with synchronous bilateral hernias as they increase the intra-abdominal pressure. Methods: Rectosigmoidoscopy was performed in 110 consecutive bilateral hernia patients and the results were recorded prospectively. Patients having colorectal diseases were excluded. Results: There were no pathologies in 87 (79.1%) rectosigmoidoscopies, while benign pathologies (hemorrhoids, polyps and diverticulitis) were diagnosed in 23 (20.9%). Conclusions: It has not been proved yet that colorectal cancers increase the incidence of bilateral inguinal hernias. The incidence of benign pathologies in our series was similar to that of the same age population without hernia. As a conclusion of this study we believe that rectosigmoidoscopy is not necessary for synchronous bilateral hernias unless the patient has any complaints or risk factors. Colorectal screening tools are performed when the clinical findings or the patient's history support colorectal cancers (Tab. 1, Ref. 25). abstract_id: PUBMED:36757851 Management of Perforated Sigmoid Colon Cancer in an Incarcerated Inguinal Hernia Using a Combined Abdominal and Inguinal Approach. The ideal management of perforated colorectal cancer in the hernia sac remains a challenge for general surgeons. We report such a case requiring a combined-incision surgical approach and two-stage operation. A male patient in his 80s presented with fatigue and a bulge in his left groin. Contrast-enhanced CT scan revealed a sigmoid colon perforation within the left inguinal hernia sac. A combination of midline abdominal and inguinal incisions was performed for the iliopubic tract repair and Hartmann procedure. Six months later, the recurrent inguinal hernia was repaired using the Kugel mesh. The treatment strategy for colonic perforation into the inguinal hernia remains nonstandardized, compared to those for nonperforated cases. Combined inguinal and midline abdominal incisions might be necessary for infected lesion removal and colon mesentery resection; using mesh to repair hernia is a rare option. Hernia repair using mesh can be performed in two stages when recurrence occurs. abstract_id: PUBMED:3827567 The role of flexible sigmoidoscopy in the preoperative screening of patients with inguinal hernia. A high yield of neoplasms.
Disagreement persists as to whether all patients with inguinal hernia should undergo screening examinations for colorectal cancers before hernia repair. The purpose of this study was to prospectively evaluate the effectiveness of flexible sigmoidoscopy (FS) as a screening examination for these patients. In an 18-month period, 110 patients with inguinal hernia (99% men) with an average age of 63 +/- 10 years underwent FS. Diverticulosis was found in 36% of the patients, colorectal polyps in 26%, and colorectal cancers in 3.6%. Occult blood was found in the stool of only 11% of these patients and, thus, was an ineffective screening test. These results suggest that FS is an effective means of screening for premalignant and malignant colorectal lesions in patients with inguinal hernias. abstract_id: PUBMED:12030963 Inguinal hernia is not a sign of colon cancer: results of a prospective screening trial. Background: It has been suggested that patients with an inguinal hernia harbour an increased risk for colorectal cancer. Methods: In a prospective clinical trial, we compared the prevalence of colonic neoplasms in 100 cases with inguinal hernia and 100 controls without inguinal hernia. The number, size, histology type, and the location of all colorectal lesions found during a screening flexible sigmoidoscopy were recorded. Results: Not a single case of colorectal cancer was detected in the patients with inguinal hernia pending surgical repair. In the asymptomatic control subjects, one Dukes A and one Dukes B1 colon cancer were detected. Polypectomy was performed in 15% and 17% of the case and control subjects, respectively. During a 5-year period following the initial screening procedure, none of the case or control subjects was diagnosed with colon cancer. Conclusions: The decision for or against performing an endoscopic procedure in a patient with inguinal hernia should be guided by the general principles of screening for colorectal cancer. The mere presence of an inguinal hernia does not automatically increase the risk of colorectal cancer. abstract_id: PUBMED:37925659 The Effectiveness of Screening Total Colonoscopy for Preoperative Patients with Gastric Cancer. Background: Colorectal cancer (CRC) is the most common cancer that coincides with gastric cancer (GC). Although the usefulness of total colonoscopy (TCS) as a CRC screening tool has been reported in preoperative patients with GC, the long-term outcome of patients with synchronous CRC (SCRC) remains unclear. This study aims to clarify the significance of preoperative screening TCS for GC in terms of survival outcomes. Patients And Methods: We included 796 patients who underwent preoperative screening TCS for GC. The risk factors, clinicopathological features, and survival outcome of SCRC were examined. Furthermore, the cost-effectiveness was evaluated from the perspective of improving the rates of mortality caused by CRC. Results: SCRC was observed in 43 patients (5.4%). Endoscopic treatment for SCRC was performed on 30 patients. In total, 15 patients underwent surgical resection, including 2 patients requiring additional surgery after endoscopic treatment. Regarding pathological stages, 25 patients had stage 0, 12 patients had stage I, 5 patients had stage II, and 1 patient had stage IIIB disease. The cumulative mortality rates were as follows: GC-related deaths, 12.6%; deaths from cancers other than CRC, 1%; deaths from other causes, 5.5%. No deaths were attributed to SCRC. 
Comparing the patients who did not undergo TCS, an incremental cost-effectiveness ratio analysis suggested that a screening cost of 5.86 million yen was required to prevent one CRC death. Conclusions: Curative treatment was possible in all patients with SCRC. No deaths were attributed to SCRC, suggesting that screening TCS for GC is effective. abstract_id: PUBMED:22336793 Antibiotic prophylaxis for hernia repair. Background: The use of antibiotic prophylaxis for hernia repair is currently a controversial issue given the disparity among study results in this area. Objectives: The objective of this systematic review was to clarify the effectiveness of antibiotic prophylaxis in reducing postoperative wound infection rates in elective open inguinal hernia repair. Search Methods: We searched the Cochrane Colorectal Cancer Group specialized register, by crossing the terms herni* and inguinal or groin and the terms antimicr* or antibiot* , as free text and MeSH terms. A similar search were performed in Medline using the following terms: #1 antibiotic* OR antimicrob* OR anti infecti* OR antiinfecti*; #2 prophyla* OR prevent*; #3 #1 AND #2; #4 clean AND (surgery OR tech* OR proced*); #5 herni*; #6 (wound infection) AND #4; #7 #3 AND (#4 or #5 or #6). National Research Register, ISI-Web, DARE, Scirus, TRIPDATABASE, NHS EED, reference list of the included studies and web of clinical trials register (www.controlled-trials.com and clinicaltrials.gov) were checked to identify further studies. Selection Criteria: Only randomised clinical trials were included. Data Collection And Analysis: In the present review, we searched for eligible trials in October 2011. This revealed four new included trials, so seventeen trials are included in the meta-analysis. Eleven of them used prosthetic material for hernia repair (hernioplasty) whereas the remaining studies did not (herniorrhaphy). Pooled and subgroup analysis were conducted depending on whether prosthetic material was or not used. A fixed effects model was used in the analysis. Main Results: The total number of patients included was 7843 (prophylaxis group: 4703, control group: 3140). Overall infection rates were 3.1% and 4.5% in the prophylaxis and control groups, respectively (OR 0.64, 95% CI 0.50 - 0.82). The subgroup of patients with herniorrhaphy had infection rates of 3.5% and 4.9% in the prophylaxis and control groups, respectively (OR 0.71, 95% CI 0.51 - 1.00). The subgroup of patients with hernioplasty had infection rates of 2.4% and 4.2% in the prophylaxis and control groups, respectively (OR 0.56, 95% CI 0.38 - 0.81). Authors' Conclusions: Based on the results of this systematic review the administration of antibiotic prophylaxis for elective inguinal hernia repair cannot be universally recommended. Neither can the administration be recommended against when high rates of wound infection are observed. abstract_id: PUBMED:9122598 Diagnosis of inguinal hernias The diagnosis of inguinal hernia is more often obtained by the history on an intermittent inguinal tumefaction spontaneously reducible or using maneuvers well known by the patient and the clinician. The complementary explorations have a small interest, but the ultrasonography and the CT-scan can confirm the parietal defects in case of diagnostic doubt. The research of risk factors is necessary, in order to avoid recurrences after surgery. In a same way, associated severe diseases, like colorectal cancers and aneurysms of the abdominal aorta, have to be researched. 
The precise diagnosis will indicate the type of surgical procedure, inguinal or laparoscopic, but the validity of this last approach has to be proved. abstract_id: PUBMED:24277403 Predictive value of three preoperative nutritional assessment methods in postoperative complications of gastrointestinal cancers Objective: To compare the value of three preoperative nutritional assessment methods, European nutrition risk screening 2002 (NRS 2002), mini-nutrition assessment (MNA) and subjective global assessment (SGA), in predicting postoperative complications of gastrointestinal cancer patients. Methods: A total of 235 patients with gastrointestinal cancers, including 31 esophageal cancers, 82 gastric cancers, and 122 colorectal cancers, in our hospital from January 2012 to June 2013 were prospectively enrolled. Preoperative nutritional status was evaluated with above 3 methods respectively. Postoperative complication rates were compared among different preoperative nutritional status. Results: According to SGA score, the morbidity of severe-moderate, mild and no malnourished patients was 40.5% (17/42), 25.3% (22/87) and 14.2% (15/106) respectively (P<0.01). According to MNA score, the morbidity of patients with malnutrition, at risk of malnutrition and without malnutrition was 32.9% (23/70), 24.7% (18/73) and 14.1% (13/92) respectively (P<0.05). According to NRS 2002, the morbidity of patients at malnutrition risk and without malnutrition risk was 27.6% (27/98) and 19.7% (27/137) respectively (P>0.05). Multiple regression analysis revealed that both SGA and MNA scores were predictive factors for the development of postoperative complications (both P<0.01). The sensitivity of SGA score for predicting complications was higher compared to MNA score (90.7% vs. 79.6%), while the specificity was similar (49.7% vs. 50.8%). Conclusions: Both SGA and MNA scores can effectively predict the development of postoperative complications in gastrointestinal cancer patients, and SGA score has better sensitivity. SGA score is recommended for decision-making regarding preoperative nutrition support. Answer: Based on the provided abstracts, colorectal cancer (CRC) screening does not appear to be necessary as a routine part of the preoperative assessment for patients undergoing inguinal herniorrhaphy, unless there are other indications or risk factors present. Several studies have investigated the association between inguinal hernias and the prevalence of colorectal cancer. A case-control study conducted at the Veterans Affairs Palo Alto Health Care System found no significant difference in the prevalence of colorectal cancer between patients presenting with inguinal hernias and control subjects undergoing screening for CRC (PUBMED:11419848). Similarly, another case-control study concluded that the presence of an inguinal hernia is not associated with an increased risk for colorectal cancer, suggesting that screening colonoscopy before herniorrhaphy is not justified based solely on the presence of an inguinal hernia (PUBMED:14997133). A prospective study evaluating the association between bilateral inguinal hernias and colorectal cancers also found no evidence to support the necessity of rectosigmoidoscopy for synchronous bilateral hernias unless the patient has complaints or risk factors (PUBMED:23094899).
Furthermore, a prospective clinical trial comparing the prevalence of colonic neoplasms in patients with inguinal hernia to controls without inguinal hernia detected no cases of colorectal cancer in the hernia group, reinforcing the notion that the presence of an inguinal hernia does not automatically increase the risk of colorectal cancer (PUBMED:12030963). While there are reports of colorectal cancer being found in patients with inguinal hernias, such as a case of perforated sigmoid colon cancer in an incarcerated inguinal hernia (PUBMED:36757851), these instances do not seem to be indicative of a broader trend necessitating routine CRC screening in all inguinal hernia cases. In conclusion, the evidence from the provided abstracts suggests that routine colorectal cancer screening is not necessary in the preoperative assessment of inguinal herniorrhaphy unless there are specific symptoms, a history of colorectal disease, or other risk factors that would warrant such screening.
Instruction: In-bore transrectal MRI-guided prostate biopsies: Are there risk factors for complications? Abstracts: abstract_id: PUBMED:27842662 In-bore transrectal MRI-guided prostate biopsies: Are there risk factors for complications? Purpose: To systematically analyze risk factors for complications of in-bore transrectal MRI-guided prostate biopsies (MRGB). Materials And Methods: 90 patients, who were scheduled for MRGB were included for this study. Exclusion criteria were coagulation disorders, therapy with anticoagulant drugs, and acute infections of the urinary and the lower gastrointestinal tract. Directly after, one week and one year after the biopsy, we assessed biopsy related complications (e.g. hemorrhages or signs of prostatitis). Differences between patients with and without complications were analyzed regarding possible risk factors: age, prostate volume, number of taken samples, biopsy duration, biopsy of more than one lesion, diabetes, arterial hypertension, hemorrhoids, benign prostate hyperplasia, carcinoma or prostatitis (according to histopathological analysis), and lesion localization. Complications were classified according to the Clavien-Dindo classification. Results: We observed 15 grade I complications in 90 biopsies (16.7%) with slight hematuria in 9 cases (10%), minor vasovagal reactions in 4 cases (4.4%), and urinary retention and positioning-related facial dysesthesia in 1 case each (1.1%). One patient showed acute prostatitis requiring antibiotics as the only grade II complication (1.1%). There were no adverse events that occurred later than one week. Complications grade III or higher such as pelvic abscesses, urosepsis or severe hemorrhages were not seen. There were no significant associations between the assessed risk factors and biopsy-related complications. Conclusion: In-bore transrectal MRI-guided prostate biopsies can be considered safe procedures in the diagnosis of prostate cancer with very low complication rates. There seem to be no risk factors for complications. abstract_id: PUBMED:31268734 In-Bore Transrectal MRI-Guided Biopsy With Robotic Assistance in the Diagnosis of Prostate Cancer: An Analysis of 57 Patients. OBJECTIVE. The objective of our study was to analyze the feasibility and potential role of robotic-assisted transrectal MRI-guided biopsy for the diagnosis of prostate cancer. MATERIALS AND METHODS. A total of 57 patients (mean age, 67 ± 6 [SD] years; age range, 57-83 years; mean prostate-specific antigen level, 10.7 ± 6.1 ng/mL) with a single prostatic lesion visible on biparametric MRI (T2-weighted and DW images) underwent robotic-assisted MRI-guided transrectal biopsy. The procedure was analyzed in terms of technical success, defined by an accurate alignment of the needle guide with the lesion; occupation time of the MRI room; number of cores; cancer detection rate (CDR); and complications. RESULTS. The biparametric MRI score was 3, 4, and 5 in 11 (19%), 30 (53%), and 16 (28%) of the 57 patients, respectively. Twenty-three lesions (23/57, 40%) originated in the peripheral zone and 34 (34/57, 60%) in the transition zone. Software-based adjustments of the robot allowed the needle guide to be aligned with the target in all lesions. The number of cores was one, two, three, and four in one (2%), 36 (63%), 18 (32%), and three (5%) patients, respectively. Obtaining more than two cores had no incremental value in determining the Gleason score or the maximum cancer core length (MCCL). The overall CDR for any cancer was 67% (38/57). 
It was 95% (36/38) for tumors with Gleason grade of more than 3 or MCCL greater than 3 mm and 53% (20/38) for tumors with Gleason score greater than 6. No complications were observed. The median occupation time of the MRI room was 37.8 ± 9.7 minutes (range, 32-74 minutes). CONCLUSION. Robotic-assisted MRI-guided biopsy yields 100% technical success rate with a short MRI room occupation time and high CDRs using one or two cores. abstract_id: PUBMED:33209118 Patients' experience with MRI-guided in-bore biopsy versus TRUS-guided biopsy in prostate cancer: a pilot study. Background: Ultrasound-guided magnetic resonance imaging (MRI)-fusion biopsy and in-bore MRI-guided biopsy (MRGB) have improved the diagnostic pathway in patients with suspected prostate cancer compared to the traditional random sampling of the prostate gland under transrectal ultrasound guidance (TRUS-Bx). The aim of our study was to assess the psychological experiences of patients undergoing MRGB and TRUS-Bx. Method: Participants completed an ad hoc set of 11 items to be rated from 0 (not at all) to 10 (very much) on visual analogue scales and one open question on the most worrisome aspect of the procedure. The set of items evaluated satisfaction with the information received and the possibility to ask questions to the staff; the tolerability of the irritation, duration and discomfort associated with the exam; their level of worry or calm before the exam; the perceived need to undergo the exam; their satisfaction with the exam and willingness to repeat it in the future; and acceptability of the exam. Results: Between May 2018 and June 2019, 47 participants were enrolled on the day of their MRGB; 24 had previously undergone TRUS-Bx. The MRGB was rated with high positive scores on all 11 items. The lowest ratings regarded the duration of the exam (mean = 6.6) and feeling calm (mean = 6.6). Participants were significantly more satisfied with MRGB than TRUS-Bx, rating it as less painful and more comfortable, necessary and tolerable. Conclusion: These preliminary results indicate that the MRGB is likely to be more tolerable and acceptable to patients than TRUS-Bx. abstract_id: PUBMED:32755166 3-T Multiparametric MRI Followed by In-Bore MR-Guided Biopsy for Detecting Clinically Significant Prostate Cancer After Prior Negative Transrectal Ultrasound-Guided Biopsy. OBJECTIVE. The purpose of this study was to evaluate the rate of detection of clinically significant prostate cancer (csPCa), as assessed on the basis of Prostate Imaging Reporting and Data System version 2.1 (PI-RADSv2.1) guidelines, using 3-T in-bore MR-guided biopsy (MRGB) for a cohort of patients suspected of having csPCa despite having a history of recent negative transrectal ultrasound-guided biopsy results. MATERIALS AND METHODS. The cohort in this retrospective, single-center study was derived from a database of 330 patients who underwent multiparametric MRI (mpMRI) followed by in-bore transrectal 3-T MRGB. Seventy-nine patients (mean [± SD] age, 64.1 ± 8.6 years) with prior negative transrectal ultrasound-guided biopsy results and positive pre-MRGB mpMRI results (PI-RADS score ≥ 3) composed the final cohort. The rate of detection of PCa and csPCa (the latter of which was defined by a Gleason score of 3 + 4 or higher) was stratified according to updated PI-RADSv2.1 assessment. RESULTS. MRGB detected PCa in 36 patients (45.6%), 30 (83.3%) of whom had csPCa. The PI-RADSv2.1 score was a strong predictor (odds ratio, 3.97; 95% CI, 1.93-7.47) of csPCa detection. 
We found two benign transition zone target lesions that were downgraded from PI-RADSv2 category 3 to PI-RADSv2.1 category 2. PCa was detected in 18.4% (7/38), 65.2% (15/23), and 87.5% (14/16) of individuals with PI-RADSv2.1 category 3, 4, and 5 lesions, respectively, with 85.7% (6/7), 86.7% (13/15), and 78.6% (11/14) of these cases found to be csPCa, respectively. Of the seven PI-RADSv2.1 category 3 csPCa lesions, six had prostate-specific antigen density greater than 0.10 ng/mL/cc. CONCLUSION. With the use of 3-T in-bore MRGB, csPCa was detected in 38% of individuals with prior negative transrectal ultrasound-guided biopsy results. PI-RADSv2.1 was a strong predictor of csPCa detection. On the basis of our results, patients with PI-RADSv2.1 category 4 or 5 lesions and patients with PI-RADSv2.1 category 3 lesions and a prostate-specific antigen density greater than or equal to 0.10 ng/mL/cc may benefit from in-bore MRGB. abstract_id: PUBMED:30271182 Risk factors for infectious complications following transrectal ultrasound-guided prostate biopsy. Objective: To explore risk factors of infectious complications following transrectal ultrasound-guided prostate biopsy (TRUSPB). Methods: We retrospectively analyzed 1,203 patients with suspected prostate cancer who underwent TRUSPB at our center between December 2012 and December 2016. Demographics, clinical characteristics, and data regarding complications were collected, and then univariate and multivariate logistic regression analyses were used to identify independent risk factors for infectious complications after prostate biopsy. Results: Multivariate logistic analysis demonstrated that body mass index (BMI) (OR=2.339, 95% CI 2.029-2.697, P<0.001), history of diabetes (OR=2.203, 95% CI 1.090-4.455, P=0.028), and preoperative catheterization (OR=2.303, 95% CI 1.119-4.737, P=0.023) were risk factors for infection after prostate biopsy. The area under the receiver operating characteristics curve for infectious complications was 0.930 (95% CI 0.907-0.953, P<0.001). BMI=28.196 kg/m2 was the best cut-off threshold for predicting infection after TRUSPB. Conclusion: BMI >28.196 kg/m2, history of diabetes, and preoperative catheterization are independent risk factors for infection after prostate biopsy. abstract_id: PUBMED:28225639 Cost-Effectiveness Comparison of Imaging-Guided Prostate Biopsy Techniques: Systematic Transrectal Ultrasound, Direct In-Bore MRI, and Image Fusion. Objective: Three commonly used prostate biopsy approaches are systematic transrectal ultrasound guided, direct in-bore MRI guided, and image fusion guided. The aim of this study was to calculate which strategy is most cost-effective. Materials And Methods: A decision tree and Markov model were developed to compare cost-effectiveness. Literature review and expert opinion were used as input. A strategy was deemed cost-effective if the costs of gaining one quality-adjusted life year (incremental cost-effectiveness ratio) did not exceed the willingness-to-pay threshold of €80,000 (≈$85,000 in January 2017). A base case analysis was performed to compare systematic transrectal ultrasound- and image fusion-guided biopsies. Because of a lack of appropriate literature regarding the accuracy of direct in-bore MRI-guided biopsy, a threshold analysis was performed.
Results: The incremental cost-effectiveness ratio for fusion-guided biopsy compared with systematic transrectal ultrasound-guided biopsy was €1386 ($1470) per quality-adjusted life year gained, which was below the willingness-to-pay threshold and thus assumed cost-effective. If MRI findings are normal in a patient with clinically significant prostate cancer, the sensitivity of direct in-bore MRI-guided biopsy has to be at least 88.8%. If that is the case, the incremental cost-effectiveness ratio is €80,000 per quality-adjusted life year gained and thus cost-effective. Conclusion: Fusion-guided biopsy seems to be cost-effective compared with systematic transrectal ultrasound-guided biopsy. Future research is needed to determine whether direct in-bore MRI-guided biopsy is the best pathway; in this study a threshold was calculated at which it would be cost-effective. abstract_id: PUBMED:26700339 MRI-Guided In-Bore Biopsy: Differences Between Prostate Cancer Detection and Localization in Primary and Secondary Biopsy Settings. Objective: The objective of our study was to evaluate transrectal MRI-guided in-bore biopsy in patients who either were biopsy-naive (primary biopsy) or had undergone at least one previous negative transrectal ultrasound-guided biopsy (secondary biopsy) with regard to cancer detection rate, tumor localization, and lesion size. Materials And Methods: In total, 1602 biopsy cores from 297 consecutive patients (mean ± SD, 66.1 ± 7.8 years; median prostate-specific antigen value, 8.2 ng/mL) in primary (n = 160) and secondary (n = 137) prostate biopsy settings were evaluated in this retrospective study. All patients previously underwent prostate MRI (T2-weighted imaging, DWI, dynamic contrast-enhanced imaging) at 3 T. All described lesions were biopsied with MRI-guided in-bore biopsy and were examined histologically. Results: In 148 patients, overall 511 cores were positive for prostate cancer. Clinically significant prostate cancer (any Gleason pattern ≥ 4) was found in 82.4% of patients. The prostate cancer detection rate for patients who underwent primary biopsies was 55.6% and was 43.1% for patients who underwent secondary biopsies. In patients with primary versus secondary biopsies, prostate cancer was located peripherally in 62.9% versus 49.5% (p = 0.04), in the transition zone in 27.4% versus 27.5% (p = 1.0), and in the anterior stroma in 10.3% versus 22.9% (p < 0.01), respectively. The prostate cancer detection rates for patients with smaller prostate volumes (< 30 vs 30-50 vs > 50 mL; p < 0.01) or for patients with larger lesions (> 0.5 vs 0.25-0.5 vs < 0.25 cm³; p < 0.01) were significantly higher. Conclusion: MRI-guided in-bore biopsy led to high detection rates in primary and secondary prostate biopsies. Prostate cancer detection rates were significantly higher for patients with larger lesions and smaller prostate glands. In patients who underwent secondary biopsies, prostate cancer was located in the anterior stroma at a significantly more frequent rate. abstract_id: PUBMED:26116294 Prospective randomized trial comparing magnetic resonance imaging (MRI)-guided in-bore biopsy to MRI-ultrasound fusion and transrectal ultrasound-guided prostate biopsy in patients with prior negative biopsies. Background: A significant proportion of prostate cancers (PCas) are missed by conventional transrectal ultrasound-guided biopsy (TRUS-GB).
It remains unclear whether the combined approach using targeted magnetic resonance imaging (MRI)-ultrasound fusion-guided biopsy (FUS-GB) and systematic TRUS-GB is superior to targeted MRI-guided in-bore biopsy (IB-GB) for PCa detection. Objective: To compare PCa detection between IB-GB alone and FUS-GB + TRUS-GB in patients with at least one negative TRUS-GB and prostate-specific antigen ≥4 ng/ml. Design, Setting, And Participants: Patients were prospectively randomized after multiparametric prostate MRI to IB-GB (arm A) or FUS-GB + TRUS-GB (arm B) from November 2011 to July 2014. Outcome Measurements And Statistical Analysis: The study was powered at 80% to demonstrate an overall PCa detection rate of ≥60% in arm B compared to 40% in arm A. Secondary endpoints were the distribution of highest Gleason scores, the rate of detection of significant PCa (Gleason ≥7), the number of biopsy cores to detect one (significant) PCa, the positivity rate for biopsy cores, and tumor involvement per biopsy core. Results And Limitations: The study was halted after interim analysis because the primary endpoint was not met. The trial enrolled 267 patients, of whom 210 were analyzed (106 randomized to arm A and 104 to arm B). PCa detection was 37% in arm A and 39% in arm B (95% confidence interval for difference, -16% to 11%; p=0.7). Detection rates for significant PCa (29% vs 32%; p=0.7) and the highest percentage tumor involvement per biopsy core (48% vs 42%; p=0.4) were similar between the arms. The mean number of cores was 5.6 versus 17 (p<0.001). A limitation is the limited number of patients because of early cessation of accrual. Conclusions: This trial failed to identify an important improvement in detection rate for the combined biopsy approach over MRI-targeted biopsy alone. A prospective comparison between MRI-targeted biopsy alone and systematic TRUS-GB is justified. Patient Summary: Our randomized study showed similar prostate cancer detection rates between targeted prostate biopsy guided by magnetic resonance imaging and the combination of targeted biopsy and systematic transrectal ultrasound-guided prostate biopsy. An important improvement in detection rates using the combined biopsy approach can be excluded. abstract_id: PUBMED:34091134 In-bore MRI-guided prostate biopsy in a patient group with PI-RADS 4 and 5 targets: A single center experience. Purpose: To determine the diagnostic yield of magnetic resonance imaging (MRI) guided in-bore biopsy in patients with high likelihood multiparametric MRI (mpMRI) findings, regarding overall and clinically significant prostate cancer (csPCa) detection rates and concordance of biopsy and radical prostatectomy (RP) Gleason scores (GS). Methods: This retrospective study consisted of 277 Prostate Imaging Reporting and Data System (PI-RADS) assessment category 4 and 5 targets in 246 patients (mean age, 65.7 years; median prostate specific antigen value, 7.75 ng/mL) who had undergone in-bore biopsy at our institution between 2012 and 2020. Eighty-one patients who underwent RP were eligible for the concordance analysis of biopsy and RP specimen GS. Results: Overall PCa detection rates were 80.5 % per patient (198/246) and 78 % per target (216/277) and 83.5 % and 67.4 % in primary (biopsy naive) and secondary (at least one negative prior biopsy) settings. csPCa was found in 63 % overall, 66 % of patients (132/200) in the primary, and 50 % of patients (23/46) in the secondary biopsy settings (p < 0.001).
The prostate cancer detection rate was 68 % and 92 % in PI-RADS 4 and 5, respectively (p &lt; 0.001). In the radical prostatectomy subcohort, 27.2 % of patients were upgraded, 8.6 % of patients were downgraded from needle biopsy. Significant complications occurred in 1.2 % of patients. Conclusions: MRI-guided in-bore prostate biopsy has a high detection rate of csPCa in primary and secondary biopsy cohorts. Biopsy results were satisfactory in terms of the number of positive cores, cancer percentage in positive cores, and concordance of GS in needle biopsy and RP specimen. abstract_id: PUBMED:37559580 Prostate biopsy approach and complication rates. Prostate biopsy is the gold standard to confirm prostate cancer. In addition to standard 12-core biopsies, magnetic resonance imaging (MRI)-guided prostate biopsies have recently been introduced to improve the detection of clinically significant prostate cancer. The present study aimed to compare the complications after standard transrectal ultrasound-guided and standard plus targeted (MRI-guided) prostate biopsies, to study the impact of the number of biopsy cores on complication rates, and to compare complication rates after transrectal ultrasound-guided prostate biopsies with those following transperineal prostate biopsies from the literature. A prospective study was performed, which included 135 patients who underwent transrectal ultrasound-guided prostate biopsies between April 1 and June 30, 2022, at the Urology Department of the University Hospital of Pointe à Pitre (Pointe à Pitre, Guadeloupe). A total of 51 patients were excluded because of missing information concerning their post-biopsy surveillance. The median age at the time of biopsy was 69 years, median prostate-specific antigen value was 8.9 ng/ml, median prostate volume was 57.5 ml, and median number of cores was 15. A total of 35 of the 84 included patients (41.7%) had a standard biopsy only and 49 (58.3%) had targeted (MRI-guided) plus standard biopsies. A total of 53 patients (63.1%) experienced early side effects, whereas only 24 patients (28.6%) experienced late side effects. Three patients (3.6%) required hospitalization for post-biopsy complications. Early side effects, especially hematuria and hematospermia, occurred significantly more frequently in the targeted plus standard group, with more cores taken, with no significant difference concerning late side effects or infectious complications between the standard and standard plus targeted groups. The admission rate for sepsis after transperineal biopsy has been reported to vary between 0 and 1%, whereas the present study had an admission rate of 2.29% using the transrectal approach. Further studies are required to analyze the complications requiring hospitalization after transrectal and transperineal biopsies. Answer: In-bore transrectal MRI-guided prostate biopsies (MRGB) are considered safe procedures with very low complication rates. A study that included 90 patients scheduled for MRGB found that there were no significant associations between assessed risk factors and biopsy-related complications. The observed complications were minor, with 15 grade I complications such as slight hematuria, minor vasovagal reactions, urinary retention, and positioning-related facial dysesthesia. Only one patient showed acute prostatitis requiring antibiotics as a grade II complication. There were no complications grade III or higher, such as pelvic abscesses, urosepsis, or severe hemorrhages. 
The study concluded that there seem to be no risk factors for complications in the context of in-bore transrectal MRI-guided prostate biopsies (PUBMED:27842662). This finding is consistent with another study that reported no complications observed in a cohort of 57 patients who underwent robotic-assisted MRI-guided transrectal biopsy (PUBMED:31268734). However, it is important to note that these findings are specific to the patient groups and study designs and may not account for all potential risk factors across different populations or biopsy techniques. For comparison, a study exploring risk factors for infectious complications following transrectal ultrasound-guided prostate biopsy (TRUSPB) identified body mass index (BMI), history of diabetes, and preoperative catheterization as independent risk factors for infection after prostate biopsy (PUBMED:30271182). Although this study pertains to TRUSPB and not MRGB, it highlights that certain patient characteristics can influence the risk of complications in prostate biopsy procedures. In summary, the available evidence from the referenced studies suggests that in-bore transrectal MRI-guided prostate biopsies have a very low rate of complications, and no specific risk factors for complications have been identified in the context of MRGB (PUBMED:27842662).
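As a side note on the randomized comparison reported in the first abstract of this entry (37% detection in the MRI-targeted arm, n = 106, versus 39% in the combined arm, n = 104, with a 95% CI for the difference of -16% to 11%), the reported interval can be approximately reproduced with a standard two-proportion normal approximation. The per-arm case counts used below (39 and 41) are inferred from the rounded percentages and are an assumption, not figures stated in the abstract:

```latex
% Illustrative normal-approximation check of the reported difference in
% detection rates (arm A = IB-GB alone vs arm B = FUS-GB + TRUS-GB).
\[
\hat{p}_A \approx \tfrac{39}{106} = 0.368, \qquad
\hat{p}_B \approx \tfrac{41}{104} = 0.394, \qquad
\hat{p}_A - \hat{p}_B \approx -0.026
\]
\[
\mathrm{SE} = \sqrt{\frac{\hat{p}_A(1-\hat{p}_A)}{106} + \frac{\hat{p}_B(1-\hat{p}_B)}{104}} \approx 0.067
\]
\[
95\%\ \mathrm{CI} \approx -0.026 \pm 1.96 \times 0.067 \approx (-0.16,\ 0.11)
\]
```

This matches the interval quoted in the abstract and makes clear why an improvement as large as the prespecified 20 percentage points could be excluded.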
Instruction: Is it necessary to do temporal bone computed tomography of the internal auditory canal in tinnitus with normal hearing? Abstracts: abstract_id: PUBMED:24379749 Is it necessary to do temporal bone computed tomography of the internal auditory canal in tinnitus with normal hearing? Objective: To investigate the compression of the vestibulocochlear nerve in the etiology of the tinnitus in the normal hearing ears with temporal bone computed tomography scans. Methods: A prospective nonrandomized study of 30 bilateral tinnitus and 30 normal hearing patients enrolled in this study. Results: A total of 60 patients (ages ranged from 16 to 87) were included. The tinnitus group comprised 11 males and 19 females (mean age 49,50 ± 12,008) and the control group comprised 6 males and 24 females (mean age 39,47 ± 12,544). Regarding the right and left internal acoustic canals measurements (inlet, midcanal, and outlet canal lengths), there were no significant differences between the measurements of the control and tinnitus groups (P &gt; 0.005). There was no narrowness in the internal acoustic canal of the tinnitus group compared with the control group. High-frequency audiometric measurements of the right and left ears tinnitus group at 8000, 9000, 10000, 11200, 12500, 14000, 16000, and 18000 Hz frequencies were significantly lower than the control group thresholds (P &lt; 0.05). There was high-frequency hearing loss in the tinnitus group. Conclusion: There were no anatomical differences in the etiology of tinnitus rather than physiological degeneration in the nerves. abstract_id: PUBMED:14653919 Congenital internal auditory canal stenosis. Congenital internal auditory canal stenosis is a rare cause of sensorineural hearing loss in children. A retrospective analysis including clinical manifestation and radiological findings was made for seven patients who were diagnosed with congenital internal auditory canal stenosis from 1996 to 2002. Chief presenting symptoms were hearing loss, facial nerve palsy, dizziness, and tinnitus. Hearing loss including deafness was found in five cases, vestibular function loss in four cases, and profound functional loss of facial nerve in two cases. In all cases, the diameter of the internal auditory canal was less than 2 mm on high-resolution temporal bone computed tomography (CT) scan. Two cases revealed bilateral internal auditory canal stenosis, and others were unilaterally involved cases. Congenital internal auditory canal stenosis can be an important cause of sensorineural hearing loss, facial nerve palsy, and vestibular dysfunction. High resolution temporal bone CT scan and magnetic resonance (MR) imaging were important tools for diagnosis. abstract_id: PUBMED:8726907 Temporal bone imaging. Ameliorated computed tomography techniques and new magnetic resonance sequences have led to an important improvement in temporal bone imaging. Computed tomography is still the method of choice for imaging of temporal bone fractures, middle ear disease, and conductive hearing loss, although magnetic imaging can add important information. Patients with lesions of the cerebellopontine angle; internal auditory canal; inner ear; and, in general, all patients with sensorineural hearing loss, vertigo, and tinnitus are best examined with magnetic resonance imaging. In some cases, however, such as congenital malformations and petrous apex lesions, magnetic resonance imaging and computed tomography are complementary. The value of both modalities is discussed. 
abstract_id: PUBMED:35354340 Hemangioma of the External Auditory Canal and Temporal Bone: A Case Report and Comprehensive Literature Review. Although hemangiomas are common in the head and neck area, they are uncommon in the temporal bone. External auditory canal (EAC) hemangiomas are a relatively uncommon otologic condition. In the English literature, only 29 cases of temporal bone hemangioma have been reported. We also present the case of a 32-year-old male patient who presented with an 8-month history of decreased hearing, tinnitus, and aural fullness on the right side. Otoscopic examination of the right ear revealed a smooth, oval, well-circumscribed, reddish, pulsatile, nontender mass, which was about 1 cm in diameter and occupied two-thirds of the EAC. It was soft in consistency, compressible, and appeared to be arising from the right posterior-superior portion of the bony canal wall; the tympanic membrane seemed to be uninvolved. The patient had mild conductive hearing loss of the right ear with an average air-bone gap of 20 dB; the left side was normal. High-resolution contrast-enhanced computed tomographic scanning of the temporal bone showed well-defined, rounded, homogenously enhancing lesions at the posterior-superior aspect of right EAC measuring 0.7 × 0.8 cm. Angiography was performed and there was evidence of vascular blush in the region of the right EAC. Three blood vessels were identified and embolized. The mass was completely removed by the endaural approach, and a retroauricular skin graft to restore cutaneous integrity was not required. Histopathology indicated a capillary hemangioma without cytological atypia or mitotic activity. There was no recurrence 1 year after the surgery. abstract_id: PUBMED:7741502 High resolution computed tomography of the temporal bone: preliminary experience. High Resolution Computed Tomography (HRCT) can demonstrate the detailed anatomy and pathology of the temporal bone and is fast becoming an important imaging modality in the diagnosis and preoperative management of our local patients with temporal bone diseases. We present our experience in the usage of HRCT in temporal bone lesions in Tan Tock Seng Hospital, using 8 cases as illustration. The protocol that we used to obtain the images is described. We also give an overview of those clinical conditions whereby HRCT can provide information which will help surgeons in the management of their patients. These clinical conditions are hearing loss, external auditory canal atresia, middle ear inflammation/cholesteatoma, temporal bone trauma, pulsatile tinnitus and vascular tympanic membrane. abstract_id: PUBMED:21585156 Comparative case series of exostoses and osteomas of the internal auditory canal. Exostoses and osteomas are benign bony lesions of the auditory canal. Although common in the external auditory canal, they are rare and difficult to distinguish in the internal auditory canal (IAC). In this literature review and case presentation, we define radiologic and histologic criteria to differentiate exostoses from osteomas of the IAC. Two patients with exostoses and 1 patient with an osteoma of the IAC are described here. Patient 1 presented with disabling vertigo and was found to have bilateral exostoses with nerve impingement on the right. After removal of the right-sided exostoses via retrosigmoid craniotomy, the patient had complete resolution of her symptoms over 1 year. Patient 2 presented with bilateral pulsatile tinnitus and vertigo and was found to have bilateral IAC exostoses. 
Patient 3 presented with hearing loss and tinnitus, and a unilateral IAC osteoma was ultimately discovered. Because of the mild nature of their symptoms, patients 2 and 3 were managed without surgery. We show that IAC osteomas can be differentiated from exostoses by radiographic evidence of bone marrow in high-resolution computed tomography scans, or by the presence of fibrovascular channels on histologic analysis. Management of these rare entities is customized on the basis of patient symptoms. abstract_id: PUBMED:25381831 Bilateral Internal Auditory Canal Metastasis of Non-small Cell Lung Cancer. We report on a patient with brain metastasis involving bilateral internal auditory canal from non-small cell lung cancer (NSCLC). A 49-year-old woman who had been diagnosed with NSCLC (T2aN1M0) complained of persistent vertigo and bilateral tinnitus for three months. The patient had refused all treatments, including surgery and chemotherapy; however, she sought alternative medicine. The patient's hearing loss showed rapid progression bilaterally, and rotatory vertigo with peripheral-type nystagmus developed. Magnetic resonance imaging of the brain showed irregular nodular enhancement within both internal auditory canals with leptomeningeal enhancement and multiple intracranial metastasis. The patient was treated with epidermal growth factor receptor-tyrosine kinase inhibitor, and the tumor showed partial response. This was a rare case of multiple brain metastases involving bilateral internal auditory canal from known NSCLC presenting with vertigo and hearing loss. abstract_id: PUBMED:19860637 A rare vascular tumor of the external auditory canal: the capillary hemangioma. A 32-year-old woman presented to our department with a 10-month history of right-sided intermittant otorrhagia. There was no history of hearing loss or pulsatile tinnitus. Otomicroscopic examination revealed a reddish mass arising from the right antero-superior portion of bony canal wall, which measured about 1 cm in diameter. The tympanic membrane seemed to be uninvolved. A computed tomography scan of the temporal bone showed 0.6 x 0.8 cm diameter soft-tissue mass arising from the right external auditory canal, 0.5 cm away from tympanic membrane. The lesion was excised via a transcanal approach under local anesthesia. The histopathologic assessment indicated a capillary hemangioma. There was no recurrence four years after the surgery. Hemangioma of the external auditory canal is a rare otologic entity. It is commonly classified as capillary or cavernous hemangioma. According to the literature, this case represents the second patient with capillary hemangioma of the external auditory canal. abstract_id: PUBMED:26251810 Lipochoristoma of the Internal Auditory Canal. Objective Lipochoristomas comprise 0.1% of all cerebellopontine angle tumors and may be misdiagnosed as vestibular schwannomas. We describe a lipochoristoma of the internal auditory canal (IAC) and present the salient features of the evaluation, diagnosis, and management. Study Design and Methods Retrospective case review. Results (Case Report) A 51-year-old woman presented with left-sided severe hearing loss, tinnitus, mild episodic vertigo, and facial paresthesias progressive over 1 year. Magnetic resonance imaging (MRI) demonstrated a small (5 × 4 mm) T1 hyperintense lobulated lesion in the distal internal auditory canal. With fat-suppressed images, there was no enhancement of the lesion. A diagnosis of IAC lipochoristoma was made. 
Conservative management was recommended, and on 17-month follow-up there was no interval growth. The patient remains symptomatically stable with improved equilibrium but persistent left-sided hearing loss. Conclusions Differential diagnosis of an enhancing lesion in the IAC includes acoustic neuroma, meningioma, epidermoid and arachnoid cysts, lipochoristoma, and metastatic tumors. Fat-suppressed MRI sequences aid in a definitive diagnosis of lipochoristomas. Because lipochoristomas may have a tendency for more indolent growth and intimate involvement of the auditory nerve, conservative management with interval imaging is recommended. Surgical treatment is reserved for growing lesions or those with disabling vestibular symptoms. abstract_id: PUBMED:31561213 Endoscopic transmastoid posterior petrosal approach for locally aggressive tumors in the petrous part of the temporal bone involving the internal auditory canal, jugular foramen, and hypoglossal canal. Objective: The posterior petrosal approach is an established surgical method offering wide access to skull base lesions through mastoid air cells. The authors describe their experience with the endoscopic transmastoid "posterior petrosal" approach (EPPAP) for skull base tumors involving the internal auditory canal (IAC), jugular foramen, and hypoglossal canal. Methods: The EPPAP was performed for 7 tumors (3 chordomas, 2 chondrosarcomas, 1 schwannoma, and 1 solitary fibrous tumor). All surgical procedures were performed under endoscopic visualization with mastoidectomy. The compact bone of the mastoid air cells and posterior surface of the petrous bone are carefully removed behind the semicircular canals. When removal of cancellous bone is extended superomedially through the infralabyrinthine space, the surgeon can expose the IAC and petrous portion of the internal carotid artery to reach the petrous apex (infralabyrinthine route). When removal of cancellous bone is extended inferomedially along the sigmoid sinus, the surgeon can safely reach the jugular foramen (transjugular route). Drilling of the inferior surface of petrous bone is extended further inferoposteriorly behind the jugular bulb to approach the hypoglossal canal and parapharyngeal space through the lateral aspect of the occipital condyle (infrajugular route). Results: Of the 7 tumors, gross-total resection was achieved in 4 (57.1%), subtotal resection (&gt; 95% removal) in 2 (28.6%), and partial resection (90% removal) in 1 (14.2%). Postoperatively, 2 of 3 patients with exudative otitis media showed improvement of hearing deterioration, as did 2 patients with tinnitus. Hypoglossal nerve palsy and swallowing difficulty were improved after surgery in 2 patients and 1 patient, respectively. In 1 patient with severe cranial nerve deficits before surgery, symptoms did not show any improvement. Conclusions: The authors present their preliminary experience with EPPAP for skull base tumors in the petrous part of the temporal bone and the lateral part of the occipital condyle involving the cranial nerves and internal carotid arteries. The microscope showed a higher-quality image and illumination in the low-power field. However, the endoscope could offer wider visualization of the surgical field and contribute to minimizing the size of the surgical pathways, necessity of brain retraction, and eventually the invasiveness of surgery. 
Thus, the EPPAP may be safe and effective for skull base tumors in the petrous region, achieving balance between the radicality and invasiveness of the skull base surgery. Answer: Based on the information provided in the abstracts, it appears that temporal bone computed tomography (CT) of the internal auditory canal may not be necessary in cases of tinnitus with normal hearing when there is no indication of anatomical abnormalities. A study specifically addressing this question found no significant anatomical differences in the internal auditory canal between a tinnitus group with normal hearing and a control group without tinnitus (PUBMED:24379749). The study concluded that there were no anatomical differences in the etiology of tinnitus, suggesting that physiological degeneration in the nerves might be responsible rather than compression of the vestibulocochlear nerve. However, it is important to note that temporal bone imaging can be crucial in diagnosing other conditions that may present with tinnitus, such as congenital internal auditory canal stenosis (PUBMED:14653919), hemangiomas (PUBMED:35354340; PUBMED:19860637), exostoses and osteomas (PUBMED:21585156), metastasis from non-small cell lung cancer (PUBMED:25381831), lipochoristoma (PUBMED:26251810), and other aggressive tumors involving the internal auditory canal (PUBMED:31561213). These conditions can be associated with sensorineural hearing loss, vertigo, and other symptoms, and imaging is essential for their diagnosis and management. In summary, while routine temporal bone CT may not be necessary for patients with tinnitus who have normal hearing and no other symptoms (PUBMED:24379749), it is an important diagnostic tool for a variety of other otologic conditions that can present with tinnitus as a symptom. Therefore, the decision to perform imaging should be based on the clinical presentation and the presence of other symptoms or risk factors that might indicate an underlying anatomical or pathological cause for the tinnitus.
Instruction: Socioeconomic trajectories across the life course and health outcomes in midlife: evidence for the accumulation hypothesis? Abstracts: abstract_id: PUBMED:15256527 Socioeconomic trajectories across the life course and health outcomes in midlife: evidence for the accumulation hypothesis? Background: Recent research in social epidemiology has established the importance of considering the accumulation of advantage and disadvantage across the life course when examining adult health outcomes. This paper examines (1) accumulation across trichotomous categories of socioeconomic position (SEP), and (2) accumulation in analysis stratified by adult SEP. Methods: Data are from the Whitehall II study. Each participant was categorized as having high (0), intermediate (1), or low (2) SEP at three time points in the life course, leading to 27 socioeconomic trajectories. These trajectories were summarized to yield a scale ranging from 0 (high SEP at all three time points) to 6 (low SEP at all three time points). Logistic regression was used to examine odds of incident coronary heart disease (CHD), poor mental and physical functioning, and minor psychiatric disorder. Results: There was a graded linear relationship between accumulation of socioeconomic exposure and health. Men with a score of 6 had increased odds of CHD (2.53, 95% CI: 1.3, 5.1), poor physical functioning (2.19, 95% CI: 1.4, 4.1), and poor mental functioning (2.60, 95% CI: 1.4, 4.9) compared with men with a score of 0. In women there was an accumulation effect for CHD and physical functioning. No cumulative effect of SEP on minor psychiatric disorder was observed. The effects of accumulation were weaker in analyses stratified by adult SEP, with early deprivation followed by high adult SEP particularly detrimental for CHD. Conclusions: The health effects of socioeconomic disadvantage accumulate over the life course. In addition to accumulation effects, analysis stratified by adult SEP also provided support for the critical period and the pathway model. abstract_id: PUBMED:25683678 Socioeconomic inequalities in health trajectories in Switzerland: are trajectories diverging as people age? Do socioeconomic differences in health status increase as people age, reflecting cumulative advantage or disadvantage in health trajectories? Life course research hypothesises that cumulative advantage/disadvantage (CAD) is an important underlying social process that shape inequalities as people age. The objective of this study is to examine whether health trajectories are diverging as people age across socioeconomic positions (education, employment status and income). In a random sample of 3,665 respondents living in Switzerland (Swiss Household Panel 2004-2011), trajectories of self-rated health, body mass index, depression and medicated functioning were examined with multilevel regression models. The results showed that employment status and income were associated with diverging health trajectories among men; however, only a few associations supported the CAD hypothesis. Education was rarely associated with diverging health trajectories. In conclusion, little evidence was found to support the CAD model. abstract_id: PUBMED:34733129 Lifetime socioeconomic determinants of health trajectories among older adults. Drawing on life course theory and research, we explored how socioeconomic circumstances during childhood and adulthood shape self-reported health trajectories among older Mexican adults. 
We used data from the Mexican Health and Aging Study panel survey (2001-2015) and used sequence analysis to estimate types of self-reported health trajectories in older adulthood. We then explored the association between those health trajectories and socioeconomic determinants at different life stages, including education, occupation, employment, economic status, parental education, and adverse living conditions and illnesses during childhood. Our contributions are threefold. First, we identified four types of health trajectories for men and eight for women, representing a more nuanced longitudinal health status profile than previously shown. Second, we found that childhood and adult socioeconomic circumstances influence self-reported health trajectories at older age. Third, our results suggest there is no simple monotonic relationship between life course circumstances and self-reported health trajectories. abstract_id: PUBMED:33940326 Alcohol use trajectories across the life course: Influences of childhood predictors and consequences for late-life health. Background: The cumulative, negative health effects of alcohol consumption are exacerbated in older adulthood. We used a 'life course epidemiology' approach to explore how alcohol use trajectories develop across the lifespan, what early life events influence these trajectories and their associations with late-life health. Methods: Survey data combined with retrospective life course history interviews were collected from 749 non-lifetime alcohol abstainer adults aged 61-81 years (51 % female). Frequency and quantity items of the AUDIT-C assessed alcohol use across each decade of life. Early life factors were childhood socioeconomic status, parental health behaviours, and age of drinking onset. Health outcomes were alcohol-related conditions. Results: Latent class growth analysis yielded two life course trajectories for women: consistently infrequent, low quantity drinking (Group 1: 48 %) and increasingly frequent, low quantity drinking (Group 2: 52 %). Men showed three trajectories: consistently infrequent, low quantity drinking (Group 3: 36 %); increasingly frequent, low quantity drinking (Group 4: 51 %); and drinking with increasing frequency and quantity until midlife, after which consumption gradually declined (Group 5: 13 %). Better childhood socioeconomic status was associated with Groups 2 and 4. Later drinking onset was associated with Groups 1 and 3. Parental alcohol misuse, early drinking initiation and childhood socioeconomic adversity were predictive of Group 5. Those in Group 5 were five-to-seven times more likely to have alcohol-related comorbidities. Conclusions: Early life experiences influence life course hazardous alcohol use. Interventions across the life course, from childhood, when drinking may be initiated, through to older adulthood, when sensitivity to alcohol increases, are needed. abstract_id: PUBMED:28553816 Heterogeneous Trajectories of Depressive Symptoms in Late Middle Age: Critical Period, Accumulation, and Social Mobility Life Course Perspectives. Objective: This study investigated patterns of depressive symptoms and whether socioeconomic status (SES) across the life course affects these trajectories using the critical period, accumulation, and social mobility models. Method: This study uses data from 8,532 adults, age 51 to 64, collected over 12 years from the Health and Retirement Study (observations = 25,887). 
A latent class analysis was performed to examine distinct depressive symptom trajectories; life course models were studied with multinomial logistic regression. Results: Four heterogeneous latent classes were identified for depression: Declining, Low, Increasing, and High and Increasing. The High and Increasing group was associated with a disadvantaged childhood SES, accumulated exposure to socioeconomic risks, and persistent SES disadvantage supporting the three life course models. Discussion: There was evidence of distinct profiles of depressive symptoms in late middle age and of interrelated life course mechanisms underlying the influences of childhood SES on later life depression. abstract_id: PUBMED:33740588 Socioeconomic status trajectories across the life course, daily discrimination, and inflammation among Black and white adults. Objective: This study replicates and expands Surachman et al.'s (2020) findings documenting socioeconomic status (SES) trajectories across the life course in an independent sample of Black (majority recruited from Milwaukee, WI) and white adults in the United States. We extend this work by examining whether SES trajectories and daily discrimination are independently associated with markers of inflammation. Method: Data were from 215 Black adults (188 recruited from Milwaukee, WI; 27 recruited from across the continental US) and 985 white adults (7 recruited from Milwaukee, WI; 978 recruited from across the continental US) who completed the baseline interview and biomarker assessment during the second wave of the Midlife in the United States (MIDUS) Study (ages = 34-84). SES life course trajectories were examined using latent class analysis based on objective (e.g., income and education) and subjective (e.g., social status and financial strain) indicators of SES. The association between life course SES trajectories and daily discrimination with markers of inflammation (IL-6, CRP, fibrinogen) were examined using multiple linear regression analyses, controlling for demographic, psychological, behavioral, and health-related covariates. Results: Black and white participants showed different patterns of life course SES trajectories. Among Black participants, the trajectories were Objectively Stable Low (45.16%), Downwardly Mobile (18.05%), and Upwardly Mobile (36.79%). Compared to the Upwardly Mobile, the Objectively Stable Low class showed elevated IL-6 after controlling for all covariates. Further, daily discrimination, but not SES trajectories, was significantly associated with CRP and fibrinogen after controlling for demographic, psychological, and behavioral covariates. White participants' experiences of life course SES trajectories were characterized as Objectively Stable Low (7.02%), Subjectively Downward (12.48%), Upwardly Mobile (39.99%), and Stable High (40.51%). Among white participants, SES trajectories, but not daily discrimination, were associated with all markers of inflammation (controlling for age and sex). Discussion: Consistent with the fundamental cause theory, multiple independent pathways link SES trajectories across the life course and daily discrimination to racial disparities in IL-6, CRP, and fibrinogen. abstract_id: PUBMED:24833250 The influence of work-family conflict trajectories on self-rated health trajectories in Switzerland: a life course approach. Self-rated health (SRH) trajectories tend to decline over a lifetime. 
Moreover, the Cumulative Advantage and Disadvantage (CAD) model indicates that SRH trajectories are known to consistently diverge along socioeconomic positions (SEP) over the life course. However, studies of working adults to consider the influence of work and family conflict (WFC) on SRH trajectories are scarce. We test the CAD model and hypothesise that SRH trajectories diverge over time according to socioeconomic positions and WFC trajectories accentuate this divergence. Using longitudinal data from the Swiss Household Panel (N = 2327 working respondents surveyed from 2004 to 2010), we first examine trajectories of SRH and potential divergence over time across age, gender, SEP and family status using latent growth curve analysis. Second, we assess changes in SRH trajectories in relation to changes in WFC trajectories and divergence in SRH trajectories according to gender, SEP and family status using parallel latent growth curve analysis. Three measures of WFC are used: exhaustion after work, difficulty disconnecting from work, and work interference in private family obligations. The results show that SRH trajectories slowly decline over time and that the rate of change is not influenced by age, gender or SEP, a result which does not support the CAD model. SRH trajectories are significantly correlated with exhaustion after work trajectories but not the other two WFC measures. When exhaustion after work trajectories are taken into account, SRH trajectories of higher educated people decline slower compared to less educated people, supporting the CAD hypothesis. abstract_id: PUBMED:26047691 Economic hardship in childhood and adult health trajectories: An alternative approach to investigating life-course processes. In this study, we advance existing research on health as a life course process by conceptualizing and measuring both childhood disadvantage and health as dynamic processes in order to investigate the relationship between trajectories of early life socioeconomic conditions and trajectories of health in midlife. We utilize a trajectory-based analysis that takes a disaggregated, person-centered approach to understand dynamic trajectories of health as latent variables that reflect the timing, duration and change in health conditions experienced by respondents over a period of 10 years in midlife as a function of stability and change in exposure to economic hardship in early life. Results from repeated-measures latent class analysis of longitudinal data from the Panel Study of Income Dynamics indicate that economic hardship in childhood has long-term, negative consequences for health both among individuals beginning life and remaining in poverty as well as those moving into poverty. In contrast, adults with more advantaged early life experiences, or who moved out of poverty during the period of observation, were at a lower risk of experiencing health trajectories characterized by the early onset or increasing risk of disease. We argue that a person-centered, disaggregated approach to the study of the relationship between socioeconomic status and health across the life course holds potential for the study of health inequality and that a greater focus on trajectory-based analysis is needed. abstract_id: PUBMED:30898851 Socioeconomic trajectories across the life course and risk of total and cause-specific mortality: prospective findings from the Moli-sani Study. 
Background: A life course approach has been suggested as the most appropriate to establish the total impact of socioeconomic status (SES) on adult health outcomes; however, such an approach has been poorly used within Mediterranean populations. We aimed to examine the SES trajectories from childhood to adulthood associated with mortality risk in a large general population-based cohort and to test potential pathways (eg, inflammation) underlying such associations. Methods: Longitudinal analyses on 22 194 subjects recruited in the Moli-sani Study, Italy (2005-2010). Low and high SES in childhood, educational attainment (low/high) and SES during adulthood (measured by a score including material resources and dichotomised as low/high) were used to define overall trajectories. Results: Over 8.3 years of follow-up, 1155 deaths occurred. In the group with poor childhood SES, an upward trajectory in both educational and material circumstances was associated with lower risk of all-cause death (HR=0.64; 95% CI 0.47 to 0.87), as opposed to subjects who remained stably low (low education and adulthood SES). Subjects with high childhood SES, but not educational achievement, were at increased risk of total and cardiovascular disease (CVD) death, although reporting higher material SES in adult life, as compared with the stably high SES group (HR=1.44; 1.02 to 2.02 and HR=1.90; 1.10 to 3.28, respectively). Inflammatory markers marginally accounted for such associations. Conclusion: For individuals with low SES in early life, an educational and material upward trajectory over the life course was associated with lower mortality risk. In the high SES childhood group, lack of a higher educational attainment appeared to be unfavourably associated with survival. abstract_id: PUBMED:28556801 Heterogeneous Trajectories of Physical and Mental Health in Late Middle Age: Importance of Life-Course Socioeconomic Positions. Drawing on life course and cumulative disadvantage theory, this study examines heterogeneous trajectories of functional limitations and depressive symptoms among late middle-aged individuals. This study used prospective data from 6010 adults, 51 to 64 years old, collected over a 12-year-period from the Health and Retirement Study. Considering the empirical proposition that several physical and mental trajectories may exist, Latent Class Growth Modeling was used. Five heterogeneous patterns of joint trajectories (Relatively healthy, Moderately improving, Steadily deteriorating, Steeply deteriorating, and Persistently high comorbid) were identified. Early life adversity was related to an increasing risk of declines in physical and mental health. The Persistently high comorbid class was characterized by a concentration of disadvantages over the life course. The development of public health interventions could help reduce co-existing physical and mental health problems, especially during late middle-age. Answer: The accumulation hypothesis suggests that the health effects of socioeconomic disadvantage accumulate over the life course. Evidence supporting this hypothesis can be found in several studies that have examined the relationship between socioeconomic trajectories and health outcomes in midlife. For instance, the Whitehall II study found a graded linear relationship between the accumulation of socioeconomic exposure and health outcomes in midlife. 
Men with the lowest socioeconomic position (SEP) across three life stages had increased odds of coronary heart disease (CHD), poor physical functioning, and poor mental functioning compared to men with high SEP at all three life stages. This effect was also observed in women for CHD and physical functioning, although no cumulative effect of SEP on minor psychiatric disorder was observed. The study concluded that health effects of socioeconomic disadvantage do indeed accumulate over the life course (PUBMED:15256527). Similarly, a study using data from the Mexican Health and Aging Study panel survey identified various types of health trajectories for older adults and found that both childhood and adult socioeconomic circumstances influence self-reported health trajectories at older age. This suggests that socioeconomic factors at different life stages contribute to health outcomes later in life (PUBMED:34733129). Another study that examined alcohol use trajectories across the life course found that early life experiences, such as childhood socioeconomic status and parental health behaviors, influence life course hazardous alcohol use. This study also indicated that interventions across the life course are needed to address the cumulative negative health effects of alcohol consumption (PUBMED:33940326). Moreover, research using data from the Health and Retirement Study identified distinct depressive symptom trajectories in late middle age and found that high and increasing depressive symptoms were associated with disadvantaged childhood SES, accumulated exposure to socioeconomic risks, and persistent SES disadvantage, supporting the critical period, accumulation, and social mobility models (PUBMED:28553816). In contrast, a study in Switzerland found little evidence to support the cumulative advantage/disadvantage model, as health trajectories did not show significant divergence across socioeconomic positions (PUBMED:25683678). Overall, the evidence from these studies suggests that socioeconomic trajectories across the life course are indeed associated with health outcomes in midlife, providing support for the accumulation hypothesis. However, the strength and pattern of this association may vary across different populations and health outcomes.
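The Whitehall II analysis cited above operationalizes accumulation by scoring SEP as 0 (high), 1 (intermediate) or 2 (low) at three time points and summing to a 0 to 6 scale. A minimal way to write down that accumulation model is sketched below; the published analysis may have entered the score as categories rather than a single linear trend, so the log-linear reading of the reported odds ratio is illustrative only:

```latex
% Cumulative socioeconomic exposure over three life-course time points.
\[
S = \sum_{t=1}^{3} \mathrm{SEP}_t, \qquad \mathrm{SEP}_t \in \{0, 1, 2\}, \qquad S \in \{0, \dots, 6\}
\]
% Accumulation (dose-response) model for incident coronary heart disease.
\[
\operatorname{logit} \Pr(\mathrm{CHD} = 1 \mid S) = \beta_0 + \beta_1 S
\]
% If the graded relationship were exactly log-linear, the odds ratio of 2.53
% for S = 6 versus S = 0 would correspond to
\[
e^{6\beta_1} = 2.53 \;\Rightarrow\; \beta_1 = \tfrac{\ln 2.53}{6} \approx 0.15 \text{ per unit of the score.}
\]
```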
Instruction: Tone burst-galvanic ratio of vestibular evoked myogenic potential amplitudes: a new parameter of vestibular evoked myogenic potential? Abstracts: abstract_id: PUBMED:32086900 Cervical vestibular-evoked myogenic potential induced by galvanic vestibular stimulation in normal people Objective: To establish a new method for detecting vestibular function by testing cervical vestibular-evoked myogenic potential induced by galvanic vestibular stimulation in a normal population. Method: Twenty normal ears were tested for cervical vestibular evoked myogenic potential induced by galvanic vestibular stimulation. SPSS 18.0 software was used to analyze the obtained data. Result: In all healthy subjects, mastoid-forehead galvanic vestibular stimulation produced a positive-negative biphasic EMG response on the SCM ipsilateral to the cathodal electrode. The latency of p13 was (11.52±3.05) ms. The latency of n23 was (15.31±3.38) ms. The amplitude of p13-n23 was (40.55±27.93) μV. The interval of p13-n23 was (3.53±1.38) ms. The interaural asymmetry ratios (AR, %) for p13 latency, n23 latency, amplitude and interval were, respectively, (6.96±6.79)%, (6.47±5.93)%, (28.08±26.42)% and (16.61±11.11)%. There was no significant difference in any parameter between the right and left ears of the subjects. Conclusion: Normative values for cervical vestibular-evoked myogenic potentials induced by galvanic vestibular stimulation can be established in normal subjects, providing a basis for the diagnosis, treatment and mechanistic study of auditory neuropathy and vestibular neuropathy. abstract_id: PUBMED:35434345 Galvanic vestibular-evoked myogenic potentials in evaluating damaged sites of vestibular neuritis. Objectives: To test the possibility of pure otolithic organ deficits and validate the histopathological evidence of retrovestibular neural impairment in vestibular neuritis (VN), the authors adopted a topographic survey combining cervical vestibular-evoked myogenic potential (cVEMP) and ocular vestibular-evoked myogenic potential (oVEMP) using various stimulation modes and caloric tests. Methods: Forty patients with VN were enrolled in this study. All patients underwent pure tone audiometry, acoustic cVEMP, galvanic cVEMP, vibratory oVEMP, galvanic oVEMP, and caloric tests. Different combinations of vestibular tests were further compared and analyzed. Results: According to vestibular test results in affected VN ears, the proportion (10%) of pure saccular dysfunction was significantly less than that (52.5%) of saccular nerve deficit. The proportion (2.5%) of pure utricular dysfunction was significantly less than that (37.5%) of utricular nerve deficit. The percentage (82.5%) of VN involving the ampullar vestibulo-ocular reflex (VOR) pathway was significantly higher than that (40%) involving the utriculo-ocular reflex (UOR) pathway. The superior, inferior, and total VN percentages were 37.5%, 17.5%, and 45%, respectively. The proportion of inferior VN was significantly less than that of VN involving the superior vestibular nerve. Conclusion: There were significantly fewer cases of pure otolithic organ dysfunction than vestibular nerve involvement in VN patients. The damage to the ampullar VOR pathway was more significant than that to the UOR pathway, and both pathways might be independent of each other. In addition, the incidence of isolated inferior VN was significantly less than that of VN involving the superior vestibular nerve. Level Of Evidence: Level 3.
abstract_id: PUBMED:33012701 Galvanic vestibular evoked myogenic potentials: normative data and the effect of age. Introduction: Galvanic vestibular evoked myogenic potentials evaluate vestibular nerve responses using electric stimulation by records collected from the sternocleidomastoid muscle. A normal vestibular evoked myogenic potential response consists of the first positive, P1, and negative, N1, peaks. The response can be affected by factors such as age and gender and is also consequential in the diagnosis of pathologies. Objectives: The present study was performed to obtain normative data on healthy adults, to help in diagnosis by establishing clinical norms as well as to investigate changing test parameters with age in galvanic vestibular evoked myogenic potentials. Methods: A total of 100 healthy participants were included in the study. Galvanic vestibular evoked myogenic potential (current 3 mA, duration 1 ms) was performed randomly on both ears of each participant. The participants between the ages of 18-65 (mean age 39.7 ± 13.9) were divided into 5 groups according to their ages. Normative data of galvanic vestibular evoked myogenic potentials parameters were calculated in groups and in total, and age-related changes were examined. Results: The galvanic vestibular evoked myogenic potential waveform was elicited from all participants (200 ears). The latency of P1 and N1 was 7.82 ± 3.29 ms and 22.06 ± 3.95 ms, respectively. The P1-N1 amplitude value was 66.64 ± 24.5 µV. The percentage of vestibular asymmetry was 16.29 ± 11.99%. The latencies of P1 and N1 and P1-N1 amplitude values demonstrated significant differences among different age groups (p &lt; 0.01). Conclusions: The results of this study show that as age increased, latencies were prolonged, and amplitudes gradually decreased. The normative data aids in the diagnosis of retrolabyrinthine lesions and the increase in the clinical use of galvanic vestibular evoked myogenic potentials. abstract_id: PUBMED:35469706 Effectiveness of galvanic vestibular evoked myogenic potential for evaluation of Meniere's disease. Objectives: Although the cause of Meniere's disease (MD) is not fully understood, endolymphatic hydrops is widely believed to be responsible for MD. Previous studies have used Air-Conducted Sound (ACS)-induced Vestibular Evoked Myogenic Potentials (VEMPs) to evaluate otolithic function in patients with MD. However, the use of Galvanic Vestibular Stimulation-VEMPs (GVS-VEMPs) with other vestibular tests in MD has been rare. This study aimed to explore the application of galvanic VEMPs in assessing MD. Methods: Normal individuals and patients with unilateral definite MD were included in this retrospective study. All participants underwent pure tone audiometry. Ocular and cervical VEMPs induced by GVS, and ACS were recorded. The characteristic parameters of VEMPs (n1 latency, p1 latency, amplitude, and AR) were analyzed. Results: The provocation rates of GVS-VEMPs did not differ between MD patients and control individuals. Compared with ACS, GVS could evoke potentials with longer latencies. MD patients presented GVS-VEMPs with lower amplitudes and ACS-cVEMP with shorter latencies and had a higher response rate in GVS-oVEMP. However, no differences or correlations were found in the characteristic parameters of GVS-VEMPs among the different stages of MD. Conclusions: GVS is as effective as ACS for inducing VEMP, and GVS-VEMP recording can detect retrolabyrinthine degeneration in MD. 
Further research is needed to assess the utility of GVS-VEMP in the evaluation of MD severity. Level Of Evidence: Level 4. abstract_id: PUBMED:36694985 Characteristics of ipsilateral, contralateral and bilateral masseter vestibular-evoked myogenic potential in healthy adults. Objective: This study aimed to characterise the ipsilateral, contralateral and bilateral masseter vestibular-evoked myogenic potential using clicks and 500 Hz tone burst stimuli in healthy adults. Method: Masseter vestibular-evoked myogenic potential was recorded from 20 healthy participants aged 19-28 years (11 males and 9 females). Masseter vestibular-evoked myogenic potential was recorded using 500 Hz tone burst and click stimuli in ipsilateral, contralateral and bilateral modes. Results: A statistically significant difference was observed between ipsilateral and contralateral stimulation for p11 latency, n21 latency and p11-n21 amplitude for both click and 500 Hz tone burst stimuli. The amplitude of the p11-n21 complex was higher for ipsilateral, contralateral and bilateral stimulations for 500 Hz tone burst than for click stimulus. Conclusion: This study showed a significant difference for p11-n21 amplitude between click and 500 Hz tone burst evoked masseter vestibular-evoked myogenic potential. In addition, bilateral stimulation elicited a larger response than ipsilateral and contralateral stimulation. abstract_id: PUBMED:28719825 Degeneration of the vestibular nerve in unilateral Meniere's disease evaluated by galvanic vestibular-evoked myogenic potentials. Objective: The staging system of Meniere's disease utilizes audiograms to probe cochlear dysfunction. We explored the addition of galvanic vestibular-evoked myogenic potentials (VEMP) to further explore vestibular function. Methods: Seventy patients with unilateral Meniere's disease were enrolled in this study. Within 2weeks of diagnosis, all subjects underwent pure tone audiometry, cervical and ocular VEMP, and caloric test. The prevalence of abnormal tests and the VEMP characteristic parameters such as latencies and amplitudes were analyzed. Results: In affected ears, the abnormal rate of acoustic cVEMPs, galvanic cVEMPs, vibratory oVEMPs and galvanic oVEMPs was 37%, 17%, 20%, and 9%, respectively. No significant differences existed in VEMP latencies and amplitudes between affected ears and unaffected ears. Conclusions: The impairment of otolithic organs was found to be more than that of vestibular afferents. The deterioration of the saccule was more than that of the utricle, whereas retrolabyrinthine degeneration of sacculo-collic reflex and vestibulo-ocular reflex was similar. Significance: This study is the first to use an electrophysiological test to evaluate the retrolabyrinthine function of patients with unilateral Meniere's disease. abstract_id: PUBMED:20955634 Comparison of vestibular evoked myogenic potentials elicited by click and short duration tone burst stimuli. Introduction: Vestibular evoked myogenic potentials are short latency electrical impulses that are produced in response to higher level acoustic stimuli. They are used clinically to diagnose sacculocollic pathway dysfunction. Aim: This study aimed to compare the vestibular evoked myogenic potential responses elicited by click stimuli and short duration tone burst stimuli, in normal hearing individuals. Method: Seventeen subjects participated. In all subjects, we assessed vestibular evoked myogenic potentials elicited by click and short duration tone burst stimuli. 
Results And Conclusion: The latency of the vestibular evoked myogenic potential responses (i.e. the p13 and n23 peaks) was longer for tone burst stimuli compared with click stimuli. The amplitude of the p13-n23 waveform was greater for tone burst stimuli than click stimuli. Thus, the click stimulus may be preferable for clinical assessment and identification of abnormalities as this stimulus has less variability, while a low frequency tone burst stimulus may be preferable when assessing the presence or absence of vestibular evoked myogenic potential responses. abstract_id: PUBMED:23833997 Present situation and development of ocular vestibular-evoked myogenic potential Myogenic potentials evoked by air conducted sound (ACS), bone conducted vibration (BCV) or galvanic pulses can be recorded with surface electrodes over contracted muscles. These myogenic potentials are of vestibular origin (utricle and saccule) and so these potentials are called vestibular evoked myogenic potentials (VEMPs). Since the vestibular system has projections to many muscle systems, there are many such VEMPs. In this review, we discuss the generated origin, response pathway, waveform characteristics and clinical application of ocular vestibular-evoked myogenic potential (oVEMP). abstract_id: PUBMED:23927833 Role of cervical vestibular evoked myogenic potential response in identifying vestibular dysfunction. Objectives: To analyse cervical vestibular evoked myogenic potential response parameters in normal volunteers and vertiginous patients. Subjects And Methods: A prospective study of 50 normal subjects and 50 patients with vertigo was conducted at Chiang Mai University Hospital, Thailand. Cervical vestibular evoked myogenic potential responses were measured using air-conducted, 500-Hz, tone-burst stimuli with subjects in a sitting position with their head turned toward the contralateral shoulder. Results: The mean ± standard deviation age and male:female ratio in the normal (44.0 ± 9.3 years; 12:38) and vertigo groups (44.7 ± 9.8 years; 17:33) were not significantly different. The prevalence of absent responses in the normal (14 per cent) and vertigo ears (46 per cent) differed significantly (p &lt; 0.0001). Other cervical vestibular evoked myogenic potential parameters (i.e. response threshold, P1 and N1 latency, P1–N1 interlatency and interamplitude, inter-ear difference in P1 threshold, and asymmetry ratio) showed no inter-group differences. Conclusion: The absence of a cervical vestibular evoked myogenic potential response is useful in the identification of vestibular dysfunction. However, patients should undergo a comprehensive battery of other vestibular tests to supplement their cervical vestibular evoked myogenic potential response findings. abstract_id: PUBMED:26223715 Comparison of Tone Burst, Click and Chirp Stimulation in Vestibular Evoked Myogenic Potential Testing in Healthy People. Objective: Vestibular evoked myogenic potential (VEMP) is a clinical test used in the diagnosis of vestibular diseases. VEMP uses several stimulants to stimulate the vestibular system and measure myogenic potentials. The aim of this study was to compare the effects of tone burst, click, and chirp stimulation in VEMP on the latency and amplitude of myogenic potentials. Materials And Methods: We compared the results of 78 ears from 39 volunteers. 
We measured the sternocleidomastoid muscle potential of each ear following a 500-Hz tone burst, click, and chirp stimulation while in a sitting position and evaluated the latency and amplitude. Results: The tone burst stimulus resulted in waves with longer latency (15.8±1.9 ms) but higher amplitude (35.9±17.1 µV) compared with the other stimuli, and the chirp stimulus resulted in waves with shorter latency (9.9±2.4 ms) but lower amplitude (33±18.6 µV) (p < 0.001). The VEMP asymmetry ratio did not significantly differ. Conclusion: Because the amplitudes and latencies of different stimuli significantly differ, further studies including more patients and stimulus types are needed to obtain standardized VEMP protocols. Answer: The abstracts provided do not explicitly mention a "tone burst galvanic ratio of vestibular evoked myogenic potential amplitudes" as a parameter. However, they do discuss various aspects of vestibular evoked myogenic potentials (VEMPs) and the use of different stimuli, including tone bursts and galvanic stimulation, to elicit these potentials. For instance, the study in abstract PUBMED:20955634 compares VEMP responses elicited by click stimuli and short duration tone burst stimuli, finding that the latency of VEMP responses was longer for tone burst stimuli compared with click stimuli, and the amplitude of the p13-n23 waveform was greater for tone burst stimuli than click stimuli. This suggests that tone burst stimuli can be used to elicit VEMPs with specific characteristics.
Instruction: Smoking and diabetes: is the association mediated by adiponectin, leptin, or C-reactive protein? Abstracts: abstract_id: PUBMED:20231844 Are there socioeconomic inequalities in cardiovascular risk factors in childhood, and are they mediated by adiposity? Findings from a prospective cohort study. Background: Socioeconomic gradients in adiposity were not present during childhood for previous generations, but have emerged in contemporary children. It is unknown whether this translates to socioeconomic gradients in associated cardiovascular risk factors in children, with consequent implications for inequalities in coronary heart disease (CHD) and diabetes when these children reach adulthood. Methods: Using data from 7772 participants aged 10-years from the Avon Longitudinal Study of Parents and Children, we examined the association between maternal education and a large number of cardiovascular risk factors (cholesterol, triglycerides, high-density lipoprotein, apolipoprotein, adiponectin, leptin, C-reactive protein (CRP), interleukin-6 (IL-6) and systolic and diastolic blood pressure), and examined whether inequalities were mediated by adiposity, measured by dual energy X-ray absorptiometry (DXA)-assessed total fat mass. Results: There were socioeconomic differences in a number of the cardiovascular risk factors (apolipoprotein B, systolic and diastolic blood pressure, CRP, leptin and IL-6). Inequalities were greater in girls than boys. Inequalities in CRP and leptin were completely mediated by adiposity. Inequalities in other cardiovascular risk factors were partially mediated by adiposity. Conclusion: This study showed important socioeconomic inequalities in adiposity and associated cardiovascular risk factors in a contemporary UK population of 10-year-old children. Differences between contemporary children and previous generations in the socioeconomic patterning of cardiovascular risk factors suggest future adults may have greater inequalities in diabetes and CHD than current adults. These findings highlight the importance of interventions aimed at preventing obesity in childhood, particularly among those of lower socioeconomic position. abstract_id: PUBMED:25400076 Smoking and diabetes: is the association mediated by adiponectin, leptin, or C-reactive protein? Background: Although the association between cigarette smoking and risk of type 2 diabetes is well established, its mechanisms are yet to be clarified. This study examined the possible mediating effects of adiponectin, leptin, and C-reactive protein (CRP) concentrations on the smoking-diabetes association. Methods: Between 2002 and 2011, we followed 3338 Japanese workers, aged 35-66 years, who were enrolled in the second Aichi workers' cohort study. We used multivariable-adjusted Cox regression models to determine the hazard ratios and respective 95% confidence intervals (CIs) of the association between smoking status and risk of diabetes. A multiple mediation model with bootstrapping was used to estimate the magnitude and the respective bias-corrected (BC) 95% CIs of the indirect effects of smoking on diabetes through the three biomarkers. Results: Relative to never smokers, the risk of diabetes was significantly elevated in current (hazard ratio 1.75, 95% CI 1.25-2.46) and ex-smokers (hazard ratio 1.54, 95% CI 1.07-2.22). 
The indirect effects of smoking on diabetes through adiponectin levels were statistically significant among light (point estimate 0.033, BC 95% CI 0.005-0.082), moderate (point estimate 0.044, BC 95% CI 0.010-0.094), and heavy smokers (point estimate 0.054, BC 95% CI 0.013-0.113). In contrast, neither the indirect effects of smoking on diabetes through leptin nor CRP levels were significant, as the corresponding BC 95% CIs included zero. Conclusions: In our analysis, adiponectin concentration appeared to partially mediate the effect of smoking on diabetes, while leptin and CRP levels did not. abstract_id: PUBMED:27150701 Effect of basal insulin therapy on vascular endothelial function and adipokine profiles in people with Type 2 diabetes. Aim: To compare the effects of the basal insulin analogues glargine and detemir on endothelial function and adipocytokine levels in people with Type 2 diabetes. Methods: We studied 32 people with Type 2 diabetes whose blood glucose control was unsatisfactory while receiving only oral hypoglycaemic drugs. Participants were randomized to either insulin glargine or detemir for 24 weeks and then crossed over to the other treatment without a washout period. Flow-mediated vasodilatation, adipocytokine levels (plasminogen activator inhibitor-1 and leptin/adiponectin ratio), and fasting ghrelin levels were monitored. Results: HbA1c levels were significantly decreased by both basal insulin therapies. Body weight was significantly increased by glargine but not by detemir. The proportion of flow-mediated vasodilatation was significantly increased by detemir but not glargine (glargine: from 5.17 ± 0.69 to 5.94 ± 0.83%; detemir: from 4.89 ± 0.78 to 7.92 ± 0.69%). Plasminogen activator inhibitor-1 level was significantly decreased by only detemir (glargine: from 16.4 ± 1.8 to 17.3 ± 2.1; detemir: from 19.2 ± 2.8 to 16.0 ± 1.6 ng/ml). The leptin/adiponectin ratio was significantly increased only by glargine. Acyl ghrelin level was significantly decreased by glargine but not detemir. Conclusions: These results suggest that the effect on endothelial function and adipocytokine profiles may differ between glargine and detemir in people with diabetes (Trial registration ID: UMIN000004973). abstract_id: PUBMED:33400346 Coffee and type 2 diabetes risk: Is the association mediated by adiponectin, leptin, c-reactive protein or Interleukin-6? A systematic review and meta-analysis. Aim: Coffee and diabetes risk association has been demonstrated in numerous studies; however, the exact mechanism has not been clarified yet. The present meta-analysis was conducted to cover the current knowledge regarding the effect of coffee on Type 2 Diabetes (T2D), in addition to the evaluation of adiponectin, leptin, C-reactive protein (CRP) and Interleukin-6 (IL-6) levels among coffee consumers as relatively possible mediators of this effect. Method: A comprehensive search of the literature was carried out using search engines up to March 2020. The effect sizes were investigated using the standardised mean difference (SMD) and odds ratios (OR) or relative risk (RR) with its 95% confidence interval (CI). A total of 69 cross-sectional and cohort studies were included and divided as follows: 31 articles for T2D risk, 15 studies for adiponectin, 6 studies for leptin, 12 studies for CRP and 5 studies for IL-6. 
Results: Overall, coffee consumption was inversely associated with T2D risk, with an estimated pooled RR of 0.73 (95% confidence interval [0.68, 0.80]) for the highest vs lowest coffee consumption categories. The combined SMD between the different coffee intake categories showed that coffee consumption was associated with higher adiponectin levels (P = .002), and lower levels of leptin (P = .04) and CRP (P = .2), with apparently no change in IL-6 levels (P = .91). Conclusion: The present meta-analysis showed strong epidemiological evidence that coffee consumption is inversely associated with the risk of T2D. Also, adiponectin and leptin concentrations appeared to be potential mediators of the coffee effect on diabetes, while IL-6 levels did not. abstract_id: PUBMED:36940600 C-reactive protein partially mediates the inverse association between coffee consumption and risk of type 2 diabetes: The UK Biobank and the Rotterdam study cohorts. Background: Coffee is among the most consumed beverages worldwide. Coffee consumption has been associated with lower risk of type 2 diabetes mellitus (T2D), but underlying mechanisms are not well understood. We aimed to study the role of classic and novel T2D biomarkers with anti- or pro-inflammatory activity in the association between habitual coffee intake and T2D risk. Furthermore, we studied differences by coffee types and smoking status in this association. Methods: Using two large population-based cohorts, the UK-Biobank (UKB; n = 145,368) and the Rotterdam Study (RS; n = 7111), we investigated associations of habitual coffee consumption with incident T2D and repeated measures of insulin resistance (HOMA-IR), using Cox proportional hazards and mixed effect models, respectively. Additionally, we studied associations between coffee and subclinical inflammation biomarkers including C-reactive protein (CRP) and IL-13, and adipokines, such as adiponectin and leptin, using linear regression models. Next, we performed formal causal mediation analyses to investigate the role of coffee-associated biomarkers in the association of coffee with T2D. Finally, we evaluated effect modification by coffee type and smoking. All models were adjusted for sociodemographic, lifestyle and health-related factors. Results: During a median follow-up of 13.9 (RS) and 7.4 (UKB) years, 843 and 2290 incident T2D cases occurred, respectively. A 1 cup/day increase in coffee consumption was associated with 4% lower T2D risk (RS, HR = 0.96 [95%CI 0.92; 0.99], p = 0.045; UKB, HR = 0.96 [0.94; 0.98], p < 0.001), with lower HOMA-IR (RS, log-transformed β = -0.017 [-0.024; -0.010], p < 0.001), and with lower CRP (RS, log-transformed β = -0.014 [-0.022; -0.005], p = 0.002; UKB, β = -0.011 [-0.012; -0.009], p < 0.001). We also observed associations of higher coffee consumption with higher serum adiponectin and IL-13 concentrations, and with lower leptin concentrations. Coffee-related CRP levels partially mediated the inverse association of coffee intake with T2D incidence (average mediation effect RS β = 0.105 (0.014; 0.240), p = 0.016; UKB β = 6.484 (4.265; 9.339), p < 0.001), with a proportion mediated by CRP from 3.7% [-0.012%; 24.4%] (RS) to 9.8% [5.7%; 25.8%] (UKB). No mediation effect was observed for the other biomarkers. Coffee-T2D and coffee-CRP associations were generally stronger among consumers of ground (filtered or espresso) coffee and among never and former smokers.
Conclusions: Lower subclinical inflammation may partially mediate the beneficial association between coffee consumption and lower T2D risk. Consumers of ground coffee and non-smokers may benefit the most. KEYWORDS (MESH TERMS): coffee consumptions; diabetes mellitus, type 2; inflammation; adipokines; biomarkers; mediation analysis; follow-up studies. abstract_id: PUBMED:19933993 Declining beta-cell compensation for insulin resistance in Hispanic women with recent gestational diabetes mellitus: association with changes in weight, adiponectin, and C-reactive protein. Objective: To identify factors associated with declining beta-cell compensation for insulin resistance. Research Design And Methods: In a cohort of Hispanic women with recent gestational diabetes mellitus, oral glucose tolerance tests (OGTTs), intravenous glucose tolerance tests (IVGTTs), and bioelectrical impedance measurements were performed at 15-month intervals for up to 5 years, or until fasting plasma glucose exceeded 140 mg/dl (7.8 mmol/l). Data were analyzed to identify predictors of declining beta-cell compensation for insulin resistance (the disposition index [DI]) and to examine the mechanism of weight gain and changes in circulating levels of selected adipokines and inflammatory markers on beta-cell compensation decline. Results: A total of 60 nondiabetic women had a median of four sets of OGTT + IVGTT during a median follow-up of 52 months. Fourteen of the women developed diabetes. None of the baseline characteristics were significantly predictive of a decline in DI. There were significant univariate associations between declining DI and weight gain (specifically fat gain), declining adiponectin and rising C-reactive protein. Multivariate analysis showed that the weight gain was the most significant factor associated with declining DI. The amount of association between weight gain and declining DI was explained 31% by changes in adiponectin and C-reactive protein and 40% by changes in insulin resistance. Conclusions: These results identify weight gain as the strongest factor associated with declining beta-cell compensation for insulin resistance in Hispanic women at high risk for type 2 diabetes. Such effect may be mediated through at least two effects: alterations in adipokine levels and increasing insulin resistance. abstract_id: PUBMED:20543523 Serum fetuin-A is an independent marker of insulin resistance in Japanese men. Aim: Fetuin-A, also known as alpha2-Heremans Schmid glycoprotein, is an abundant plasma protein synthesized predominantly in the liver. Fetuin-A inhibits insulin receptor autophosphorylation, which is mediated by its intrinsic tyrosine kinase activity. In this study, we examined the association between the serum fetuin-A level and insulin resistance in Japanese men. Methods: We recruited 300 unrelated Japanese men without known chronic diseases, such as diabetes mellitus, or a history of regular drug use, and who underwent health examinations. From a 75-g oral glucose tolerance test, the study population included 194 individuals with normal glucose tolerance, 91 with impaired glucose tolerance and/or impaired fasting glucose, and 15 with diabetes mellitus. Serum fetuin-A concentrations were measured using an ELISA kit. 
Results: Serum fetuin-A concentrations were positively correlated with fasting insulin levels (r = 0.269, p<0.001), HOMA-IR (r = 0.274, p<0.001) and LDL-cholesterol (r = 0.172, p<0.01), and negatively correlated with HDL-cholesterol concentrations (r = -0.191, p<0.001). Fetuin-A concentrations were also positively correlated with serum leptin (r = 0.150, p<0.01) and negatively with adiponectin concentrations (r = -0.208, p<0.001). Stepwise regression analyses confirmed that the fetuin-A concentration was independently associated with the fasting insulin level and HOMA-IR, as were body mass index, triglyceride, LDL-cholesterol, leptin and adiponectin concentrations. Conclusion: Our data suggest that increased serum fetuin-A levels constitute an independent marker of insulin resistance and an atherogenic lipid profile in Japanese men. abstract_id: PUBMED:29332675 Sleep duration and incidence of type 2 diabetes: the Multiethnic Cohort. Objectives: As an emerging risk factor for the rising incidence of type 2 diabetes, we examined sleep duration in relation to type 2 diabetes and several biomarkers. Design: Prospective cohort recruited 1993-1996. Setting: The Multiethnic Cohort in Hawaii and California. Participants: A cohort of 151,691 White, African American, Japanese American, Native Hawaiian, and Latino participants; 9695 cohort members had biomarker measurements. Measurements: Sleep duration was self-reported at cohort entry. Diabetes status was obtained from 3 questionnaires and confirmed by 3 administrative data sources. Biomarkers were measured by standard assays 9.6±2.1 years after cohort entry. We estimated diabetes risk as a time-varying outcome using Cox regression adjusted for body mass index assessed at 3 time points and other known confounders and computed adjusted means of biomarkers by sleep hours. Results: During 7.9±3.5 years of follow-up, 8487 new diabetes cases were diagnosed. Long sleep duration (≥9 hours), as compared with 7-8 hours, was significantly associated with higher incidence (hazard ratio, 1.12; 95% confidence interval 1.04-1.21), but the 4% elevated incidence for short sleep duration (≤6 hours) did not reach significance (95% confidence interval 0.99-1.09). After stratification, the associations appeared stronger in Japanese Americans than in other ethnic groups and in participants without comorbidity. Hours of sleep were positively associated with C-reactive protein and triglycerides and inversely related to high-density lipoprotein cholesterol and adiponectin, but not with leptin levels or homeostatic model assessment of insulin resistance. Conclusion: In this multiethnic population, the 12% higher diabetes risk for long sleep hours may be mediated through inflammation, a poor lipid profile, and lower adiponectin levels. abstract_id: PUBMED:18977478 Differential metabolic effects of pravastatin and simvastatin in hypercholesterolemic patients. Background: Lipophilic and hydrophilic statins have different effects on adiponectin and insulin resistance in experimental studies and different effects on the rate of onset of new diabetes in large-scale clinical studies. Therefore, we hypothesized that simvastatin and pravastatin may have differential metabolic effects in hypercholesterolemic patients. Methods: This was a randomized, single-blind, placebo-controlled, parallel study. Age, gender, and body mass index were matched. Forty-three patients were given placebo, simvastatin 20 mg, or pravastatin 40 mg, respectively, once daily for 2 months.
Results: Simvastatin and pravastatin therapy significantly changed lipoprotein levels and improved flow-mediated dilation after 2 months when compared with baseline (P<0.001) or placebo treatment (P<0.001 by ANOVA). Simvastatin therapy significantly increased insulin levels (mean % changes; 127%, P=0.014) and decreased plasma adiponectin levels (10%, P=0.012) and insulin sensitivity as assessed by QUICKI (6%, P=0.007) when compared with baseline. By contrast, pravastatin therapy did not significantly change insulin levels (-3%, P=0.437) but significantly increased plasma adiponectin levels (9%, P=0.011) and insulin sensitivity (6%, P=0.008) when compared with baseline. In addition, these effects of simvastatin were significant when compared with pravastatin (P<0.001 for insulin levels by ANOVA on Ranks, P<0.001 for adiponectin and P=0.001 for QUICKI by ANOVA). When compared with baseline, simvastatin significantly increased plasma leptin levels (35%, P=0.028), but pravastatin did not (1%, P=0.822). Conclusions: Despite causing comparable changes in lipoprotein and endothelium-dependent dilation, simvastatin and pravastatin therapy had differential metabolic effects in hypercholesterolemic patients that may be clinically relevant. abstract_id: PUBMED:25293442 Effects of a community-based weight loss intervention on adipose tissue circulating factors. Aims: Obesity is associated with metabolic dysfunctions, which may be mediated by changes in adipose tissue signaling factors. These molecules are denoted as Adipose Tissue Generated Mediators of CardioVascular Risk (ATGMCVR) here, and include leptin, adiponectin, C-reactive protein (CRP), interleukin 6 (IL-6), tumor necrosis factor alpha (TNFα), and plasminogen activator inhibitor 1 (PAI-1). This study examined the effect of a weight loss program on ATGMCVR in obese adults with prediabetes. Materials And Methods: Subjects were randomized to usual care (UC; n=15) or lifestyle weight loss groups (LWL; n=15). LWL was a community-based weight loss intervention to promote physical activity and healthy eating. ATGMCVR at 1-year were compared between groups by analysis of covariance; baseline value of the mediator was the covariate. Baseline means for ATGMCVR were compared between those with (n=21) and without (n=9) metabolic syndrome (MetS). Results: At baseline, subjects were 58±9 (SD) years, 70% female, with a BMI of 34±4 kg/m(2). One-year weight loss (%) was 7.8±6.0% for LWL and 1.7±4.5% for UC. Group differences at 1-year were noted (adjusted means [95%CI] for UC and LWL, respectively) for adiponectin (8526.3 [7397.7, 9827]; 10,870.9 [9432.0, 12,529.3] ng/ml; p=0.02), leptin (30.4 [26.1, 35.4]; 23.7 [20.3, 27.5] ng/ml; p=0.02), IL-6 (0.4 [0.3, 0.5]; 0.2 [0.1, 0.2] pg/ml; p=0.001), and PAI-1 (50 [42.7, 58.7]; 36.2 [30.8, 42.4] pg/ml; p=0.01). No differences in baseline ATGMCVR were seen between subjects with and without MetS. Conclusion: These findings suggest ATGMCVR can be improved with weight loss; larger studies are needed to determine if improvements in metabolic dysfunction are related to changes in ATGMCVR. Answer: The association between smoking and diabetes appears to be partially mediated by adiponectin levels but not by leptin or C-reactive protein (CRP) levels. A study examining the possible mediating effects of these biomarkers on the smoking-diabetes association found that the indirect effects of smoking on diabetes through adiponectin levels were statistically significant among light, moderate, and heavy smokers.
However, the indirect effects of smoking on diabetes through leptin and CRP levels were not significant, as the corresponding bias-corrected 95% confidence intervals included zero (PUBMED:25400076).
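The mediation estimates quoted above (point estimates for indirect effects with bias-corrected bootstrap confidence intervals) come from a product-of-coefficients style of mediation analysis. As a purely illustrative sketch of that general approach, using synthetic data, invented variable names, and assumed model forms rather than the cohort's actual covariates or software, the indirect effect of smoking on diabetes through adiponectin could be bootstrapped roughly as follows:

```python
# Illustrative only: product-of-coefficients mediation with a percentile bootstrap CI.
# Synthetic data; variable names, model forms, and effect sizes are assumptions,
# not the models or results of the cited cohort study.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000

smoking = rng.binomial(1, 0.3, n)                        # exposure
adiponectin = 10 - 1.5 * smoking + rng.normal(0, 2, n)   # mediator: smoking lowers adiponectin
logit_p = -3 - 0.15 * adiponectin + 0.3 * smoking        # outcome model on the logit scale
diabetes = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))   # outcome

def indirect_effect(idx):
    """Product-of-coefficients estimate a*b on a (resampled) index set."""
    s, m, y = smoking[idx], adiponectin[idx], diabetes[idx]
    a = sm.OLS(m, sm.add_constant(s)).fit().params[1]     # exposure -> mediator path
    X = sm.add_constant(np.column_stack([s, m]))
    b = sm.Logit(y, X).fit(disp=0).params[2]              # mediator -> outcome, adjusted for exposure
    return a * b

point = indirect_effect(np.arange(n))
boot = np.array([indirect_effect(rng.integers(0, n, n)) for _ in range(500)])
lo, hi = np.percentile(boot, [2.5, 97.5])                 # a bias-corrected CI would shift these percentiles
print(f"indirect effect = {point:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```

The point is simply that the indirect path is estimated as the product of the exposure-to-mediator and mediator-to-outcome coefficients, and the bootstrap distribution of that product supplies the confidence interval; the cited studies additionally adjusted for covariates and used bias-corrected intervals.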
Instruction: Are emergency care nurses prepared for chemical, biological, radiological, nuclear or explosive incidents? Abstracts: abstract_id: PUBMED:22726947 Are emergency care nurses prepared for chemical, biological, radiological, nuclear or explosive incidents? Unlabelled: Two main areas exist within emergency care where chemical, biological, radiological, nuclear and explosive preparedness can be focused: departmental preparedness and staff preparedness. This study focused upon the latter. Aim: To identify areas where nurses require training in order to improve preparedness for a CBRNe incident. Methods: A competency questionnaire was developed from the literature and completed by 50 nursing staff across three Emergency Departments within one NHS Trust in Northern Ireland. Descriptive analysis was used for the quantitative data along with content analysis for the qualitative questions. Results: Six key areas were identified for training; waste management (including clinical waste, contaminated clothing, contaminated water and the management of the contaminated deceased), Triage, Chain of command, PODs, awareness of the range of Personal Protective Equipment and its appropriate use and the decontamination of people and equipment. Conclusion: There is a need for a standardised 'blueprint' of role-specific competency criteria for a CBRNe incident for all emergency healthcare staff. The assessment tool used in this study can help to assess levels of preparedness amongst nursing staff and, if adapted accordingly, help gauge preparedness of other key healthcare professionals. abstract_id: PUBMED:19178546 Chemical, biological and radiological incidents: preparedness and perceptions of emergency nurses. Despite their important role in chemical, biological and radiological (CBR) incident response, little is known about emergency nurses' perceptions of these events. The study aim was to explore emergency nurses' perceptions of CBR incidents and factors that may influence their capacity to respond. Sixty-four nurses from a metropolitan Emergency Department took part. The majority were willing to participate in CBR incidents and there was a positive association between willingness to participate and postgraduate qualification in emergency nursing. Willingness decreased, however, with unknown chemical and biological agents. One third of participants reported limitations to using personal protective equipment. Few participants had experience with CBR incidents although 70.3 per cent of participants had undergone CBR training. There were significant differences in perceptions of choice to participate and adequacy of training between chemical, biological and radiological incidents. The study results suggest that emergency nurses are keen to meet the challenge of CBR incident response. abstract_id: PUBMED:37264827 Chemical, Biological, Radiological, and Nuclear Preparedness of Public Hospitals in Riyadh. Objectives: Chemical, biological, radiological, and nuclear (CBRN) incidents are those that involve chemical or biological warfare agents or toxic radiological or nuclear materials. These agents can cause disasters intentionally or accidentally. Hospitals play a crucial role in handling CBRN disasters. This study aimed to assess the CBRN preparedness of government hospitals in Riyadh. Methods: A descriptive cross-sectional study was conducted across government hospitals in Riyadh. All government hospitals with more than 100 inpatient beds and an emergency department met the inclusion criteria. 
Hospital preparedness was assessed using an adaptation of the CBRNE (chemical, biological, radiological, nuclear, or explosive event) Plan Checklist. This adaptation was chosen due to the inclusion of explosive events in hospital disaster readiness, and its structural composition of key clinical guidelines necessary for a comprehensive disaster and readiness plan. Results were described as frequencies across several domains: foundational considerations and planning (used to assess plan preparedness with readiness tools), staff training and awareness in accordance with a pre-established emergency plan, procedures in place and their implementation, and modules for preparing for a biological incident, a chemical incident, and a radiological or nuclear incident. Results: Of the 11 eligible hospitals, 10 participated in the study. Furthermore, CBRN considerations were included in the disaster plans of 7 hospitals. Drills had been conducted in collaboration with local agencies in only 2 hospitals. The staff had been trained to recognize the signs and symptoms of exposure to class (A) biological agents in less than half of the hospitals. Eight of the hospitals had antidotes and prophylactics to manage chemical incidents, but only half of them had radiation detection instruments. Personal protective equipment was available in all hospitals, but rapid access to stockpiles of medications was available in only half of them. Conclusions: Government hospitals in Riyadh demonstrated insufficient CBRN preparedness as per the CBRNE Plan Checklist. Overall, there was a lack of preemptive planning, application of pre-established policies and procedures, and adequate staff training. Furthermore, several hospitals had insufficient stockpiles of medications and no concrete plans for accessing government stockpiles in the case of an emergency. Therefore, their staff should be trained to manage CBRN emergencies, and local drills should be conducted to improve their preparedness. abstract_id: PUBMED:30379127 Emergency Department Response to Chemical, Biological, Radiological, Nuclear, and Explosive Events: A Systematic Review. Introduction: A Chemical, Biological, Radiological, Nuclear, and explosive (CBRNe) event is an emergency which can result in injury, illness, or loss of life. The emergency department (ED) as a health system is at the forefront of the CBRNe response, with staff acting as first receivers. Emergency departments are under-prepared to respond to CBRNe events - recognizing key factors which underlie the ED CBRNe response is crucial to provide evidence-based knowledge to inform policies and, most importantly, clinical practice. Problem: Challenges in detection, decontamination, and diagnosis are associated with the ED CBRNe response when faced with self-presenting patients. Methods: A systematic review was carried out in accordance with Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). An in-depth search strategy was devised to identify studies which focused on the ED and CBRNe events. The inclusion criteria were stringent in terms of the environment (ED), participants (first receivers), situation (CBRNe response), and actions (detection, decontamination, and diagnosis). Fifteen databases and topic-specific journals were searched. Studies were critically appraised using the Mixed Methods Appraisal Tool (MMAT). Papers were thematically coded and synthesized using NVivo 10 (QSR International Ltd, Melbourne, Australia).
Results: Sixty-seven full-text papers were critically appraised using the MMAT; 70% were included (n = 60) as medium- or high-quality studies. Data were grouped into four themes: preparedness, response, decontamination, and personal protective equipment (PPE) problems. Discussion: This study has recognized the ED as a system which depends on four key factors - preparedness, response, decontamination, and PPE problems - which highlight challenges, uncertainties, inconsistencies, and obstacles associated with the ED CBRNe response. This review suggests that response planning and preparation should be considered at three levels: organizational (policies and procedures); technological (decontamination, communication, security, clinical care, and treatment); and individual (willingness to respond, PPE, knowledge, and competence). Finally, this study highlighted that there was a void specific to detection and diagnosis of CBRNe exposure in self-presenting patients in the ED. Conclusion: The review identified concerns for both knowledge and behaviors, which suggests that a systems approach would help understand the ED response to CBRNe events more effectively. The four themes provide an evidence-based summary of the state of science in ED CBRNe response, which can be used to inform future policies and clinical procedures. Razak S, Hignett S, Barnes J. Emergency department response to chemical, biological, radiological, nuclear, and explosive events: a systematic review. Prehosp Disaster Med. 2018;33(5):543-549. abstract_id: PUBMED:26347810 Emergency response to nuclear, biological and chemical incidents: challenges and countermeasures. Given the multiple terrorist attacks that have occurred in recent years in China, medical rescue teams and specialized incident assessment teams have been established by the government; however, medical rescue after nuclear, biological, and chemical incidents remains challenging and is often inefficient. In the present article, problems were analyzed regarding the assessment of responder countermeasures, training of professionals and the management of emergency medical incidents related to nuclear, biological and chemical attacks. Countermeasures, the establishment of response coordination, public education, practical training and exercise, and a professional consultant team or system should be the focus of emergency medical response facilities. Moreover, the government was offered professionals who are involved in managing nuclear, biological and chemical incidents. abstract_id: PUBMED:25416374 Clinical guidelines for responding to chemical, biological, radiological, nuclear and trauma/burn mass casualty incidents: Quick reference guides for emergency department staff. The word 'DISASTER' may be used as a mnemonic for listing the critical elements of emergency response. The National Disaster Life Support Education Foundation's (NDLSEC) DISASTER paradigm emphasises out-of-hospital emergency response and includes the following elements: (1) detect; (2) incident command system; (3) security and safety; (4) assessment; (5) support; (6) triage and treatment; (7) evacuate; and (8) recovery. This paper describes how the DISASTER paradigm was used to create a series of clinical guidelines to assist the preparedness effort of hospitals for mitigating chemical, biological, radiological, nuclear incidents or explosive devices resulting in trauma/burn mass casualty incidents (MCIs) and their initial response to these events.
Descriptive information was obtained from observations and records associated with this project. The information contributed by a group of subject matter experts in disaster medicine, at the Yale New Haven Health System Center for Emergency Preparedness and Disaster Response was used to author the clinical guidelines. Akin to the paradigm developed by the NDLSEC for conducting on-scene activities, the clinical guidelines use the letters in the word 'disaster' as a mnemonic for recalling the main elements required for mitigating MCIs in the hospital emergency department. abstract_id: PUBMED:16920155 Weapons of mass destruction: Overview of the CBRNEs (Chemical, Biological, Radiological, Nuclear, and Explosives). The events of September 11, 2001, made citizens of the world acutely aware of disasters consequent to present-day terrorism. This is a war being waged for reasons obscure to many of its potential victims. The term "NBCs" was coined in reference to terrorist weapons of mass destruction, i.e., nuclear, biological and chemical. The currently accepted acronym is "CBRNE" which includes Chemical, Biological, Radiological, Nuclear, and Explosive weapons. Non-nuclear explosives are the most common terrorist weapon now in use. Nuclear and radiological weapons are beyond the scope of this publication, which focuses on the "CBEs", i.e. chemical, biological and explosive weapons. Although neurologists will not be the first responders to CBEs, they must know about the neurological effects in order to provide diagnosis and treatment to survivors. Neurological complications of chemical, biological and explosive weapons which have or may be used by terrorists are reviewed by international experts in this publication. Management and treatment profiles are outlined. abstract_id: PUBMED:26537699 An assessment of Chemical, Biological, Radiologic, Nuclear, and Explosive preparedness among emergency department healthcare providers in an inner city emergency department. Introduction: Emergency healthcare providers are required to care for victims of Chemical, Biological, Radiologic, Nuclear, and Explosive (CBRNE) agents. However, US emergency departments are often ill prepared to manage CBRNE casualties. Most providers lack adequate knowledge or experience in the areas of patient decontamination, hospital-specific disaster protocols, interagency familiarization, and available supply of necessary medical equipment and medications. This study evaluated the CBRNE preparedness of physicians, nurses, and midlevel providers in an urban tertiary care emergency department. Methods: This retrospective observational survey study used a previously constructed questionnaire instrument. A total of 205 e-mail invitations were sent to 191 eligible providers through an online survey distribution tool (Survey Monkey®). Respondents were enrolled from February 1, 2014 to March 15, 2014. Simple frequencies of correct answers were used to determine the level of preparedness of each group. Cronbach's coefficient α was used to validate the precision of the study instrument. Finally, validity coefficients and analysis of variance ANOVA were used to determine the strength of correlation between demographic variables, as well as the variation between individual responses. Results: Fifty-nine providers responded to the questionnaire (31.14 percent response rate). The overall frequency of correct answers was 66.26 percent, indicating a relatively poor level of CBRNE preparedness. 
The study instrument lacked precision and reliability (coefficient α 0.4050). Significant correlations were found between the frequency of correct answers and the respondents' gender, practice experience, and previous experience with a CBRNE incident. Significant variance exists between how providers believe casualties should be decontaminated, which drugs should be administered, and the interpretation of facility-specific protocols. Conclusions: Emergency care providers are inadequately prepared to manage CBRNE incidents. Furthermore, a valid and precise instrument capable of measuring preparedness needs to be developed. Standardized educational curricula that consider healthcare providers' genders, occupations, and experience levels may assist in closing the knowledge gaps between providers and reinforce emergency departments' CBRNE preparedness. abstract_id: PUBMED:17894215 Critical assessment of statewide hospital pharmaceutical surge capabilities for chemical, biological, radiological, nuclear, and explosive incidents. Introduction: In recent years, government and hospital disaster planners have recognized the increasing importance of pharmaceutical preparedness for chemical, biological, radiological, nuclear, and explosive (CBRNE) events, as well as other public health emergencies. The development of pharmaceutical surge capacity for immediate use before support from the (US) Strategic National Stockpile (SNS) becomes available is integral to strengthening the preparedness of local healthcare networks. Methods: The Pharmaceutical Response Project served as an independent, multidisciplinary collaboration to assess statewide hospital pharmaceutical response capabilities. Surveys of hospital pharmacy directors were conducted to determine pharmaceutical response preparedness to CBRNE threats. Results: All 45 acute care hospitals in Maryland were surveyed, and responses were collected from 80% (36/45). Ninety-two percent (33/36) of hospitals had assessed pharmaceutical inventory with respect to biological agents, 92% (33/36) for chemical agents, and 67% (24/36) for radiological agents. However, only 64% (23/36) of hospitals reported an additional dedicated reserve supply for biological events, 67% (24/36) for chemical events, and 50% (18/36) for radiological events. More than 60% of the hospitals expected to receive assistance from the SNS within ≤48 hours. Conclusions: From a pharmaceutical perspective, hospitals generally remain under-prepared for CBRNE threats and many expect SNS support before it realistically would be available. Collectively, limited antibiotics and other supplies are available to offer prophylaxis or treatment, suggesting that hospitals may have insufficient pharmaceutical surge supplies for a large-scale event. Although most state hospitals are improving pharmaceutical surge capabilities, further efforts are needed. abstract_id: PUBMED:21459352 The identification of competencies for an NHS response to chemical, biological, radiological, nuclear and explosive (CBRNe) emergencies. Unlabelled: Emergency planning is a new science and there is a lack of UK standardisation in regard to NHS emergency planning. A first step to achieving standardisation within emergency planning is elucidation of national training competencies. Aim: This study was designed to find consensus amongst multi-agency chemical, biological, radiological, nuclear and explosive (CBRNe) experts on the factors that must be included in future NHS CBRNe competencies.
Methods: Data was collected using an anonymised online Delphi study. Two rounds of study were completed. Experts were selected through a combination of purposive and snowball sampling. In total 43 experts agreed to take part in this study. Twenty-one experts responded to the first round's single open-ended question. Fourteen experts responded to the second which contained 64 Likert-scored questions. Consensus was achieved in 46 questions. Results: The consensus of expert opinion generated 10 suggested core training competencies. Conclusion: There is an urgent need to standardise national planning, training and evaluation of NHS staff who are expected to respond to CRBNe and other emergencies. Further research is needed to identify training and response templates so that lessons can be learned from these experiences and fed back to improve future response. Answer: Based on the provided abstracts, emergency care nurses' preparedness for chemical, biological, radiological, nuclear, or explosive (CBRNe) incidents appears to be variable and often insufficient. Several studies have identified gaps in training, knowledge, and resources that suggest many emergency care nurses may not be fully prepared for such incidents. One study focused on identifying areas where nurses require training to improve preparedness for a CBRNe incident. It found six key areas for training: waste management, triage, chain of command, Points of Dispensing (PODs), awareness of the range of Personal Protective Equipment (PPE) and its appropriate use, and decontamination of people and equipment (PUBMED:22726947). Another study explored emergency nurses' perceptions of CBR incidents and factors influencing their capacity to respond. It found that while the majority of nurses were willing to participate in CBR incidents, willingness decreased with unknown chemical and biological agents. One-third reported limitations to using PPE, and few had experience with CBR incidents, although 70.3% had undergone CBR training (PUBMED:19178546). A study assessing the CBRN preparedness of government hospitals in Riyadh revealed that hospitals demonstrated insufficient CBRN preparedness, with a lack of preemptive planning, application of pre-established policies and procedures, and adequate staff training. Furthermore, several hospitals had insufficient stockpiles of medications and concrete plans for accessing government stockpiles in emergencies (PUBMED:37264827). A systematic review identified concerns for both knowledge and behaviors among emergency department staff, suggesting that a systems approach would help understand the ED response to CBRNe events more effectively. The review highlighted challenges, uncertainties, inconsistencies, and obstacles associated with the ED CBRNe response (PUBMED:30379127). An assessment among emergency department healthcare providers in an inner-city emergency department found that providers were inadequately prepared to manage CBRNE incidents, with a relatively poor level of preparedness indicated by the frequency of correct answers (PUBMED:26537699). In conclusion, while there is a willingness among emergency care nurses to respond to CBRNe incidents, there are significant gaps in preparedness that need to be addressed through standardized training, improved resources, and better planning and protocols.
Instruction: Is integrated nursing home care cheaper than traditional care? Abstracts: abstract_id: PUBMED:18755460 Is integrated nursing home care cheaper than traditional care? A cost comparison. Background: It is generally assumed that integrated care has a cost-saving potential in comparison with traditional care. However, there is little evidence on this potential with respect to integrated nursing home care. Aims And Objectives: DESIGN/METHODS/SETTINGS/PARTICIPANTS: Between 1999 and 2003, formal and informal caregivers of different nursing homes in the Netherlands recorded activities performed for residents with somatic or psycho-social problems. In total, 23,380 lists were analysed to determine the average costs of formal and informal care per activity, per type of resident and per nursing home care type. For formal care activities, the total personnel costs per minute (in Euro) were calculated. For informal care costs, two shadow prices were used. Results: Compared to traditional care, integrated care had lower informal direct care costs per resident and per activity and lower average costs per direct activity (for a set of activities performed by formal caregivers). The total average costs per resident per day and the costs of formal direct care per resident, however, were higher as were the costs of delivering a set of indirect activities to residents with somatic problems. Conclusions: The general assumption that integrated care has a cost-saving potential (per resident or per individual activity) was only partially supported by our research. Our study also raised issues which should be investigated in future research on integrated nursing home care. abstract_id: PUBMED:17658652 The impact of integrated care on direct nursing home care. Background/aim: The introduction of integrated nursing home care is an important policy goal in many countries and is expected to affect the type, frequency and duration of activities delivered to nursing home residents. The exact impact however is unknown. The aim of this paper is to reduce this information gap in order to provide decision supporting information to policy makers and managers. DESIGN/METHODS/ETHICAL ISSUES: At three measurement points between 1999 and 2003, caregivers belonging to 18 functions registered activities delivered to somatic and psycho-geriatric nursing home residents in The Netherlands. Residents either received traditional care, integrated care or care that contained elements of traditional and integrated care (hybrid care). Thirty-six thousand and seventy-one registration lists were used for data analysis. Data analysis included determining, comparing and linking the (total) average frequency and duration of each activity per care type, measurement point and type of resident. Results: The (total) average frequency and total duration of most activities were higher for integrated care than for traditional and hybrid care. The average duration per activity was generally higher for traditional care. The (total) average frequency of most direct care activities at most measurement points and the total average duration per resident per day were higher for somatic care than for psycho-geriatric care. Conclusions: The introduction of integrated nursing home care affects the total average duration and frequency of direct care activities. 
However, there is no noticeable impact on individual activities or on differences in activities received by somatic and psycho-geriatric residents and the degree to which the occurrence of an activity is related to the duration of that activity. This is because a large proportion of care delivery represents patterned behaviour (routines). Because existing routines are difficult to get rid of, we should not have too high expectations about the effect of integrated care on service delivery. abstract_id: PUBMED:31327797 The community-based integrated care system in Japan: Health care and nursing care challenges posed by super-aged society. Japan is experiencing unprecedented aging of its population. People age 65 years or older accounted for 28.1% of the total population in 2018, and that proportion is expected to reach 33.3% in 2036 and 38.4% in 2065. In 2017, the average life expectancy in Japan was 81.09 years for men and 87.26 years for women. By 2065, it is expected to reach 84.95 years for men and 91.35 years for women. Population aging affects health and long-term care systems. The government proposed the establishment of "a community-based integrated care system" by 2025 with the purpose of comprehensively ensuring the provision of health care, nursing care, preventive care, housing, and livelihood support. This will require health care and nursing care professionals who are capable of fully understanding the physical and mental characteristics of elderly people and the fostering of organic collaboration with other professionals in the community-based integrated care system. A department of gerontology or geriatric medicine is desired to be established in each medical school to teach students medicine and efficient medical care, to conduct research, and to develop personnel to facilitate this paradigm shift. In 2018, there were 263 colleges of nursing with an admissions capacity of 23,667. In Japan, Certified Nurse Specialists can specialize in 13 areas as of December 2016. The number of Certified Nurse Specialists increased to 2,279 as of December 2018. One hundred and forty-four of those specialists specialized in Gerontological Nursing while 53 specialized in Home Care Nursing. The number of nurses specializing in Gerontological Nursing and Home Care Nursing is desired to be increased in order to implement and improve community-based comprehensive care. abstract_id: PUBMED:23980555 Integrated versus incidental dental care in nursing homes. Purpose/aim: In most of the 334 Dutch nursing homes, incidental dental care is provided upon indication by the medical and nursing staff. However, more and more nursing homes offer integrated dental care, where a dentist works on a structural basis in facilities within the home. Methods: To evaluate the costs and effects of integrated versus incidental care, we studied the oral status and treatment needs in two nursing homes in the same city with 175 and 120 residents and an average age of 81.7 and 79.3 years, respectively. Results: In the integrated care home, the dentist spent an average of 2.2 hours/year per resident at a cost of €229 for the dentist's work and €143 for laboratory costs. In the incidental care home, these values were 0.1 hours/year per resident at a cost of €15 for the dentist's work and €20 for laboratory costs. Conclusions: With integrated care, 55.6% of the residents had no oral treatment need versus 13.1% for incidental care, more edentulous residents wore dentures, and residents had less soft tissue pathology.
abstract_id: PUBMED:19513988 Efficacy of integrated interventions combining psychiatric care and nursing home care for nursing home residents: a review of the literature. Background: Nursing home residents needing both psychiatric care and nursing home care for either somatic illness or dementia combined with psychiatric disorders or severe behavioural problems are referred to as Double Care Demanding patients, or DCD patients. Integrated models of care seem to be necessary in order to improve the well-being of these residents. Objectives: Two research questions were addressed. First, which integrated interventions combining both psychiatric care and nursing home care in DCD nursing home residents are described in the research literature? And second, which outcomes of integrated interventions combining both psychiatric care and nursing home care in DCD nursing home residents are reported in the literature? Method: A critical review of studies was done that involved integrated interventions combining both psychiatric care and nursing home care on psychiatric disorders and severe behavioural problems in nursing home patients. A systematic literature search was performed in a number of international databases. Results: Eight intervention trials, including four RCTs (2b level of evidence), were identified as relevant studies for the purpose of this review. Seven studies, three of which were RCTs, showed beneficial effects of a comprehensive, integrated multidisciplinary approach combining medical, psychiatric and nursing interventions on severe behavioural problems in DCD nursing home patients. Conclusions: Important elements of a successful treatment strategy for DCD nursing home patients include a thorough assessment of psychiatric, medical and environmental causes as well as programmes for teaching behavioural management skills to nurses. DCD nursing home patients were found to benefit from short-term mental hospital admission. This review underlines the need for more rigorously designed studies to assess the effects of a comprehensive, integrated multidisciplinary approach towards DCD nursing home residents. abstract_id: PUBMED:29405015 Role of Visiting Nursing Care in Japanese Home Healthcare. Taiwan's rapidly aging society is expected to make it a super-aged society in 2026. By 2060, people aged 65 or older will account for 40% of the population, a ratio that will approximate that in Japan. In Japan, the elderly population was 27.3% in 2016. By 2025, when the baby-boomers become 75 years old in Japan, issues of long-term care and end-of-life care will be more important and challenging. Since 1976, more Japanese have died in hospital settings than in home settings. Although the percentage of people dying at home increased slightly to 12.7% in 2016, after the recent introduction and promotion of home healthcare, Japan will face a significant challenge to deal with the healthcare 'tsunami' of high natural death rates, which is expected to impose a heavy death burden on society by 2040, when the death rate is expected to reach 1,670,000/year. Therefore, the Japanese authorities have begun to promote the Community-based Integrated Care System, in which home healthcare and visiting nursing play crucial roles. This article summarizes the historical trend and current situation of visiting nursing in Japan. Japan uses a hybrid payment system for visiting nursing that is financially supported both through private medical insurance policies and Kaigo insurance (Japanese long-term care insurance).
A total of 8,613 visiting nursing stations that were active in community settings in 2016 cooperated with 14,000 support clinics for home healthcare and cared for 570,000 patients in home settings. We believe that visiting nursing will play an important role in home healthcare in Taiwan in the future. abstract_id: PUBMED:8333900 Hygienic requirements in home care. Hygiene deals with the basis of the prevention of diseases as well as with the preservation and stabilization of health. In this context, hygiene deals with animate and inanimate factors which have a promoting or damaging influence on health. Through the analysis of these factors, the explanation of their functioning and their evaluation from the medical point of view, hygiene develops principles for the protection of health and works on preventive measures for the general public and the individual person. Home care comprises the individual medical care as well as home care, i.e., qualified domestic basic and medical treatment of individuals who fulfil the criteria of this need. Objectives of this home care and home medical care are the cure, the improvement, the prevention of aggravation, the alleviation of pain and the prolongation of life expectancy of people in need of this care. Hygienic requirements on home and medical care therefore present a broad spectrum of measures which contribute to the maintenance of health of those persons in need of such treatment. These requirements have partly been written down in the "Festlegungen der Unfallverhütungsvorschrift, Gesundheitsdienst (VBG 103)" and, concerning the field of home medical care, in the "Berufsgenossenschaft für Gesundheitsdienst und Wohlfahrtspflege". Herein the following topics are considered: occupational medical provision for the persons responsible for taking care, notification of infectious diseases, protective clothing, care-taking techniques and handling of medical technical equipment and means of aid. Requirements on hygiene which refer directly to the sick person comprise, among others, problems of individual hygiene including physical hygiene, alimentary hygiene and home hygiene. It is attempted to present in a clearly arranged catalogue the variety of hygienic requirements resulting from the persons responsible for care taking and the persons in need of care and, furthermore, to show at the same time the value of these hygienic requirements for the home and home medical care. abstract_id: PUBMED:30770992 Care planning for people with dementia on the margins of care between home care and nursing home: the balance of care approach in the RightTimePlaceCare project in Germany Background: The balance of care approach is a strategic planning framework that can be used to research the adequacy of care arrangements and the cost implications. It seeks to identify people who are on the margins of care, i.e. whose care and nursing needs could be met in more than one setting, and explores the relative costs of the possible alternatives. This article describes a balance of care application for people with dementia in a transitional phase between home and institutional care in Germany. Methods: A sequential mixed-methods design was applied that combined empirical data, the decision of healthcare professionals (panels) and cost estimates in a structured way. Data were collected as part of the RightTimePlaceCare project from 235 people with dementia and their caregivers in 2 settings, in nursing homes and domestic care.
Results: Based on five key variables, case types of people with dementia with comparable needs were developed. In panels with healthcare professionals, there was consensus that people represented by four of these case types could be cared for at home, while the reference group of actual study participants was currently being cared for in nursing homes. For these four case types, exemplary home care arrangements were formulated, costs were estimated and compared to institutional care costs. Conclusion: There is a potential for home care for a significant group of people with dementia currently admitted to institutional care. Some of the alternative home care arrangements were cost-saving. Despite some limitations, the study demonstrated the utility of the balance of care approach to support the development of empirically based expert recommendations on care provision. abstract_id: PUBMED:17038104 Nursing home care: whodunit? Aims And Objectives: (1) To analyse and compare (changes and differences in) activity profiles of various types of nursing home care. (2) To assess the impact of integrated care on these activity profiles. Background: Because of an ongoing introduction of integrated nursing home care, caregivers increasingly have to co-ordinate their activities, engage in interprofessional relationships and take over each other's tasks. Consequently, activity profiles [i.e. combinations of (contributions to) care activities and the roles that perform them] are expected to change. Design/methods: At three measurement points in the period 1999-2003, caregivers (in 18 different roles) recorded and listed direct and indirect care activities. A total of 41,335 lists were analysed to derive activity profiles of traditional, transitional and integrated nursing home care in the Netherlands. Results: Traditional, transitional and integrated care shared some comparable activity profiles. Integrated care differed from the other types with respect to the contribution of the geriatric nurse, recreational activities supervisor, nutrition assistant, household assistant and nursing assistant to activities such as extra care, handling food and club activities. Contrary to the other roles, the licensed practical nurse contributed to (almost) all activities in all types of care. Conclusions: Nursing home care has several recurring activity profiles. These profiles are the same in all types of nursing home care. The introduction of integrated care implies that particular profiles have to be added to these profiles. As a generalist, the licensed practical nurse seems to play a key role in all activity profiles. Relevance To Clinical Practice: Because of demographic and financial pressures, integrated care for older people becomes increasingly important. By addressing the impact of integrated care on activity profiles, this paper provides information on how new types of care can be delivered in the most effective manner. abstract_id: PUBMED:29508473 Attractiveness of people-centred and integrated Dutch Home Care: A nationwide survey among nurses. The World Health Organization is calling for a fundamental change in healthcare services delivery, towards people-centred and integrated health services. This includes providing integrated care around people's needs that is effectively co-ordinated across providers and co-produced by professionals, the patient, the family and the community. At the same time, healthcare policies aim to scale back hospital and residential care in favour of home care.
This is one reason for the home-care nursing staff shortages in Europe. Therefore, this study aimed to examine whether people-centred, integrated home care appeals to nurses with different levels of education in home care and hospitals. A questionnaire survey was held among registered nurses in Dutch home-care organisations and hospitals in 2015. The questionnaire addressed the perceived attractiveness of different aspects of people-centred, integrated home care. In total 328 nurses filled in the questionnaire (54% response rate). The findings showed that most home-care nurses (70% to 97%) and 36% to 76% of the hospital nurses regard the different aspects of people-centred, integrated home care as attractive. Specific aspects that home-care nurses find attractive are promoting the patient's self-reliance and having a network in the community. Hospital nurses are mainly attracted to health-related prevention and taking control in complex situations. No clear differences between the educational levels were found. It is concluded that most home-care nurses and a minority of hospital nurses feel attracted to people-centred, integrated home care, irrespective of their educational level. The findings are relevant to policy makers and home-care organisations who aim to expand the home-care nursing workforce. Answer: The research on whether integrated nursing home care is cheaper than traditional care provides mixed results. According to a study conducted in the Netherlands, integrated care had lower informal direct care costs per resident and per activity, as well as lower average costs per direct activity performed by formal caregivers. However, the total average costs per resident per day and the costs of formal direct care per resident were higher in integrated care, as were the costs of delivering a set of indirect activities to residents with somatic problems (PUBMED:18755460). This suggests that while certain aspects of integrated care may be less expensive, overall costs may be higher compared to traditional care. Another study found that the introduction of integrated nursing home care affects the total average duration and frequency of direct care activities, but there was no noticeable impact on individual activities or on differences in activities received by somatic and psycho-geriatric residents (PUBMED:17658652). This indicates that integrated care may change how care is delivered, but it does not necessarily translate into lower costs. In the context of dental care within nursing homes, integrated care resulted in better oral health outcomes and less treatment need compared to incidental care, but at a higher cost (PUBMED:23980555). This suggests that while integrated care may offer better quality outcomes in some areas, it may come at an increased financial cost. Overall, the assumption that integrated care has a cost-saving potential is only partially supported by the research, and the evidence suggests that integrated nursing home care may not always be cheaper than traditional care (PUBMED:18755460). Further research is needed to fully understand the cost implications of integrated care in nursing home settings.
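One arithmetic point implicit in the cost findings above: a lower average cost per direct-care activity under integrated care is compatible with a higher total cost per resident per day, because integrated care was also found to deliver more (and longer) direct-care activities overall (PUBMED:17658652). A toy calculation with invented numbers, not figures from the studies, illustrates how both findings can hold at once:

```python
# Hypothetical numbers, for illustration only: a lower unit cost per activity can still
# produce a higher total daily cost when more care activities are delivered per resident.
cost_per_activity = {"traditional": 6.00, "integrated": 5.50}   # euro per direct activity (invented)
activities_per_day = {"traditional": 20, "integrated": 24}      # activities per resident-day (invented)

for care_type in ("traditional", "integrated"):
    total = cost_per_activity[care_type] * activities_per_day[care_type]
    print(f"{care_type}: {total:.2f} euro per resident per day")
# traditional: 120.00 euro per resident per day
# integrated: 132.00 euro per resident per day
```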
Instruction: Histochemical detection of platelet esterase activity in the bone marrow postmortem: can megakaryocytes serve as indicators for time since death? Abstracts: abstract_id: PUBMED:2935204 Immunohistochemical localization of membrane and alpha-granule proteins in human megakaryocytes: application to plastic-embedded bone marrow biopsy specimens. Using a new technique for antigen localization, we have demonstrated platelet proteins in megakaryocytes in plastic-embedded biopsy specimens of normal human bone marrow. In a series of 25 specimens, megakaryocytes showed labeling with antibodies to the integral membrane glycoproteins IIIa, IIb, and the IIb-IIIa complex; granule membrane protein 140; and five alpha-granule matrix proteins: thrombospondin, factor VIII-related antigen, beta-thromboglobulin, platelet factor 4, and fibrinogen. The antibodies to the membrane glycoproteins IIIa, IIb, and IIb-IIIa produced diffuse cytoplasmic staining and heavier staining on the plasma membrane, whereas the antibodies to the alpha-granule matrix proteins produced a distinct granular staining within the cytoplasm. Staining for granule membrane protein 140 was also granular in distribution. Rare mononuclear cells consistent with megakaryocyte precursors were labeled with these markers. Other enzyme histochemical and lectin-binding studies showed that the enzyme alpha-naphthyl acetate esterase, the lectin Ulex europaeus I, and the periodic-acid Schiff reaction were consistent, but not specific, markers of megakaryocytes. This immunohistochemical technique should facilitate the examination of qualitative and quantitative changes in megakaryocytes in a variety of physiologic and pathologic processes. abstract_id: PUBMED:37255850 Spatial transcriptomics of murine bone marrow megakaryocytes at single-cell resolution. Background: While megakaryocytes are known for making platelets, recent single-cell RNA sequencing data have revealed subpopulations of megakaryocytes with predicted immunoregulatory and bone marrow niche-supporting roles. Although these studies uncovered interesting information regarding the transcriptional variation of megakaryocytes, the generation, localization, and regulation of these subsets have not yet been studied and therefore remain incompletely understood. Considering the complex organization of the bone marrow, we reasoned that the application of spatial transcriptomic approaches could help dissect megakaryocyte heterogeneity within a spatiotemporal context. Objectives: The aim of this study was to combine spatial context and transcriptomics to assess the heterogeneity of murine bone marrow megakaryocytes in situ at a single-cell level. Methods: Bone marrow sections were obtained from femurs of C57BL/6J mice. Using the murine whole transcriptome array on the Nanostring GeoMx digital spatial profiling platform, we profiled 44 individual megakaryocytes (CD41+ by immunofluorescence) in situ throughout the bone marrow, both adjacent and nonadjacent to the endothelium (directly in contact with vascular endothelial-cadherin-positive cells). Results: Principal component analysis revealed no association between transcriptomic profile and adjacency to the vasculature. However, there was a significant effect of proximal vs distal regions of the bone. Two and 3 genes were found overexpressed in the proximal and distal sides, respectively. Of note, proplatelet basic protein and platelet factor 4, 2 genes associated with platelet production, had higher expression in proximal megakaryocytes. 
Conclusion: This study indicates a possible effect of spatial location on megakaryocyte heterogeneity and substantiates further interest in investigating megakaryocyte subpopulations in the context of their spatial orientation. abstract_id: PUBMED:28914425 Platelet factor 4 increases bone marrow B cell development and differentiation. Platelet factor 4 (PF4) is a megakaryocyte-/platelet-derived chemokine with diverse functions as a regulator of vascular and immune biology. PF4 has a central role in vessel injury responses, innate immune cell responses, and T-helper cell differentiation. We have now discovered that PF4 has a direct role in B cell differentiation in the bone marrow. Mice lacking PF4 (PF4-/- mice) had fewer developing B cells in the bone marrow beginning after the pre-pro-B cell stage of differentiation. In vitro, PF4 increased the differentiation of hematopoietic progenitors to B cell lineage cells, indicating that PF4 has a direct effect on B cell differentiation. STAT5 activation is essential in early B cell development and PF4 increased the phosphorylation of STAT5. Taken together, these data demonstrate that PF4 has an important role in increasing B cell differentiation in the bone marrow environment. abstract_id: PUBMED:1743589 The effect of low affinity platelet factor 4 (LAPF4) secreted by human megakaryoblastic cell line (MEG-01) upon human bone marrow fibroblasts. Background: The role of low affinity platelet factor 4 (LAPF4) in the hemopoietic microenvironment has not yet been clarified. Methods: Low affinity platelet factor 4 (LAPF4) was purified from normal human platelets and the culture medium of the human megakaryoblastic cell line (MEG-01), and their effects upon the growth of human bone marrow fibroblasts were assessed in order to investigate the biological role of LAPF4. The purified LAPF4 was added to the culture media of human bone marrow fibroblasts up to the concentration of 200 ng/ml, and the growth rate of fibroblasts and the uptake of 3H-thymidine into fibroblasts were measured. Results: The molecular weight of LAPF4 from MEG-01 was approximately 8,800, which corresponded to the monomer type of LAPF4 from normal platelets. The density of fibroblasts after 10 days of culture was 4.6 +/- 0.9 x 10(5)/ml, 7.8 +/- 0.8 x 10(5)/ml and 11.3 +/- 0.6 x 10(5) in control medium, in the medium with LAPF4 from platelets and in the medium with LAPF4 from MEG-01, respectively, which indicates that LAPF4 from MEG-01 enhanced the growth rate of bone marrow fibroblasts almost 2.5 times. The uptake of 3H-thymidine into fibroblasts was significantly increased by 1,000 ng/ml of LAPF4 from MEG-01. Conclusions: These results suggest that LAPF4 may play a role in the proliferation of fibroblasts. abstract_id: PUBMED:26803701 Regulatory role of Megakaryocytes on Hematopoietic Stem Cells Quiescence by CXCL4/PF4 in Bone Marrow Niche. Platelet factor-4 (CXCL4/PF-4) is a member of CXC-chemokine family produced by megakaryocytic lineage and stored in platelet α-granules. Platelet stimulation by aggregating agents such as thrombin and ADP leads to CXCL4 secretion. CXCL4 plays several roles in coagulation, angiogenesis control, immune system modulation and spread of cancer. Megakaryocytes (Mks) are associated with the vascular niche in the bone marrow (BM) and are located in vicinity of BM sinusoids. Mk-derived CXCL4 is involved in several hematopoietic processes, including inhibition of megakaryopoiesis and maintenance of hematopoietic stem cell (HSC) quiescence.
The major aim of this review article was to evaluate the role of CXCL4 in hematological malignancies, promotion of HSC quiescence as well as BM niche cells. abstract_id: PUBMED:6788894 Human megakaryocytes. II. Expression of platelet proteins in early marrow megakaryocytes. Analysis of various platelet proteins by immunofluorescence demonstrated that platelet glycoproteins Ib, IIb, and IIIa, as well as plasma factor VIII antigen (factor VIII:AGN), platelet factor 4, and fibronectin are present in the vast majority of morphologically recognizable megakaryocytes. In addition, a small number of lymphoid-like mononuclear marrow cells, representing approximately 1.4--2.9/10(4) marrow cells, was found to express the same platelet proteins. This population of early marrow megakaryocytes is analogous to small acetylcholinesterase-positive rat and mouse marrow cells. Fc receptors for IgG were expressed in all megakaryocytes and megakaryocyte precursors, whereas the Ia antigen was detected only on a proportion of mature megakaryocytes and not on only early or precursor megakaryocytes. Platelet glycoproteins Ib, IIb, and IIIa, as well as factor VIII:AGN, and platelet factor 4 were established as distinct markers for marrow megakaryocytes and may be helpful for identifying megakaryocytic cells as well as for monitoring events of megakaryocyte differentiation. abstract_id: PUBMED:32726410 Increased CXCL4 expression in hematopoietic cells links inflammation and progression of bone marrow fibrosis in MPN. Primary myelofibrosis (PMF) is a myeloproliferative neoplasm (MPN) that leads to progressive bone marrow (BM) fibrosis. Although the cellular mutations involved in the pathogenesis of PMF have been extensively investigated, the sequential events that drive stromal activation and fibrosis by hematopoietic-stromal cross-talk remain elusive. Using an unbiased approach and validation in patients with MPN, we determined that the differential spatial expression of the chemokine CXCL4/platelet factor-4 marks the progression of fibrosis. We show that the absence of hematopoietic CXCL4 ameliorates the MPN phenotype, reduces stromal cell activation and BM fibrosis, and decreases the activation of profibrotic pathways in megakaryocytes, inflammation in fibrosis-driving cells, and JAK/STAT activation in both megakaryocytes and stromal cells in 3 murine PMF models. Our data indicate that higher CXCL4 expression in MPN has profibrotic effects and is a mediator of the characteristic inflammation. Therefore, targeting CXCL4 might be a promising strategy to reduce inflammation in PMF. abstract_id: PUBMED:33816691 Platelet TSP-1 controls prostate cancer-induced osteoclast differentiation and bone marrow-derived cell mobilization through TGFβ-1. The development of distant metastasis is the leading cause of prostate cancer (CaP)-related death, with the skeleton being the primary site of metastasis. While the progression of primary tumors and the growth of bone metastatic tumors are well described, the mechanisms controlling pre-metastatic niche formation and homing of CaP to bone remain unclear. Through prior studies, we demonstrated that platelet secretion was required for ongoing tumor growth and pre-metastatic tumor-induced bone formation. Platelets stimulated bone marrow-derived cell (BMDC) mobilization to tumors supporting angiogenesis. We hypothesized that proteins released by the platelet α granules were responsible for inducing changes in the pre-metastatic bone niche. 
We found that the classically anti-angiogenic protein thrombospondin (TSP)-1 was significantly increased in the platelets of mice with RM1 murine CaP tumors. To determine the role of increased TSP-1, we implanted tumors in TSP-1 null animals and assessed changes in tumor growth and pre-metastatic niche. TSP-1 loss resulted in increased tumor size and enhanced angiogenesis by immunohistochemistry. Conversely, TSP-1 deletion reduced BMDC mobilization and enhanced osteoclast formation resulting in decreased tumor-induced bone formation as measured by microcomputed tomography. We hypothesized that changes in the pre-metastatic niche were due to the retention of TGF-β1 in the platelets of mice after TSP-1 deletion. To assess the importance of platelet-derived TGF-β1, we implanted RM1 CaP tumors in mice with platelet factor 4-driven deletion of TGF-β1 in platelets and megakaryocytes. Like TSP-1 deletion, loss of platelet TGF-β1 resulted in increased angiogenesis with a milder effect on tumor size and BMDC release. Within the bone microenvironment, platelet TGF-β1 deletion prevented tumor-induced bone formation due to increased osteoclastogenesis. Thus, we demonstrate that the TSP-1/TGF-β1 axis regulates pre-metastatic niche formation and tumor-induced bone turnover. Targeting the platelet release of TSP-1 or TGF-β1 represents a potential method to interfere with the process of CaP metastasis to bone. abstract_id: PUBMED:6833369 New synthesis of a platelet-specific protein: platelet factor 4 synthesis in a megakaryocyte-enriched rabbit bone marrow culture system. The site of synthesis of platelet-specific proteins remains to be established. With the use of short-term megakaryocyte-enriched cultures, direct evidence was obtained to show that megakaryocytes synthesize the platelet-specific protein, platelet factor 4. A megakaryocyte-enriched fraction of rabbit bone marrow for culture was obtained by centrifugal elutriation and cultured with [3H]leucine. Newly synthesized 3H-platelet factor 4 was sought by copurification with added carrier rabbit platelet factor 4, using heparin agarose affinity chromatography and immunoprecipitation with specific goat anti-rabbit platelet factor 4 antisera. SDS PAGE of the washed immunoprecipitates demonstrated a [3H]leucine-containing peak which migrated identically with purified homogeneous rabbit platelet factor 4. A second, slightly larger molecular-weight protein was identified in the gels also, suggesting that rabbit platelet factor 4 may be synthesized as a larger molecular-weight precursor in rabbit megakaryocytes. These results provide direct evidence that the platelet-specific protein, platelet factor 4, is synthesized in rabbit megakaryocytes before it is packaged into alpha-granules for release in circulating platelets. abstract_id: PUBMED:34582554 Don't you forget about me(gakaryocytes). Platelets (small, anucleate cell fragments) derive from large precursor cells, megakaryocytes (MKs), that reside in the bone marrow. MKs emerge from hematopoietic stem cells in a complex differentiation process that involves cytoplasmic maturation, including the formation of the demarcation membrane system, and polyploidization. The main function of MKs is the generation of platelets, which predominantly occurs through the release of long, microtubule-rich proplatelets into vessel sinusoids. However, the idea of a 1-dimensional role of MKs as platelet precursors is currently being questioned because of advances in high-resolution microscopy and single-cell omics. 
On the one hand, recent findings suggest that proplatelet formation from bone marrow-derived MKs is not the only mechanism of platelet production, but that it may also occur through budding of the plasma membrane and in distant organs such as lung or liver. On the other hand, novel evidence suggests that MKs not only maintain physiological platelet levels but further contribute to bone marrow homeostasis through the release of extracellular vesicles or cytokines, such as transforming growth factor β1 or platelet factor 4. The notion of multitasking MKs was reinforced in recent studies by using single-cell RNA sequencing approaches on MKs derived from adult and fetal bone marrow and lungs, leading to the identification of different MK subsets that appeared to exhibit immunomodulatory or secretory roles. In the following article, novel insights into the mechanisms leading to proplatelet formation in vitro and in vivo will be reviewed and the hypothesis of MKs as immunoregulatory cells will be critically discussed. Answer: The abstracts provided do not directly address the use of megakaryocytes or platelet esterase activity as indicators for time since death in postmortem analyses. However, they do provide information on the characteristics and functions of megakaryocytes, as well as the proteins they express, which could potentially be relevant to the question. Megakaryocytes are known to express various platelet proteins, including glycoproteins IIb, IIIa, and the IIb-IIIa complex, as well as alpha-granule matrix proteins such as thrombospondin, factor VIII-related antigen, beta-thromboglobulin, platelet factor 4, and fibrinogen (PUBMED:2935204). Additionally, megakaryocytes have been shown to synthesize platelet factor 4 (PF4), a platelet-specific protein (PUBMED:6833369), and their role in bone marrow homeostasis and immunoregulation has been highlighted (PUBMED:34582554). While these abstracts provide insights into the proteins produced by megakaryocytes and their potential roles in various physiological and pathological processes, they do not offer evidence on the stability of these markers after death or their correlation with the postmortem interval. Therefore, based on the provided abstracts, it cannot be concluded that megakaryocytes or the detection of platelet esterase activity can serve as reliable indicators for time since death in postmortem analyses. Further research would be required to establish any potential relationship between megakaryocyte characteristics and the determination of postmortem interval.
Instruction: Is the absence of a normal nocturnal fall in blood pressure (nondipping) associated with cardiovascular target organ damage? Abstracts: abstract_id: PUBMED:24061070 Nocturnal hypertension or nondipping: which is better associated with the cardiovascular risk profile? Background: Both increased night blood pressure (BP) and nondipping are associated with worse cardiovascular risk and prognosis. However, as they are often related features, their relative importance has been difficult to assess separately. In this study we address separate associations of nocturnal hypertension and nondipping with cardiovascular risk profile in treated and untreated hypertensive patients. Methods: A total of 37,096 untreated patients and 62,788 patients receiving antihypertensive treatment from the Spanish Ambulatory Blood Pressure Monitoring Registry were included. Each cohort was separated into 4 groups: group 1, night systolic blood pressure (SBP) <120 mm Hg and normal dipping (>10%); group 2, night SBP <120 mm Hg and nondipping (≤10%); group 3, nocturnal hypertension (SBP ≥120 mm Hg) and normal dipping; and group 4, nocturnal hypertension and nondipping. Results: The smallest proportion of patients with additional cardiovascular risk factors, organ damage, and history of previous events was observed in the group with both normal night SBP and dipping, whereas those with both nocturnal hypertension and nondipping showed the largest proportion of cardiovascular risk factors and diseases. When groups showing only 1 abnormality were compared, nondipping was associated with female sex, reduced renal function, and previous cardiovascular events, whereas nocturnal hypertension was associated with male sex, smoking, and increased urinary albumin excretion. In treated patients, it was also associated with the presence of diabetes. Conclusion: Nondipping is related to more advanced disease (reduced renal function and clinical evidence of cardiovascular disease), whereas nocturnal hypertension is associated with albuminuria. The worst cardiovascular risk profile is present in patients exhibiting both nocturnal hypertension and nondipping. abstract_id: PUBMED:32368135 Isolated Nocturnal Hypertension: What Do We Know and What Can We Do? Nocturnal hypertension has been recognized as a significant risk factor for cardio- and cerebrovascular diseases. Blood pressure (BP) monitoring significantly increased our awareness of nocturnal hypertension and studies revealed its influence on target organ damage. Nocturnal hypertension is associated with nonphysiological 24-h BP patterns, which consider inadequate drop or even increment of nighttime BP in comparison with daytime BP (nondipping and reverse dipping). Nevertheless, investigations showed that nocturnal hypertension was a predictor of adverse outcome independently of circadian BP pattern. There are still many uncertainties regarding diagnosis, mechanisms and treatment of nocturnal hypertension. There is a small difference between American and European guidelines in cutoff values defining nocturnal hypertension. Pathophysiology is also not clear because many conditions such as diabetes, metabolic syndrome, obesity, sleep apnea syndrome, and renal diseases are related to nocturnal hypertension and nonphysiological circadian BP pattern, but mechanisms of nocturnal hypertension still remain speculative. Therapeutic approach is another important issue and chronotherapy provided the best results so far.
There are studies which showed that some groups of antihypertensive medications are more effective in regulation of nocturnal BP, but it seems that the timing of drug administration has a crucial role in the reduction of nighttime BP and conversion of circadian patterns from nonphysiologic to physiologic. Follow-up studies are necessary to define clinical benefits of nocturnal BP reduction and restoring unfavorable 24-h BP variations to physiological variant. abstract_id: PUBMED:9321744 Is the absence of a normal nocturnal fall in blood pressure (nondipping) associated with cardiovascular target organ damage? Objective: To determine whether the failure to decrease blood pressure normally during sleep is associated with more prominent target organ damage. Methods: Cardiac and vascular structure and function were characterized in 183 asymptomatic, unmedicated hypertensive patients and compared with their ambulatory blood pressures. Results: The 104 patients with a normal (> 10%) nocturnal fall in systolic blood pressure (dippers) were similar to the 79 patients with an abnormal fall (nondippers) in sex, race, body size, smoking history, and average awake ambulatory blood pressure. Nondippers tended to be older (57 versus 54 years, P = 0.06). The supine blood pressure upon completion of the ultrasound studies was higher in the nondippers (156/93 versus 146/89 mmHg, P < 0.005) as was the variability of the awake diastolic blood pressure. There were no differences between dippers and nondippers in left ventricular mass (170 versus 172 g), mass index (90 versus 91 gm/m2), prevalence of abnormal ventricular geometry, common carotid artery diameter (5.74 versus 5.75 mm), and vascular strain. Although nondippers were more likely to have carotid artery plaque (41 versus 27%, P = 0.053) and an increased intimal-medial thickness (0.84 versus 0.79 mm, P < 0.05), adjustment for age rendered the differences insignificant. There were no differences in the relation of awake and sleeping systolic pressures to the left ventricular mass (r = 0.36 and 0.35, respectively, both P < 0.005) or to the carotid wall thickness (r = 0.28 and 0.29, respectively, both P < 0.005). When the 114 men and 69 women were considered separately, similar findings were obtained. When the 109 whites and 56 blacks (African-Americans and Afro-Caribbeans) were considered separately, there were no differences in left ventricular structure in either group, and differences in vascular structure were confined to the white subgroup. Conclusion: The lack of a normal nocturnal fall in blood pressure is not associated with an increase in left ventricular mass or in arterial disease independently of age. Age-related changes in carotid artery wall thickness and plaque among nondippers may reflect a contribution of an altered baroreceptor function to the lack of normal nocturnal and supine blood pressure decreases. abstract_id: PUBMED:30597750 Nocturnal pulse rate correlated with ambulatory blood pressure and target organ damage in patients with chronic kidney disease. The relationship between resting pulse rate (PR) and the occurrence of hypertension and cardiovascular (CV) mortality has been described in the general population. Few studies have examined the relationship between ambulatory PR, ambulatory blood pressure (BP), and target organ damage (TOD) in patients with chronic kidney disease (CKD). A total of 1509 patients with CKD were recruited in our hospital.
Ambulatory blood pressure monitoring (ABPM) over a 24-hours period was performed and referenced with clinical data in this cross-sectional study. TOD was measured by estimated glomerular filtration rate (eGFR), left ventricular hypertrophy (LVH), and carotid intima-media thickness (cIMT). Univariate and multivariate analyses were used to evaluate the relationship between PR, BP, and TOD. The percentage of male patients was 58.3% with a mean age of 44.6 ± 16.2 years. Nocturnal PR rather than 24-hours PR or daytime PR was an independent risk factor for clinical hypertension, 24-hours hypertension, BP dipper state, poor renal function, and LVH. In addition, the authors found that nighttime PR >74 beats/min (bpm) group was independently associated with clinical hypertension, 24-hours hypertension, day and night hypertension, nondipping BP, lower eGFR, and LVH when compared with nighttime PR <64 bpm group. Furthermore, 1:1 propensity score matching between PR ≤74 bpm group and PR >74 bpm group was performed. Multivariate analyses indicated nighttime PR >74 bpm remained independently associated with clinical hypertension, daytime and nighttime hypertension, and LVH. An increased nocturnal PR is associated with TOD, higher BP, and nondipping BP in patients with CKD. abstract_id: PUBMED:12799090 Associations between nondipping of nocturnal blood pressure decrease and cardiovascular target organ damage in strictly selected community-dwelling normotensives. Background: In hypertensives, nondippers are more likely than dippers to suffer silent, as well as overt, hypertensive target organ damage. In this study, we investigated whether a nondipper status was associated with target organ damage in normotensives. Methods: We performed ambulatory blood pressure (BP) monitoring, echocardiography, and carotid ultrasonography and measured natriuretic peptides and urinary albumin (UAE) in 74 normotensive subjects with the following criteria: 1) clinical BP <140/90 mm Hg; 2) average 24-h ambulatory BP <125/80 mm Hg. Results: The left ventricular mass index (LVMI) and the relative wall thickness (RWT) measured by echocardiography were greater in nondippers than dippers (LVMI: 103 +/- 26 v 118 +/- 34 g/m(2), P <.05; RWT: 0.38 +/- 0.07 v 0.43 +/- 0.09, P <.01). Plasma atrial natriuretic peptide (ANP) and brain natriuretic peptide (BNP) were higher in nondippers than dippers (ANP: 14 +/- 10 v 36 +/- 63 pg/mL, P <.01; BNP: 16 +/- 12 v 62 +/- 153 pg/mL, P <.05). There were no significant differences in UAE and intima-media thickness measured by carotid ultrasonography. Conclusions: Normotensive nondipping may not reflect renal damage, but may have a predominant effect on cardiac damage. Nondipping of nocturnal BP seems to be a determinant of cardiac hypertrophy and remodeling, and may result in a cardiovascular risk independent of ambulatory BP levels in normotensives. abstract_id: PUBMED:21414566 β2 adrenergic receptor polymorphisms and nocturnal blood pressure dipping status in the Wisconsin Sleep Cohort Study. Nondipping nocturnal blood pressure (BP) is associated with target organ damage and cardiovascular disease. We hypothesized that β1- and β2-AR-associated single nucleotide polymorphisms (SNPs) would associate with nondipping BP patterns. Participants (n = 497, age range 30-74 years, 40% female) of the Wisconsin Sleep Cohort Study with at least one ambulatory BP monitoring test were included. Nondipping was defined as less than a 10% dip in sleep BP compared with wake BP.
Dipping ratios were calculated as sleep/wake BP. Single nucleotide polymorphisms in the β1-AR (rs7076938, tagging for Gly389Arg) and β2-AR (rs17778257 and rs2400707, tagging for Arg16Gly and Gln27Glu) were selected. β2-AR SNP rs2400707 A-positive subjects (tagging for Glu27) had higher systolic and diastolic dipping ratios in a dose-response fashion. Systolic dipping ratios were: GG = 0.846; AG = 0.854; AA = 0.861 (P = .015). Diastolic dip ratios were: GG = 0.807; AG = 0.815; AA = 0.824 (P = .026). The β2-AR rs17778257/rs2400707 A/A haplotype was associated with dipping ratios and systolic nondipping status (nondipping odds ratio 2.0 [1.0-3.8] for A/A versus A/G). Results were similar when models included participants on antihypertensive medications. Higher dipping ratios indicating a lack of nocturnal BP dipping are associated with β2-AR polymorphisms. Nocturnal dipping patterns may be modulated by β2-AR polymorphisms. abstract_id: PUBMED:19505276 Management of nocturnal hypertension. Nocturnal hypertension is a common complication of essential and secondary hypertension. Abnormal circadian blood pressure patterns associated with elevated sleep blood pressure include nondipping and reverse dipping, both of which are associated with increased target-organ damage and adverse cardiovascular outcomes. Nocturnal hypertension can be treated with several approaches that include both lifestyle changes, such as sodium restriction and potassium supplementation, and pharmacological treatments, primarily through the use of bedtime dosing of antihypertensive agents. Evening administration of blockers of the renin-angiotensin-aldosterone system is the most consistently effective of these treatment strategies. In this review, we provide a detailed discussion of the options available for the management of nocturnal hypertension. abstract_id: PUBMED:9814612 Is nondipping in 24 h ambulatory blood pressure related to cognitive dysfunction? Objective: Associations between the outcome of 24 h ambulatory monitoring and cognitive performance were studied in order to evaluate the potential relevance of ambulant blood pressure status to brain function. It was hypothesized that a small daytime-night-time difference in mean blood pressure (nondipping) is associated with reduced cognitive performance, in line with studies in hypertensive subjects that have reported associations between nondipping and target-organ damage. Methods: The study followed a cross-sectional design and was part of a larger research programme on determinants of cognitive aging (Maastricht Aging Study, MAAS). A group of 115 community residents aged 28-82 years was recruited from a general practice population and screened for cardiovascular events and medication use. All underwent 24 h blood pressure monitoring. Cognitive performance was measured with tests of verbal memory, attention, simple speed and information processing speed. Results: Mean daytime or night-time levels of both systolic and diastolic blood pressure were unrelated to cognitive outcome, when age, sex and educational level were controlled for. Differences between mean daytime and night-time blood pressure (based on both narrow and wide measurement intervals for day and night-time periods) were positively associated with memory function (5-9% of additional variance explained) and one sporadic positive association was found on the sensorimotor speed score (4%). Nondippers (n=15) showed lower levels of both memory and sensorimotor speed scores.
Conclusions: Ambulatory blood pressure status was not associated with cognitive performance. A reduced nocturnal blood pressure drop was associated with quite specific cognitive deficits, but the underlying mechanism remains to be determined. abstract_id: PUBMED:31456375 Clinical Significance and Therapeutic Implication of Nocturnal Hypertension: Relationship between Nighttime Blood Pressure and Quality of Sleep. Recent global hypertension guidelines recommend an early, strict and 24-hour blood pressure (BP) control for the prevention of target organ damage and cardiovascular events. Out-of-office BP measurement such as ambulatory BP monitoring and home BP monitoring is now widely utilized to rule out white-coat hypertension, to detect masked hypertension, to evaluate the effects of antihypertensive medication, to analyze diurnal BP variation, and to increase drug adherence. Nocturnal hypertension has been neglected in the management of hypertension despite of its clinical significance. Nighttime BP and non-dipping patterns of BP are stronger risk predictors for the future cardiovascular mortality and morbidity than clinic or daytime BP. In addition to ambulatory or home daytime BP and 24-hour mean BP, nocturnal BP should be a new therapeutic target for the optimal treatment of hypertension to improve prognosis in hypertensive patients. This review will provide an overview of epidemiology, characteristics, and pathophysiology of nocturnal hypertension and clinical significance, therapeutic implication and future perspectives of nocturnal hypertension will be discussed. abstract_id: PUBMED:18398329 Pulse pressure or dipping pattern: which one is a better cardiovascular risk marker in resistant hypertension? Objective: Nocturnal blood pressure (BP) reduction and ambulatory pulse pressure (PP) are well known prognostic markers obtained from ambulatory BP monitoring (ABPM). The aim of this study is to investigate which one of these ABPM parameters is related to high cardiovascular risk profile in resistant hypertension, based on their associations with target organ damage (TOD). Methods: Clinical-demographic, laboratory and ABPM variables were recorded in a cross-sectional study involving 907 resistant hypertensive patients. Nocturnal systolic BP reduction and 24-h PP were assessed both as continuous and dichotomized variables (PP at the upper tertile value: 63 mmHg). Statistical analyses included bivariate tests and multivariate logistic regression with each TOD as the dependent variable. Results: Patients with the nondipping pattern and high 24-h PP shared some characteristics: they were older, had higher prevalence of cerebrovascular disease and nephropathy, higher office and 24-h BP levels, increased serum creatinine and microalbuminuria, and higher left ventricular mass index than their counterparts. Additionally, patients with high PP had a greater prevalence of diabetes and other TOD. In multivariate logistic regression, high PP was independently associated with all TODs even after adjustment for sex, age, BMI, cardiovascular risk factors, 24-h mean arterial pressure and antihypertensive treatment, whereas nondipping pattern was only associated with hypertensive nephropathy. Furthermore, PP was more strongly associated with the number of TOD than the nocturnal systolic blood pressure (SBP) fall. Conclusions: In a large group of resistant hypertensive patients, an increased 24-h PP shows a closer correlation with high cardiovascular risk profile than the nocturnal BP reduction. 
Answer: Yes, the absence of a normal nocturnal fall in blood pressure, known as nondipping, is associated with cardiovascular target organ damage. Studies have shown that nondipping is related to more advanced disease, including reduced renal function and clinical evidence of cardiovascular disease (PUBMED:24061070). Nondippers tend to have more prominent target organ damage compared to dippers, as indicated by the presence of carotid artery plaque and increased intimal-medial thickness, although these differences may not be significant after adjusting for age (PUBMED:9321744). Additionally, nondipping has been associated with left ventricular hypertrophy and remodeling, suggesting a cardiovascular risk independent of ambulatory blood pressure levels in normotensives (PUBMED:12799090). Furthermore, nondipping patterns have been linked to cognitive deficits, indicating potential relevance to brain function (PUBMED:9814612). Overall, nondipping is recognized as a significant risk factor for cardio- and cerebrovascular diseases and is associated with increased target-organ damage and adverse cardiovascular outcomes (PUBMED:32368135).
Instruction: Does transfer of work from a public sector organisation to a commercial enterprise without staff reductions increase risk of long-term sickness absence among the staff? Abstracts: abstract_id: PUBMED:23658426 Does transfer of work from a public sector organisation to a commercial enterprise without staff reductions increase risk of long-term sickness absence among the staff? A cohort study of laboratory and radiology employees. Background: Privatisations of public sector organisations are not uncommon, and some studies suggest that such organisational changes may adversely affect employee health. In this study, we examined whether transfer of work from public sector hospital units to commercial enterprises, without major staff reductions, was associated with an increased risk of long-term sickness absence among employees. Methods: A cohort study of 962 employees from four public hospital laboratory and radiology units in three hospitals which were privatised during the follow-up and 1832 employees from similar units without such organisational changes. Records of new long-term sick leaves (>90 days) were obtained from national health registers and were linked to the data. Mean follow-up was 9.2 years. Results: Age- and sex-adjusted HR for long-term sickness absence after privatisation was 0.83 (95% CI 0.68 to 1.00) among employees whose work unit underwent a change from a public organisation to a commercial enterprise compared with employees in unchanged work units. Further adjustments for occupation, socioeconomic status, type of job contract, size of residence and sick leaves before privatisation had little impact on the observed association. A sensitivity analysis with harmonised occupations across the two groups replicated the finding (multivariable adjusted HR 0.92 (0.70-1.20)). Conclusions: In this study, transfer of work from public organisation to commercial enterprise did not increase the risk of long-term sickness absence among employees. abstract_id: PUBMED:25239190 Reorganization increases long-term sickness absence at all levels of hospital staff: panel data analysis of employees of Norwegian public hospitals. Background: The Norwegian specialist health service has undergone many processes of reorganization during the last three decades. Changes are mainly initiated to increase the efficiency and quality of health care serving an ageing population under the condition of a diminishing labour supply. The aim of this study is to investigate the effects of reorganization on long-term sickness absence among different levels of hospital staff. Methods: The study draws on panel data on employees of Norwegian public hospitals in 2005 and 2007 (N = 106,715). National register data on individual employees' days of medically certified long-term (>16 days) sickness absence were linked with survey measures of actual reorganization executed at each hospital in each year. The surveys, answered by hospital administration staff, measured five types of reorganization: merging units, splitting up units, creating new units, shutting down units and reallocation of employees. The variation in sickness absence days was analysed using random and fixed effects Poisson regression with level of reorganization as the main explanatory variable.
Results: The fixed effects analysis shows that increasing the degree of organizational change at a hospital from a low to a moderate or high degree leads to an increase in the number of days of long-term sickness absence of respectively 9% (95% CI: 1.03-1.15) and 8% (95% CI: 1.02-1.15). There are few significant differences between employees in different education categories. Only physicians have a significantly higher relative increase in days of long-term sickness absence than the control group with lower tertiary education. Conclusions: Increased long-term sickness absence is a risk following reorganization. This risk affects all levels of hospital staff. abstract_id: PUBMED:32030546 Predicting Long-Term Sickness Absence and Identifying Subgroups Among Individuals Without an Employment Contract. Purpose Today, decreasing numbers of workers in Europe are employed in standard employment relationships. Temporary contracts and job insecurity have become more common. This study among workers without an employment contract aimed to (i) predict risk of long-term sickness absence and (ii) identify distinct subgroups of sick-listed workers. Methods 437 individuals without an employment contract who were granted a sickness absence benefit for at least two weeks were followed for 1 year. We used registration data and self-reported questionnaires on sociodemographics, work-related, health-related and psychosocial factors. Both were retrieved from the databases of the Dutch Social Security Institute and measured at the time of entry into the benefit. We used logistic regression analysis to identify individuals at risk of long-term sickness absence. Latent class analysis was used to identify homogenous subgroups of individuals. Results Almost one-third of the study population (n = 133; 30%) was still at sickness absence at 1-year follow-up. The final prediction model showed fair discrimination between individuals with and without long-term sickness absence (optimism adjusted AUC to correct for overfitting = 0.761). Four subgroups of individuals were identified based on predicted risk of long-term sickness absence, self-reported expectations about recovery and return to work, reason of sickness absence and coping skills. Conclusion The logistic regression model could be used to identify individuals at risk of long-term sickness absence. Identification of risk groups can aid professionals to offer tailored return to work interventions. abstract_id: PUBMED:30581808 Screening for the risk on long-term sickness absence. Background: Long-term sickness absence is a growing concern in Belgium and other European countries. Since 2017, Belgian physicians of the sickness funding organisations are required to assess the re-integration possibilities within the first two months of sickness absence. Given the shortage of physicians in the assessment of work disability and the growing number of people in sickness absence, there is a need for a triage tool, allowing to assign return-to-work support to patients having a high-risk profile not to resume work. Methods/design: The current study comprises a comprehensive validation process of a screening tool that supports Belgian physicians in guiding people back to work. The study consists of a theoretical construct validation (face validity and content validity), and an empirical construct validation (concurrence validity, factorial validity, predictive validity, hypothesis testing validity and known-group validity).
Expected impact of the study for Public Health: The screening instrument assessing the risk for long-term sickness absence is a tool developed to support physicians who work for sickness funds and for occupational health and safety organisations. Both professionals play an important role in the return to work process and the prevention of long-term sickness absence. The screening tool aims at making a distinction between people who will resume their work independently and people who will need support to do so. Generation of this prediction model will help physicians to focus effort and resources in the high-risk group. Results may also help understand the relationship between the biopsychosocial model and long-term sick-leave. abstract_id: PUBMED:26136596 Associations of chronic embitterment among NHS staff. Background: Chronic embitterment is commonly recognized among people seen by occupational health professionals, but its associations remain unclear. Aims: To investigate the associations of chronic embitterment with procedural justice, the psychological work contract, depression and the Health & Safety Executive's Management Standards Indicator Tool as well as selected employment variables. Methods: A sample of consecutive attendees at occupational health in a single NHS Trust were asked to complete a set of validated questionnaires and give further information about sickness absence and their work. Results: The sample (n = 236) was representative of those attending this occupational health department. Staff showing embitterment rated procedural justice and organizational support lower than other staff and were significantly more likely to have taken sickness absence, uncertified as well as certified. Uncertified sickness absence was almost universal in embittered staff. Although depression was significantly more prevalent among embittered staff, the majority of embittered staff were not depressed. Of those who scored as embittered and depressed or anxious, 93% had had certified sickness absence in the previous 12 months. On the Management Standards Indicator Tool, embittered individuals had significantly lower scores than others on Manager and Peer Support, Role, and Change subscales. Conclusions: Although the data are cross-sectional and self-reported, they provide evidence of the burden on organizations of chronic embitterment among their staff, particularly in terms of sickness absence. Staff who are embittered perceive their organization as unsupportive of them, and as showing low levels of procedural justice. abstract_id: PUBMED:37852191 Causes and Risk Factors for Absenteeism among Medical Staff in German Specialized Lung Clinics during the COVID Pandemic. Background: Staff shortages pose a major challenge to the health system. Objectives: The objective of this study was to clarify the role of different causative factors we investigated on staff absenteeism during the COVID pandemic. Methods: The prospective multicentre cohort study assessed the private and professional impact of the pandemic on health care workers (HCWs) using a specially developed questionnaire. HCWs from 7 specialist lung clinics throughout Germany were surveyed from December 1 to December 23, 2021. The current analysis addresses pandemic-related absenteeism. Results: 1,134 HCW (55% female; 18.4% male, 26.3% not willing to provide information on age or gender) participated. 72.8% had received at least one vaccination dose at the time of the survey, and 9.4% reported a COVID infection.
Of those with positive tests, 98% reported home quarantine for median (IQR) 14 (12-17) days; 10.3% of those who ultimately tested negative also reported quarantine periods of 14 (7-14) days. 32.2% of vaccinated respondents reported absenteeism due to vaccine reactions of 2 (1-3) days. Overall, 37% (n = 420) of HCW reported pandemic-related absenteeism, with 3,524 total days of absenteeism, of which 2,828 were due to illness/quarantine and 696 to vaccination effects. Independent risk factors for COVID-related absenteeism ≥5 days included already having COVID, but also concern about long-term effects of COVID (OR 1.782, p = 0.014); risk factors for vaccine-related absenteeism ≥2 days included concerns of late effects of vaccination (OR 2.2, 95% CI: 1.4-3.1, p < 0.000). Conclusion: Staff shortages due to quarantine or infections and vaccine reactogenicity have put a strain on German respiratory specialists. The fact that staff concerns also contributed to absenteeism may be helpful in managing future pandemic events to minimize staff absenteeism. abstract_id: PUBMED:23849237 Perceived organizational justice as a predictor of long-term sickness absence due to diagnosed mental disorders: results from the prospective longitudinal Finnish Public Sector Study. Organizational justice perceptions have been suggested to be associated with symptoms of mental health but the nature of the association is unknown due to reporting bias (measurement error related to response style and reversed causality). In this study, we used prospective design and long-term (>9 days) sickness absence with psychiatric diagnosis as the outcome measure. Participants were 21,221 Finnish public sector employees (the participation rate at baseline in 2000-2002 68%), who responded to repeated surveys of procedural and interactional justice in 2000-2004 along with register data on sickness absence with a diagnosis of depression or anxiety disorders (822 cases). Results from logistic regression analyses showed that a one-unit increase in self-reported and work-unit level co-worker assessed interactional justice was associated with a 25-32% lower odds of sickness absence due to anxiety disorders. These associations were robust to adjustments for a variety of potential individual-level confounders including chronic disease (adjusted OR for self-reported interactional justice 0.77, 95% CI 0.65-0.91) and were replicated using co-worker assessed justice. Only weak evidence of reversed causality was found. The results suggest that low organizational justice is a risk factor for sickness absence due to anxiety disorders. abstract_id: PUBMED:34564731 Sickness absence and return to work among employees with knee osteoarthritis with and without total knee arthroplasty: a prospective register linkage study among Finnish public sector employees. Objective: This study aimed to examine duration of sickness absence due to knee osteoarthritis (OA) and sustained return to work (RTW) among municipal employees, who had at least one compensated sickness absence period due to knee OA. The contribution of sociodemographic characteristics, diabetes and previous sickness absence were assessed. We differentiated between participants with and without total knee arthroplasty (TKA). Methods: Data from 123 506 employees in the Finnish Public Sector Study were linked with national health and mortality register information. There were 3 231 sickness absence periods (2372 participants) due to knee OA in 2005-2011.
Kaplan-Meier curves for sustained RTW were obtained and median time with inter-quartile range (IQR) calculated for those with and without TKA. Cox regression analyses were carried out in multivariable analyses. Results: The median time to RTW from the beginning of sickness absence was 21-28 days when TKA was not related to sickness absence and 92-145 days when it was. Among participants with no TKA, age 60-64, non-sedentary work, diabetes, and previous sickness absences predicted longer time to RTW, while pain medication predicted a shorter time. Among participants with TKA, non-sedentary work and previous sickness absences predicted a longer time to RTW. Conclusions: The clinical relevance of the difference in time to RTW between employees with or without TKA was substantial. Employees with knee OA working in physically demanding jobs need work modifications after TKA, and this calls for a dialog between occupational health care professionals and workplaces. abstract_id: PUBMED:34134668 Alternative duty work as workplace-initiated procedure to reduce sickness absence. Purpose: Alternative duty work is a procedure that enables an employee with a short-term disability to perform modified duties as an alternative to sickness absence. We examined whether the implementation of an alternative duty policy was associated with reduced sickness absence in the Finnish public sector. Methods: Two city administrations (A and D) that implemented an alternative duty work policy to their employees (n = 5341 and n = 7538) served as our intervention cities, and two city administrations (B and C) that did not implement the policy represented the reference cities (n = 6976 and n = 6720). The outcomes were the number of annual days, all episodes, and short-term (< 10 days) episodes during the 2 years before versus the 2 years after the intervention year. We applied repeated measures negative binomial regression analyses, using the generalized estimating equations method and the difference-in-difference analysis to compare the intervention and control cities (adjusted for sex, age, type of job contract, occupational class). Results: During the five-year study period, the number of sickness absence days and episodes increased in both the intervention and control cities. Covariate-adjusted analysis of relative risk showed that the overall increase in post- versus pre-intervention sickness absence days was smaller in intervention City A, RR = 1.14 (95% CI = 1.09-1.21) than in control cities B and C, RR = 1.19 (95% CI = 1.14-1.24), group × time interaction p < 0.02. In intervention City D, we found a corresponding result regarding all sickness absence episodes and short-term sickness absence episodes but not days. Conclusions: This follow-up suggests that implementing an alternative duty work policy may marginally decrease employees' sickness absences. abstract_id: PUBMED:36855219 Sickness absence among employees of healthcare organizations in the public sector in Mongolia: A cross-sectional study. Objectives: This study aimed to understand the rate of sickness absence (SA) among employees of public healthcare organizations in Mongolia, to identify factors associated with long-term SA, and to estimate costs due to SA. Methods: This cross-sectional study included employees of public healthcare organizations who had certified SA from 2016 to 2018. Sociodemographic and occupational characteristics of absentees and the data on absences were collected.
A logistic regression analysis was performed to identify factors associated with long-term SA (≥15 days) among employees who had SA. Absence parameters and the average costs due to SA were calculated and the total cost due to SA at all public healthcare organizations was estimated. Results: From 2016 to 2018, there were 13 653 absentees and 21 043 SA, and the absence rate was 0.9%. The average absence length per absence and absentee were 9.63 days and 14.85 days, respectively. Factors associated with long-term SA were age ≥40 years, 10-19 years in employment, working at the second and tertiary levels, and night shift. The average cost per absentee was 295.5 USD, and the estimated total cost for all health organizations was 1 796 993 USD per year. Conclusions: The absence rate was 0.9% and older age, longer work experience, higher organizational level, and night shift were associated with taking a long-term SA. To reduce the costs of absenteeism and promote the health of employees in healthcare organizations, policymakers should review the policies related to SA and develop national guidelines on SA for employers, healthcare managers, and employees. Answer: The transfer of work from a public sector organization to a commercial enterprise without staff reductions does not increase the risk of long-term sickness absence among the staff. This conclusion is supported by a cohort study of 962 employees from four public hospital laboratory and radiology units that were privatized during the follow-up period. The study compared these employees with 1832 employees from similar units without such organizational changes. The results showed that the age- and sex-adjusted hazard ratio (HR) for long-term sickness absence after privatization was 0.83 (95% CI 0.68 to 1.00), indicating no increased risk of long-term sickness absence among employees whose work unit underwent a change from a public organization to a commercial enterprise compared with employees in unchanged work units. Further adjustments for occupation, socioeconomic status, type of job contract, size of residence, and sick leaves before privatization had little impact on the observed association. A sensitivity analysis with harmonized occupations across the two groups replicated the finding (multivariable adjusted HR 0.92 [0.70-1.20]) (PUBMED:23658426).
Instruction: Do drainage liquid characteristics serve as predictors for seroma formation after incisional hernia repair? Abstracts: abstract_id: PUBMED:19998048 Do drainage liquid characteristics serve as predictors for seroma formation after incisional hernia repair? Background: Though the occurrence of postoperative seroma after incisional hernia repair using mesh reinforcement is very common, little is known about the genesis of seroma formation. The aim of this study was to determine the characteristics of drainage liquid as a potential predictor for the development of seroma after incisional hernia mesh repair. Furthermore, the characteristics of drainage liquid were compared to the characteristics of seroma liquid. Methods: The incidence of postoperative seroma associated with pH value, concentration of lactate, total protein, albumin, propeptide-III-procollagen (P-III-P), hyaluronan, fibronectin and IL-1 receptor antagonist (IL-1-RA) in the drainage liquid were prospectively determined in 38 patients who underwent incisional hernia repair by lightweight polypropylene-polyglactin composite mesh (Vypro-II). The findings were compared to the seroma liquid characteristics of those patients who developed a seroma formation. Results: In 11 patients (29%), seroma formation was present after removal of the drainage. We observed significantly elevated mean drainage volume (148 +/- 64 ml vs. 93 +/- 71 ml; P = 0.014) and significantly reduced pH value (7.26 +/- 0.12 vs. 7.41 +/- 0.23; P = 0.016) and IL-1-RA (100 +/- 71 U/ml vs. 145 +/- 108 U/ml; P = 0.016) in the drainage liquid of patients with seroma formation in comparison to patients without seroma formation. In addition, we found significantly altered concentration of lactate (9.8 +/- 2.0 mmol/l vs. 5.5 +/- 1.4 mmol/l; P < 0.001), P-III-P (24 +/- 13 U/ml vs. 89 +/- 79 U/ml; P = 0.045) and fibronectin (0.10 +/- 0.03 g/l vs. 0.24 +/- 0.13 g/l; P = 0.005) in the drainage liquid in comparison to seroma liquid. Conclusions: The pH value of wound solution proves to be a reliable predictor for the subsequent presence of seroma formation. Furthermore, our findings implicate that seroma formation cannot be seen as persistent drainage liquid. abstract_id: PUBMED:21061138 Serum analyses for protein, albumin and IL-1-RA serve as reliable predictors for seroma formation after incisional hernia repair. Purpose: The aim of this study was to identify potential parameters as predictors for seroma formation after incisional hernia mesh repair. Methods: The incidence of postoperative seroma was determined prospectively in 37 patients who underwent incisional hernia repair with lightweight polypropylene-polyglactin composite mesh (Vypro-II®). Postoperative seroma manifestation was related to patient characteristics (gender, age, BMI, comorbidity, nicotine abuse) and to preoperative serum concentration of total protein, albumin, interleukin-1-receptor-antagonist (IL-1-RA), propeptid-III-procollagen, hyaluronan and fibronectin. Ultrasound investigation was performed on postoperative days 1, 2, 3, 8 and 10. Results: Ten patients (27%) developed seroma with a mean volume of 77 ± 88 ml. Higher BMI correlated with increased seroma formation (P = 0.038). In patients with seroma, total protein (67 ± 7 vs 72 ± 4 g/l; P = 0.037), albumin (42 ± 3 vs 40 ± 4 g/l; P = 0.018) and IL-1-RA (1.4 ± 1 vs 0.8 ± 0.6 U/ml; P = 0.048) exhibited significantly altered serum concentrations in comparison to patients without seroma formation.
No significant differences were seen in any other parameters. Conclusions: High BMI, lowered preoperative serum concentration of total protein and albumin, and high serum concentration of IL-1-RA are related to an elevated risk for postoperative seroma formation. abstract_id: PUBMED:36520177 Drain Versus No Drain in Open Mesh Repair for Incisional Hernia, Results of a Prospective Randomized Controlled Trial. Background: Open mesh repair of incisional hernia is associated with different local complications, particularly bleeding and seroma formation. Traditionally, drains have been placed perioperatively to prevent these complications, despite the lack of scientific evidence or expert consensus. We formulated the hypothesis that the absence of drainage would reduce number of patients presenting collections or complications. The present study aimed to compare postoperative complication rates after open mesh repair for incisional hernia with or without prophylactic wound drainage. Methods: Prospective randomized study using standardized surgical technique and drain placement. The primary endpoint was the evaluation of residual fluid collection with ultrasound on postoperative day 30. Other complications, subdivided into medical and surgical, were analyzed as secondary endpoints. Results: There were 144 patients randomized (70 with drain, 74 without drain). No difference was identified between both groups for fluid collection at 30 days (60.3% vs. 62%, p = 0.844). However, less surgical complications were identified in the drain group (21.7% vs. 42.7%, p = 0.007), with a lower wound dehiscence rate (1.5% vs. 9.3%, p = 0.041). Conclusions: Prophylactic drainage in open incisional hernia repair does not objectively reduce the rate of postoperative fluid collections. Therefore, our results do not support the use of routine drainage in incisional hernia repair. Trial Registration: Trial registration on clinicaltrials.gov (NCT00478348). abstract_id: PUBMED:28917016 Predictors of outpatient resource utilization following ventral and incisional hernia repair. Introduction: Little is known about the predictors of increased ambulatory costs following open ventral and incisional hernia repair (VIHR); however, postoperative complications would be expected to be associated with an increased burden on outpatient resources. The purpose of this study is to evaluate the impact of perioperative factors on outpatient resource utilization following VIHR. Methods: With IRB approval, the surgery scheduling system was queried to identify all cases of VIHR done at our institution over 3 years. Cases with other procedures done at time of VIHR were excluded. National Surgical Quality Improvement Program clinical data, physician billing data which included market and payor across cases, and medical record review data were combined and evaluated in order to quantify care and predictors of usage during the 6 months postoperatively. Results: Data were analyzed for 308 patients. Median patient age was 52 years (SD = 13.3), and over half were female. The number of outpatient visits to the surgical office varied from 0 to 18 [median = 2; interquartile range (IQR) = 1-3]. CDC Wound Class >1 was associated with increase of mean 1.4 visits (IQR: 0.5-2.3); p = 0.003. Component separation, longer duration of operation, and increased mesh size were also predictive of increased number of office visits (p < 0.01).
Postoperative infected seroma/seroma requiring drainage added a mean 2.3 visits (IQR: 1.3-3.3) (p < 0.001), and deep wound infection added a mean 3.9 visits (IQR: 1.9-5.9) (p < 0.001). Conclusions: Postoperative complications confer a significant burden for patients and to the outpatient surgical office. In an era in which improved quality and cost-efficiency has become imperative, measures to decrease risk of postoperative complications particularly for more complex VIHR would be expected to decrease resource utilization and increase value of care. abstract_id: PUBMED:27095812 Delayed onset seroma formation 'opting out' at 5 years after ventral incisional hernia repair. We present a case of delayed onset seroma formation presenting 5 years after ventral incisional hernia repair (VIHR) with mesh. The patient presented with several months of progressive abdominal fullness and eventual spontaneous drainage from a prior abdominal surgical incision site. Surgical drainage was performed with evolvement of mesh infection. After 5 months of conservative management, the patient remained symptomatic and continued to show evidence of infection. Subsequently, she underwent mesh explantation and definitive repair with complex abdominal wall reconstruction. To the best of our knowledge, this case represents the longest delay in the onset of seroma formation post-VIHR, reported in the literature. abstract_id: PUBMED:26007706 Deep seroma after incisional hernia repair. Case reports and review of the literature. Aim: Wound-related complications are common after incisional hernia repair with mesh; seroma formation is the most frequent problem. The formation of a deep seroma has been rarely reported in the literature. Material Of Study: In one year, September 2012-2013, 136 patients underwent surgery for incisional hernia repair, both elective and urgent. Results: The following complications were observed: one dislocation of polypropylene prosthesis, a massive relapsed seroma and two deep seromas described in this article. A 63-year-old female underwent open incisional hernia repair with an intraperitoneal PTFE patch. She developed recurrent seroma under the mesh drained percutaneously, and finally the prosthesis was removed. A 72-year-old male underwent open incisional hernia repair with an intraperitoneal PTFE patch. After several months the patient had seroma infection. The prosthesis was then removed. Conclusions: Seroma is a well-known complication of postoperative ventral hernia repair, especially where prosthetic mesh is used. The formation of a deep seroma is rare. Only a few works mention this complication in the literature. In the development of these chronic seromas a role may be played by a long-term inflammatory reaction, more pronounced with polypropylene and polyester meshes than with ePTFE. A conservative follow up of the seromas is recommended because drainage can introduce infection. In cases where the seroma causes discomfort or is infected then drainage is necessary. From experience at our institution we suggest that patients with the deep subtype of mesh-associated seromas may require closer clinical follow up. When possible, we recommend attempting the drainage of the liquid, eventually followed by microbiological examination. abstract_id: PUBMED:36576667 Fascia defect closure versus non-closure in minimal invasive direct inguinal hernia mesh repair: a systematic review and meta-analysis of real-world evidence.
Purpose: Laparoscopic and robotic inguinal hernia mesh repair are both common surgical procedures worldwide. Postoperative hernia recurrence and seroma formation are important concerns. In ventral hernia, primary defect closure in laparoscopic surgery reduces the recurrence rate. However, there is no synthetic evidence of direct inguinal hernia defect closure versus non-closure in minimal invasive surgery. Therefore, this study investigated the efficacy of defect closure in patients undergoing minimal invasive direct inguinal hernia mesh repair. Methods: Eligible studies were identified through a search of PubMed, Embase, Cochrane Library, and CINAHL from their inception until March 2022. Studies examining defect closure in laparoscopic direct inguinal hernia repair were included, and a meta-analysis was performed using the random-effect model. Sensitivity analyses were performed by removing one study at a time. The primary outcomes were hernia recurrence and seroma formation. Acute and chronic postoperative pain, operation time, and length of hospital stay were the secondary outcomes. Results: Five nonrandomized studies and one randomized controlled trial were included. Pooled analysis revealed defect closure might reduce the hernia recurrence rate (risk difference, - 0.02; 95% confidence interval [CI] - 0.04 to - 0.00; p = 0.02). The result of seroma formation (odds ratio, 0.49; 95% CI 0.17-1.46; p = 0.20) showed no significant difference. Moreover, no significant differences were observed in acute postoperative pain, chronic pain, length of hospital stay, and operation time. Conclusions: Our study indicated defect closure seems to be an option to reduce the direct inguinal hernia recurrence rate. No significant benefits were shown in seroma formation and other secondary outcomes. Our study was mostly based on nonrandomized studies and underestimated the effect of defect closure; thus, further high-quality studies are required to draw definitive conclusions. abstract_id: PUBMED:18065367 Late complication after mesh repair of incisional hernias: pseudocyst formation Introduction: Using mesh for hernia repair is a very common procedure; they are particularly useful for postoperative incisional hernias. The most common complications of mesh repair are seroma, haematoma or abscess formation. Previous literature data suggested that fibrotic cyst formation appearing in the late postoperative period is relatively rare. However, more recent studies and our own experience indicate that its incidence is more common and we have to consider it in the differential diagnosis of the complications of hernia repair. Patients: 148 incisional hernia mesh repairs were carried out between a period of 1st January, 2001 to 31st December, 2005. Fibrotic pseudocyst formation was observed in five cases. All developed as a late complication between 4 to 25 months postoperatively. An onlay polypropylene mesh was used in each case. All five patients underwent a wide excision of the pseudocyst wall and they recovered without complications. Discussion: The etiology of pseudocyst formation is unclear. Nevertheless, a possible relationship between the development of seroma and haematoma can not be ruled out, despite no fibrotic pseudocyst formation was observed in the early postoperative period. There is no evidence whether the characteristics of the mesh or the way of implantation would have any effect on pseudocyst formation, as well. It is noted that the only treatment of this late complication is surgical excision. 
Pseudocyst formation might be prevented by the application of sublay and minimally invasive techniques. abstract_id: PUBMED:38314159 An Algorithmic Approach for the MIS Repair of Ventral Midline Hernias Associated With Diastasis of the Rectus Abdominis Muscle. Purpose: We present our algorithmic approach for symptomatic ventral hernias with Diastasis of the Rectus Abdominis Muscle (DRAM). Methods: Retrospective analysis of patients with symptomatic ventral hernias and DRAM undergoing hernia repair and plication of DRAM from July 2018-March 2021 was conducted. Based on our algorithm, patients were selected for an Endoscopic Onlay Repair (ENDOR) or a Robotic Extended Totally Extraperitoneal Ventral Repair (R-eTEP). Results: We performed a R-eTEP in fifty-seven patients and an ENDOR in twenty-four patients. In the R-eTEP group, thirty-seven (65%) patients were female, the mean age was 54.8 (±10.6), and the mean BMI was 32 (±4.8). Fifty patients (87.7%) had multiple defects, of which 19 (38%) were recurrent hernias and 31 (62%) were incisional hernias. The mean operative time was 200 (±62.4) minutes, with two cases requiring a hybrid approach. The median length of stay was 1 day (0-12), and the median follow-up was 103 days. Twenty-four patients underwent an ENDOR, 19 females (79.2%), the mean age was 45.7 years (±11.7) and the mean BMI was 28 (±3.6). 13 patients had isolated umbilical or epigastric hernias. The mean operative time was 146.2 min (±51.1). Fibrin sealant and suture was the predominant method for mesh fixation, and most cases were performed in an ambulatory setting. Four patients developed post-operative seromas; one requiring drainage due to infection. The Median follow-up was 48.5 days (10-523), with two reported hernia recurrences. Conclusion: An algorithmic approach for adequate patient selection was shown to be safe for treating ventral hernias with DRAM. abstract_id: PUBMED:37195549 Comparative study of onlay versus sublay mesh repair in the management of ventral hernias. Ventral hernias with large defects (≥ 2 cm) are managed with tension free mesh repair. The growing consensus that sublay (retrorectus) mesh repair is superior to onlay mesh repair due to fewer complications is based on literature dominated by retrospective studies from high and upper-middle income countries. There is thus a need of more prospective studies from various countries to resolve the controversy. The objective of this study was to compare the outcomes of onlay versus sublay mesh repair in the management of ventral hernias. Our single centre, prospective comparative study set in a low-middle income country comprised of 60 patients with a ventral hernia undergoing open surgical repair using either onlay technique (n = 30) or sublay technique (n = 30). Surgical site infections, seroma formation, recurrence were found in 3.33%, 6.67%, 0% patients in sublay repair group and in 16.67%, 20%, 6.67% patients in onlay repair group respectively. Mean duration of surgery, mean Visual Analogue Scale (VAS) score for chronic pain, mean duration of hospital stay were 46 min, 4.5, 8 days in onlay repair group and 61 min, 4.2, 6 days in sublay repair group respectively. Onlay repair group was associated with shorter duration of surgery. However, sublay repair was associated with lesser rates of surgical site infections, chronic pain and recurrence than onlay repair. 
Sublay mesh repair had better outcomes than onlay mesh repair for the management of ventral hernias; however, the superiority of any one technique could not be established. Answer: Yes, drainage liquid characteristics can serve as predictors for seroma formation after incisional hernia repair. A study found that the pH value of wound solution is a reliable predictor for the subsequent presence of seroma formation. Additionally, the study observed significantly elevated mean drainage volume and significantly reduced pH value and IL-1-RA in the drainage liquid of patients with seroma formation compared to patients without seroma formation. There were also significantly altered concentrations of lactate, P-III-P, and fibronectin in the drainage liquid in comparison to seroma liquid (PUBMED:19998048). Another study supported these findings by showing that higher BMI, lowered preoperative serum concentration of total protein and albumin, and high serum concentration of IL-1-RA are related to an elevated risk for postoperative seroma formation (PUBMED:21061138). These studies suggest that certain characteristics of the drainage liquid, such as pH value, volume, and specific protein concentrations, can be indicative of the likelihood of seroma development following incisional hernia repair with mesh.
Instruction: Fracture toughness and microhardness of a composite: do they correlate? Abstracts: abstract_id: PUBMED:38022172 Examining the Impact of Preheating on the Fracture Toughness and Microhardness of Composite Resin: A Systematic Review. The objective of this comprehensive study was to systematically evaluate the effects of preheating on two critical mechanical properties, microhardness and fracture toughness, in resin composite materials. The overarching goal was to provide valuable insights into the potential benefits and limitations of this technique for enhancing the overall mechanical performance of such materials. To achieve this, an extensive and systematic electronic search was conducted across multiple reputable databases, including MEDLINE/PubMed, SCOPUS, ProQuest, SpringerLink, Web of Science, ScienceDirect, and Google Scholar, with data collection extending until June 2023. This rigorous search process resulted in the identification of 29 pertinent articles, which were subjected to a thorough risk of bias assessment employing the Quality Assessment Tool For In Vitro Studies (QUIN). The findings of this comprehensive investigation revealed several noteworthy trends. First, concerning microhardness, all the studies consistently demonstrated a positive effect of preheating on this mechanical property. This uniformity in results corroborates the initial hypothesis that preheating indeed enhances microhardness in resin composite materials. Second, with respect to fracture toughness, a majority of the studies provided evidence supporting the notion that preheating has a favorable influence on this particular mechanical property. This alignment of outcomes suggests that preheating can be a beneficial technique for improving fracture toughness in resin composites. However, it is essential to note that there were a few exceptions within the collected data, where preheating appeared to lead to a decrease in fracture toughness. Additionally, one study reported no statistically significant effect on fracture toughness. These deviations from the general trend highlight the complexity of the relationship between preheating and fracture toughness, indicating that other factors or material-specific nuances may come into play. In conclusion, the results of this study indicate that preheating resin composites can improve microhardness and fracture toughness, offering potential benefits for dental restorations. Yet, conflicting data warrants further research to uncover the reasons behind these discrepancies. Future studies should also investigate preheating's broader impact on composite resin materials to gain a comprehensive understanding of its applications and limitations in the field. abstract_id: PUBMED:30823655 Influence of Graphene Oxide on the Mechanical Properties, Fracture Toughness, and Microhardness of Recycled Concrete. There is a constant drive to improve the properties of recycled concrete owing to its inferior strength and fracture toughness compared to normal concrete and recent progress in graphene oxide (GO) nanomaterials impelling nanosized reinforcements to recycled concrete. Here, GO-modified natural sand (NS)- or recycled sand (RS)-based mortars (GONMs or GORMs) with six GO fractions (wGOs) were fabricated to explore their 28 d mechanical strengths (f28t, f28c), fracture toughness (KIC, δc), and microhardness (Hv), as well as their crystal phases (using X-ray powder diffraction) and microstructures (using scanning electronic microscopy). 
Results reveal that greater enhancements in mechanical strengths (4.50% and 10.61% in f28t, 4.76% and 13.87% in f28c), fracture toughness (16.49% and 38.17% in KIC, 160.14% and 286.59% in δc), and microhardness (21.02% and 52.70% in Hv) of GORM with just 0.025 wt‰ and 0.05 wt‰ GO, respectively, with respect to the control are achieved when comparing with those of GONM with the same wGO. More zigzag surfaces, more irregular weak interface slips, and the relatively lower strengths of RS bring the superiority of the template and reshaping effects of GO into full play in GORM rather than in GONM. These outcomes benefit a wide range of applications of recycled concrete products. abstract_id: PUBMED:35590294 Effect of erosive media on microhardness and fracture toughness of CAD-CAM dental materials. Background: Erosive acids might create surface flaws and deteriorate the mechanical properties of CAD-CAM materials. This in vitro study aimed to investigate the effect of simulated gastric HCl and extrinsic erosive acids on surface microhardness and fracture toughness of CAD-CAM materials. Methods: 400 bar-shaped specimens (17×4×2 mm3) were prepared from 4 different CAD-CAM dental materials (n = 100/group); monolithic zirconia (Ceramill Zolid HT+, Amanngirbach, Austria), lithium disilicate ceramic (IPS e.max CAD, Ivoclar Vivadent, Liechtenstein), nanohybrid resin composite (Grandio Blocs, VOCO) and polymer-infiltrated glass network (Vita Enamic, VITA Zahnfabrik). Specimens from each material type were further subdivided into 5 groups (n = 20) according to the erosive media applied (simulated gastric HCl, white wine, Coca-Cola®, orange juice, and artificial saliva that served as a control). Specimens were immersed for 24 h in an incubator at 37 ℃, then ultrasonically cleaned in distilled water and air-dried. Half of the specimens were tested for Vickers microhardness (VHN) at parameters of 500 gf for 10 s, while the rest of the specimens underwent 3-point flexure till fracture. Fractured surfaces were examined under a scanning electron microscope (SEM) for fracture toughness (KIC) calculation using the quantitative fractographic analysis method. Data collected were statistically analyzed using two-way analysis of variance (α = 0.05) after verification of data normality and homogeneity of variances. Results: Erosive media created surface flaws that lowered the surface microhardness of the material and initiated the fracture pattern under different loads according to material type. The material type was a more predominant factor than erosive media that affected either the microhardness or the fracture toughness of CAD-CAM dental materials. The highest VHN and KIC values were found among Ceramill Zolid HT+ groups followed by IPS e.max CAD and Grandio Blocs regardless of the erosive media employed. Erosive media significantly reduced the VHN and KIC in Vita Enamic specimens compared to the rest of the material types. Conclusion: All CAD-CAM materials used, except Vita Enamic, showed high resistance against the erosive acids indicating higher longevity of the material in patients frequently exposed to either extrinsic or intrinsic acid. abstract_id: PUBMED:29713431 Effects of incorporation of 2.5 and 5 wt% TiO2 nanotubes on fracture toughness, flexural strength, and microhardness of denture base poly methyl methacrylate (PMMA).
Purpose: The aim of this preliminary study was to investigate, for the first time, the effects of addition of titania nanotubes (n-TiO2) to poly methyl methacrylate (PMMA) on mechanical properties of PMMA denture base. Materials And Methods: TiO2 nanotubes were prepared using alkaline hydrothermal process. Obtained nanotubes were assessed using FESEM-EDX, XRD, and FT-IR. For 3 experiments of this study (fracture toughness, three-point bending flexural strength, and Vickers microhardness), 135 specimens were prepared according to ISO 20795-1:2013 (n of each experiment=45). For each experiment, PMMA was mixed with 0% (control), 2.5 wt%, and 5 wt% nanotubes. From each TiO2:PMMA ratio, 15 specimens were fabricated for each experiment. Effects of n-TiO2 addition on 3 mechanical properties were assessed using Pearson, ANOVA, and Tukey tests. Results: SEM images of n-TiO2 exhibited the presence of elongated tubular structures. The XRD pattern of synthesized n-TiO2 represented the anatase crystal phase of TiO2. Moderate to very strong significant positive correlations were observed between the concentration of n-TiO2 and each of the 3 physicomechanical properties of PMMA (Pearson's P value ≤.001, correlation coefficient ranging between 0.5 and 0.9). Flexural strength and hardness values of specimens modified with both 2.5 and 5 wt% n-TiO2 were significantly higher than those of control (P≤.001). Fracture toughness of samples reinforced with 5 wt% n-TiO2 (but not those of 2.5% n-TiO2) was higher than control (P=.002). Conclusion: Titania nanotubes were successfully introduced for the first time as a means of enhancing the hardness, flexural strength, and fracture toughness of denture base PMMA. abstract_id: PUBMED:16211564 Microhardness and fracture toughness of dental materials by indentation method. The main objective of this study was to measure the fracture toughness of the human teeth enamel using the microindentation technique and to compare the results calculated from the equations developed for Palmqvist and radial-median cracks. Vickers microhardness measurements of dental ceramic (alumina) and human teeth were performed using indentation fracture method. The fracture toughness of enamel was calculated using different equations reported in the literature. Vickers microhardness of the sintered alumina specimen (98.8% theoretical density) was measured to be 14.92 GPa under 9.8 N indentation load. Three equations based on the radial-median cracks were found to be applicable for the fracture toughness determination of the enamel. Results show that indentation fracture method is adequate to measure microhardness and fracture toughness of dental materials. However the calculation of fracture toughness depended on the nature of the cracks and also on the location of the indentation. Therefore, it is necessary to identify the crack profile and to select the appropriate equation for accurate fracture toughness values. abstract_id: PUBMED:32817869 Comprehensive data on the mechanical properties and biodegradation profile of polylactide composites developed for hard tissue repairs. Polylactide (PLA), a biopolymer, was reinforced with three fillers (two organic reinforcements and one inorganic filler). The processing technique used to fabricate the composites was the melt-blending technique. The composites and the unreinforced PLA were subjected to microhardness, compression and biodegradation characterisations. Data obtained are presented in this article as raw data. 
Data from microhardness and compression tests were used to predict the fracture toughness. The biodegradation of the composites was also examined, and the data obtained reported in this article. The data presented in this article allow for a comprehensive understanding of the mechanical behaviour and the biodegradation profile of three composites of PLA with respect to their applications as biodegradable implants. It also helps in the selection of fillers for biopolymers such as PLA. abstract_id: PUBMED:12654547 Fracture toughness and microhardness of a composite: do they correlate? Objectives: Chipping and bulk fracture are major contributors in clinical failures of composite restorations. Fracture toughness (K(Ic)) quantifies susceptibility for fracture, but experimental determination is complicated. It would be beneficial for the dental community if a relatively simple experiment, such as microhardness (HK), could be used to screen composites for fracture resistance. This study explores a possible correlation between K(Ic) and HK. Methods: Composite cylinders (4 mm diameter and approximately 7 mm long) were cured for five combinations of light intensity (I, mW/cm(2)) and curing time (T, s) to achieve a range of different total light energy densities (I x T=100 x 10, 100 x 20, 300 x 20, 300 x 40, and 700 x 60 mW s/cm(2)). A chevron-notch was cut in the median plane of the cylinders for the fracture toughness test, which was executed in a displacement control mode at 6 μm/s cross-head speed (sample size 4). Knoop hardness was determined at the median plane of the cylinders (sample size 6). The tests were performed 15 min and 24h after curing. Results: Both the K(Ic) and HK increased with increased light energy density and storage time. Linear regression analysis indicated a strong correlation between HK and K(Ic) tested at the same time period (R(2)=0.97 and 0.90 for 15 min and 24h, respectively). The correlation became weaker between the different storage times (R(2)=0.71), indicating a change in fracture toughness and/or microhardness mechanisms. Conclusion: Fracture toughness of a composite cannot be simply extrapolated from microhardness. abstract_id: PUBMED:19486457 Effect of artificial saliva storage on microhardness and fracture toughness of a hydrothermal glass-ceramic. Purpose: This study evaluated the effect of artificial saliva storage on the hardness, crack length, and fracture toughness of a glazed, polished, and bleached hydrothermal low-fusing glass-ceramic (Duceram LFC). Materials And Methods: Forty ceramic discs were constructed. The discs were assigned to four groups (n = 10) according to their surface finish: Gp1 -- Autoglaze, Gp2 -- Autoglaze/ground/diamond-polished, Gp3 -- Overglaze, Gp4 -- Overglaze/ground/diamond-polished. Each group was further divided into two subgroups forming eight total subgroups (n = 5). Subgroup A was unbleached; Subgroup B was bleached. Testing was performed before and after 21 days of artificial saliva storage. Data were presented as means and standard deviation (SD). ANOVA was used, along with Duncan's post hoc test for pairwise comparison between the means when ANOVA test was found significant (p ≤ 0.05). Results: Surface treatments such as glazing, polishing, and bleaching, saliva storage, and the interaction between these variables had a statistically significant effect on mean values of microhardness, crack length, and fracture toughness of the specimens.
There was a statistically significant increase in microhardness and fracture toughness mean values, while crack length values decreased after saliva storage. Polished specimens recorded the smallest crack lengths and fracture toughness, and highest hardness values before and after saliva storage. No difference in fracture toughness values was evident between glazed and polished specimens. Mean crack lengths decreased after saliva storage in all the tested specimens. Hardness values increased after saliva storage. The autoglazed group showed significantly higher fracture toughness, lower crack length, and microhardness than the overglazed group. Conclusions: Surface finishing procedures and artificial saliva storage had a statistically significant effect on mean values of microhardness, crack length, and fracture toughness. This in vitro study suggests that fracture toughness of ceramics may be affected by different surface treatments such as glazing, polishing, bleaching, or a combination; however, in this study Duceram LFC proved its self-healing property after 3-week storage in artificial saliva. abstract_id: PUBMED:35454402 The Effect of Sintering Temperature on the Phase Composition, Microstructure, and Mechanical Properties of Yttria-Stabilized Zirconia. It is known that the yttria-stabilized zirconia (YSZ) material has superior thermal, mechanical, and electrical properties. This material is used for manufacturing products and components of air heaters, hydrogen reformers, cracking furnaces, fired heaters, etc. This work is aimed at searching for the optimal sintering mode of YSZ ceramics that provides a high crack growth resistance. Beam specimens of ZrO2 ceramics doped with 6, 7, and 8 mol% Y2O3 (hereinafter: 6YSZ, 7YSZ, and 8YSZ) were prepared using a conventional sintering technique. Four sintering temperatures (1450 °C, 1500 °C, 1550 °C, and 1600 °C) were used for the 6YSZ series and two sintering temperatures (1550 °C and 1600 °C) were used for the 7YSZ and 8YSZ series. The series of sintered specimens were ground and polished to reach a good surface quality. Several mechanical tests of the materials were performed, namely, the microhardness test, fracture toughness test by the indentation method, and single-edge notch beam (SENB) test under three-point bending. Based on XRD analysis, the phase balance (percentages of tetragonal, cubic, and monoclinic ZrO2 phases) of each composition was substantiated. The morphology of the fracture surfaces of specimens after both the fracture toughness tests was studied in relation to the mechanical behavior of the specimens and the microstructure of corresponding materials. SEM-EDX analysis was used for microstructural characterization. It was found that both the yttria percentage and sintering temperature affect the mechanical behavior of the ceramics. Optimal chemical composition and sintering temperature were determined for the studied series of ceramics. The maximum transformation toughening effect was revealed for ZrO2-6 mol% Y2O3 ceramics during indentation. However, in the case of a SENB test, the maximum transformation toughening effect in the crack tip vicinity was found in ZrO2-7 mol% Y2O3 ceramics. The conditions for obtaining YSZ ceramics with high fracture toughness are discussed. abstract_id: PUBMED:30842794 Effect of Delayed Light-Curing Through a Zirconia Disc on Microhardness and Fracture Toughness of Two Types of Dual-Cure Cement. 
Objectives: Photopolymerization immediately sets dual-cure cements and prevents the continuation of chemical polymerization. Delayed light-curing allows the chemical process to continue up to the point before starting irradiation; however, there is a controversy in this respect. The present study evaluates the effect of delayed light-curing through a zirconia disc on the microhardness and fracture toughness (KIC) of two types of dual-cure cement. Materials And Methods: Samples measuring 25×5×3 mm3 were prepared for fracture toughness test, and discs measuring 5 mm in diameter and 3 mm in thickness were prepared for microhardness test using Bifix and BisCem cements. Light-curing protocols were as follows: immediate light-curing (group A), a 2-minute delay (group B), a 5-minute delay (group C), direct irradiation (group D), and no irradiation (group E). In groups A to C, light-curing was carried out through a zirconia disc. Data were analyzed by two-way and one-way analysis of variance (ANOVA), post-hoc Tukey's test, and Kruskal-Wallis test at 95% confidence interval. Results: There was a significant difference in the microhardness of the cements (P=0.00). Delayed light-curing had no effect on microhardness (P=0.080). The microhardness of BisCem in group E was significantly lower than that in group D (P=0.015). The fracture toughness of Bifix in groups B and C was significantly different than that in group E and BisCem groups. Conclusions: Under the limitations of our study, delayed light-curing had different effects on microhardness and fracture toughness. Differences in light-curing protocols resulted in different effects based on the cement type. Light-curing is recommended to achieve optimal mechanical properties. Answer: The relationship between fracture toughness and microhardness of a composite is complex and not straightforward. While some studies have found a correlation between the two properties, others have indicated that they do not necessarily correlate directly. For instance, one study explored the possible correlation between fracture toughness (K(Ic)) and microhardness (HK) in a composite material. The study found that both K(Ic) and HK increased with increased light energy density and storage time. Linear regression analysis indicated a strong correlation between HK and K(Ic) tested at the same time period. However, the correlation became weaker between different storage times, suggesting that the mechanisms underlying fracture toughness and microhardness may change over time (PUBMED:12654547). Another study that examined the effects of preheating on the fracture toughness and microhardness of composite resin found that preheating generally improved microhardness and had a favorable influence on fracture toughness. However, there were exceptions where preheating led to a decrease in fracture toughness or had no statistically significant effect, indicating that other factors or material-specific nuances may influence the relationship between these two properties (PUBMED:38022172). Similarly, the addition of titania nanotubes to poly methyl methacrylate (PMMA) was found to enhance both the hardness and fracture toughness of the denture base material, suggesting a positive correlation between these properties in this specific context (PUBMED:29713431). 
On the other hand, a study on the effect of artificial saliva storage on a hydrothermal glass-ceramic found that surface treatments and storage had a significant effect on microhardness, crack length, and fracture toughness, indicating that the relationship between these properties can be affected by external factors such as surface treatments and environmental conditions (PUBMED:19486457). In conclusion, while there may be instances where fracture toughness and microhardness of a composite correlate, this relationship is not consistent across all materials and conditions. Factors such as material composition, surface treatments, environmental conditions, and the specific methods used to measure these properties can influence the correlation between fracture toughness and microhardness (PUBMED:12654547; PUBMED:38022172; PUBMED:29713431; PUBMED:19486457). Further research is needed to fully understand the complexities of this relationship.
Instruction: Is whole-body hydration an important consideration in dry eye? Abstracts: abstract_id: PUBMED:22952120 Is whole-body hydration an important consideration in dry eye? Purpose: To identify if whole-body hydration plays an important role in dry eye (DE). We hypothesized that individuals classified as DE have higher plasma osmolality (Posm), indicating suboptimal hydration, compared with those classified as non-DE. Methods: Using a hospital-based observational cross-sectional design, assessment of DE and hydration was performed upon admission in 111 participants (N = 56 males and 55 females; mean ± SD age 77 ± 8 years). Assessments of DE included tear osmolarity (Tosm), the 5-item dry eye questionnaire (DEQ-5), rating of eye dryness using a visual analogue scale (VAS), and noninvasive tear film breakup time (NITBUT). Hydration assessment was performed by measuring Posm using freezing-point depression osmometry. Results: Posm was higher in DE than control (CON), indicating suboptimal hydration when using the 316 mOsm/L Tosm cutoff for DE (mean Posm + 11 mOsm/kg versus CON, P = 0.004, Cohen's effect size [d] = 0.83) and the more conservative Tosm classification for DE where Tosm >324 and CON <308 mOsm/L (mean Posm + 12 mOsm/kg versus CON, P = 0.006, d = 0.94). Posm was also higher in DE than CON when using composite DE assessments, including Tosm and DEQ-5 (P = 0.021, d = 1.07); Tosm and NITBUT (P = 0.013, d = 1.08); and the VAS and DEQ-5 (P = 0.034, d = 0.58). Conclusions: These are the first published data to show that individuals classified as DE have higher Posm, indicating suboptimal hydration, compared with non-DE. These findings indicate that whole-body hydration is an important consideration in DE. abstract_id: PUBMED:25950246 Hydration, fluid regulation and the eye: in health and disease. Variation in systemic hydration status, namely chronic systemic hypohydration or dehydration, can influence the development of several chronic non-ophthalmic diseases. Owing to the eye's high water content and unique system of fluid regulation, we hypothesized that hydration status may affect the eye in health and disease states. Therefore, we performed a systematic review of the current evidence implicating changes in hydration and their association with ocular physiology and morphological characteristics. We also reviewed relevant clinical correlations of changes in hydration and major common eye diseases. Our findings suggest that systemic hydration status broadly affects a variety of ocular pathophysiologic processes and disease states. For example, dehydration may be associated with development of dry eye syndrome, cataract, refractive changes and retinal vascular disease. On the other hand, excessive hydration is associated with some ocular diseases. Tear fluid osmolarity may be an effective marker of systemic hydration status. Recent studies implicate chronic renin-angiotensin-aldosterone system activation in the pathogenesis of diabetic retinopathy and glaucoma but also suggest its antagonism may be a useful therapeutic target. Our findings indicate that assessment of hydration status may be an important consideration in the management of patients with chronic eye diseases and undergoing eye surgery. Further research investigating the role of acute and chronic changes in hydration in individuals with and without ocular disease is warranted. abstract_id: PUBMED:23471906 Author response: is whole body hydration an important consideration in dry eye?
N/A abstract_id: PUBMED:29476817 Basal Tear Osmolarity as a metric to estimate body hydration and dry eye severity. The osmolarities of various bodily fluids, including tears, saliva and urine, have been used as indices of plasma osmolality, a measure of body hydration, while tear osmolarity is used routinely in dry eye diagnosis, the degree of tear hyperosmolarity providing an index of disease severity. Systemic dehydration, due to inadequate water intake or excessive water loss is common in the elderly population, has a high morbidity and may cause loss of life. Its diagnosis is often overlooked and there is a need to develop a simple, bedside test to detect dehydration in this population. We hypothesize that, in the absence of tear evaporation and with continued secretion, mixing and drainage of tears, tear osmolarity falls to a basal level that is closer to that of the plasma than that of a tear sample taken in open eye conditions. We term this value the Basal Tear Osmolarity (BTO) and propose that it may be measured in tear samples immediately after a period of evaporative suppression. This value will be particular to an individual and since plasma osmolarity is controlled within narrow limits, it is predicted that it will be stable and have a small variance. It is proposed that the BTO, measured immediately after a defined period of eye closure, can provide a new metric in the diagnosis of systemic dehydration and a yardstick against which to gauge the severity of dry eye disease. abstract_id: PUBMED:32800702 Fluorescent contact lens for continuous non-invasive measurements of sodium and chloride ion concentrations in tears. Rapid and non-invasive measurement of hydration status is medically important because even mild levels of dehydration can have a significant impact on physical and cognitive performance. Despite the potential value of determining whole-body hydration based on the electrolytes found in tears, very few tests are available. An area of intense interest is the development of a contact lens which could measure ion concentrations in tears, specifically that of sodium (Na+) and chloride (Cl-) ions, the dominant electrolytes in blood plasma and tears. Here, we describe a method to make fluorescent contact lenses which allow determination of Na+ and Cl- ion concentrations in tears. Fluorophores known to be sensitive to Na+ and Cl- were derivatized to bind non-covalently to two commercially-available silicone hydrogel (SiHG) contact lenses-the Biofinity (Comfilcon A) or MyDay (Stenfilcon A) lenses. The sodium- and chloride-sensitive fluorophores displayed spectral changes in the physiological range for Na+ and Cl- ions in tears. The lenses for both Na+ and Cl- ions were completely reversible. The sodium responses were not sensitive to protein interference including human lysozyme, human serum albumin and mucin type 2. The chloride sensitivity was similar with both lenses, but the sodium-sensitive range was different in the Biofinity and MyDay lenses. We also fabricated a lens with both the Na+ and Cl- probes in a single MyDay lens resulting in a contact lens that independently measured Na+ and Cl- concentrations without physical separation of the fluorophores. Our findings indicated that a sodium and chloride-sensitive contact lens (NaCl-lens) could be used for rapid non-invasive detection of whole-body hydration, as well as associated diseases or other infections. 
abstract_id: PUBMED:36915855 Quality of Life in Digital Device Users Who are Treated with Systane Hydration PF. Purpose: To understand the impact of Systane Hydration PF on dryness symptoms and quality of life in digital device users and to determine if participants prefer either the unit-dose or multi-dose dispensing system of Systane Hydration PF. Materials And Methods: This 2-week, three visit study recruited regular digital device users. Participants were required to score ≤80 on the Impact of Dry Eye on Everyday Life (IDEEL) Quality of Life (QoL) Work domain and between 13 and 32 on the Ocular Surface Disease Index (OSDI) questionnaire. Participants were randomized to either Systane Hydration PF unit-dose or multi-dose for 1 week and switched to the alternative dosing system for the second week. Participants were evaluated by completing the full IDEEL-QoL module and OSDI questionnaire at each visit. Likert surveys were completed to probe dispensing system preferences. Results: Thirty participants with a mean ± SD age of 28.6 ± 12.0 years (70% female) were recruited. Participants had significant improvements in all three IDEEL-QoL domains as well as in OSDI scores (all p < 0.0001). Participants had similar preferences for the two dispensing systems, though they were more likely to indicate that they thought that the multi-dose bottle was more environmentally friendly than the unit-dose vials. Conclusion: Digital device users with dry eye symptoms had meaningful improvements in eye comfort and quality of life scores after being treated with Systane Hydration PF for 2 weeks. Participants did not have a clear dispensing system preference, suggesting that the best dispensing system may depend on the patient. abstract_id: PUBMED:23471907 Tear osmolality and whole-body hydration. N/A abstract_id: PUBMED:21849890 Influence of modest changes in whole-body hydration on tear fluid osmolarity: important considerations for dry eye disease detection. N/A abstract_id: PUBMED:24959648 Full body hydration and dry eye N/A abstract_id: PUBMED:28708816 Determination of the Minimal Clinically Important Difference of the University of North Carolina Dry Eye Management Scale. Purpose: To establish an initial estimation of the MCID of the University of North Carolina Dry Eye Management Scale (UNC DEMS) and assess its association with patient perceptions of symptom change. Methods: Thirty-three patients (33.3% men, 67.7% women, mean age 60.5 yrs) with previous DEMS scores were recruited from a UNC ophthalmology clinic in spring 2014. We used anchor-based methods, categorizing important symptom change, to compare the change in the DEMS scores across visits to patient assessments of change; linear regression coefficients estimated the MCID. We correlated clinical assessments, patient perceptions, and DEMS scores. Results: DEMS score changes correlated with global anchors [-0.4229 (P = 0.014)]. Unadjusted linear regression yielded a beta coefficient of -0.54 (confidence interval, -0.97 to -0.12, R = 0.18, P = 0.014), which estimated the DEMS MCID. Adjusting the regression model for days since the last visit and DEMS score improved the association (beta = -0.56; confidence interval, -0.99 to -0.13; R = 0.43; P = 0.013). Descriptive statistics produced an MCID of 1 point. Patients said that 2 points would represent a significant change. The DEMS modestly correlated with the Schirmer test (-0.4045, P = 0.0266), Oxford Grading Scheme (+0.3713, P = 0.0364), and tear breakup time (-0.3559, P = 0.0456).
Conclusions: The UNC DEMS is a valid, responsive patient-reported outcome measure instrument, which is easy to use in the clinic and capable of showing an MCID of 1 point. Answer: Yes, whole-body hydration is an important consideration in dry eye. A study identified that individuals classified as having dry eye (DE) had higher plasma osmolality (Posm), indicating suboptimal hydration, compared with those classified as non-DE. This was determined using various assessments of dry eye, including tear osmolarity (Tosm), the 5-item dry eye questionnaire (DEQ-5), rating of eye dryness using a visual analogue scale (VAS), and noninvasive tear film breakup time (NITBUT). The findings suggest that whole-body hydration status is an important factor in dry eye (PUBMED:22952120). Additionally, a systematic review of current evidence found that systemic hydration status broadly affects a variety of ocular pathophysiological processes and disease states, including dry eye syndrome, cataract, refractive changes, and retinal vascular disease. The review concluded that assessment of hydration status may be an important consideration in the management of patients with chronic eye diseases and those undergoing eye surgery (PUBMED:25950246). Furthermore, it has been hypothesized that Basal Tear Osmolarity (BTO) could provide a new metric in the diagnosis of systemic dehydration and a yardstick against which to gauge the severity of dry eye disease. BTO is proposed to be measured in tear samples immediately after a period of evaporative suppression, which may reflect plasma osmolarity more closely than tear samples taken in open eye conditions (PUBMED:29476817). In summary, the evidence indicates that whole-body hydration is indeed an important consideration in the context of dry eye, affecting both the diagnosis and management of the condition.
Instruction: Lower mini-sternotomy in congenital heart disease: just a cosmetic improvement? Abstracts: abstract_id: PUBMED:26093954 Lower mini-sternotomy in congenital heart disease: just a cosmetic improvement? Objectives: Lower mini-sternotomy represents a minimally invasive surgical technique that has been utilized for the repair of a wide variety of congenital heart defects with excellent surgical and cosmetic outcomes. However, clinical improvements provided for this technique beyond cosmetic results are controversial. The aim of our study is to report our results with lower mini-sternotomy for the repair of congenital heart malformations and compare them with a matched group with a full median sternotomy approach. Methods: From 2010 through 2013, 105 consecutive congenital patients (81 paediatric) underwent lower mini-sternotomy at our centre (Group 1). We analysed in-hospital and follow-up outcomes, and compare them with an age-sex-diagnosis-type of surgery-matched group (Group 2). Both groups were managed following the same clinical protocols. Results: In Group 1, age at the time of surgery was 12 ± 17 years (range from 0.2 to 64.6 years). In this group, 81 patients were paediatric and 62 were female. Operative techniques were atrial septal defect (n = 72), ventricular septal defect (n = 24) and atrioventricular canal repairs (n = 9). There were no deaths or major in-hospital complications. Two adult patients required conversion to full median sternotomy. For a medium follow-up of 1.5 years (range from 1 month to 5 years), there were no deaths, reinterventions or reoperations and no significant residual defects were found. Compared with Group 2, patients in Group 1 had longer cardiopulmonary bypass times (58.71 ± 19.08 vs 45.39 ± 20.45, P < 0.001) and cross-clamp times (32.75 ± 13.11 vs 23.22 ± 13.93, P < 0.001), higher rate of early extubation (96 vs 85%, P = 0.018) and lower rate of postoperative complications (11.6 vs 22.3%, P = 0.034). Conclusions: At our centre, lower mini-sternotomy represents a safe alternative for the repair of congenital heart defects in paediatric and adult populations. Cardiopulmonary bypass and cross-clamp times were longer in the mini-sternotomy group. However, these patients showed earlier extubation and less postoperative complications when compared with patients with a full sternotomy approach. Combined with improved cosmetic outcomes, lower mini-sternotomy could represent the technique of choice for these populations. abstract_id: PUBMED:32789105 Accelerating Improvement in Children's Healthcare Through Quality Improvement Collaboratives: A Synthesis of Recent Efforts. Purpose Of Review: Quality improvement collaboratives can accelerate quality improvement and patient safety efforts. We reviewed major pediatric quality improvement collaboratives that have published results in the past five years and discussed common success factors and barriers encountered by these collaboratives. Recent Findings: Many pediatric quality improvement collaboratives are active in neonatal, cystic fibrosis, congenital heart disease, hematology/oncology, chronic kidney disease, rheumatology, critical care, and general pediatric care.
Summary: Factors important to the success of these pediatric quality improvement collaboratives include data sharing and communication, trust among institutions, financial support, support from national organizations, use of a theoretical framework to guide collaboration, patient and family involvement, and incentives for participation at both the individual and institutional levels. Common barriers encountered by these collaboratives include insufficient funding or resources, legal concerns, difficulty coming to consensus on best practices and outcome measures, and overcoming cultural barriers to change. Learning from the successes and challenges encountered by these collaboratives will enable the pediatric healthcare quality improvement community to continue to evolve this approach to maximize benefits to children. abstract_id: PUBMED:29198264 International quality improvement initiatives. Across the globe, the implementation of quality improvement science and collaborative learning has positively affected the care and outcomes for children born with CHD. These efforts have advanced the collective expertise and performance of inter-professional healthcare teams. In this review, we highlight selected quality improvement initiatives and strategies impacting the field of cardiovascular care and describe implications for future practice and research. The continued leveraging of technology, commitment to data transparency, focus on team-based practice, and recognition of cultural norms and preferences ensure the success of sustainable models of global collaboration. abstract_id: PUBMED:31784844 Congenital Heart Disease in Low- and Lower-Middle-Income Countries: Current Status and New Opportunities. Purpose Of Review: The paper summarises the most recent data on congenital heart disease (CHD) in low- and lower-middle-income countries (LLMICs). In addition, we present an approach to diagnosis, management and interventions in these regions and present innovations, research priorities and opportunities to improve outcomes and develop new programs. Recent Findings: The reported birth prevalence of CHD in LLMICs is increasing, with clear evidence of the impact of surgical intervention on the burden of disease. New methods of teaching and training are demonstrating improved outcomes. Local capacity building remains the key. There is a significant gap in epidemiological and outcomes data in CHD in LLMICs. Although the global agenda still does not address the needs of children with CHD adequately, regional initiatives are focusing on quality improvement and context-specific interventions. Future research should focus on epidemiology and the use of innovative thinking and partnerships to provide low-cost, high-impact solutions. abstract_id: PUBMED:28062603 Combining qualitative and quantitative operational research methods to inform quality improvement in pathways that span multiple settings. Background: Improving integration and continuity of care across sectors within resource constraints is a priority in many health systems. Qualitative operational research methods of problem structuring have been used to address quality improvement in services involving multiple sectors but not in combination with quantitative operational research methods that enable targeting of interventions according to patient risk. We aimed to combine these methods to augment and inform an improvement initiative concerning infants with congenital heart disease (CHD) whose complex care pathway spans multiple sectors. 
Methods: Soft systems methodology was used to consider systematically changes to services from the perspectives of community, primary, secondary and tertiary care professionals and a patient group, incorporating relevant evidence. Classification and regression tree (CART) analysis of national audit datasets was conducted along with data visualisation designed to inform service improvement within the context of limited resources. Results: A 'Rich Picture' was developed capturing the main features of services for infants with CHD pertinent to service improvement. This was used, along with a graphical summary of the CART analysis, to guide discussions about targeting interventions at specific patient risk groups. Agreement was reached across representatives of relevant health professions and patients on a coherent set of targeted recommendations for quality improvement. These fed into national decisions about service provision and commissioning. Conclusions: When tackling complex problems in service provision across multiple settings, it is important to acknowledge and work with multiple perspectives systematically and to consider targeting service improvements in response to confined resources. Our research demonstrates that applying a combination of qualitative and quantitative operational research methods is one approach to doing so that warrants further consideration. abstract_id: PUBMED:24268088 Pediatric collaborative networks for quality improvement and research. Despite efforts of individual clinicians, pediatric practices, and institutions to remedy continuing deficiencies in pediatric safety and health care quality, multiple gaps and disparities exist. Most pediatric diseases are rare; thus, few practices or centers care for sufficient numbers of children, particularly in subspecialties, to achieve large and representative sample sizes, and substantial between-site variation in care and outcomes persists. Pediatric collaborative improvement networks are multi-site clinical networks that allow practice-based teams to learn from one another, test changes to improve quality, and use their collective experience and data to understand, implement, and spread what works in practice. The model was initially developed in 2002 by an American Board of Pediatrics Workgroup to accelerate the translation of evidence into practice, improve care and outcomes for children, and to serve as the gold standard for the performance in practice component of Maintenance of Certification requirements. Many features of an improvement network derive from the Institute for Healthcare Improvement's collaborative improvement model Breakthrough Series, including focus on a high-impact condition or topic; providing support from clinical content and quality improvement experts; using the Model for Improvement to set aims, use data for feedback, and test changes iteratively; providing infrastructure support for data collection, analysis and reporting, and quality improvement coaching; activities to enhance collaboration; and participation of multidisciplinary teams from multiple sites. In addition, they typically include a population registry of the children receiving care for the improvement topic of interest. These registries provide large and representative study samples with high-quality data that can be used to generate information and evidence, as well as to inform clinical decision making. 
In addition to quality improvement, networks serve as large-scale health system laboratories, providing the social, scientific, and technical infrastructure and data for multiple types of research. Statewide, regional, and national pediatric collaborative networks have demonstrated improvements in primary care practice as well as care for chronic pediatric diseases (eg, asthma, cystic fibrosis, inflammatory bowel disease, congenital heart disease), perinatal care, and patient safety (eg, central line-associated blood stream infections, adverse medication events, surgical site infections); many have documented improved outcomes. Challenges to spreading the improvement network model exist, including the need for the identification of stable funding sources. However, these barriers can be overcome, allowing the benefits of improved care and outcomes to spread to additional clinical and safety topics and care processes for the nation's children. abstract_id: PUBMED:34791116 Quality improvement initiative in paediatric echocardiography laboratory in a low- to middle-income country. Aims: To determine the impact of a quality improvement (QI) initiative in the area of paediatric echocardiography (echo) in a low- to middle-income country (LMIC). Care for patients with congenital heart disease is challenging, especially in LMICs. Collaborative learning through QI projects is imperative to ensure improvement in delivery processes leading to better patient outcomes. Methods And Results: This QI initiative was taken by a team consisting of physicians and sonographers. Problems were identified, a key driver diagram (KDD) was created, and simple process re-engineering was done using interventions based on the KDD. Metrics (five process and one outcome) were assessed to determine the effectiveness of the QI project. The process metrics assessed were comprehensiveness of exam, timeliness of reporting, diagnostic accuracy and error, and sedation adverse event rates of transthoracic echocardiograms, while a novel comprehensive echo laboratory (lab) quality score was developed as an outcome metric. Data were collected quarterly and analysed in the post-implementation phase. Significant improvement was seen in comprehensive mean score (20.4-29.7), timeliness (40-95%), and diagnostic accuracy rate (91-100%), while a decrease was seen in the diagnostic error rate (7.5-3.5%) and the sedation adverse event rate (6.8-0%), pre- vs. post-implementation. The overall quality outcome score improved from 7 to 19 and the echo lab was able to achieve adequate quality. Conclusion: This QI initiative produced improvement in all the processes, and the overall quality of the echo lab without any substantial increase in resources or cost. abstract_id: PUBMED:29350246 Data-Driven Quality Improvement Project to Increase the Value of the Congenital Echocardiographic Report. Echocardiography is the primary diagnostic modality for congenital heart disease patients. The written report is used to communicate with the care team, and its organization is often divided into the body with detailed findings and the conclusions with important findings summarized. Strategies to increase workflow efficiency include batch writing of reports after performance of multiple echocardiograms and the use of report templates, which may contribute to discrepancies within the report leading to potential downstream medical errors.
The aim of this project was to measure the rate of inconsistencies in the echocardiogram reports and, through an iterative series of process improvements, to decrease this rate while maintaining sonographer efficiency and diagnostic accuracy. The discrepancy rate, diagnostic error rate, and sonographer productivity were collected for one year prior to and during the iterative quality improvement process. The primary outcome and discrepancies in reports were determined by two reviewers: an experienced pediatric echocardiographic cardiologist and a senior sonographer. Minor discrepancies were defined as contradictions between the body and the conclusion of the report that were unlikely to affect patient care. Major discrepancies were defined as discrepancies between the body and the conclusion that had significant potential to affect patient care. Sonographer productivity was measured as studies per sonographer per month. Our primary intervention was to initiate a quarterly QI meeting and to decrease the batch writing of preliminary echocardiogram reports. No major discrepancies were identified pre- or post-intervention. The minor discrepancies decreased from 40.7% to 6%. Sonographer productivity was not significantly changed, with a slight increase from 100 studies/sonographer/month during the baseline to 101 studies/sonographer/month during the intervention. There was no change in major or minor diagnostic error rate. Our quality improvement intervention increased the value of our reports by significantly decreasing minor discrepancies without negatively impacting sonographer productivity or diagnostic accuracy. abstract_id: PUBMED:23804476 Outcomes analysis and quality improvement for the treatment of patients with pediatric and congenital cardiac disease. Tremendous progress has been made in the science of assessing the outcomes of the treatments of patients with pediatric and congenital cardiac disease. Multi-institutional databases have been developed that span subspecialty, geographic, and temporal boundaries. Linking of different databases enables additional analyses not possible using the individual data sets alone and can facilitate quality improvement initiatives. Measures of quality can be developed, in the domains of structure, process, and outcome, which can facilitate quality improvement. Parents are an integral part of the health care team and are key partners with regard to quality improvement. The role of the parent in the process of health care delivery can be facilitated by enhancing the organizational culture and creating methods of transparency, empowering parents, and implementing effective strategies of communication. The professionals caring for patients with pediatric and congenital cardiac disease, in collaboration with the patients and their families, now have the opportunity to capitalize on the power of our databases and move beyond outcome assessment and benchmarking, to collaborative quality improvement. abstract_id: PUBMED:31663842 Quality Improvement Basics: A Crash Course for Pediatric Cardiac Care. Background: Lack of knowledge of quality improvement (QI) methodology and change management principles can explain many of the difficulties encountered when trying to develop effective QI initiatives in health care. Methods: An interactive QI workshop at the 14th Annual Meeting of the Pediatric Cardiac Intensive Care Society provided an overview of the role of QI in health care, basic QI frameworks and tools, and leadership and organizational culture pitfalls.
The top five QI projects submitted to the meeting were later presented to an expert QI panel in a separate session to illustrate examples of QI principles. Results: Workshop presenters introduced two major QI methodologies used to design QI projects. Important first steps include identifying a problem, forming a multidisciplinary team, and developing an aim statement. Key driver diagrams were highlighted as an important tool to develop a project's framework. Several diagnostic tools used to understand the problem were discussed, including the "5 Why's," cause-and-effect charts, and process flowcharts. The importance of outcome, process, and balancing measures was emphasized. Identification of interventions, the value of plan-do-study-act cycles to fuel continuous QI, and use of statistical process control, including run charts or control charts, were reviewed. The importance of stakeholder engagement, transparency, and sustainability was discussed. Later, the top five QI projects presented highlighted multiple "QI done well" practices discussed during the preconference QI workshop. Conclusions: Understanding QI methodology and appropriately applying basic QI tools are pivotal steps to realizing meaningful and sustained improvement. Answer: Lower mini-sternotomy in congenital heart disease is not just a cosmetic improvement. According to the study reported in PUBMED:26093954, lower mini-sternotomy has been utilized for the repair of a variety of congenital heart defects with not only excellent cosmetic outcomes but also clinical improvements. The study compared the results of lower mini-sternotomy with a full median sternotomy approach in a group of 105 consecutive congenital patients. The findings showed that while cardiopulmonary bypass and cross-clamp times were longer in the mini-sternotomy group, these patients experienced earlier extubation and a lower rate of postoperative complications compared to the full sternotomy group. These results suggest that lower mini-sternotomy can be a safe alternative for the repair of congenital heart defects, offering benefits beyond cosmetic improvements, such as reduced postoperative complications and earlier extubation, which can contribute to a faster recovery process.
Instruction: Sudden infant death syndrome in South Australia 1968-97. Part 3: is bed sharing safe for infants? Abstracts: abstract_id: PUBMED:11115030 Sudden infant death syndrome in South Australia 1968-97. Part 3: is bed sharing safe for infants? Objective: To examine the risk of death for bed-sharing infants. Methodology: All unexpected infant deaths occurring in South Australia between 1970 and 1997, occurring after the infant was put to rest and diagnosed after death scene investigation and autopsy as sudden infant death syndrome, accidental death, or 'undetermined' were studied. Results: Accidents were the most likely cause of death for 5% of infants who died in designated infant containers (cots, cradles, etc.), 24% of those who were sharing a bed or couch, and 72% of those who were placed alone on a bed or couch. Conclusions: While bed sharing showed an increased risk of dying accidentally, when compared with infants sleeping in designated infant containers, the risk of accidental death in this study was even greater for infants left alone on adult beds or couches. abstract_id: PUBMED:11115029 Sudden infant death syndrome (SIDS) in South Australia 1968-97. Part 2: the epidemiology of non-prone and non-covered SIDS infants. Objective: To identify the risk factors for infants who die suddenly and unexpectedly, but whose deaths are not related to prone position, or having the head covered. Methodology: A case-control study was designed in which the cases were infants who had died of sudden infant death syndrome (SIDS) in South Australia between January 1974 and December 1997, who were found not prone, not bed sharing and with the head not covered. The controls were two infants for each case, born in the same year and found in the prone position (again not bed sharing and with the head not covered). Results: Sudden unexpected death in infancy is rare in non-prone infants with the head not covered, occurring on average twice a year in South Australia, where there are 18,000-21,000 births per year. In this group there was a higher percentage of infants with features associated with low socio-economic groups (teenage pregnancies and maternal smoking), sibling SIDS, suspicion of non-accidental injury and the presence of minor congenital anomalies, especially cardiac anomalies. Conclusions: The majority of unexpected deaths in infancy can be prevented by not allowing infants to be unobserved in prone position, and by preventing them from getting their faces covered. For the few infants not found in these positions, a careful investigation should be made for malformations or non-accidental injury. abstract_id: PUBMED:29299793 Bed-Sharing in the First 8 Weeks of Life: An Australian Study. Background As the evidence continues to emerge about the relationship between sudden unexpected death in infancy (SUDI) and the way an infant sleeps, providing consistent and evidence-informed recommendations on how best to sleep infants is an ongoing challenge. A recent case series study in the state of Victoria, Australia, identified that 45.8% of sleep-related infant deaths occurred whilst bed-sharing. This study prompted the need for further exploration of infant sleeping practices, including bed-sharing, in this population. Methods A cross-sectional survey of 2745 mothers attending the Maternal and Child Health (MCH) Service across Victoria, Australia was conducted. Data included the prevalence and circumstances of bed-sharing, family demographics, and SUDI risk and protective factors.
Associations between bed-sharing and SUDI risk and protective factors were examined using univariate and multivariate analyses. Results Bed-sharing prevalence was found to be 44.7%, with 21.5% reporting that this was intended. Multivariate analyses showed bed-sharing was less likely amongst those with an annual household income above $AUS104,000 (OR 0.72; 95% CI 0.54-0.96) and more likely amongst mothers who breastfed (OR 1.71; 95% CI 1.23-2.37). Conclusions Bed-sharing prevalence in this population compares closely with the Victorian case series study and a previous cross-sectional study in the state of Queensland, Australia, in 2002. Noted gaps in how families are implementing current recommendations about reducing the risk of SUDI were identified for sleep position, sleep location and the sleep environment. Further consideration needs to be given to addressing these gaps and applying these findings of current bed-sharing practices to the development of infant safe sleeping policy and programs. abstract_id: PUBMED:11115028 Sudden infant death syndrome in South Australia 1968-97. Part I: changes over time. Objective: To compare the epidemiology of sudden infant death syndrome (SIDS) over three consecutive decades. Methodology: The birth history, infant's developmental and health history, infant care practices for the infant, death scene investigation and autopsy findings for all infants dying suddenly and unexpectedly in South Australia (SA) between January 1968 and December 1997 were studied. Results: The incidence of SIDS in SA rose through the 1970s and early 1980s with the highest incidence being in infants born in 1986 at 2.4 per 1000 live births (LB). Two factors felt to be dangerous for some infants were identified: being left unobserved in the prone position and having the head covered by bed clothes. Publicity about the risk of prone sleeping has been accompanied by a fall in SIDS deaths, to an incidence of 0.5 per 1000 LB in 1997. The incidence in Aboriginal infants, and infants living in lower socio-economic conditions has always been high, but the over-representation of these groups has increased in the last 5 years. Conclusion: If no infant under 8 months of age was placed prone or was able to get to prone unobserved before the age when they can easily get back to supine, and no infant was able to get the head completely covered while unobserved, the incidence of SIDS in SA should fall below 0.2 per 1000 LB. abstract_id: PUBMED:34109935 'It looks like a breadbox': a pilot study investigating implementation of the Pepi-Pod® program with Aboriginal families in metropolitan South Australia. Aims: To collaboratively explore the cultural acceptance of the Pepi-Pod® program as an alternate safe sleep space and to explore the process of implementing the Pepi-Pod® program in a mainstream health service for Aboriginal families living in urban South Australia. Background: Aboriginal and Torres Strait Islander infants continue to die from sudden infant death syndrome (SIDS) and sudden unexpected death in infancy (SUDI) at rates three to four times greater than other infants born in Australia despite the Council of Australian Governments' commitment to halve the gap in the Indigenous infant mortality rate by 2018. The Pepi-Pod® program is evidenced in New Zealand and Queensland to provide a culturally appropriate safe sleep alternative that contributes to the reduction of SIDS and SUDI.
We have no evidence of acceptability or feasibility when offered through mainstream services in metropolitan South Australia. Methods: With a focus on decolonizing the research process through a two-way process for mutual learning between Aboriginal and non-Aboriginal team members and community, a novel qualitative design was employed including photo-elicited yarning sessions (n = 7), focus groups (n = 2), and field notes (n = 15). Results: Four themes emerged: 'you don't have to worry'; 'a way of sharing knowledge'; 'it looks like a bread box?' and 'need for consistent safe sleep messages'. The findings suggest that participants believe the Pepi-Pod® program may enrich Aboriginal families' lives evoking feelings of comfort and safety; however, the design could be improved to make them more culturally appropriate. There was confusion around safe sleep processes and education with a call for streamlining safe sleep messaging. abstract_id: PUBMED:28766090 What Do Pediatricians Tell Parents About Bed-Sharing? Background and objectives Despite the risks of bed-sharing, little is known about what pediatricians tell parents about bed-sharing with infants and whether pediatricians provide specific recommendations outlined by the American Academy of Pediatrics (AAP). This study aimed to understand pediatricians' opinions about bed-sharing and the advice pediatricians provide to parents about bed sharing. Methods The study employed a qualitative study design and the conceptual framework of the Theory of Planned Behavior. Twenty-four primary care pediatricians from a variety of practice settings were interviewed about the anticipatory guidance they provide to families whose infants are in the at-risk age group for SIDS. Results Pediatricians' opinions about bed-sharing differed widely both with respect to identifying bed-sharing as a topic they routinely address in anticipatory guidance as well as in what they tell parents about bed sharing. Some strongly and routinely advise against bed-sharing and identify bed-sharing as a clear risk to infants. Others believe bed-sharing to be both safe and useful. A third group allow the content of anticipatory guidance to be driven by parental concerns. Most pediatricians are clearer in their recommendation to place infants supine to sleep than in their recommendation to avoid bed-sharing. Conclusions Overall, there is considerable variation among pediatricians in the advice they provide about bed-sharing, and most advice is not congruent with the AAP recommendations. Additional efforts to educate pediatricians may be necessary to change attitudes and behaviors with respect to anticipatory guidance about safe sleep. abstract_id: PUBMED:25287799 Does β-APP staining of the brain in infant bed-sharing deaths differentiate these cases from sudden infant death syndrome? Archival cerebral tissue from infants whose deaths were attributed to sudden infant death syndrome (SIDS) from South Australia and Western Denmark was stained for β-amyloid precursor protein (β-APP) and graded according to a simple scoring chart. The resulting APP scores were correlated with sleeping situation (shared vs. alone), showing a significantly higher amount of β-APP staining in the non-bed-sharing than in the bed-sharing infants (Mann-Whitney, Australia: p = 0.0128, Denmark: p = 0.0014, Combined: p = 0.0031). There was also a marked but non-significant difference in sex distribution between bed-sharers and non-bed-sharers with a male to female ratio of 1:1 in the first group and 2:1 in the latter.
Of 48 Australian and 76 Danish SIDS infants, β-APP staining was present in 116 (94%) cases. The eight negative cases were all from the Danish cohort. This study has shown that the amount of β-APP staining was significantly higher in infants who were sleeping alone compared to those who were bed-sharing with one or more adults, in both an Australian and Danish cohort of SIDS infants. Whether this results from differences in the speed with which these infants die, differences in lethal mechanisms involving possible accidental asphyxiation in shared sleepers, or differences in the number of previous hypoxic-ischemic events, remains to be clarified. abstract_id: PUBMED:31465639 Physicians Knowledge and Practice of Safe Sleep Recommendations for Infants in South Dakota. Introduction: Safe sleep recommendations for infants have been evolving over the past three decades. It has been shown that physicians' recommendations strongly influence parents' choice of infant sleep position. However, the proportion of physicians and/or nurses giving infant sleep advice to parents is reportedly low. Methods: A survey was conducted in South Dakota to evaluate pediatricians' and family practitioners' knowledge of safe sleep recommendations for infants. Survey questions assessed their beliefs regarding risk factors for sudden infant death syndrome (SIDS) and their recommendations for safe sleep environments provided to parents/caregivers. Results: Among the respondents, 98 percent felt that it is important to discuss SIDS with every parent and 80 percent of them indicated a need within their profession to have further information on the topic of SIDS. However, 31 percent of physicians with 16 years or more since training and 64.5 percent of those with less than 16 years since training did provide safe sleep advice to parents and caregivers. Conclusions: Despite the knowledge of SIDS risk factors, gaps were seen in dissemination of information regarding all risk factors to parents. While sleep position, postnatal exposures and breast feeding were more likely to be addressed, other elements of a safe sleep environment such as bedding surface, bed sharing, pacifier use and room ventilation were less likely to be covered. These findings indicate the need for improved health care provider education/communication in South Dakota. abstract_id: PUBMED:31964354 Infant care practices and parent uptake of safe sleep messages: a cross-sectional survey in Queensland, Australia. Background: Globally, the incidence of sleep-related infant mortality declined dramatically following the first public health campaigns seen internationally in the 1990s to reduce the risks of sudden infant death. However, Australian Sudden Unexpected Death in Infancy (SUDI) rates have plateaued with little change in incidence since 2004 despite two further public health safe sleep campaigns. This study aims to describe contemporary infant care practices employed by families related to the current public health SUDI prevention program. Methods: A cross-sectional survey of 3341 Queensland primary caregivers with infants approximately 3-months of age was conducted using the Queensland Registry of Births, Deaths and Marriages as a sampling frame. Surveys were returned either via reply-paid mail or online. Questionnaires explored prevalence of infant care practices and awareness of safe sleep recommendations. Univariable analysis was used to generate descriptive statistics for key variables. 
Results: Overall, only 13% of families routinely practised all six 'Safe Sleeping' program messages. More than one third (1118, 34%) of infants had slept in a non-supine sleep position at some time. Potentially hazardous sleep environments were common, with 38% of infants sleeping with soft items or bulky bedding, or on soft surfaces. Nearly half, for either day- or night-time sleeps, were routinely placed in a sleep environment that was not designed or recommended for safe infant sleep (i.e. a bouncer, pram, beanbag). Most babies (84%) were reportedly smoke free before and after birth. Sleeping in the same room as their caregiver for night-time sleeps was usual practice for 75% of babies. Half (1600, 50%) of all babies shared a sleep surface in the last two weeks. At 8 weeks, 17% of infants were no longer receiving any breastmilk. Conclusions: The prevalence rates of infant care practices among this Australian population demonstrate many families continue to employ suboptimal practices despite Australia's current safe sleep campaign. Strategic approaches together with informed decisions about pertinent messages to feature within future public health campaigns and government policies are required so targeted support can be provided to families with young infants to aid the translation of safe sleep evidence into safe sleeping practices. abstract_id: PUBMED:30828590 Effect of Education and Cardboard Bassinet Distribution on Newborn Bed-Sharing. Objective. This study sought to determine if infant sleep education plus a cardboard bassinet reduced bed-sharing, a risk factor for sudden infant death syndrome (SIDS) and sleep-related deaths (SRD), in the first week of life. Methods. Women delivered between 1/1/2015-11/15/2016 were interviewed by phone within 72 hours of discharge. Control 1 delivered through 10/31/2015 (previously reported); Control 2: 11/1/2015-2/7/2016; Intervention 1 received inpatient safe sleep education and delivered between 2/8/2016-5/4/2016; Intervention 2 also received a cardboard bassinet and delivered after 5/4/2016. Subjects self-reported bed-sharing, newborn sleep position, feeding method, and sleep environment; demographic data was obtained from medical records. Bayesian methodology compared bed-sharing rates between aggregated control and intervention groups; results were expressed as posterior rates, rate ratios, and 95% credible intervals (CredInt); the posterior probability that the rate ratio was > 1 (Bayesian probability) was determined by calculation and simulation. Results. Of 5187 eligible subjects, 2763 (53%) completed the survey (Control 1: n=1264; Control 2: n=423; Intervention 1: n=391; Intervention 2: n=685). Bed-sharing rates: Control: 6.3% (5.2,7.4); Intervention: 4.7% (3.5,5.9). Rate ratio (Control/Intervention) was 1.36 (0.95,1.83) and the Bayesian probability that the rate ratio >1 was .96 and .97 by calculation and simulation, respectively. Bed-sharing rates for exclusively breastfed infants: Control: 11% (7.4, 14.6); Intervention: 5.9% (2.7, 9.2); Rate ratio was 2.00 (1.01, 3.15) and the Bayesian probability that the rate ratio >1 was .993. Conclusions. Infant sleep education plus a cardboard bassinet reduced the rate of bed-sharing in the first week of life, particularly among exclusively breastfeeding dyads. Answer: The study on sudden infant death syndrome (SIDS) in South Australia from 1968 to 1997, specifically Part 3, focused on the safety of bed sharing for infants.
The results indicated that bed sharing did show an increased risk of accidental death for infants when compared to those sleeping in designated infant containers such as cots and cradles. Accidental deaths were most likely for 24% of infants who were sharing a bed or couch, compared to 5% of those in designated infant containers. However, the risk of accidental death was even higher, at 72%, for infants who were placed alone on an adult bed or couch. These findings suggest that while bed sharing increases the risk of accidental death, the risk is greater for infants left alone on adult beds or couches (PUBMED:11115030).
Instruction: Part-time work and cigarette use among teenagers. Does age moderate this relationship? Abstracts: abstract_id: PUBMED:12353457 Part-time work and cigarette use among teenagers. Does age moderate this relationship? Objectives: Previous studies on part-time work and substance use suggest that those teenagers working longer hours during the school year use cigarettes more frequently than those working less or not at all. The purpose of this study was to investigate whether age moderates the relationship between part-time work hours and smoking status. Methods: This 1999 study consisted of a cross-sectional survey of 4,297 junior high and high school students aged 13 to 19 from 111 randomly selected schools in Ontario. Results: Compared to not working at all, moderate (11 to 20 hours/week) to long (21+ hours) work hours was more strongly associated with the probability of being a smoker among young teenagers (13 to 16 years old). Work intensity was only weakly associated with cigarette use in late adolescence (17 to 19 years old). Conclusions: Working longer hours during school is associated with cigarette use, particularly among young teenagers. Although these cross-sectional data prohibit any firm conclusions regarding causality, the strong association observed among young teenagers increases the plausibility of early work transition being a risk factor for initiating smoking. abstract_id: PUBMED:26163390 The e-cigarette disrupts other consumptions in Parisian teenagers (2012-2014) Objectives: To analyze experimentation with and use of e-cigarette by middle school and high school students in Paris. Method: Since 2012, questions about e-cigarette use have been included in the cross-sectional survey on tobacco that has been administered annually since 1991 by the Paris sans tabac association. Overall, 2% of the 12-19 year-olds attending school in Paris were surveyed. Results: Of the 10,051 teenagers aged 12-19 surveyed in 2012-2014, 21.8% (n=2194) had tried e-cigarettes. Among these experimenters, 58.9% (n=1292) were smokers, 37.4% (n=820) were non-smokers and 3.7% (n=82) were ex-smokers. From 2012 to 2014, the rate of e-cigarette experimenters increased significantly (from 7.9% to 26.3% for 12-15 year-olds and from 12.2% to 47.2% for 16-19 year-olds). The rate of regular e-cigarette users increased in the same proportion. However, over this time, there has been a decline from 15.3% to 10.9% in the rate of 12-15 year old smokers (regular or occasional) and from 38.3% to 33.5% of smokers aged 16-19. Other consumption (cannabis, alcohol abuse) also decreased but no causal relationship can be established. Conclusion: The annual doubling of e-cigarette experimentation and regular use rates has been associated with a decrease in the consumption of tobacco and other products. These data should provide some reassurance against fears that e-cigarette use among young people will provide a significant gateway to tobacco smoking. abstract_id: PUBMED:32279866 The institutionalization of part-time work: Cross-national differences in the relationship between part-time work and perceived insecurity. We propose an institutionalization of part-time work model to account for the cross-national variation in workers' perceptions of insecurity in their jobs and in the labor market, arguing that part-time work is institutionalized as high quality or marginal employment depending on the extent to which it is voluntary, gendered, and legally protected.
Using heterogeneous choice models on 2005 ISSP data linked to country-level characteristics, we find that the relationship between part-time work and insecurity is gendered and contingent upon the type of insecurity (cognitive job insecurity, labor market insecurity, or affective job insecurity) and whether or not one works part-time on a voluntary basis. At the individual level, working in a part-time job is associated with greater cognitive job insecurity but lower labor market and affective job insecurity. At the national level, the expected negative association between the part-time work rate and insecurity is most consistent for affective job insecurity. The findings also indicate that the strength of the association between working in a part-time job and insecurity varies by the national institutional context of part-time work, including the degree of gender segregation of part-time work and legal protections for part-time workers. Overall, the results from this study provide support for our institutionalization of part-time work perspective. Part-time work is not inherently secure or insecure employment. The amount of insecurity associated with this type of non-standard work is contingent upon the prevailing local myths or scripts that structure the ways that the state, employers, and workers view part-time work. abstract_id: PUBMED:32677152 Quality of work among part-time nurses and its relationship to job satisfaction and work values: A cross-sectional study. This study compared the quality of work between part-time and full-time nurses and investigated the relationship between quality of work and satisfaction with the work itself among part-time nurses. Quality of work was measured by specific work content and job characteristics. It was hypothesized that work content would affect satisfaction mediated by job characteristics. Furthermore, the moderating effects of intrinsic work values on the relationship between job characteristics and satisfaction were examined. A cross-sectional survey was conducted with 444 part-time and 795 full-time nurses working in Japanese hospital wards in 2017. The Mann-Whitney U test, χ2 test, and hierarchical multiple regressions were used. Work content differed between part-time and full-time nurses; however, job characteristics and satisfaction showed little difference. Work content was not associated with satisfaction mediated by job characteristics. Within the job characteristics, autonomy and feedback had positive effects on the satisfaction of part-time nurses, whereas interaction with work values was insignificant. When assigning work, it is important to allow part-time nurses to experience higher levels of autonomy and feedback regardless of their intrinsic work values. abstract_id: PUBMED:31121389 E-cigarette minimum legal sale age laws and traditional cigarette use among rural pregnant teenagers. Teenagers under 18 could legally purchase e-cigarettes until states passed minimum legal sale age laws. These laws may have curtailed teenagers' use of e-cigarettes for smoking cessation. We investigate the effect of e-cigarette minimum legal sale age laws on prenatal cigarette smoking and birth outcomes for underage rural teenagers using data on all births from 2010 to 2016 from 32 states. We find that the laws increased prenatal smoking by 0.6 percentage points (pp) overall. These effects were concentrated in prepregnancy smokers, with no effect on prepregnancy non-smokers. 
These results suggest that the laws reduced cigarette smoking cessation during pregnancy rather than causing new cigarette smoking initiation. Our results may indicate an unmet need for assistance with smoking cessation among pregnant teenagers. abstract_id: PUBMED:35300051 Sociodemographic, personal, peer, and familial predictors of e-cigarette ever use in ESPAD Ireland: A forward stepwise logistic regression model. Introduction: E-cigarette ever use has risen significantly in recent years in Ireland, similar to trends elsewhere in Europe, the United States, and Asia-Pacific region. Results from ESPAD Ireland (European School Survey Project on Alcohol and other Drugs) show teenage e-cigarette ever use increased from 18% (2015) to 37% (2019). Given this increase, our aim is to profile e-cigarette ever users and never users in this age group; to examine sociodemographic, personal, peer, and familial factors associated with e-cigarette ever use; and to suggest appropriate measures to reduce use. Methods: A nationally representative stratified random sample of 50 ESPAD schools was surveyed in 2019, with 3495 students aged 15-17 years. Bivariate and multivariable logistic regression analyses were performed using Stata version 16. Results: E-cigarette ever use was significantly associated with ever smoking (AOR=4.15; 95% CI: 1.29-13.41), ever cannabis use (AOR=2.21; 95% CI: 1.11-4.41) and ever inhalants use (AOR=2.51; 95% CI: 1.07-5.88). Children of university-educated mothers had significantly higher odds of e-cigarette ever use (AOR=3.46; 95% CI: 1.40-8.54). Associated with reduced AORs were reading books for enjoyment (AOR=0.32; 95% CI: 0.16-0.64), living in households where smoking was regulated (AOR=0.53; 95% CI: 0.30-0.94), and perceiving moderate risk in trying e-cigarettes once or twice (AOR=0.20; 95% CI: 0.07-0.67). Conclusions: E-cigarette ever use is part of a pattern of teenage polysubstance use including cigarette smoking, providing some support for the common liability theory. Regulation of smoking in the home, reading for enjoyment, and perceiving risk from e-cigarette use are associated with decreased likelihood of ever use, and higher parental education with increased likelihood. Thus, health education emphasizing the role of parents and risks of e-cigarette use is indicated to reduce the rise in e-cigarette ever use in teenagers. abstract_id: PUBMED:34409150 Part-time work and health in the United States: The role of state policies. Part-time work is a common work arrangement in the United States that can be precarious, insecure, and lacking opportunities for advancement. In turn, part-time work, especially involuntary part-time work, tends to be associated with worse health outcomes. Although prior research documents heterogeneity in the health consequences of precarious work across countries, we do not know whether state-level institutional contexts shape the association between part-time work and self-rated health in the United States. Using data from the Current Population Survey (2009-2019; n = 813,077), the present study examined whether linkages between part-time work and self-rated health are moderated by state-level social policies and contexts. At the population level, we document differences in the prevalence of fair/poor health among part-time workers across states. For instance, 21% of involuntary part-time workers reported fair/poor health in West Virginia compared to 7% of involuntary part-time workers in Massachusetts. 
Findings also provide evidence that voluntary (β = .51) and involuntary (β = .57) part-time work is associated with greater odds of fair/poor health among individuals. Moreover, the association between voluntary part-time work and self-rated health is weaker for individuals living in states with higher amounts for maximum unemployment insurance, higher minimum wage, and lower income inequality. State-level policies did not moderate the association between involuntary part-time work and health. The present study points to the need to mitigate the health consequences of part-time work with social policies that enhance the health of workers. abstract_id: PUBMED:26115786 Problematic Internet use among teenagers in Sfax, Tunisia Unlabelled: Use of the Internet in Tunisian society and especially among teenagers has increased in recent years. While the use of the Internet is primarily intended for research and communication, the Internet has also become an important part of teenagers' lives. Most people use the Internet in healthy and productive ways. However, some teenagers develop a problematic use of the Internet, which is a condition also known by the term "Internet addiction". In Tunisia, the literature does not report data on problematic Internet use prevalence among Tunisian teenagers. Purpose: The aim of this study was to determine the prevalence of problematic Internet use among teenagers, in the urban area of Sfax. Methods: This transversal study was carried out between the 15th January and the 15th February 2009. Participants were 600 first to third grade secondary school students, aged 14 to 20. They were recruited from seven secondary schools randomly selected in the urban area in Sfax City. The self-administered Young's 8-item questionnaire was used in this survey. Participants who scored five or more can be considered problematic Internet users. Finally, 587 valid data samples were collected, 263 (44%) males and 324 (56%) females. Eighty-five percent of the participants were in the 15-17 age range. The mean age was 16 years (±1.26). Results: The prevalence of problematic Internet use was 18.05% (total of 587 students). The sex-ratio was 0.75. The problematic Internet use was not correlated with sex. The average time of Internet use per day among problematic Internet users was 4.5 hours (±2.84) against 1.02 hours (±1.56) among non-problematic Internet users (P=0.000). Discussion: In this study, the prevalence of problematic Internet use was higher than that reported in previous studies. Two facts can explain this: first, the use of only a self-administered questionnaire does not distinguish between simple abuse and problematic Internet use. A psychiatric examination is essential to support the diagnosis of problematic Internet use. Second, this study was conducted exclusively in an urban area where access to the Internet is easier than in rural areas. This study was not representative of teenagers in Sfax. The gender difference was consistent with the findings of prior research, as was the average time of Internet use per day among problematic Internet users. Conclusion: This study shows that problematic Internet use is frequent in our teenage population. A preventive outreach for youths, their families and health professionals is needed to limit the extension of this problem among young people. The existence of misuse of the Internet should not lead to avoiding this medium.
Reflections on how to moderate the relationship with the Internet are necessary, especially since the Internet has become an essential medium of information and communication. The current lack of Internet education in school curricula is also a gap that needs to be filled. abstract_id: PUBMED:27039750 Age of initiation, Determinants and Prevalence of Cigarette Smoking among Teenagers in Mushin Local Government Area of Lagos State, Nigeria. Background: Cigarette smoking constitutes a major threat to the health and wellbeing of teenagers. While smoking has been on the decline in the developed countries, the reverse is the case in developing countries. The aim of this study was to determine the age of initiation, determinants and prevalence of cigarette smoking among teenagers in Mushin Local Government Area of Lagos state, Nigeria. Materials And Methods: This was a descriptive cross-sectional study among 475 teenagers selected by multistage sampling. A pre-tested, structured, interviewer-administered questionnaire was used for data collection. The study was carried out in November, 2014. Results: Response rate was 84.6%. Mean age of the respondents was 16.4±1.65 years. Range and mean age of initiation of cigarette smoking were 7 to 17 years and 12.0±3.32 years, respectively. Teenagers who were above 15 years (OR:5.13, 95%CI: 0.87-30.26), males (OR:5.19, 95%CI: 1.57-17.18), married (OR:8.41, 95%CI: 1.04-63.35), had ≤primary school education (OR:4.31, 95%CI: 1.07-17.33), influenced by friends (OR:308.84, 95%CI: 84.87-1123.81), and influenced by advertisements (OR:27.83, 95%CI: 3.92-197.64) were more likely to have initiated cigarette smoking. Furthermore, teenagers who were males (OR:12.77, 95%CI: 2.90-56.28), married (OR:19.24, 95%CI: 2.05-180.45), had ≤primary school education (OR:7.85, 95%CI: 2.37-26.01), influenced by friends (OR:28.56, 95%CI: 10.86-75.07), and influenced by advertisements (OR:5.95, 95%CI: 1.72-20.61) were more likely to be current cigarette smokers. In addition, 24.9% had initiated cigarette smoking while 14.7% were current cigarette smokers. Conclusions: Mean age of initiation of cigarette smoking was 12.0±3.32 years. Determinants of cigarette smoking were age, gender, marital status, educational background, friends and advertisements. Lifetime prevalence of cigarette smoking was higher than the prevalence of current cigarette smoking. Cigarette smoking reduction programs should take these factors into consideration. abstract_id: PUBMED:33121761 E-cigarette use and concurrent risk behaviors among adolescents. Purpose: Although electronic cigarette (e-cigarette) use is on the rise among adolescents, its relationship with risk behaviors is unclear. This study aimed to examine whether e-cigarette use was related to other risk behaviors and whether age and sex play a role in those associations. Methods: Data from the 2017 Youth Risk Behavior Surveillance System was used. Weighted logistic regression analysis was conducted to examine the relationships among variables of interest. Findings: The results showed that e-cigarette users, compared to nonusers, had higher odds of using other types of substances, as well as being involved in other risk behaviors including driving-related behaviors, sexual activity-related behaviors, and violence-related behaviors. Furthermore, some of these relationships are moderated by age and sex. Discussion: Our findings suggest that e-cigarette use is linked to other risk behaviors among adolescents.
Policy makers and educators should address multiple risk behaviors as part of comprehensive tobacco-free policies and youth substance use prevention programs. Answer: The relationship between part-time work and cigarette use among teenagers is indeed moderated by age. A study conducted in 1999 with a cross-sectional survey of 4,297 junior high and high school students aged 13 to 19 in Ontario found that working moderate (11 to 20 hours/week) to long (21+ hours/week) hours was more strongly associated with the probability of being a smoker among young teenagers (13 to 16 years old) compared to not working at all. However, this association was only weakly observed in late adolescence (17 to 19 years old) (PUBMED:12353457). This suggests that younger teenagers who engage in part-time work are more likely to use cigarettes than their older counterparts, indicating that age does moderate the relationship between part-time work and smoking status.
Instruction: Can simple clinical features be used to identify patients with severe carotid stenosis on Doppler ultrasound? Abstracts: abstract_id: PUBMED:9886444 Can simple clinical features be used to identify patients with severe carotid stenosis on Doppler ultrasound? Objectives: Carotid endarterectomy reduces the risk of stroke in symptomatic patients with severe ipsilateral carotid stenosis. Symptomatic patients should therefore undergo carotid Doppler imaging, but in some centres access to imaging is limited. It was therefore investigated whether simple clinical features alone or in combination could be used to identify patients with severe carotid stenosis, so that they could be referred preferentially for carotid imaging. Methods: 1041 patients with acute stroke, cerebral or retinal transient ischaemic attacks, and retinal strokes admitted to Western General Hospital or seen in neurovascular clinics were assessed by a stroke physician. Their carotid arteries were investigated using colour Doppler imaging by a consultant neuroradiologist. Patients with primary intracerebral haemorrhage, total anterior circulation strokes, posterior circulation strokes, or posterior circulation transient ischaemic attacks were excluded because carotid surgery would be inappropriate. Results: 726 patients were used in the analysis. Stepwise logistic regression showed that there were significant positive associations between severe carotid stenosis and an ipsilateral bruit, diabetes mellitus, and previous transient ischaemic attacks; and a negative association with lacunar events. The strategy with the highest specificity (97%) was "any three of these four features" but sensitivity was only 17%. The strategy with the highest sensitivity (99%) was to use one or more of the four features, but specificity was only 22%. Conclusion: None of the strategies identified all patients with severe carotid stenosis with a reasonable specificity. When access to carotid imaging is severely limited, simple clinical features are of some use in prioritising patients for imaging, but access to carotid imaging should be improved. abstract_id: PUBMED:15549220 Significance of Doppler ultrasound procedures for diagnosis of carotid stenoses Determining degree and morphology of stenoses is important for surgical planning or stent implantation. Vascular ultrasound is usually the first modality to evaluate carotid artery stenosis. Due to rapid development various methods of vascular ultrasound are applied including continuous wave (CW) Doppler, duplex Doppler, colour-coded duplex sonography (CCDS), power Doppler and B-flow technique. For quantitative assessment of the degree of stenosis the most frequently used parameters are peak systolic velocity (PSV), end-diastolic velocity (EDV) in the internal carotid artery (ICA), as well as ICA to CCA ratios of PSV and EDV. Different results reported in the literature may reflect differences in defining the degree of stenosis and methodological differences in protocol or imaging techniques. Differences in defining the degree of stenosis, advantages and disadvantages of the different Doppler techniques and future developments are discussed in detail. abstract_id: PUBMED:34306452 Diagnostic significance of transcranial doppler combined with carotid ultrasound in patients with cerebral ischemic stroke. Objective: To explore the diagnostic value of transcranial doppler (TCD) combined with carotid ultrasound (CU) in cerebral ischemic stroke (CIS). 
Methods: A total of 68 patients with CIS who were treated in our hospital from September 2018 to September 2020 were selected as the research group, and another 68 patients with non-CIS admitted during the same period were selected as the reference group. Both groups underwent TCD and CU examinations to compare their diagnostic values. Results: There were no distinct differences concerning clinical data such as gender ratio, age, BMI value, smoking history, residence, and complications between the two groups (P > 0.05). The carotid artery intima-media thickening was reported at a markedly higher rate in the research group compared to the reference group (P < 0.001). It was indicated by the CU examination that the degree of carotid artery stenosis in the research group was more severe compared with the reference group (P < 0.05). The patients in the research group experienced more severe intracranial artery stenosis relative to the reference group by the TCD examination (P < 0.05). The plaque morphology of the research group was predominantly irregular, and the internal echoes were predominantly hypoechoic. The plaque morphology in the reference group was predominantly regular, and the internal echoes were predominantly isoechoic. The incidence of plaque ulceration was remarkably higher in the research group (78%) than in the reference group (P < 0.05), and no marked difference was observed in the incidence of calcification (P > 0.05). The combined diagnostic approach was superior to TCD and CU alone in terms of accuracy, sensitivity and specificity (P < 0.001). Conclusion: TCD combined with CU can greatly improve the diagnostic efficiency of CIS, and provide more evidence for clinical therapy. It deserves promotion and use. abstract_id: PUBMED:11253085 Comparison of 3D power doppler ultrasound, color doppler ultrasound and digital subtraction angiography in carotid stenosis Purpose: To compare easy-to-perform three-dimensional power Doppler ultrasound (3D PDUS) to color Doppler ultrasound (CDUS) and digital subtraction angiography (DSA) in the assessment of internal carotid artery (ICA) stenoses in patients with severe atherosclerosis. Methods: 26 ICAs (7 without stenosis, 4 low-, 4 middle-, and 11 high-grade stenoses) in 13 patients were examined with DSA, CDUS, and 3D PDUS. CDUS and 3D PDUS were performed with a 7.5 MHz standard transducer and a Sonoline Elegra ultrasound machine. The three methods were performed and interpreted by different persons who were not aware of the diagnoses. Results: Regarding the degree of stenosis, correlations between DSA and 3D PDUS were r = 0.98 (p < 0.001), between DSA and CDUS r = 0.97 (p < 0.001), and between CDUS and 3D PDUS r = 0.95 (p < 0.001). Sensitivity and specificity regarding the detection of a high-degree stenosis were 90% and 100% for 3D PDUS, and 100% and 93.3% for CDUS. Conclusion: Even in cases with severe atherosclerosis, both sonographic methods reveal similar results comparable to DSA. 3D PDUS does not result in a diagnostic improvement on CDUS; however, it does give the new opportunity for complete data storage, reconstruction, and survey presentations. abstract_id: PUBMED:11118921 Variability in Doppler ultrasound influences referral of patients for carotid surgery. Objective: Colour Doppler ultrasound is operator dependent, but it is unclear how much clinical impact this might have on patient referral for carotid endarterectomy.
Our aim was to quantify the interobserver variability of Doppler ultrasound. Methods: Consecutive patients attending for carotid Doppler ultrasound underwent two examinations on the same day, in random order, by two of three observers blind to each other's results. Severity of stenosis was assessed using standard velocity criteria and lesion appearance. Results: A total of 189 patients were scanned (378 ICAs). Of the 134 ICAs scanned by observers 1 and 2, observer 1 classified 11 as 80-99% stenosis (operable), compared with nine by observer 2. Of the 206 ICAs scanned by observers 1 and 3, observer 1 classified 11 as 80-99% stenosis, compared with only five by observer 3. Of the 38 ICAs scanned by observers 2 and 3, observer 2 classified two as 80-99% stenosis, compared with none by observer 3. Overall, clinical management would differ in 10/378 (3%) of ICAs, but in 10/22 (45%) of those considered operable by one of the three observers. Conclusion: There was clinically important interobserver variability in the assessment of ICA disease, which could result in serious errors if endarterectomy were performed on the basis of a single Doppler ultrasound. abstract_id: PUBMED:11190873 Detection of microembolus with transcranial doppler Objectives: To review techniques and the most important clinical applications of microembolus detection with transcranial Doppler. Development: Technical aspects of microembolus detection are discussed initially; clinical circumstances in which embolus detection is currently used follow, such as carotid stenosis, cardiac pathology, monitoring during neuroangiography, coronary revascularization surgery and endarterectomy. Lastly, the potential clinical applications of this technique are reviewed: localization of an embolic source, risk of recurrent ischemia in patients with unstable atherosclerotic plaques, intraoperative monitoring and potential surgical technique modifications based on this information, contributions to the understanding of the pathophysiology of the fat embolism syndrome, and monitoring therapeutic efficacy of antithrombotics and thrombolytics. Conclusions: The ability to detect cerebral emboli with transcranial ultrasound has significantly increased our understanding of the pathogenesis of cerebral ischemia. Microembolus detection with transcranial Doppler is a powerful and novel diagnostic technique that allows the embolic process to be studied in vivo and in real time, instead of using indirect indicators of embolism (clinical and radiological features, possible embolic sources, etc.). abstract_id: PUBMED:7817411 Color Doppler ultrasound examination of carotid arteries Eleven hundred and thirty-eight patients suspected of carotid artery disease were examined prospectively by ultrasound-Doppler. In 39 cases, intra-arterial digital subtraction arteriography was performed as well and compared to ultrasound Doppler examination. The overall accuracy was 80% (kappa: 0.65). Discrimination between normal and diseased vessels could be made with an accuracy of 93% (kappa: 0.70) and the accuracy when separating vessels into greater or less than 50% stenosis was 88% (kappa: 0.74). Sources of error are discussed and the paper concludes that colour-Doppler examination is an accurate method for assessment of the carotid artery. abstract_id: PUBMED:29917285 Comparison of Carotid Doppler Ultrasound to Other Angiographic Modalities in the Measurement of Carotid Artery Stenosis.
Background And Purpose: The purpose of this study was to compare Doppler ultrasound (DUS) to other angiographic modalities: computed tomography angiography (CTA), magnetic resonance angiography (MRA), and digital subtraction angiography (DSA). Methods: All DUS studies performed at Stroke Prevention Clinic (SPC) from 2011 to 2013 and referred for further angiographic modalities were included. Patients were excluded if the corresponding angiographic modality was not performed within 6 months of DUS. Patients were also excluded if they underwent interventions before DUS or between the time of DUS and the corresponding angiographic modality. The degree of stenosis was classified as mild (<50%), moderate (50-69%), severe (70-99%), or occlusion (100%). Results: In total, 245 patients were identified. Nine patients were excluded (3.7%). Overall 472 Doppler studies of single ICAs from 236 patients were included in our analysis. Age was 65 ± 13 years and 136 patients were males (57.6%). There was an excellent agreement between DUS and CTA (kappa = .9 [P < .001], n = 274), good agreement with MRA (kappa = .8 [P < .001], n = 242), and excellent agreement with DSA (kappa = .92 [P < .001], n = 18). There was excellent agreement between CTA and MRA (kappa = .87, n = 46). Conclusion: Doppler ultrasound performed in a dedicated SPC by an experienced sonographer and reviewed by a certified stroke neurologist serves as a reliable initial screening tool in determining carotid artery stenosis. abstract_id: PUBMED:8555661 Power Doppler ultrasound examination of the intracerebral and extracerebral vasculature. Power Doppler is a new ultrasound technique based on the visualization of the Doppler power spectrum. In 22 patients with different lesions of brain-supplying vessels the results of power Doppler ultrasound were compared with those of digital subtraction angiography and conventional color Doppler ultrasound. In patients with intracranial high-degree stenoses, power Doppler was largely free of those artifacts that occur with conventional color Doppler (e.g., echoshadowing and aliasing). The vessel lumen was seen in all patients with more than 80% stenoses. Power Doppler was superior to conventional color Doppler in visualizing both middle cerebral artery stenosis and intracerebral aneurysms. Power Doppler visualization of intracranial pathology was similar to visualization by digital subtraction angiography. Power Doppler appears to be a valuable tool in the noninvasive imaging of intracranial and extracranial pathological vascular conditions. abstract_id: PUBMED:23247808 Evaluation of distal turbulence intensity for the detection of both plaque ulceration and stenosis grade in the carotid bifurcation using clinical Doppler ultrasound. Objectives: To determine the interrelationship of stenosis grade and ulceration with distal turbulence intensity (TI) in the carotid bifurcation measured using conventional clinical Doppler ultrasound (DUS) in vitro, in order to establish the feasibility of TI as a diagnostic parameter for plaque ulceration. Methods: DUS TI was evaluated in a matched set of ulcerated and smooth-walled carotid bifurcation phantoms with various stenosis severities (30, 50, 60 and 70%), where the ulcerated models incorporated a type 3 ulceration. Results: Post-stenotic TI was significantly elevated owing to ulceration in the mild and moderate stenoses (P < 0.001).
TI increased with stenosis severity in both the ulcerated and non-ulcerated series, with a statistically significant effect of increasing stenosis severity (P < 0.001). Whereas TI in the mild and non-ulcerated moderate stenoses was less than 20.4 ± 1.3 cm s(-1), TI in the ulcerated moderate and severe models was higher than 25.6 ± 1.3 cm s(-1), indicating a potential diagnostic threshold. Conclusion: We report a two-curve relationship of stenosis grade and ulceration to distal TI measured using clinical DUS in vitro. Clinical DUS measurement of distal TI may be a diagnostic approach to detecting ulceration in the mild and moderately stenosed carotid artery. Key Points: • Patients with carotid artery plaque ulcerations are at higher risk of stroke. • Clinical Doppler ultrasound is routinely used to detect carotid artery stenosis. • Doppler ultrasound turbulence intensity can detect ulceration in realistic flow models. • Turbulence intensity also increases with stenosis severity independent of ulceration. • Doppler ultrasound should help in assessing both stenosis severity and ulceration. Answer: Simple clinical features alone or in combination have been investigated to identify patients with severe carotid stenosis on Doppler ultrasound. A study involving 1041 patients assessed by a stroke physician found significant positive associations between severe carotid stenosis and an ipsilateral bruit, diabetes mellitus, and previous transient ischaemic attacks, and a negative association with lacunar events. However, none of the strategies based on these clinical features identified all patients with severe carotid stenosis with reasonable specificity. The strategy with the highest specificity (97%) had a sensitivity of only 17%, while the strategy with the highest sensitivity (99%) had a specificity of only 22%. The conclusion was that simple clinical features are of some use in prioritizing patients for imaging when access to carotid imaging is severely limited, but overall, access to carotid imaging should be improved (PUBMED:9886444).
Instruction: Patients waiting for a hip or knee joint replacement: is there any prioritization for surgery? Abstracts: abstract_id: PUBMED:18373585 Patients waiting for a hip or knee joint replacement: is there any prioritization for surgery? Objective: To investigate whether patients are prioritized for joint replacement surgery on the basis of severity of osteoarthritis, pain and physical functioning. Method: A total of 105 patients on the waiting list for primary total knee or hip replacement from a UK regional orthopaedic centre were interviewed at baseline and followed up at 3, 6 and 9 months or until joint replacement. Measurement tools were the visual analogue scale (VAS), Western Ontario and McMaster Universities (WOMAC) osteoarthritis index and the Oxford hip or knee score. Results: Most participants (81, 77%) were categorized on the waiting list as 'routine', despite having high levels of pain according to the measurement scales. There was no significant correlation between the waiting list categorization and the actual waiting time for a hip or knee joint replacement operation (Kendall's tau = 0.17; P = 0.062) and the waiting list categorization did not appear to ensure that patients were operated upon earlier. There were also no significant differences in measures (VAS pain, WOMAC and Oxford hip or knee scores) between those individuals who had their operations earlier (before 6 months) compared with those participants who had their operations later (6 months or greater) or even not at all. Of the 105 patients who were listed for joint replacement, 24 (25%) patients did not have their operation due to: a medical delay (14); self-delay/cancellation (7); arthroscopy instead (2); and death (1). Conclusion: With the expected increase in demand for joint replacement, there needs to be a re-examination of assessment procedures of patients listed for joint replacement. The use of measurement tools to assess symptoms such as pain and physical function would be one way forward. abstract_id: PUBMED:19762882 'There are too many of us to fix.' Patients' views of acceptable waiting times for hip and knee replacement. Objectives: To assess patients' views of maximum acceptable waiting times (MAWT) for hip and knee replacement, associated factors and the accuracy of self-reported waiting times. Methods: We mailed 1000 questionnaires each to two random samples of patients either waiting for or who had received an arthroplasty within the preceding 3-12 months. We used linear regression to assess the determinants of patient MAWT, and content analysis to assess reasons for MAWT and ideal waiting time. Results: Of the 1330 responses, 1127 had MAWT data. The sample was 57% women; mean age was 70 +/- 11 years. Median self-reported and actual waiting time was eight months (Spearman correlation = 0.70). Median MAWT was four months and ideal waiting time was two months. The most frequent reasons for MAWT were pain, quality of life and needing time to prepare for surgery. A longer MAWT was associated with younger age, group (waiting), a longer self-reported waiting time, better EQ-5D index, an acceptable waiting time, a perception of fairness and a view that others worse off on the list should go ahead. Conclusions: Patients' views of acceptable waiting times are important for a fair process of establishing waiting time benchmarks for joint replacement. 
abstract_id: PUBMED:16756865 Results of a management plan for surgical waiting lists for hip and knee replacements This study describes the implementation of a management plan for surgical joint replacement waiting lists and its results after 3 years. The plan was based on the following: unification of information and scheduling, periodic review, clinical guidelines, management of demand, prioritization according to need, and increasing the services provided. During the first year, the plan succeeded in revealing the real waiting list, with 23% more patients than previously included. Three years later, 16% of the patients had not turned up for surgery after being scheduled; the mean length of hospital stay for joint replacements had been reduced by 4 days; 59.5% of the patients joining the list had been assessed with a prioritization instrument, and the number of joint replacements had increased by 16% with a reduction of 14.7% in patients waiting for joint replacements. The resolution time for these procedures had also decreased by 3 months for knee arthroplasty and by 1 month for hip arthroplasty. abstract_id: PUBMED:3803022 Alternatives to endoprostheses of the hip and knee joint Total hip and knee replacement is one of the most successful operative procedures in orthopaedic surgery. With respect to greater life expectancy, however, loosening of the implant and revision surgery with all its sequelae must be taken into account. Therefore joint preserving procedures or arthrodesis of the hip and knee joint are still of great importance. The best long term results can be achieved by operations which can improve the biomechanics of the joint. Operative procedures which aim at the biology of the joint only lead to temporary pain relief. Arthrodesis of the hip and knee joint should always be considered in young patients with no possibility for joint preserving techniques. abstract_id: PUBMED:16838360 Hip and knee joint rotations differ between patients with medial and lateral knee osteoarthritis: gait analysis of 30 patients and 15 controls. The motions and moments in the hip and knee in female patients on the waiting list for knee prosthesis surgery with medial (n = 15) or lateral (n = 15) osteoarthritis (OA) were compared with a control group (n = 15). We hypothesized that not only the kinematics and kinetics of the knee but also of the hip would differ between patients in the medial and lateral groups. At midstance, patients with lateral OA showed slightly (2 degrees) more maximal (peak) adduction (p = 0.015) of the hip joint and patients with medial OA had 7 degrees more abduction (p < 0.001) than did controls. In patients with lateral OA, the femur was positioned in about 7 degrees more maximum external rotation (p = 0.001), but femur position did not differ between medial OA and controls (p ≥ 0.8). There was a tendency to higher internal hip rotation moment in lateral OA compared to controls (p = 0.021). The maximum values of the internal knee abduction moments were 52% higher in medial OA (p = 0.005) and 63% lower in lateral OA (p < 0.001) compared to controls. Cases with medial OA had 9 degrees more, whereas those with lateral OA had 6 degrees less external tibial rotation than controls (medial vs. lateral OA, p = 0.001). We found an association between presence of lateral OA of the knee and the biomechanics of the hip joint.
It remains to be evaluated if the changed biomechanics of the hip joint is a reason for development of lateral OA or an observation that is a result of this disease. abstract_id: PUBMED:22459101 Joint perception after hip or knee replacement surgery. Introduction: Knee and hip arthroplasties are recognized as being effective. However, subjects with a prosthetic joint rarely report returned sensation comparable to their native joint. Hypothesis: Joint perception by patients following hip joint replacement is better than following knee replacement and in both cases this perception is directly related to the clinical results measured with currently validated scores. Patients And Methods: Patient joint perception in prosthetic reconstruction was evaluated in 347 patients, 46 who underwent unicompartmental knee arthroplasty (UKA), 119 tricompartmental knee arthroplasty (TKA), 93 hip resurfacing (HR), and 89 total hip arthroplasty (THA). The subjects' joint perception, their satisfaction, and the WOMAC clinical score were recorded and compared. Results: Joint perception was significantly worse for knee arthroplasties (TKA and UKA) compared to hip arthroplasties (THA or HR) (P<0.001). The WOMAC score was also significantly less favorable for knee arthroplasties than for hip arthroplasties (P<0.0001). However, there was no significant difference for the clinical scores between TKA and UKA as well as between HR and THA. Joint perception was strongly correlated with the WOMAC score for all groups (R(2)=0.951). Discussion: No difference was found after more conservative surgeries such as HR or UKA compared to traditional arthroplasty procedures (THA or TKA). Demonstrating inferior results in comparison to the hip, knee arthroplasties deserve particular attention and can still be improved. The assessment of joint perception used in this study can be considered a valuable clinical tool that is strongly correlated to validated, but more complex to apply, clinical scores. abstract_id: PUBMED:14997924 Prioritization of patients on scheduled waiting lists: validation of a scoring system for hip and knee arthroplasty. Introduction: The hip and knee replacement priority criteria tool (HKPT) is 1 of 5 tools developed by the Western Canada Waiting List Project for setting priorities among patients awaiting elective procedures. We set out to assess the validity of the HKPT priority criteria score (PCS) and map the maximum acceptable waiting times (MAWTs) for patients to levels of urgency. Methods: Two studies were used to assess convergent and discriminant validity. In study 1, consecutive patients on a waiting list for hip or knee arthroplasty were assessed by orthopedic surgeons from the 4 provinces in Western Canada, using the HKPT and data on patient age, gender, joint site, type of surgery (primary or revision), 2 measures of surgeon-rated patient urgency, and diagnosis. In study 2, 6 patients were videotaped during a consultation interview with the surgeon and were assessed by a group of experts. We measured function with the PCS and the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC). Results: In study 1, we assessed 394 patients, and in study 2, 19 raters assessed the 6 patients. Correlations between the PCS and other measures of physician-rated urgency were strong, ranging from 0.78 to 0.89.
For a subgroup of 60 patients, correlation between the PCS and function as measured with the WOMAC was 0.48, and correlation was greater (0.45-0.56) between items measuring similar constructs (e.g., pain at rest) than those measuring different constructs (0.21-0.40). In study 2, median MAWTs ranged from 4 to 24 weeks for 5 levels of urgency based on PCS percentiles. Conclusions: Results from this study support the validity of the PCS as a measure of surgeon-rated urgency for hip or knee arthroplasty. Evaluative studies are needed to assess the validity and acceptability of the tools and the establishment of MAWTs in clinical practice. abstract_id: PUBMED:12930107 Developing priority criteria for hip and knee replacement: results from the Western Canada Waiting List Project. Introduction: The Western Canada Waiting List Project (WCWL), a federally funded partnership of 19 organizations, was created to develop tools for managing waiting lists. The WCWL panel on hip and knee replacement surgery was 1 of 5 panels constituted under this project. Methods: The panel developed and tested a collection of standardized clinical criteria for setting priorities among patients awaiting hip and knee replacement. The criteria were applied to 405 patients in 4 provinces. Regression analysis was used to determine the set of criteria weights that collectively best predicted clinicians' overall urgency ratings. Inter-rater and test-retest reliability was assessed from 6 videotaped patient interviews, scored by orthopedic surgeons, related professionals and general practitioners. Results: The priority criteria accounted for over two-thirds of the observed variance in overall urgency ratings (adjusted R2 = 0.676). The panel modified the criteria and weights based on the empirical findings and on clinical judgement. The reliability of the priority criteria for the hip and knee replacement tool was among the strongest of the 5 instruments developed in the WCWL project. Conclusions: The panel considered the criteria easy to use and reasonably reflective of expert surgical judgement regarding clinical urgency for hip and knee replacement. Further development and testing of the tool appears warranted. abstract_id: PUBMED:19051231 Living with severe osteoarthritis while awaiting hip and knee joint replacement surgery. Objectives: To explore the lived experiences of patients with severe osteoarthritis (OA) of the hip or knee joint while awaiting joint replacement surgery. Methods: An exploratory qualitative approach using phenomenology was adopted for the purpose of the study. Unstructured interviews were carried out on a sample of six patients who had been referred to the National Health Service waiting list for a primary hip or knee replacement. The participants were invited to share their experiences and concerns relating to how they were coping with end-stage OA of their hip or knee joint. Interviews were digitally recorded and transcribed verbatim. Narrative data were analysed using Giorgi's (1985) procedural steps to reveal themes which recurred in the participants' stories. Results: Six themes emerged from the data, central to the experience of living with severe OA. They were: coping and living with pain; not being able to walk; coping with everyday activities; body image; advice and support available; and the effect of their disease upon family, friends and helpers. There were also a number of sub-themes associated with each major theme. 
Conclusions: This study suggests that there is an absence of generic support, guidance and information relating to the management of symptoms of OA for individuals awaiting hip and knee replacement surgery. Patients awaiting hip and knee joint replacement surgery often have difficulty in managing their symptoms. Support in general appears to be dependent on the availability of resources in the primary care setting. Potential patients who are fortunate to know or meet someone who has undergone a similar procedure learn from the experience of exchanging information between themselves, along with coping strategies in the management of their symptoms. abstract_id: PUBMED:21680042 The importance of patient expectations as a determinant of satisfaction with waiting times for hip and knee replacement surgery. Objectives: The disconfirmation model hypothesizes that satisfaction is a function of a perceived discrepancy from an initial expectation. Our objectives were: (1) to test the disconfirmation model as it applies to patient satisfaction with waiting time (WT) and (2) to build an explanatory model of the determinants of satisfaction with WT for hip and knee replacement. Methods: We mailed 1000 questionnaires to 2 random samples: patients waiting or those who had received a joint replacement within the preceding 3-12 months. We used ordinal logistic regression analysis to build an explanatory model of the determinants of satisfaction. Results: Of the 1330 returned surveys, 1240 contained patient satisfaction data. The sample was 57% female; mean age was 70 years (SD 11). Consistent with the disconfirmation model, when their WTs were longer than expected, both waiting (OR 5.77, 95% CI 3.57-9.32) and post-surgery patients (OR 6.57, 95% CI 4.21-10.26) had greater odds of dissatisfaction, adjusting for the other variables in the model. Compared to those who waited 3 months or less, post-surgery patients who waited 6 to 12 months (OR 2.59, 95% CI 1.27-5.27) and over 12 months (OR 3.30, 95% CI 1.65-6.58) had greater odds of being dissatisfied with their waiting time. Patients who felt they were treated unfairly had greater odds of being dissatisfied (OR 4.74, 95% CI 2.60-8.62). Conclusions: In patients on waiting lists and post-surgery for hip and knee replacement, satisfaction with waiting times is related to fulfillment of expectations about waiting, as well as a perception of fairness. Measures to modify expectations and increase perceived fairness, such as informing patients of a realistic WT and communication during the waiting period, may increase satisfaction with WTs. Answer: The prioritization for surgery in patients waiting for a hip or knee joint replacement appears to be a complex issue with various factors at play. According to the abstracts provided, there is evidence that prioritization does not always correlate with the severity of osteoarthritis, pain, and physical functioning. A study found that most participants on the waiting list were categorized as 'routine' despite having high levels of pain, and there was no significant correlation between waiting list categorization and actual waiting time for surgery (PUBMED:18373585). Additionally, there were no significant differences in pain and physical function measures between those who had their operations earlier versus those who had them later or not at all. 
Another study focused on patients' views of maximum acceptable waiting times (MAWT) for hip and knee replacement, finding that the median MAWT was four months, with ideal waiting time being two months. Factors associated with a longer MAWT included younger age, being on the waiting list, longer self-reported waiting times, better quality of life, and a perception of fairness (PUBMED:19762882). A management plan for surgical waiting lists for hip and knee replacements was implemented in one study, which included prioritization according to need. This plan resulted in an increase in the number of joint replacements and a reduction in patients waiting for these procedures. The resolution time for these procedures also decreased (PUBMED:16756865). In terms of prioritization tools, the hip and knee replacement priority criteria tool (HKPT) was developed to set priorities among patients awaiting elective procedures. The HKPT priority criteria score (PCS) showed strong correlations with other measures of physician-rated urgency and function as measured with the WOMAC index (PUBMED:14997924). The Western Canada Waiting List Project also developed priority criteria for hip and knee replacement, which accounted for a significant variance in overall urgency ratings and was considered easy to use and reflective of expert surgical judgement (PUBMED:12930107). Overall, while there are tools and management plans in place to prioritize patients for hip and knee replacement surgeries, the actual prioritization process may not always align with patients' pain levels and physical functioning. Additionally, patients' perceptions of acceptable waiting times and fairness play a role in their satisfaction with the waiting process (PUBMED:21680042).
Instruction: The development of bronchiectasis on chest computed tomography in children with cystic fibrosis: can pre-stages be identified? Abstracts: abstract_id: PUBMED:27108295 The development of bronchiectasis on chest computed tomography in children with cystic fibrosis: can pre-stages be identified? Objective: Bronchiectasis is an important component of cystic fibrosis (CF) lung disease but little is known about its development. We aimed to study the development of bronchiectasis and identify determinants for rapid progression of bronchiectasis on chest CT. Methods: Forty-three patients with CF with at least four consecutive biennial volumetric CTs were included. Areas with bronchiectasis on the most recent CT were marked as regions of interest (ROIs). These ROIs were generated on all preceding CTs using deformable image registration. Observers indicated whether: bronchiectasis, mucus plugging, airway wall thickening, atelectasis/consolidation or normal airways were present in the ROIs. Results: We identified 362 ROIs on the most recent CT. In 187 (51.7 %) ROIs bronchiectasis was present on all preceding CTs, while 175 ROIs showed development of bronchiectasis. In 139/175 (79.4 %) no pre-stages of bronchiectasis were identified. In 36/175 (20.6 %) bronchiectatic airways the following pre-stages were identified: mucus plugging (17.7 %), airway wall thickening (1.7 %) or atelectasis/consolidation (1.1 %). Pancreatic insufficiency was more prevalent in the rapid progressors compared to the slow progressors (p = 0.05). Conclusion: Most bronchiectatic airways developed within 2 years without visible pre-stages, underlining the treacherous nature of CF lung disease. Mucus plugging was the most frequent pre-stage. Key Points: • Development of bronchiectasis in cystic fibrosis lung disease on CT. • Most bronchiectatic airways developed within 2 years without pre-stages. • The most frequently identified pre-stage was mucus plugging. • This study underlines the treacherous nature of CF lung disease. abstract_id: PUBMED:17053494 Update on the application of chest computed tomography scanning to cystic fibrosis. Purpose Of Review: To summarize the role of chest computed tomography as a tool to evaluate cystic fibrosis lung disease, and to describe what abnormalities can be detected using computed tomography and how computed tomography can be used in clinical practice. Recent Findings: Pulmonary function tests are only an indirect measure of lung structure, and are insensitive to localized and early structural abnormalities. Computed tomography is able to detect small areas with localized severe damage. The differentiation between normal and abnormal on computed tomography is relatively easy. In half of cystic fibrosis patients, the information obtained from pulmonary function tests is discordant with the information obtained from computed tomography. Summary: Cystic fibrosis patients show lung inflammation and infection starting early in life. This leads to irreversible structural lung damage such as bronchiectasis and air trapping. Cystic fibrosis lung disease varies widely between patients. The primary aim of therapy is to prevent structural damage and to conserve lung function. Adequate monitoring of cystic fibrosis lung disease is paramount to tailor treatment to a patient's need. Computed tomography is currently the best tool to monitor lung structure, and pulmonary function tests are the best to monitor lung function. 
abstract_id: PUBMED:2939078 Cystic fibrosis: a comparison of computed tomography and plain chest radiographs. In patients with cystic fibrosis, plain chest radiographs may suggest the presence of bronchiectasis, bronchoceles, hilar adenopathy, or pulmonary arterial hypertension. We compared computed tomography (CT) with conventional chest radiography in 12 patients. CT clearly reveals the cause of increased linear markings, nodular lesions, and enlarged hila as seen on plain chest radiographs. It showed that nine patients had hilar adenopathy, five had enlarged pulmonary arteries, and 11 had bronchiectasis. Bronchoceles, a finding that may influence therapy, were seen on seven CT scans but on only four of the plain films. abstract_id: PUBMED:35181935 Chest X-rays are less sensitive than multiple breath washout examinations when it comes to detecting early cystic fibrosis lung disease. Aim: Annual chest X-ray is recommended as routine surveillance to track cystic fibrosis (CF) lung disease. The aim of this study was to investigate the clinical utility of chest X-rays to track CF lung disease. Methods: Children at Gothenburg's CF centre who underwent chest X-rays, multiple breath washouts and chest computed tomography examinations between 1996 and 2016 were included in the study. Chest X-rays were interpreted with Northern Score (NS). We compared NS to lung clearance index (LCI) and structural lung damage measured by computed tomography using a logistic regression model. Results: A total of 75 children were included over a median period of 13 years (range: 3.0-18.0 years). The proportion of children with abnormal NS was significantly lower than the proportion of abnormal LCI up to the age of 4 years (p < 0.05). A normal NS and a normal LCI at age 6 years were associated with a median (10-90th percentile) total airway disease of 1.8% (0.4-4.7%) and bronchiectasis of 0.2% (0.0-1.5%). Conclusion: Chest X-rays were less sensitive than multiple breath washout examinations to detect early CF lung disease. The combined results from both methods can be used as an indicator to perform chest computed tomography less frequently. abstract_id: PUBMED:2914186 High-resolution computed tomography in pulmonary cystic fibrosis. Twenty-one patients with established cystic fibrosis were examined using high-resolution, narrow-section computed tomography. The bronchiectasis of cystic fibrosis is described and is shown to be characteristic in many respects. Computed tomography was found to be more accurate than standard radiography in locating the disease process and demonstrated pleural involvement which was not apparent on chest radiographs. abstract_id: PUBMED:23830321 Multi-modality monitoring of cystic fibrosis lung disease: the role of chest computed tomography. Cystic fibrosis [CF] lung disease is characterized by progressive bronchiectasis and small airways disease. To monitor CF lung disease traditionally spirometry has been the most important modality. In addition to spirometry chest radiography was used to monitor progression of structural lung abnormalities. However, the importance of chest radiography in disease management has been limited due to its poor sensitivity and specificity to detect disease progression. Over the last decade chest CT has become the gold standard for monitoring the severity and progression of bronchiectasis. Small airways disease can be monitored using spirometry, multiple breath washout techniques, and chest CT.
In modern CF-care a multi-modality approach is needed to monitor CF lung disease and to personalize treatment for the needs of the patient. When state-of-the-art low dose bi-annual chest CT protocols are used, radiation risk is considered to be low. In between chest CT imaging, physiologic measures are important to obtain for monitoring. Stratification of monitoring protocols based on the risk profile of the patient can help us in the future to better care for people with CF. abstract_id: PUBMED:25164327 What did we learn from two decades of chest computed tomography in cystic fibrosis? Despite our current treatment, many cystic fibrosis (CF) patients still show progressive bronchiectasis and small airways disease. Adequate detection and monitoring of progression of these structural abnormalities is needed to personalize treatment to the severity of CF lung disease of the patient. Chest computed tomography (CT) is the gold standard to diagnose and monitor bronchiectasis. Many studies have been done to validate the role of chest CT in CF and to improve the protocols. From these studies it became clear that for correct interpretation of the severity of bronchiectasis and small airways disease standardization of lung volume for the inspiratory and expiratory CT scan acquisition is needed. The risk related to the radiation exposure of a chest CT scan every second year is considered low. Automated and quantitative image analysis systems are developed to improve the reliability and sensitivity of assessments of structural lung changes in CF, particularly in early life. In this paper an overview is given of the lessons learned from two decades of monitoring CF lung disease using chest CT. abstract_id: PUBMED:25474182 Chest computed tomography predicts the frequency of pulmonary exacerbations in children with cystic fibrosis. Rationale: Abnormalities on chest computed tomography (CT) in children with cystic fibrosis (CF) have been shown to correlate with short-term measures of lung disease. Chest CT scores offer promise as a potential surrogate end point in CF; however, there is limited information available on the ability of chest CT scores to predict future morbidity. Objectives: Determine whether chest CT scores are associated with the rate of pulmonary exacerbations over the next 10 years. Methods: Ten years of follow-up data were obtained from the CF Foundation Patient Registry for 60 children enrolled in the Pulmozyme Early Intervention Trial and who had chest CT scans at baseline. Measurements And Main Results: Multivariable Poisson regression was used to compare Brody CT scores and the number of pulmonary exacerbations in the following 10 years. At the time of the chest CT, the mean (SD) age was 10.6 (1.7) years. A 1-point increase in the Brody CT score was associated with an increase in the mean (95% confidence interval) rate of pulmonary exacerbations of 1.39 (1.15, 1.67) (P < 0.001). Brody CT scores were more strongly associated with the number of pulmonary exacerbations than FEV1 % predicted at the time of the chest CT (P = 0.037 by chi-square test). Conclusions: There is a significant association between Brody CT scores and the rate of pulmonary exacerbations up to 10 years later. This association is stronger than for FEV1 obtained at the time of the CT, suggesting that chest CT scores offer improved ability to predict future outcomes. abstract_id: PUBMED:9565424 Cystic fibrosis: when should high-resolution computed tomography of the chest be obtained?
Objective: To provide indications for high-resolution computed tomography (HRCT) of the chest in patients with cystic fibrosis (CF). Design: We compared the HRCT and conventional chest radiography (CCR) scores and assessed their correlation with clinical scores and pulmonary function tests. Setting: Department of Pediatrics, Federico II University, Naples, Italy. Subjects: A total of 30 patients with CF 6.75 to 24 years of age. Results: CCR scores correlated highly with HRCT (r = -0.8) and clinical (r = 0.5) scores, whereas total HRCT scores were not related to clinical scores. Of all the specific HRCT findings, only bronchiectasis appeared related significantly to the clinical score (r = 0.6). Most pulmonary function tests were related to CCR and total HRCT scores, but not to HRCT scoring of specific changes. Forced vital capacity and CCR scores appeared the best predictors of the HRCT score (multiple R = 0.58 and 0.79, respectively). In patients with mild lung disease, HRCT detected bronchiectasis and air trapping in 57% and 71% of the cases, respectively. In patients with more abnormal chest radiographs, bronchiectasis and air trapping were demonstrated on HRCT in all cases. Conclusions: These findings suggest that HRCT of the chest is most useful in the identification of early lung abnormalities in patients with CF with mild respiratory symptoms, whereas for established disease, CCR is still the first-line imaging technique. The advantage of detecting early changes on CT imaging awaits additional confirmation, at least until early therapeutic interventions affecting significantly the final outcome of the disease are demonstrated. In patients with advanced disease, HRCT may be useful in the evaluation of specific lung changes when more aggressive treatment such as chest surgical interventions is indicated. Given the cost of the procedure and the high radiation dosage compared with CCR, a careful assessment of the cost:benefit ratios of HRCT is strongly recommended in CF. abstract_id: PUBMED:33683309 Quantification of pulmonary pathology in cystic fibrosis - comparison between digital chest tomosynthesis and computed tomography. Purpose: Digital tomosynthesis (DTS) is currently undergoing validation for potential clinical implications. The aim of this study was to investigate the potential for DTS as a low-dose alternative to computed tomography (CT) in imaging of pulmonary pathology in patients with cystic fibrosis (CF). Methods: DTS and CT were performed as part of the routine triannual follow-up in 31 CF patients. Extent of disease was quantified according to modality-specific scoring systems. Statistical analysis included Spearman's rank correlation coefficient (r) and Krippendorff's alpha (α). Major Findings: The median effective dose was 0.14 for DTS and 2.68 for CT. Intermodality correlation was very strong for total score and the subscores regarding bronchiectasis and bronchial wall-thickening (r = 0.82-0.91, P < 0.01). Interobserver reliability was high for total score, bronchiectasis and mucus plugging (α = 0.83-0.93) in DTS. Conclusion: Chest tomosynthesis could be a low-dose alternative to CT in quantitative estimation of structural lung disease in CF. Answer: The development of bronchiectasis in children with cystic fibrosis (CF) can be monitored using chest computed tomography (CT), which is considered the gold standard for assessing lung structure in CF patients (PUBMED:17053494).
A study aimed to identify the development of bronchiectasis and determinants for rapid progression of this condition on chest CT in CF patients. The study included 43 patients with CF who had at least four consecutive biennial volumetric CTs. Regions of interest (ROIs) were marked on the most recent CT where bronchiectasis was present, and these ROIs were generated on all preceding CTs using deformable image registration. Observers indicated whether bronchiectasis, mucus plugging, airway wall thickening, atelectasis/consolidation, or normal airways were present in the ROIs (PUBMED:27108295). The results showed that in 51.7% of ROIs, bronchiectasis was present on all preceding CTs, while 175 ROIs showed the development of bronchiectasis. In 79.4% of these, no pre-stages of bronchiectasis were identified, indicating that most bronchiectatic airways developed within 2 years without visible pre-stages. However, in 20.6% of the bronchiectatic airways, pre-stages were identified, with mucus plugging being the most frequent pre-stage (17.7%), followed by airway wall thickening (1.7%) and atelectasis/consolidation (1.1%). The study concluded that the treacherous nature of CF lung disease is underlined by the rapid development of bronchiectasis without visible pre-stages in most cases, although mucus plugging was the most common pre-stage when present (PUBMED:27108295).
Instruction: Rifampicin-resistant and rifabutin-susceptible Mycobacterium tuberculosis strains: a breakpoint artefact? Abstracts: abstract_id: PUBMED:23633684 Rifampicin-resistant and rifabutin-susceptible Mycobacterium tuberculosis strains: a breakpoint artefact? Objectives: It has long been assumed that some rifampicin-resistant Mycobacterium tuberculosis strains are susceptible to, and thus treatable with, rifabutin. However, clinical breakpoints for susceptibility testing of rifabutin as well as the evidence for a clinical effect of rifabutin in rifampicin-resistant strains remain poorly defined. The objective of this study was to re-evaluate the breakpoint for rifabutin in relation to its MIC wild-type distribution and the presence of mutations in rpoB. Methods: The MIC in 7H10 Middlebrook medium was determined for clinical isolates of M. tuberculosis (n = 95), where a majority were multidrug resistant. Additionally, all strains were screened for rpoB mutations by sequencing and the GenoType MTBDRplus assay. Results: Rifampicin resistance was confirmed by genotypical and/or phenotypical tests in 73 isolates (76.8%). Nineteen isolates, defined as rifampicin resistant and rifabutin susceptible according to the present breakpoint, exhibited significantly higher MICs of rifabutin (0.064-0.5 mg/L) than rifabutin-susceptible isolates without any detectable mutations in rpoB (P < 0.001). These 19 isolates were clearly resistant to rifampicin (MIC 2-256 mg/L) and all but one had mutations in rpoB, with 9 (47.4%) specifically in Asp516Val. Conclusions: Our results indicate that rifampicin-resistant but rifabutin-susceptible isolates according to the present breakpoints harbour rpoB mutations and have a rifabutin MIC significantly higher than strains without any detectable mutations in rpoB. So far there are no clinical, pharmacological or microbiological data to confirm that such isolates can be treated with rifabutin and we suggest a revision of the current breakpoints. abstract_id: PUBMED:22883990 Cross-resistance between rifampin and rifabutin in multidrug resistant Mycobacterium tuberculosis complex strains Objective: To study the cross-resistance between rifampin and rifabutin in multidrug resistant Mycobacterium tuberculosis complex strains, and therefore to provide laboratory data for using rifabutin in the treatment of multidrug resistant tuberculosis. Methods: The MIC(90) of rifabutin and rifampin against 99 multidrug resistant Mycobacterium tuberculosis clinical strains were determined by microplate assays. Statistical analysis was performed by using the χ(2) test and the t test. Results: The cross-resistance rate between rifampicin and rifabutin was 85.9% (85/99), but the MIC(90) of rifabutin (≤ 16 mg/L, median 2 mg/L) was significantly lower than that of rifampicin (≥ 2 mg/L, median > 32 mg/L). The cross-resistance rate increased with the resistance level of rifampicin. The cross-resistance strains in the lower and the medium groups were 0/9 and 5/9 respectively, while the strains of the high rifampicin-resistant group were almost all cross-resistant (98.8%, 80/81). Conclusion: Rifabutin had activity against rifampin resistant Mycobacterium tuberculosis complex strains in vitro, and therefore may be used as an alternative for the treatment of multidrug resistant tuberculosis.
abstract_id: PUBMED:19860266 Cross-resistance between rifampicin and rifabutin among multi-drug resistant Mycobacterium tuberculosis strains Objective: To compare the susceptibility of rifampicin (RFP) and rifabutin (RBT) against multi-drug resistant Mycobacterium tuberculosis (MDR-TB). Method: A total of 44 confirmed MDR-TB strains collected by the Ryoken consortium and 97 susceptible M. tuberculosis strains were tested for susceptibility to RBT, following the CLSI M24-A laboratory standard. The core 81bp region of the rpoB gene was sequenced for MDR-TB strains, and the mutations were defined. Results: Among the 44 MDR-TB strains tested, 12 strains were susceptible to RBT. Genetic alterations were identified in 43 of 44 MDR-TB strains. The RBT susceptible strains had alterations mainly at codons 506-508, 511, 512, and 516. Mutations at codons 526 and 531 were closely related to RBT resistance, with two exceptions. The overall cross-resistance between RFP and RBT was 72.7%. Discussion: RBT will be a candidate drug for the treatment of MDR-TB. In this study, approximately 27% of MDR-TB in Japan had susceptibility to RBT. However, drug susceptibility testing for RBT is not easy at a general microbiology laboratory considering the preparation of medium and laboratory facilities. It is also legally difficult to transfer confirmed MDR-TB strains from one place to another for testing. The mutations at codons 526 and 531 in rpoB are closely related to RBT resistance and will be useful to predict its susceptibility. abstract_id: PUBMED:36099255 Minimum inhibitory concentrations of rifampin and isoniazid among multidrug and isoniazid resistant Mycobacterium tuberculosis in Ethiopia. Introduction: Traditionally, single critical concentrations of drugs are utilized for Mycobacterium tuberculosis (Mtb) drug susceptibility testing (DST); however, the level of drug resistance can impact treatment choices and outcomes. Mutations at the katG gene are the major genetic mutations in multidrug resistant (MDR) Mtb and are usually associated with high-level resistance. We assessed the minimum inhibitory concentrations (MICs) of MDR or rifampin resistant (RR) and isoniazid (INH) resistant Mtb isolates to determine the quantification of drug resistance among key anti-tuberculosis drugs. Methods: The study was conducted on stored Mtb isolates collected as part of a national drug resistance survey in Ethiopia. MIC values were determined using Sensititre™ MYCOTB plates. A line probe assay (MTBDRplus) was also performed to identify genetic determinants of resistance for all isolates. Results: MIC testing was performed on 74 Mtb isolates including 46 MDR, 2 RR and 26 INH phenotypically resistant isolates as determined by the Löwenstein Jensen (LJ) method. Four (15%) INH resistant Mtb isolates showed borderline rifampin resistance (MIC = 1 μg/ml) using MYCOTB MIC plates and no rifampin resistance mutations were detected by LPA. Among the 48 MDR/RR TB cases, 9 (19%) were rifabutin susceptible (MIC was between ≤0.25 and 0.5 μg/ml). Additionally, the MIC for isoniazid was between 2-4 μg/ml (moderate resistance) for 58% of MDR TB isolates and 95.6% (n = 25) of the isolates had mutations at the katG gene. Conclusion: Our findings suggest a role for rifabutin treatment in a subset of RR TB patients, thus potentially preserving an important drug class.
The high proportion of moderate-level INH resistance among MDR Mtb isolates indicates the potential benefit of high-dose isoniazid treatment in the high proportion of MDR Mtb isolates harboring katG mutations. abstract_id: PUBMED:26459537 Predicting differential rifamycin resistance in clinical Mycobacterium tuberculosis isolates by specific rpoB mutations. Setting: Rifampin (RMP) resistant Mycobacterium tuberculosis is usually assumed to be resistant to all rifamycins. Increasing evidence indicates, however, that some rpoB mutations, detectable by rapid molecular diagnostics, confer resistance to RMP but not to rifabutin (RBT), suggesting that RBT may be effective for the treatment of M. tuberculosis with these mutations. Objective: To determine if specific rpoB mutations reliably predict differential phenotypic resistance to RMP and RBT. Design: We selected 60 clinical M. tuberculosis isolates from a repository of multinational multidrug-resistant tuberculosis isolates and stratified them into two groups: 1) those with rpoB mutations suspected to confer differential resistance to RMP and RBT, and 2) those expected to be cross-resistant to RMP and RBT. These assumptions were tested by comparing the phenotypic susceptibilities of RMP/RBT with those predicted by mutations in the rpoB gene. Results: Of 20 suspected RMP-resistant/RBT-susceptible isolates, 15 were RMP-resistant but RBT-susceptible, 3 were RMP- and RBT-susceptible, and 2 were cross-resistant to both RMP and RBT. In comparison, 40 of 40 suspected cross-resistant isolates were both RMP- and RBT-resistant. Conclusion: Our data support the association between specific rpoB mutations and differential resistance of M. tuberculosis to RMP and RBT. Clinical studies are required to investigate the efficacy of RBT in the treatment of M. tuberculosis harboring these mutations. abstract_id: PUBMED:22640449 Rifampicin-resistant Mycobacterium tuberculosis: susceptibility to isoniazid and other anti-tuberculosis drugs. Based on data from 14 Supranational Tuberculosis (TB) Reference Laboratories worldwide, the proportion of rifampicin (RMP) resistant isolates that were isoniazid (INH) susceptible by phenotypic drug susceptibility testing varied widely (0.5-11.6%). RMP-resistant isolates that were INH-susceptible had significantly lower rates of resistance to other first- and second-line anti-tuberculosis drugs (except rifabutin) compared to multidrug-resistant isolates. RMP resistance is not always a good proxy for a presumptive diagnosis of multidrug-resistant TB, which has implications for use of molecular assays that identify only RMP resistance-associated DNA mutations. abstract_id: PUBMED:22100303 Pharmacophore insights into rpoB gene mutations in Mycobacterium tuberculosis rifampicin resistant isolates. This paper reports the susceptibility profile to rifabutin (RFB) 1 and six recently synthesized RFB analogs 3-8, of both rifampicin (RFP) susceptible and resistant Mycobacterium tuberculosis clinical isolates from two sources: Mexico and Brazil. Taking into account that about 95% of M. tuberculosis strains resistant to RFP present mutations in the rpoB gene, with some of these mutations being determinant also to RFB resistance, the RFB analogs were screened for activity against a set of known RFP susceptible and resistant strains.
N'-Acetyl-RFB 5 and N'-(undec-10″-enoyl)-RFB 8 showed the best results, in particular with mutations in codons 516, 522 and 531 of the rpoB gene, and were therefore selected for in vivo assessment of their efficacy. Studies conducted with tuberculous Balb/C mice previously infected with a Ser531Leu mutated clinical isolate evidenced both 5 and 8 as promoters of a significant decrease in tubercle bacilli burden in lungs, associated with lower tissue damage, thus confirming them as good leads for drug discovery. The SAR of the acylated compounds 5 and 8, envisaging the identification of pharmacophore features, highlights the importance of profiling more clearly the chemistry within the molecular aspects for elucidation of the mode of action of RFB and analogs, in relation to mutations in Multidrug-Resistant (MDR) strains. abstract_id: PUBMED:6289711 In vitro susceptibility of Mycobacterium avium complex and Mycobacterium tuberculosis strains to a spiro-piperidyl rifamycin. The spiro-piperidyl rifamycins are newly synthesized rifamycin S compounds. One of these compounds, LM 427, was tested in vitro against strains of the Mycobacterium avium complex and strains of M. tuberculosis; LM 427 inhibited 81.3% of 155 strains of the M. avium complex tested at a concentration of 1.0 microgram/ml compared with 5.8% inhibited by the same concentration of rifampin. Twenty-nine strains were resistant to both LM 427 and rifampin at 1.0 microgram/ml. Further testing of these 29 strains showed LM 427 inhibitory for all but 5 strains at 2.0 micrograms/ml and inhibitory for all but 1 at 5.0 micrograms/ml. Rifampin, on the other hand, inhibited none at 2.0 micrograms/ml and 11 strains at 5.0 micrograms/ml. The in vitro activity of LM 427 was also compared with rifampin by testing both compounds against M. tuberculosis at 1.0 microgram/ml. This comparison showed that all strains susceptible to rifampin were also susceptible to LM 427. However, 16 strains were susceptible to LM 427 and resistant to rifampin. The inhibition of drug-resistant mycobacterial species that cause pulmonary disease makes this compound an important consideration for future clinical studies. abstract_id: PUBMED:9866928 Multidrug-resistant tuberculosis. 2. Mechanisms of drug-resistance in Mycobacterium tuberculosis--genetic mechanisms of drug-resistance Multidrug-resistant Mycobacterium tuberculosis infection is now a worldwide health problem. However, with recent advances in molecular biological techniques, some of the genetic mechanisms of drug-resistance of M. tuberculosis have been uncovered. Generally, drug-resistance of M. tuberculosis is caused by point mutations in chromosomal genes. In isoniazid (INH) resistant M. tuberculosis, mutations and genetic deletions in the catalase-peroxidase gene (katG), inhA gene, or alkyl hydroperoxide reductase gene were reported. We also found that about 15% of INH-resistant M. tuberculosis isolates lacked the katG gene, and these isolates showed high-level resistance to INH with MIC ≥ 64 micrograms/ml. On the other hand, mutations and other genetic alterations in the RNA polymerase beta subunit gene (rpoB) were the major mechanisms of resistance to rifampicin (RFP) with high frequencies of 90% or more. Our evaluation of the relationship between RFP susceptibility and genetic alteration in the rpoB gene also showed that 95% of RFP-resistant M. tuberculosis isolates involved genetic alterations in the 69 bp core region of the rpoB gene.
Moreover, these genetic alterations in the rpoB gene were suspected to be the resistance mechanism for other rifamycin antituberculosis drugs, such as rifabutin and KRM-1648. In addition, it was reported that point mutations in the 16S rRNA gene (rrs) and the ribosomal protein S12 gene (rpsL) confer a streptomycin (SM)-resistant phenotype on M. tuberculosis. We analyzed genetic alterations in the rpsL gene of clinical isolates of M. tuberculosis; about 60% of SM-resistant isolates showed a point mutation in this gene and they were all highly SM-resistant with MIC ≥ 256 micrograms/ml. Furthermore, the nicotinamidase (pncA) gene, DNA gyrase A subunit (gyrA) gene, and embB gene were reported as the genes responsible for pyrazinamide, quinolone and ethambutol resistance, respectively. Although not all mechanisms of drug-resistance are yet clear, this information is very useful and helpful for the development of rapid diagnostic systems for drug-resistant M. tuberculosis. abstract_id: PUBMED:12908735 The 2,3-bis(2-methoxy-4-nitro-5-sulfophenyl)-2H-tetrazolium-5-carboxanilide (XTT) assay as rapid colorimetric method for determination of antibiotic susceptibility of clinical Mycobacterium tuberculosis isolates in liquid medium. We investigated the usefulness of a colorimetric method based on the reduction of a tetrazolium salt (XTT) for the susceptibility testing of clinical isolates of Mycobacterium tuberculosis to isoniazid, rifampin, rifabutin, ethambutol hydrochloride, ethionamide and streptomycin. The isolates and the ATCC reference strains reported as susceptible according to the agar dilution method approved by the National Committee for Clinical Laboratory Standards were found to be susceptible by the XTT colorimetric assay after times of incubation ranging between three days for rifampin and rifabutin to eight days for isoniazid. In comparison with other colorimetric methods reviewed in this article, the proposed assay is suitable for determining the susceptibility or resistance to most antituberculous drugs and, as a consequence of the water-solubility of the formazan yielded by reduction of XTT, additional steps such as the addition of extraction buffer and further incubation before the spectrophotometric analysis are not needed. The XTT reduction assay is an inexpensive, rapid and reliable screening method for the detection of susceptible, resistant and multidrug-resistant strains of M. tuberculosis and is an alternative to the costly performance of molecular or radiometric methods. Answer: The concept that some rifampicin-resistant Mycobacterium tuberculosis strains may be susceptible to rifabutin and thus treatable with it has been a subject of investigation. However, the clinical breakpoints for susceptibility testing of rifabutin and the evidence for its clinical effect in rifampicin-resistant strains are not well-defined (PUBMED:23633684). A study aimed to re-evaluate the breakpoint for rifabutin in relation to its MIC wild-type distribution and the presence of mutations in the rpoB gene. The study found that 19 isolates, defined as rifampicin-resistant and rifabutin-susceptible according to the current breakpoint, had significantly higher MICs of rifabutin than rifabutin-susceptible isolates without detectable mutations in rpoB. These 19 isolates were clearly resistant to rifampicin and all but one had mutations in rpoB, suggesting that the current breakpoints may not accurately reflect the susceptibility of these strains to rifabutin (PUBMED:23633684).
Another study found that the cross-resistance rate between rifampicin and rifabutin was 85.9%, but the MIC90 of rifabutin was significantly lower than that of rifampicin. The cross-resistance rate increased with the resistance level of rifampicin, indicating that rifabutin may still have activity against some rifampicin-resistant strains and could be considered as an alternative treatment for multidrug-resistant tuberculosis (PUBMED:22883990). A further study reported that approximately 27% of multidrug-resistant tuberculosis in Japan had susceptibility to rifabutin. The mutations at codons 526 and 531 in the rpoB gene were closely related to rifabutin resistance, suggesting that these mutations could be useful in predicting rifabutin susceptibility (PUBMED:19860266). In summary, while there is evidence that some rifampicin-resistant M. tuberculosis strains may be susceptible to rifabutin, the breakpoints for rifabutin susceptibility testing may need revision to accurately reflect this. The presence of specific mutations in the rpoB gene appears to be a significant factor in determining the susceptibility of these strains to rifabutin (PUBMED:23633684; PUBMED:22883990; PUBMED:19860266).
Instruction: Somatic complaint differences between Turkish immigrants and Belgians: do all roads lead to Rome? Abstracts: abstract_id: PUBMED:21170770 Somatic complaint differences between Turkish immigrants and Belgians: do all roads lead to Rome? Objective: Turkish immigrants have been found to report more somatic complaints compared to western majority groups. The present study investigates the combination of two cultural explanations (somatization versus psychologization and emotion mediation) with two acculturative explanations (acculturative stress versus acculturative transition) to explain these differences. Design: In total, 144 Turkish immigrants, 353 Belgian majority members, and 222 Turkish majority members were asked to report the last three emotional episodes they encountered and to rate them on 24 emotion terms and 17 somatic sensations. Results: Turkish majorities scored higher on all somatic factors, anxiety-sadness, and self-conscious emotions followed by Turkish immigrants and Belgian majorities. Furthermore, path analysis showed (partial) mediation effects of anxiety-sadness and self-conscious factors on the differences in the somatic factors between Belgian and Turkish majorities. Conclusion: These results indicate that the somatic differences do not result from a trade-off between somatization and psychologization, but that emotions mediate differences in somatic processes to a large extent. It was also found that differences between Turkish immigrants and Belgian majority members are to be attributed to acculturative transition, rather than to acculturative stress. abstract_id: PUBMED:36779073 Self-management of type 2 diabetes among Turkish immigrants in Norway: A focus group study. The prevalence of Type 2 Diabetes Mellitus (T2DM) is higher among Turkish immigrants than the general population in Norway. The aim of the study is to describe the challenges and experiences faced by Turkish immigrants in Norway in the self-management of T2DM. The study design is based on descriptive research using a qualitative approach. The sample group contained 13 persons participating in three focus group interviews: nine women and four men. A phenomenological-hermeneutical approach was employed to achieve a deeper understanding of the experience of self-management of T2DM among Turkish immigrants in Norway with regard to HL. The participants described experiences of the T2DM self-management with regard to HL and revealed three major themes: (1) understanding the role and responsibility of health care staff in T2DM treatment, (2) assessing T2DM education course and information and (3) applying knowledge and motivation to adapt to life with T2DM. Findings from this study revealed that self-management of patients with T2DM among Turkish immigrants is related to their cultural, religious and socio-economical background and experiences. By understanding the cultural features, a well-tailored intervention according to the needs of Turkish immigrants regarding self-management can be developed. Health care staff are recommended to consider patients' HL when interventions are developed. abstract_id: PUBMED:24966706 Somatic perception, cultural differences and immigration: results from administration of the Modified Somatic Perception Questionnaire (MSPQ) to a sample of immigrants. The number of immigrants in Italy has doubled every 10 years from 1972 and Genoa hosts two large communities of immigrants from South America and Africa. 
We investigated differences in the somatic perception between immigrants and Italians and between South Americans and Africans living in the city of Genoa. During a 7 month period, an anonymous questionnaire asking for sociodemographic information and the Modified Somatic Perception Questionnaire (MSPQ) were administered to all immigrants accessing an outpatient clinic or the general practitioners' offices. MSPQ mean scores were significantly higher in immigrant patients than in Italian patients, after adjusting for sex and age differences. We found no differences between South Americans and Africans in MSPQ score. The tendency to express discomfort through physical symptoms appears to be related to being a foreigner who arrived in Italy through a migratory trip and also to being a person who comes from a cultural context that is very different from the one of developed countries. abstract_id: PUBMED:28545785 Somatization among persons with Turkish origin: Results of the pretest of the German National Cohort Study. Objective: Despite the emerging need to examine mental health of immigrants, there are no investigations designed to analyze representative samples in Germany. The aim of the present study was to explore the severity of somatic symptoms/somatization among a sample of considerable size consisting of persons with Turkish origin. We studied whether somatization was associated with sociodemographic and migration-related characteristics. Methods: This examination was part of a pretest for a large national epidemiological cohort study in Germany. We applied the somatization (PHQ-15) and the depression module (PHQ-9) from the Patient Health Questionnaire in a subsample of 335 Turkish immigrants. We analyzed the distribution of the sum score. Differences in degree of somatization in relation to relevant socio-demographic (gender) and migrant-related characteristics (generation of immigration) were tested with analysis of covariance (ANCOVA), controlling for age. A multiple linear regression analysis was also conducted. Results: Women had significantly higher age-adjusted mean scores than men (M=10.4, SD=6.3 vs. M=8.1, SD=6.3; F=10.467, p=0.001), a significant effect of age was also found (F=4.853, p=0.028). First generation immigrants had a higher age-adjusted mean number of symptoms in relation to the second generation immigrants (M=10.0, SD=6.5 vs. M=7.4, SD=7.0; F=6.042, p=0.014), the effect of age was not significant (F=0.466, p=0.495). Multiple regression analysis revealed that lower severity of somatization was associated with lower numbers of diagnosed physical illnesses (β=0.271, p<0.001) and better language proficiency (β=0.197, p=0.003, explained variance: 15.6%). Conclusions: The degree of somatization among Turkish immigrants in Germany is associated with gender and generation of immigration.
Through the analysis of 40 in-depth interviews, this study finds that while Turkey offers healthier physical, psychological, and religious options, older circular migrants are more drawn to Germany because they perceive its healthcare system to be superior. Furthermore, contrary to conventional accounts, eldercare has mostly been navigated through intergenerational conflict and exchange relations, instead of family solidarity or traditional filial roles, and these conflicts and exchange principles around eldercare regulate family living arrangements across the two countries. In short, older Turkish immigrants encounter unique challenges in both their home and host countries, which promulgates their circular migration. abstract_id: PUBMED:37980437 Perceptions of the COVID-19 pandemic and government response among Turkish-speaking immigrants in Germany during the initial lockdown period. Understanding the socioeconomic and health impacts of the pandemic and their consequences on immigrants necessitates consideration of how they have perceived the global threat. By applying agenda-setting theory to Turkish-speaking immigrants in Germany as a case study, I investigated what issues created a sense of urgency for them during the initial lockdown and how they approached the government's COVID-19-related agenda. I used purposeful sampling and video comments on COVID-19-related news created by three journalists on YouTube channels and applied qualitative content analysis to interpret the comments. Analysis of 118 comments and 68 replies showed that social media played an important role in immigrants' awareness of COVID-19, especially about authorities' responses, pandemic-related measures, and economic aid packages. Immigrants perceived the information as cause for concern and reactions varied by socioeconomic status. Comments not only supported authorities' pandemic strategies but also indicated doubts caused by lack of transparency, certain recommendations, regulations, and aid. abstract_id: PUBMED:31802224 Adult lactose intolerance, calcium intake, bone metabolism and bone density in German-Turkish immigrants. Introduction: Adult lactose intolerance (ALI) significantly alters calcium intake and absorption, and thus may promote osteoporosis. ALI is a recessive condition with a geographical north-south gradient characterised by decreased levels of intestinal lactase. PCR-based genotyping of lactase (LCT) gene polymorphisms is a safe and easy way to diagnose ALI and may complement diagnostic procedures to identify individuals at risk for reduced calcium intake and subsequently osteoporosis due to lactose malabsorption. Therefore, we investigated the frequency of ALI and its influence on calcium intake, markers of bone metabolism and bone mineral density (BMD) in a cohort of Turkish immigrants living in Germany. Materials And Methods: We investigated single nucleotide polymorphisms of the LCT gene, calcium intake, markers of bone metabolism and BMD in 183 Turkish immigrants. Results: ALI was diagnosed in 154 out of 183 (81%) probands. ALI was significantly associated with self-reported lactose intolerance (p < 0.001) and dislike for dairy products (p < 0.01). Osteopenia was diagnosed in 59 out of 183 (32%) and osteoporosis in 15 out of 183 (8%) probands. Probands with reduced BMD had ALI in 86%. All probands had a decreased calcium intake [mg/week]. There was no significant association between ALI, calcium intake, markers of bone metabolism or BMD.
Conclusion: Turkish immigrants mostly have ALI and overall show a reduced calcium intake per week. However, ALI did not significantly influence calcium intake, markers of bone metabolism or BMD in this cohort. Therefore, ALI in Turkish immigrants does not seem to be a risk factor for osteoporosis. abstract_id: PUBMED:28382526 Difficulties in health care for female Turkish immigrants with type 2 diabetes: a qualitative study in Vienna. According to studies, type 2 diabetes mellitus (DMT2) is more commonly encountered among Turkish immigrants than resident populations in Europe and Turkish people living in Turkey. Within this context this study focused on female Turkish immigrants with DMT2 who are living in Vienna. The purpose of this study was to determine the needs, expectations, special conditions and cultural characteristics of Turkish women with DMT2 by gathering information about their own perceptions about DMT2, difficulties encountered during the diagnosis, treatment and their experiences living with the disease. A qualitative phenomenological design was utilized with a sample of 13 participants recruited from the Turkish mosques in Vienna. The interview guide, which included semi-structured questions, was based on previous studies conducted on DMT2 and immigration. Interviews were transcribed verbally from tape recordings, translated into German and each statement paraphrased for further analysis. The interviews indicated that participants' degree of knowledge about DMT2 was the most significant factor affecting approaches to the disease. The most important difficulty for the participants was the re-organization of nutritional habits to DMT2 because of the Turkish cuisine culture. The results of this study can be utilized by the health care providers to gain an understanding of Turkish immigrants and DMT2. Thus, it will enable them to adapt their health care and education to meet the needs of this population better. abstract_id: PUBMED:24753588 Lead in ancient Rome's city waters. It is now universally accepted that utilization of lead for domestic purposes and water distribution presents a major health hazard. The ancient Roman world was unaware of these risks. How far the gigantic network of lead pipes used in ancient Rome compromised public health in the city is unknown. Lead isotopes in sediments from the harbor of Imperial Rome register the presence of a strong anthropogenic component during the beginning of the Common Era and the Early Middle Ages. They demonstrate that the lead pipes of the water distribution system increased Pb contents in drinking water of the capital city by up to two orders of magnitude over the natural background. The Pb isotope record shows that the discontinuities in the pollution of the Tiber by lead are intimately entwined with the major issues affecting Late Antique Rome and its water distribution system. abstract_id: PUBMED:17712610 Quality of life among Turkish immigrants in Sweden. Objectives: To assess quality of life among Turkish immigrants in Sweden by using the WHOQOL-100 scale and to evaluate the domains' contribution to explain the variance in the quality of life of the immigrants. Our hypothesis was that QOL among Turkish immigrants in Sweden is better than that of Turkish people living in their home country. Material And Methods: This study was performed in the districts of Stockholm where Turkish immigrants have mostly settled. With the help and guidance of the Turkish Association, a sample of 520 participants was selected.
We collected the demographic data by printed questionnaires, and to measure the quality of life, we used the WHOQOL-100 scale Turkish version. For analysis, we used the SPSS V.13.0 and R package programs, variance analyses, and Bayesian regression. Results: The quality of life among the sample of Turkish immigrants was found to be moderate, but higher than the sample of the Turkish population. The quality of life of male immigrants was found to be higher than for females. Swedish-born Turks had better quality of life perceptions. Conclusion: Turkish immigrants' quality of life perceptions were better than those of the Turkish sample. The best scores were received from the third generation. The first generation and female immigrants need attention in order to receive higher quality of life perceptions. Answer: The study examining somatic complaint differences between Turkish immigrants and Belgians found that Turkish immigrants reported more somatic complaints compared to Belgian majority groups. The research aimed to explore cultural and acculturative explanations for these differences. The study involved 144 Turkish immigrants, 353 Belgian majority members, and 222 Turkish majority members who reported their emotional episodes and rated them on emotion terms and somatic sensations. Results showed that Turkish majorities scored higher on all somatic factors, anxiety-sadness, and self-conscious emotions, followed by Turkish immigrants and then Belgian majorities. Path analysis indicated that the differences in somatic factors between Belgian and Turkish majorities were mediated to a large extent by emotions, particularly anxiety-sadness and self-conscious factors. The study concluded that the somatic differences are not a result of a trade-off between somatization and psychologization but are largely mediated by emotions. Furthermore, the differences between Turkish immigrants and Belgian majority members were attributed to acculturative transition rather than acculturative stress (PUBMED:21170770). In summary, the study suggests that the somatic complaint differences between Turkish immigrants and Belgians are not simply a matter of cultural tendencies towards somatization or psychologization but are significantly influenced by emotional mediation and the process of acculturative transition. Therefore, while all roads may not lead to Rome, the path to understanding somatic complaints in this context is complex and influenced by a combination of emotional and acculturative factors.
Instruction: Therapeutic ERCP: a cost-prohibitive procedure? Abstracts: abstract_id: PUBMED:38204727 Surgical outcomes of patients at prohibitive risk who are reconsidered for surgery. Objectives: Transcatheter treatment of advanced mitral and tricuspid valve disease is largely limited to patients at prohibitive surgical risk, although many are not candidates for transcatheter treatment. Here, we describe surgical outcomes of patients at prohibitive risk who were ineligible for transcatheter therapies to guide surgeons in management of this unique population. Methods: Patients at prohibitive risk, defined per surgeon or cardiologist discretion, who were initially referred for a transcatheter mitral or tricuspid intervention in a multidisciplinary atrioventricular valve clinic, were identified from 2019 to 2022. Preoperative risk, operative outcomes, and long-term mortality were evaluated. Results: A total of 337 patients at prohibitive risk were referred for evaluation in a multidisciplinary atrioventricular valve clinic. Of those, 161 underwent transcatheter therapy, 130 patients underwent continued medical management, and 45 were reevaluated and had high-risk surgery. Among surgical patients, 51% were women with a median age of 76 years (quartile 1-quartile 3, 65-81 years). Most patients presented in heart failure (83%; n = 37 out of 45), and 73% were in New York Heart Association functional class III or IV. Most patients (94%; n = 43) had a mitral valve intervention, of whom 56% (24 out of 43) had a mitral valve replacement. The 30-day mortality rate was 4% (2 out of 45) and major morbidity occurred in 33% (15 out of 45). By Kaplan-Meier analysis, 1-year survival was 86% ± 9%. Conclusions: Select patients at prohibitive risk who were ineligible for transcatheter mitral or tricuspid valve intervention underwent surgery with overall low operative mortality and excellent 1-year survival. Patients at prohibitive risk whose anatomy is not amenable to transcatheter devices should be reconsidered for surgery. abstract_id: PUBMED:33324292 Regulatory Fit Demonstrates That Prohibitive Voice Does Not Lead to Low Performance Evaluation. Voice behavior, the extra-role behavior of employees based on their sense of responsibility, plays an important role in organizational development. Research shows that an employee's voice can have a positive impact on both the quality of decision-making and organizational performance. This study explores the relationship between the prohibitive voice and employees' safety performance based on the theory of regulatory fit. The study examined 372 employees and their leaders in the Ningxia Hui Autonomous Region of China through a questionnaire survey. A moderated model was constructed, and the SPSS-PROCESS was applied to analyze the data. The study results show that prevention regulatory focus fit strengthened the positive association between the prohibitive voice and safety performance evaluation. This study provides a new perspective in understanding leaders' evaluation of the prohibitive voice and concludes that the prohibitive voice should be encouraged in organizations as it promotes greater adherence to safety measures and helps reduce organizational development risks. abstract_id: PUBMED:34975609 Managing the Uncertainties Inherent in Prohibitive Voice: How Leadership Interacts With Employee Political Skill.
Drawing from the uncertainty management theory, we examine how authoritarian leadership and humble leadership interact with employee political skill to predict prohibitive voice. We conducted a two-wave survey study of 43 managers and 176 subordinates in a power company in China. Our findings indicate that authoritarian leadership has a minimal negative effect on the psychological safety of employees with higher political skill, which in turn leads to a minimal negative effect on their prohibitive voice. Moreover, humble leadership is positively associated with prohibitive voice for employees with lower political skill. For employees with higher political skill, no type of leadership behavior has a significant influence on their prohibitive voice. We outline the implications of these findings for both theoretical and managerial practices. abstract_id: PUBMED:36303909 When prohibitive voice is effective? Exploring the jointly moderating role of hindrance stressor and work-based construal level. Prohibitive voice is important for employees and organizations to reduce hindrance stressor. However, previous research offered limited knowledge for whether prohibitive voice can lead to communication effectiveness in the context of hindrance stressor. The purpose of our research is to examine how to improve the communication effectiveness of prohibitive voice under hindrance stressor condition. Drawing from information richness theory and construal level theory, this study constructed a model regarding the three-way interactive effect of prohibitive voice, hindrance stressor, and work-based construal level on communication effectiveness. By investigating 55 supervisors and 393 employees, we examined our hypothesis. Results of cross-level analysis offered support for the assumption. Specifically, prohibitive voice from employees who are faced with high hindrance stressor and have high work-based construal level tends to be effective communication. Besides, when both hindrance stressor and work-based construal level are low, the association between prohibitive voice and communication effectiveness also tends to be positive. This study delineates the answer for how to increase communication effectiveness of prohibitive voice in the situation of hindrance stressor. abstract_id: PUBMED:35759007 Results of emergency colectomy in nonagenarians and octogenarians previously labeled as prohibitive surgical risk. Purpose: There are no standardized criteria for what constitutes prohibitive risk for emergency abdominal surgery. Methods: A retrospective review was performed comparing two groups of patients having emergent colectomy. One group had previously been labeled as being prohibitive surgical risk and the other was a contemporary, non-prohibitive risk group also requiring emergency colectomy. All operations were performed by a single surgeon. Results: There were 27 prohibitive risk patients and 81 non-prohibitive risk (control group) patients. The average age of the prohibitive risk group was 85 years (range 78-99) compared to the control group mean age of 52 years (18-79, p < 0.00001). Prohibitive risk was due to extremes of age combined with congestive heart failure in 44%, followed by chronic obstructive pulmonary disease combined with heart failure in 19%. The groups were closely matched by the type of colectomy performed. The total complication rate was much higher in the prohibitive risk group compared to the non-prohibitive risk patients (81% versus 48%, p = 0.005).
But the 30-day mortality rate was similar between groups (7% versus 4%, p = 0.6). Conclusion: Patients who are labeled as prohibitive surgical risk may be inaccurately assessed in the majority of cases. Additional research will need to be performed to evaluate the presence of quantifiable high-risk physiological conditions, and not just comorbidities, that place a patient at high risk of death after abdominal surgery. Until then, elderly patients should not be denied colectomy based upon comorbidities alone. abstract_id: PUBMED:36507046 The effect of perceived organizational support on the prohibitive voice behavior of knowledgeable talents during the COVID-19 pandemic: Exploring moderating role of the digitalization level. This study focuses on the effect of perceived organizational support on the prohibitive voice of knowledgeable talents and on the factors influencing that voice during the COVID-19 pandemic in China. Grounding our research in social exchange theory and the Chinese sociocultural background, we collected data from 714 questionnaires from universities in Zhejiang Province, analyzed them by linear regression, and found, first, that perceived organizational support significantly increased the prohibitive voice of knowledgeable talents and, second, that in the relationship between perceived organizational support and the prohibitive voice of those talents, the digitalization level of an organization positively moderates the effect of perceived organizational support on prohibitive voice behavior. This study enriches the theory of voice behavior, helps universities understand the behavioral patterns of knowledgeable talents and guides the cultivation of an atmosphere with prohibitive voice behavior. abstract_id: PUBMED:9283864 Therapeutic ERCP: a cost-prohibitive procedure? Background: ERCP is increasingly being performed for therapeutic purposes and engenders a proliferation of disposable equipment without a clear indication of cost-effectiveness. Methods: We analyzed the financial impact of ERCP by prospectively analyzing ERCPs performed in our institution from June 1, 1994, to September 30, 1994, by calculating charges related to indirect costs, disposable equipment costs, and overall reimbursement. The data were analyzed according to insurance payor as well. Results: Disposable equipment costs a mean of $149 per diagnostic ERCP and $532 per therapeutic ERCP. For diagnostic ERCP, disposable equipment accounted for 27% of reimbursement; for therapeutic ERCP, disposable equipment accounted for 68% of reimbursement. Although overall reimbursement was higher for therapeutic ERCP, the very high direct costs related to disposable equipment limited the ability of reimbursement to cover indirect costs. Conclusions: Depending on the complexity of cases, quantity of disposable equipment used, and patient-insurance mix, therapeutic ERCP may be cost prohibitive for a given endoscopy unit. Indirect costs should be more carefully and quantitatively analyzed. Disposable equipment should be evaluated in terms of cost, safety, and patient outcome. abstract_id: PUBMED:34970197 Effect of Employee-Customer Interaction Quality on Customers' Prohibitive Voice Behaviors: Mediating Roles of Customer Trust and Identification. Given that customer voice behaviors are confused with customer complaint behaviors in usage, this study thoroughly explains the essential differences between the two constructs.
On that basis, this study investigates how employee-customer interaction (ECI) quality affects customers' prohibitive voice behaviors, which is a crucial type of customer voice behaviors, by examining customer trust and identification as mediators. Data from 395 restaurant customers are collected and analyzed using structural equation modeling. Results show that ECI quality positively affects customers' prohibitive voice behaviors. In this effect, customer trust and identification play direct and sequential mediating roles. This study contributes theoretically to the current knowledge by clearly distinguishing customer voice behaviors from customer complaint behaviors and by providing new insights into the mechanism of customers' prohibitive voice behaviors from the perspectives of service interaction and relational benefit enhancement. The practical implications of this study can help pointedly foster customers' prohibitive voice behaviors. abstract_id: PUBMED:35564896 Linking Abusive Supervision to Promotive and Prohibitive Voice Behavior: Testing the Mediating Roles of Work Engagement and Negative Reciprocity. As an important type of extra-role behavior, employee voice behavior is of great significance to the sustainable development of organizations. Employee voice behavior has two different dimensions, namely promotive voice and prohibitive voice, both of which are conducive to decision making, innovation, and improvements to the work process. Among the antecedents of voice behavior, abusive supervision is one of the most essential influencing factors. In response to the call to further explore the antecedents and influencing mechanisms of different dimensions of voice behaviors, this study aims to investigate the different paths of abusive supervision on the two types of voice behavior. Drawing on the conservation of resources theory and social exchange theory, we identified an expanded array of mediators, including work engagement and negative reciprocity, which link abusive supervision to promotive voice behavior and prohibitive voice behavior separately. Data were collected through two-wave questionnaire surveys of 334 employees of 14 enterprises in China. The results show that (a) abusive supervision is negatively correlated with employees' promotive and prohibitive voice behaviors; (b) work engagement mediates the negative relationship between abusive supervision and promotive voice; and negative reciprocity mediates the negative relationship between abusive supervision and prohibitive voice. These findings clearly reveal the influencing mechanisms of abusive supervision on both promotive and prohibitive voice behavior, which not only enriches relevant theoretical research but also provides feasible insights into how to reduce abusive supervision to motivate voice behavior in management practice. abstract_id: PUBMED:33437107 A Moral Cleansing Process: How and When Does Unethical Pro-organizational Behavior Increase Prohibitive and Promotive Voice. In this study, we draw on moral cleansing theory to investigate the consequence of unethical pro-organizational behavior (UPB) from the perspective of the actors. Specifically, we hypothesize that after conducting UPB, people may feel guilty and tend to cleanse their wrongdoings by providing suggestions or identifying problems at work (i.e., prohibitive and promotive voice). We further hypothesize that the above relationship is moderated by the actor's moral identity symbolization.
We conducted three studies, including an experiment and surveys, to test our hypotheses. Results of these studies show consistent support for our hypotheses. In particular, individuals reported more felt guilt after conducting UPB, and they tended to compensate with more prohibitive and promotive voice subsequently. In addition, the indirect relationship from UPB acting to both voice behaviors via felt guilt was stronger for people with a high level of moral identity symbolization. Theoretical and practical implications are discussed. Answer: The abstract from PUBMED:9283864 discusses the financial impact of endoscopic retrograde cholangiopancreatography (ERCP), which is increasingly performed for therapeutic purposes. The study analyzed the costs associated with ERCP, including indirect costs, disposable equipment costs, and overall reimbursement. The findings indicated that disposable equipment costs were significantly higher for therapeutic ERCP ($532) compared to diagnostic ERCP ($149). For therapeutic ERCP, disposable equipment accounted for 68% of reimbursement, which was a substantial portion. Although reimbursement for therapeutic ERCP was higher, the high direct costs related to disposable equipment limited the ability of reimbursement to cover indirect costs. The conclusion drawn from this analysis suggests that depending on various factors such as case complexity, the amount of disposable equipment used, and the patient-insurance mix, therapeutic ERCP may indeed be cost-prohibitive for some endoscopy units. The study recommends that indirect costs should be more carefully and quantitatively analyzed and that disposable equipment should be evaluated in terms of cost, safety, and patient outcome.
Instruction: Are metabolic syndrome antecedents in prepubertal children associated with being born idiopathic large for gestational age? Abstracts: abstract_id: PUBMED:23627262 Are metabolic syndrome antecedents in prepubertal children associated with being born idiopathic large for gestational age? Introduction: Being born large for gestational age (LGA) is a risk factor for development of metabolic syndrome (MS) in adolescents and adults. Objective: To evaluate prepubertal children born idiopathic LGA to non-obese mothers without gestational diabetes or glucosuria with respect to the presence of MS antecedents. Patients And Methods: We conducted a cross-sectional study to compare 40 (19 F) LGA-born prepubertal children of a mean age of 6.1 ± 2.5 yr and 49 (25 F) appropriate for gestational age (AGA)-born body mass index (BMI)-matched peers of a mean age of 5.4 ± 1.8 yr with respect to their anthropometric data, blood pressure measurements, fasting serum glucose and insulin levels, homeostasis model assessment-insulin resistance (HOMA-IR), and lipids and atherogenic index (AI) [triglyceride (TG)/high-density lipoprotein cholesterol (HDL-C)]. HOMA-IR > 2.5 was used to define IR. HDL-C ≤ 40 mg/dL and TG ≥ 110 mg/dL were used to define dyslipidemia. Both groups were further divided into subgroups as obese and non-obese according to their BMI percentiles and the analyses were repeated. Results: Non-obese LGA children had higher waist circumference (WC) standard deviation scores (SDSs) than BMI-matched AGA-born peers (p = 0.024). There were no significant differences between pooled, obese and non-obese subgroups of LGA-born children and their AGA counterparts with respect to dyslipidemia and IR. AI was higher in non-obese LGA children than in AGA counterparts (p = 0.028). Conclusions: Non-obese idiopathic LGA-born children have higher AIs than AGA-born counterparts in the absence of IR. WC seems to be a good clinical screening tool in identifying non-obese LGA children at risk. Further studies are needed to evaluate MS antecedents in idiopathic LGA-born children. abstract_id: PUBMED:20724652 Prothrombotic state, cardiovascular, and metabolic syndrome risk factors in prepubertal children born large for gestational age. Objective: To evaluate metabolic syndrome and cardiovascular disease risk factors in prepubertal children born large for gestational age (LGA) to nondiabetic, nonobese mothers. Research Design And Methods: At 6-7 years of age, the comparison of various factors was made between 31 LGA and 34 appropriate-for-gestational-age (AGA) children: fibrinogen, antithrombin III, protein C and S, fasting insulin, glucose, homeostasis assessment model of insulin resistance (HOMA-IR) index, adiponectin, leptin, visfatin, IGF-1, IGF-binding protein (IGFBP)-1, IGFBP-3, lipids, and the genetic factors V Leiden G1691A mutation, prothrombin 20210A/G polymorphism, and mutation in the enzyme 5,10-methylenetetrahydrofolate-reductase gene (MTHFR-C677T). Results: LGA children had higher levels of leptin (P<0.01), fasting insulin (P<0.01), and HOMA-IR (P<0.01), but lower IGFBP-3 (P=0.0001), fibrinogen (P=0.0001), and lipoprotein(a) (P<0.001) than AGA children. Significantly more LGA children were homozygous for the MTHFR-C677T mutation (P=0.0016). Conclusions: Being born LGA to nondiabetic, nonobese mothers is associated with diverse effects on cardiometabolic risk factors at prepuberty. abstract_id: PUBMED:22072340 Central adiposity in children born small and large for gestational age.
Objective: To evaluate body composition differences between children that were born small (SGA) or large for gestational age (LGA) compared with their counterparts born adequate for gestational age (AGA). Methods: Body composition was assessed in 124 healthy Caucasian children (50% girls) aged 6-10, classified according to their birth weight for gestational age as AGA, SGA and LGA. Fat mass (FM), percentage of FM, lean mass (LM), bone mineral content (BMC) and bone mineral density were measured by dual-energy X-ray absorptiometry (DXA) in the whole body and at different body regions. Results: LM (adjusted for age and sex) and total BMC (adjusted for age, sex and weight) were both significantly higher in LGA children and lower in SGA when compared with those born AGA. After adjustments for height, LM and BMC differences between groups were not significant. In SGA children, truncal (P<0.05) and abdominal fatness (P<0.01) were higher when compared with both AGA and LGA children, after adjustments for age, sex and height. There were no differences in the percentage of total and central FM between children born LGA and AGA. Conclusions: During childhood, children born SGA had higher central adiposity regardless of their body size. Children born LGA seem to have a higher body size but with harmonic body composition and adequate body fat distribution. Small size for gestational age at birth could programme excess abdominal fat deposition in children, which is a major factor for the clustering of cardiovascular disease risk factors defining the metabolic syndrome. abstract_id: PUBMED:17595499 Metabolic syndrome in obese children born large for gestational age. Objective: This study aimed to compare the prevalence of metabolic syndrome (MS) and insulin release in Chinese obese children born large-for-gestational age (LGA) with those born appropriate-for-gestational age (AGA). Methods: Obese children were divided into LGA group (n = 60) and AGA group (n = 312); clinical and metabolic characteristics were collected. An oral glucose tolerance test was performed to detect glucose and insulin concentration at 0, 30, 60, 90 and 120 min. Differences between parameters were compared in the two groups and MS was determined. Results: The age of adiposity rebound (AR) was earlier and the period from AR to hospitalization was longer in LGA group than AGA group (4.58 ± 3.35 years vs 5.64 ± 3.08 yr, p=0.016 and 5.87 ± 2.85 yr vs 4.98 ± 2.7 yr, p=0.02). There were no differences in beta-cell function, insulin resistance, insulin sensitivity and Disposition index between the two groups. Fasting insulin (FINS) and area under curve of insulin (AUCI) showed differences between two groups. The prevalence of MS was 65% for LGA group, which was significantly higher than AGA group (42.3%). LGA status increased the risk of MS with hazard ratios of 2.53 [95% confidence intervals (CI): 1.42-4.51]. Timing of AR showed significant negative correlation with hypertriglyceridemia (r = -0.497, P = 0.01). Multiple logistic regression analysis identified age at AR as an independent factor associated with blood triglyceride level. The prevalence of hypertension and hypertriglyceridemia was independently associated with LGA [adjusted odds ratios (95% CI) 2.41 (1.39-4.36), P = 0.003; 2.18 (1.21-3.72), P = 0.016]. Conclusion: There was a younger trend in age of AR in obese children born LGA. The prevalence of MS was particularly higher in obese pediatric populations born LGA.
Hypertension and hypertriglyceridemia were better components for diagnosis of MS in obese children. abstract_id: PUBMED:35011079 Extreme Birth Weight and Metabolic Syndrome in Children. Small and large birth weights (BWs) for gestational age (GA) represent extremes, but the correlation between extreme BW and metabolic syndrome (MetS) has not been fully elucidated. In this study, we examined this correlation in obese children based on changes in their metabolic profile from childhood to adolescence. A retrospective observational study was performed on 535 obese patients aged 0-18 years in the Clinical and Emergency Hospital for Children "Louis Turcanu" in Timisoara, Romania, based on clinical and biological data from January 2015 to December 2019. We emphasized the links between extreme BW and obesity, extreme BW and cardiometabolic risk, obesity and cardiometabolic risk, and extreme BW, obesity and MetS. Children born large for gestational age (LGA) predominated over those born small for gestational age (SGA). Our findings showed that BW has an independent effect on triglycerides and insulin resistance, whereas obesity had a direct influence on hypertension, impaired glucose metabolism and hypertriglyceridemia. The influences of BW and obesity on the development of MetS and its components are difficult to separate; therefore, large prospective studies in normal-weight patients are needed. abstract_id: PUBMED:22570950 Prevalence of components of the metabolic syndrome according to birthweight among overweight and obese children and adolescents. Background/objectives: Extremes of birthweight (BW) have been associated with increased rates of metabolic risks. The objective was to study the prevalence of metabolic risk markers among obese and overweight (OW) subjects according to BW. Subjects/methods: A cross-sectional study was performed in a cohort of 1002 patients (2-18 years, 40.6% male) evaluated for OW or obesity in two private clinics. Anthropometrics, fasting lipids, glycemia, and insulin were obtained. Results: Of the subjects, 76.1% were born appropriate for gestational age (AGA), 10.9% small for gestational age (SGA), and 13% large for gestational age (LGA). Children born LGA presented a more severe degree of obesity compared with those born AGA and SGA (p<0.0001). No differences in glycemia, insulin, and lipid levels were detected among the groups. Abnormal glucose was found in 37 subjects: one with type 2 diabetes mellitus (from the previously glucose-intolerant subjects), 10 with glucose intolerance, and 27 with impaired fasting glucose. According to Boney criteria, 6.6% of the patients (6-18 years old) exhibited metabolic syndrome (MS) (69.4% AGA, 12.9% SGA, and 17.7% LGA). Conclusions: Being born LGA represents a higher risk of severe obesity. At this age, the most frequent component of MS was an abnormal lipid profile with low high-density lipoprotein and high triglycerides. Finally, the most frequent finding associated with abnormalities of glucose tolerance was a family history of diabetes. Thus, BW, lipid profile, and family history are mandatory when these patients are evaluated.
Purpose: The impacts of growth restriction and programming in the fetal stage on metabolic and bone health in children and adolescents are poorly understood. Moreover, there is insufficient evidence for the relationship between current growth status and metabolic components. Herein, we compared the growth status, metabolic and body compositions, and bone mineral density in Korean children and adolescents based on birth weight at gestational age. Methods: We studied 1,748 subjects (272 small for gestational age [SGA], 1,286 appropriate for gestational age [AGA], and 190 large for gestational age [LGA]; 931 men and 817 women) aged 10-18 years from the Korean National Health and Nutrition Examination Survey (KNHANES) V (2010-2011). Anthropometric measurements, fasting blood biochemistry, and body composition data were analyzed according to birth weight and gestational age. Results: The prevalence of low birth weight (14.7% vs. 1.2% in AGA and 3.2% in LGA, p<0.001) and current short stature (2.237 [1.296-3.861] compared to AGA, p=0.004) in SGA subjects was greater than that in other groups; however, the prevalence of overweight and obesity risks, metabolic syndrome (MetS), and MetS component abnormalities was not. Moreover, no significant differences were found in age- and sex-adjusted lean mass ratio, fat mass ratio, truncal fat ratio, bone mineral content, or bone density among the SGA, AGA, and LGA groups in Korean children and adolescents. Conclusion: Our data demonstrate that birth weight alone may not be a determining factor for body composition and bone mass in Korean children and adolescents. Further prospective and longitudinal studies in adults are necessary to confirm the impact of SGA on metabolic components and bone health. abstract_id: PUBMED:34053939 Large for Gestational Age and Obesity-Related Comorbidities. Both small for gestational age and large for gestational age (LGA) size at birth are associated with metabolic complications throughout life. The long-term consequences of LGA have been investigated in only a few studies. LGA is thought to be associated with early obesity and metabolic risk. Understanding how LGA can influence later obesity risk is important for pediatric obesity interventions. Pregnant women who are overweight or obese are at high risk of having LGA babies. Infants born LGA are at increased risk of becoming overweight or obese children, adolescents, and young adults and can have an increased risk of metabolic syndrome later in life and giving birth to LGA offspring. Education and intervention for weight control before and during pregnancy should be conducted to prevent LGA births. Particular attention is needed for women of childbearing age who are diabetic and obese, which could be the starting point for lifelong management of obesity. abstract_id: PUBMED:37076191 Paediatric Obesity Evaluation for Metabolic Susceptibility (POEMS). Introduction: Our aim was to determine whether there are risk factors which increase the risk of developing dysglycemia in a child who has increased body mass index (BMI) (overweight/obese). Research Design And Methods: This was a retrospective cohort study of 715 children who had increased BMI (overweight/obese). They presented to tertiary care at KK Women's and Children's Hospital, Singapore, for metabolic risk assessment.
Subjects who had more than one oral glucose tolerance test were included in order to track and analyze risk factors associated with worsening glycemic status from a previously normal glucose tolerance, impaired fasting glucose, or impaired glucose tolerance (IGT) state. Demographic characteristics, birth history, family history of metabolic syndrome, metabolic comorbidities, and interventions received were recorded. Statistical analysis was performed to determine odds ratio (OR) of worsening glycemic status progression in association with an analyzed variable, adjusted for intervention received. Results: Risk factors of developing dysglycemia can be present right from birth, as participants who were born preterm had increased odds of IGT (OR: 3.49 (1.10 to 11.03)), and a greater proportion of large-for-gestational-age (LGA)/small-for-gestational-age (SGA) babies had dysglycemia (SGA-IGT: 8.8%, SGA-diabetes mellitus (DM): 5.9%, LGA-IGT: 10.6%, LGA-DM: 11.8%) even at baseline. Being born preterm (OR: 3.49 (1.10 to 11.03)), with comorbidities of hypertension (OR: 1.61 (1.01 to 2.57)), hyperlipidemia (OR: 1.80 (1.19 to 2.72)), and fatty liver disease (OR: 2.08 (1.39 to 3.13)), was significantly associated with an increased OR of developing IGT. Risk factors for developing a worsening glycemic status, either to IGT or DM, included age >10 years (OR 4.94 (1.21 to 20.25)), BMI rise (OR 1.71 (1.17 to 2.49)), BMI increase >1.08 kg/m2 (OR 1.71 (1.16 to 2.51)), comorbidities of hyperlipidemia (OR 1.67 (1.12 to 2.50)), and fatty liver disease (OR 2.11 (1.43 to 3.12)). Conclusions: A child who has increased BMI (overweight/obese) and possesses risk factors for worsening glycemic status, if intervened with routine lifestyle modification advice, may still have increased risk of developing dysglycemia and type 2 DM. Therefore, understanding their risk profile provides opportunities to have a tiered and individualized approach. abstract_id: PUBMED:28104278 Review: Is rapid fat accumulation in early life associated with adverse later health outcomes? This review discusses ways in which the maternal environment and placental function affect the birth weight and adult health outcomes of offspring. These maternal and placental factors have varying and sometimes opposing effects on birth weight, resulting in infants that are born small for gestational age (SGA), large for gestational age (LGA) or preterm. However, all these alterations in weight have similar effects on adult health, increasing the risk of obesity and its associated cardiovascular and metabolic disorders. While birth weight has been used as a marker for risk of adverse adult health, we propose that a common feature of all these scenarios - early accumulation of excess body fat - may be a better marker than birth weight alone. Furthermore, altered neonatal fat accumulation may be more closely related to the mechanism by which maternal environment and placental adaptation mediate effects on adult health. We suggest that more research should be focussed on early fat accretion, factors that promote fat accretion and if it can be avoided, and whether it would be beneficial to try to reduce fat accumulation in early life. Answer: Yes, metabolic syndrome antecedents in prepubertal children are associated with being born idiopathic large for gestational age (LGA).
Studies have found that non-obese LGA children have higher waist circumference standard deviation scores and atherogenic index (AI) than their appropriate for gestational age (AGA) counterparts, even in the absence of insulin resistance (IR) (PUBMED:23627262). Additionally, LGA children have been observed to have higher levels of leptin, fasting insulin, and HOMA-IR, but lower IGFBP-3 and fibrinogen than AGA children, indicating diverse effects on cardiometabolic risk factors at prepuberty (PUBMED:20724652). Furthermore, obese children born LGA have a higher prevalence of metabolic syndrome (MS) and are at increased risk for hypertension and hypertriglyceridemia compared to those born AGA (PUBMED:17595499). Extreme birth weight, including LGA, has an independent effect on triglycerides and insulin resistance, and LGA children are at increased risk of becoming overweight or obese and can have an increased risk of metabolic syndrome later in life (PUBMED:35011079). Lastly, children born LGA are at a higher risk of severe obesity, and the most frequent component of MS in this age group is an abnormal lipid profile with low high-density lipoprotein and high triglycerides (PUBMED:22570950).
Instruction: Are perinatal outcomes affected by blastocyst vitrification and warming? Abstracts: abstract_id: PUBMED:27287684 Are perinatal outcomes affected by blastocyst vitrification and warming? Background: Transfer of cryopreserved-warmed embryos into an appropriately prepared uterus unaffected by controlled ovarian hyperstimulation is common in the practice of in vitro fertilization. There is limited information on the effect of blastocyst vitrification and warming on perinatal outcomes. Objective: We sought to determine if perinatal outcomes are affected after the transfer of vitrified-warmed blastocysts compared to the transfer of fresh blastocysts, by comparing preeclampsia rate, birthweight, percentage of low birthweight, and preterm delivery rate between embryo transfer types. Study Design: We performed a retrospective database cohort study of 289 fresh and 109 vitrified-warmed blastocyst transfer cycles at an academic medical center. Cycles were performed from July 2, 2009, through Dec. 8, 2014, and included infants born at ≥20 weeks gestational age, excluding donor egg cycles. We examined the association between transfer type (fresh or vitrified-warmed) and proportion of deliveries complicated by preeclampsia, preterm delivery (gestational age <37 weeks), and low birthweight (<2500 g). We assessed associations using generalized linear models, both unadjusted and adjusted, for maternal age, newborn sex, diabetes status, and parity. Results: We observed more pregnancies complicated by preeclampsia following vitrified-warmed transfers (7.6%) compared to fresh embryo transfers (2.6%) (P = .023) (adjusted odds ratio, 3.1; 95% confidence interval, 1.2-8.4). Newborns resulting from vitrified-warmed embryo transfer cycles were similar to those resulting from fresh embryo transfer cycles with regard to low birthweight (7.4% vs 5.3%, P = .421), mean birthweight (3443 vs 3431 g, P = .865), and preterm delivery rate (9.2% vs 8.7%, P = .869). Conclusion: We conclude that embryo vitrification with warming may affect some perinatal outcomes since preeclampsia is increased compared to fresh blastocyst transfer. However, other perinatal outcomes such as low birthweight and preterm delivery rate are not affected. Fresh blastocyst transfers should be considered when possible as they may reduce the incidence of preeclampsia. abstract_id: PUBMED:35585784 The preimplantation genetic testing clinical outcomes of biopsy on vitrification-warming embryos: A retrospective study. Aim: The objective of this study was to assess whether PGT conducted with previously untested vitrified embryos affects the clinical outcomes. Methods: A total of 49 patients who underwent biopsy on vitrification-warming embryos for PGT were enrolled from January 2016 to January 2019. The cleavage-stage embryos were thawed and cultured into the blastocyst stage for biopsy. During this period, 195 patients who underwent routine PGT and FET, whose embryos were biopsied before being frozen, were used as the control group. The clinical outcomes were further compared between the two groups after a 1:2 PSM. Results: Among the patients who underwent biopsy on vitrification-warming embryos for PGT, there were 47 transferable blastocysts in 30 patients, while 19 patients had no transferable embryos. During this study period, 27 patients had already undergone FET, with a clinical pregnancy rate of 66.7% (18/27).
After 1:2 PSM, 24 patients in the biopsy on vitrification-warming embryo group and 48 patients in the control group were compared; the clinical pregnancy rate (68.8% vs. 70.8%, p = 0.86), miscarriage rate (18.2% vs. 11.8%, p = 0.86), and live birth rate (52.1% vs. 62.5%, p = 0.40) showed no significant differences. The transferable blastocyst rate and the clinical pregnancy rate in the vitrification-warming cleavage-stage embryo group were not significantly different from those in the vitrification-warming blastocyst group. In addition, the PGT clinical outcomes of biopsy on vitrification-warming embryos had no significant difference between IVF-fertilized embryos and ICSI-fertilized embryos. Conclusion: Biopsy on the vitrification-warming embryos with a dual vitrified cryopreservation does not affect the embryo quality or the PGT clinical outcomes. abstract_id: PUBMED:34862135 Clinical, obstetric and perinatal outcomes after vitrified-warmed euploid blastocyst transfer are independent of cryo-storage duration. Research Question: The study aimed to retrospectively evaluate the impact of cryo-storage duration on clinical, obstetric and perinatal outcomes after vitrified-warmed euploid blastocyst transfer. Design: This was an observational study including 2688 vitrified-warmed euploid single blastocyst transfers that was conducted at a private IVF centre between May 2013 and March 2020. It included a total of 1884 women (age 38 ± 3 years) undergoing at least one transfer after preimplantation genetic testing for aneuploidies. The euploid blastocysts transferred were clustered into seven groups according to the cryo-storage duration between vitrification and warming: ≤60 days (n = 646; control group), 61-90 days (n = 599), 91-180 days (n = 679), 181-360 days (n = 405), 361-720 days (n = 144), 721-1080 days (n = 118) and >1080 days (n = 97). The primary outcome was the live birth rate (LBR) per transfer. The secondary outcomes were miscarriage rate, obstetric and perinatal issues. The data were adjusted for confounders through logistic or linear regressions. Results: A significantly lower LBR was reported for transfers performed within 91-180 days (n = 291/679, 42.9%; P = 0.017), 181-360 days (n = 169/405, 41.7%; P = 0.016) and 361-720 days (n = 57/144, 39.6%; P = 0.034) versus ≤60 days (n = 319/646, 49.4%). However, this was mainly due to top-quality embryos being transferred first when more euploid blastocysts were available, thereby leaving lower quality ones for subsequent procedures. Indeed, the multivariate odds ratios adjusted for confounders showed similar results across all cryo-storage duration clusters. No difference was reported for any of the secondary outcomes either. Conclusions: Cryo-storage duration even beyond 3 years from blastocyst vitrification does not affect clinical, obstetric and perinatal outcomes. abstract_id: PUBMED:37835035 Pre-Vitrification and Post-Warming Variables of Vitrified-Warmed Blastocysts That Are Predictable for Implantation. Human IVF embryos that are not used for fresh transfer are cryopreserved by vitrification for later embryo transfers. This study evaluates pre-vitrification and post-warming embryo characteristics that are suitable to predict the chance of clinical pregnancy in single vitrified blastocyst transfer (SVBT) cycles. In a multicenter observational trial (IMBOS trial), embryos were cultured in a time-lapse system before and after vitrification.
Associations between clinical pregnancy, morphokinetic parameters, blastocyst collapse, KIDScore D5, pre-vitrification and post-warming Gardner scores, post-warming blastocyst size and re-expansion rates before SVBT were analyzed in 182 SVBTs which resulted in 89 clinical pregnancies. No association was found between clinical pregnancy after SVBT and the number of collapses or the maximal collapse size before vitrification. The multifactorial analysis of pre-vitrification Gardner scores showed a significant association with clinical pregnancy for trophectoderm grading but not for expansion/hatching status and inner cell mass grading. A significant association with clinical pregnancy was found for the time to reach a blastocyst after pronuclear fading (tB-tPNf), KIDScore D5 and post-warming size but not the rate of expansion or maximal expansion size. The selection of blastocysts for SVBT could benefit from using pre-vitrification parameters like tB-tPNf, trophectoderm grading and post-warming blastocyst size. abstract_id: PUBMED:37710287 Effect of blastocyst shrinkage on assisted reproductive outcomes: a retrospective cohort study describing a new morphological evaluation of blastocyst pre-vitrification and post-warming. Background: The failure of frozen-thawed blastocysts to re-expand adequately within a few hours after warming has been reported to have a negative impact on assisted reproductive technology (ART) outcomes. However, the extent to which this failure truly affects ART outcomes has not yet been presented in a manner that is easily understandable to medical practitioners and patients. This study aimed to assess the effects of blastocyst shrinkage on ART outcomes and determine a more effective morphological evaluation approach for use in clinical settings. Methods: This retrospective observational cohort study of frozen-thawed blastocyst transfer cycles was conducted from April 2017 to March 2022. Overall, 1,331 cycles were eligible for inclusion, of which 999 were good-quality blastocysts (GQB) and 332 were non-good-quality blastocysts (non-GQB). All frozen-thawed blastocyst transfer cycles performed during the specified study period were included in the study. Exclusion criteria were established to mitigate potential sources of bias as these cycles could impact implantations. We calculated rates and age-adjusted odds ratios of implantation, clinical pregnancy, ongoing pregnancy, and live birth of the re-expansion group, which showed sufficient expansion, and shrinkage group, which showed insufficient expansion. We also calculated the implantation, clinical pregnancy, ongoing pregnancy, and live birth rates of the re-expansion and shrinkage groups for each morphological scoring system parameter. Results: A reduced ART outcome was observed with use of blastocysts with shrinkage after vitrification/warming. The age-adjusted odds ratios for implantation, clinical pregnancy, ongoing pregnancy, and live birth were lower in the shrinkage group than in the re-expansion group. Conclusions: This study examined the adverse effect of blastocyst shrinkage after warming and recovery culturing on reproductive outcomes in a clinically useful manner by retrospectively examining a substantial number of frozen-thawed embryo transfer cycles. The study findings can possibly reduce concerns regarding over- or under-estimation of blastocyst implantation by allowing providers and patients to refer to the data. 
abstract_id: PUBMED:30882776 Morphometric Protocol for the Objective Assessment of Blastocyst Behavior During Vitrification and Warming Steps. This article describes the noninvasive method of blastocyst morphometry based on time-lapse microphotography for the accurate monitoring of a blastocyst's volume changes during individual phases before and after vitrification. The method can be useful in searching for the optimal timing of blastocyst exposure to different concentrations of cryoprotectants by observing blastocyst shrinkage and re-expansion in different pre- and post-vitrification phases. With this methodology, the blastocyst vitrification protocol can be optimized. For a better demonstration of the usefulness of this morphometric method, two different blastocyst preparation protocols for vitrification are compared; one using artificial blastocoel collapsing and one without this intervention before vitrification. Both blastocysts' volume changes are followed by time-lapse microphotography and measured by photo-editing software tools. The measurements are taken every 20 seconds in previtrification phases and every 5 minutes in the post-warming period. The changes of the blastocyst dimensions per time unit are presented graphically in line diagrams. The results show a long equilibration previtrification phase in which the intact blastocyst first shrinks and then slowly refills the blastocoel, entering vitrification with a fluid-filled blastocoel. The artificially collapsed blastocyst remains in its shrunken stage through the entire equilibration phase. During the vitrification phase, it also does not change its volume. Since the blastocyst morphometry shows a constant volume of the artificially collapsed blastocysts during the previtrification step, it seems that this stage could be shorter. The described protocol provides many additional comparative parameters of blastocyst behavior during and after cryopreservation on the basis of the speed and intensity of the volume changes, the number of partial blastocoel contractions or total blastocyst collapses, and the time to a total blastocoel re-expansion or the time to hatching. abstract_id: PUBMED:29706285 Blastocyst vitrification, cryostorage and warming does not affect live birth rate, infant birth weight or timing of delivery. Research Question: Does vitrification and warming affect live birth rate, infant birth weight and timing of delivery? Design: Retrospective, cohort study comparing outcomes of donor oocyte recipient fresh (n = 25) versus vitrified (n = 86) euploid blastocyst transfers; donor oocyte recipient singleton live births from fresh (n = 100) versus vitrified (n = 102) single embryo transfers (SET); and autologous vitrified euploid SET (n = 1760) (cryostored 21-1671 days). Results: Group 1: fresh and vitrified-warmed blastocysts had similar live birth (OR 1.7; 95% CI 0.5 to 5.9), implantation (OR 0.9; 95% CI 0.2 to 3.9), clinical pregnancy (OR 3.4; 95% CI 0.9 to 13.0) and pregnancy loss (OR 1.2; 95% CI 0.98 to 1.4); group 2: low birth weight (OR 0.44; 95% CI 0.1 to 1.6) and preterm delivery (0.99; 95% CI 0.4 to 2.3) rates were similar in fresh and vitrified-warmed blastocyst transfers; group 3: cryostorage duration did not affect live birth (OR 1.0; 95% CI 1.0 to 1.0), implantation (OR 1.0; 95% CI 0.99 to 1.01), clinical pregnancy (OR 1.0; 95% CI 1.0 to 1.0), pregnancy loss (OR 0.99; 95% CI 1.0 to 1.0), birth weight (β = -15.7) or gestational age at delivery (β = -0.996).
Conclusions: Vitrification and cryostorage (up to 4 years) are safe and effective practices that do not significantly affect clinical outcome after embryo transfer.
abstract_id: PUBMED:28446183 Effect of different artificial shrinkage methods, when applied before blastocyst vitrification, on perinatal outcomes. Background: In recent years, single blastocyst transfer combined with vitrification has been widely applied; it can maximize the cumulative pregnancy rate per oocyte retrieval cycle while minimizing the multiple pregnancy rate, provided that blastocyst vitrification is effective. Studies have shown that artificial shrinkage (AS) of the blastocoel cavity prior to vitrification can reduce injury and increase the survival and implantation rates of thawed blastocysts. Several AS methods have been established, but only a few studies have compared their effectiveness and safety. In this study, we aimed to compare the clinical and neonatal outcomes of frozen embryo transfer (FET) cycles with a single blastocyst artificially shrunk before vitrification by either the LAS or the MNAS method. Methods: A retrospective comparative study was performed of FET cycles in infertile patients treated at our clinic between January 2013 and December 2014. The FET cycles were divided into two groups according to the shrinkage method used before vitrification, and clinical and neonatal outcomes were assessed. Results: There were no statistically significant differences in blastocyst survival rates between the LAS and MNAS groups (95.40% vs 94.05%, P > 0.05). However, compared with MNAS, LAS improved the warmed blastocyst implantation/clinical pregnancy rate (60.82% vs 54.37%, P < 0.05) and live birth rate (50.43% vs 45.22%, P < 0.05), and also increased the monozygotic twin rate (4.07% vs 1.73%, P < 0.05). There were no differences in mean gestational age (38.83 ± 1.57 vs 38.74 ± 1.75 weeks), premature birth rate (0.30% vs 0.49%), mean birth weight (3217.89 ± 489.98 g vs 3150.88 ± 524.03 g), low birth weight rate (5.60% vs 8.63%) or malformation rate (0.59% vs 0.48%) (P > 0.05). Conclusions: No significant differences in neonatal outcomes were observed. In terms of clinical outcomes, LAS markedly improved the warmed blastocyst implantation/clinical pregnancy rate and live birth rate, but it also increased the risk of monozygotic twin pregnancies.
abstract_id: PUBMED:37311945 Comparative Assessment of Survival and Clinical Outcome Between Two Commercial Vitrification Kits with Different Warming Protocols After Blastocyst Culture: Potential Perspectives Toward Simplified Warming Procedures. This study investigates whether the use of commercial kits with similar vitrification but different warming procedures affects laboratory results and clinical outcome for blastocysts vitrified on day 5 or day 6. A single-center retrospective cohort study was performed between 2011 and 2020. A change from a stage-specific kit (Kit 1) to a universal kit (Kit 2) was undertaken in 2017. A total of 1845 untested blastocysts were warmed for single vitrified-warmed blastocyst transfers (SVBT); 825 were vitrified with Kit 1 and 1020 with Kit 2. Blastocyst survival was not different (96.1% versus 97.3%). In total, 777 SVBTs were performed from Kit 1 and 981 from Kit 2. Overall clinical pregnancy and live birth rates were not different (35.4% versus 34.1% and 30.9% versus 30.5% for Kit 1 and Kit 2, respectively).
Subgroup analysis of live birth rates by day of blastocyst vitrification showed no differences (36.1% versus 36.1% for day 5 and 25.4% versus 23.5% for day 6 blastocysts, respectively). For both kits, mean gestational age was not different (38.8 ± 2.5 weeks versus 38.8 ± 2.0 weeks), with singleton birth weights of 3413 ± 571 g and 3410 ± 528 g for Kit 1 and Kit 2, respectively. Differences in warming procedures do not affect laboratory performance or clinical outcome after blastocyst vitrification, and the plasticity of the human blastocyst may allow further investigation into simplifying blastocyst warming procedures.
abstract_id: PUBMED:37582908 Evaluation of live birth rates and perinatal outcomes following two sequential vitrification/warming events at the zygote and blastocyst stages. Purpose: To study the effect of sequential cryopreservation-thawing of zygotes followed by cryopreservation-thawing of blastocysts during the course of an IVF treatment on live birth rate and neonatal parameters. Methods: A single-center, retrospective chart review covering the period 2015-2020. Clinical and perinatal outcomes were compared between frozen embryo transfer cycles using twice-cryopreserved (n = 182) versus once-cryopreserved (n = 282) embryos. Univariate and multivariable analyses were used to adjust for relevant confounders. Results: After adjustment for maternal age, gravidity, parity, body mass index (BMI), paternal age, fertilization method, the number of oocytes retrieved in the fresh cycle, fertilization rate, and transfer medium, the transfer of twice-cryopreserved embryos resulted in a reduced probability of live birth (OR 0.52; 95% CI 0.27-0.97; p = 0.041) compared with once-cryopreserved embryos. No differences in sex ratio, mean gestational age, mean length at birth, or mean birth weight were found between the two groups. Conclusion: The circumstantial use of sequential double vitrification-warming in the course of treatment is associated with a reduced (but still reasonable) live birth rate compared with once-cryopreserved embryos. As the neonatal outcomes of twice-cryopreserved embryos are similar to those of once-cryopreserved embryos, this treatment option still appears valid as a rescue scenario in selected cases.
Answer: Perinatal outcomes can be affected by blastocyst vitrification and warming, but the effects vary depending on the specific outcomes being measured. A retrospective database cohort study found that pregnancies resulting from vitrified-warmed blastocyst transfers had a higher rate of preeclampsia than fresh embryo transfers (7.6% vs. 2.6%), suggesting that vitrification with warming may affect some perinatal outcomes; however, other outcomes such as low birthweight, mean birthweight, and preterm delivery rate were not significantly different between vitrified-warmed and fresh embryo transfer cycles (PUBMED:27287684). Another study indicated that the clinical outcomes of preimplantation genetic testing (PGT) performed on previously untested vitrified embryos did not differ significantly in clinical pregnancy rate, miscarriage rate, or live birth rate from the control group, suggesting that biopsy of vitrified-warmed embryos for PGT does not affect embryo quality or PGT clinical outcomes (PUBMED:35585784).
Additionally, research has shown that the duration of cryostorage does not impact clinical, obstetric, or perinatal outcomes after vitrified-warmed euploid blastocyst transfer, even when storage extends beyond three years (PUBMED:34862135). However, certain pre-vitrification and post-warming variables of vitrified-warmed blastocysts, such as trophectoderm grading and post-warming blastocyst size, can be predictive of implantation (PUBMED:37835035). Blastocyst shrinkage after warming has been associated with reduced assisted reproductive technology (ART) outcomes, indicating that the extent of blastocyst re-expansion after warming can be an important factor (PUBMED:37710287). A study comparing different artificial shrinkage methods before blastocyst vitrification found no significant differences in neonatal outcomes, although one method (LAS) improved the implantation/clinical pregnancy rate and live birth rate compared with the other (MNAS), while also increasing the risk of monozygotic twin pregnancies (PUBMED:28446183). In summary, while certain aspects of perinatal outcome such as preeclampsia rates may be affected by blastocyst vitrification and warming, other outcomes such as low birthweight, preterm delivery, and live birth rates do not appear to be significantly impacted.
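Many of the contrasts summarized in this answer reduce to comparing event rates between two groups (for example, the 7.6% vs. 2.6% preeclampsia rates after vitrified-warmed versus fresh transfers). As a minimal sketch, assuming purely illustrative counts rather than the actual numbers from PUBMED:27287684, an unadjusted odds ratio with a Woolf (log-scale) 95% confidence interval can be computed directly from the 2x2 table:

    # Minimal sketch: unadjusted odds ratio with a Woolf (log) 95% CI from a 2x2 table.
    # The counts below are illustrative placeholders, not data from any cited study.
    import math

    def odds_ratio_ci(events_1, total_1, events_2, total_2, z=1.96):
        a, b = events_1, total_1 - events_1   # group 1: events / non-events
        c, d = events_2, total_2 - events_2   # group 2: events / non-events
        or_ = (a * d) / (b * c)
        se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        lower = math.exp(math.log(or_) - z * se_log_or)
        upper = math.exp(math.log(or_) + z * se_log_or)
        return or_, lower, upper

    # Example: 38/500 preeclampsia cases (7.6%) vs. 13/500 (2.6%).
    or_value, lo, hi = odds_ratio_ci(38, 500, 13, 500)
    print(f"OR {or_value:.2f} (95% CI {lo:.2f}-{hi:.2f})")

Adjusted estimates, such as the age-adjusted odds ratios cited above, would additionally require regression on patient-level data rather than aggregate counts.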