input
stringlengths
6.82k
29k
Instruction: Does stroke subtype and measurement technique influence estimation of cerebral autoregulation in acute ischaemic stroke? Abstracts: abstract_id: PUBMED:29996123 Cerebral Haemodynamics following Acute Ischaemic Stroke: Effects of Stroke Severity and Stroke Subtype. Background: Acute ischaemic stroke (AIS) patients often show impaired cerebral autoregulation (CA). We tested the hypothesis that CA impairment and other alterations in cerebral haemodynamics are associated with stroke subtype and severity. Methods: AIS patients (n = 143) were amalgamated from similar studies. Data from baseline (< 48 h stroke onset) physiological recordings (beat-to-beat blood pressure [BP], cerebral blood flow velocity (CBFV) from bilateral insonation of the middle cerebral arteries) were calculated for mean values and autoregulation index (ARI). Differences were assessed between stroke subtype (Oxfordshire Community Stroke Project [OCSP] classification) and severity (National Institutes of Health Stroke Scale [NIHSS] score < 5 and 5-25). Correlation coefficients assessed associations between NIHSS and physiological measurements. Results: Thirty-two percent of AIS patients had impaired CA (ARI < 4) in affected hemisphere (AH) that was similar between stroke subtypes and severity. CBFV in AH was comparable between stroke subtype and severity. In unaffected hemisphere (UH), differences existed in mean CBFV between lacunar and total anterior circulation OCSP subtypes (42 vs. 56 cm•s-1, p < 0.01), and mild and moderate-to-severe stroke severity (45 vs. 51 cm•s-1, p = 0.04). NIHSS was associated with peripheral (diastolic and mean arterial BP) and cerebral haemodynamic parameters (CBFV and ARI) in the UH. Conclusions: AIS patients with different OCSP subtypes and severity have homogeneity in CA capability. Cerebral haemodynamic measurements in the UH were distinguishable between stroke subtype and severity, including the association between deteriorating ARI in UH with stroke severity. More studies are needed to determine their clinical significance and to understand the determinants of CA impairment in AIS patients. abstract_id: PUBMED:26661230 Dynamic cerebral autoregulation following acute ischaemic stroke: Comparison of transcranial Doppler and magnetic resonance imaging techniques. Novel MRI-based dynamic cerebral autoregulation (dCA) assessment enables the estimation of both global and spatially discriminated autoregulation index values. Before exploring this technique for the evaluation of focal dCA in acute ischaemic stroke (AIS) patients, it is necessary to compare global dCA estimates made using both TCD and MRI. Both techniques were used to study 11 AIS patients within 48 h of symptom onset, and nine healthy controls. dCA was assessed by the rate of return of CBFV (Rturn) following a sudden drop induced by the thigh cuff manoeuvre. No significant between-hemisphere differences were seen in controls using either the TCD or MRI technique. Inter-hemisphere averaged Rturn values were not different between TCD (1.89 ± 0.67%/s) and MRI (2.07 ± 0.60%/s) either. In patients, there were no differences between the affected and unaffected hemispheres whether assessed by TCD (Rturn 0.67 ± 0.72 vs. 0.98 ± 1.09%/s) or MRI (0.55 ± 1.51 vs. 1.63 ± 0.63%/s). Rturn for both TCD and MRI was impaired in AIS patients compared to controls in both unaffected and affected hemispheres (ANOVA, p = 0.00005). These findings pave the way for wider use of MRI for dCA assessment in health and disease. abstract_id: PUBMED:23548789 Does stroke subtype and measurement technique influence estimation of cerebral autoregulation in acute ischaemic stroke? Background: It is known that dynamic cerebral autoregulation (dCA) is acutely impaired following ischaemic stroke (IS). However, the influence of stroke subtype, the affected (AF) and unaffected (UA) hemispheres, and the effects of a methodological approach on dCA estimates in stroke are all inconclusive. Therefore, we studied cortical and subcortical acute IS (AIS) patients to test the primary hypotheses that (1) dCA is impaired in stroke subtypes when compared to controls, (2) dCA impairment is more pronounced in the AF compared with the UA hemisphere, and (3) similar results are obtained with both spontaneous blood pressure (BP) fluctuation techniques, and sudden induced BP changes by thigh cuff deflation. Methods: We assessed the dCA values in AIS patients and in healthy controls (n = 10). The AIS patient group consisted of anterior circulation cortical (n = 11) and subcortical (n = 11) strokes within 48 h of symptom onset. Cerebral blood flow velocity was measured using transcranial Doppler ultrasound, and BP measurements were recorded before, during and after the release of bilateral thigh cuffs in 10 controls (7 males) of a mean age of 59 ±15 years (range 31-75), 11 cortical strokes (7 males) of a mean of age 65 ± 19 years (range 25-88) and 11 subcortical strokes (7 males) of a mean age of 60 ± 18 years (range 39-85). Autoregulation index (ARI) estimates, calculated using spontaneous fluctuations and thigh cuff manoeuvre, were derived. Differences in ARI (Tiecks' model) were tested with repeated-measures ANOVA. Results: A total of 22 patients were included, comprising 11 subcortical (lacunar clinical syndrome) and 11 cortical strokes (total anterior circulation stroke/partial anterior circulation syndrome). Of the 10 control subjects, 1 later withdrew because of intolerance to the thigh cuffs. Similar ARI estimates were obtained in both groups, whether assessed from spontaneous fluctuations or thigh cuff measurements (p = 0.37). ARI differences were not significantly different between hemispheres for both control and stroke populations. ARI was significantly impaired in AIS patients compared to age-, sex- and BP-matched control subjects, with a greater impairment of dCA observed in cortical IS. Conclusions: The results of this study suggest that both spontaneous fluctuations and thigh cuff deflation techniques are able to provide reliable estimates of ARI, with the estimates from both spontaneous fluctuations and thigh cuff deflation techniques being in keeping with those reported elsewhere in the literature. dCA was impaired following AIS compared to controls when stroke subtype was considered. Importantly, no differences were observed between UA and AF. This has implications for the assessment of CA after stroke and reinforces the need to define a 'gold standard' test for the investigation of CA. abstract_id: PUBMED:25298773 The longitudinal evolution of cerebral blood flow regulation after acute ischaemic stroke. Background: Acute stroke is known to impair cerebral blood flow (CBF) regulation, but the longitudinal changes of these effects have been poorly reported. The main CBF regulatory mechanisms [cerebral autoregulation (CA) and neurovascular coupling (NVC)] were assessed over 3 months after acute ischaemic stroke. Methods: Recordings of CBF velocity (CBFv), blood pressure (BP), and end-tidal CO2 were performed during 5 min baseline and 1 min passive movement of the elbow. Stroke patients were assessed <72 h of stroke onset, and at 2 weeks, 1 and 3 months after stroke. Results: Fifteen acute stroke subjects underwent all 4 sessions and were compared to 22 control subjects. Baseline recordings revealed a significantly lower CBFv in the affected hemisphere within 72 h after stroke compared to controls (p = 0.02) and a reduction in CA index most marked at 2 weeks (p = 0.009). CBFv rise in response to passive arm movement was decreased bilaterally after stroke, particularly in the affected hemisphere (p < 0.01). Both alterations in CA and NVC returned to control levels during recovery. Conclusion: The major novel finding of this study was that both CA and NVC regulatory mechanisms deteriorated initially following stroke onset, but returned to control levels during the recovery period. These findings are relevant to guide the timing of interventions to manipulate BP and potentially for the impact of intensive rehabilitation strategies that may precipitate acute physiological perturbations but require further exploration in a larger population that better reflects the heterogeneity of stroke. Further, they will also enable the potential influence of stroke subtype to be investigated. abstract_id: PUBMED:28536207 Impaired cerebral autoregulation: measurement and application to stroke. Cerebral autoregulation (CA) is a protective mechanism that maintains cerebral blood flow at a relatively constant level despite fluctuations of cerebral perfusion pressure or arterial blood pressure. It is a universal physiological mechanism that may involve myogenic, neural control as well as metabolic regulations of cerebral vasculature in response to changes in pressure or cerebral blood flow. Traditionally, CA has been represented by a sigmoid curve with a wide plateau between about 50 mm Hg and 170 mm Hg of steady-state changes in mean arterial pressure, defined as static CA. With the advent of transcranial Doppler, measurement of cerebral blood flow in response to transient changes in arterial pressure has been used to assess dynamic CA. However, a gold standard for measuring CA is not currently available. Stroke has been the leading cause of long-term adult disability throughout the world. A better understanding of CA and its response to pathological derangements can help assess the severity of stroke, guide management decisions, assess response to interventions and provide prognostic information. The objective of this review is to provide a comprehensive insight about physiology of autoregulation, measurement methodologies and clinical applications in stroke to help build a consensus for what should be included in an internationally agreed protocol for CA testing and monitoring, and to promote its translation into clinical bedside practice for stroke management. abstract_id: PUBMED:10686451 Dynamic but not static cerebral autoregulation is impaired in acute ischaemic stroke. It remains unclear as to whether dynamic and static cerebral autoregulation (CA) are impaired in acute ischaemic stroke, and whether these changes are related to stroke subtype. This could have important implications with regard to post-stroke prognosis and the management of blood pressure (BP) in the acute post-ictal period. Using transcranial Doppler ultrasonography and non-invasive manipulation of BP, we compared both mechanisms in 61 patients with ischaemic stroke within 96 h of ictus, and 54 age- and sex-matched controls. There was no difference in static and dynamic CA indices between the various stroke subtypes. Combining all stroke subtypes dynamic autoregulation, as measured using thigh cuff release, was significantly impaired in both the affected and non-affected stroke hemispheres compared to controls (mean autoregulation index 4.1 +/- 3.3, 4.8 +/- 3.1 and 6.2 +/- 2.3, respectively, p < 0.05). By comparison static autoregulation, assessed using isometric hand grip and thigh cuff inflation, was not significantly different. In conclusion, dynamic but not static CA appears to be globally impaired in acute ischaemic stroke. This deserves further study and may identify possibilities for therapeutic intervention. abstract_id: PUBMED:30534111 Preliminary Study of Dynamic Cerebral Autoregulation in Acute Ischemic Stroke: Association With Clinical Factors. Background and Purpose: Dynamic cerebral autoregulation (dCA) is probably impaired in the acute and even subacute phases after acute ischemic stroke (AIS); however, the relationship between relevant clinical factors and dCA after AIS has not been investigated. The identification of possible determinants may therefore provide potential therapeutic targets to improve dCA in AIS. Methods: This study enrolled 67 consecutive patients diagnosed with AIS within 3 days from symptom onset. Serial measurements were performed 1-3 days (measurement 1) and 7-10 days (measurement 2) after the onset. Middle cerebral artery blood flow velocities and simultaneous arterial blood pressure (ABP) were recorded continuously with transcranial Doppler combined with a servo-controlled finger plethysmograph. Transfer function analysis was used to derive dCA parameters, phase difference (PD), and coherence in low-frequency range (0.06-0.12 Hz). Univariate and multivariate linear regression analyses were conducted to determine the relationship between clinical factors and PD within the two time points of measurements. Multivariate logistic regression was performed to reveal the relationship between PD and clinical outcomes. Results: Bilateral PD was significantly lower (indicating impaired dCA) in AIS patients, both in measurement 1 and measurement 2 when compared with those of healthy controls (all P < 0.001). After controlling for relevant clinical factors, in measurement 1, age (β = -0.29, P = 0.01), recombinant tissue plasminogen activator (rt-PA) intravenous thrombolysis (β = 0.25, P = 0.034), subtype of large-artery atherosclerosis (LAA) (β = -0.31, P = 0.007), and uric acid level (β = -0.32, P = 0.009) were significant independent predictors of ipsilateral PD. In measurement 2, subtype of LAA (β = -0.28, P = 0.049) and uric acid level (β = -0.43, P = 0.005) were still significant predictive values for ipsilateral PD. After adjusting for age and National Institutes of Health Stroke Scale at admission, ipsilateral PD >35.37° in measurement 2 was independent predictor of good clinical outcomes (adjusted OR = 6.97, 95% CI: 1.27-38.14, P = 0.03). Conclusion: DCA was sustained to be bilaterally impaired in the acute and even subacute phase after AIS. Patients who receiving rt-PA thrombolysis tended to have a better dCA in the acute phase. Increasing age, subtype of LAA, and higher uric acid level had prognostic value for disturbed autoregulation. A relatively preserved dCA may predict good clinical outcomes. abstract_id: PUBMED:31932877 The impact of acute remote ischaemic preconditioning on cerebrovascular function. Purpose: Remote ischaemic preconditioning (RIPC) refers to the protection conferred to tissues and organs via brief periods of ischaemia in a remote vascular territory, including the brain. Recent studies in humans report that RIPC provides neuroprotection against recurrent (ischaemic) stroke. To better understand the ability of RIPC to improve brain health, the present study explored the potential for RIPC to acutely improve cerebrovascular function. Methods: Eleven young healthy (females n = 6, age; 28.1 ± 3.7 years) and 9 older individuals (females n = 4, age 52.5 ± 6.7 years) at increased risk for stroke (cardiovascular disease risk factors) underwent assessments of cerebrovascular function, assessed by carbon dioxide (CO2) reactivity and cerebral autoregulation during normo- and hypercapnia (5% CO2) following 40 mins of bilateral arm RIPC or a sham condition. Squat-to-stand manoeuvres were performed to induce changes in blood pressure to assess cerebral autoregulation (0.10 Hz) and analysed via transfer function. Results: We found no change in middle cerebral artery velocity or blood pressure across 40 mins of RIPC. Application of RIPC resulted in no change in CO2 reactivity slopes (sham vs RIPC, 1.97 ± 0.88 vs 2.06 ± 0.69 cm/s/mmHg P = 0.61) or parameters of cerebral autoregulation during normocapnia (sham vs RIPC, normalised gain%, 1.27 ± 0.25 vs 1.22 ± 0.35, P = 0.46). Conclusion: This study demonstrates that a single bout of RIPC does not influence cerebrovascular function acutely in healthy individuals, or those at increased cardiovascular risk. Given the previously reported protective role of RIPC on stroke recurrence in humans, it is possible that repeated bouts of RIPC may be necessary to impart beneficial effects on cerebrovascular function. abstract_id: PUBMED:32948073 Dynamic Cerebral Autoregulation Post Endovascular Thrombectomy in Acute Ischemic Stroke. The development of the endovascular thrombectomy (EVT) technique has revolutionized acute stroke management for patients with large vessel occlusions (LVOs). The impact of successful recanalization using an EVT on autoregulatory profiles is unknown. A more complete understanding of cerebral autoregulation in the context of EVT may assist with post-procedure hemodynamic optimization to prevent complications. We examined cerebral autoregulation in 107 patients with an LVO in the anterior circulation (proximal middle cerebral artery (M1/2) and internal cerebral artery (ICA) terminus) who had been treated using an EVT. Dynamic cerebral autoregulation was assessed at multiple time points, ranging from less than 24 hours to 5 days following last seen well (LSW) time, using transcranial Doppler ultrasound recordings and transfer function analysis. Complete (Thrombolysis in Cerebral Infarction (TICI) 3) recanalization was associated with a more favorable autoregulation profile compared with TICI 2b or poorer recanalization (p < 0.05), which is an effect that was present after accounting for differences in the infarct volumes. Less effective autoregulation in the first 24 h following the LSW time was associated with increased rates of parenchymal hematoma types 1 and 2 hemorrhagic transformations (PH1-PH2). These data suggest that patients with incomplete recanalization and poor autoregulation (especially within the first 24 h post-LSW time) may warrant closer blood pressure monitoring and control in the first few days post ictus. abstract_id: PUBMED:36479048 Assessment of dynamic cerebral autoregulation in near-infrared spectroscopy using short channels: A feasibility study in acute ischemic stroke patients. Introduction: In acute ischemic stroke, progressive impairment of cerebral autoregulation (CA) is frequent and associated with unfavorable outcomes. Easy assessment of cerebral blood flow and CA in stroke units bedside tools like near-infrared spectroscopy (NIRS) might improve early detection of CA deterioration. This study aimed to assess dynamic CA with multichannel CW-NIRS in acute ischemic stroke (AIS) patients compared to agematched healthy controls. Methods: CA reaction was amplified by changes in head of bed position. Long- and short channels were used to monitor systemic artery pressure- and intracranial oscillations simultaneously. Gain and phase shift in spontaneous low- and very low-frequency oscillations (LFO, VLFO) of blood pressure were assessed. Results: A total of 54 participants, 27 with AIS and 27 age-matched controls were included. Gain was significantly lower in the AIS group in the LFO range (i) when the upper body was steadily elevated to 30. and (ii) after its abrupt elevation to 30°. No other differences were found between groups. Discussion: This study demonstrates the feasibility of NIRS short channels to measure CA in AIS patients in one single instrument. A lower gain in AIS might indicate decreased CA activity in this pilot study, but further studies investigating the role of NIRS short channels in AIS are needed. Answer: Yes, stroke subtype and measurement technique can influence the estimation of cerebral autoregulation (CA) in acute ischaemic stroke (AIS). Studies have shown that dynamic cerebral autoregulation (dCA) is acutely impaired following ischaemic stroke, but the extent of impairment can vary depending on the stroke subtype. For instance, greater impairment of dCA has been observed in cortical ischaemic strokes compared to subcortical strokes (PUBMED:23548789). Additionally, the measurement technique used to assess CA can yield different results. Both spontaneous blood pressure fluctuation techniques and sudden induced blood pressure changes by thigh cuff deflation have been used to provide reliable estimates of the autoregulation index (ARI), with similar ARI estimates obtained from both methods (PUBMED:23548789). Furthermore, the use of different techniques such as transcranial Doppler (TCD) and magnetic resonance imaging (MRI) to study dCA in AIS patients has been compared, and no significant differences were found between the affected and unaffected hemispheres when assessed by either TCD or MRI. However, dCA was found to be impaired in AIS patients compared to controls in both unaffected and affected hemispheres (PUBMED:26661230). It is also important to note that while dynamic CA is often impaired in AIS, static CA may not be significantly different, indicating that the type of autoregulation (dynamic vs. static) being measured can also influence the estimation (PUBMED:10686451). Moreover, the timing of the measurement post-stroke onset can affect the assessment of CA, as CA impairment can evolve over time (PUBMED:25298773). In summary, both stroke subtype and measurement technique can influence the estimation of cerebral autoregulation in acute ischaemic stroke, and these factors should be considered when assessing CA in AIS patients.
Instruction: Renal transplantation with donors older than 70 years: does age matter? Abstracts: abstract_id: PUBMED:23622671 Renal transplantation with donors older than 70 years: does age matter? Purpose: The need for organs for renal transplantation has encouraged the use of grafts from increasingly older donors. Studies of transplantation results with donors older than 70 years are sparse. The main purpose of this study is to compare the results of transplantation with donors older and younger than 70 years old. Methods: This retrospective study included 1233 consecutive deceased-donor renal transplantations performed between January 1, 2001, and December 31, 2011. We compared outcomes of grafts from donors older than 70 years (group ≥ 70; n = 82) versus donors younger than 70 years (group < 70; n = 1151). Results: Univariate analysis of pretransplantation data showed statistically significant differences (P < .05) among the following variables for the group < 70 and group ≥ 70, respectively: recipient age (46 ± 13 versus 61 ± 5 years), donor age (44 ± 16 versus 73 ± 3 years), donor male gender (69.4% versus 47.6%), use of antibody induction immunosuppression (51.7% versus 70.7%), and HLA compatibilities (2.4 versus 2). The group ≥ 70 showed increased postoperative minor complications: bleeding (8.5% versus 3.4%; P = .017), lymphocele formation (3.7% versus 0.5%; P = .011), and incisional hernia (2.4% versus 0.2%; P < .001). Regarding transplantation results, we observed that mean serum creatinine was significantly lower among group < 70, at 1, 3, 6, 12, 24, and 60 months after transplantation (P < .05). Cumulative graft survival at 1, 3, and 4 years was 90%, 85%, and 83% in the group < 70 versus 87%, 79%, and 72% in the group ≥ 70. In the subgroup of recipients younger than 60 years, we did not verify statistically significant differences in allograft survival between group ≥ 70 and group < 70. Using Cox regression for survival analysis, we verified that donor age was not an independent risk factor for graft failure. Conclusions: The group of patients who received kidneys from donors younger than 70 years achieved better transplantation outcomes. Nevertheless, kidneys from older donors represent an excellent alternative for older recipients. abstract_id: PUBMED:22034505 Living kidney donors ages 70 and older: recipient and donor outcomes. Background And Objectives: The profound organ shortage has resulted in longer waiting times and increased mortality for those awaiting kidney transplantation. Consequently, patients are turning to older living donors. It is unclear if an upper age limit for donation should exist, both in terms of recipient and donor outcomes. Design, Setting, Participants, & Measurements: In the United States, 219 healthy adults aged ≥70 have donated kidneys at 80 of 279 transplant centers. Competing risks models with matched controls were used to study the independent association between older donor age and allograft survival, accounting for the competing risk of recipient mortality as well as other transplant factors. Results: Among recipients of older live donor allografts, graft loss was significantly higher than matched 50-to 59-year-old live donor allografts (subhazard ratio [SHR] 1.62, 95% confidence interval [CI] 1.16 to 2.28, P = 0.005) but similar to matched nonextended criteria 50-to 59-year-old deceased donor allografts (SHR 1.19, 95% CI 0.87 to 1.63, P = 0.3). Mortality among living kidney donors aged ≥70 was no higher than healthy matched controls drawn from the NHANES-III cohort; in fact, mortality was lower, probably reflecting higher selectivity among older live donors than could be captured in National Health and Nutrition Examination Survey III (NHANES-III; HR 0.37, 95% CI 0.21 to 0.65, P < 0.001). Conclusions: These findings support living donation among older adults but highlight the advantages of finding a younger donor, particularly for younger recipients. abstract_id: PUBMED:19715828 Renal allografts from donors older than 70 years are useful for single transplantation. Kidney transplantation has been related in elderly recipients to a greater longevity compared with dialysis. Due to the scarcity of donors, transplantation of older patients depends on the acceptance of older donors. We compared the characteristics and evolution of transplants from donors >or=70 years (n = 53) with those from donors >55-<70 years (n = 201). Group D >or=70 included older recipients (65.37 +/- 4.9 vs 55.92 +/- 9.66 years; P = .000) and more women (62.3% vs 45.3%; P = .02), with more peripheral arterial disease (10.9% vs 2.4%; P = .011). No differences in donor characteristics were observed. Induction treatment with thymoglobulin or basiliximab was more common in D >or=70 (81.1% vs 57.3%; P = .006), with no differences in other immunosuppressive drugs. The incidence of delayed graft function (DGF) was similar (P = .82), with a trend to a lower incidence of acute rejection episodes among D>or=70 (11.8% vs 22.5%; P = 0.09). Serum creatinine and proteinuria levels did not differ during follow-up (P > .05). Patients in D >or=70 displayed more episodes of urinary sepsis (19.1% vs 6.4%; P = .008), but no differences were observed in cytomegalovirus (CMV) infection (P = .629), neoplasia (P = .118), ischemic cardiopathy (P = .642), or hospital readmission due to infections (P = .567). Graft survivals at 5 years were 70% and 75% (P = .279) among groups D >or=70 and D>55-<70, respectively, and patient survivals at 5 years were 88% and 88% (P = .63), respectively. In conclusion, our study showed that selected kidneys from donors older than 70 years were followed with excellent graft and patient survivals, permitting older patients on renal replacement therapy to benefit from renal transplantation. abstract_id: PUBMED:27496431 Renal Transplantation Using Kidneys Procured From Elderly Donors Older Than 70 Years. Aim: A major problem for the transplant society is a shortage of organs for transplantation compared with the number of patients on the waiting list. This study aimed to assess the results of the transplantation of kidneys procured from older donors. Patients And Methods: A total of 27 kidneys procured from donors age 70 years or older were transplanted between January 1, 2010, and April 25, 2015. These represented only 4.1% of the 657 kidneys transplanted from deceased donors during this period at the same center. Results: Delayed graft function (DGF) in the recipients of kidneys procured from donors age 70 or older occurred in 46.1% of patients, whereas the recipients of kidneys from younger donors showed DGF at a frequency of 32.7% (P = NS). The annual and 3-year survival rates of kidneys in the study group were 85% and 80%, respectively, and in the control group were 92.5% and 88.6%, respectively (P = NS). According to the Polish National Organ Procurement Organization (Poltransplant), the annual survival rate of a transplanted kidney in Poland stands at 89%, whereas the 3-year survival rate is 82%. We detected no significant posttransplantation differences in the serum creatinine concentration and in the estimated glomerular filtration rate between the study and control groups. The donor age and donor creatinine were the variables independently associated with DGF. Conclusions: The results of transplantation of kidneys from elderly donors were comparable to those of transplantation from younger donors. Kidneys harvested from elderly donors should be used for a transplant after a preliminary assessment. abstract_id: PUBMED:36907943 Kidney transplantation from elderly donors (> 70 years): a systematic review. Purpose: The incidence of kidney transplants from elderly donors over 70 years of age has increased significantly over the past 10 years to reach 20% of available kidney graft in some European countries. However, there is little data available on the outcomes of transplants from these donors. We performed a systematic review to evaluate the outcomes of transplantation from donors over 70 years of age. Methods: A systematic review was performed according to preferred reporting items for systematic reviews and meta-analyses. Medline, Embase, and Cochrane databases were searched to identify all studies reporting outcomes on kidney transplants from donors over 70 years. Due to the heterogeneity of the studies, a meta-analysis could not be performed. Results: A total of 29,765 patients in 27 studies were included. The mean donors age was 74.79 years, and proportion of kidney graft from women was 53.54%. The estimated 1- and 5-year kidney death-censored graft survivals from donors > 70 years old were, respectively, 85.95 and 80.27%, and the patient survivals were 90.88 and 71.29%. The occurrence of delayed graft function was 41.75%, and primary non-function was 4.67%. Estimated graft function at 1 and 5 years was 36 and 38 mL/min/1.73 m2. Paucity data were available on post-operative complications. Conclusions: Elderly donors appear to be a reliable source of grafts. However, these transplants are associated with a high rate of delayed graft function without repercussion on long-term graft survival. Allocation strategy to elderly recipients is the main factor of decreased recipient survival. abstract_id: PUBMED:8452630 Older donors and kidney transplantation. Reluctance to use kidneys from older donors (> 50 years of age) is based on reports of inferior results. We reviewed our experience with 45 kidneys transplanted from older donors. Primary nonfunction, immediate graft function, and 1-, 2- and 3-year graft survival rates were similar to those obtained with kidneys transplanted from donors aged between 20 and 40 years. Renal function at 1 year (as measured by serum creatinine) was poorer in kidneys from older donors. No beneficial effect with respect to graft survival was noted with cyclosporin therapy compared to conventional immunosuppression; however, the numbers are small. We conclude that kidneys from older donors are a valuable source for transplantation. abstract_id: PUBMED:34130292 Long-Term Outcomes of Kidney Transplants from Older/Marginal Donors: A Cohort Study. Introduction: To safely expand the donor pool, we introduced a strategy of biopsy-guided selection and allocation to single or dual transplantation of kidneys from donors >60 years old or with hypertension, diabetes, and/or proteinuria (older/marginal donors). Here, we evaluated the long-term performance of this approach in everyday clinical practice. Methods: In this single-center cohort study, we compared outcomes of 98 patients who received one or two biopsy-evaluated grafts from older/marginal donors ("recipients") and 198 patients who received nonhistologically assessed single graft from ideal donors ("reference-recipients") from October 2004 to December 2015 at the Bergamo Transplant Center (Italy). Results: Older/marginal donors and their recipients were 27.9 and 19.3 years older than ideal donors and their reference-recipients, respectively. KDPI/KDRI and donor serum creatinine were higher and cold ischemia time longer in the recipient group. During a median follow-up of 51.9 (interquartile range 23.1-88.6) months, 11.2% of recipients died, 7.1% lost their graft, and 16.3% had biopsy-proven acute rejection (BPAR) versus 3.5, 7.6, and 17.7%, respectively, of reference-recipients. Overall death-censored graft failure (rate ratio 0.78 [95% CI 0.33-2.08]), 5-year death-censored graft survival (94.3% [87.8-100.0] vs. 94.2% [90.5-98.0]), BPAR incidence (rate ratio 0.87 [0.49-1.62]), and yearly measured glomerular filtration rate decline (1.18 ± 3.27 vs. 0.68 ± 2.42 mL/min/1.73 m2, p = 0.37) were similar between recipients and reference-recipients, respectively. Conclusions: Biopsy-guided selection and allocation of kidneys from older/marginal donors can safely increase transplant activity in clinical practice without affecting long-term outcomes. This may help manage the growing gap between organ demand and supply without affecting long-term recipient and graft outcomes. abstract_id: PUBMED:3048534 Use of older patients as cadaveric kidney donors. We have no fixed upper age limit for cadaveric kidney donors and donors over the age of 50 provided kidneys for 22 per cent of our adult transplant recipients between 1983 and 1986. Immediate function following transplantation occurred in 17 per cent of these kidneys compared with 58 per cent for kidneys from donors under the age of 50. The 1-year actuarial graft survival rate for transplants from donors over 50 was 52 per cent, compared with 70 per cent for transplants from donors under 50 (P less than 0.05). Thus kidneys from older donors make an important contribution to the total pool of organs available for transplantation, but their use leads to inferior results in comparison with kidneys from younger donors. abstract_id: PUBMED:10401744 Dual kidney transplantation: older donors for older recipients. Background: Dual kidney transplantation, the transplantation of both donor kidneys into a single recipient, allows increased use of expanded criteria donors (eg, older donors with a history of hypertension) to alleviate the disparity between available donors and potential recipients. We evaluated outcomes in our dual kidney transplant program that started in 1995. Study Design: A retrospective comparison of donor and recipient data between recipients of dual (n = 41) versus single (n = 199) cadaveric renal transplants from February 1, 1995, to March 22, 1998, was performed. Dual kidney transplantation was selectively performed when the calculated donor admission creatinine clearance was less than 90 mL/min and the donor age was greater than 60 years, or if the donor had an elevated terminal serum creatinine. Every attempt was made to age- and size-match the donor and recipients. Results: Recipients of dual kidneys had donors who were older than single kidney donors (59 +/- 12 versus 42 +/- 17 years respectively, p < 0.0001) and had more hypertension (51% versus 29%, p = 0.024). Average urine output was lower in the dual versus single kidney group (252 +/- 157 versus 191 +/- 70 mL/hr, p = 0.036). Donors for dual kidney recipients had a lower donor admission creatinine clearance of 82 +/- 28 mL/min versus 105 +/- 45 mL/min in the single kidney group (p = 0.005). Recipients of dual versus single kidneys were older (58 +/- 11 versus 47 +/- 12 years, p > 0.0001). Dual versus single kidney recipients had similar serum creatinines up to 2 years posttransplant (1.6 +/- 0.3 versus 1.6 +/- 0.7 mg/dL at 2 years, p = NS) and a comparable incidence of delayed graft function (24% versus 33%, p = NS) and 3-month posttransplant creatinine clearance (54 +/- 23 versus 57 +/- 25 mL/min, p = NS). One-year patient and graft survival for single kidney transplantation was 97% and 90%, respectively, and 98% and 89% for dual kidney transplantation (p = NS). Conclusions: Dual kidney donors were significantly older, had more hypertension, lower urine outputs, and lower donor admission creatinine clearance. Despite these differences, dual kidney recipients had comparable postoperative function, outcomes, and survival versus single kidney recipients. We believe selective use of dual kidney transplantation can provide excellent outcomes to recipients of kidneys from older donors with reduced renal function. abstract_id: PUBMED:25594554 Outcomes in kidney transplant recipients from older living donors. Background: Previous studies demonstrate that graft survival from older living kidney donors (LD; age>60 years) is worse than younger LD but similar to deceased standard criteria donors (SCD). Limited sample size has precluded more detailed analyses of transplants from older LD. Methods: Using the United Network for Organ Sharing database from 1994 to 2012, recipients were categorized by donor status: SCD, expanded criteria donor (ECD), or LD (by donor age: <60, 60-64, 65-69, ≥70 years). Adjusted models, controlling for donor and recipient risk factors, evaluated graft and recipient survivals. Results: Of 250,827 kidney transplants during the study period, 92,646 were LD kidneys, with 4.5% of these recipients (n=4,186) transplanted with older LD kidneys. The use of LD donors 60 years or older increased significantly from 3.6% in 1994 to 7.4% in 2011. Transplant recipients with older LD kidneys had significantly lower graft and overall survival compared to younger LD recipients. Compared to SCD recipients, graft survival was decreased in recipients with LD 70 years or older, but overall survival was similar. Older LD kidney recipients had better graft and overall survival than ECD recipients. Conclusions: As use of older kidney donors increases, overall survival among kidney transplant recipients from older living donors was similar to or better than SCD recipients, better than ECD recipients, but worse than younger LD recipients. With increasing kidney donation from older adults to alleviate profound organ shortages, the use of older kidney donors appears to be an equivalent or beneficial alternative to awaiting deceased donor kidneys. Answer: Renal transplantation with donors older than 70 years has become more common due to the shortage of available organs for transplantation. The age of the donor does matter in terms of transplantation outcomes, but kidneys from older donors can still be a valuable resource, particularly for older recipients. Studies have shown that recipients of kidneys from donors younger than 70 years tend to have better transplantation outcomes, such as lower mean serum creatinine levels and higher cumulative graft survival rates at various time points post-transplantation (PUBMED:23622671). However, kidneys from donors older than 70 years have been found to be an excellent alternative for older recipients, and donor age was not an independent risk factor for graft failure (PUBMED:23622671). Recipients of kidneys from living donors aged 70 and older experienced higher graft loss compared to those from younger living donors, but mortality among these older living kidney donors was not higher than healthy matched controls (PUBMED:22034505). Similarly, kidneys from deceased donors older than 70 years have been associated with good graft and patient survivals, making them a useful option for older patients on renal replacement therapy (PUBMED:19715828). Transplantation results using kidneys from elderly donors older than 70 years have been found to be comparable to those from younger donors, with no significant differences in post-transplantation serum creatinine concentration and estimated glomerular filtration rate (PUBMED:27496431). However, there is a higher rate of delayed graft function associated with transplants from elderly donors, which does not seem to affect long-term graft survival (PUBMED:36907943). Overall, while the use of kidneys from older donors can lead to some inferior results compared to kidneys from younger donors, such as higher rates of delayed graft function and lower immediate function, they still represent a valuable source for transplantation, especially when considering the growing gap between organ demand and supply (PUBMED:8452630, PUBMED:34130292, PUBMED:3048534, PUBMED:10401744, PUBMED:25594554). In conclusion, age does matter in renal transplantation with donors older than 70 years, but with careful selection and allocation, kidneys from older donors can provide good outcomes and expand the donor pool, particularly benefiting older recipients.
Instruction: Does donor iliac artery percutaneous transluminal angioplasty or stent placement influence the results of femorofemoral bypass? Abstracts: abstract_id: PUBMED:8808958 Does donor iliac artery percutaneous transluminal angioplasty or stent placement influence the results of femorofemoral bypass? Analysis of 70 consecutive cases with long-term follow-up. Purpose: Femorofemoral bypass procedures are being performed with increasing frequency in some patients with bilateral disease in whom the "donor" iliac artery undergoes percutaneous transluminal angioplasty or stent placement. This study was undertaken to critically examine the efficacy of this approach. Methods: The records of 70 consecutive patients who over a 14-year period underwent elective femorofemoral bypass procedures for chronic occlusive disease, including those who did (group I; n = 26) and did not (group II; n = 44) undergo donor iliac percutaneous transluminal angioplasty or stent placement, were reviewed. Results: No significant differences were found between group I and II patients with respect to age, gender, risk factors, surgical indications, preoperative ankle-arm indices, and the performance of associated procedures. One patient (1.4%) died of a myocardial infarction; no other major cardiopulmonary complications occurred. The postoperative change in the group I donor limb ankle-brachial index ranged from -0.18 to 0.11 (mean, 0.00), revealing no significant steal. The primary graft patency rates for group I and II patients 30 days after surgery were 92% and 98%, respectively, and at 1, 3, 5, and 7 years after surgery were 87% and 81%, 79% and 73%, 79% and 59%, and 66% and 59%. Conclusions: Donor iliac artery percutaneous transluminal angioplasty or stent placement does not compromise the results of femorofemoral bypass procedures in patients with chronic iliac artery occlusive disease. abstract_id: PUBMED:6222502 Percutaneous transluminal balloon angioplasty of the iliac artery for contralateral ischemia. The cases of three patients with lower extremity ischemia from ipsilateral iliac artery occlusion and contralateral iliac artery stenosis are presented. Planned treatment was percutaneous transluminal angioplasty (PTA) of the contralateral iliac artery, rendering it an adequate donor vessel for subsequent femorofemoral bypass. Because of adequate collateral vessels across the pelvis, cross-femoral bypass was unnecessary following PTA. Current technology allows simultaneous intraoperative PTA and femorofemoral bypass. We do not recommend this on the basis of our experience. abstract_id: PUBMED:2917255 Femorofemoral bypass in unilateral iliac artery occlusion. Between January 1973 and January 1988, 188 patients with unilateral iliac artery occlusion were treated at The Middlesex and University College Hospitals, 185 for primary disease and three for blockage of a previous aortobifemoral graft. In the early part of the series, a variety of operations, including aortofemoral and iliofemoral bypass, and endarteriectomy, was used. Femorofemoral bypass was at first reserved for patients who were considered unfit for major surgery, but the results seemed so good that it was adopted as the procedure of choice. Latterly, percutaneous transluminal angioplasty became available, and the role of this is discussed. Over the 15-year period, 150 patients underwent femorofemoral bypass (all but two receiving a prosthetic graft). Of these, 90 per cent had disabling claudication and 8 per cent had critical ischaemia. There were six early deaths (within 30 days of operation) and five late deaths, and two surviving amputees; nine patients could not be traced. The remaining 128 patients have been assessed at intervals of from 3 to 92 months, both clinically and with Doppler studies. The cumulative patency was 86 per cent at 13 years, and all of these patients experienced subjective and haemodynamic improvement in the recipient limb. Eight grafts occluded in the early postoperative phase. In five patients there was deterioration in the donor limb; it is postulated that the effect was due to causes other than the operation. There were two serious postoperative complications due to technical error, one of which led to early above-knee amputation. These are presented in detail. In the light of this experience, the advantages and indications for femorofemoral bypass and the results to be expected from it have become clarified, and the technique standardized so that errors can be avoided. We suggest that femorofemoral bypass is now the operation of choice for unilateral iliac artery occlusion. abstract_id: PUBMED:9060920 Combined percutaneous transluminal angioplasty, iliac stent deployment, and femorofemoral bypass for bilateral aortoiliac occlusive disease. Background: We examine the technique of combining percutaneous transluminal angioplasty and secondary intravascular stent deployment with femorofemoral bypass graft in patients with bilateral aortoiliac occlusive disease. Study Design: Retrospective review. Results: During the 5-year period from June 1988 to October 1993, 18 patients with iliac occlusion and a hemodynamically significant contralateral iliac stenosis were treated using a combination of endovascular and open surgical techniques. Patients (13 men and 5 women) ranged in age from 50 to 78 years (mean, 64 years). Thirteen patients (72 percent) were treated for claudication, and 5 patients (28 percent) for rest pain or ulceration. Residual intra-arterial pressure gradients following percutaneous transluminal angioplasty or significant postpercutaneous transluminal angioplasty dissection were the indications for stent deployment in 15 and 3 patients, respectively. No operative deaths occurred, but 3 patients (16 percent) had early complications. Primary patency, as determined by life table analysis, was 100 percent, 79 percent, 62 percent, and 51 percent at 1, 2, 3, and 5 years, respectively. Secondary patency at 1, 2, 3, and 5 years was 100 percent, 93 percent, 75 percent, and 63 percent, respectively, with a mean follow-up of 41 months. Conclusions: The combination of percutaneous transluminal angioplasty with primary or secondary stent deployment and femorofemoral bypass can be a useful option for treating iliac occlusion and contralateral iliac stenosis of less than 3 cm in length when the severity of the comorbid illnesses, advanced age, or the presence of prohibitive intra-abdominal pathology makes the avoidance of an abdominal incision desirable. abstract_id: PUBMED:11174818 Selecting patients for combined femorofemoral bypass grafting and iliac balloon angioplasty and stenting for bilateral iliac disease. Purpose: This study examines the selection of patients for combined femorofemoral bypass (FFB) grafting and iliac balloon angioplasty (IBA) and stenting for bilateral iliac occlusive disease (successively or simultaneously) and the correlation of the length and location of stenoses of the donor iliac artery to the success of FFB grafts. Methods: Forty-one patients with long iliac occlusion and significant contralateral iliac stenosis were treated with combined FFB grafting and IBA and stenting, which were performed simultaneously or percutaneously within 1 to 2 days before surgery. Stenting was performed for suboptimal IBAs. IBA/graft patency was evaluated by duplex scanning/ankle-brachial index at 1, 3, 6, and 12 months and every 12 months thereafter. A life-table analysis of patency was performed, according to the length of stenosis as classified by the Society of Cardiovascular Interventional Radiology (group A, < 3 cm and 3-5 cm; group B, > 5 cm). Results: Indications for surgery were limb salvage (22%), rest pain (44%), and claudication (34%). The mean follow-up time was 34.1 months. Perioperative complications were 7% for group A versus 62% for group B (P = .0007) with no perioperative deaths or amputations. Stenting was needed in 12 of 13 patients (92%) in group B versus four of 28 patients (14%) in group A (P < .0001) and in 11 of 12 external iliac artery lesions versus five of 29 common iliac artery lesions (P < .0001). The overall early success rate was 100% for group A and 62% for group B (P = .0028). The primary patency rates at 1, 2, and 3 years were 96%, 85%, and 85% for group A, respectively, and for group B were 46%, 46%, and 31%, respectively (P < .01). The secondary patency rates for group A at 1, 2, and 3 years were 100%, 96%, and 87%, respectively; and for group B were 62%, 54%, and 27%, respectively (P < .001). The overall primary and secondary patency rates for common iliac and external iliac artery lesions were similar (72% and 72% versus 67% and 75%, respectively). The overall limb salvage rates were 96% for group A and 85% for group B. Seven of 13 patients (54%) of group B, in contrast with 0 of 28 patients in group A, had to undergo a revision of the procedure within 30 days (P < .01). Conclusion: Combined use of IBA and stenting and FFB grafting is effective and durable and can be performed simultaneously, if the donor iliac stenosis length is 5 cm or less. Percutaneous transluminal angioplasty/stenting of stenoses of 5 cm or more fail to support FFB grafting in most patients; therefore, their combination should be questioned. abstract_id: PUBMED:8076016 Revascularization of the occluded donor artery of a femorofemoral crossover by axillary bypass. In patients over 70 years of age with disabling leg ischaemia, femorofemoral crossover bypass with an externally supported polytetrafluoroethylene (PTFE) graft is the treatment of choice for unilateral occlusion of the iliac artery. Over a 6-year period, 18 elderly patients underwent femorofemoral bypass, six of whom had received percutaneous transluminal angioplasty before surgery for stenosis of the contralateral external iliac artery (donor artery). Symptoms of ischaemia recurred in three patients because of deterioration of the donor iliac artery more than 2 years after surgery, although all three grafts were well visualized by angiography. Revascularization was attempted in these three patients by an axillary bypass. Disabling symptoms of ischaemia were completely relieved by this procedure, although two patients underwent reoperation 9 and 16 months after the axillary bypass respectively. All three patients are now free from symptoms of ischaemia. It is concluded that: deterioration of the donor iliac artery after femorofemoral bypass does occur, although it has been considered unlikely because of decreased peripheral resistance; in spite of complete occlusion of the donor artery, grafts remain patent, proving the excellent antithrombogenic activity of externally supported PTFE; and revascularization using additional axillary bypass is a feasible procedure in such cases. abstract_id: PUBMED:1828143 Donor limb vascular events following femoro-femoral bypass surgery. A Veterans Affairs Cooperative Study. Three hundred seventeen patients who had femorofemoral bypass surgery were examined for postoperative vascular changes that developed in the donor limb. "Unmasked" claudication developed in 7%, new claudication related to a "steal" developed in 3.5%, progression of preoperative claudication developed in 1%, new rest pain developed in 1.7%, and new necrosis developed in 0.7% of patients. Postoperatively, the donor limb ankle-brachial index decreased 0.30 or greater in 3% of the patients, decreased 0.15 to 0.29 in 6%, and decreased 0.10 to 0.14 in 6% of the patients. Only eight patients (3%) had a clinical and hemodynamic steal. Late vascular procedures for donor iliac stenosis were few, resulting in an iliac percutaneous transluminal angioplasty in 3% and inflow bypasses in 3%. Angiographic patterns of donor and recipient limbs were not predictive of "clinical" or "hemodynamic" steal events. Most new donor limb symptoms of claudication were an unmasking of arterial insufficiency previously present in the donor limb that became apparent when the recipient limb was improved. abstract_id: PUBMED:6232865 Percutaneous transluminal angioplasty. Results and surgical implications. This retrospective study of 91 percutaneous transluminal angioplasties in 80 patients showed an overall patency rate by life-table analysis of 46 percent with a follow-up period of 36 months. Success rates were significantly reduced by predilatation ankle-brachial ratios less than 0.45, by diabetes, by pain at rest or necrosis, and somewhat, by older age. The overall success rate for iliac dilatation was significantly better (58 percent) than that in the femopopliteal segments (18 percent). Although percutaneous angioplasty was performed on many patients thought to be high surgical risks, the complication rate was low, and complications that required surgical intervention were rare (4 percent). Angioplasty was used as an adjunct to vascular surgery in several ways. An attempt to dilate a Dacron graft-femoral artery anastomosis was unsuccessful, and one of two dilatations distal to a femoropopliteal bypass was successful. However, the combination of an iliac angioplasty with outflow femoropopliteal or femorofemoral bypass produced a long-term patency rate of over 85 percent, significantly better than that achieved with iliac dilatation alone. Percutaneous transluminal angioplasty is both an alternative and an adjunct which should be considered by vascular surgeons for their patients. abstract_id: PUBMED:17508237 Endovascular treatment of anastomotic pseudoaneurysms after aorto-iliac surgical reconstruction. Purpose: To assess the effectiveness of endovascular treatment of anastomotic pseudoaneurysms (APAs) following aorto-iliac surgical reconstruction. Materials: We retrospectively evaluated 21 men who, between July 2000 and March 2006, were observed with 30 APAs, 13 to the proximal anastomosis and 17 to the distal anastomosis. The patients had had previous aorto-iliac reconstructive surgery with a bypass due to aneurysm (15/21) or obstructive disease (6/21). The following devices were used: 12 bifurcated endoprostheses, 2 aorto-monoiliac, 4 aortic extenders, 1 stent-graft leg, and 2 covered stents. Follow-up was performed with CT angiography at 1, 3, and 6 months after the procedure and yearly thereafter. Results: Immediate technical success was 100%. No periprocedural complications occurred. Four patients died during follow-up from causes not related to APA, and 1 (treated for prosthetic-enteric fistula) from sepsis 3 months after the procedure. During a mean follow-up of 19.7 months (range 1-72 months), 2 of 21 occlusions of stent-graft legs occurred 3 and 24 months after the procedure (treated with thrombolysis and percutaneous transluminal angioplasty and femorofemoral bypass, respectively) and 1 type I endoleak. Primary clinical success rate was 81% and secondary clinical success was 91%. Conclusion: Endovascular treatment is a valid alternative to open surgery and can be proposed as the treatment of choice for APAs, especially in patients who are a high surgical risk. Further studies with larger series and longer follow-up are necessary to confirm the long-term effectiveness of this approach. abstract_id: PUBMED:2398586 Significance and management of inflow gradients unexpectedly generated after femorofemoral, femoropopliteal, and femoroinfrapopliteal bypass grafting. With multilevel arteriosclerosis, some patients undergoing infrainguinal bypass grafting will develop femoral/brachial pressure gradients only after the bypass is performed. We therefore evaluated arteriographically alternate inflow sites and measured the femoral/brachial pressure gradients before and after placement of 87 femorofemoral and 510 femoropopliteal/infrapopliteal bypasses. No prebypass femoral/brachial pressure gradients were present with and without intraarterial papaverine. However, femoral/branchial pressure gradients greater than or equal to 15 mm Hg were observed after the bypass placement in 16 femorofemoral patients and 43 femoropopliteal/infrapopliteal patients. Gradients averaged 28 +/- 8 (SD) (range, 15 to 50) mm Hg. The post femorofemoral bypass gradients were treated by immediate supplementary inflow extension to the aorta (three patients) or an axillary artery (three patients); by postoperative iliac percutaneous transluminal angioplasty (four patients) or by no treatment (six patients with femoral/brachial pressure gradients of 15 to 35 [23 +/- 5] mm Hg). The 43 postfemoropopliteal/infrapopliteal bypass gradients were treated by immediate supplementary inflow extension to the contralateral femoral artery (15 patients), the aorta (8 patients), or an axillary artery (3 patients); by postoperative iliac percutaneous transluminal angioplasty (5 patients) or by no treatment (12 patients with femoral/brachial pressure gradients of 15 to 30 [21 +/- 4] mm Hg). No thrombosis occurred in the 10 femorofemoral bypasses with postbypass femoral/brachial pressure gradients that were treated. One of the six femorofemoral patients with untreated gradients required a subsequent aortic extension, and one thrombosed after 2 years. Of the 12 untreated patients with femoropopliteal/infrapopliteal bypasses one graft occluded early, and two late failures occurred 12 and 18 months later.(ABSTRACT TRUNCATED AT 250 WORDS) Answer: Donor iliac artery percutaneous transluminal angioplasty (PTA) or stent placement does not compromise the results of femorofemoral bypass procedures in patients with chronic iliac artery occlusive disease. A study analyzing 70 consecutive cases with long-term follow-up found no significant differences in primary graft patency rates between patients who underwent donor iliac PTA or stent placement (group I) and those who did not (group II) at various intervals post-surgery. The patency rates for group I and II patients 30 days after surgery were 92% and 98%, respectively, and at 1, 3, 5, and 7 years after surgery were 87% and 81%, 79% and 73%, 79% and 59%, and 66% and 59% (PUBMED:8808958). Additionally, the combination of PTA with primary or secondary stent deployment and femorofemoral bypass can be a useful option for treating iliac occlusion and contralateral iliac stenosis, especially when the severity of comorbid illnesses, advanced age, or prohibitive intra-abdominal pathology makes the avoidance of an abdominal incision desirable. A retrospective review showed primary patency rates of 100%, 79%, 62%, and 51% at 1, 2, 3, and 5 years, respectively, with a mean follow-up of 41 months (PUBMED:9060920). However, the success of combined femorofemoral bypass grafting and iliac balloon angioplasty and stenting for bilateral iliac occlusive disease may depend on the length and location of stenoses of the donor iliac artery. A study found that combined use of IBA and stenting and FFB grafting is effective and durable when the donor iliac stenosis length is 5 cm or less. In contrast, PTA/stenting of stenoses of 5 cm or more fail to support FFB grafting in most patients, suggesting that their combination should be questioned (PUBMED:11174818). In conclusion, donor iliac artery PTA or stent placement generally does not negatively influence the results of femorofemoral bypass, and can be part of a combined treatment strategy, particularly for patients with specific clinical considerations. However, the length of the donor iliac stenosis should be taken into account when planning the combined approach.
Instruction: Are IgE levels to foods other than rosaceae predictive of allergy in lipid transfer protein-hypersensitive patients? Abstracts: abstract_id: PUBMED:21196759 Are IgE levels to foods other than rosaceae predictive of allergy in lipid transfer protein-hypersensitive patients? Background: Lipid transfer protein (LTP), the most frequent cause of primary food allergy in Italy, is a cross-reacting plant pan-allergen. Markers able to predict whether a patient sensitized to a certain food but not yet clinically allergic will develop allergy would be extremely helpful. Objective: It was the aim of this study to investigate the relevance of IgE levels to some plant foods other than Rosaceae as predictors of either local or systemic allergic reaction in LTP-allergic subjects. Methods: One hundred (40 males, 60 females , mean age 29 years) peach-allergic patients monosensitized to LTP seen at 14 Italian centres in 2009 were studied. Walnut, hazelnut, peanut, tomato, rice and/or maize allergy was ascertained by interview and confirmed by positive skin prick test. IgE levels to these foods and to rPru p 3 were measured. Results: Higher levels of IgE to Pru p 3 were associated with a higher prevalence of allergy to hazelnut, peanut and walnut. For all study foods, except rice, median IgE levels in allergic subjects significantly exceeded those in tolerant subjects, though within single allergic groups, the differences between patients reporting systemic or local (oral) symptoms were not significant. Ninety-five percent cut-off IgE levels predictive of clinical allergy were established for study foods although the marked overlaps between allergic and tolerant subjects made them of limited usefulness. Conclusion: Specific IgE levels are only partially predictive of clinical allergy. The reasons why some individuals showing low specific IgE levels develop clinical allergy whereas others showing high IgE levels do not, despite similar exposure to the allergen, remain unclear. abstract_id: PUBMED:14989398 Relationship between peach lipid transfer protein specific IgE levels and hypersensitivity to non-Rosaceae vegetable foods in patients allergic to lipid transfer protein. Background: Lipid transfer protein (LTP), the major allergen in Rosaceae in geographic areas where the prevalence of birch pollen allergy is low, is a widely cross-reacting pan-allergen, but the pattern of cross-reactivity to plant-derived foods botanically unrelated to Rosaceae shows much variability. Objective: To examine the relationship between peach LTP specific IgE levels and cross-reactivity to several non-Rosaceae, plant-derived foods. Methods: IgE specific for peach LTP was measured by enzyme-linked immunosorbent assay in serum samples from 40 patients with Rosaceae allergy monosensitized to LTP. Patients were considered monosensitized to this protein in the absence of sensitization to other cross-reacting, plant-derived foods as shown by negative skin prick test (SPT) results with both birch and mugwort pollen. SPTs with commercial extracts of walnut, hazelnut, peanut, celery, maize, rice, tomato, orange, and onion were performed to detect possible immunologic cross-reactivity to these foods. Results: Patients with negative SPT results with non-Rosaceae foods showed significantly lower levels of IgE to peach LTP than patients showing skin reactivity to one or more non-Rosaceae foods (P < .001). A significant difference in specific IgE to peach LTP between patients with positive or negative SPT results was observed with each individual food (P < .001 in all cases). The level of IgE to peach LTP was strongly related to the number of positive SPT results with non-Rosaceae foods (r = 0.78; P < .001). Increasing levels of IgE to peach LTP were associated with skin reactivity to nuts (29/40 [72%]), peanut (27/40 [67%]), maize (16/39 [41%]), rice (14/39 [36%]), onion (13/37 [35%]), orange (9/32 [28%]), celery (11/40 [27%]), and tomato (8/39 [20%]). Conclusions: This study suggests that all allergenic determinants in LTP from vegetable foods other than peach cross-react with peach LTP determinants, whereas only some peach LTP epitopes cross-react with allergenic determinants on botanically unrelated, plant-derived foods. The high levels of IgE to peach LTP seem to reflect the presence of IgE targeting common allergenic determinants of LTP, causing cross-reactivity to botanically unrelated, vegetable foods. In LTP-allergic patients, increasing levels of IgE to peach LTP are paralleled by an increasing number of foods other than Rosaceae positive on SPT that cause clinical symptoms. abstract_id: PUBMED:37860911 IgE-mediated reactivity to non-specific lipid transfer protein (nsLTP): clinical implications and management ‒ consensus document of the Association of Italian Territorial and Hospital Allergists and Immunologists (AAIITO). Summary: The primary cause of adult-onset food allergy in Mediterranean countries is IgE-mediated reactivity to non-specific Lipid Transfer Protein (nsLTP), with a prevalence of 9.5% in Italy. nsLTP is heat- and pepsin-stable due to its 3D structure, causing severe allergic reactions, even anaphylaxis. It's conserved across plants and a "panallergen" due to homologous forms in various vegetable foods. Found in Rosaceae fruits' skin, it's categorized into nsLTP1 (9 kDa) and nsLTP2 (7 kDa), representing 93% and 7% of the molecules described to date, respectively. Pru p 3 (nsLTP1) from peach is a primary sensitizer, binding more epitopes than other homologs. Cross-reactivity varies in sensitized patients, influenced by IgE levels. Clinical manifestations range from none to various symptoms. Managing patients sensitized to nsLTP without clinical allergy is a challenge. Sensitization hierarchy usually starts with peach, then expands through Prunoideae, Rosaceae, and other foods. Clinical symptoms don't always expand across LTPs. Patients can tolerate some nsLTP-containing foods and consuming them may maintain tolerance. The absence of guidelines led to the Associazione Allergologi Immunologi Italiani Territoriali e Ospedalieri (AAIITO) creating a consensus-based document. Strategies involve avoidance, self-injectable adrenaline, verification through in vivo and in vitro testing, considering cofactors, and peeling fruits. In localized reactions, abstinence is recommended if specific IgE is high. Concurrent pollinosis may complicates diagnosis, but may help management since symptoms are often less severe. Asymptomatic patients are advised to continue normal diets while considering cofactors and total IgE levels. Management strategies should be case-specific, based on expert Consensus Document. abstract_id: PUBMED:15956792 Why do lipid transfer protein-hypersensitive patients tolerate bean (and other legumes)? Background: Patients allergic to nonspecific lipid transfer protein (LTP) frequently score positive on SPT with legumes but virtually never report adverse reactions eating these foods. Objective: This study investigated the IgE reactivity to legumes of LTP-allergic patients and aimed to establish whether legumes can be considered safe in LTP-allergic patients. Methods: Skin reactivity as well as clinical allergy to bean and pea were evaluated in a large cohort of LTP-hypersensitive patients. Sera from 12 patients showing clinical allergy to a number of botanically unrelated plant-derived foods and high levels of IgE to peach LTP were employed in in vitro studies (ELISA, ELISA inhibition, SDS-PAGE/immunoblot) aiming to investigate IgE reactivity to bean. Results: Preabsorption of patients' sera with boiled bean extract did not cause any loss of IgE reactivity to peach LTP (whereas boiled apple totally abolished it). Immunoblot analysis did not show any IgE reactivity to bean proteins at about 10 kDa, and the SDS profile of bean showed little or no staining at 10 kDa. Conclusion: Sera from LTP-allergic patients do not seem to show any IgE reactivity to 10-kDa proteins in bean. Whether this is caused by epitopic differences between Rosaceae and bean LTPs or by the fact that LTP is not expressed in bean remains to be established. This study explains why virtually all LTP-allergic patients tolerate legumes and suggests that these foods should be considered safe for patients sensitized to this protein. abstract_id: PUBMED:21462803 Lipid transfer protein cross-reactivity assessed in vivo and in vitro in the office: pros and cons. Background: Few studies analyze cross-reactivity between lipid transfer proteins (LTP) from a large spectrum of botanically unrelated plant-derived foods using routine diagnostic tests. Objective: To assess the clinical usefulness of currently available in vivo and in vitro tests in LTP-hypersensitive patients. Methods: An in vitro and in vivo study was performed of 15 peach-allergic adults monosensitized to LTP in order to analyze their allergy and hypersensitivity to apple, hazelnut, walnut, peanut, soybean, lentil, maize, celery, carrot, banana, melon, tomato, kiwi, buckwheat, and sunflower, poppy, mustard, and sesame seeds. Results: The study revealed that 8, 7, 10, 5, 3, 2, 1, 1, and 1 patients were allergic to apple, hazelnut, walnut, peanut, tomato, kiwi, melon, lentil, and maize, respectively. Immunoglobulin (Ig) E levels for peach were strongly associated with the total number of offending foods other than peach and with levels of IgE specific for all the study foods except carrot. Both in vivo and in vitro tests showed excellent sensitivity and negative predictive value, but poor specificity and positive predictive value. Sensitized but tolerant patients showed lower IgE levels than those with a history of local or systemic symptoms, although the difference between the 3 subsets was not statistically significant. Conclusion: This study confirms that peach is the primary sensitizer to LTP and that the level of IgE to peach LTP is the main factor associated with cross-reactivity (and clinical allergy) to non-Rosaceae foods. Clinically irrelevant sensitization is common in LTP-hypersensitive patients, and positive in vivo and/or in vitro test results are of little help in detecting potential clinical reactors. abstract_id: PUBMED:10069889 The major allergen of peach (Prunus persica) is a lipid transfer protein. Background: Allergy to fresh fruits and vegetables is mostly observed in subjects with pollinosis, especially from birch, because of cross-reacting allergens in vegetable foods and pollens. However, allergic reactions to fruits, specifically Rosaceae fruits, have been reported in subjects without pollinosis. Objective: This study evaluated the pattern of IgE reactivity, identifying the allergen responsible in 2 groups of patients with oral allergy syndrome to peach with or without birch pollinosis. Methods: The allergenic components of peach were detected by SDS-PAGE and immunoblotting. The major peach allergen was purified by HPLC with a cation-exchange column followed by gel filtration chromatography. Its IgE-binding capacity and its homology with the protein of the crude extract were demonstrated by immunoblotting inhibition techniques. To better characterize this allergen, periodic acid-Schiff stain and isoelectrofocusing were used. The amino acid sequencing was done with a gas-phase sequencer. Results: SDS-PAGE and immunoblotting of the 15 patients allergic to peach, 8 without and 7 with birch pollinosis, showed that they all recognized a protein with a molecular weight of 9 kd. This was the only allergen recognized by patients not sensitized to pollen, whereas the birch pollen-sensitive patients had IgE binding to other allergenic proteins at higher molecular weights. The purified 9-kd protein retained its IgE-binding capacity, was negative to periodic acid-Schiff stain, and had an isoelectric point value of greater than 9. A search in the Swiss Prot Bank showed this was a lipid transfer protein, belonging to a group of molecules involved in the defensive system of plants. Conclusions: The major allergen of peach is a 9-kd protein belonging to the group of lipid transfer proteins. This is the only allergen recognized by patients allergic to peach but not sensitized to birch pollen. abstract_id: PUBMED:20558999 Comparison of IgE-binding capacity, cross-reactivity and biological potency of allergenic non-specific lipid transfer proteins from peach, cherry and hazelnut. Background: Whether the observed clinical pattern of non-specific lipid transfer protein (nsLTP)-mediated food allergies is attributable to a primary sensitization by Pru p 3 from peach and subsequent cross-reactivity with Rosaceae- and non-Rosaceae-derived foods expressing homologous allergens is still unclear. Objective: To investigate the allergenic properties of nsLTPs from Rosaceae and non-Rosaceae foods. Methods: In peach-, cherry- or hazelnut-allergic patients, prevalence of sensitization, IgE-binding capacity, cross-reactivity and allergenic potency of Pru p 3 was compared with Pru av 3 (cherry) and Cor a 8 (hazelnut). Results: Frequency of sensitization to corresponding nsLTPs was 88, 85, and 77% in peach-, hazelnut- and cherry-allergic patients, respectively. Concomitant allergic reactions to cherry and hazelnut were reported in 51 and 44% of peach-allergic patients, respectively. In contrast to cherry allergy, hazelnut allergy was not strictly associated to peach allergy. Sensitization to Cor a 8 or Pru av 3 was strongly correlated with IgE reactivity to Pru p 3, even when subjects tolerated peach. Specific IgE was highest for Rosaceae LTPs, and cross-inhibition experiments confirmed a stronger IgE-binding capacity of Pru p 3 than Cor a 8. The biological potency of Pru p 3 and Pru av 3 was similar but stronger for both nsLTPs than that of Cor a 8. Conclusion: Clinical cross-reactivity of food-allergic patients in the Mediterranean area is likely attributed to a primary sensitization to Pru p 3 and serological cross-reactivity with homologous food nsLTPs. In comparison to Cor a 8, Rosaceae nsLTPs showed a stronger IgE-binding capacity and allergenic potency indicating a different epitope pattern. abstract_id: PUBMED:11476466 A case of allergy to beer showing cross-reactivity between lipid transfer proteins. Background: Lipid transfer proteins (LTPs) are highly conserved proteins present in a broad spectrum of fruits and vegetables that might represent a novel plant panallergen. Objective: To demonstrate that LTP is an important allergen in beer and that beer LTP cross-reacts with LTP from botanically unrelated plant-derived foods. Methods: Serum from a patient with clinical allergy to both beer and Rosaceae was studied for IgE reactivity to LTP to several vegetable foods by RAST, ELISA, immunoblot, and inhibition studies. Results: Patient's serum showed a strong IgE reactivity to LTP purified from peach peel, carrot, and broccoli, and to a 10 kD protein in both apple and peach immunoblots, whereas no reactivity to birch cross-reactive allergens such as Bet v 1, profilin, or carbohydrates was found. In inhibition studies, preabsorption of serum with apple, walnut, hazelnut, peanut, corn, and rice caused a fall of 97%, 20%, 66%, 91%, 94%, and 93%, respectively, of its reactivity to peach LTP. Beer RAST fell from 1.8 IU/mL to <0.1 IU/mL when a patient's serum was preabsorbed with recombinant carrot LTP. Conclusions: LTP is a relevant allergen in beer. Beer LTP may cross-react with LTP from several other plant-derived foods. abstract_id: PUBMED:17505138 Detection of some safe plant-derived foods for LTP-allergic patients. Background: Lipid transfer protein (LTP) is a widely cross-reacting plant pan-allergen. Adverse reactions to Rosaceae, tree nuts, peanut, beer, maize, mustard, asparagus, grapes, mulberry, cabbage, dates, orange, fig, kiwi, lupine, fennel, celery, tomato, eggplant, lettuce, chestnut and pineapple have been recorded. Objective: To detect vegetable foods to be regarded as safe for LTP-allergic patients. Methods: Tolerance/intolerance to a large spectrum of vegetable foods other than Rosaceae, tree nuts and peanut was assessed by interview in 49 subjects monosensitized to LTP and in three distinct groups of controls monosensitized to Bet v 1 (n = 24) or Bet v 2 (n = 18), or sensitized to both LTP and birch pollen (n = 16), all with a history of vegetable food allergy. Patients and controls underwent skin prick test (SPT) with a large spectrum of vegetable foods. The absence of IgE reactivity to foods that were negative in both clinical history and SPT was confirmed by immunoblot analysis and their clinical tolerance was finally assessed by open oral challenge (50 g per food). Results: All patients reported tolerance and showed negative SPT to carrot, potato, banana and melon; these foods scored positive in SPT and elicited clinical symptoms in a significant proportion of patients from all three control groups. All patients tolerated these four foods on oral challenge. Immunoblot analysis confirmed the lack of IgE reactivity to these foods by LTP-allergic patients. Conclusion: Carrot, potato, banana and melon seem safe for LTP-allergic patients. This finding may be helpful for a better management of allergy to LTP. abstract_id: PUBMED:22015956 Isoform identification, recombinant production and characterization of the allergen lipid transfer protein 1 from pear (Pyr c 3). Non-specific lipid transfer proteins belonging to LTP1 family represent the most important allergens for non pollen-related allergies to Rosaceae fruits in the Mediterranean area. Peach LTP1 (Pru p 3) is a major allergen and is considered the prototypic allergenic LTP. On the contrary, pear allergy without pollinosis seems to be under-reported when compared to other Rosaceae fruits suggesting that the as-yet-uncharacterized pear LTP1 (Pyr c 3) has in vivo a low allergenicity. We report here on the identification of four cDNAs encoding for LTP1 in pear fruits. The two isoforms exhibiting amino acid sequences most similar to those of peach and apple homologues were obtained as recombinant proteins. Such isoforms exhibited CD spectra and lipid binding ability typical of LTP1 family. Moreover, pear LTP1 mRNA was mainly found in the peel, as previously shown for other Rosaceae fruits. By means of IgE ELISA assays a considerable immunoreactivity of these proteins to LTP-sensitive patient sera was detected, even though allergic reactions after ingestion of pear were not reported in the clinical history of the patients. Finally, the abundance of LTP1 in protein extracts from pear peel, in which LTP1 from Rosaceae fruits is mainly confined, was estimated to be much lower as compared to peach peel. Our data suggest that the two isoforms of pear LTP1 characterized in this study possess biochemical features and IgE-binding ability similar to allergenic LTPs. Their low concentrations in pear might be the cause of the low frequency of LTP-mediated pear allergy. Answer: IgE levels to foods other than Rosaceae can be predictive of allergy in lipid transfer protein (LTP)-hypersensitive patients, but the predictability is only partial. Higher levels of IgE to Pru p 3, which is a primary sensitizer in peach, were associated with a higher prevalence of allergy to hazelnut, peanut, and walnut. For most study foods, median IgE levels in allergic subjects significantly exceeded those in tolerant subjects. However, there were marked overlaps between allergic and tolerant subjects, making the predictive value of specific IgE levels limited (PUBMED:21196759). Additionally, increasing levels of IgE to peach LTP were associated with skin reactivity to a variety of non-Rosaceae foods, suggesting that high levels of IgE to peach LTP reflect the presence of IgE targeting common allergenic determinants of LTP, causing cross-reactivity to botanically unrelated vegetable foods (PUBMED:14989398). However, the clinical manifestations in patients sensitized to nsLTP can range from none to various symptoms, and managing patients sensitized to nsLTP without clinical allergy is challenging. The sensitization hierarchy usually starts with peach and then expands to other foods, but clinical symptoms do not always expand across LTPs. Some nsLTP-containing foods may be tolerated by patients, and consuming them may maintain tolerance (PUBMED:37860911). In contrast, patients allergic to LTP frequently score positive on skin prick tests with legumes but rarely report adverse reactions to these foods. This suggests that legumes may be considered safe for patients sensitized to LTP (PUBMED:15956792). Moreover, clinically irrelevant sensitization is common in LTP-hypersensitive patients, and positive in vivo and/or in vitro test results are of little help in detecting potential clinical reactors (PUBMED:21462803). In summary, while IgE levels to foods other than Rosaceae can be indicative of allergy in LTP-hypersensitive patients, the relationship is not absolute, and clinical allergy cannot be predicted solely based on specific IgE levels. The presence of IgE to LTPs from various foods does suggest a potential for cross-reactivity, but the actual clinical relevance must be determined on an individual basis, considering the patient's history and tolerance to specific foods.
Instruction: Partial hepatic resection for ischemic graft damage after liver transplantation: a graft-saving option? Abstracts: abstract_id: PUBMED:8178251 Partial hepatic resection for ischemic graft damage after liver transplantation: a graft-saving option? Background: Intrahepatic biliary strictures or parenchymal infarcts may occur after liver transplantation as a complication of ischemic damage to the graft. In some selected cases the lesions appear to be confined to a part of the liver. We report our experience with partial graft resection in this setting. Methods: From January 1984 to December 1991, 286 liver transplantations were performed in 257 recipients. Seven patients, three children and four adults, underwent partial hepatectomy 3 to 218 weeks after liver transplantation of a full-size graft. The clinical presentation included septic parenchymal infarcts (n = 4) and nonanastomotic biliary strictures (n = 3) complicating (n = 5) artery thrombosis or not (n = 2). There were four left hepatectomies, two left lobectomies, and one right hepatectomy. In four instances partial hepatectomy was performed after failed attempt at biliary reconstruction (n = 2) or arterial revascularization (n = 2). Partial graft resection was performed extrafascially without Pringle's maneuver and mobilization of the remnant liver to preserve its vascularization. Results: No surgical complications occurred, and none of the patients experienced acute hepatic failure during the postoperative period. All patients were discharged home 10 to 96 days (median, 23 days) after liver resection. Two patients had recurrent ischemic cholangitis. One patient underwent successful regrafting for recurrent Budd-Chiari syndrome; one patient died of tumor recurrence. Six patients were alive with a follow-up ranging from 12 to 45 months. Conclusions: These results suggest that partial graft resection is a safe and graft-saving option after liver transplantation in selected patients with localized ischemic damage of the graft. abstract_id: PUBMED:15110588 Hepatic resection after liver transplantation as a graft-saving procedure: indication criteria, timing and outcome. Background: Hepatic resection is uncommon after liver transplantation (LT), but can be a graft-saving procedure in selected cases. Herein we describe the criteria, outcome, and timing of this procedure in our series. Methods: Between January 1996 and December 2002, 397 LTs were performed in 367 recipients, of whom 12 patients (3.2%) subsequently underwent liver graft resections because of ischemic-type biliary lesions (ITBLs) (n = 5, 41.6%), segmental hepatic artery thrombosis (S-HAT)(n = 3, 25%), recurrent hepatocellular carcinoma (HCC) (n = 2, 16.6%), liver abscess (n = 1, 8.3%), or liver trauma (n = 1, 8.3%). The patients were divided into group 1 (n = 3 all with S-HAT) who underwent early resections (within 3 months of LT), and group 2 (n = 9) who underwent late resections (after 3 months). The outcomes and postoperative mortality ratio (within 30 days) were compared. Results: The resections consisted of four left lobectomies, three right hepatectomies, two extended right hepatectomies, one segmentectomy, one anterior trisegmentectomy, and one right lateral sectoriectomy. The perioperative mortality rate was 66.6% in group 1 (one case of myocardial infarction and one of sepsis), and 22% in group 2 (one case of sepsis and one of hepatic failure). Conclusions: Late resections in stable patients with damage confined to the graft yield good prognosis. Even major resections are feasible graft-saving procedures. In contrast, early hepatic resections in S-HAT are associated with a worse outcome. Retransplantation should be considered the first-choice option. Sepsis significantly affects the postsurgical course. abstract_id: PUBMED:34927324 Long term outcomes of hepatic resection following orthotopic liver transplant. Background: Liver resection is sometimes used as a graft saving procedure following orthotopic liver transplantation. Methods: In this single centre retrospective cohort study, 12 adult patients underwent resection over a 20 year period, including recipients of split livers and second grafts. Results: Indications for resection were vascular (portal vein obstruction and hepatic artery thrombus), biliary (ischaemic cholangiopathy, chronic biliary obstruction, biliary-vascular fistula and biloma) and recurrence of disease (primary sclerosing cholangitis [PSC] and hepatocellular carcinoma [HCC]). There was no perioperative mortality. Median follow up was 89 months. At the completion of the study 40% of patients had functioning grafts. One third required retransplantation with a median 1 year 6 months post resection. Three patients were deceased (recurrent HCC n = 1, PSC n = 1 and unspecified causes n = 1). Total graft survival was 91.7% at 1 year, 73.3% at 5 years and 64.2% at 10 years. Conclusions: Liver resection following liver transplant in select patients may salvage the graft or delay the need for retransplantation. abstract_id: PUBMED:16162188 Hepatic resection in liver transplant recipients: single center experience and review of the literature. Biliary complications such as ischemic (type) biliary lesions frequently develop following liver transplantation, requiring costly medical and endoscopic treatment. If conservative approaches fail, re-transplantation is most often an inevitable sequel. Because of an increasing donor organ shortage and unfavorable outcomes in hepatic re-transplantation, efforts to prolong graft survival become of particular interest. From a series of 1685 liver transplants, we herein report on three patients who underwent partial hepatic graft resection for (ischemic type) biliary lesions. In all cases, left hepatectomy (Couinaud's segments II, III and IV) was performed without Pringle maneuver or mobilization of the right liver. All patients fully recovered postoperatively, but biliary leakage required surgical revision twice in one patient. At last follow-up, two patients presented alive and well. The other patient with persistent hepatic artery thrombosis (HAT), however, demonstrated progression of disease in the right liver remnant and required re-transplantation 13 months after hepatic graft resection. Including our own patients, review of the literature identified 24 adult patients who underwent hepatic graft resection. In conclusion, partial graft hepatectomy can be considered a safe and beneficial procedure in selected liver transplant recipients with anatomical limited biliary injury, thereby, preserving scarce donor organs. abstract_id: PUBMED:34423381 Early graft calcification without graft dysfunction after living donor liver transplantation: two case reports. Graft calcification after liver transplantation (LT) has seldom been reported, but almost of all previously reported cases have been attributed to graft dysfunction. We herein report two cases of graft calcification without liver dysfunction after living donor liver transplantation (LDLT). Two patients who underwent LDLT were found to have graft calcification in the early postoperative period (< 1 month). Calcification in the first case was found at the cut edge of the liver at post-operative day (POD) 10, showing a time-dependent increase in calcification severity. The second patient underwent hepatic artery re-anastomosis due to hepatic artery thrombosis on POD4 and received balloon-occluded retrograde transvenous obliteration of the splenic kidney shunt due to decreased portal vein blood flow on POD6. She was found to have diffuse hepatic calcification in the distant hepatic artery area at 1-month post-operation followed by gradual graft calcification at the resection margin at 6-month post-operation. Neither case showed post-operative graft dysfunction. Calcification of the liver graft after LDLT is likely rare, and graft calcification does not seem to affect the short-term liver function in LDLT cases. We recommend strictly controlling the warm/cold ischemia time and reducing the physical damage to the donor specimen as well as monitoring for early calcification by computed tomography. abstract_id: PUBMED:23769037 Hepatic artery reconstruction prevents ischemic graft injury, inhibits graft rejection, and mediates long-term graft acceptance in rat liver transplantation. Background: Hepatic artery (HA) reconstruction is performed in the clinical liver transplantation. Methods: We assessed the importance of HA reconstruction in the success of liver transplantation. Orthotopic liver transplantation was performed without immunosspression from Lewis (RT1l) to Lewis rats (syngeneic transplantation) as well as Lewis to BN (RT1n) rats (allogeneic transplantation) with or without HA reconstruction. We examined graft function, pathology, and mRNA levels using DNA arrays in both arterialized and nonarterialized liver grafts. Results: In Lewis-to-Lewis syngeneic grafts, both the arterialized and nonarterialized grafts survived >120 days with normal graft function. lnfiltration of CD3(+) T cells and CD68(+) macrophages, marked bile duct proliferation with apoptotic epithelial cells, and expansion and increasing fibrosis of portal areas were evident in the nonarterialized grafts at day 120, although preservation of architecture was noted in the arterialized grafts. DNA array analysis of nonarterialized syngeneic grafts demonstrated the upregulation of mRNA of cell death-related proteins, cell cycle-related proteins, and inflammation-related proteins than those in arterialized grafts. Moreover, the arterialized Lewis-to-BN allogeneic grafts could survive for a long time with less severe graft dysfunction than those in non-arterialized allogeneic grafts. Conclusions: HA reconstruction in liver transplantation inhibited hypoxic injury and subsequent inflammation and bile duct proliferation, prevented the augmentation of T-cell-and antibody-mediated rejection, and mediated long-term graft acceptance. HA reconstruction is essential factor in the success of liver transplantation. abstract_id: PUBMED:10359035 Hepatic artery thrombosis in pediatric liver transplantation: graft salvage after thrombectomy. Hepatic artery thrombosis (HAT) is a devastating complication that may occur after orthotopic liver transplantation (OLT). A higher incidence has been reported in children. Salvage of the graft by thrombectomy has been suggested as an alternative to re-transplantation. In this study we report the outcome of three children who underwent thrombectomy for HAT. Between January 1992 and June 1998, 14 children (< 17 yrs of age) underwent liver transplantation. Three developed HAT (one a whole-liver graft recipient, age 17; two living-related graft recipients, ages 4 and 4.5 yr). In the first patient, thrombosis of the hepatic artery was associated with scattered areas of parenchymal necrosis on computed tomography. In the two living-related patients, HAT was found incidentally during re-exploration for bleeding (day 2 and day 10). Thrombectomy was performed in all three patients. At 18-24 months after thrombectomy, all three children had normal graft function. In the first patient, complete regeneration of the liver has been documented by computed tomography and a late asymptomatic recurrent thrombosis is suggested by absence of arterial flow on Doppler examination. The hepatic artery is patent in the two living-related recipients. One of these living-related recipients developed ischemic bile duct stricture and underwent successful percutaneous balloon dilatation. We conclude that long-term normal graft function can be achieved by thrombectomy in pediatric liver recipients with HAT, even in the presence of limited parenchymal damage. abstract_id: PUBMED:18186894 Intimal dissection of the hepatic artery after thrombectomy as a cause of graft loss in pediatric living-related liver transplantation. HAT is the main cause of graft loss in pediatric living-related LTx. Revascularization of the graft by thrombectomy and re-anastomosis has been reported to be effective for graft salvage in cases of HAT and should be attempted when potential donors are not available for emergency re-transplantation. Immediate complications secondary to revascularization attempts in cases of HAT are not described. Late complications are mainly related to biliary tree ischemia. We report a case of child who experienced intimal hepatic artery dissection, which extended into intra-hepatic branches of the artery after a thrombectomy with a Fogarty balloon catheter in an attempt to restore arterial flow after HAT. This complication led to acute deterioration of the graft and the need for emergency re-transplantation. abstract_id: PUBMED:12370660 The role of apoptosis in hepatic graft rejection In 1965, Kerr described a type of death, apoptosis, with different characteristics from necrosis. Apoptosis has an important role in the development and cell homeostasis. Excessive or insufficient apoptosis contributes to the pathogenesis of pathology like ischemia, neurodegeneration, autoimmunity, viral infection, and tumor growth or regression. Apoptosis is subdivided into four sequential phases: order of death; death of cell; phagocytosis of apoptotic bodies and degradation of apoptotic bodies. Death programs converge on sequential activation of a proteases family, caspases. Some aspects of graft rejection can be interpreted as failure of apoptosis in host immunity cells; sometimes rejection involves induction of apoptosis. Apoptotic-type lesions were found in early vascular occlusions, one of the cause of graft failure. Then, an augmented apoptosis in hepatic graft biopsy can be used like a signal of early vascular occlusion. In hepatic transplantation, apoptosis is followed by a proteolytic cascade, which causes sequential activation of caspases. Synthetic inhibitor of caspases can be used, then, in the prevention and/or treatment of pathologies with implication of apoptosis due to ischemia-reperfusion. These inhibitors are not enough for prevention of hepatic lesions, even if caspases inhibitor can be a strategy for treatment of hepatic graft rejection. abstract_id: PUBMED:19328969 Hepatic neutrophil activation during reperfusion may not contribute to initial graft function after short cold ischemia in human liver transplantation. Background: Experimental models of hepatic ischemia/reperfusion injury have implicated a pathophysiologic role for neutrophils in subsequent hepatocellular damage. In human liver transplantation, however, the effect of reperfusion-induced neutrophil activation on initial graft function is not clear. Methods: In 38 patients undergoing liver transplantation, neutrophil CD11b and L-selectin expression, neutrophil count, and plasma lactoferrin levels were measured. To assess changes within the graft during initial reperfusion, samples of blood entering and leaving the graft were obtained simultaneously, and transhepatic ratio calculated (hepatic vein/portal vein; 1 denotes no change, <1 a decrease, and >1 an increase across the liver). Graft steatosis, postoperative liver function, and outcome were recorded. Associations between neutrophil activation markers and outcome measures were evaluated. Results: Substantial hepatic neutrophil activation occurred during initial reperfusion, demonstrated by concomitant L-selectin shedding and CD11b upregulation (transhepatic ratios 0.9 [0.7-1.0]; 1.4 [0.9-1.9]; both P < .001; portal vs hepatic vein]. Simultaneously, hepatic neutrophil sequestration and lactoferrin release occurred (0.3 [0.2-0.5]; 1.7 [1.3-3.4]; both P < .001). Neither cold ischemic time (CIT; median 5 hours 36 minutes) nor hepatic neutrophil activation during reperfusion predicted early graft function, nor was there any association between CIT and neutrophil activation. Conclusions: Despite short CIT, extensive graft neutrophil activation and sequestration occurred. This, however, was not associated with impaired early graft function, suggesting short CIT may protect against severe neutrophil-mediated injury. Answer: Partial hepatic resection has been reported as a graft-saving option in selected cases of ischemic graft damage following liver transplantation. The procedure has been performed in patients presenting with complications such as intrahepatic biliary strictures or parenchymal infarcts, which may occur due to ischemic damage to the graft. In some instances, these lesions are confined to a part of the liver, making partial resection a viable option (PUBMED:8178251). The outcomes of partial hepatic resection in liver transplant recipients have been generally positive. In one study, no surgical complications occurred, and none of the patients experienced acute hepatic failure during the postoperative period. Most patients were discharged home within a median of 23 days after liver resection, and the majority were alive with a follow-up ranging from 12 to 45 months (PUBMED:8178251). Another study reported that late resections in stable patients with damage confined to the graft yielded a good prognosis, and even major resections were feasible as graft-saving procedures. However, early hepatic resections in cases of segmental hepatic artery thrombosis were associated with a worse outcome, suggesting that retransplantation should be considered the first-choice option in such cases (PUBMED:15110588). Long-term outcomes of hepatic resection following orthotopic liver transplantation have also been favorable in some cases. A retrospective cohort study reported that liver resection in select patients may salvage the graft or delay the need for retransplantation, with 40% of patients having functioning grafts at the end of the study and total graft survival rates of 91.7% at 1 year, 73.3% at 5 years, and 64.2% at 10 years (PUBMED:34927324). In summary, partial hepatic resection can be a safe and beneficial procedure in selected liver transplant recipients with anatomically limited biliary injury, potentially preserving scarce donor organs and prolonging graft survival (PUBMED:16162188). However, the success of this approach may depend on the timing of the resection and the specific indications for the procedure. Early resections, particularly in cases of segmental hepatic artery thrombosis, have been associated with poorer outcomes compared to late resections (PUBMED:15110588).
Instruction: Should thiazide diuretics be given as first line antihypertensive therapy or in addition to other medications? Abstracts: abstract_id: PUBMED:24956971 Should thiazide diuretics be given as first line antihypertensive therapy or in addition to other medications? Introduction: The recommendation to start antihypertensive therapy with diuretics (D) might produce delay in blood pressure (BP) control and, possibly, increase cost/benefit ratio. Aim: We evaluate the effects of D in relation to the administration of other anti-hypertensive medications, in clinical practice. Methods: General practitioners recruited 2,409 hypertensive patients with indication to antihypertensive therapy, who were randomized to start treatment with chlorthalidone (12.5-25 mg daily, group D) or any other single medications (excluding thiazides, group A). The patients have been followed for at least 2 years. Result: Among the 2,409 patients recruited (42.5 % women), 1,205 were randomized in group D and 1,204 in group A, of which 1,051 (or 87 %) and 1026 (or 85 %) respectively, completed the study. The number of patients in optimal BP control was similar in the two groups (65.0 vs 64.0 %; p = NS). During follow-up, the group D had prescribed a slightly greater number of medications compared to the group A who added D as second line (2.3 vs 2.1; p < 0.0001). In particular group D took more β-blockers (27.1 vs 14.9 %; p < 0.0001) with a similar number of patients in optimal BP control (64.35 vs 63.9 %; p = NS). Conclusion: The beginning of antihypertensive therapy with diuretics is more often subject to the addition of one or more medications to obtain an effective blood pressure control, since the diuretic administered at the beginning of the antihypertensive regimen is only rarely associated with optimal blood pressure control. abstract_id: PUBMED:28631393 Comparison of thiazide-like diuretics versus thiazide-type diuretics: a meta-analysis. Thiazide diuretics are widely used for the management of hypertension. In recent years, it has been actively debated that there is interchangeability of thiazide-type diuretics hydrochlorothiazide and thiazide-like diuretics including indapamide and chlorthalidone for the treatment of hypertension. With the purpose of seeking out the best thiazide diuretic for clinicians, we summarized the existing evidence on the two types of drugs and conducted a meta-analysis on their efficacy in lowering blood pressure and effects on blood electrolyte, glucose and total cholesterol. Twelve trials were identified: five based on the comparison of indapamide versus hydrochlorothiazide and seven based on the chlorthalidone versus hydrochlorothiazide. In the meta-analysis of blood pressure reduction, thiazide-like diuretics seemed to further reduce systolic BP ([95% CI]; -5.59 [-5.69, -5.49]; P < 0.001) and diastolic BP ([95% CI]; -1.98 [-3.29, -0.66]; P = 0.003). Meanwhile, in the analysis of side effects, the incidence of hypokalemia ([95% CI]; 1.58 [0.80, 3.12]; P = 0.19), hyponatremia ([95% CI]; -0.14 [-0.57, 0.30], P = 0.54), change of blood glucose ([95% CI];0.13 [-0.16, 0.41], P = 0.39) and total cholesterol ([95% CI]; 0.13 [-0.16, 0.41], P = 0.39) showed that there is no statistical significant differences between the two groups of drugs. In conclusion, using thiazide-like diuretics is superior to thiazide-type diuretics in reducing blood pressure without increasing the incidence of hypokalemia, hyponatraemia and any change of blood glucose and serum total cholesterol. abstract_id: PUBMED:31641556 Hyperuricemia Associated with Thiazide Diuretics in Hypertensive Adults. Introduction Thiazide diuretics are essential first-line anti-hypertensive drugs which not only maintain blood pressure but also reduce stroke and congestive heart failure associated with morbidity and mortality in hypertensive patients. However, thiazide diuretics are associated with elevated serum uric acid (SUA) levels. This study aimed to evaluate the impact of thiazide diuretic use on their SUA levels among hypertensive individuals of Pakistan. Methods In this cross-sectional, prospective study, adult hypertensive patients were recruited. They were divided into two groups - thiazide diuretic group and non-thiazide group. Demographic characteristics, hypertension-related characteristics, and SUA levels were included. Data were then entered and analysed using SPSS for Windows version 22.0 (IBM Corp., Armonk, NY, USA). Results In the thiazide group, 24.5% were hyperuricemic as compared to 15.3% in the non-thiazide group (p=0.03). The overall mean SUA levels in the thiazide group were significantly higher than those in the non-thiazide group (5.9 ± 2.1 vs. 5.3 ± 2.7 mg/dL; p=0.02). Males in the thiazide group also showed a similar pattern (5.9 ± 2.3 vs. 5.1 ± 2.1 mg/dL; p=0.02); however, the differences were insignificant in females. Patients using thiazide diuretics for one to three years were more non-hyperuricemic than hyperuricemic (p=0.000). Among hyperuricemic patients, 36.5% were taking thiazides for three to four years and 46% were taking them for more than four years (p<0.05). Conclusion Hyperuricemia is a more common occurrence in thiazide diuretic users as compared to non-users. The overall sample, and men using thiazide diuretics, reported a higher mean SUA as compared to non-users. As the years of thiazide usage advanced, the number of hyperuricemic participants also significantly increased. abstract_id: PUBMED:35404993 Thiazide and the Thiazide-Like Diuretics: Review of Hydrochlorothiazide, Chlorthalidone, and Indapamide. The term thiazide is universally understood to refer to diuretics that exert their principal action in the distal tubule. The thiazide class is heterogenous and can be further subdivided into compounds containing the benzothiadiazine ring structure-the thiazide-type (e.g., hydrochlorothiazide)-and those lacking the benzothiadiazine ring-the thiazide-like (e.g., chlorthalidone and indapamide) drugs. Thiazide-like agents are longer acting and constitute the diuretics used in most of the cardiovascular outcome trials that established benefits of treatment with diuretics, but pragmatic aspects, such as lack of availability in convenient formulations, limit their use. Regardless of class heterogeneity, thiazides have retained importance in the management of hypertension for over 60 years. They are reliably effective as monotherapy in a majority of hypertensive patients, and augment the efficacy of other classes of antihypertensives when used in combination. Importantly, a thiazide-based treatment regimen lowers cardiovascular events, and their sturdy effect reinforces their place among the recommended first-line agents to treat hypertension in major domestic and international hypertension guidelines. There are few head-to-head comparisons within the class, but potential differences have been explored indirectly as well as in non-blood pressure mechanisms and potential pleiotropic properties. Until proven otherwise, the importance of these differences remains speculative, and clinicians should assume that cardiovascular events will be lowered similarly across agents when equivalent blood pressure reduction occurs. Thiazides remain underutilized, with only about one-third of hypertensive patients receiving them. For many patients, however, a thiazide is an indispensable component of their regimen to achieve adequate blood pressure control. abstract_id: PUBMED:30937114 Association Between the Use of Thiazide Diuretics and the Risk of Skin Cancers: A Meta-Analysis of Observational Studies. Background: Thiazide diuretics are among the most widely prescribed antihypertensive agents. Given their photosensitizing effects, however, there are concerns that they may increase the risk of skin cancers. In this meta-analysis, we investigated an association between the use of thiazide diuretics and the risk of skin cancers. Methods: We identified studies by searching three electronic databases (PubMed, EMBASE, and the Cochrane Library) from their inception to October 30, 2017. Nine relevant observational studies (seven case-control and two cohort studies) were included in this study. Since included studies were unlikely to be functionally equal, pooled estimates were calculated using random-effects meta-analysis. Results: The use of thiazide diuretics was associated with an increased risk of squamous cell carcinoma (adjusted odds ratio (aOR), 1.86; 95% confidence interval (CI), 1.23 - 2.80) and marginally increased risk of basal cell carcinoma (aOR, 1.19; 95% CI, 1.02 - 1.38) and malignant melanoma (aOR, 1.14; 95% CI, 1.01 - 1.29). In the subgroup analysis, hydrochlorothiazide or hydrochlorothiazide combination medications were significantly associated with squamous cell carcinoma without significant heterogeneity among studies (aOR, 2.04; 95% CI, 1.79 - 2.33; Higgin's I2 value = 0.0 %; Q-statistics = 2.7, P value = 0.445). Conclusions: Our results suggested that the use of thiazide diuretics may be associated with an increased risk of skin cancers. This association was most prominent between the use of hydrochlorothiazide or hydrochlorothiazide combination medications and the risk of squamous cell carcinoma. Further studies are needed to confirm these findings. abstract_id: PUBMED:1353112 Thiazide diuretics: first-line therapy for hypertension. Purpose: To compare low-dose thiazides to beta-blockers, angiotensin converting enzyme (ACE) inhibitors, calcium antagonists and alpha-blockers for simplicity, tolerability, efficacy, safety and cost-effectiveness as first-line treatment for hypertension. Method: Review of short-term comparative studies, and the outcome of long-term trials with vascular complications of hypertension as endpoints. SIMPLICITY: Among the advantages of thiazides are a flat dose-response; no dose titration; effectiveness when used once a day; no first-dose hypotension; and few contra-indications. Tolerability: Thiazides are the best tolerated agents in patients over the age of 60 years and in younger women. They sometimes cause gout and impotence in younger men, in whom beta-blockers are equally acceptable first-line therapy. Efficacy: Thiazide-based regimens have consistently reduced vascular complications of hypertension, the real measure of efficacy. There is little evidence that regimens based on other drugs are effective in this sense. Safety: Concerns that thiazide-induced biochemical changes cause coronary events are baseless. An overview of outcome trials shows that thiazide-based treatment reduces coronary events significantly, and the reduction is not significantly different from that predicted by epidemiological data. Thiazide-based therapy has also reduced coronary events significantly and substantially in elderly patients with isolated systolic hypertension. Cost-effectiveness: Low-dose thiazide treatment needs minimal monitoring, and has proved most cost-effective in formal analyses. Conclusion: Low-dose thiazide treatment is a clear first-line choice for patients aged over 60 years and younger women, except those with diabetes or gout. In younger men there is little to choose between thiazides and beta-blockers. abstract_id: PUBMED:2518651 ACE inhibitors compared with thiazide diuretics as first-step antihypertensive therapy. While ACE inhibitors are considerably more expensive than thiazide diuretics, they are slightly more effective antihypertensive agents in white patients and have fewer side effects. They can be regarded as suitable first-line therapy in diabetic hypertensives. It is probable that as new drugs in this class are marketed, the price differential will lessen and they will be regarded as acceptable and useful first-line drugs in an increasingly large number of patients. abstract_id: PUBMED:21029871 Combination of loop diuretics with thiazide-type diuretics in heart failure. Volume overload is an important clinical target in heart failure management, typically addressed using loop diuretics. An important and challenging subset of heart failure patients exhibit fluid overload despite significant doses of loop diuretics. One approach to overcome loop diuretic resistance is the addition of a thiazide-type diuretic to produce diuretic synergy via "sequential nephron blockade," first described more than 40 years ago. Although potentially able to induce diuresis in patients otherwise resistant to high doses of loop diuretics, this strategy has not been subjected to large-scale clinical trials to establish safety and clinical efficacy. We summarize the existing literature evaluating the combination of loop and thiazide diuretics in patients with heart failure in order to describe the possible benefits and hazards associated with this therapy. Combination diuretic therapy using any of several thiazide-type diuretics can more than double daily urine sodium excretion to induce weight loss and edema resolution, at the risk of inducing severe hypokalemia in addition to hyponatremia, hypotension, and worsening renal function. We provide considerations about prudent use of this therapy and review potential misconceptions about this long-used diuretic approach. Finally, we seek to highlight the need for pragmatic clinical trials for this commonly used therapy. abstract_id: PUBMED:29676870 Which place for thiazide and thiazide-like diuretics in patients with type 2 diabetes ? The use of thiazides as antihypertensive agents has been challenged because associated metabolic adverse events, including new-onset diabetes. However, these metabolic disturbances are less marked with low-dose of hydrochlorothiazide and with thiazide-like diuretics such as chlorthalidone and indapamide. In post hoc analyses of subgroups of patients with hypertension and type 2 diabetes, thiazide(-like) diuretics resulted in a significant reduction in cardiovascular events, all-cause mortality and hospitalization for heart failure compared to placebo. Furthermore, they were shown to be non-inferior to other antihypertensive agents, including blockers of the renin-angiotensin system in diabetic patients without albuminuria. Benefits attributed to thiazide(-like) diuretics (especially at low dose) in terms of cardiovascular protection outweigh the risk of worsening glucose control and inducing other metabolic disorders in patients with type 2 diabetes. Thus low dose thiazide(-like) drugs still play a major role in the treatment of hypertension in patients with type 2 diabetes. abstract_id: PUBMED:33447776 Interamerican Society of Cardiology (IASC) position statement: Chlorthalidone vs. thiazide-type diuretics. The Interamerican Society of Cardiology (IASC) Position Statement for hypertension management in Latin America is a practical and useful review of five different hypertension guidelines. Though, thiazide diuretics have been recommended as firstline option, the position statement needs to highlight differences within the thiazide class. Chlorthalidone is structurally and pharmacokinetically distinct from thiazide-type iuretics like hydrochlorothiazide with a longer half-life and 24-h anti-hypertensive effect. It has been shown to reduce cardiovascular morbidity and mortality in several landmark studies evaluating anti-hypertensives. Answer: Thiazide diuretics have been a subject of debate regarding their use as first-line antihypertensive therapy versus their addition to other medications. The evidence suggests that thiazides are effective and have been a mainstay in hypertension management for over 60 years. They are effective as monotherapy in a majority of hypertensive patients and can augment the efficacy of other antihypertensive classes when used in combination (PUBMED:35404993). Thiazide-based regimens have consistently reduced vascular complications of hypertension, which is the real measure of efficacy, and have been shown to be cost-effective with minimal monitoring required (PUBMED:1353112). However, starting antihypertensive therapy with diuretics may lead to the addition of one or more medications to achieve effective blood pressure control, as diuretics alone are only rarely associated with optimal blood pressure control (PUBMED:24956971). Thiazide-like diuretics, such as chlorthalidone and indapamide, have been found to be superior to thiazide-type diuretics like hydrochlorothiazide in reducing blood pressure without increasing the incidence of hypokalemia, hyponatremia, and changes in blood glucose and serum total cholesterol (PUBMED:28631393). In certain populations, such as diabetic hypertensives, ACE inhibitors may be considered suitable first-line therapy due to fewer side effects and slightly higher efficacy in white patients compared to thiazides (PUBMED:2518651). However, the metabolic disturbances associated with thiazides are less marked with low doses or with thiazide-like diuretics, and their cardiovascular benefits outweigh the risk of worsening glucose control in patients with type 2 diabetes (PUBMED:29676870). In conclusion, thiazide diuretics can be used as first-line therapy for hypertension, particularly in patients over 60 years and younger women, except those with diabetes or gout. In younger men, there is little to choose between thiazides and beta-blockers. Thiazides can also be added to other medications to achieve adequate blood pressure control, especially in patients who exhibit fluid overload despite significant doses of loop diuretics (PUBMED:21029871). However, clinicians should be aware of the potential for hyperuricemia associated with thiazide diuretics (PUBMED:31641556) and the increased risk of skin cancers, particularly squamous cell carcinoma, with their use (PUBMED:30937114).
Instruction: Do vitamins C and E attenuate the effects of reactive oxygen species during pulmonary reperfusion and thereby prevent injury? Abstracts: abstract_id: PUBMED:12238844 Do vitamins C and E attenuate the effects of reactive oxygen species during pulmonary reperfusion and thereby prevent injury? Background: We established an in vivo pig model of standardized lung ischemia to analyze pulmonary reperfusion injury. Enhanced chemiluminescence measurement (CM) allowed immediate quantification of reactive oxygen species (ROS) and subsequent lipid peroxidation. In such model we analyzed efficacy of vitamins C and E to prevent reperfusion injury. Methods: After left lateral thoracotomy in group I (n = 6), normothermic lung ischemia was maintained for 90 minutes followed by a 5-hour reperfusion period. In group II, animals (n = 6) underwent ischemia as in group I, but received vitamins (preoperative IV bolus C = 1 g, E = 0.75 g, then continuous infusion (125 mg/h) each throughout the study). In Group III, animals (n = 6) underwent sham surgery and served as controls. Hemodynamic variables and gas exchange were assessed. The CM was performed for injury quantification in blood samples and to determine activation of isolated PMNs. The Wilcox rank test was used for statistical analysis. Results: During reperfusion, all animals in group I developed significant pulmonary edema with significant loss of pulmonary function. The addition of vitamins (group II) improved oxygenation and almost abolished pulmonary inflammatory cell infiltration; however, as in group I, pulmonary compliance still tended to decline and the number of circulating leucocytes increased. The CM showed that, compared with group I, vitamins reduced O2- basic release by PMNs significantly (460% to 170%, p < 0.05; control 165%), but could not prevent an increase of free ROS in whole blood similar to group I (443% to 270%, p = ns, control 207%). With regard to lipid peroxidation only a trend of reduction was observed (117% to 105%, p = ns, control 100%). Conclusions: Differentiated analysis by CM demonstrated that vitamins C and E inhibited PMN activation but were not able to prevent radical production by other sources. This offers a potential explanation why radical scavengers like vitamins only attenuate but ultimately do not prevent reperfusion injury. abstract_id: PUBMED:7959567 The potential role of reactive oxygen species in liver ischemia/reperfusion injury following liver surgery. Reperfusion of a previously ischemic tissue may lead to an aggravation of injury. The liver has been shown to be susceptible to this reperfusion injury in several experimental systems. Reactive oxygen species appear to play an important role in the development of such injury, as has been demonstrated by direct measurements of their release, and by the protective effects of antioxidants. Upon reperfusion, reactive oxygen species may be released by hepatocytes, Kupffer cells and neutrophils. The relative contribution of the various liver cell types to the release of reactive oxygen species depends on several factors, including the duration and condition of ischemia and the time elapsed after reperfusion. There is only limited evidence for the occurrence of reperfusion injury in humans following liver surgery. The role of reactive oxygen species in this injury in humans remains to be shown. abstract_id: PUBMED:17558599 Ischemia-reperfusion and cardioprotection: a delicate balance between reactive oxygen species generation and redox homeostasis. Ischemia-reperfusion injury of the myocardium has long been a subject of intense research. Cardiac preconditioning, an associated phenomenon, has also been critically investigated over the past two decades. Although the biochemistry of ischemia-reperfusion and its association with oxidative metabolism has long been established, recent studies have further revealed a more intricate role of a number of reactive oxygen-nitrogen species in those processes. Emerging evidence suggests that an elaborate network of enzymes (and other biomolecules) dedicated to the generation, utilization, and diminution of reactive oxygen-nitrogen species maintains the redox homeostasis in the myocardium, and any perturbation of its status has distinctive effects. It thus appears that while excessive generation of reactive species leads to cellular injury, their regulated generation may cause transient and reversible modifications of cellular proteins leading the transmission of intracellular signals with specific effects. Taken together, generation of reactive oxygen-nitrogen species in the myocardium plays a nodal role in mediating both ischemic injury and cardioprotection. abstract_id: PUBMED:14962477 New concepts in reactive oxygen species and cardiovascular reperfusion physiology. Increasingly complex behavior of free radicals and reactive oxygen species (ROS) are noted within biological systems. Classically free radicals and ROS were considered injurious, however current mechanisms describe both protective and deleterious effects. A burst of ROS has been well described with the first moments of reperfusion and is associated with injury. However ROS can also be protective as signal preconditioning protection and induce stress responses that lead to survival. ROS generation is appreciated to occur during ischemia despite the low oxygen tension, from a likely mitochondria source, and ROS-induced ROS release may amplify its signal. The burst of ROS seen during reperfusion may originate from a different cellular source than during ischemia and is not yet fully identified. ROS and cellular redox conditions regulate a large number of vital pathways (energy metabolism, survival/stress responses, apoptosis, inflammatory response, oxygen sensing, etc). While cellular systems may demonstrate reperfusion injury, whole organ and animal models continue to report contradictory results on reperfusion injury and the role of antioxidants as a therapy. Collectively, these data may offer insight into why clinical trials of antioxidants have had such mixed and mostly negative results. Future antioxidant therapies are likely to be effective but they must become: more specific for site of action, not have deleterious effects on other signaling pathways, be targeted to a specific reactive oxygen species or cellular compartment, and be "time sensitive" so they deliver the correct therapy at precisely the correct time in ischemia and reperfusion. abstract_id: PUBMED:32383522 The role of mitochondrial reactive oxygen species, NO and H2 S in ischaemia/reperfusion injury and cardioprotection. Redox signalling in mitochondria plays an important role in myocardial ischaemia/reperfusion (I/R) injury and in cardioprotection. Reactive oxygen and nitrogen species (ROS/RNS) modify cellular structures and functions by means of covalent changes in proteins including among others S-nitros(yl)ation by nitric oxide (NO) and its derivatives, and S-sulphydration by hydrogen sulphide (H2 S). Many enzymes are involved in the mitochondrial formation and handling of ROS, NO and H2 S under physiological and pathological conditions. In particular, the balance between formation and removal of reactive species is impaired during I/R favouring their accumulation. Therefore, various interventions aimed at decreasing mitochondrial ROS accumulation have been developed and have shown cardioprotective effects in experimental settings. However, ROS, NO and H2 S play also a role in endogenous cardioprotection, as in the case of ischaemic pre-conditioning, so that preventing their increase might hamper self-defence mechanisms. The aim of the present review was to provide a critical analysis of formation and role of reactive species, NO and H2 S in mitochondria, with a special emphasis on mechanisms of injury and protection that determine the fate of hearts subjected to I/R. The elucidation of the signalling pathways of ROS, NO and H2 S is likely to reveal novel molecular targets for cardioprotection that could be modulated by pharmacological agents to prevent I/R injury. abstract_id: PUBMED:21947963 Effect of pulmonary-generated reactive oxygen species on left-ventricular dysfunction associated with cardio-pulmonary ischemia-reperfusion injury. The purpose of the present study was to demonstrate the contribution of pulmonary-generated reactive oxygen species (ROS) on cardiac dysfunction using a rat model of ischemia-reperfusion (IR) injury. Three groups of rats were subjected to regional IR injury in (i) lung, (ii) heart, (iii) lung + heart. A fourth (control) group of rats were instrumented using the same methods but without induction IR. Hemodynamic data were recorded in real time. Blood from the proximal aorta was sampled during baseline, ischemia, and reperfusion, mixed with α-phenyl-N-tert-butylnitrone (PBN) for measuring ROS by electron paramagnetic resonance spectrometry. Data were analyzed by a two-way analysis of variance. The results showed that the lung IR generated an increased burst of ROS that resulted in significant cardiac dysfunction, including hypotension and ECG changes. The results indicated that generation of ROS as a result of acute IR lung injury may be sufficiently large enough to cause direct cardiac dysfunction that is independent of injury caused to the myocardium as a result of regional myocardial IR injury alone. abstract_id: PUBMED:32974285 Reactive Oxygen Species (ROS)-Responsive Nanomedicine for Solving Ischemia-Reperfusion Injury. Ischemia-reperfusion injury (IRI) is a severe condition for most organs, which could occur in various tissues including brain, heart, liver, and kidney, etc. As one of the major hazards, reactive oxygen species (ROS) is excessively generated after IRI, which causes severe damage inside tissues and further induces the following injury via inflammatory response. However, current medical strategies could not thoroughly diagnose and prevent this disease, eventually leading to severe sequelae by missing the best time point for therapy. In the past decade, various nanoparticles that could selectively respond to ROS have been developed and applied in IRI. These advanced nanomedicines have shown efficient performance in detecting and treating a series of IRI (e.g., acute kidney injury, acute liver injury, and ischemic stroke, etc.), which are well-summarized in the current review. In addition, the nano-platforms (e.g., anti-IL-6 antibody, rapamycin, and hydrogen sulfide delivering nanoparticles, etc.) for preventing IRI during organ transplantation have also been included. Moreover, the development and challenges of ROS-responsive nanomedicine are systematically discussed for guiding the future direction. abstract_id: PUBMED:34664015 Dioscin Attenuates Myocardial Ischemic/Reperfusion-Induced Cardiac Dysfunction through Suppression of Reactive Oxygen Species. Myocardial ischemic/reperfusion (MI/R) is a leading cause of cardiovascular disease with high morbidity and mortality. However, the mechanisms underlying pathological reperfusion remain obscure. In this study, we found that dioscin, a natural product, could be a potential candidate for treating MI/R through modulating cardiac dysfunction. Mechanistically, our work revealed that dioscin could suppress the production of reactive oxygen species (ROS) via repressing the nicotinamide adenine dinucleotide phosphate (NADPH) oxidase 2 (Nox2) and enhancing the expression of antioxidant enzymes, including superoxide dismutase (SOD), catalase (CAT), glutathione (GSH), and glutathione peroxidase (GPx). These findings indicate that dioscin may be a potential candidate for therapeutic interventions in MI/R injury. abstract_id: PUBMED:15478900 Reactive oxygen species and molecular biology of ischemia/reperfusion. Ischemic reperfusion injury is a complex pathophysiological event associated with significant impairment of multiple vascular and cellular responses. Oxidative damage due to the presence of radical oxygen species is the essential step that initiates a wide range of intracellular stress signaling processes that culminate in excessive cytokine and chemokine response, adhesion molecule upregulation and nitric oxide overproduction. As we studied all the various mechanisms of injury, we began deciphering the best means to treat the ischemic insult by modulating those proteins or active mediators that are responsible for the lesion. In this manner, we have utilized free radical scavengers, calcium channel blockers, membrane stabilizers, vasodilators, exogenous nitric oxide and arginine, adhesion molecule blockers and small molecule selectin antagonists, in an effort to improve cell function and survival after ischemia and reperfusion. The continuous investigation of new and old compounds that mitigate the ischemic injury will permit us to advance this important field of medicine. abstract_id: PUBMED:16516885 Allopurinol modulates reactive oxygen species generation and Ca2+ overload in ischemia-reperfused heart and hypoxia-reoxygenated cardiomyocytes. Myocardial oxidative stress and Ca2+ overload induced by ischemia-reperfusion may be involved in the development and progression of myocardial dysfunction in heart failure. Xanthine oxidase, which is capable of producing reactive oxygen species, is considered as a culprit regarding ischemia-reperfusion injury of cardiomyocytes. Even though inhibition of xanthine oxidase by allopurinol in failing hearts improves cardiac performance, the regulatory mechanisms are not known in detail. We therefore hypothesized that allopurinol may prevent the xanthine oxidase-induced reactive oxygen species production and Ca2+ overload, leading to decreased calcium-responsive signaling in myocardial dysfunction. Allopurinol reversed the increased xanthine oxidase activity in ischemia-reperfusion injury of neonatal rat hearts. Hypoxia-reoxygenation injury, which simulates ischemia-reperfusion injury, of neonatal rat cardiomyocytes resulted in activation of xanthine oxidase relative to that of the control, indicating that intracellular xanthine oxidase exists in neonatal rat cardiomyocytes and that hypoxia-reoxygenation induces xanthine oxidase activity. Allopurinol (10 microM) treatment suppressed xanthine oxidase activity induced by hypoxia-reoxygenation injury and the production of reactive oxygen species. Allopurinol also decreased the concentration of intracellular Ca2+ increased by enhanced xanthine oxidase activity. Enhanced xanthine oxidase activity resulted in decreased expression of protein kinase C and sarcoendoplasmic reticulum calcium ATPase and increased the phosphorylation of extracellular signal-regulated protein kinase and p38 kinase. Xanthine oxidase activity was increased in both ischemia-reperfusion-injured rat hearts and hypoxia-reoxygenation-injured cardiomyocytes, leading to reactive oxygen species production and intracellular Ca2+ overload through mechanisms involving p38 kinase and extracellular signal-regulated protein kinase (ERK) via sarcoendoplasmic reticulum calcium ATPase (SERCA) and protein kinase C (PKC). Xanthine oxidase inhibition with allopurinol modulates reactive oxygen species production and intracellular Ca2+ overload in hypoxia-reoxygenation-injured neonatal rat cardiomyocytes. Answer: Vitamins C and E have been studied for their potential to attenuate the effects of reactive oxygen species (ROS) during pulmonary reperfusion and prevent injury. In an in vivo pig model of standardized lung ischemia, the administration of vitamins C and E improved oxygenation and almost abolished pulmonary inflammatory cell infiltration during reperfusion. However, these vitamins did not prevent the decline in pulmonary compliance and the increase in circulating leukocytes. The study demonstrated that vitamins C and E inhibited the activation of polymorphonuclear neutrophils (PMNs) and reduced the basic release of superoxide by PMNs significantly. Despite this, the vitamins could not prevent an increase in free ROS in whole blood similar to the control group, and only a trend of reduction in lipid peroxidation was observed, which was not statistically significant. Therefore, while vitamins C and E showed some efficacy in inhibiting PMN activation, they were not able to prevent radical production from other sources, which offers a potential explanation for why radical scavengers like these vitamins only attenuate but ultimately do not prevent reperfusion injury (PUBMED:12238844). This finding aligns with the broader understanding that while antioxidants can protect against oxidative damage by ROS, the role of ROS in reperfusion injury is complex, and they may also have protective effects in certain contexts, such as signaling preconditioning protection and inducing stress responses that lead to survival (PUBMED:14962477). Additionally, the balance between ROS generation and redox homeostasis is delicate, and while excessive ROS leads to injury, regulated generation may cause transient and reversible modifications of cellular proteins, leading to intracellular signaling with specific effects (PUBMED:17558599). Therefore, the role of vitamins C and E in attenuating ROS effects during pulmonary reperfusion is nuanced, and they may not fully prevent injury due to the multifaceted nature of ROS in ischemia-reperfusion contexts.
Instruction: Does the piperacillin minimum inhibitory concentration for Pseudomonas aeruginosa influence clinical outcomes of children with pseudomonal bacteremia? Abstracts: abstract_id: PUBMED:22696019 Does the piperacillin minimum inhibitory concentration for Pseudomonas aeruginosa influence clinical outcomes of children with pseudomonal bacteremia? Background: The Clinical and Laboratory Standards Institute (CLSI) recently elected to adjust the previous piperacillin susceptibility breakpoint of ≤64 µg/mL against Pseudomonas aeruginosa to ≤16 µg/mL, based largely on pharmacokinetic-pharmacodynamic (PK-PD) modeling studies. Data on whether PK-PD modeling correlates with clinical outcomes in children are needed before resorting to broader classes of antibiotics to treat P. aeruginosa. Methods: We performed a retrospective cohort study of children with P. aeruginosa bacteremia between 2001 and 2010 who were prescribed piperacillin. Baseline characteristics and clinical outcomes of children with piperacillin minimum inhibitory concentrations (MICs) of ≤16 µg/mL and of 32-64 µg/mL were compared. The primary outcome was 30-day mortality. Results: There were 170 children with P. aeruginosa bacteremia receiving piperacillin therapy who met inclusion criteria. One hundred twenty-four (72%) children had piperacillin MICs of ≤16 µg/mL and 46 (28%) children had piperacillin MICs of 32-64 µg/mL. There was no significant difference in baseline characteristics between the 2 groups. Thirty-day mortality was 9% and 24% in children with a piperacillin MIC of ≤16 µg/mL and of 32-64 µg/mL, respectively. Using multivariable logistic regression, children with elevated MICs had increased odds of mortality compared with children with lower MICs (odds ratio, 3.21; 95% confidence interval, 1.26-8.16). Conclusions: Our finding that elevated piperacillin MICs are associated with higher mortality in children supports the recent CLSI recommendation to lower the breakpoint of piperacillin against P. aeruginosa to ≤16 µg/mL. Alternate therapeutic choices should be considered when piperacillin MICs against P. aeruginosa are ≥32 µg/mL. abstract_id: PUBMED:28108367 A propensity score-matched analysis of the impact of minimum inhibitory concentration on mortality in patients with Pseudomonas aeruginosa bacteraemia treated with piperacillin/tazobactam. The Clinical and Laboratory Standards Institute (CLSI) recently re-examined Pseudomonas aeruginosa minimum inhibitory concentration (MIC) susceptibility breakpoints for piperacillin/tazobactam (TZP). The objectives of this study were to analyse the impact of elevated TZP MICs (32-64 mg/L) versus lower respective MICs on P. aeruginosa bacteraemia patient outcomes. Data were gathered from a Veterans Health Administration national clinical database on P. aeruginosa bacteraemia episodes from 2007 to 2013. Patients treated with TZP were identified, comprising 53 elevated MIC episodes and 301 low MIC episodes. Propensity score matching (1:2 ratio) utilising independent variables associated with 30-day all-cause mortality was conducted to compare the outcomes of 53 elevated MIC episodes with 106 matched low MIC episodes. Independent baseline variables associated with 30-day all-cause mortality for all 354 episodes were hyperkalaemia, elevated blood urea nitrogen, elevated temperature, hypoglycaemia, lack of urinary source and thrombocytopenia. Similar 30-day all-cause mortality was found between the two propensity-matched TZP groups (elevated MIC 24.5% vs. low MIC 22.6%; P = 0.79). abstract_id: PUBMED:28716109 Clinical characteristics and outcomes of Pseudomonas aeruginosa bacteremia in febrile neutropenic children and adolescents with the impact of antibiotic resistance: a retrospective study. Background: Although the proportion of Pseudomonas aeruginosa infections has reduced after the introduction of antibiotics with anti-pseudomonal effects, P. aeruginosa bacteremia still causes high mortality in immunocompromised patients. This study determined the clinical characteristics and outcomes of P. aeruginosa bacteremia and the antibiotic susceptibilities of strains isolated from febrile neutropenic patients. Methods: Thirty-one febrile neutropenic children and adolescents with underlying hematologic/oncologic disorders diagnosed with P. aeruginosa bacteremia between 2011 and 2016 were enrolled in the study. Their medical records were retrospectively reviewed to evaluate the demographic and clinical characteristics. Antibiotic susceptibility rates of the isolated P. aeruginosa to eight antibiotic categories (anti-pseudomonal penicillin, anti-pseudomonal penicillin and β-lactamase inhibitor combination, anti-pseudomonal cephalosporin, monobactam, carbapenem, aminoglycoside, fluoroquinolone, and colistin) were also determined. Among the investigated factors, risk factors for mortality and infections by a multidrug-resistance (MDR) strain were determined. Results: Thirty-six episodes of P. aeruginosa bacteremia were identified. The mean age of the enrolled patients was 9.5 ± 5.4 years, and 26 (72.2%) episodes occurred in boys. Acute myeloid leukemia (41.7%) and acute lymphoblastic leukemia (33.3%) were the most common underlying disorders. The 30-day mortality was 38.9%, and 36.1% of the episodes were caused by MDR strains. The deceased patients were more likely to experience breakthrough infection (P = 0.036) and bacteremia (P = 0.005) due to MDR strains when compared with the patients who survived. The survived patients more likely received appropriate empirical antibiotic therapy (P = 0.024) and anti-pseudomonal β-lactam and aminoglycoside combination therapy (P = 0.039) compared with the deceased patients. The antibiotic susceptibility rates of the isolated P. aeruginosa strains were as follows: piperacillin/tazobactam, 67.6%; meropenem, 72.2%; and amikacin, 100%. Conclusions: Mortality due to P. aeruginosa bacteremia remained at 38.9% in this study, and more than one-third of the isolated strains were MDR. In this context, empirical antibiotic combination therapy to expand the antibiotic spectrum may be a strategy to reduce mortality due to P. aeruginosa bacteremia in febrile neutropenic patients. abstract_id: PUBMED:12951354 Treatment and outcome of Pseudomonas aeruginosa bacteraemia: an antibiotic pharmacodynamic analysis. Objectives: To conduct a retrospective study of antibiotic pharmacodynamics in the treatment of Pseudomonas aeruginosa bacteraemia, and to identify pharmacodynamic indices associated with clinical cure. Methods: Cases of P. aeruginosa bacteraemia were identified, and information related to patient demographics, clinical status, antibiotic treatment and clinical outcome were documented. Anti-pseudomonal therapy was assessed, and concentration versus time profiles were constructed using measured levels for aminoglycosides, or population pharmacokinetic models for other antibiotics. P. aeruginosa isolates from all patients were retrieved and MICs for the anti-pseudomonal agents used to treat the episode of bacteraemia were determined. Patient- and treatment-related factors were tested for associations with clinical outcome using univariate and multivariate analyses. Results: Fifty cases of P. aeruginosa bacteraemia were identified and 38 cases were included in the pharmacodynamic analysis. Eighty-seven percent of patients received an aminoglycoside or ciprofloxacin and 79% received piperacillin or ceftazidime. A majority of patients, 71%, were administered a combination of antibiotics. Treatment outcomes were documented as persistent infection in 21%, death within 2-30 days in 21% and clinical cure in 58% of cases. Peak/MIC (P=0.001) and AUC24/MIC (P=0.002) for aminoglycosides and ciprofloxacin were significant factors in univariate tests. Only peak/MIC was associated independently with treatment outcome (P=0.017) in logistic regression analysis. The predicted probability of cure was > or =90% when peak/MIC was at least 8. Conclusion: Pharmacodynamic considerations including aggressive dosing with targeted peak/MICs for aminoglycosides and ciprofloxacin are strongly associated with clinical outcome and essential to the appropriate management of P. aeruginosa bacteraemia. abstract_id: PUBMED:25523856 Antibiotic therapy and clinical outcomes of Pseudomonas aeruginosa (PA) bacteraemia. Introduction: Pseudomonas aeruginosa (PA) bacteraemia is associated with high morbidity and mortality. We assessed clinical outcomes in patients with PA bacteraemia treated with piperacillin-tazobactam (TZP) versus other antibiotics, and monotherapy versus combination, all with proven activity by disc testing without minimum inhibitory concentration (MIC) data. Materials And Methods: All patients with PA bacteraemia in 2007 to 2008 were reviewed for demographic, comorbidity, clinical, laboratory, treatment and outcome data. Primary outcome was 30-day mortality. Secondary outcomes included microbiological clearance, clinical response and length of stay (LOS). Results: Median age for 91 patients was 65 years. Median Simplified Acute Physiology Score (SAPS) II score was 30. Monotherapy was used in 77 cases: 42 on ceftazidime, 17 on TZP, 10 on carbapenems, and 8 on other antipseudomonal antibiotics. The 30-day mortality was 20.9%, and similar between ceftazidime and TZP versus other antibiotics respectively. More patients in combination versus monotherapy group had cardiovascular diseases, diabetes mellitus and vascular access as source of bacteraemia. Patients on monotherapy had higher 30-day mortality (24.7% vs 0%, P = 0.037). Multivariate analysis identified SAPS II score (OR = 1.097, 95% CI, 1.032 to 1.166, P = 0.003) and cancer (OR = 4.873, 95% CI, 1.235 to 19.223, P = 0.024) as independent predictors of 30-day mortality. Conclusion: TZP appeared to be an effective culture-guided antibiotic for PA bacteraemia. High 30-day mortality in monotherapy might be confounded by comorbidity, illness severity and sample size. Cancer patients and a high SAPS II score were independent predictors of 30-day mortality. abstract_id: PUBMED:6812007 Pseudomonas aeruginosa bacteriaemia: new clinical and therapeutic aspects Fifty one cases of Pseudomonas aeruginosa bacteriaemia observed during the last 12 years are reported. Thirty five patients were over fifty years old; 92 p. cent were admitted for several days and about 50 p. cent were in post-operative period. A previous antibiotherapy and an impaired status are promotive factors. The respiratory or peritoneal origins are the most frequent. All patients were feverish; 24 have had an infectious shock which was inaugural in 12 cases. Seven pneumonitis, 3 endocarditis, one pericarditis and 2 osteitis were observed. An ecthyma gangrenosum was noted in three patients. Mortality was 70 p. cent. Comparison between recovered and died patients improved bad prognosis of old age, post operative period, neoplasic, previous organica weakness and pulmonary or peritoneal origins. Used alone, colimycin has seemed to be more effective than aminosid antibiotics; but their association with betalactamins was better. An in vitro study of the susceptibility of 100 Pseudomonas aeruginosa strains has proved the interest of piperacillin and cefsulodin; azlocillin, cefoperazone and ceftriaxone are just less effective. abstract_id: PUBMED:18279040 Outcomes of bacteremia due to Pseudomonas aeruginosa with reduced susceptibility to piperacillin-tazobactam: implications on the appropriateness of the resistance breakpoint. Background: Bacteremia due to Pseudomonas aeruginosa is associated with grave clinical outcomes. Recent studies have emphasized the importance of appropriate empirical therapy, but controversy arises when piperacillin-tazobactam is used against isolates with reduced susceptibility. Methods: We performed a retrospective cohort study of pseudomonal bacteremia from 2002 to 2006. Patients were identified by the microbiology laboratory database, and pertinent clinical data (demographic characteristics, baseline Acute Physiology and Chronic Health Evaluation [APACHE] II scores, source of bacteremia, and therapy) were retrieved from the electronic medical records. All patients received appropriate empirical therapy within 24 h of positive culture results. Patients receiving piperacillin-tazobactam were compared with those receiving other agents (control subjects). The primary outcome was 30-day mortality from the first day of bacteremia. Results: A total of 34 bacteremia episodes were identified involving isolates with reduced susceptibility to piperacillin-tazobactam (minimum inhibitory concentration, 32 or 64 mg/L, reported as susceptible); piperacillin-tazobactam was empirically given in 7 episodes. There was no significant difference in baseline characteristics between the 2 groups. Thirty-day mortality was found to be 85.7% in the piperacillin-tazobactam group and 22.2% in the control group (P = .004). Time to hospital mortality was also found to be shorter in the piperacillin-tazobactam group (P < .001). In the multivariate analysis, 30-day mortality was found to be associated with empirical piperacillin-tazobactam therapy (odds ratio, 220.5; 95% confidence interval, 3.8-12707.4; P = .009), after adjustment for differences in age and APACHE II score. Conclusions: In P. aeruginosa bacteremia due to isolates with reduced piperacillin-tazobactam susceptibility, empirical piperacillin-tazobactam therapy was associated with increased mortality. Additional studies are warranted to examine the appropriateness of the current Clinical Laboratory Standards Institute resistance breakpoint of piperacillin-tazobactam. abstract_id: PUBMED:26478584 Outcomes in Documented Pseudomonas aeruginosa Bacteremia Treated with Intermittent IV Infusion of Ceftazidime, Meropenem, or Piperacillin-Tazobactam: A Retrospective Study. Background: Pseudomonas aeruginosa, one of the leading causes of nosocomial gram-negative bloodstream infections, is particularly difficult to treat because of its multiple resistance mechanisms combined with a lack of novel antipseudomonal antibiotics. Despite knowledge of time-dependent killing with ß-lactam antibiotics, most hospitals in Canada currently administer ß-lactam antibiotics by intermittent rather than extended infusions. Objectives: To determine clinical outcomes, microbiological outcomes, total hospital costs, and infection-related costs for patients with P. aeruginosa bacteremia who received intermittent IV administration of antipseudomonal ß-lactam antibiotics in a tertiary care institution. Methods: For this retrospective descriptive study, data were collected for patients who were admitted between March 1, 2005, and March 31, 2013, who had P. aeruginosa bacteremia during their admission, and who received at least 72 h of treatment with ceftazidime, meropenem, or piperacillin-tazobactam. Clinical and microbiological outcomes were determined, and total and infection-related hospital costs were calculated. Results: A total of 103 patients were included in the analysis, of whom 79 (77%) experienced clinical cure. In addition, bacterial eradication was achieved in 41 (87%) of the 47 patients with evaluable data for this outcome. Twenty-eight (27%) of the 103 patients died within 30 days of discontinuation of antipseudomonal ß-lactam antibiotic therapy. The median total cost of the hospital stay was $121 718, and the median infection-related cost was $29 697. Conclusions: P. aeruginosa bacteremia is a clinically significant nosocomial infection that continues to cause considerable mortality and health care costs. To the authors' knowledge, no previous studies have calculated total and infection-related hospital costs for treatment of P. aeruginosa bacteremia with intermittent infusion of antipseudomonal ß-lactam antibiotics, with characterization of cost according to site of acquisition of the infection. This study may provide important baseline data for assessing the impact of implementing extended-infusion ß-lactam therapy, antimicrobial stewardship, and infection control strategies targeting P. aeruginosa infection in hospitalized patients. abstract_id: PUBMED:31620353 Pharmacokinetic/pharmacodynamic predictions and clinical outcomes of patients with augmented renal clearance and Pseudomonas aeruginosa bacteremia and/or pneumonia treated with extended infusion cefepime versus extended infusion piperacillin/tazobactam. Aim: We sought to correlate pharmacokinetic (PK)/pharmacodynamic (PD) predictions of antibacterial efficacy and clinical outcomes in patients with augmented renal clearance (ARC) and Pseudomonas aeruginosa bacteremia or pneumonia treated with extended infusion cefepime or piperacillin/tazobactam. Materials And Methods: Cefepime (2 g every 8 h) and piperacillin/tazobactam (4.5 g every 8 h) were administered over 4 h after a loading dose infused over 30 min, and minimum inhibitory concentration was determined by E-test. Published population PK evaluations in critically ill patients were used, and PD analyses were conducted using estimated patient-specific PK parameters and known minimum inhibitory concentration values for P. aeruginosa. Concentration-time profiles were generated every 6 min using first-dose drug exposure estimates including a loading infusion, and free concentration above the minimum inhibitory concentration (fT> MIC) was estimated. Clinical cure was defined as resolution of signs and symptoms attributable to P. aeruginosa infection without need for escalation of antimicrobial. Results: One hundred and two patients were included (36 cefepime and 66 piperacillin/tazobactam). The two groups of patients had similar age, serum creatinine, weight, and creatinine clearance. The majority of patients required intensive care unit care (63.9% vs. 63.6%) and most had pneumonia (61%). The fT>MIC (93.6 [69.9-100] vs. 57.2 [47.6-72.4], P < 0.001) and clinical cure (91.7% vs. 74.2%, P = 0.039) were significantly higher in cefepime group, whereas mortality (8.3% vs. 22.7%, P = 0.1) and infection-related mortality (0% vs. 2%, P = 0.54) were similar. Conclusions: Patients with ARC and P. aeruginosa pneumonia and/or bacteremia who received extended-infusion cefepime achieved higher fT>MIC and clinical cure than those receiving extended infusion piperacillin/tazobactam. abstract_id: PUBMED:19223699 Pseudomonas aeruginosa bacteremia: associations with a source of infection and antibiotic resistance Aim Of The Study: To determine the associations between the source of infection and antibiotic resistance in patients with Pseudomonas aeruginosa bacteremia. Material And Methods: A retrospective analysis of 50 patients with Pseudomonas aeruginosa bacteremia was carried out. If sepsis was suspected, blood culture was incubated in an automatic system BACTEC 9240. Then bacteria were identified, and their antibiotic resistance was estimated by disc diffusion method. If Pseudomonas aeruginosa strains were resistant to three or more antibiotics, they were considered as multidrug-resistant. Results: The origin of bacteremia was confirmed in 33 (66%) patients. Lower respiratory tract was the predominant source of Pseudomonas aeruginosa bacteremia (81.8%, n=27) as compared with infection of wound (39.4%, n=13), urinary tract (15.2%, n=5), and drain or cerebrospinal fluid (9.1%, n=3) (P<0.05). Eighteen percent (n=9) of strains, which caused bacteremia, were resistant to ceftazidime; 38% (n=19), to piperacillin; 22% (n=11), to imipenem; 26% (n=13), to meropenem; 24% (n=12), to ciprofloxacin; 40% (n=20), to gentamicin; and only 8% (n=4), to amikacin. Multidrug-resistant Pseudomonas aeruginosa strains were more frequently isolated if a source of infection was wound comparing to a source of other localization (61.5%, n=8 and 20.0%, n=4, respectively; P<0.05). Resistance of Pseudomonas aeruginosa strains to imipenem was associated with resistance to ciprofloxacin (13.2%, n=5 and 50.0%, n=6, retrospectively; P<0.05), but resistance to meropenem--both to ciprofloxacin and amikacin. Conclusions: The predominant source of Pseudomonas aeruginosa bacteremia was lower respiratory tract, and multidrug-resistant strains caused bacteremia more frequently if a source infection was wound. Pseudomonas aeruginosa resistance to carbapenems was associated with resistance to ciprofloxacin and resistance to meropenem--also to amikacin. Resistance of strains to ceftazidime and piperacillin was associated with resistance to gentamicin. Answer: Yes, the piperacillin minimum inhibitory concentration (MIC) for Pseudomonas aeruginosa does influence clinical outcomes of children with pseudomonal bacteremia. A retrospective cohort study of children with P. aeruginosa bacteremia who were prescribed piperacillin showed that children with elevated piperacillin MICs (32-64 µg/mL) had increased odds of mortality compared with children with lower MICs (≤16 µg/mL), with 30-day mortality rates of 24% and 9%, respectively (PUBMED:22696019). This finding supports the Clinical and Laboratory Standards Institute (CLSI) recommendation to lower the breakpoint of piperacillin against P. aeruginosa to ≤16 µg/mL, suggesting that alternate therapeutic choices should be considered when piperacillin MICs against P. aeruginosa are ≥32 µg/mL.
Instruction: Decision Regret following Treatment for Localized Breast Cancer: Is Regret Stable Over Time? Abstracts: abstract_id: PUBMED:25532824 Decision Regret following Treatment for Localized Breast Cancer: Is Regret Stable Over Time? Background: While studies suggest most women have little regret regarding their breast cancer treatment decisions immediately following treatment, no studies to date have evaluated how regret may change over time. Objective: To measure the stability of posttreatment decision regret over time among women with breast cancer. Methods: Women diagnosed with breast cancer between August 2005 and May 2007 reported to the Detroit, Michigan, or Los Angeles County Surveillance, Epidemiology, and End Results (SEER) registry and completed surveys at 9 months following diagnosis (time 1) and again approximately 4 years later (time 2). A decision regret scale consisting of 5 items was summed to create 2 decision regret scores at both time 1 and time 2 (range, 0-20). Multivariable linear regression was used to examine change in regret from 9 months to 4 years. Independent variables included surgery type, receipt of reconstruction, and recurrence status at follow-up. The model controlled for demographic and clinical factors. Results: The analytic sample included 1536 women. Mean regret in the overall sample was 4.9 at time 1 and 5.4 at time 2 (P < 0.001). In the multivariable linear model, we found no difference in change in decision regret over time by surgery type. Reporting a new diagnosis of breast cancer at time 2 was associated with a 2.6-point increase in regret over time compared with women without an additional diagnosis (P = 0.003). Receipt of reconstruction was not associated with change in decision regret over time. Conclusions: Decision regret following treatment was low and relatively stable over time for most women. Those facing an additional diagnosis of breast cancer following treatment may be at risk for elevated regret-related distress. abstract_id: PUBMED:25160051 Regret associated with the decision for breast reconstruction: the association of negative body image, distress and surgery characteristics with decision regret. This study investigated the influence of psychosocial and surgical factors on decision regret among 123 women diagnosed with breast cancer who had undergone immediate (58%) or delayed (42%) breast reconstruction following mastectomy. The majority of participants (52.8%, n = 65) experienced no decision regret, 27.6% experienced mild regret and 19.5% moderate to strong regret. Bivariate analyses indicated that decision regret was associated with negative body image and psychological distress - intrusion and avoidance. There were no differences in decision regret either with respect to methods or timing patterns of reconstructive surgery. Multinominal logistic regression analysis showed that, when controlling for mood state and time since last reconstructive procedure, increases in negative body image were associated with increased likelihood of experiencing decision regret. These findings highlight the need for optimal input from surgeons and therapists in order to promote realistic expectations regarding the outcome of breast reconstruction and to reduce the likelihood of women experiencing decision regret. abstract_id: PUBMED:34431105 A concept analysis of decision regret in women with breast cancer. The aim of this article is to understand the components of decision regret for women making breast cancer treatment decisions. Patient-centered care models encourage women to become more active in the decision-making process, inadvertently exposing them to the risk of experiencing decision regret. Enhancing the understanding of the concept of decision regret can offer insight into ways to mitigate this phenomenon. The Walker and Avant method was used to analyze this concept. Using PubMed, CINAHL, ERIC, Academic Search Complete, PsychINFO, SocINDEX, Joanna Briggs Institute of EBP Database, and an online dictionary, articles from 2011 to 2021 were analyzed to identify concept uses, attributes, antecedents, and consequences. Decision regret in women making breast cancer healthcare decisions is a negative cognitive-emotional response to a treatment decision that involves counterfactual thinking with three targets of regret: outcome regret, chosen option regret, and process regret. Experiencing decision regret can reduce a woman's quality of life, inflict psychological distress, and impact future decision-making. Unfavorable outcomes, decision uncertainty, and breakdowns in the decision-making process can lead to decision regret. Findings provide information on identifying women experiencing decision regret and illustrate opportunities to address causative factors through patient education and support to promote optimal patient outcomes. abstract_id: PUBMED:26941952 Anticipated regret in shared decision-making: a randomized experimental study. Background: Explicit consideration of anticipated regret is not part of the standard shared decision-making protocols. This pilot study aimed to compare decisions about a hypothetical surgery for breast cancer and examined whether regret is a consideration in treatment decisions. Methods: In this randomized experimental study, 184 healthy female volunteers were randomized to receive a standard decision aid (control) or one with information on post-surgical regret (experimental). The main outcome measures were the proportion of subjects choosing lumpectomy vs. mastectomy and the proportion reporting that regret played a role in the decision made. We hypothesized that a greater proportion of the experimental group (regret-incorporated decision aid) would make a surgical treatment preference that favored the less regret-inducing option and that they would be more likely to consider regret in their decision-making process as compared to the control group. Results: A significantly greater proportion of the experimental group subjects reported regret played a role in their decision-making process compared to the control counterparts (78 vs. 65 %; p = 0.039). Recipients of the regret-incorporated experimental decision aid had a threefold increased odds of choosing the less regret-inducing surgery (OR = 2.97; 95 % CI = 1.25, 7.09; p value = 0.014). Conclusions: In this hypothetical context, the incorporation of regret in a decision aid for preference-sensitive surgery impacted decision-making. This finding suggests that keying in on anticipated regret may be an important element of shared decision-making strategies. Our results make a strong argument for applying this design and pursuing further research in a surgical patient population. Trial Registration: Clinicaltrials.gov, NCT02563808. abstract_id: PUBMED:34845512 The Impact of Reconstructive Modality and Postoperative Complications on Decision Regret and Patient-Reported Outcomes following Breast Reconstruction. Shared decision-making gives patients greater autonomy in their healthcare decisions; however, decisions that result in negative outcomes may lead to decision regret. The complexity of reconstructive options makes post-mastectomy breast reconstruction particularly prone to decision regret. This study's purpose was to explore the relationship between breast reconstruction modalities and degree of postoperative decision regret. Patients who had undergone either implant-based or autologous breast reconstruction with a minimum of 12 months of follow-up were invited to complete the Decision Regret Scale and the BREAST-Q Satisfaction with Breasts module. The impact of reconstructive modality and occurrence of postoperative complications on decision regret and satisfaction with breasts was examined. Sixty-three patients completed the questionnaires-25 patients with implant-based reconstruction and 38 patients with autologous reconstruction. The average Decision Regret score was 84.6 ± 23.6; thirty-one patients experienced no decision regret. The average score for the BREAST-Q module was 81.9 ± 18.8. Neither satisfaction with breasts nor decision regret were impacted by the reconstructive modality. The occurrence of postoperative complications was strongly correlated with lower Decision Regret scores (91.6 vs. 74.6, p=0.004) but was not correlated with lower Satisfaction with Breasts scores (84.6 vs. 78.2, p=0.18). Patients had relatively low levels of decision regret and relatively high levels of satisfaction with breasts, irrespective of reconstructive modality. Having a postoperative complication led to significantly greater levels of decision regret without impacting satisfaction with breasts. Patients may benefit from additional preoperative education on possible complications to mitigate decision regret. LEVEL OF EVIDENCE IV: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 . abstract_id: PUBMED:36758448 Decision regret in breast cancer patients after adjuvant radiotherapy. Purpose: Breast cancer patients often engage in shared decision-making to select an individualized treatment regimen from multiple options. However, dissatisfaction with treatment outcomes can lead to decision regret. We evaluated decision regret and physical and psychological well-being among breast cancer patients who underwent adjuvant radiotherapy and explored their associations with patient, tumor, treatment, and symptom characteristics. Methods: This cross-sectional study involved retrospectively obtaining clinical data and data collected through interviews carried out as part of regular long-term medical aftercare. Decision regret regarding the radiotherapy was assessed using the Ottawa Decision Regret Scale, physical and psychological well-being were assessed using the PROMIS Global Health-10 questionnaire, and patients were asked about their treatment outcomes and symptoms. The questionnaire was administered 14 months to 4 years after completion of radiotherapy. Results: Of the 172 included breast cancer patients, only 13.9% expressed high decision regret, with most patients expressing little or no decision regret. More decision regret was associated with volumetric modulated arc therapy, chest wall irradiation, use of docetaxel as a chemotherapy agent, lymphangiosis carcinomatosa, new heart disease after radiotherapy, and lower psychological well-being. Conclusion: Although most patients reported little or no decision regret, we identified several patient, treatment, and symptom characteristics associated with more decision regret. Our findings suggest that psychological well-being influences patients' satisfaction with therapy decisions, implying that practitioners should pay special attention to maintaining psychological well-being during shared decision-making and ensuring that psychological assessment and treatment is provided after cancer therapy to deal with long-term effects of radiotherapy. abstract_id: PUBMED:31266493 Correlates of regret with treatment decision-making among Japanese women with breast cancer: results of an internet-based cross-sectional survey. Background: Satisfaction with medical decisions among patients with cancer is associated not only with the results of decisions they make but also with how they make those decisions. To elucidate the decision-making process among Japanese women with breast cancer, we explored the correlates of regret with patients' treatment decision-making. Methods: An Internet-based cross-sectional survey was utilized. Japanese women (N = 467) who self-reported that they had been diagnosed with stage 0-II breast cancer participated. Data regarding their decisional role (active, collaborative, or passive) in treatment decision-making, their most regrettable experience regarding their decision-making, the importance of various factors related to decision-making at the time, and clinical and sociodemographic factors were obtained. A forced-entry logistic regression analysis was performed on the likelihood that patients would have some regrets regarding the decision-making process. Results: About half the women expressed some regret (51.4%). Women who had a mastectomy were significantly more likely to have regret than women who had breast conserving surgery. Correlates of regret differed by surgical type. For women who had a mastectomy, those who were aged ≥50 years when diagnosed, or who made their decisions collaboratively with their doctors were significantly less likely to have regret with the decision-making. For women who had breast conserving surgery, those who worked on a contract or part-time basis or whose decision-making roles matched their preferred role were significantly less likely to have regret. Among women who reported some regret, 23.8% expressed that their most regrettable experience concerned gathering information, while 21.3% regretted not consulting with others. For women who were diagnosed at a younger age, the influence on their sex life and pregnancy and childbirth was more important when making their treatment decisions than for women diagnosed an older age. Conclusions: Approximately half of the Japanese women with breast cancer in this study reported some regret in the treatment decision-making process. Effective participation in decision-making differed by surgical types. Additionally, women who are diagnosed with breast cancer at a relatively younger age, as compared to those who are older, may need additional information and support regarding their sex life and fertility after cancer treatment. abstract_id: PUBMED:26658703 When do cancer patients regret their treatment decision? A path analysis of the influence of clinicians' communication styles and the match of decision-making styles on decision regret. Objective: To test the influence of physician empathy (PE), shared decision making (SDM), and the match between patients' preferred and perceived decision-making styles on patients' decision regret. Methods: Patients with breast or colon cancer (n=71) completed questionnaires immediately following (T1) and three months after a consultation (T2). Path analysis was used to examine the relationships among patient demographics, patient reports of PE, SDM, the match between preferred and perceived decision-making styles, and patient decision regret at T2. Results: After controlling for clinician clusters, higher PE was directly associated with more SDM (β=0.43, p<0.01) and lower decision regret (β=-0.28, p<0.01). The match between patients' preferred and perceived roles was negatively associated with decision regret (β=-0.33, p<0.01). Patients who participated less than desired reported more decision regret at T2. There was no significant association between SDM and decision regret (β=0.03, p=0.74). Conclusion: PE and the match between patients' preferred and perceived roles in medical decision making are essential for patient-centered cancer consultations and treatment decisions. Practice Implications: Ways to enhance PE and matching the consultation style to patients' expectations should be encouraged. abstract_id: PUBMED:30132239 The influence of partner involvement in the decision-making process on body image and decision regret among women receiving breast reconstruction. Purpose: This study examines the influence of partner involvement in decision-making for breast reconstruction (BR) on women's body image and post-BR decision regret. Methods: A cross-sectional and correlational approach was used in a convenience sample of 105 women in Taiwan who had breast cancer (BC) whose partners were involved in decision-making for BR. A structured questionnaire including the Involvement in the Breast Reconstruction Decision-Making Process Scale, the Body Image Scale, and the Decision Regret Scale was administered. Pearson's r and path analysis were used to examine the relationships among the dyadic BR decision-making process, women's body image, and decision regret. Results: The greater the amount of medical information women had, the better their body image and the less decision regret they experienced. Moreover, partner involvement was not related to women's body image, though it was inversely related to women's decision regret. The final path model showed that the amount of medical information women obtained was directly related to body image and decision regret, while body image also directly influenced decision regret. Overall, the amount of medical information women obtained and their body image explained 45% of the variance in decision regret. Conclusions: Medical teams should provide female BC patients with complete medical information, and through partner involvement, women can be supported to make an appropriate decision regarding BR to achieve optimal levels of body image and lower levels of decision regret. abstract_id: PUBMED:37556147 Long-Term Regret and Satisfaction With Decision Following Gender-Affirming Mastectomy. Importance: There has been increasing legislative interest in regulating gender-affirming surgery, in part due to the concern about decisional regret. The regret rate following gender-affirming surgery is thought to be approximately 1%; however, previous studies relied heavily on ad hoc instruments. Objective: To evaluate long-term decisional regret and satisfaction with decision using validated instruments following gender-affirming mastectomy. Design, Setting, And Participants: For this cross-sectional study, a survey of patient-reported outcomes was sent between February 1 and July 31, 2022, to patients who had undergone gender-affirming mastectomy at a US tertiary referral center between January 1, 1990, and February 29, 2020. Exposure: Decisional regret and satisfaction with decision to undergo gender-affirming mastectomy. Main Outcomes And Measures: Long-term patient-reported outcomes, including the Holmes-Rovner Satisfaction With Decision scale, the Decision Regret Scale, and demographic characteristics, were collected. Additional information was collected via medical record review. Descriptive statistics and univariable analysis using Fisher exact and Wilcoxon rank sum tests were performed to compare responders and nonresponders. Results: A total of 235 patients were deemed eligible for the study, and 139 responded (59.1% response rate). Median age at the time of surgery was 27.1 (IQR, 23.0-33.4) years for responders and 26.4 (IQR, 23.1-32.7) years for nonresponders. Nonresponders (n = 96) had a longer postoperative follow-up period than responders (median follow-up, 4.6 [IQR, 3.1-8.6] vs 3.6 [IQR, 2.7-5.3] years, respectively; P = .002). Nonresponders vs responders also had lower rates of depression (42 [44%] vs 94 [68%]; P < .001) and anxiety (42 [44%] vs 97 [70%]; P < .001). No responders or nonresponders requested or underwent a reversal procedure. The median Satisfaction With Decision Scale score was 5.0 (IQR, 5.0-5.0) on a 5-point scale, with higher scores noting higher satisfaction. The median Decision Regret Scale score was 0.0 (IQR, 0.0-0.0) on a 100-point scale, with lower scores noting lower levels of regret. A univariable regression analysis could not be performed to identify characteristics associated with low satisfaction with decision or high decisional regret due to the lack of variation in these responses. Conclusions And Relevance: In this cross-sectional survey study, the results of validated survey instruments indicated low rates of decisional regret and high levels of satisfaction with decision following gender-affirming mastectomy. The lack of dissatisfaction and regret impeded the ability to perform a more complex statistical analysis, highlighting the need for condition-specific instruments to assess decisional regret and satisfaction with decision following gender-affirming surgery. Answer: The study reported in abstract PUBMED:25532824 aimed to measure the stability of posttreatment decision regret over time among women with breast cancer. The study included women diagnosed with breast cancer who completed surveys at 9 months following diagnosis and again approximately 4 years later. The results showed that decision regret following treatment was low and relatively stable over time for most women. However, those facing an additional diagnosis of breast cancer following treatment may be at risk for elevated regret-related distress. Mean regret in the overall sample was 4.9 at time 1 and 5.4 at time 2, indicating a slight increase but not a significant change over time. The study concluded that decision regret is relatively stable over time for most women with localized breast cancer.
Instruction: Does a muscle strength index provide complementary information to traditional disease activity variables in patients with rheumatoid arthritis? Abstracts: abstract_id: PUBMED:7699619 Does a muscle strength index provide complementary information to traditional disease activity variables in patients with rheumatoid arthritis? Objective: To develop a muscle strength index (MSI) and determine whether it provides complementary information to traditional disease activity variables in patients with rheumatoid arthritis (RA). Methods: The MSI was developed on the basis of practical and empirical aspects and statistical considerations. Intra and interobserver reliability was assessed on the data from 3 observers on 2 strength measurements in each of 10 patients. The association of the MSI with variables of disease activity and severity was assessed in univariate analysis. The contribution of the MSI in the explanation of physician's global disease activity after accounting for the effect of traditional measures of disease activity was assessed in multiple linear regression models. Results: Eight strength measurements (extension and flexion of knee and elbow joints) obtained with a hand held pull gauge were aggregated into the MSI as the mean of the standardized scores. In 65 patients with RA, the MSI had a high internal consistency (Cronbach's alpha 0.95) and intra and interobserver reliability (Pearson correlation coefficient 0.94 each). The MSI correlated moderately with traditional measures of disease activity and strongly with physical functional disability and radiological damage. In contrast to grip strength, the MSI explained additional variation of physician's global assessment of disease activity if added to variables of pooled activity indices. Conclusion: The MSI is a reliable and valid measure of disease activity and severity and may improve the content validity of pooled disease activity indices. abstract_id: PUBMED:29630626 Decreasing muscle performance associated with increasing disease activity in patients with rheumatoid arthritis. Objectives: Increasing evidence suggests that inflammation has a detrimental effect on muscle strength. Our objective was to analyse the association between muscle performance and different disease activity levels in patients with rheumatoid arthritis (RA). Method: A total of 199 consecutive outpatients were subject to cross-sectional assessment. Measurements of grip strength, endurance of the upper and lower limbs and trunk strength were combined as a muscle performance composite score (MPCS), using a standardised method. The disease activity for 28 joints (DAS28), radiographs of small joints (Larsen score), rheumatoid factor, body mass index (BMI), comorbidities and anti-rheumatic drugs were verified. Patients' questionnaires included sociodemographic information, pain level, global disease activity, the Beck Depression Inventory, the mental and physical component scores of Short Form-36 and physical activity level. Results: Of the 199 patients, 36%, 17% and 47% patients had remission, low/moderate and high DAS28, respectively. The patients in remission had significantly shorter disease duration, better parameters in terms of pain, physician's assessment, Larsen, Beck or physical component score of Short Form-36, and they were more physically active than other patients. After adjustments for age, sex, RA duration, radiographs and BMI, the decreasing MPCS associated linearly with the increasing DAS28 activity levels (linearity, P <0.001). Conclusion: Poorer MPCS is clearly associated with higher disease activity in patients with RA. Muscle performance is a modifiable risk factor. The findings suggest evaluating muscle performance in clinical practice as a part of patient care. abstract_id: PUBMED:15901635 Muscle strength, pain, and disease activity explain individual subdimensions of the Health Assessment Questionnaire disability index, especially in women with rheumatoid arthritis. Objective: To study the extent to which muscle strength and performance, pain, and disease activity are associated with the total Health Assessment Questionnaire (HAQ) disability index and its subdimensions in male and female patients with rheumatoid arthritis. Methods: HAQ for functional capacity was completed by 135 patients with rheumatoid arthritis referred for orthopaedic surgery (74% women; mean (SD) age 62 (10) years; disease duration 19 (13) years, 70% positive for rheumatoid factor). Knee extension, trunk extension and flexion, grip strength, walking speed, and sit-to-stand test were measured to mirror physical function. Radiographs of hands and feet, pain, and the modified 28 joint disease activity score (DAS28) were also assessed. Results: Mean total HAQ was 1.08 (0.68) in women and 0.67 (0.70) in men (p = 0.0031). Women had greater disability than men in five of the eight subdimensions of the HAQ. Grip strength was 48%, knee extension strength 46%, trunk extension strength 54%, and trunk flexion strength 43% lower in women than in men. Knee extension strength was inversely correlated with walking time (r = -0.63 (95% confidence interval, -0.73 to -0.51)) and with sit-to-stand test (r = -0.47 (-0.60 to -0.31)). In an ordered logistic regression analysis in female rheumatoid patients, DAS28, pain, knee extension strength, and grip strength were associated with the total HAQ disability index. Conclusions: Women reported greater disability than men both in the total HAQ and in the majority of its eight subdimensions. In addition to disease activity and pain, muscle strength has a major impact on disability especially in female rheumatoid patients. abstract_id: PUBMED:37049577 Association between Nutritional Status, Lifestyle Habits, and Disease Activity in Dalmatian Patients with Rheumatoid Arthritis. The aim of this study was to evaluate body composition, handgrip strength, quality of life, disease duration and activity and lifestyle habits in patients with rheumatoid arthritis (RA) and to evaluate possible associations between all of the abovementioned factors. Seventy-five stable RA patients were included. Data on sociodemographic data, disease activity, quality of life, nutritional risk, body mass composition, anthropometric parameters, and clinical and laboratory parameters were collected for each study participant. The results showed that the mean score of the disease activity score (DAS28) was 5.4, the mean score of the health assessment questionnaire-disability index (HAQ-DI) was 1.19, and the mean disease duration in our population was 13.9 years. Our studied population had a long disease duration and high disease activity. Positive predictors of muscle mass in RA patients were daily caloric intake, fat-free mass, bone mass, basal metabolic rate, total body water, weight, body mass index (BMI), height, and muscle strength. There were no significant negative predictors. Positive predictors of muscle strength in RA patients were daily caloric intake, basal metabolic rate, predicted muscle mass, fat-free mass, bone mass, weight, total body water, metabolic age, hemoglobin, BMI, and number of exercises per week. In contrast, negative predictors of muscle strength were number of comorbidities, number of swollen joints, DAS, number of tender joints, erythrocyte sedimentation rate (ESR), and duration of RA. An association was also found between bone mineral density and both muscle mass and muscle strength. A structured nutritional approach in terms of multidisciplinary collaboration between rheumatologist, dietitian and physical medicine specialist is needed in the Dalmatian RA population. abstract_id: PUBMED:9439274 Traditional and complementary therapies used together in the treatment, relief and control of Chron's disease and polyarthritis. A diagnosis of polyarthritis and later Chron's disease was initially treated by traditional treatments and medicines, which was later supported by aromatherapy and acupuncture, but the turning point to a full recovery followed from a visit to a naturopath and the commencement of an individual/personal diet regime. Over the months of the acute phase of the illness, a valuable insight was gained to the challenges of a physical disability and the difficulties of immobility, and the loss of independence. Also, the experience of being a patient rather than a nursing practitioner, gave first hand experience of the different situations patients meet on a daily basis throughout the health care services, both traditional and complementary, from a patient's perspective rather than from a nursing point of view. The following paper will hopefully help others to consider a fresh individual situations, reflect on the collaboration of traditional and complementary treatments, and treatment of underlying cause as well as symptoms. abstract_id: PUBMED:28254736 Workability and Muscle Strength in Patients With Seropositive Rheumatoid Arthritis: Survey Study Protocol. Background: Rheumatoid arthritis (RA) and other rheumatic conditions not only fundamentally affect patients' quality of life and physiological needs but are also negatively associated with work ability. The costs of poor work ability, which, in sum, are more than treatment costs, pose an economic burden to society and patients. Work ability in RA appears to be multifactorial; symptoms such as pain, swelling, and stiffness play a major role, as these directly affect functional disability. Also, RA patients typically suffer from reduced muscle strength. Lower extremity function and grip strengths especially impair their quality of life. However, the role of muscle strength and disease activity as determinants of work ability have not yet been studied. Objective: The primary objective of this study is to compare work ability in working-age participants with seropositive RA and with high and low disease activity; the secondary objective is to evaluate the association of muscle strength, functional ability, and frailty with work ability. Methods: This monocentric cross-sectional study will be conducted at a rheumatologic outpatient clinic and day hospital with approximately 100 seropositive RA patients aged <65 years. A clinical disease activity index as a measure for rheumatoid disease activity will be assessed during the patients' routine visits at the clinic. Work ability, frailty, and functional disability will be evaluated with (self-reported) questionnaires as well as with physical tests (Work Ability Index/Score; Health Assessment Questionnaire Disability Index; Survey of Health, Ageing, and Retirement in Europe Frailty Instrument; Short Physical Performance Battery). Muscle strength will be determined with dynamometer measurements of isometric hand grip strength and quadriceps femoris muscle contraction strength. Sleep quality (Medical Outcomes Study Sleep Scale) and sexual functioning as physiological needs will additionally be determined with self-reported questionnaires. Results: For this study funding has already been awarded and enrollment has been completed. Data are currently being evaluated. Conclusions: This study will evaluate the association of work ability with modifiable parameters such as muscle strength and functional ability. It will provide further insights into work ability in RA and its associated risk factors. Any evidence of association will motivate further research, and the findings might encourage interventions focused specifically on improving muscle strength and lower extremity function to positively affect work ability. Trial Registration: ClinicalTrials.gov (NCT02581852); https://clinicaltrials.gov/ct2/show/NCT02581852 (Archived by WebCite at http://www.webcitation.org/6oNcelHtQ). abstract_id: PUBMED:24664868 Deficits in muscle mass, muscle density, and modified associations with fat in rheumatoid arthritis. Objective: To quantify muscle outcomes, independent of fat mass, in rheumatoid arthritis (RA) patients compared to healthy controls. Methods: Quantitative computed tomography scans measured calf muscle and fat cross-sectional area (CSA) and muscle density (an index of intramuscular adipose tissue), and isometric dynamometry was used to measure ankle muscle strength in 50 participants with RA ages 18-70 years and 500 healthy controls. Multivariable linear regression models assessed muscle deficits in RA after adjusting for group differences in adiposity and assessing for an altered muscle-fat association. Associations between RA disease characteristics and fat-adjusted muscle outcomes were also assessed. Results: Compared to controls, RA subjects had significantly greater body mass index (BMI) and fat area, and lower muscle area, muscle density, and muscle strength (P < 0.001 for all). Strength deficits were eliminated with adjustment for the smaller muscle area. The magnitude of muscle deficits, relative to controls, was significantly greater (P < 0.03 for interaction) in participants with lower fat area and BMI. Among those in the lower tertiles of adiposity, RA subjects demonstrated more significant deficits compared to controls with similar adiposity. In contrast, among those in the highest tertile for adiposity, RA was not associated with muscle deficits. Among RA, greater Sharp/van der Heijde scores were associated with lower muscle CSA and muscle density. Greater disease activity and disability were associated with low muscle density. Conclusion: Deficits in muscle area and muscle density are present in RA patients compared to controls and are most pronounced in subjects with low fat mass. Greater joint destruction is associated with greater muscle deficits. abstract_id: PUBMED:34628537 Handgrip Strength Features in Rheumatoid Arthritis Patients Assessed Using an Innovative Cylindrical-Shaped Device: Relationships With Demographic, Anthropometric and Clinical Variables. To investigate the relationship between handgrip strength (HGs) features, evaluated with an innovative cylindrical-shaped grip device, and demographic, anthropometric and clinical variables, in patients with rheumatoid arthritis (RA). Consecutive RA patients were prospectively enrolled for this cross-sectional study. For each patient were collected demographic, anthropometric, clinical data related to disease activity. HGs was assessed in terms of area under the force-time curve (AUC-FeT), peak grip force and time to reach the curve plateau. The correlations between the variables were studied with the Spearman's rho correlation coefficient. The receiver operating characteristic (ROC) curve analysis was used to test the discriminant accuracy of HGs features in identifying patients in moderate/high disease activity. A multivariate analysis was performed to estimate the contribution of covariates on the AUC-FeT. A significant correlation was found among AUC-FeT, age, Simplified Disease Activity Index (SDAI), Ultrasound-Clinical Arthritis Activity (US-CLARA) (all at p < 0.0001), and body mass index (BMI) (p = 0.0001). Any correlation was found between HGs and radiographic damage. The discriminatory power of AUC-FeT was good [area under-ROC curve = 0.810 (95% CI 0.746-0.864)]. Variables significantly associated with AUC-FeT in multivariate analysis were age (p = 0.0006), BMI (p = 0.012), gender (p = 0.004), SDAI (p = 0.047) and US-CLARA (p = 0.023). HGs is negatively influenced by demographic (gender and age), anthropometric (BMI), and disease activity variables (SDAI and US-CLARA). These findings highlight the role of HGs in RA patients' functional impairment and disability. abstract_id: PUBMED:30657069 Hand strength in patients with RA correlates strongly with function but not with activity of disease. Background: Rheumatoid arthritis (RA) is a systemic autoimmune disease characterized by chronic inflammation of the joints, especially of the hands. The evaluation of handgrip strength (HS) and pinch strength can be useful to detect reduction in hand function in RA patients. The aim of the study was to compare HS and pinch strength between RA patients (RA Group - RAG) and a non-RA control group (CG) and to relate HS and pinch strength to functional capacity, duration and disease activity in the RAG. Methods: A cross-sectional case control study. The RAG was assessed for disease activity by the Disease Activity Score (DAS-28); for functional capacity by the Health Assessment Questionnaire (HAQ), the Cochin Hand Functional Scale (CHFS) questionnaire, and the Disability of the Arm, Shoulder, and Hand (DASH) questionnaire; and for HS and pinch strength (2-point tip-to-tip, lateral or key, and 3-point) using Jamar® and pinch gauge dynamometers, respectively. Associations were analyzed by Pearson and Spearman tests, and groups were compared by the independent samples t test, with a significance level of P < 0.05. Results: The convenience sample included 121 rheumatoid patients and a control group matched by age, sex, and body mass index. The RAG showed lower strength values compared with the CG in all measurements (P < 0.01, 95% CI) and these values were associated with worse performance in the functional questionnaires and greater disease activity and duration. There was a strong correlation among the functional assessment instruments. Conclusions: The decrease in grip and pinch strength, easily measured by portable dynamometers, is a strong indicator of functional disability in RA patients. abstract_id: PUBMED:34829361 Morphological Parameters in Quadriceps Muscle Were Associated with Clinical Features and Muscle Strength of Women with Rheumatoid Arthritis: A Cross-Sectional Study. Background: Rheumatoid arthritis (RA) is an autoimmune, inflammatory and chronic disease that may lead to loss of muscle mass, muscle strength and decreased functionality. Our objectives are to assess the quadriceps muscle morphology by ultrasound (MU) and verify its associations with clinical features, muscle strength and physical function in RA patients. Methods: In this cross-sectional study, RA women (≥18 years) were included. Morphological parameters in quadriceps muscle consisted of the muscle thickness and pennation angle of rectus femoris (RF), vastus intermedius (VI) and vastus lateralis (VL). RA activity was measured by a 28-joint disease activity score (DAS28), muscle strength by handgrip and chair stand tests, and physical function by health assessment questionnaire (HAQ), timed-up-and-go (TUG) test and short physical performance battery (SPPB). Results: Fifty-five patients were included (age: 56.73 ± 9.46 years; DAS28: 3.08 ± 1.29). Muscle thickness in RF, VI and VL were negatively associated with age (RF, p < 0.001; VI, p = 0.013; VL, p = 0.002) and disease duration (RF, p < 0.001; VI, p = 0.005; VL, p = 0.001), and were positively associated with handgrip strength (RF, p = 0.015; VI, p = 0.022; VL, p = 0.013). In addition, decreased muscle thickness in VI (p = 0.035) and a smaller pennation angle in RF (p = 0.030) were associated with higher DAS-28 scores. Conclusion: Quadriceps muscle morphology by ultrasound appears to be affected by age, disease duration, disease activity and muscle strength in patients with RA. MU can be a useful method to evaluate the impact of the disease on skeletal muscle. Answer: Yes, a muscle strength index (MSI) does provide complementary information to traditional disease activity variables in patients with rheumatoid arthritis (RA). The development of an MSI based on strength measurements of the extension and flexion of knee and elbow joints showed that it had high internal consistency and reliability. The MSI correlated moderately with traditional measures of disease activity and strongly with physical functional disability and radiological damage. Importantly, the MSI explained additional variation in the physician's global assessment of disease activity when added to variables of pooled activity indices, suggesting that it is a reliable and valid measure of disease activity and severity and may improve the content validity of pooled disease activity indices (PUBMED:7699619). Furthermore, other studies have found that decreasing muscle performance composite scores (MPCS) are clearly associated with higher disease activity in RA patients, indicating that muscle performance is a modifiable risk factor and should be evaluated in clinical practice as part of patient care (PUBMED:29630626). Additionally, muscle strength and performance, pain, and disease activity are associated with the total Health Assessment Questionnaire (HAQ) disability index and its subdimensions, especially in women with RA, highlighting the impact of muscle strength on disability (PUBMED:15901635). Moreover, research has shown that handgrip strength features in RA patients are negatively influenced by demographic, anthropometric, and disease activity variables, emphasizing the role of handgrip strength in RA patients' functional impairment and disability (PUBMED:34628537). In summary, muscle strength indices and assessments provide valuable complementary information to traditional disease activity measures in RA and are important for understanding the full impact of the disease on patients' functional abilities.
Instruction: Is it possible to decrease antibiotic prescribing in primary care? Abstracts: abstract_id: PUBMED:33238587 Time Trends and Factors Associated with Antibiotic Prescribing in Swiss Primary Care (2008 to 2020). Antibiotic resistance (ABR) is a major threat to public health, and the majority of antibiotics are prescribed in the outpatient setting, especially in primary care. Monitoring antibiotic consumption is one key measure in containing ABR, but Swiss national surveillance data are limited. We conducted a retrospective cross-sectional study to characterise the patterns of antibiotic prescriptions, assess the time trends, and identify the factors associated with antibiotic prescribing in Swiss primary care. Using electronic medical records data, we analysed 206,599 antibiotic prescriptions from 112,378 patients. Based on 27,829 patient records, respiratory (52.1%), urinary (27.9%), and skin (4.8%) infections were the commonest clinical indications for antibiotic prescribing. The most frequently prescribed antibiotics were broad-spectrum penicillins (BSP) (36.5%), fluoroquinolones (16.4%), and macrolides/lincosamides (13.8%). Based on the WHO AWaRe classification, antibiotics were 57.9% Core-Access and 41.7% Watch, 69% of which were quinolones and macrolides. Between 2008 and 2020, fluoroquinolones and macrolides/lincosamides prescriptions significantly declined by 53% and 51%; BSP prescriptions significantly increased by 54%. Increasing patients' age, volume, and employment level were significantly associated with antibiotic prescribing. Our results may inform future antibiotic stewardship interventions to improve antibiotic prescribing. abstract_id: PUBMED:31633458 A qualitative literature review exploring the drivers influencing antibiotic over-prescribing by GPs in primary care and recommendations to reduce unnecessary prescribing. Aims: This qualitative literature review aims to critically assess and analyse published literature to determine drivers influencing over-prescribing by general practitioners (GPs) in primary care, exploring their views and opinions, and also to determine how antibiotic prescribing can be improved and unnecessary prescribing reduced, thus reducing the threat to public health from antibiotic resistance. It is intended to develop new thinking in this area and add to existing knowledge concerning GPs' antibiotic prescribing behaviour. Methods: Thematic analysis following Braun and Clarke's 2006 framework was used to analyse 17 qualitative studies chosen from EBSCOhost databases, focusing on GPs' views of antibiotic prescribing in primary care, with specific search strategies and inclusion criteria to ensure study quality and trustworthiness. Results: Three main themes and nine sub-themes were generated from the studies. The first main theme discussed GP factors related to over-prescribing, the main drivers being GP attitudes and feelings and anxiety/fear concerning prescribing. The second theme highlighted external factors, with pressures from time and financial issues as the main drivers within this theme. The final theme marked patient pressure, demand and expectation with lack of patient education as the major drivers affecting GP over-prescribing. Conclusion: The findings of this research show GPs' antibiotic prescribing in primary care is complex, being influenced by many internal and external factors. A multifaceted approach to interventions targeting the drivers identified could significantly reduce the level of antibiotic prescribing thus minimising the impact of antibiotic resistance and promoting a more efficient working environment for GPs and patients alike. abstract_id: PUBMED:38411087 Evaluation of an automated feedback intervention to improve antibiotic prescribing among primary care physicians (OPEN Stewardship): a multinational controlled interrupted time-series study. Tools to advance antimicrobial stewardship in the primary health care setting, where most antimicrobials are prescribed, are urgently needed. The aim of this study was to evaluate OPEN Stewarship (Online Platform for Expanding aNtibiotic Stewardship), an automated feedback intervention, among a cohort of primary care physicians. We performed a controlled, interrupted time-series study of 32 intervention and 725 control participants, consisting of primary care physicians from Ontario, Canada and Southern Israel, from October 2020 to December 2021. Intervention participants received three personalized feedback reports targeting several aspects of antibiotic prescribing. Study outcomes (overall prescribing rate, prescribing rate for viral respiratory conditions, prescribing rate for acute sinusitis, and mean duration of therapy) were evaluated using multilevel regression models. We observed a decrease in the mean duration of antibiotic therapy (IRR = 0.94; 95% CI: 0.90, 0.99) in intervention participants during the intervention period. We did not observe a significant decline in overall antibiotic prescribing (OR = 1.01; 95% CI: 0.94, 1.07), prescribing for viral respiratory conditions (OR = 0.87; 95% CI: 0.73, 1.03), or prescribing for acute sinusitis (OR = 0.85; 95% CI: 0.67, 1.07). In this antimicrobial stewardship intervention among primary care physicians, we observed shorter durations of therapy per antibiotic prescription during the intervention period. The COVID-19 pandemic may have hampered recruitment; a dramatic reduction in antibiotic prescribing rates in the months before our intervention may have made physicians less amenable to further reductions in prescribing, limiting the generalizability of the estimates obtained.IMPORTANCEAntibiotic overprescribing contributes to antibiotic resistance, a major threat to our ability to treat infections. We developed the OPEN Stewardship (Online Platform for Expanding aNtibiotic Stewardship) platform to provide automated feedback on antibiotic prescribing in primary care, where most antibiotics for human use are prescribed but where the resources to improve antibiotic prescribing are limited. We evaluated the platform among a cohort of primary care physicians from Ontario, Canada and Southern Israel from October 2020 to December 2021. The results showed that physicians who received personalized feedback reports prescribed shorter courses of antibiotics compared to controls, although they did not write fewer antibiotic prescriptions. While the COVID-19 pandemic presented logistical and analytical challenges, our study suggests that our intervention meaningfully improved an important aspect of antibiotic prescribing. The OPEN Stewardship platform stands as an automated, scalable intervention for improving antibiotic prescribing in primary care, where needs are diverse and technical capacity is limited. abstract_id: PUBMED:37183105 Antibiotic prescribing trends in primary care 2014-2022. Antimicrobial resistance (AMR) is a global healthcare challenge that governments and health systems are tackling primarily through antimicrobial stewardship (AMS). This should, improve antibiotic use, avoid inappropriate prescribing, reduce prescription numbers, aligning with national/international AMS targets. In primary care in the United Kingdom (UK) antibiotics are mainly prescribed for patients with urinary and respiratory symptoms (22.7% and 46% of all antibiotic prescriptions respectively). This study aimed to capture the time-series trends (2014-2022) for commonly prescribed antibiotics for respiratory and urinary tract infections in primary care in England. Trends for Amoxicillin, Amoxicillin sodium, Trimethoprim, Clarithromycin, Erythromycin, Erythromycin ethylsuccinate, Erythromycin stearate, Doxycycline hyclate, Doxycycline monohydrate and Phenoxymethylpenicillin (Penicillin V) were determined. In doing so providing evidence regarding meeting UK antibiotic prescribing rate objectives (a 15% reduction in human antibiotic use 2019-2024). Time series trend analysis of 62,949,272 antibiotic prescriptions from 6,370 General Practices in England extracted from the National Health Service (NHS) Business Services Authority web portal were explored. With additional investigation of prescribing rate trends by quintiles of the Index of Multiple Deprivation (IMD). Overall, there is a downwards trend in antibiotic prescribing for those explored. There is an association between IMD, geographical location, and higher antibiotic prescribing levels (prescribing hot spots). England has a well-documented North-South divide of health inequalities, this is reflected in antibiotic prescribing. The corona virus pandemic (COVID-19) impacted on AMS, with a rise in doxycycline and trimethoprim prescriptions notable in higher IMD areas. Since then, prescribing appears to have returned to pre-pandemic levels in all IMDs and continued to decline. AMS efforts are being adhered to in primary care in England. This study provides further evidence of the link between locality and poorer health outcomes (reflected in higher antibiotic prescribing). Further work is required to address antibiotic use in hot spot areas. abstract_id: PUBMED:29335323 Antibiotic prescribing quality for children in primary care: an observational study. Background: Overuse and inappropriate prescribing of antibiotics is driving antibiotic resistance. GPs often prescribe antibiotics for upper respiratory tract infections (URTIs) in young children despite their marginal beneficial effects. Aim: To assess the quality of antibiotic prescribing for common infections in young children attending primary care and to investigate influencing factors. Design And Setting: An observational, descriptive analysis, including children attending primary care sites in England and Wales. Method: The Diagnosis of Urinary Tract infection in Young children study collected data on 7163 children aged <5 years, presenting to UK primary care with an acute illness (<28 days). Data were compared with the European Surveillance of Antimicrobial Consumption Network (ESAC-Net) disease-specific quality indicators to assess prescribing for URTIs, tonsillitis, and otitis media, against ESAC-Net proposed standards. Non-parametric trend tests and χ2 tests assessed trends and differences in prescribing by level of deprivation, site type, and demographics. Results: Prescribing rates fell within the recommendations for URTIs but exceeded the recommended limits for tonsillitis and otitis media. The proportion of children receiving the recommended antibiotic was below standards for URTIs and tonsillitis, but within the recommended limits for otitis media. Prescribing rates increased as the level of deprivation decreased for all infections (P<0.05), and increased as the age of the child increased for URTIs and tonsillitis (P<0.05). There were no other significant trends or differences. Conclusion: The quality of antibiotic prescribing in this study was mixed and highlights the scope for future improvements. There is a need to assess further the quality of disease-specific antibiotic prescribing in UK primary care settings using data representative of routine clinical practice. abstract_id: PUBMED:34348560 Encouraging rational antibiotic prescribing behaviour in primary care - prescribing practice among children aged 0-4 years 2016-2018: an observational study. Objective: To study antibiotic prescriptions among 0- to 4-year-old children before and after implementing a quality project on prudent prescribing of antibiotics in primary healthcare in the capital region of Iceland. Design: An observational, descriptive, retrospective study using quantitative methodology. Setting: Primary healthcare in the Reykjavik area with a total population of approximately 220,000. Subjects: A total of 6420 children 0-4 years of age presenting at the primary healthcare centres in the metropolitan area over three years from 2016 to 2018. Main Outcome Measures: Reduction of antibiotic prescriptions and change in antibiotic profile. Data on antibiotic prescriptions for children 0-4 years of age was obtained from the medical records. Out-of-hours prescriptions were not included in the database. Results: The number of prescriptions during the study period ranged from 263.6 to 289.6 prescriptions/1000 inhabitants/year. A reduction of 9% in the total number of prescriptions between 2017-2018 was observed. More than half of all prescriptions were for otitis media, followed by pneumonia and skin infections. Amoxicillin accounted for over half of all prescriptions, increasing between 2016 and 2018 by 51.3%. During this period, the prescribing of co-amoxiclav and macrolides decreased by 52.3% and 40.7%, respectively. These changes were significant in all cases, p < 0.0001. Conclusion: The results show an overall decrease in antibiotic prescribing concurrent with a change in the choice of antibiotics prescribed and in line with the recommendations presented in the prescribing guidelines implemented by the Primary Healthcare of the Capital Area, and consistent with the project's goals.Key pointsA substantial proportion of antibiotic prescribing can be considered inappropriate and the antibiotic prescription rate is highest in Iceland of the Nordic countries.After implementing guidance on the treatment of common infections together with feedback on antibiotic prescribing, a decrease in the total number of prescriptions accompanied by a shift in the antibiotic profile was observed. abstract_id: PUBMED:37324137 Guidance on C-reactive protein point-of-care testing and complementary strategies to improve antibiotic prescribing for adults with lower respiratory tract infections in primary care. The world faces the threat of increasing antimicrobial resistance, and there is growing consensus that swift action must be taken to improve the rational use of antibiotics and increase the stewardship of antibiotics to safeguard this key resource in modern healthcare. This paper provides the perspective of an international group of experts on the role of C-reactive protein point-of-care testing (CRP POCT) and other complementary strategies to improve antibiotic stewardship in primary care, with regards to the diagnosis and treatment of adult patients presenting symptoms of lower respiratory tract infections (LRTIs). It provides guidance regarding the clinical assessment of symptoms in combination with C-reactive protein (CRP) results, at the point of care, to support the management decision, and discusses enhanced patient communication and delayed prescribing as complementary strategies to decrease the inappropriate use of antibiotics. Recommendation: CRP POCT should be promoted to improve the identification of adults presenting with symptoms of LRTIs in primary care who might gain additional benefit from antibiotic treatment. Appropriateness of antibiotic use can be maximized when CRP POCT is used together with complementary strategies such as enhanced communication skills training and delayed prescribing in addition to routine safety netting. abstract_id: PUBMED:32899123 Guideline Adherence in Antibiotic Prescribing to Patients with Respiratory Diseases in Primary Care: Prevalence and Practice Variation. Respiratory tract infections (RTIs) account for a large part of antibiotic prescriptions in primary care. However, guidelines advise restrictive antibiotic prescribing for RTIs. Only in certain circumstances, depending on, e.g., comorbidity, are antibiotics indicated. Most studies on guideline adherence do not account for this. We aimed to assess guideline adherence for antibiotic prescribing for RTIs as well as its variation between general practices (GPs), accounting for patient characteristics. We used data from electronic health records of GPs in the Netherlands. We selected patients who consulted their GP for acute cough, rhinitis, rhinosinusitis or sore throat in 2014. For each disease episode we assessed whether, according to the GP guideline, there was an indication for antibiotics, using the patient's sociodemographic characteristics, comorbidity and co-medication. We assessed antibiotic prescribing for episodes with no or an unsure indication according to the guidelines. We analysed 248,896 episodes. Diagnoses with high rates of antibiotic prescribing when there was no indication include acute tonsillitis (57%), strep throat (56%), acute bronchitis (51%) and acute sinusitis (48%). Prescribing rates vary greatly between diagnoses and practices. Reduction of inappropriate antibiotic prescribing remains a key target to tackle antimicrobial resistance. Insight into reasons for guideline non-adherence may guide successful implementation of the variety of interventions already available for GPs and patients. abstract_id: PUBMED:32611320 Treatment of urinary tract infections in Swiss primary care: quality and determinants of antibiotic prescribing. Background: Urinary tract infections are one of the most common reasons for prescribing antibiotics in primary care. Current guidelines recommend fosfomycin, nitrofurantoin, or trimethoprim - sulfamethoxazol as empiric first line antimicrobial agents in uncomplicated infections. However, there is evidence that the use of fluoroquinolones, which are no longer recommended, is still inappropriate high. We determined antibiotic prescription patterns, quality and factors affecting antibiotic prescriptions in urinary tract infections in primary care in Switzerland. Methods: From June 2017 to August 2018, we conducted a cross-sectional study in patients suffering from a urinary tract infection (UTI). Patient and general practitioners characteristics as well as antibiotic prescribing patterns were analysed. Results: Antibiotic prescribing patterns in 1.352 consecutively recruited patients, treated in 163 practices could be analysed. In 950 (84.7%) patients with an uncomplicated UTI the prescriptions were according to current guidelines and therefore rated as appropriate. Fluoroquinolones were prescribed in 13.8% and therefore rated as inappropriate. In multivariable analysis, the age of the general practitioner was associated with increasing odds of prescribing a not guideline recommended antibiotic therapy. Conclusions: We found a high degree of guideline conform antibiotic prescriptions in patients with an uncomplicated urinary tract infection in primary care in Switzerland. However, there is still a substantial use of fluoroquinolones in empiric therapy. abstract_id: PUBMED:36658638 Prescribing pattern of antibiotics by family physicians in primary health care. Purpose: Irrational prescription of antibiotics is an ongoing global public health concern, leading to antibiotic resistance. Understanding the prescribing pattern of antibiotics is important to tackling mal-prescription and antibiotic resistance. We aimed to investigate the pattern and factors affecting outpatients' antibiotic prescribing by family physicians in Primary Health Care (PHC). Methods: A cross-sectional study was conducted in 19 PHC facilities in Alborz province. Prescribing pattern of antibiotics was evaluated among 1068 prescriptions by family physicians. Prescribing pattern of antibiotics included prescriptions containing antibiotics, the number of antibiotics per prescription, type, name of antibiotic, and mal-prescription. Multiple logistic regression analysis was used to estimate the adjusted odds ratios and 95% confidence intervals. Results: Overall, 57% of the prescriptions had ≥ 1 antibiotic and the average number of antibiotics per prescription was 1.27. Amoxicillin was the commonly prescribed antibiotic. There was a significant relationship between age, sex, type of health insurance, work experience of the physician, and seasons with antibiotic prescribing (P < 0.05). In 59.31% of antibiotic prescriptions at least one of the scientific criteria was not fulfilled. In the final analysis, after adjusting for the potential confounders, field experts of physicians (OR = 1.59; 95% CI: 1.08-6.17), female sex (OR = 2.23; 95% CI: 1.18-4.21), and winter season (OR = 3.34; 95% CI: 1.26-8.15) were found associated factors with antibiotic prescribing. Conclusion: The average number of antibiotics per prescription and the percentage of irrational prescriptions were relatively high in this study. There is need to improve antibiotic prescribing patterns among family physicians working in primary health care. Answer: Yes, it is possible to decrease antibiotic prescribing in primary care. Various studies and interventions have demonstrated that targeted efforts can lead to a reduction in antibiotic prescriptions. For instance, an observational study in Iceland showed a 9% reduction in the total number of antibiotic prescriptions for children aged 0-4 years after implementing a quality project on prudent prescribing of antibiotics in primary healthcare (PUBMED:34348560). Similarly, a multinational controlled interrupted time-series study, OPEN Stewardship, found that primary care physicians who received personalized feedback reports prescribed shorter courses of antibiotics, although they did not write fewer antibiotic prescriptions (PUBMED:38411087). In England, a study capturing time-series trends from 2014 to 2022 for commonly prescribed antibiotics for respiratory and urinary tract infections in primary care showed an overall downward trend in antibiotic prescribing, suggesting adherence to antimicrobial stewardship efforts (PUBMED:37183105). Additionally, a study in Swiss primary care found a high degree of guideline-conform antibiotic prescriptions for patients with uncomplicated urinary tract infections, although there was still some use of fluoroquinolones that were not recommended by guidelines (PUBMED:32611320). Moreover, guidance on C-reactive protein point-of-care testing (CRP POCT) and other complementary strategies, such as enhanced communication skills training and delayed prescribing, have been recommended to improve antibiotic stewardship in primary care, particularly for adult patients with lower respiratory tract infections (PUBMED:37324137). However, the challenge of reducing antibiotic prescribing is complex and influenced by many factors, including GP attitudes, patient expectations, and external pressures such as time and financial issues (PUBMED:31633458). Therefore, a multifaceted approach targeting these drivers is necessary to achieve significant reductions in antibiotic prescribing and minimize the impact of antibiotic resistance (PUBMED:31633458).
Instruction: Does the colon adopt small bowel features in a small bowel environment? Abstracts: abstract_id: PUBMED:8712989 Does the colon adopt small bowel features in a small bowel environment? Background: Isoperistaltic colon interposition is one of several surgical options available in the treatment of short bowel syndrome. It functions primarily by slowing the rapid intestinal transit that follows small bowel resection (SBR). Previous studies have interposed distal colonic segments and although there is evidence of adaptation by the interposed colon it is not known whether these segments develop histologic or functional small bowel characteristics. Methods: In this study evidence was sought of metaplasia in the interposed segments after 80% SBR and, because proximal colon is a midgut derivative, the differences between proximal and distal colonic segments were examined. Results: There was no qualitative histological evidence of metaplasia by the interposed segments. There was a significant increase in crypt depth, mucosal thickness and maltase concentration of the interposed segments compared with the nontransposed colon of the controls. The maltase concentations were increased to the extent that they were not significantly different from the concentration present in normal ileum. Conclusions: Although there was no gross evidence of small bowel intestinalization by the interposed segments, there was evidence of adaptation which was not merely an extension of the adaptive process seen after SBR alone. These changes resulted in the colon taking on some small bowel features. There was no significant difference between proximal vs distal interposed segments. abstract_id: PUBMED:26940448 Gastrointestinal Motility, Part 2: Small-Bowel and Colon Transit. Because of the difficulty often encountered in deciding whether a patient's symptoms originate in the upper or lower gastrointestinal tract, gastrointestinal transit scintigraphy is a uniquely suited noninvasive, quantitative, and physiologic method of determining whether there is a motility disorder affecting the stomach, small bowel, or colon. Small-bowel and colon transit studies can be performed alone or together with gastric emptying studies after oral administration of an appropriately radiolabeled meal. It is hoped that newly published standards for performing these studies and the anticipated arrival of new Current Procedural Terminology codes in the United States for small-bowel and colon transit studies will increase their availability and use. abstract_id: PUBMED:24679432 Small bowel and colon perforation. For patients with small bowel and colonic perforations, a definitive diagnosis of the cause of perforation is not necessary before operation. Bowel obstruction and inflammatory bowel disease are the most common causes of nontraumatic intestinal perforations in industrialized countries, whereas infectious causes of intestinal perforations are more common in developing countries. Treatment of small bowel and colonic perforations generally includes intravenous antibiotics and fluid resuscitation, but the specific management of the bowel depends on the underlying cause of the perforation. abstract_id: PUBMED:26159586 Gastrointestinal Motility, Part 2: Small-Bowel and Colon Transit. Because of the difficulty often encountered in deciding whether a patient's symptoms originate in the upper or lower gastrointestinal tract, gastrointestinal transit scintigraphy is a uniquely suited noninvasive, quantitative, and physiologic method of determining whether there is a motility disorder affecting the stomach, small bowel, or colon. Small-bowel and colon transit studies can be performed alone or together with gastric emptying studies after oral administration of an appropriately radiolabeled meal. It is hoped that newly published standards for performing these studies and the anticipated arrival of new Current Procedural Terminology codes in the United States for small-bowel and colon transit studies will increase their availability and use. abstract_id: PUBMED:11464023 Interposed colon between remnants of the small intestine exhibits small bowel features in a patient with short bowel syndrome. We describe herein the case of a 48-year-old man who underwent emergency massive resection of the small intestine due to a strangulated ileus, which led to short bowel syndrome (SBS), as he was left with only 7 cm of jejunum and 8 cm of ileum with ileocecal valve. He then received interposition of a colon segment between the jejunum and ileum remnants isoperistaltically. For 24 months after the operation, he has been able to tolerate oral intake, but still requires partial home parenteral nutritional support during the night on a bimonthly basis. Biochemical and nutritional parameters, including the analysis of minerals and trace elements, indicated that the patient was in relatively good health. Histological examination revealed that the mucosa of the interposed colon showed hypertrophy and hyperplasia of the crypt glands, and cells resembling Paneth cells which are usually seen in the small intestine, suggesting that the colon segment exhibits adaptive changes to the small intestine. Colon interposition may be a useful technique in patients with SBS when the small bowel is too short for the other surgical considerations. abstract_id: PUBMED:34911841 Clinical features of 44 patients with small bowel Crohn's disease. Objectives: At present, there are many studies on Crohn's disease of terminal ileum and colon, but few studies on Crohn's disease of small intestine alone. This study aims to analyze the clinical features and therapeutic effect of small bowel in adult patients with Crohn's disease so as to strengthen the diagnosis and treatment for this disease. Methods: From July 1, 2015 to October 31, 2018, patients with small bowel Crohn's disease at Department of Gastroenterology, Third Xiangya Hospital of Central South University, were enrolled. At the same time, patients' demographics and clinical data were collected. Results: A total of 44 patients were small bowel Crohn's disease. Among them, 40 patients were male. The age at diagnosis was (35.8±10.3) years old and disease duration was (35.2±59.5) months. The subtypes included 29(65.9%) of ileum, 7(15.9%) of jejunum, 8(18.2%) of ileum with jejunum. There were 27(61.4%) of stricture behavior, 4(9.1%) of penetrating behavior, and 13(29.5%) of non-stricture and non-penetrating behavior. Endoscopic visible stricture (29/85, 34.1%) was common, followed by longitudinal ulcers (27/85, 31.8%). Non-caseate granulomatous were found in 2 cases (4.5%). The score of Crohn's disease activity index was correlated to hemoglobin, hematocrit, and erythrocyte sedimentation rate (all P<0.05). Among the most common symptom of abdominal pain (34/44, 77.3%), 11 patients (25.0%) developed complications (eg. intestinal obstruction) during the follow-up, and 66.7% of patients treated with mesalazine had complications. Conclusions: Patients with ileum account for a large proportion in patients with small bowel Crohn's disease. Stricture is more common in small bowel Crohn's disease. Stricture and longitudinal ulcer are more common under enteroscopy. Crohn's disease activity index is correlated to hemoglobin, red blood cell specific volume, and erythrocyte sedimentation rate. The most common complication is intestinal obstruction. Mesalazine is less effective on small bowel Crohn's disease. abstract_id: PUBMED:29985072 Reference curves for the normal fetal small bowel and colon diameters; their usefulness in fetuses with suspected dilated bowel. Objectives: To establish reference curves of normal fetal small bowel and colon diameters and to assess the clinical applicability.Method: Serial longitudinal ultrasound examinations at 4-week intervals between 20 to 41 weeks of gestation in 39 low-risk fetuses. The largest loop of the small bowel and colon was identified. The bowel lumen short axis was measured. Linear mixed modeling was used to determine individual developmental trajectories. Twenty-eight fetuses with suspected bowel dilatation were analyzed relative to the reference curves.Results: Development of the small bowel and colon diameters was best described by a linear and cubic model, respectively. The intraobserver and interobserver concordance were >0.94. In cases with suspected bowel dilatation, normal fetal outcome occurred if the bowel dilatation was transient. Progressive increase of fetal bowel diameter was associated with pathology after birth. Cases with small bowel pathology had a z-score >8 after 25 weeks of gestation.Conclusion: We provided the first ultrasound reference curves for normal fetal small bowel and colon diameters. Progressive increase in the fetal bowel diameter z-score was highly predictive of intestinal abnormalities after birth. Longitudinal follow-up of dilated fetal bowel is important to distinguish normality from disease. abstract_id: PUBMED:35004049 Persistent Small Bowel Obstruction due to Small Bowel Adenocarcinoma: A Case Report. Small bowel obstruction (SBO), of both partial and complete types, is a condition predominantly caused by intra-abdominal adhesions and hernias. However, a known but very uncommon cause of SBO is malignancies, which are more complicated than those caused by adhesions and hernias, and associated with poorer prognoses; of these, small bowel adenocarcinoma is an even rarer etiology of SBO. The majority of SBO cases that are treated have resolution of symptoms and do not have recurrence/persistence of the condition; however, reports suggest that approximately one-fifth of SBO cases that are treated will result in recurrence/persistence of SBO requiring repeat admission. Here we report the case of an 89-year-old female with a past medical history of right lower extremity deep venous thrombosis, inferior vena cava filter placement, iron deficiency anemia, diverticular disease, internal hemorrhoids, sick sinus syndrome, emphysema, hypertension, dyslipidemia, and hypothyroidism, who presented with diarrhea and intermittent dark stool. Abdominal computed tomography (CT) while in the emergency department initially showed possible ischemic bowel and SBO. After an exploratory laparotomy with small bowel resection and adhesiolysis, pathological analysis of a resected specimen showed infiltrating small bowel adenocarcinoma. Persistence of symptoms necessitated subsequent abdominal imaging, which demonstrated persistent SBO, which was treated with a second exploratory laparotomy with small bowel resection and end ileostomy. abstract_id: PUBMED:31773257 Characteristics and prognosis of isolated small-bowel Crohn's disease. Background: Previous studies suggest that Crohn's disease (CD) with both small bowel and colon involvement is more serious than colonic CD. However, clinical features of isolated small-bowel CD have not been fully investigated. Methods: In this retrospective case-control study, 89 patients were divided into two groups according to capsule endoscopy, ileocolonoscopy, and enhanced computed tomography results. The case group was isolated small-bowel CD (n = 50) and the control group was CD with both small bowel and colon involvement (n = 39). We collected data of the patients and analyze it. Results: In univariate analysis, isolated small-bowel CD group had higher percentage of stricture, Lewis score, platelet, plateletcrit and lower Harvey-Bradshaw index, high-sensitivity C-reactive protein, and erythrocyte sedimentation rate. In multivariable analysis, it had lower Harvey-Bradshaw index (p = 0.000), which meaned relatively mild symptoms. However, it had higher Lewis score (p = 0.007), which meaned more serious small-bowel inflammation. The Kaplan-Meier survival curve also suggested that isolated small-bowel CD patients were more likely to accept partial small intestinal resection surgery (p = 0.029). Conclusions: Isolated small-bowel CD is easily overlooked for milder clinical symptoms and relatively limited lessions, but severe small-bowel histological injury results in owing worse clinical outcomes. Clinicians should pay more attention to the isolated small-bowel CD and take aggressive intervention during therapy. abstract_id: PUBMED:32393134 A young male with epitheliolysis in the small bowel and colon. Background: Toxic epidermal necrolysis (TEN) is characterized by epidermal necrosis of various degree, and can affect the entire body surface. Affection of small bowel and colon is a rare manifestation of TEN. We present a case with an unusual appearance of epitheliolysis of the small bowel and colon due to a toxic reaction.Case report: A 19 year old male was diagnosed with ulcerative colitis (UC) after treatment with tetracyclines followed by isotretinoin due to acne vulgaris. Medical treatment did not lead to improvement of his UC, and an emergency resection of the colon was performed. Postoperatively his condition worsened due to small bowel epitheliolysis, and he recovered finally 6 months later after a partial small bowel resection.Conclusion: The true cause of this very serious situation with severe gastrointestinal involvement is not fully understood. In this case, successive treatment with antibiotics and isotretinoin given to a patient with an inflamed colon might have triggered the destruction of the epithelial barrier, leading to an immense immunological reaction in the intestinal wall. We suggest that physicians should be aware of UC-like symptoms occurring prior to or during treatment with tetracyclines and/or isotretinoin. Answer: Yes, the colon can adopt small bowel features in a small bowel environment. This phenomenon has been observed in cases of short bowel syndrome (SBS) where a segment of the colon is interposed between remnants of the small intestine. In a study examining the effects of isoperistaltic colon interposition after 80% small bowel resection (SBR), there was no qualitative histological evidence of metaplasia in the interposed segments. However, there was a significant increase in crypt depth, mucosal thickness, and maltase concentration in the interposed segments compared to the nontransposed colon of the controls. The maltase concentrations in the interposed segments were increased to levels that were not significantly different from those present in normal ileum, indicating that the colon had adapted and taken on some small bowel features (PUBMED:8712989). Additionally, a case report of a patient with SBS who underwent interposition of a colon segment between the jejunum and ileum remnants showed that the mucosa of the interposed colon exhibited hypertrophy and hyperplasia of the crypt glands, as well as cells resembling Paneth cells, which are typically seen in the small intestine. This suggests that the colon segment underwent adaptive changes to resemble the small intestine (PUBMED:11464023).
Instruction: Cesarean section for suspected fetal distress. Does the decision-incision time make a difference? Abstracts: abstract_id: PUBMED:9219122 Cesarean section for suspected fetal distress. Does the decision-incision time make a difference? Objective: To compare perinatal outcomes in patients at term (37 weeks) in whom the decision-incision time for cesarean delivery was due to suspected fetal distress. Study Design: All parturients who underwent cesarean delivery primarily for possible fetal distress during a three-year period were identified retrospectively. Student's t test and the chi 2 test were utilized, and P < .05 was considered significant. A regression analysis of decision-incision time and umbilical arterial pH was performed. Results: From 1991 to 1993, 1.3% (117/9,137) of term laboring patients underwent emergency cesarean delivery for the primary indication of possible fetal distress. In 61 patients (52%) the decision-incision time was 30 minutes, while it exceeded 30 minutes in the remaining 56 women. The two patient groups were similar in maternal demographics, antepartum complications, oxytocin usage, thick meconium, type of abnormal fetal heart rate tracing prompting surgery, use of amnioinfusion (41% vs. 36%), general anesthesia (97% vs. 93%), mean birth weight and Apgar score < 7 at five minutes. Three adverse outcomes were observed more frequently in association with decision-incision time > 30 minutes: (1) lower mean (+/-SD) umbilical arterial pH (7.16 +/- 0.15 vs. 7.26 +/- 0.06, P = .001), (2) pH < 7.00 (8/61 vs. 0/56, P = .005), and (3) admission to the neonatal intensive care unit (P = .008). When the incision was made longer than 30 minutes after the decision, there was no apparent adverse neonatal or infant outcome. Conclusion: Although a cesarean decision-incision time < or = 30 minutes is a desirable goal for the fetus possibly in distress, failure to achieve this goal is not associated with a measurable negative impact on newborn outcome. abstract_id: PUBMED:16816049 Decision-to-incision times and maternal and infant outcomes. Objective: To measure decision-to-incision intervals and related maternal and neonatal outcomes in a cohort of women undergoing emergency cesarean deliveries at multiple university-based hospitals comprising the National Institute of Child Health and Human Development Maternal-Fetal Medicine Units Network. Methods: All women undergoing a primary cesarean delivery at a Network center during a 2-year time span were prospectively ascertained. Emergency procedures were defined as those performed for umbilical cord prolapse, placental abruption, placenta previa with hemorrhage, nonreassuring fetal heart rate pattern, or uterine rupture. Detailed information regarding maternal and neonatal outcomes, including the interval from the decision time to perform cesarean delivery to the actual skin incision, was collected. Results: Of the 11,481 primary cesarean deliveries, 2,808 were performed for an emergency indication. Of these, 1,814 (65%) began within 30 minutes of the decision to operate. Maternal complication rates, including endometritis, wound infection, and operative injury, were not related to the decision-to-incision interval. Measures of newborn compromise including umbilical artery pH less than 7 and intubation in the delivery room were significantly greater when the cesarean delivery was commenced within 30 minutes, likely attesting to the need for expedited delivery. Of the infants with indications for an emergency cesarean delivery who were delivered more than 30 minutes after the decision to operate, 95% did not experience a measure of newborn compromise. Conclusion: Approximately one third of primary cesarean deliveries performed for emergency indications are commenced more than 30 minutes after the decision to operate, and the majority were for nonreassuring heart rate tracings. In these cases, adverse neonatal outcomes were not increased. Level Of Evidence: II-2. abstract_id: PUBMED:19190880 Cesarean section for suspected fetal distress, continuous fetal heart monitoring and decision to delivery time. Objective: To find out the efficacy of continuous fetal heart monitoring by analyzing the cases of cesarean section for nonreassuring fetal heart in labor, detected by cardiotocography (CTG) and correlating these cases with perinatal outcome. To evaluate whether a 30 minute decision to delivery (D-D) interval for emergency cesarean section influences perinatal outcome. Methods: This was a prospective observational study of 217 patients who underwent cesarean section at > or = 36 weeks for non-reassuring fetal heart in labor detected by CTG. The maternal demographic profile, specific types of abnormal fetal heart rate tracing and the decision to delivery time interval were noted. The adverse immediate neonatal outcomes in terms of Apgar score <7 at 5 minutes, umbilical cord thornH <7.10, neonates requiring immediate ventilation and NICU admissions were recorded. The correlation between non-reassuring fetal heart, decision to delivery interval and neonatal outcome were analyzed. Results: Out of 3148 patients delivered at > or = 36 weeks, 217 (6.8%) patients underwent cesarean section during labor primarily for non-reassuring fetal heart. The most common fetal heart abnormality was persistent bradycardia in 106 (48.8%) cases followed by late deceleration in 38 (17.5%) cases and decreased beat to beat variability in 17 (7.8%) cases. In 33 (15.2%) babies the 5 minutes Apgar score was <7 out of which 13 (5.9%) babies had cord thornH <7.10. Thirty three (15.2%) babies required NICU admission for suspected birth asphyxia. Rest 184 (84.7%) neonates were born healthy and cared for by mother. Regarding decision to delivery interval of < or =30 minutes versus >30 minutes, there was no significant difference in the incidence of Apgar score <7 at 5 minutes, cord pH <7.10 and new born babies requiring immediate ventilation. But the need for admission to NICU in the group of D-D interval < or = 30 minutes was significantly higher compared to the other group where D-D interval was >30 minutes. Conclusion: Non-reassuring fetal heart rate detected by CTG did not correlate well with adverse neonatal outcome. There was no significant difference in immediate adverse neonatal outcome whether the D-D time interval was < or = 30 minutes or >30 minutes; contrary to this, NICU admission for suspected birth asphyxia in </= 30 minutes group was significantly higher. abstract_id: PUBMED:37808561 Exploring factors influencing skin incision to the delivery time and their impact on neonatal outcomes among emergency cesarean deliveries indicated for non-reassured fetal heart rate status. Background: The time interval between skin incision and delivery (S-D) is crucial in determining neonatal outcome; however, little is known about the influencing factors and their impact on neonatal outcomes, particularly among emergency cesarean deliveries (ECD) indicated for fetal distress. This study investigated the factors influencing S-D time and their effects on neonatal outcomes among mothers who underwent ECD for non-reassured fetal heart rate status. Methods: This retrospective cohort study involved 426 mother-infant pairs over four years. We retrieved data from the medical records, including baseline characteristics, perioperative data, and neonatal outcomes. Using multivariable logistic regression analysis, adjusted odd ratios, and a 95% confidence interval, potential factors influencing S-D time and their impacts on neonatal outcomes were assessed. A p-value of less than 0.05 was considered statistically significant. Results: Factors independently associated with longer S-D time (>8 min) were mothers who had previous CD (AOR 5.9: 95% CI 2.2-16.1), obese mothers (AOR 6.2: 95% CI 1.6-24.5), and the second stage of labor (AOR 5.3: 95% CI 2.4-11.7). Adverse neonatal outcomes, including a 5th minute Apgar score of less than 7, the need for NICU admission, and neonatal death, were significantly higher in the longer S-D time interval [47.7% vs. 8.9%; p-value 0.001], [21.9% vs. 9.1%; p-value 0.001], and [32% vs. 11.8%; p-value = 0.004], respectively. Obese mothers and the second stage of labor, but not previous CD, adversely impact neonatal outcomes. Conclusion: Longer S-D times are significantly associated with adverse neonatal outcomes. Factors that prolong the time interval between skin incision and delivery may or may not necessarily be associated with adverse neonatal outcomes. Considering surgical techniques that shorten the incision-delivery time and preparation for advanced neonatal care for risky subjects would help reduce detrimental neonatal consequences. abstract_id: PUBMED:25539409 Time from uterine incision to delivery and hypoxic neonatal outcomes. Objective: The objective of this study was to estimate the association between time from uterine incision to delivery and hypoxic neonatal outcomes in nonanomalous term infants. Methods: All women undergoing in-labor term cesarean deliveries (CDs) in the first 2 years of an ongoing prospective cohort study were included. The primary exposure was time in seconds from uterine incision to delivery. The primary outcome was a composite of hypoxia-associated neonatal outcomes, defined as at least one of: seizures, hypoxic ischemic encephalopathy, need for hypothermia treatment, and death within 7 days. Results: Of 812 patients who underwent in-labor CD, the composite hypoxia outcome occurred in 18 (2.2%) neonates. There was no significant difference in the rate of hypoxic morbidity with increasing increments of 60 seconds from uterine incision to delivery (p = 0.35). There was a significantly increased risk of hypoxic morbidity in those delivered in the highest quintile (>240 seconds) compared with those in the lowest quintile (≤ 60 seconds) in cesareans performed for an indication other than nonreassuring fetal status (relative risk, 5.58; 95% confidence interval, 1.30-23.91). Conclusion: Overall, duration from uterine incision to delivery for in-labor cesareans of nonanomalous term infants was not associated with an increase in risk of hypoxia-associated morbidities. abstract_id: PUBMED:28625305 Decision-to-delivery interval: Is 30 min the magic time? What is the evidence? Does it work? Emergency caesarean section is required when delivery can reduce the risk to the life of the mother or foetus. When a caesarean section is indicated for foetal compromise, a decision-to-delivery interval of 30 min (or less) has been suggested as the ideal time frame within which an obstetric team should achieve delivery. In theory, a short decision-to-delivery interval may minimise intra-uterine hypoxia and improve neonatal outcome. Current medical evidence does not support this time frame. There are certain indications for caesarean section that necessitate a much shorter decision-to-delivery interval, but evidence suggests that the majority of neonates may be safely delivered within a longer interval of time. Current tools available for the diagnosis of foetal distress are imperfect, and the concept of foetal distress is poorly defined. Future research should focus on finding accurate means of diagnosing foetal distress in labouring women and establishing universally agreed evidence-based decision-to-delivery targets without compromising maternal or foetal safety. abstract_id: PUBMED:10845173 Cesarean delivery for suspected fetal distress among preterm parturients. Objective: Among preterm parturients (< 37 weeks) who underwent cesarean delivery for suspected fetal distress, to determine the factors associated with decision-incision time (DIT) of < or = 30 minutes and to assess if umbilical arterial pH < 7.10 is more common with DIT < or = 30 or > 30 minutes. Study Design: The peripartum course of all patients who had cesareans for suspected fetal distress over three years was reviewed. The inclusion criteria were reliable gestational age < 37 weeks and a single indication for cesarean delivery, suspected fetal distress. Twenty antepartum and intrapartum factors were used in a univariate analysis. Results: The mean DIT among the 84 parturients was 30.5 +/- 21.2 minutes, and 63% of patients had surgery started within 30 minutes. The incidence of pH < 7.10 was 20%. Multivariate analysis indicated that the two factors significantly associated with prolonged time to surgery were tachycardia with decreased variability (odds ratio [OR] 5.9, 95% confidence interval [CI] 1.6-21.6) and use of spinal anesthesia (OR 6.2, 95% CI 1.1-35.0). Though none of the 20 variables had significant univariate associations with neonatal acidosis at alpha = .05, those with P < .20 were considered in multiple logistic regression analysis. None of the 20 factors were associated with pH < 7.10, including DIT of > or = 30 minutes (OR 0.26, 95% CI 0.06-1.03). Conclusion: DIT is likely to be > 30 minutes if cesarean delivery is due to decreased fetal heart variability or if spinal anesthesia is utilized; neonatal acidosis, however, is not significantly associated with a prolonged interval. abstract_id: PUBMED:32028489 Association Between Time of Day and the Decision for an Intrapartum Cesarean Delivery. Objective: To examine whether the decision and indications for performing intrapartum cesarean delivery vary by time of day. Methods: We conducted a secondary analysis of a multicenter observational cohort of 115,502 deliveries (2008-2011), including nulliparous women with term, singleton, nonanomalous live gestations in vertex presentation who were attempting labor. Those who attempted home birth, or underwent cesarean delivery scheduled or decided less than 30 minutes after admission were excluded. Time of day was defined as cesarean delivery decision time among those who delivered by cesarean and delivery time among those who delivered vaginally, categorized by each hour of a 24-hour day. Primary outcomes were decision to perform cesarean delivery and the indications for cesarean delivery (labor dystocia, nonreassuring fetal status, or other indications). Secondary outcomes included whether a dystocia indication adhered to standards promoted to reduce cesarean delivery rates. Bivariate analyses were performed using χ and Kruskal-Wallis tests for categorical and continuous outcomes, respectively, and generalized additive models with smoothing splines explored nonlinear associations without adjustment for other factors. Results: Seven thousand nine hundred fifty-six (22.1%) of 36,014 eligible women underwent cesarean delivery. Decision for cesarean delivery (P<.001) decreased from midnight (21.2%) to morning, reaching a nadir at 10:00 (17.9%) and subsequently rising to peak at 21:00 (26.2%). The frequency of cesarean delivery for dystocia also was significantly associated with time of day (P<.001) in a pattern mirroring overall cesarean delivery. Among cesarean deliveries for dystocia (n=5,274), decision for cesarean delivery at less than 5 cm dilation (P<.001), median duration from 5 cm dilation to cesarean delivery decision (P=.003), and median duration from complete dilation to cesarean delivery decision (P=.014) all significantly differed with time of day. The frequency of nonreassuring fetal status and "other" indications were not significantly associated with time of day (P>.05). Conclusion: Among nulliparous women who were attempting labor at term, the decision to perform cesarean delivery, particularly for dystocia, varied with time of day. Some of these differences correlate with labor management differences, given the changing frequency of latent phase cesarean delivery and median time in active phase. abstract_id: PUBMED:12066937 What is a reasonable time from decision-to-delivery by caesarean section? Evidence from 415 deliveries. Objective: To determine how long it takes from the decision to achieve delivery by non-elective caesarean section (DDI), the influences on this interval, and the impact on neonatal condition at birth. Design: Twelve months prospective data collection on all non-elective caesarean sections. Methods: Prospective collection of data relating to all caesarean sections in 1996 in a major teaching hospital obstetric unit was conducted, without the knowledge of the other clinicians providing clinical care. Details of the indication for section, the day and time of the decision and the interval till delivery were recorded as well as the seniority of the surgeon, and condition of the baby at birth. Results: The mean time from decision-to-delivery for 100 emergency intrapartum caesarean sections was 42.9 minutes for fetal distress and 71.1 minutes for 230 without fetal distress (P < 0.0001). For 22 'crash' sections the mean time from decision-to-delivery was 27.4 minutes; for 13 urgent antepartum deliveries for fetal reasons it was 124.7 minutes and for 21 with maternal reasons it was 97.4 minutes. The seniority of the surgeon managing the patient did not appear to influence the interval, nor did the time of day or day of the week when the delivery occurred. Intrapartum sections were quicker the more advanced the labour, and general anaesthesia was associated with shorter intervals than regional anaesthesia for emergency caesarean section for fetal distress (P < 0.001). Babies born within one hour of the decision tended to be more acidaemic than those born later, irrespective of the indication for delivery. Babies tended to be in better condition when a time from decision-to-delivery was not recorded than those for whom the information had been recorded. Conclusion: Fewer than 40% intrapartum deliveries by caesarean section for fetal distress were achieved within 30 minutes of the decision, despite that being the unit standard. There was, however, no evidence to indicate that overall an interval up to 120 minutes was detrimental to the neonate unless the delivery was a 'crash' caesarean section. These data thus do not provide evidence to sustain the recommendation of a standard of 30 minutes for intrapartum delivery by caesarean section. abstract_id: PUBMED:17077260 Decision-to-incision times and maternal and infant outcomes. N/A Answer: The decision-incision time in cesarean sections for suspected fetal distress has been a topic of research to determine if it impacts perinatal outcomes. Several studies have investigated whether a shorter decision-incision time, often considered as 30 minutes or less, leads to better neonatal outcomes. One study found that while a decision-incision time of ≤30 minutes is a desirable goal for fetuses possibly in distress, failure to achieve this goal is not associated with a measurable negative impact on newborn outcome (PUBMED:9219122). Another study reported that adverse neonatal outcomes were not increased when the cesarean delivery was commenced more than 30 minutes after the decision to operate (PUBMED:16816049). Similarly, another research indicated that there was no significant difference in immediate adverse neonatal outcome whether the decision to delivery (D-D) time interval was ≤30 minutes or >30 minutes (PUBMED:19190880). However, a study exploring factors influencing skin incision to the delivery time found that longer times are significantly associated with adverse neonatal outcomes, and factors that prolong this time interval may or may not necessarily be associated with adverse neonatal outcomes (PUBMED:37808561). Another study concluded that overall, the duration from uterine incision to delivery for in-labor cesareans of nonanomalous term infants was not associated with an increase in risk of hypoxia-associated morbidities (PUBMED:25539409). The evidence suggests that while a shorter decision-incision time is generally aimed for, especially in cases of suspected fetal distress, it may not always correlate with better neonatal outcomes. Some studies have found no significant difference in outcomes whether the decision-incision time was under or over 30 minutes, while others have noted that longer times can be associated with adverse outcomes. It is important to note that the decision-incision time should be considered alongside other clinical factors and the specific circumstances of each case.
Instruction: Are interventions recommended by pharmacists during Home Medicines Review evidence-based? Abstracts: abstract_id: PUBMED:20722881 Are interventions recommended by pharmacists during Home Medicines Review evidence-based? Introduction: Over the past decade medication review services have been implemented in many countries, including Australia, UK and USA. Although, the attitudes and barriers to the implementation of evidence-based medicine have been investigated, the extent to which medication review recommendations are evidence-based is not known. Objective: To determine (1) the extent to which pharmacist's recommendations during medication review services were consistent with the evidence-based guides at the time of the review; and (2) the nature and extent of drug-related problems (DRPs) and the actions recommended by pharmacists to resolve DRPs. Method: A retrospective review of Home Medicines Review cases performed on 224 community-dwelling older people (65 years or older). The chi-squared test for categorical variables was used to compare the proportion of recommendations that were evidence-based and recommendations for which no evidence could be identified in the most common Australian information sources. DRPs and the actions recommended to resolve the DRPs were classified according to previously employed criteria. Results: Pharmacists made a total of 1114 therapeutic recommendations to general practitioners, of which 964 required supporting pharmacotherapeutic evidence. The majority (94%) of the pharmacists' recommendations were in accordance with the evidence-based guides, as compared with the recommendations for which no evidence could be identified (P < 0.001). Pharmacists reported that 98% of the patients had at least one problem with use of their medication detected. Conclusion: The majority of the actions recommended by pharmacists during the medication review process were consistent with the literature embedded in key Australian information sources. Medication management in older people is complex and challenging, as highlighted by the DRPs identified in 98% of our patients. A suitably trained pharmacist, with full access to the patient, medical record and supporting resources, can help to improve the quality use of medicines in this at-risk population. abstract_id: PUBMED:25692718 Investigating general practitioner engagement with pharmacists in Home Medicines Review. The Home Medicines Review (HMR) involves a home visit from an accredited HMR pharmacist to review a patient's medicines, and a report to the patient's general practitioner (GP) with recommendations for improving medicine management. Notwithstanding evidence supporting the benefits of medicines review, broad uptake by GPs in Australia remains low. We developed the 10-item Home Medicines Review Inventory (HMRI) to assess GP attitudes and behaviours regarding the HMR and modelled factors associated with the frequency of GP engagement with HMRs. Four items assessing frequency of behaviours and six items assessing attitudes related to HMR were answered by 180 GPs in a national GP survey. The HMRI's psychometric properties were examined with exploratory factor analysis (EFA), Rasch analysis, and correlations with related instruments. Structural equation modelling was used to evaluate factors associated with HMR-related behaviours. EFA and Rasch analysis generally supported the current format of the instrument. Attitudes to HMR, gender, previous positive experiences with pharmacists, a system for working together, and participation in joint education activities predicted frequency of HMR-related behaviours. Although GPs' attitudes to HMR were generally positive, HMR-related behaviours tended to occur with low frequency. This instrument may be used to investigate why HMR uptake has thus far been low and also help identify opportunities for building interprofessional communication and trust between GPs and pharmacists. abstract_id: PUBMED:29656935 A review of interventions ≥ 6 months by pharmacists on adherence to medicines in cardiovascular disease: Characteristics of what works and what doesn't. Background: Nonadherence to cardiovascular medicines occurs in 60% of subjects with chronic cardiovascular disease and leads to poor outcomes. In an attempt to improve adherence and cardiovascular outcomes, interventions are often used. Interventions may involve a pharmacist, but it is not always clear whether these are effective. Objectives: The primary objective of this review is to determine whether interventions by pharmacists, alone, discussing adherence to medicines, improve adherence to medicines for cardiovascular disease. Subsequently, the review links the characteristics of the individual studies with effectiveness or lack of effect. The second objective of this review is to consider whether any improvement in adherence with interventions by pharmacist is associated with better clinical outcomes. Methods: A literature search of PubMed and CINAHL for 'pharmacist', 'medicine' with 'adherence' or 'compliance' or 'persistence' was undertaken. To be included in this review, papers had to be of a pharmacist working alone and in person in an intervention of subjects with hypertension, hyperlipidemia (prior to or after a coronary artery event) or heart failure. The paper had to be published in a peer review journal, with a measure of adherence to medicines. The effectiveness of the intervention had to be evaluated after ≥6 months. Results: Only 3 out of 8 interventions by pharmacists in hypertension, and 5 out of 12 interventions in subjects with hyperlipidemia led to improved adherence to medicines. In contrast, all 6 interventions by a pharmacist in subjects with heart failure were successful in improving adherence. One characteristic of successful interventions by pharmacists to improve adherence to cardiovascular medicines is that they must be more than brief/single interventions. A second characteristic is that the intervention should not involve subjects who are already highly adherent, as it is unlikely adherence can be improved in this population. Only 2 of 3 successful interventions in hypertension were associated with small reductions in blood pressure, and only one intervention in hyperlipidemia was shown to decrease LDL-cholesterol to a small extent. In subjects with heart failure, 5 of the 6 successful studies of the successful interventions by pharmacists to increase adherence also showed improved clinical outcomes. Conclusions: When planning an intervention to improve adherence to medicines and cardiovascular outcomes in subjects with hypertension or hyperlipidemic, by a pharmacist alone, or as part of a multi-faceted interventions, it is essential to use an intervention that has been shown to be effective, as most interventions are not effective at improving adherence or only improve adherence and clinical outcomes to a small extent. In heart failure, there is well documented evidence of interventions by pharmacists that do improve clinical outcomes, which should be adopted widely. abstract_id: PUBMED:25322893 Attitudes of pharmacists to provision of Home Medicines Review for Indigenous Australians. Background: Home Medicines Reviews could improve the quality use of medicines and medicines adherence among Aboriginal people. Despite high level of chronic disease very few Home Medicines Review are currently being conducted for Aboriginal and Torres Strait Islander people. Objective: The aim of this research was to explore the barriers and facilitators from the pharmacists' perspective for the provision of Home Medicines Review to Aboriginal people attending Aboriginal Health Services. Setting: A cross sectional survey was used to gather demographic, qualitative and quantitative data from 945 Australian pharmacists accredited to undertake Home Medicines Review. Method: The survey consisted of 39 items which included both closed, open ended and Likert scale questions. Data was extracted from the online survey tool and analysed. Descriptive statistics were used to explore the quantitative data while qualitative data was thematically analysed and coded for emergent themes. Main Outcome Measure: Number of Home Medicines Review conducted for Aboriginal and Torres Strait Islander patients. Results: A total of 187 accredited pharmacists responded to the survey. They reported that barriers to Home Medicines Review to Aboriginal patients may include lack of understanding of cultural issues by pharmacists; lack of awareness of Home Medicines Review program by Aboriginal Health Service staff; difficulties in implementation of Home Medicine Review processes; burdensome program rules; the lack of patient-pharmacist relationship, and the lack of pharmacist-Aboriginal Health Service relationship. Conclusion: Changes to the medication review processes and rules are needed to improve the accessibility of the Home Medicine Review program for Aboriginal and Torres Strait Islander people. Improved relationships between pharmacists and Aboriginal health service staff, would increase the likelihood of more Home Medicines Reviews being conducted with Aboriginal and Torres Strait Islander patients. abstract_id: PUBMED:31531815 A systematic review of pharmacists' interventions to support medicines optimisation in patients with visual impairment. Background People with visual impairment often report poorer health and encounter many challenges when using medicines. Pharmacists can play a significant role in optimising medicines use for these patients. However, little is known about pharmacists' current practices when providing services to this population nor the impact of such services, if any, on medicines optimisation-related outcomes. Aim of the review This systematic review aims to identify the types, and assess the effectiveness of, interventions provided by pharmacists on medicines optimisation-related outcomes. Method Systematic searches of the following electronic databases were carried out from date of inception to March 2018: Cochrane Library; MEDLINE; EMBASE; International Pharmaceutical Abstracts; Scopus; and Cumulative Index to Nursing and Allied Health Literature. Several trial registries and grey literature resources were also searched. Any randomised controlled trials, non-randomised controlled trials, controlled before-and-after studies, or interrupted time series analyses reporting on interventions provided by pharmacists to adult visually impaired patients and/or their caregivers in order to improve medicines optimisation-related outcomes of medicine safety, adherence, patient satisfaction, shared decision making, or quality of life were included. Results A total of 1877 titles/abstracts were screened, and 27 full text articles were assessed for eligibility. On examination of full texts, no studies met the inclusion criteria for this review. Conclusion This review highlights the need for future research that would be vital for promoting the safe and effective use of medicines and the delivery of pharmaceutical care services to people with visual impairment. abstract_id: PUBMED:32162188 Home medicines reviews and residential medication management reviews in Western Australia. Background Australian government funding for Residential Medication Management Reviews and Home Medicines Reviews commenced in 1997 and 2001 respectively. Limited data are available on their provision in Australia. Objective To investigate the extent and characteristics of Home Medicines Review and Residential Medication Management Review services provided by accredited pharmacists practising in Western Australia. Setting Pharmacists in Western Australia accredited by the Australian Association of Consultant Pharmacy or Society of Hospital Pharmacists of Australia. Method A paper questionnaire was developed and sent to 198 accredited pharmacists in Western Australia in June 2017. Simple descriptive statistics summarised demographic information and other responses. Logistic regression evaluated factors associated with the frequency of provision of Home Medicines Reviews. Main outcome measure Frequency and factors influencing services provided. Results Of 102 (51.5%) questionnaires returned, 67 (65.7%) respondents were female. Many were aged between 31 and 40 years (53; 52.0%). Most were accredited by the Australian Association of Consultant Pharmacy (101; 99.0%) and mainly offered Home Medicines Reviews (70; 68.6%). Home Medicines Reviews provided over the previous 12 months were limited in frequency with one quarter providing either 1-10 (27; 26.5%) or 21-50 (28; 27.5%) reviews. The median "average" preparation, interview and report writing times, plus communication with other health professionals aggregated to 175.0 min (interquartile range: 140.0-235.0 min) for Home Medicine Reviews and 110.0 min (90.0-140.0) for Residential Medication Management Reviews. Pharmacists born overseas and those who were accredited for a longer time were associated with performing 51 or more Home Medicines Reviews annually. Only one-third (36/101; 35.6%) agreed the current payment was appropriate. Most agreed their Home Medicines Reviews (92/96; 95.8%) and Residential Medication Management Reviews (26/28; 92.9%) provided improved patient outcomes. Over 97% of accredited pharmacists intended to continue to remain accredited. Conclusions Wide variations were evident in the times taken for tasks associated with performing reviews. Most respondents considered their medication reviews contributed to improved patient outcomes. The wide variation in times taken for the reviews suggests a tiered structure for service provision, with appropriate payment within each tier, since most consider current remuneration inadequate. abstract_id: PUBMED:23275665 An evidence-based course in complementary medicines. Objective: To evaluate the impact of an evidence-based course in complementary medicines on the attitudes, knowledge, and professional practice behavior of undergraduate pharmacy students. Design: A required 12-week evidence-based complementary medicine course was designed and introduced into the third-year undergraduate pharmacy curriculum. The course included a combination of traditional lectures, interactive tutorial sessions, and a range of formal assessments. Assessment: Pre- and post-course survey instruments were administered to assess changes in students' attitudes, perceptions, knowledge, and the likelihood they would recommend the use of complementary medicines in a pharmacy practice environment. Conclusion: Completion of a required evidence-based complementary medicines course resulted in a positive change in pharmacy students' perceptions of the value of various complementary medicines as well as in their willingness to recommend them, and provided students with the required knowledge to make patient-centered recommendations for use of complementary medicines in a professional pharmacy practice setting. These findings support the need for greater evidence-based complementary medicine education within pharmacy curricula to meet consumer demand and to align with pharmacists' professional responsibilities. abstract_id: PUBMED:29770700 Quality Use of the Pathology Data in Home Medicines Reviews: A Retrospective Evaluation. Background: Laboratory tests can be important tools for the assessment of pharmacotherapy. Nonetheless, there are no previous studies that have explicitly focused on the role of pathology data in Home Medicines Reviews (HMR), an Australian medication review program. Objective: Evaluate pharmacists' recommendations regarding laboratory testing in the medication review process. Methods: This retrospective review of HMRs assessed the prevalence of the pathology data provided by general practitioners. Additionally, the pharmacists' recommendations based on these laboratory data were compared with national and international guidelines. Results: In total, 580 reports were evaluated. Of these, 179 reports did not contain any pathology data. Pharmacists commented on provided laboratory values in 324 reports and recommended further testing in 473 reports. Not all suggestions were related to previous values or were in line with guidelines. Most recommendations were regarding vitamin D and lipids (69% and 62% of medication review reports, respectively). Particularly, regarding renal impairment, pharmacists used their knowledge on dose adjustments and contraindications. In relation to full blood count, vitamin B12, and thyroid function, unjustified screenings were often recommended. In 26% of all reports, the pharmacists requested an array of tests without explaining the necessity for these tests. Conclusion and Relevance: Pharmacists provided useful advice based on the pathology data, which was concordant with national and international guidelines; however, in some cases, there was no rationale for the test recommendations provided. The outcome of the HMR program might be further enhanced if pharmacists had direct access to the patients' pathology data. abstract_id: PUBMED:29875880 The Utility of Home-Practice in Mindfulness-Based Group Interventions: A Systematic Review. A growing body of research supports the efficacy of mindfulness-based interventions (MBIs). MBIs consider home-practice as essential to increasing the therapeutic effects of the treatment. To date however, the synthesis of the research conducted on the role of home-practice in controlled MBI studies has been a neglected area. This review aimed to conduct a narrative synthesis of published controlled studies, evaluating mindfulness-based group interventions, which have specifically measured home-practice. Empirical research literature published until June 2016 was searched using five databases. The search strategy focused on mindfulness-based stress reduction (MBSR), mindfulness-based cognitive therapy (MBCT), and home-practice. Included studies met the following criteria: controlled trials, participants 18 years and above, evaluations of MBSR or MBCT, utilised standardised quantitative outcome measures and monitored home-practice using a self-reported measure. Fourteen studies met the criteria and were included in the review. Across all studies, there was heterogeneity in the guidance and resources provided to participants and the approaches used for monitoring home-practice. In addition, the guidance on the length of home-practice was variable across studies, which indicates that research studies and teachers are not adhering to the published protocols. Finally, only seven studies examined the relationship between home-practice and clinical outcomes, of which four found that home-practice predicted improvements on clinical outcome measures. Future research should adopt a standardised approach for monitoring home-practice across MBIs. Additionally, studies should assess whether the amount of home-practice recommended to participants is in line with MBSR/MBCT manualised protocols. Finally, research should utilise experimental methodologies to explicitly explore the relationship between home-practice and clinical outcomes. abstract_id: PUBMED:24841328 An evaluation of an intervention designed to improve the evidence-based supply of non-prescription medicines from community pharmacies. Objectives: The aims of this study were to conduct the proof of concept study and to develop and evaluate an educational intervention that promotes the evidence-based supply of non-prescription medicines (NPMs). Method: An educational intervention was delivered to pharmacy assistants and pharmacists in three pharmacies in England. The intervention included the provision of summaries of evidence for the treatment of four minor ailments and resulted in the preparation of evidence-based portfolios for the treatment of the following ailments: athlete's foot, cough, nasal congestion and period pain. The effect of the intervention was evaluated using a combination of direct overt observation, vignettes, self-reported behaviour and interviews. Key Findings: Evaluation data were collected from the three pharmacies. Data were derived from 3 pharmacists and 13 assistants, of whom 10 (3 pharmacists; 7 assistants) attended the training event. Comparing pre- and post-intervention practice, 8/11 (pre-) versus 5/6 (post-) observed, 46/80 versus 62/80 vignette and 25/30 versus 39/40 self-reported recommendations were evidence based. Prior to the intervention, 3/16 participants understood the role of evidence regarding the supply of NPMs compared with 16/16 post-intervention. Participants reported relying upon experiential knowledge to inform their decision making prior to the educational intervention. Thereafter, the participants reported using evidence to a greater extent. Barriers and facilitators for evidence-based practice were also identified. Conclusion: A one-off educational intervention increased participants' self-reported awareness and potential application of evidence to inform their supply of NPMs. Further research is needed to assess the effectiveness, long-term impact, generalisability and cost-effectiveness of this intervention for a wider range of common conditions. Answer: Yes, interventions recommended by pharmacists during Home Medicines Review (HMR) are predominantly evidence-based. A retrospective review of HMR cases performed on 224 community-dwelling older people revealed that the majority (94%) of the pharmacists' recommendations were in accordance with evidence-based guides, as compared with the recommendations for which no evidence could be identified (PUBMED:20722881). This indicates that pharmacists are largely making recommendations that are consistent with the literature and guidelines available in key Australian information sources.
Instruction: Should patient-rated performance status affect treatment decisions in advanced lung cancer? Abstracts: abstract_id: PUBMED:18827609 Should patient-rated performance status affect treatment decisions in advanced lung cancer? Introduction: The Eastern Cooperative Oncology Group (ECOG) score is a well known prognostic factor and almost always used to determine eligibility for clinical trials. The patient-rated performance status score (Pt-PS), section of the patient generated subjective global assessment scale, has identical criteria to the physician-rated ECOG scale (MD-PS). We compared the Pt-PS with MD-PS in patients with advanced non-small cell lung cancer and compared the effect of each rating on eligibility for a hypothetical clinical trial. Methods: Consecutive patients with newly diagnosed advanced non-small cell lung cancer completed a patient generated subjective global assessment self-rated questionnaire, which was then correlated (kappa statistic) with the ECOG PS recorded at the same time. Patients were treated with standard chemotherapy. Survival was determined using Kaplan-Meier statistics. Results: One hundred nine patients (M:F-54:55) were recruited. Pt-PS differed from MD-PS in 59 (54%) instances (p = 0.0001). When scores were not congruent, 41/59 (69%) patients evaluated themselves as having a worse PS than the physician's rating. Pt-PS was 0 to 1 in 60 (55%) patients whereas MD-PS was 0 to 1 in 78 (72%) patients. The functional status irrespective of evaluator was predictive of survival (p = 0.001 for MD-PS and p = 0.001 for Pt-PS). However, the median survival in those with MD-PS >/=2 was 3.3 (CI; 1.7-4.9) months whereas individuals with Pt-PS >/=2 had a median survival of 6.2 (CI; 5.4-6.9) months. Conclusions: Pt-PS and MD-PS were not congruent in over half of the cases, with Pt-PS scores usually poorer. Almost half the patients would have excluded themselves from a hypothetical clinical trial (Pt-PS >/=2). This requires prospective evaluation. abstract_id: PUBMED:30761639 Performance status agreement assessed by the patient and clinician in a rapid access lung cancer service: Can either predict completion of treatment? Introduction: Clinician-rated performance status (C-PS) is used routinely to predict whether patients are fit enough to undergo treatment for lung cancer. However, a good proportion of those with seemingly good C-PS do not go on to receive, let alone complete treatment. The value of C-PS in accurately predicting this is unclear, as is the merit of evaluating patient-rated PS (P-PS). Objectives: Our aim was to prospectively assess Eastern Cooperative Oncology Group (ECOG) and Karnofsky C-PS and P-PS in patients attending a rapid access lung cancer service (RALCS), the agreement between these scores, and whether any score could predict receipt and completion of multidisciplinary team (MDT)-planned treatment. Results: ECOG and Karnofsky scores were highly correlated (Spearman's rho -0.79 for C-PS and -0.828 for P-PS, both p < 0.001). There was poor agreement between C-PS and P-PS scores (kappa statistics 0.275 for ECOG and 0.172 for Karnofsky); however, clinicians did not tend to consistently under- or overestimate patients' scores. ECOG P-PS showed an association with completion of MDT-planned treatment (p = 0.007), but C-PS did not. Conclusion: Clinician-rated PS was not associated with completion of MDT-planned treatment, but there may be a role for patient-rated PS. C-PS and P-PS were poorly correlated in a RALCS. abstract_id: PUBMED:15599828 Challenges in defining and identifying patients with non-small cell lung cancer and poor performance status. Performance status (PS), a subjective measure of the functional status of a patient with cancer and his or her ability to perform normal activities, is influenced both by tumor-related and by comorbidity-related factors. It is a reliable independent prognostic indicator for survival in patients with advanced non-small cell lung cancer. Patients with a poor PS (PS2) constitute up to 30% to 40% of the population of patients with advanced non-small cell lung cancer, yet they are underrepresented in clinical trials. These patients are heterogeneous, which makes it challenging to use their PS scores alone to guide their therapy. A greater understanding of PS scores and the factors that affect them can be gained through PS2-specific clinical trials, which can lead to the development of better PS instruments to aid in making therapeutic decisions. abstract_id: PUBMED:28284318 Obtaining tissue diagnosis in lung cancer patients with poor performance status and its influence on treatment and survival. Introduction: 25% of patients with lung cancer have performance status 3 or 4. A pragmatic approach to investigative procedures is often adopted based on the risks and benefits in these patients and whether tissue diagnosis is necessary for anticipated future treatment. This cohort study investigated factors influencing a clinician's decision to pursue a tissue diagnosis in patients with lung cancer and performance status 3 and 4 and to examine the association of tissue diagnosis with subsequent management and survival. Methods: All patients with lung cancer diagnosed in North Glasgow from 2009 to 2012 were prospectively recorded in a registry. We investigated the relationships between achieving a tissue diagnosis, treatment and survival. Results: Of 2493 patients diagnosed with lung cancer, 490 patients (20%) were PS 3 and 122 patients (5%) were PS 4. Tissue diagnosis was attempted in 60% and 35% patients with PS 3 and PS 4 respectively. Younger age, better performance status and having stage 4 disease were independently associated with a diagnostic procedure being performed. Only 5% of patients with poor performance status received treatment conventionally requiring a tissue diagnosis. Age, stage and performance status were independent predictors of mortality. Achieving a tissue diagnosis was not associated with mortality. Receiving treatment requiring tissue diagnosis is associated with survival benefit. Conclusions: The majority of patients with poor fitness undergo a diagnostic procedure which does not influence further treatment or affect survival. However, the cohort of patients who do undergo therapy determined by tissue diagnosis have improved survival. abstract_id: PUBMED:34377493 Dramatic response to alectinib in an ALK-positive LCNEC patient with a poor performance status: A case report. The echinoderm microtubule-associated protein-like 4 (EML4)-anaplastic lymphoma kinase (ALK) fusion gene, a driver mutation in lung carcinoma, is fairly common in lung adenocarcinoma but rare in large cell neuroendocrine carcinoma (LCNEC). Here we report a case of stage IV LCNEC positive for this fusion gene in a patient with a poor performance status (PS) who was effectively treated with alectinib. The patient was a 72-year-old non-smoking man diagnosed as LCNEC with multiple metastases. Because of his poor PS, cytotoxic chemotherapy was not indicated, but he was later found to be positive for the ALK fusion gene and treated with alectinib as first-line therapy. One month later, the tumour had shrunk remarkably, and the therapeutic effect was rated as a partial response. The PS also improved from 4 to 1. Investigating actionable driver mutations seems worth doing for advanced LCNEC, especially if the patient's PS is poor. abstract_id: PUBMED:26987388 Dramatic response to alectinib in a patient of ALK-rearranged lung cancer with poor performance status. Background: Lung cancers with anaplastic lymphoma kinase rearrangements are highly sensitive to anaplastic lymphoma kinase tyrosine kinase inhibition, underscoring the notion that such cancers are addicted to anaplastic lymphoma kinase activity. Several anaplastic lymphoma kinase inhibitors have been identified and are being evaluated in clinical trials. However patients with poor performance status (3 or 4) were not involved in these clinical trials, it has been unclear to use anaplastic lymphoma kinase-tyrosine kinase inhibitors for these patients. Here, we report an anaplastic lymphoma kinase-positive non small cell lung cancer patient with performance status 4, who was successfully treated with alectinib. Case Presentation: We report on a 52-year-old patient diagnosed as non small cell lung cancer harboring echinoderm microtubule-associated protein-like 4-anaplastic lymphoma kinase fusion gene. His performance status was 4 because of severe respiratory failure. We treated this patient with alectinib as the first line therapy. Dramatic response was obtained and his performance status improved from 4 to 1 without severe adverse events. Conclusion: Alectinib is a therapeutic option for the anaplastic lymphoma kinase positive patients with poor performance status. abstract_id: PUBMED:17127906 Lung cancer in the elderly: performance status and/or geriatric indices? Introduction: Demographic aging poses a problem of management in patients over 65 years old with lung cancer (LC). Performance status (PS) is an index of global activity that, in part, determines treatment. Geriatric indices allow a multifactorial assessment of the elderly subject. The aim of our study is to evaluate whether PS correlates with the geriatric indices in elderly patients with lung cancer. Methods: In a single centre prospective study the geriatric indices (ADL, IADL, PINI, MMS) and the Charlson score (CS) were measured before treatment. Results: Forty one patients aged 75.7 +/- 6.6 years were included in the study. PS 3-4 was found in 15% of patients and 44% had stage IV disease. Half of them were ADL dependent and 95% were IADL dependent. A MMS<24 was found in 29% and 17% had a PINI > 20. The CS was 2.7 +/- 2.1. There was a correlation between PS and the geriatric indices but no correlation between PS and CS. Conclusion: PS is significantly correlated with the geriatric indices but is independent of CS. PS appears to be a good parameter for the assessment of global activity in the elderly subject with LC. abstract_id: PUBMED:24086873 The Incidence of Hyponatraemia and Its Effect on the ECOG Performance Status among Lung Cancer Patients. Context: Hyponatraemia is one of the common electrolytic disorders which are associated with lung cancer. Hyponatraemia may influence the ECOG performance status at presentation. Also, to the best of our knowledge, we found only limited Indian studies where the ECOG score was correlated with the serum sodium status in lung cancer patients on presentation. Aim: To assess the incidence of hyponatraemia among the patients of carcinoma of the lung before putting them into the specific treatment category for cancer and to check the effects on their ECOG performance status. Settings And Design: A cross-sectional, observational study was conducted on 116 consecutive patients of lung cancer during the period from November 2011 to October 2012. Material And Methods: The patients with a histologically proven diagnosis of lung cancer were grouped initially according to their ECOG performance statuses. The serum sodium value of each patient was measured and the hyponatraemic patients were given treatment according to the protocol. The correlation of the ECOG performance status with the serum sodium of the lung cancer patients was measured. To check for any laboratory error in serum sodium, we selected (n = 58) age, sex and socioeconomic matched control patients. Results: At presentation 44.8% of the lung cancer patients showed hyponatraemia [52/116]. The ECOG score was significantly poor in the advanced clinical stages (ECOG ≤2 Vs ECOG ≥ 3 in NSCLC cases, χ(2) =11.25, P=.0008). The ECOG performance status score at admission showed a negative correlation with the serum sodium status which was measured on admission among all the patients (Pearson correlation coefficient = - 0.186). The clinical stage of the lung cancer also showed a positive correlation with the ECOG score at admission in our study (Pearson correlation coefficient = 0.295). Conclusion: Hyponatraemia is not an uncommon condition and it should be suspected and screened in each patient, as it may influence the ECOG performance status score, which serves as an important factor in the prognosis of lung cancer. abstract_id: PUBMED:10607984 Lung cancer treatment decisions: patients' desires for participation and information. This study examined responses of 21 lung cancer outpatients to questions about their desired and actual levels of involvement in treatment decisions, and about information needed for treatment decision-making. A 'treatment trade-off' interview was used to assess patients' preferences for hypothetical treatment options. Desired roles in decisions are examined in relation to treatment preference and information needs. Forty-three percent of patients recalled desiring an active/collaborative role in their real treatment decision. For 29% of patients, there was a discrepancy between their recalled desired role and their recalled actual role; in each of these cases the patient had been less involved in the decision than they had desired. At the time of the interview (mean 26 months post-treatment), 57% of patients desired an active or collaborative role in treatment decisions. The majority of patients rated the following types of information as 'essential' to treatment decisions: details of the treatment regimen, early and late side-effects, survival, and effects of treatment on disease symptoms. The data suggest that: we should be attentive to the individual's desired role in treatment decisions at each step of care to avoid a mismatch between desired and actual involvement; desired role in decision-making does not predict treatment preference; and patients generally want a wide variety of information on treatment options in order to participate in treatment decisions. abstract_id: PUBMED:36186221 Dabrafenib plus trametinib treatment for a patient with BRAF V600E-mutated large-cell lung carcinoma and a poor performance status. Dabrafenib plus trametinib is active against metastatic lung cancer with the BRAF V600E mutation. However, the feasibility of dabrafenib plus trametinib for patients with a poor performance status (PS) has not been reported. We report the case of an 80-year-old woman was diagnosed with metastatic large-cell lung carcinoma. Her general statuses worsened due to cancer, resulting in a PS of 4. Genotype testing revealed a BRAF V600E mutation. The patient received dabrafenib plus trametinib without significant adverse effects. This report is the first to describe dabrafenib plus trametinib administration for large-cell lung carcinoma in a patient with a poor PS. Answer: Patient-rated performance status (Pt-PS) should indeed be considered when making treatment decisions in advanced lung cancer. Studies have shown that there can be significant discrepancies between physician-rated performance status (MD-PS or C-PS) and Pt-PS, with patients often rating their performance status worse than physicians do (PUBMED:18827609; PUBMED:30761639). This discrepancy can impact eligibility for clinical trials and treatment options, as nearly half of the patients might exclude themselves from trials based on their own assessments (PUBMED:18827609). Moreover, patient-rated performance status has been associated with the completion of multidisciplinary team-planned treatment, whereas clinician-rated performance status was not (PUBMED:30761639). This suggests that Pt-PS may be a more accurate predictor of a patient's ability to complete treatment. Additionally, while obtaining a tissue diagnosis in patients with poor performance status does not always influence treatment or survival, for those who do undergo therapy determined by tissue diagnosis, there is an associated survival benefit (PUBMED:28284318). Furthermore, there have been reports of dramatic responses to targeted therapies, such as alectinib, in patients with poor performance status who have specific genetic mutations like ALK rearrangements (PUBMED:34377493; PUBMED:26987388). This highlights the importance of considering molecular profiling and targeted treatments even in patients with poor performance status. Performance status is also correlated with geriatric indices in elderly patients with lung cancer, suggesting it is a good parameter for assessing global activity in this population (PUBMED:17127906). Additionally, hyponatraemia, a common condition in lung cancer patients, can influence the ECOG performance status, which is an important prognostic factor (PUBMED:24086873). Patients' desires for participation in treatment decisions and the need for comprehensive information also underscore the importance of considering patient-rated performance status (PUBMED:10607984). Lastly, the successful use of treatments like dabrafenib plus trametinib in patients with poor performance status further supports the notion that patient-rated assessments can guide therapy choices (PUBMED:36186221). In conclusion, patient-rated performance status is a valuable factor that should be taken into account when making treatment decisions in advanced lung cancer, as it can influence treatment completion, eligibility for clinical trials, and may better reflect the patient's actual ability to tolerate and benefit from specific therapies.
Instruction: The INSPIRE study: do personality traits predict general quality of life (Short form-36) in distressed patients with ulcerative colitis and Crohn's disease? Abstracts: abstract_id: PUBMED:18777439 The INSPIRE study: do personality traits predict general quality of life (Short form-36) in distressed patients with ulcerative colitis and Crohn's disease? Objective: To assess the role of personality as a predictor of Short form-36 (SF-36) in distressed patients (perceived stress questionnaire, PSQ) with ulcerative colitis (UC) and Crohn's disease (CD). Material And Methods: Fifty-four patients with CD and 55 with UC (age 18-60 years) who had relapsed in the previous 18 months, i.e. with an activity index (AI) for UC or CD> or =4, PSQ> or =60, and without severe mental or other major medical conditions, completed the Buss-Perry Aggression Questionnaire (BPA), the Neuroticism and Lie scales of the Eysenck Personality Questionnaire (EPQ-N and -L), the Multidimensional Health Locus of Control Scale (LOC) (Internal (I), Powerful Other (PO), Chance (C)), the Toronto Alexithymia Scale (TAS) and the SF-36. Results: Multiple linear regression analyses controlling for gender, age and clinical disease activity (AI) in separate analyses for UC and CD showed that the mental and vitality subscales were predicted by neuroticism in both UC and CD. The highest explained variance was 43.8% on the "mental" subscale in UC. The social function subscale was related to alexithymia only in UC, while the role limitation and pain subscales were related to personality in CD only. The physical function subscale related differently to personality in UC and CD. Conclusions: While mental and vitality subscales were predicted by neuroticism in both UC and CD, other subscales had different relationships to personality, suggesting different psychobiological interactions in UC and CD. abstract_id: PUBMED:18509900 The INSPIRE study: are different personality traits related to disease-specific quality of life (IBDQ) in distressed patients with ulcerative colitis and Crohn's disease? Background: To explore the relationship between personality and disease-specific quality of life [Inflammatory Bowel Disease Questionnaire (IBDQ)] in distressed [Perceived Stress Questionnaire (PSQ)] patients with ulcerative colitis (UC) and Crohn's disease (CD). Methods: Included in the study were 56 patients with UC and 54 patients with CD ranging in age from 18 to 60 years with a relapse in the previous 18 months, a UC or CD activity index 4, a PSQ 60, and without serious mental or other serious medical condition. The patients completed the Buss-Perry Aggression Questionnaire, the Neuroticism and Lie (social conformity/desirability) scales of the Eysenck Personality Questionnaire, the Multidimensional Health Locus of Control (LOC) Scale [Internal (I), Powerful Other (PO), Chance (C)], the Toronto Alexithymia Scale, and the IBDQ. Results: In linear regression controlling for sex, education (years), and clinical disease activity (AI) in separate analyses of UC and CD patients, higher IBDQ score was related to less social conformity in CD and less neuroticism in UC; higher emotional function score was related to less neuroticism in both CD and UC and less PO-LOC in UC. Higher social function score was related to less social conformity in CD and lower I-LOC and PO-LOC in UC. Bowel function and systemic symptoms were unrelated to personality in either UC or CD. Conclusions: Although the emotional function subscale was related to neuroticism in both UC and CD, the social function subscale and total IBDQ were related to different personality traits in UC and CD. Personality traits should be taken into account when using IBDQ in studies. abstract_id: PUBMED:37219727 Health-related quality of life in patients with newly diagnosed inflammatory bowel disease: an observational prospective cohort study (IBSEN III). Purpose: This unselected, population-based cohort study aimed to determine the level of health-related quality of life (HRQoL) in patients with Crohn's disease (CD) and ulcerative colitis (UC) at the time of diagnosis compared with a reference population and identify the demographic factors, psychosocial measures, and disease activity markers associated with HRQoL. Methods: Adult patients newly diagnosed with CD or UC were prospectively enrolled. HRQoL was measured using the Short Form 36 (SF-36) and Norwegian Inflammatory Bowel Disease Questionnaires. Clinical significance was assessed using Cohen's d effect size and further compared with a Norwegian reference population. Associations between HRQoL and symptom scores, demographic factors, psychosocial measures, and disease activity markers were analyzed. Results: Compared with the Norwegian reference population, patients with CD and UC reported significantly lower scores in all SF-36 dimensions, except for physical functioning. Cohen's d effect sizes for men and women in all SF-36 dimensions were at least moderate, except for bodily pain and emotional role for men with UC and physical functioning for both sexes and diagnoses. In the multivariate regression analysis, depression subscale scores ≥ 8 on the Hospital Anxiety and Depression Scale, substantial fatigue, and high symptom scores were associated with reduced HRQoL. Conclusion: Patients newly diagnosed with CD and UC reported statistically and clinically significantly lower scores in seven of the eight SF-36 dimensions than the reference population. Symptoms of depression, fatigue, and elevated symptom scores were associated with poorer HRQoL. abstract_id: PUBMED:16733855 Malnutrition affects quality of life in gastroenterology patients. Aim: To investigate the association between malnutrition and quality of life in patients with benign gastrointestinal disease. Methods: Two hundred patients (104 wellnourished and 96 malnourished) were assessed according to the Subjective Global Assessment, anthropometric measurements and bioelectrical impedance analysis. Quality of life was determined with the validated Medical Outcomes Study 36-item Short-Form General Health Survey (SF 36). Muscle function was assessed by hand grip strength and peak flow. Results: Body mass index, body cell mass, arm muscle area and hand grip strength were significantly lower in the malnourished patients. Quality of life was generally lower when compared to norm values. Seven out of eight quality of life scales (excluding bodily pain) were significantly reduced in the malnourished patients. Comparing patients with liver cirrhosis and inflammatory bowel disease (IBD), patients with IBD experienced significantly lower values in the perception of bodily pain,social functioning and mental health. Malnourished liver cirrhotics suffered reductions in more scales (six out of eight) than malnourished IBD patients did (four out of eight). Conclusion: Quality of life is generally low in benign gastrointestinal disease and is further reduced in patients who are classified as malnourished. It appears that liver cirrhosis patients experience a higher quality of life than IBD patients do, but the impact of malnutrition seems to be greater in liver cirrhosis than in IBD. abstract_id: PUBMED:23303148 Long-term functional and quality-of-life outcomes after IPAA in children. Background: The impact of IPAA on long-term outcomes in adults is favorable. Studies on functional outcome and quality of life in children are sparse. Objective: The aim of this study was to evaluate the functional and quality-of-life outcomes after IPAA in children and to investigate the possible correlations between the two. Design: Standardized validated questionnaires of bowel/psychosocial functioning and quality of life were used to assess outcome. Setting: This study was conducted at a tertiary medical center. Patients: All children who had surgery at <18 years of age and were followed up at our institute were included. Global Assessment of Function Scale questionnaires were used to evaluate functional outcome. Quality of life in patients >18 years at the time of follow-up was scored by using the Short Form 36 questionnaire completed by the patients. Quality of life in the younger patients at the time of follow-up was scored using the Child Health Questionarie-28 filled by the parents. Results: Data were obtained in 26 patients (25 with a preoperative diagnosis of ulcerative colitis) at a median follow-up of 6 years. The median age at surgery was 15 years. Quality of life was comparable to the general population for children >18 at the time of follow-up in 7 of 8 items, but was lower than in the general population in 7 of 12 items for those who were younger at the time of follow-up. The mean functional score was found to be negatively correlated to all 8 Short Form 36 parameters (r > 0.5; p < 0.005), but only to one of the Child Health Questionarie-28 parameters. Limitations: This study was limited by its small sample size. Conclusions: The functional outcomes among pediatric patients undergoing IPAA are good. Parents' perception of their child's quality of life is poorer in comparison with the perception by patients themselves when older and with no correlation to the functional results. abstract_id: PUBMED:22921528 The Short Health Scale: a valid and reliable measure of health related quality of life in English speaking inflammatory bowel disease patients. Background: Health related quality of life in inflammatory bowel disease is influenced both by disease activity as well as by the psychosocial characteristics of the individual patient. The Short Health Scale (SHS) is a four-part visual analogue scale questionnaire using open-ended questions that are designed to assess the impact of inflammatory bowel disease on a health related quality of life. The four dimensions include bowel symptoms, activities of daily life, worry and general wellbeing. It has previously been validated in Swedish and Norwegian speaking patients. Aim: To evaluate the SHS in an English speaking inflammatory bowel disease population. Methods: Four hundred and ninety Crohn's disease and ulcerative colitis patients completed the SHS. Individual SHS items were correlated with Inflammatory Bowel Disease Questionnaire (IBDQ) dimensions and with disease activity to assess validity. Test-retest reliability was assessed in 38 patients who completed the Short Health Scale two weeks apart. Results: All four items correlated with corresponding IBDQ dimensions with correlation coefficients ranging from -0.66 to -0.74 (all p values<0.001). In addition, total SHS scores correlated with total IBDQ scores in both Crohn's disease (-0.836) and ulcerative colitis (0.797). There was a stepwise increase in Short Health Scale scores with increasing disease activity (all p values<0.001). Reliability was confirmed with test-retest correlations ranging from 0.70 to 0.89 (all p values<0.005). Conclusions: The Short Health Scale is a valid and reliable measure of health related quality of life in English speaking inflammatory bowel disease patients. abstract_id: PUBMED:29878288 Quality of life in inflammatory bowel disease in Tunisian patients. Background: The management of the health-related quality of life (HRQL) is increasingly considered as an important treatment goal in chronic diseases including inflammatory bowel diseases (IBD). Aim: The aim of our study was to determine the impact of IBD on HRQLand identify the factors involved in the deterioration of HRQL in these patients. Methods: We conducted a case-control study including 108 patients; 66 had Crohn's disease (CD) and 42 had ulcerative colitis (UC). In the measurement of HRQL, we used a general questionnaire "Short Form 36: SF36" and a specific questionnaire "Tunisian Inflammatory Bowel Disease Questionnaire: T-IBDQ". Results: HRQL of patients was worse than controls with a statistically significant difference for six of the eight dimensions of the SF 36 "Physical Functionning PF", "Role Physical RP", "General health GH","SocialFunctionning SF "," Mental Health MH "and" Role Emotional RE "as well as the Mental summary score (MCS) and the Physical summary score (PCS ) (p <0.05). The factors involved in the alteration of HRQL were: age <30 years, poor socioeconomic conditions, disease activity, use of corticosteroids, a number of surgeries ≥2 and anterior hospitalization history. Conclusion: In this study, IBD cause impaired HRQL affecting almost all areas of the SF 36 questionnaire. Incriminated factors may be related to the patient, disease and even treatment. abstract_id: PUBMED:31106617 Short Health Scale: a valid and reliable measure of quality of life in Dutch speaking patients with inflammatory bowel disease. Objectives: Patient reported outcomes are widely used in today's clinical practice. The Short Health Scale has been proven to be an easy-to-use and reliable measure to evaluate quality of life in patients with inflammatory bowel disease. We aimed to validate this Short Health Scale in Dutch speaking patients. Methods: A total of 157 Crohn's disease and ulcerative colitis patients (46% male, median age 39 years) completed the Short Health Scale on a tablet during the outpatient clinic. Validity was assessed by correlating both individual and total Short Health Scale scores with short Inflammatory Bowel Disease Questionnaire dimensions and clinical disease activity. Test-retest reliability was assessed in eight patients in stable remission who completed the Short Health Scale a second time after 4-8 weeks. Results: All Short Health Scale items correlated with corresponding short Inflammatory Bowel Disease Questionnaire dimensions (correlation coefficients ranging from -0.403 to -0.833, all p < .01). Short Health Scale scores increased stepwise with increasing clinical disease activity (all p < .001). The results of the Short Health Scale questionnaire remained stable on repeated measurements in patients in remission (rs between 0.699 and 0.994, all p < .01 except for well-being). Conclusions: The Short health Scale is a rapid and valid instrument for measuring quality of life in Dutch speaking patients with Inflammatory Bowel Disease. Its simplicity and usability make it a good candidate for routine care and suitable for home-monitoring of patients. abstract_id: PUBMED:36059905 The Short Health Scale: A Valid and Reliable Quality-of-Life Scale for Mainland Chinese Patients with Inflammatory Bowel Disease. Background: The aim of our study was to translate and validate the mainland Chinese version of the short health scale (SHS), a disease-specific quality-of-life (QoL) scale for patients with inflammatory bowel disease (IBD). Methods: The SHS was translated and validated according to the standard process: a translation and back-translation procedure and a reliability and validation study. Patients with IBD were enrolled, and their QoL was assessed using the SHS, the short inflammatory bowel disease questionnaire (SIBDQ), and the Bristol stool form scale. Reliability (internal consistency reliability, split-half reliability, and test-retest reliability) and validity analyses were performed to evaluate the psychometric characteristics of the SHS. The impacts of different severity of major symptoms on QoL were analyzed by comparing the scores of SHS. Results: A total of 112 patients with IBD (69 with ulcerative colitis and 43 with Crohn's disease) completed the mainland Chinese version of the SHS, and 34 patients completed the SHS a second time within one to two weeks. Cronbach's alpha value of the SHS was 0.90, and its split-half coefficient was 0.83. Intraclass correlation coefficients of the four items ranged from 0.52 to 0.72. All four items of the SHS were significantly associated with the corresponding domains of the SIBDQ, with correlation coefficients ranging from -0.52 to -0.69 (p < 0.001). The results of confirmatory factor analysis indicated a good fit of the one-factor model, with comparative fit index (CFI) = 0.878, normed fit index (NFI) = 0.874, incremental fit index (IFI) = 0.880, and goodness of fit index (GFI) = 0.842. The patients with severe symptoms had higher scores in the SHS than those with no or mild symptoms. Conclusions: The SHS was simple and quick to be used. The SHS had good validity and reliability and was suitable for evaluating the QoL of patients with IBD in mainland China. abstract_id: PUBMED:17206693 Quality-of-life measurement in patients with inflammatory bowel disease receiving social support. Background: Crohn's disease and ulcerative colitis, referred to as inflammatory bowel diseases, affect mainly young adults and have an elevated morbidity and a negative effect on quality of life. This study aimed to compare the health-related quality of life between 2 randomized groups of patients with inflammatory bowel disease: (1) the supported group (SG), patients receiving social support for an 18-month period, and (2) the control group (CG), patients receiving no social support. Methods: Health-related quality of life was assessed at 4 moments with the Portuguese versions of the Medical Outcomes Study Short Form 36 and the Inflammatory Bowel Disease Questionnaire (IBDQ), both validated in Brazil. Results: In the SG, using analysis of variance for repeated measures complemented by the Bonferroni test positive variations were observed (1) in the Social Aspects domain, between the first and third evaluations (P = 0.044), and (2) in the Emotional Aspects domain, between the first and second and the third and fourth evaluations (P = 0.029). Conclusions: In the sample studied, social support, measured by use of the Inflammatory Bowel Disease Questionnaire, had a positive impact on the social and emotional aspects of quality of life. Answer: Yes, according to the INSPIRE study, personality traits do predict general quality of life as measured by the Short Form-36 (SF-36) in distressed patients with ulcerative colitis (UC) and Crohn's disease (CD). The study found that the mental and vitality subscales of the SF-36 were predicted by neuroticism in both UC and CD patients. The highest explained variance was 43.8% on the "mental" subscale in UC. Additionally, the social function subscale was related to alexithymia only in UC, while the role limitation and pain subscales were related to personality in CD only. The physical function subscale related differently to personality in UC and CD, suggesting different psychobiological interactions in these conditions (PUBMED:18777439).
Instruction: Matrix metalloproteinase-1 promoter polymorphism and epithelial ovarian cancer: does ethnicity matter? Abstracts: abstract_id: PUBMED:17441888 Matrix metalloproteinase-1 promoter polymorphism and epithelial ovarian cancer: does ethnicity matter? Aim: To estimate the relationship between matrix metalloproteinase (MMP)-1 promoter -1607 bp polymorphism and the risk of epithelial ovarian cancer (EOC) in Korean women and to clarify the ethnic difference in genotype distribution of this polymorphism. Methods: Single nucleotide polymorphism (SNP) of MMP-1 promoter -1607 region in 133 EOC patients and 332 cancer-free patients were investigated. Then the associations of this polymorphism with EOC or its clinicopathological parameters were analyzed. In addition, genotype distributions of this polymorphism in Korean women were compared with those of other races by extracting data from the previously published literature. Results: We found no relationship between MMP-1 promoter -1607 bp polymorphism and epithelial ovarian cancer in a Korean population. Furthermore, we found ethnicity-dependent differences in genotype distributions and allele frequencies by comparison with previous articles on this topic. We report significant ethnic differences in the genotype distributions and allele frequencies of the MMP-1 promoter -1607 bp polymorphism. Conclusion: Our results indicate that MMP-1-1607 bp polymorphism shows ethnic diversity, and that the hypothesis that this polymorphism is associated with epithelial ovarian cancer is not supported by this study in a Korean population. Moreover, this finding concurs with results obtained in white Americans and Europeans. abstract_id: PUBMED:16214204 A polymorphism in the matrix metalloproteinase-1 gene promoter is associated with the prognosis of patients with ovarian cancer. Objective: The enzyme matrix metalloproteinase (MMP)-1 is involved in ovarian carcinogenesis. A common guanine insertion-deletion promoter polymorphism within the gene encoding MMP-1 (MMP1) has been suggested to be a candidate gene for ovarian cancer. We investigated whether this common polymorphism can also serve as independent prognostic parameter in a large series of affected women. Methods: The MMP1 promoter polymorphism was examined in 151 Caucasian patients with epithelial ovarian cancer using polymerase chain reaction. Results were correlated with clinical data. Results: No associations were ascertained between the MMP1 polymorphism and tumor stage (P = 1.0, odds ratio [OR] 1.08), lymph node involvement (P = 1.0, OR 0.8), tumor grading (P = 0.2, OR 0.5), and patient's age at diagnosis (P = 1.0, OR 1.04). Besides the clinically established prognosticators, tumor stage and histological grade, presence of the MMP1 polymorphism was associated with a shortened disease-free and overall survival in a univariate Kaplan-Meier analysis (P = 0.01) and a multivariate Cox regression model (P = 0.04). Conclusion: Presence of the MMP1 gene promoter polymorphisms was found to be a negative prognostic parameter in patients with ovarian cancer. abstract_id: PUBMED:10485461 Correlation between expression of the matrix metalloproteinase-1 gene in ovarian cancers and an insertion/deletion polymorphism in its promoter region. Matrix metalloproteinases (MMPs), a family of closely related enzymes that degrade the extracellular matrix, are likely to be involved in invasion and metastasis of tumor cells. A guanine (G) insertion/deletion polymorphism within the promoter region of MMP-1 influences the transcription of this gene; i.e., the 2G (insertion-type) promoter possesses greater transcriptional activity than the 1G (deletion-type) promoter. To investigate whether this feature contributes to cancer development and/or progression, we genotyped 163 ovarian cancer patients for the polymorphism and then analyzed levels of expression of the MMP-1 gene in their tumors. The proportion of patients who were either heterozygotes or homozygotes for the 2G allele was significantly higher than that observed among 150 individuals without cancer (P = 0.028). Moreover, the levels of MMP-1 expression in cancer tissues among the patients carrying 2G alleles were elevated significantly in comparison with 1G homozygotes (P = 0.0038). By stimulating degradation of extracellular matrix, an excess of MMP-1 production may enhance development and/or rapid progression of ovarian cancers. abstract_id: PUBMED:16080875 Association of single nucleotide polymorphism in matrix metalloproteinases promoter with susceptibility to ovarian cancer Objective: To investigate the association of single nucleotide polymorphism in matrix metalloproteinase (MMP)-1 and MMP-3 promoter with susceptibility to ovarian cancer. Methods: The genotype of MMP-1 and MMP-3 gene promoter region was determined by polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) in 122 ovarian cancer patients (ovarian cancer group) and 151 unrelated healthy women (control group). Results: The frequencies of 2G and 1G alleles in ovarian cancer group were 68.0%, 32.0% and in control group 66.9%, 33.1%, with no significant difference between the two groups (P > 0.05); the genotype frequencies of 1G/1G, 1G/2G, 2G/2G in ovarian cancer group (16.4%, 31.1% and 52.5%) was not significantly different from that in control group (16.6%, 33.1% and 50.3%) (P > 0.05). Compared to 1G/1G genotype, neither 2G/2G nor in combination with 1G/2G genotype significantly modified the risk of developing ovarian cancer. The adjusted odds ratio was 1.05 (95% CI = 0.53-2.07) and 1.00 (95% CI = 0.52-1.90), respectively. The frequencies of 5A and 6A alleles in MMP-3 in ovarian cancer group were 17.2%, 82.8% and in control group 20.2%, 79.8%, with no significant difference between the two groups (P > 0.05). No significant difference in genotype (5A/5A, 5A/6A and 6A/6A) distribution between ovarian cancer and control groups was observed, either. Compared to 6A/6A genotype, 5A/5A plus 5A/6A genotype did not significantly modify the risk of developing ovarian cancer, the adjusted odds ratio was 1.34 (95% CI = 0.81-2.23). 2G allele of MMP-1 and 6A allele of MMP-3 were in linkage disequilibrium (chi(2) = 56.53, P < 0.01). Conclusion: MMP-1 and MMP-3 promoter polymorphism is not associated with the susceptibility to ovarian cancer. abstract_id: PUBMED:12969782 Matrix metalloproteinase-1 gene promoter polymorphism and risk of ovarian cancer. Objective: It has been suggested that the 2G allele of a guanine insertion-deletion promoter polymorphism in the promoter of the matrix metalloproteinase-1 (MMP1) gene may increase susceptibility to ovarian cancer. The 2G allele also has been associated with increased MMP1 expression. We investigated the relationship between the MMP1 polymorphism and ovarian cancer risk in a large population-based, case-control study. Methods: The MMP1 promoter polymorphism was examined in white blood cell DNA from 311 cases and 387 age- and race-matched controls using a radiolabeled polymerase chain reaction assay. In addition, genotyping of the MMP1 polymorphism performed in 42 advanced-stage invasive serous ovarian cancers was compared to their mean relative MMP1 expression from Affymetrix microarrays. Results: The 2G allele frequency did not differ significantly between cases (0.49) and controls (0.48), and the distribution of genotypes was in Hardy-Weinberg equilibrium. Using 1G homozygotes as the reference group, neither 2G homozygotes (odds ratio 1.1, 95% confidence interval 0.7-1.7) nor heterozygotes plus 2G homozygotes (odds ratio 0.9, 95% confidence interval 0.7-1.3) had an increased risk of ovarian cancer. There was also no relationship between MMP1 genotype and histologic grade, histologic type, stage, or tumor behavior (borderline versus invasive). The mean MMP1 expression was twice as high in 2G homozygotes relative to 1G homozygotes, but this difference was not statistically significant. Conclusion: The reported association between the MMP1 promoter polymorphism and ovarian cancer risk was not supported by our data. There was a suggestion that the 2G allele may be associated with higher MMP1 expression, and this finding is worthy of further investigation. abstract_id: PUBMED:16278009 Polymorphisms in the promoter regions of the matrix metalloproteinases-1, -3, -7, and -9 and the risk of epithelial ovarian cancer in China. Purpose: To investigate the association of single nucleotide polymorphisms (SNP) in the promoter region of the matrix metalloproteinases-1 -1607bp1G/2G, matrix metalloproteinases-3 -1171bp5A/6A, matrix metalloproteinases-7 A-181G and matrix metalloproteinases-9 C-1562T with susceptibility to ovarian cancer in a population of North China. Experimental Design: We analyzed four different functional promoter polymorphisms in the respective genes by polymerase chain reaction (PCR)-restriction fragment length polymorphism (RFLP) in a sample of patients with epithelium ovarian cancer and control women, all from North China. Results: No significant difference was detected between the patient and control groups in genotype and allelotype distribution of MMP-1, MMP-3, MMP-9 of the polymorphisms studied. However, the genotype and allelotype of the MMP-7 distribution in ovarian cancer patients were significantly different from that in healthy controls. The frequency of the -181G allele of MMP-7 in patients was significantly higher than that in healthy controls women (8.2% vs. 2.8%, P = 0.002). Compared to the A/A genotype, the genotypes with the -181G allele (A/G + G/G) significantly increased susceptibility to ovarian cancer, with adjusted odds ratio [OR] = 3.53 95% confidence interval [CI] [1.58 to 7.89]. Conclusions: The study suggested that a possible association between the MMP-7 A/G polymorphism with susceptibility to epithelium ovarian cancer, but there is no support for an association of the selected MMP-1 1G/2G, MMP-3 5A/6A, and MMP-9 C/T polymorphisms with the risk for ovarian cancer. abstract_id: PUBMED:25950130 Analysis of the Association of Matrix Metalloproteinase-1 Gene Promoter (rs1799750) Polymorphism and Risk of Ovarian Cancer. Objective: Studies investigating the association between matrix metalloproteinase-1 (MMP1) gene promoter 1607-base pair (bp) polymorphism and ovarian cancer risk have yielded conflicting results. Methods: We therefore carried out a meta-analysis of 754 ovarian cancer cases and 1184 controls from 5 published case-control studies. The strength of the association between MMP1 1607-bp polymorphism and ovarian cancer susceptibility was calculated using pooled odds ratios (ORs) with corresponding 95% confidence intervals (CIs). Results: The results suggest that no statistically significant associations exist between MMP1 1607-bp polymorphisms and ovarian cancer risk in all 4 genetic models (2G2G vs 1G1G: OR, 1.08; 95% CI, 0.81-1.43; P = 0.23; 1G2G vs 1G1G: OR, 1.06; 95% CI, 0.82-1.36; P = 0.15; 1G2G + 2G2G vs 1G1G: OR, 1.05; 95% CI, 0.83-1.34; P = 0.16; 2G2G vs 1G1G + 1G2G: OR, 0.98; 95% CI, 0.80-1.20; P = 0.84). Conclusions: In summary, this meta-analysis showed that the MMP1 1607-bp polymorphism is not associated with ovarian cancer risk. abstract_id: PUBMED:15939478 DHPLC analysis of the matrix metalloproteinase-1 promoter 1G/2G polymorphism that can be easily used to screen large population. Matrix metalloproteinase-1 has been shown to play an important role in all stages of cancer initiation, invasion, and metastasis. The 1G/2G single nucleotide polymorphism (SNP) at -1607 to -1608 creates an Ets binding site and elevates the rate of transcription. Moreover, the presence of the 2G allele in the MMP-1 promoter has been reported to be associated with the development and/or progression of carcinomas of the ovary, endometrium, lung, and colorectum. However, further studies on a wide variety of cancers in various sufficiently large populations will be required to verify that 2G is risk factor for cancers. A major challenge confronting such studies is the need to develop accurate, fast and inexpensive high-throughput genotyping techniques. To set up a fast and sensitive test for MMP-1 1G/2G genotyping, we analyzed 126 healthy persons by denaturing high performance liquid chromatography (DHPLC). The genotypes of MMP-1 1G/2G revealed by DHPLC analysis were further confirmed by DNA sequencing. In conclusion, DHPLC is a cost-effective, rapid, sensitive, and high-throughput technique for MMP-1 1G/2G genotyping. abstract_id: PUBMED:11106348 MMP1 and MMP3 polymorphisms in promoter regions and cancer. N/A abstract_id: PUBMED:24390664 Is there any diagnostic value of serum protease-activated receptor-1 (PAR1) levels on determination of epithelial ovarian carcinoma? The role of molecular markers in ovarian cancer is still a matter of debate. Protease-activated receptor-1 (PAR1) might be a good marker in some types of malignant tumors and might provide useful information in diagnosis and prognosis. The objective of this study was to evaluate the serum levels of PAR1 in regard to diagnostic, predictive, and prognostic value in epithelial ovarian cancer (EOC) patients. Forty-four EOC patients were enrolled in this study. Serum PAR1 levels were determined by enzyme-linked immunosorbent assay (ELISA) method. Twenty-five age- and sex-matched healthy controls were included in the analysis. The median age of patients was 58 years old, ranging from 22 to 83 years, where most of them had advanced disease (stage III-IV) (n = 40, 91%). The median serum PAR1 values were significantly elevated in patients compared to healthy controls (1.52 ng/ml vs. 1.13 ng/ml) (p = 0.03), whereas any clinical variables including response to chemotherapy did not associate with serum assay (p > 0.05). Progression-free survival (PFS) and overall survival (OS) of patients who did not respond to chemotherapy nor had platinum resistance in relapsed disease were poorer in the analyses. On the other hand, serum PAR1 levels showed no significant adverse effect on either PFS or OS (p = 0.43 and p = 0.49, respectively). These results proved that baseline serum PAR1 levels of patients with EOC were significantly higher than those of healthy people. However, these assays suggested no predictive or prognostic value in this group of patients. Answer: The relationship between matrix metalloproteinase-1 (MMP-1) promoter polymorphism and the risk of epithelial ovarian cancer (EOC) appears to be influenced by ethnicity. A study conducted on Korean women found no relationship between MMP-1 promoter -1607 bp polymorphism and EOC, and significant ethnic differences in genotype distributions and allele frequencies were reported when compared with other races, suggesting that ethnicity does matter (PUBMED:17441888). This finding is consistent with results obtained in white Americans and Europeans, indicating that the hypothesis that this polymorphism is associated with EOC is not supported in these populations either. In contrast, other studies have found associations between MMP-1 promoter polymorphisms and various aspects of ovarian cancer. For instance, a study on Caucasian patients suggested that the presence of the MMP1 polymorphism was associated with a shortened disease-free and overall survival, indicating a negative prognostic parameter in patients with ovarian cancer (PUBMED:16214204). Another study found that the proportion of ovarian cancer patients with the 2G allele was significantly higher than in individuals without cancer, and that the levels of MMP-1 expression were elevated in cancer tissues among patients carrying 2G alleles (PUBMED:10485461). However, several other studies have reported no association between MMP-1 and MMP-3 promoter polymorphisms and the susceptibility to ovarian cancer (PUBMED:16080875, PUBMED:12969782). A meta-analysis also concluded that there was no statistically significant association between MMP1 1607-bp polymorphisms and ovarian cancer risk (PUBMED:25950130). In summary, while some studies suggest that MMP-1 promoter polymorphisms may have prognostic significance in ovarian cancer, the evidence regarding the association with susceptibility to EOC is mixed and may be influenced by ethnicity. Further research is needed to clarify the role of these polymorphisms in different ethnic groups.
Instruction: Birth weight-breast cancer revisited: is the association confounded by familial factors? Abstracts: abstract_id: PUBMED:19690178 Birth weight-breast cancer revisited: is the association confounded by familial factors? Purpose: The study aimed to investigate whether the association between birth weight and the risk of breast cancer can be confounded by familial factors, such as shared environment and common genes. Materials And Methods: Eligible were all female like-sexed twins of the Swedish Twin Registry, born during the period 1926-1958 and alive in 1973. Data were obtained from birth records, and the final study population with reliable birth weight data was made up of 11,923 twins. Hazard ratios (HR) for breast cancer according to birth weight were estimated through Cox regression, using robust SE to account for the dependence within twin pairs. Paired analysis was done to account for potential confounding by familial factors. Results: In the cohort analysis, a birth weight &gt;or=3,000 g was associated with an increased risk of breast cancer diagnosed at or before 50 years [adjusted HR, 1.57; 95% confidence interval (95% CI), 1.03-2.42] but not with breast cancer with a later onset (adjusted HR, 0.80; 95% CI, 0.57-1.12). From &gt;or=2,500 g, a 500-g increase in birth weight conferred a HR of 1.62 (95% CI, 1.16-2.27) for breast cancer diagnosed at or before 50 years. This risk remained in analysis within twin pairs (HR, 1.57; 95% CI, 1.00-2.48). Conclusion: In the present study, findings indicate that the association between birth weight and breast cancer risk, seen only in women diagnosed early (&lt;or=50 years), is not confounded by familial factors. abstract_id: PUBMED:16641898 Birth weight and breast cancer risk. Exploring whether the positive association between birth weight and breast cancer risk differs by other breast cancer risk factors may help inform speculation about biological mechanism. In these data, high birth weight was associated with breast cancer risk in younger and in more educated women, but was not associated overall. abstract_id: PUBMED:17538980 Birth weight and risk of cancer. Background: It is well established that prenatal biologic processes are important for the development of some childhood cancers, whereas less is known regarding their influence on adult cancer risk. High birth weight has been associated with risk of breast cancer, whereas studies of other specific cancers and all cancers together have been less conclusive. Methods: The authors established a cohort of more than 200,000 men and women who were born between 1936 and 1975. Birth weights were obtained from school health records and information concerning cancer from the Danish Cancer Registry. Follow-up was performed between April 1, 1968 and December 31, 2003. During 6,975,553 person-years of follow-up, a total of 12,540 primary invasive cancers were diagnosed. Results: Analyses of site-specific cancers revealed that the majority of cancers had a positive linear association with birth weight. Departures from a positive linear association were found to be statistically significant for cancers of the pancreas and bladder, which demonstrated a V-shaped association, and testicular cancer, which demonstrated an inverse association with birth weight. Excluding these 3 exceptions, the trends for the individual cancer sites were not heterogeneous, and the overall trend was a relative risk of 1.07 (95% confidence interval, 1.03-1.11) per 1000-g increase in birth weight. This trend was the same in men and women and in all age groups. Conclusions: A 7% increase in cancer risk was observed per 1000-g increase in birth weight. Few cancers demonstrated a nonlinear association with birth weight, and testicular cancer was found to be negatively associated with birth weight. The authors hypothesized that the biologic explanation behind the association between birth weight and cancer at different sites should be sought in a common pathway. abstract_id: PUBMED:22815728 Birth weight, breast cancer and the potential mediating hormonal environment. Background: Previous studies have shown that woman's risk of breast cancer in later life is associated with her infants birth weights. The objective of this study was to determine if this association is independent of breast cancer risk factors, mother's own birth weight and to evaluate association between infants birth weight and hormonal environment during pregnancy. Independent association would have implications for understanding the mechanism, but also for prediction and prevention of breast cancer. Methods And Findings: Risk of breast cancer in relation to a first infant's birth weight, mother's own birth weight and breast cancer risk factors were evaluated in a prospective cohort of 410 women in the Framingham Study. Serum concentrations of estriol (E3), anti-estrogen alpha-fetoprotein (AFP), and pregnancy-associated plasma protein-A (PAPP-A) were measured in 23,824 pregnant women from a separate prospective cohort, the FASTER trial. During follow-up (median, 14 years) 31 women (7.6%) were diagnosed with breast cancer. Women with large birth weight infants (in the top quintile) had a higher breast cancer risk compared to other women (hazard ratio (HR), 2.5; 95% confidence interval (CI), 1.2-5.2; P = 0.012). The finding was not affected by adjustment for birth weight of the mother and traditional breast cancer risk factors (adjusted HR, 2.5; 95% CI, 1.2-5.6; P = 0.021). An infant's birth weight had a strong positive relationship with the mother's serum E3/AFP ratio and PAPP-A concentration during pregnancy. Adjustment for breast cancer risk factors did not have a material effect on these relationships. Conclusions: Giving birth to an infant with high birth weight was associated with increased breast cancer risk in later life, independently of mother's own birth weight and breast cancer risk factors and was also associated with a hormonal environment during pregnancy favoring future breast cancer development and progression. abstract_id: PUBMED:21710135 Birth weight and other prenatal factors and risk of breast cancer in Asian-Americans. Little is known about the role of birth weight and other prenatal factors in the etiology of breast cancer in Asian-Americans. We investigated the relation between birth weight and other prenatal factors and breast cancer risk in a population-based case-control study in Los Angeles County that included 2,259 Asian-American women with incident, histologically confirmed breast cancer and 2,019 control women, who were frequency matched to cases on age, Asian ethnicity, and neighborhood of residence. Breast cancer risk nearly doubled (odds ratio (OR) = 1.97, 95% confidence interval (CI) = 1.15-3.39) among those with high (≥ 4000 g) birth weight compared to those with low (&lt;2500 g) birth weight after adjusting for age at menarche, parity, adult body mass index, and other covariates. Risk increased 8% per 500 g increase in birth weight (P trend = 0.10). We observed a significant relationship between birth weight and age at menarche in both cases and controls. Mean birth weight was higher (2948 g) for control women who had early menarche (age ≤ 11 years) compared to those who had menarche late (age ≥ 15 years) (2807 g) (P trend = 0.016); results were similar among case patients (P trend = 0.020). Older maternal age was also a risk factor; risk increased by 6% (95% CI = 1.01-1.12) per 5 years increase in maternal age with adjustment for parity and other risk factors. Our results support the hypothesis that high birth weight and older maternal age at pregnancy may have contributed to the rising breast cancer incidence in Asian-Americans. abstract_id: PUBMED:15477861 Birth weight is associated with postmenopausal breast cancer risk in Swedish women. There is some evidence that birth weight is associated with breast cancer. Whether this association differs between premenopausal and postmenopausal ages is still unclear. The results from this study suggest that higher birth weight is a risk factor for postmenopausal breast cancer (OR 1.06, CI 1.00-1.12, per 100 g), independent of selected early-life and adult factors. abstract_id: PUBMED:12516100 Early life risk factors in cancer: the relation of birth weight to adult obesity. The intrauterine environment appears to play a role in the development of adult diseases, including several prominent cancers. Our study aims to characterize the relationship between birth weight, a measure of the intrauterine environment, and adult obesity. A population-based sample of women aged 50-79, living in the states of Massachusetts, New Hampshire or Wisconsin, were randomly selected from lists of licensed drivers and Medicare beneficiaries to participate as controls in a case-control study of breast cancer. Information on birth weight, adult height and adult weight were collected through structured telephone interviews from 1992-1995. Our analysis was based on 1,850 interviews. A U-shaped relationship between birth weight and adult BMI was observed. Median adult BMI for the birth weight categories (in kilograms) &lt;2.3, 2.3&lt;2.5, 2.5&lt;3.2, 3.2&lt;3.9, 3.9&lt;4.5 and &gt; or =4.5 were 26.6, 24.4, 25.1, 25.5, 25.4 and 26.6 kg/m respectively. Compared to women 2.5&lt;3.2 kg at birth, women in highest birth weight category (&gt; or =4.5 kg) had an odds ratio of 1.99 (95% CI 1.13-3.48) of being obese (&gt; or =30 kg/m(2)) as adults. The odds ratio for women in the &lt;2.3 kg birth weight category was 1.67 (95% CI 1.01-2.76). These data suggest that both low and high birth weights are associated with higher adult BMI and support the hypothesis that fetal experience may influence adult obesity with potential consequences for risk of several major cancers. abstract_id: PUBMED:32665189 Birth Weight and Incidence of Breast Cancer: Dose-Response Meta-analysis of Prospective Studies. Background: Many studies have shown the association between birth weight and breast cancer (BC), but the evidence remains limited and inconsistent, especially in different menopause status. We sought to clarify the relationship and shape of the dose-response relation between birth weight and BC. Methods: The Web of Science, PubMed, and Embase databases were searched for prospective studies involving the relationship between birth weight and risk of BC published to November 2019. Random effects of generalized least squares regression models were used to estimate the quantitative dose-response association, and restricted cubic splines were used to model the association. Results: We included reports of 16 prospective studies describing 16,000 incident cases among 553,644 participants. We identified a modest-in-magnitude, but significant, association between birth weight and BC risk: risk increased by 2% (risk ratio, 1.02, 95% confidence interval, 1.01-1.03) and 9% (risk ratio, 1.09, 95% confidence interval, 1.04-1.15) with a per-500 g birth weight increment in all ages and premenopausal women, respectively. Our results showed a linear dose-response relationship between birth weight and BC risk (Pnonlinearity = .311) in premenopausal women, with statistical significance when birth weight was above about 3.5 kg. No significant association was found in postmenopausal women. Conclusion: Higher birth weight has a relationship with increased risk of BC in premenopausal women, particularly when birth weight is above 3.5 kg. abstract_id: PUBMED:11857017 Birth weight as a predictor of breast cancer: a case-control study in Norway. The hypothesis that birth weight is positively associated with adult risk of breast cancer implies that factors related to intrauterine growth may be important for the development of this malignancy. Using stored birth records from the two main hospitals in Trondheim and Bergen, Norway, we collected information on birth weight, birth length and placenta weight among 373 women who developed breast cancer. From the same archives, we selected as controls 1150 women of identical age as the cases without a history of breast cancer. Information on age at first birth and parity were collected from the Central Person Registry in Norway. Based on conditional logistic regression analysis, breast cancer risk was positively associated with birth weight and with birth length (P for trend=0.02). Birth weights in the highest quartile (3730 g or more) were associated with 40% higher risk (odds ratio, 1.4, 95% confidence interval, 1.1-1.9) of breast cancer compared to birth weights in the lowest quartile (less than 3090 g). For birth length, the odds ratio for women who were 51.5 cm or more (highest quartile) was 1.3 (95% confidence interval, 1.0-1.8) compared to being less than 50 cm (lowest quartile) at birth. Adjustment for age at first birth and parity did not change these estimates. Placenta weight was not associated with breast cancer risk. This study provides strong evidence that intrauterine factors may influence future risk of breast cancer. A common feature of such factors would be their ability to stimulate foetal growth and, simultaneously, to influence intrauterine development of the mammary gland. abstract_id: PUBMED:24231452 Breast cancer risk in opposite-sexed twins: influence of birth weight and co-twin birth weight. Most, but not all, studies report a positive association between birth weight, as an indirect marker of prenatal hormone exposure, and offspring breast cancer risk, particularly premenopausal breast cancer. Females from opposite-sexed twin pairs may also be prenatally exposed to androgens from their twin brothers. A Swedish study of opposite-sexed twins with a small sample size found a very strong positive association between female birth weight and breast cancer risk. In this case-control study, nested within a cohort of female opposite-sexed twins, we included 543 breast cancer case subjects diagnosed in the period from 1972 to 2008 and 2715 matched control subjects. Conditional logistic regression estimated the breast cancer risk associated with birth weight and other birth characteristics, including gestational age and co-twin birth weight. All statistical tests were two-sided. There was no association between birth weight (odds ratio = 1.01; 95% confidence interval = 0.70 to 1.46) or twin brother's birth weight and risk of breast cancer, which suggests the previously reported strong positive association may have been a chance finding. Answer: The association between birth weight and breast cancer risk has been a subject of investigation in various studies, with some suggesting that higher birth weight is associated with an increased risk of breast cancer, particularly in premenopausal women. However, the question arises whether this association is confounded by familial factors such as shared environment and common genes. A study using data from the Swedish Twin Registry, which included 11,923 female like-sexed twins born between 1926 and 1958, aimed to address this question (PUBMED:19690178). The study found that a birth weight of ≥3000 g was associated with an increased risk of breast cancer diagnosed at or before 50 years, but not with breast cancer with a later onset. Importantly, this risk remained even in paired analysis within twin pairs, suggesting that the association between birth weight and breast cancer risk is not confounded by familial factors. Other studies have also explored the relationship between birth weight and breast cancer risk. For instance, a study in Asian-American women found that high birth weight (≥4000 g) nearly doubled the risk of breast cancer compared to low birth weight (<2500 g), after adjusting for various covariates (PUBMED:21710135). Similarly, a meta-analysis of prospective studies reported a modest but significant association between birth weight and breast cancer risk, with risk increasing by 2% per 500 g birth weight increment in all ages and by 9% in premenopausal women (PUBMED:32665189). In contrast, a study of opposite-sexed twins found no association between birth weight or twin brother's birth weight and the risk of breast cancer, suggesting that the previously reported strong positive association may have been a chance finding (PUBMED:24231452). Overall, the evidence suggests that while birth weight may be associated with an increased risk of breast cancer, particularly in premenopausal women, this association is not likely to be confounded by familial factors, as indicated by the study on Swedish twins (PUBMED:19690178).
Instruction: Is the clock face an accurate, precise, and reliable measuring tool for anterior cruciate ligament reconstruction? Abstracts: abstract_id: PUBMED:24821225 Is the clock face an accurate, precise, and reliable measuring tool for anterior cruciate ligament reconstruction? Purpose: (1) To assess the use and practice of the clock face among surgeons who routinely perform anterior cruciate ligament (ACL) reconstructions, and (2) to assess the accuracy, precision, and reliability of 3 commonly used clock-face schemes in ACL reconstruction. Methods: First, 9 surgeons completed a questionnaire assessing the use and definition of the clock-face technique. Next, to assess the accuracy, precision, and reliability of the clock face, each surgeon estimated the "time" of 8 artificial femur models with a black dot located on the posterior aspect of the lateral condylar wall. The estimates were performed using 3 different clock-face schemes and were repeated 10 months later. Solutions for each specimen were obtained by use of a computer graphical interface. Results: More than half of the respondents (55%) use the clock face in ACL reconstructions, with the reported mean ideal "time" for a femoral tunnel in a right knee of 10:05 (SD, 31 minutes). When we accounted for the different clock definitions, this ideal position was found along the entire lateral condylar wall. In the assessment of the performance of the clock face, the mean error was 32 to 40 minutes (which translates to 3 to 4 mm) among the 3 clock schemes. The maximum error was 4 hours 0 minutes, and the range of responses was 1 hour 0 minutes to 4 hours 0 minutes depending on the specimen and clock scheme. Regardless of the clock scheme used, the intrarater and inter-rater reliabilities were similar-measuring, on average, 0.78 and 0.68, respectively. Conclusions: The clock face continues to be commonly used in ACL reconstruction. Different clock-face definitions affect the position for the same "time." When the clock-face parameters were strictly defined, there was good reliability with borderline accuracy and poor precision. Clinical Relevance: Considering the borderline performance of the clock face in accuracy and poor precision, we recommend against using the clock face in ACL reconstruction. abstract_id: PUBMED:21602567 Anatomic single- and double-bundle anterior cruciate ligament reconstruction, part 2: clinical application of surgical technique. The anterior cruciate ligament has been and is of great interest to scientists and orthopaedic surgeons worldwide. Anterior cruciate ligament reconstruction was initially performed using an open approach. When the approach changed from open to arthroscopic reconstruction, a 2- and, later, 1-incision technique was applied. With time, researchers found that traditional arthroscopic single-bundle reconstruction did not fully restore rotational stability of the knee joint and a more anatomic approach to reconstruct the anterior cruciate ligament has been proposed. Anatomic anterior cruciate ligament reconstruction intends to replicate normal anatomy, restore normal kinematics, and protect long-term knee health. Although double-bundle anterior cruciate ligament reconstruction has been shown to result in better rotational stability in both biomechanical and clinical studies, it is vital to differentiate between anatomic and double-bundle anterior cruciate ligament reconstruction. The latter is merely a step closer to reproducing the native anatomy of the anterior cruciate ligament; however, it can still be done nonanatomically. To evaluate the potential benefits of reconstructing the anterior cruciate ligament in an anatomic fashion, accurate, precise, and reliable outcome measures are needed. These include, for example, T2 magnetic resonance imaging mapping of cartilage and quantification of graft healing on magnetic resonance imaging. Furthermore, there is a need for a consensus on which patient-reported outcome measures should be used to facilitate homogeneous reporting of outcomes. abstract_id: PUBMED:29765190 Evaluating equilibrium in anterior cruciate ligament reconstruction. [Purpose] The purpose of this review was to summarize available methods and equipment which are administered to evaluate the balance in anterior cruciate ligament reconstruction. [Subjects and Methods] A literature search was performed and reviewed using the narrative approach. This study reviewed English articles concerning balance assessment methods in anterior cruciate ligament reconstruction subjects from 1985 to 2015 using the following key words: Anterior Cruciate Ligament Injury, Anterior Cruciate Ligament Reconstruction, Postural Control, Equilibrium, Balance and Stability. [Results] This review included 44 studies out of 117 initially retrieved articles. These articles were discussed in balance measurement procedure without comparing the effect of various surgical, medical or rehabilitation approaches. [Conclusion] Biodex and EquiTest, functional dynamic tests are of choice in the evaluation and tracking of anterior cruciate ligament reconstruction subjects. Force plate parameters provide information regarding strategies of static postural control and is not sensitive enough to challenge postural control system in physical activities. abstract_id: PUBMED:30381954 Anterior cruciate ligament reconstruction without the use of a tourniquet. Introduction: Anterior cruciate ligament tears are a common ligamentous knee injury and are usually managed with arthroscopic reconstruction surgery using a tourniquet. The employment of a tourniquet in the management of an anterior cruciate ligament tear has been a debatable topic for some time. Here, we discuss our technique in the reconstruction of anterior cruciate ligament tears without the use of a tourniquet. Materials And Methods: We performed 26 anterior cruciate ligament reconstruction surgeries using hamstring grafts without the use of a tourniquet. The outcomes measured were pain using the visual analogue scale, knee swelling, range of movement and bleeding. Results: Using this technique, we have noted considerable reduction in knee swelling and pain. In addition, there was considerable improvement in the range of movement postoperatively, and there were no bleeding complications noted. Conclusions: We consider ACL reconstruction without a tourniquet to be an excellent alternative to conventional reconstruction with the use of tourniquet. abstract_id: PUBMED:36534873 What to Do When It Is Anterior Cruciate Ligament Reconstruction Number Two. Several factors contribute to the greater complexity of revision anterior cruciate ligament reconstruction compared with primary anterior cruciate ligament reconstructive surgery. Prior tunnels and hardware may compromise revision tunnel placement and secure fixation. This may necessitate two-stage revision or specific techniques to achieve anatomic revision tunnels. Prior autograft use may limit graft options. Individuals with a failed anterior cruciate ligament reconstruction are more likely to have risk factors for further failure. These may include malalignment, occult instability, knee hyperextension, or increased tibial slope. There are also higher rates of meniscus and cartilage injuries in revision anterior cruciate ligament reconstruction that may require intervention. Successful revision anterior cruciate ligament reconstruction requires thoughtful preoperative planning along with multiple potential intraoperative plans depending on the pathology encountered. It is important to provide the orthopaedic surgeon with an up-to-date, evidence-based overview of how to approach and execute a successful revision anterior cruciate ligament reconstruction. abstract_id: PUBMED:31739879 Pediatric Anterior Cruciate Ligament Reconstruction. The increasing incidence of anterior cruciate ligament injuries in skeletally immature children demands careful attention by orthopedic surgeons. Assessing skeletal age is essential to selecting the appropriate reconstruction technique. Males with a bone age of 15 years or older and females of 13 years and older are ideal candidates for a transphyseal anterior cruciate ligament reconstruction because there is minimal risk of growth disturbance. Children with substantial growth remaining (skeletal age males ≤12 years and females ≤10 years) seem to be at risk for more significant growth disturbance, so we generally recommend physeal-sparing techniques for these younger patients. abstract_id: PUBMED:22570842 Ligament reconstruction in congenital absence of the anterior cruciate ligament: a case report. Congenital absence of the cruciate ligament is an extremely rare condition that was first reported in Giorgi's radiographic study in 1956. The authors report on a case of anterior cruciate ligament reconstruction performed on a 21-year-old female patient with congenital anterior cruciate ligament absence. We also discuss radiographic evidence that could provide clues to the congenital absence and possible difficulties that may be encountered during surgery with a review of the relevant literature. abstract_id: PUBMED:31512450 Progress of sterilization and preservation methods for allografts in anterior cruciate ligament reconstruction Objective: To review the current status and progress of sterilization and preservation for allograft in anterior cruciate ligament reconstruction. Methods: The related literature about the sterilization and preservation of allografts in anterior cruciate ligament reconstruction was extensively reviewed and summarized. Results: There are many sterilization methods for allografts, the most commonly used method is γ-ray irradiation, but the optimal irradiation dose is still unclear. Electron beam irradiation is also available, but excessive dose is harmful to graft shaping. A combined sterilization method combining physics and chemistry methods is still being explored. Cryopreservation is the most commonly used method of preservation. In order to reduce the influence of crystals, the principle of "slow cooling and rapid rewarming" should be adhered to as far as possible. Conclusion: The processing methods of allograft can affect the effectiveness of anterior cruciate ligament reconstruction. The clinical doctors should consider the sterilization and preservation methods in practice. abstract_id: PUBMED:31054726 Editorial Commentary: Taking a Wider View During Anterior Cruciate Ligament Reconstruction? The Case for Doing More Than Just Reconstructing the Anterior Cruciate Ligament Itself. Anterior cruciate ligament reconstruction may leave a residual instability. If other pathology is identified, the surgeon should consider doing more than the isolated anterior cruciate ligament reconstruction to address peripheral lesions to the menisci, ligaments, and capsule in selected cases. abstract_id: PUBMED:27771171 Inter- and Intrarater Reliability of the Femoral Tunnel Clock-Face Grading System During Anterior Cruciate Ligament Reconstruction. Purpose: To determine the inter- and intrarater reliability of the clock-face grading system as used by 3 fellowship-trained sports medicine surgeons. Methods: Arthroscopic video was taken of the femoral tunnel placement during 20 consecutive anterior cruciate ligament (ACL) reconstructions performed by 2 surgeons. All femoral tunnels were created using a medial portal technique. The video was taken using a 30° arthroscope placed in the lateral portal and showed the femoral tunnel as well as the remainder of the femoral notch, the posterior cruciate ligament, and the menisci for orientation. Three fellowship-trained sports medicine surgeons were asked to review the videos and assign an o'clock position to the femoral tunnel from the 9 to the 3 o'clock positions in "half-hour" increments. They were also asked to review the videos again 6 months later to determine intrarater reliability. Inter-rater reliability was evaluated using the intraclass correlation coefficient (ICC) 2-way mixed effect model with absolute agreement. The Spearman rank-order correlation coefficient (r) was applied to evaluate intrarater reliability. Results: The inter-rater reliability as measured by the ICC revealed poor agreement between the 3 surgeons (ICC = 0.204, 95% confidence interval = -0.015 to 0.491, F = 2.8, P = .004). The intrarater reliability at a 6-month interval was found to be moderate (r's = .43, P = .004). Conclusions: The inter-rater reliability of the clock-face femoral tunnel grading system was found to be poor among fellowship-trained sports medicine surgeons whereas the intrarater reliability was found to be moderate. The utility of the femoral tunnel clock-face grading system may be compromised by suboptimal inter- and intrarater reliability, making it less useful as a tool of communication between surgeons. Level Of Evidence: Level IV, case series with poor reference standard. Answer: The clock face is not considered an accurate, precise, and reliable measuring tool for anterior cruciate ligament (ACL) reconstruction. More than half of the surgeons (55%) use the clock face in ACL reconstructions, but the reported mean ideal "time" for a femoral tunnel in a right knee varies significantly, with a standard deviation of 31 minutes. This variation in the ideal position was found along the entire lateral condylar wall, indicating that different clock-face definitions affect the position for the same "time" (PUBMED:24821225). The mean error in the assessment of the performance of the clock face was found to be 32 to 40 minutes, which translates to 3 to 4 mm, with a maximum error of up to 4 hours, depending on the specimen and clock scheme. The intrarater and inter-rater reliabilities were similar, measuring on average 0.78 and 0.68, respectively, which indicates good reliability but borderline accuracy and poor precision. Therefore, the study recommends against using the clock face in ACL reconstruction due to its borderline performance in accuracy and poor precision (PUBMED:24821225). Additionally, another study found that the inter-rater reliability of the clock-face femoral tunnel grading system was poor among fellowship-trained sports medicine surgeons, while the intrarater reliability was moderate. This suggests that the utility of the femoral tunnel clock-face grading system may be compromised by suboptimal inter- and intrarater reliability, making it less useful as a tool of communication between surgeons (PUBMED:27771171).
Instruction: Pouch dilatation and slippage after adjustable gastric banding: is it still an issue? Abstracts: abstract_id: PUBMED:24339300 Three-year experience of pouch dilatation and slippage management after laparoscopic adjustable gastric banding. Purpose: Pouch dilatation and band slippage are the most common long-term complications after laparoscopic adjustable gastric banding (LAGB). The aim of the study is to present our experience of diagnosis and management of these complications. Materials And Methods: The pars flaccida technique with anterior fixation of the fundus was routinely used. All band adjustments were performed under fluoroscopy. We analyzed the incidence, clinico-radiologic features, management, and revisional surgeries for treatment of these complications. We further presented the outcome of gastric plication techniques as a measure for prevention of these complications. Results: From March 2009 to March 2012, we performed LAGB on 126 morbidly obese patients. Among them, 14 patients (11.1%) were diagnosed as having these complications. Four patients (3.2%) had concentric pouch dilatations, which were corrected by band adjustment. Ten (7.9%) had eccentric pouch with band slippage. Among the ten patients, there were three cases of posterior slippage, which were corrected by reoperation, and seven cases of eccentric pouch dilatation with anterior slippage. Three were early anterior slippage, which was managed conservatively. Two were acute anterior slippage, one of whom underwent a revision. There were two cases of chronic anterior slippage, one of whom underwent a revision. The 27 patients who underwent gastric plication did not present with eccentric pouch with band slippage during the follow-up period. Conclusion: The incidence of pouch dilatation with/without band slippage was 11.1%. Management should be individualized according to clinico-radiologic patterns. Gastric plication below the band might prevent these complications. abstract_id: PUBMED:36160820 Computed Tomography Assessment of Gastric Band Slippage. Background: The purpose of this study was to develop and validate reliable computed tomography (CT) imaging criteria for the diagnosis of gastric band slippage. Material And Methods: We retrospectively evaluated 67 patients for gastric band slippage using CT. Of these, 14 had surgically proven gastric band slippage (study group), 22 had their gastric bands removed for reasons other than slippage (control group 1), and 31 did not require removal (control group 2). All of the studies were read independently by two radiologists in a blinded fashion. The "O" sign, phi angle, amount of inferior displacement from the esophageal hiatus, and gastric pouch size were used to create CT diagnostic criteria. Standard statistical methods were used. Results: There was good overall interobserver agreement for diagnosis of gastric band slippage using CT diagnostic criteria (kappa = 0.83). Agreement was excellent for the "O" sign (kappa = 0.93) and phi angle (intraclass correlation coefficient = 0.976). The "O" sign, inferior displacement from the hiatus &gt;3.5 cm, and gastric pouch volume &gt;55 cm3 each had 100% positive predictive value. A phi angle &lt;20° or &gt;60° had the highest negative predictive value (NPV) (98%). Of all CT diagnostic criteria, enlarged gastric pouch size was most correlated with band slippage with an AUC of 0.991. Conclusion: All four imaging parameters were useful in evaluating for gastric band slippage on CT, with good interobserver agreement. Of these parameters, enlarged gastric pouch size was most correlated with slippage and abnormal phi angle had the highest NPV. abstract_id: PUBMED:27650451 Laparoscopic Treatment of a Large Gastric Pouch Following Gastric Band Slippage. Introduction: Gastric band slippage with formation of a gastric pouch is a frequent complication after gastric banding. There is no consensus on the treatment of this condition, which differs within and between centers. Here, we describe our treatment of a large gastric pouch after gastric band slippage. Despite removal of the band, the pouch caused oral feeding intolerance. Methods: The video shows our treatment of large gastric pouch following gastric banding and surgical revision. A 45-year-old woman with history of gastric banding was referred to our center for persistent oral feeding intolerance following gastric band removal. Results: We found that a large gastric pouch was responsible for extrinsic compression and thus required revisional surgery after optimization of the patient's nutritional status. Video endoscopy revealed a large gastric pouch, which was responsible for oral feeding intolerance. Intra-operative exploration then revealed major adhesions, which had probably caused the pouch to form. During surgery that was somewhat analogous to sleeve gastrectomy, we removed the adhesions and resected the gastric pouch. The operating time was 70 min. An uneventful postoperative course enabled rapid discharge (on postoperative day 2). Conclusion: The formation of a large gastric pouch is a rare complication of chronic gastric band slippage. Following nutritional optimization, surgery is required. abstract_id: PUBMED:12630624 Pouch dilatation and slippage after adjustable gastric banding: is it still an issue? Background: Laparoscopic adjustable gastric banding (LAGB) in France is currently the most common bariatric surgical procedure for the treatment of severe obesity; its most reported complication is band slippage and/or pouch dilatation, which usually requires reoperation. It is highly important to assess whether a change in the operation could improve these results. Methods: From April 1995 to October 2001, 973 patients underwent LAGB in our institution. Since January 1999, our technique changed: the band was positioned according to the so called "pars flaccida technique", ie. around the gastric vessel instead of close to the gastric wall. 511 patients had been operated before this period, and 462 after. Other details in the technique did not change (dissection above the lesser sac, no posterior stitch, three anterior stitches), meaning that potential differences could not be related to a learning curve. Results: Band slippage occurred in 27 patients of the first group during the first period of 34 months (5.2%), and 5 more afterwards (total 6.2%). Only 3 patients of the second group (0.6%) had a slippage during the same period of time. Conclusion: Although the problem of band slippage is not likely to be completely solved, changing the technique has made it possible to decrease the rate of this complication. The height or the shape of different types of band also remain under scrutiny. abstract_id: PUBMED:18431612 Symmetrical pouch dilatation after laparoscopic adjustable gastric banding: incidence and management. Background: Laparoscopic adjustable gastric banding (LAGB) has commonly been complicated by the problem of band slippage or prolapse. Since popularization of the pars flaccida approach and improved anterior fixation, it is our impression that the problem of symmetrical dilatation of the proximal gastric pouch has become more important. Methods: We have reviewed the results of a series of 425 LAGB all performed by the pars flaccida approach from June 2003 to October 2007 to analyze the incidence and implications of this new pattern. Results: There were no posterior prolapses, 2 anterior prolapses, and 17 cases of symmetrical pouch dilatation (SPD) (revision rate 4.4%). Teenage patients had a 22% revision rate for SPD. All revisions were completed laparoscopically with no mortality, no significant complications, and a median hospital stay of 1 day. The median weight loss following revisional surgery was not significantly different from the background cohort. Conclusion: SPD is the most common reason for revision of LAGB in this series. We postulate that SPD is caused by excessive pressure in the proximal gastric pouch. This may be generated either by eating too quickly or too large a volume or excessive tightening of the band. The radial forces in the pouch may ultimately cause pressure on the phrenoesophageal ligament and a secondary hiatal hernia. abstract_id: PUBMED:17138233 Outcomes after laparoscopic adjustable gastric band repositioning for slippage or pouch dilation. Background: Slippage and pouch dilation are the most common surgical complications after laparoscopic adjustable gastric banding, yet few reports have described the specific outcomes after band repositioning for these complications. The objective of this study was to examine the intermediate outcomes of our patients who underwent band repositioning for slippage or pouch dilation. Methods: From October 2000 to December 2005, 1275 patients underwent laparoscopic adjustable gastric banding at our center. Of these, we retrospectively reviewed the data of the first 40 consecutive patients (92.5% women, mean age 41.7 years, mean preoperative weight 123.3 kg [range 86.2-180.5], mean body mass index 44.8 kg/m2 [range 34.6-66.4]) who presented with anterior slippage (52.5%), posterior slippage (20%), or pouch dilation (27.5%, 7 with associated hiatal hernias) that required band repositioning (95%) or explantation (5%). Results: The average time between laparoscopic adjustable gastric banding and reoperation was 22.9 months. Before band repositioning, the mean weight was 91.5 kg, mean body mass index was 33.2 kg/m(2), and mean percentage of excess weight loss was 49.4% (range 1-79.8%). One patient had a recurrent anterior slippage that required a second band repositioning. Two bands were explanted, one for intraoperative gastric perforation and one at the patient's request. The mean percentage of excess weight loss after band repositioning was 48.1% (range 18.2-77.4%) at an average follow-up of 17.6 months (range 6-36). To date, 38 (95%) of the 40 patients have functioning bands. Co-morbidity resolution was seen in 3 (60%) of 5 patients with diabetes, 13 (65%) of 20 with hypertension, and 8 (72%) of 11 with sleep apnea. Conclusion: Laparoscopic band repositioning can result in preservation of most of the initial weight loss and co-morbidity resolution. abstract_id: PUBMED:16469212 Prevention of pouch dilatation after laparoscopic adjustable gastric banding. Background: The major long-term complication of laparoscopic adjustable gastric banding (LAGB) is dilatation of the gastric pouch, that is reported with a frequency ranging from 1 to 25%, and often requires removal of the band. In addition to the usual recommendations of bariatric surgery centers and dietetic advice to prevent this complication, over the last 4 years we introduced a technical modification of the procedure. Methods: From Nov 1993 to Dec 2004, 684 morbidly obese patients underwent adjustable gastric banding, 83 patients by open surgery and 601 patients by laparoscopy. The first 323 patients (group A) were operated by the perigastric approach, and 57 patients (group B) were operated by the pars flaccida approach. Since Dec 2000, 304 patients (group C) were operated with a modified pars flaccida technique, which consisted in suturing the gastric lesser curvature below the band with one or two stitches to the right phrenic crus to secure the band in place. Results: In group A, the most important late complication was irreversible dilatation of the gastric pouch, which occurred in 35 patients (10.8%), and required removal of the band in 30 cases and replacement in 5. In group B, there were 3 pouch dilatations (5.2%). In group C, only 4 dilatations occurred (1.31%), which required 3 band removals and 1 band replacement. Conclusion: Dilatation of the gastric pouch appears to be dramatically reduced by our minor technical modification of band placement. abstract_id: PUBMED:9869692 Laparoscopic repositioning of an adjustable silicone gastric band for pouch dilatation and stoma obstruction. Pouch dilatation with stoma obstruction is a well-known late complication after adjustable gastric banding operations for morbid obesity. Surgical treatment of this problem usually results in removal of the band, with or without replacement by another, or in repositioning of the band via laparotomy. We present the case of a patient with late pouch dilatation and stoma obstruction after placement of a Laparoscopic Adjustable Gastric Banding system (LAGB-Bioenterics) and in whom the adjustable band was laparoscopically opened, disconnected from the access port, and repositioned more proximally on the stomach. The postoperative course was uneventful. A postoperative radiographic contrast examination showed a correct repositioning of the band. The case demonstrates that the LAGB can be successfully opened and repositioned by a minimal invasive procedure. This is the first time to our knowledge that such a procedure has been reported. abstract_id: PUBMED:11501365 Laparoscopic band repositioning for pouch dilatation/slippage after gastric banding: disappointing results. Background: Pouch dilatation with or without slippage of the band is a serious complication of gastric banding, often attributed to initial malpositioning of the band. Food intake is increased, and weight regain occurs. Progressive rotation of the band follows, leading to functional stenosis and dysphagia. Reoperation is necessary in most cases, and may consist of band removal, band change, band repositioning, or conversion to another bariatric procedure. Material And Methods: The study consisted of chart review of all patients who underwent laparoscopic repositioning of the band for pouch dilatation/slippage, and long-term follow-up through regular office visits and phone calls. Results: Among 272 patients who had laparoscopic gastric banding, 20 (7.3%) developed pouch dilatation and/or slippage, of whom 19 underwent reoperation. Laparoscopic band repositioning was performed in 9 patients. One of them developed an intraabdominal collection postoperatively and required percutaneous CT-guided drainage. Recovery was uneventful in the other 8. Follow-up since reoperation varies from 13 to 42 months (mean 20 months). The result was good in 2 patients who lost further weight, satisfactory in 1 whose weight remained stable, and unsatisfactory in 6 patients. Weight loss was insufficient in 2, dilatation recurred in 2, and band infection or erosion developed each in 1 patient. 5 patients required further surgery: band removal in 3 and conversion to gastric bypass in 2. Conclusions: Laparoscopic band repositioning is feasible and safe if pouch dilatation and/or slippage develops after gastric banding. The mid-term results are disappointing in two-thirds of the patients. In some patients, pouch dilatation could result from poor adjustment to diet restriction rather than merely from original malplacement. Conversion to gastric bypass may be a better option in these cases. abstract_id: PUBMED:28217406 Gastropericardial Fistula as a Late Complication of Laparoscopic Gastric Banding. Laparoscopic adjustable gastric banding (LAGB) is a bariatric procedure that is being performed with increasing frequency as an alternative management option for morbid obesity. Several common complications have been reported including gastric band slippage and associated pouch dilatation, intragastric erosion of the band, gastric wall perforation, and abscess formation. We present a case of gastropericardial fistula occurring nine years after an LAGB. There have been no previous documented cases of the complication after this procedure. Answer: Yes, pouch dilatation and slippage after adjustable gastric banding (LAGB) continue to be issues. The incidence of pouch dilatation with or without band slippage was reported to be 11.1% in a study where management was individualized according to clinico-radiologic patterns, and gastric plication below the band was suggested as a preventive measure (PUBMED:24339300). Another study highlighted the importance of computed tomography (CT) imaging criteria for diagnosing gastric band slippage, with enlarged gastric pouch size being most correlated with slippage (PUBMED:36160820). The formation of a large gastric pouch is a rare but significant complication that can cause oral feeding intolerance and may require revisional surgery (PUBMED:27650451). A change in the operation technique to the "pars flaccida technique" has been shown to decrease the rate of band slippage (PUBMED:12630624), and symmetrical pouch dilatation (SPD) has become a more prominent issue since the introduction of the pars flaccida approach and improved anterior fixation (PUBMED:18431612). Outcomes after repositioning the band for slippage or pouch dilation have been mixed, with some patients preserving initial weight loss and resolving comorbidities, while others experienced recurrent issues (PUBMED:17138233). A technical modification of the band placement procedure, involving suturing the gastric lesser curvature below the band to the right phrenic crus, has been reported to dramatically reduce the incidence of gastric pouch dilatation (PUBMED:16469212). However, laparoscopic band repositioning for pouch dilatation/slippage has shown disappointing mid-term results in two-thirds of patients, suggesting that conversion to gastric bypass may be a better option in some cases (PUBMED:11501365). Lastly, while not directly related to pouch dilatation or slippage, it is worth noting that LAGB can lead to other severe late complications such as gastropericardial fistula, although this is extremely rare (PUBMED:28217406).
Instruction: Is common sheath extravesical reimplantation an effective technique to correct reflux in duplicated collecting systems? Abstracts: abstract_id: PUBMED:14501656 Is common sheath extravesical reimplantation an effective technique to correct reflux in duplicated collecting systems? Purpose: We evaluate the outcome vesicoureteral reflux (VUR) in duplicated collecting systems compared to single collecting systems corrected using an extravesical detrusorrhaphy approach. Materials And Methods: We reviewed the records of 266 patients (422 ureters) treated for VUR using an extravesical approach between 1991 and 2001. Inclusion criteria were primary reflux in single (201 patients, 125 bilateral) or duplicated collecting systems (65, 31 bilateral) in patients not undergoing other concomitant surgery with at least 1 year of postoperative followup. The indication for surgical intervention was unresolved reflux (greater than 4 years) in more than 70% of patients. Postoperative evaluation included a voiding cystourethrogram at 3 months and 12 months if reflux was unresolved at 3 months. Also ultrasound was performed at 6 weeks or earlier if clinically indicated and 12 months. Results: Group 1 (duplicated collecting systems) and group 2 (single collecting systems) were comparable for age, sex distribution and reflux grade distribution. Overall success rate at 3 and 12 months was 94.7% and 98.9% for group 1, and 95.1% and 98.5% for group 2, respectively. The difference in success rate at 3 and 12 months was not statistically significant (p &gt;0.05). Of note in both groups postoperative VUR was contralateral in more than 40% of cases. Postoperative hydronephrosis (Society for Fetal Urology grade 1, 2 or 1 increment in grade from preoperative status) was observed in 5.3% and 7.3% of ureteral units at 6 weeks in groups 1 and 2, respectively (p &gt;0.05). At 12 months less than 1% of ureteral units exhibited low grade residual hydronephrosis. No high grade postoperative hydronephrosis was observed in either group, and there were no intraoperative complications. Postoperative urinary retention occurred in 4.7% and 4% of patients in groups 1 and 2, respectively (p &gt;0.05). Conclusions: Common sheath extravesical reimplantation is highly effective in treating VUR. The common sheath extravesical reimplantation for duplicated collecting systems is as effective in nonduplicated systems and is associated with minimal perioperative morbidity. abstract_id: PUBMED:11743342 Extravesical common sheath detrusorrhaphy (ureteroneocystotomy) and reflux in duplicated collecting systems. Purpose: Various surgical approaches have been used for treatment of vesicoureteral reflux in uncomplicated duplicated collecting systems. We retrospectively evaluated our success with extravesical common sheath detrusorrhaphy in such a setting. Materials And Methods: Between 1990 and 1997, unilateral and bilateral extravesical nondismembered detrusorrhaphy was performed in 24 and 19 patients, respectively, with 51 duplicated collecting systems, including 8 bilateral, for management of vesicoureteral reflux. Vesicoureteral reflux was present in the moieties of ipsilateral duplicated kidney in 2 patients, and bilateral reimplantation was performed in 2 who had a history of contralateral vesicoureteral reflux that resolved. Patient age ranged from 1 to 16 years (median 3.67). Followup ranged from 36 to 114 months (median 86), and included cystourethrography and renal ultrasound. Results: Vesicoureteral reflux resolved in all but 1 duplicated renal moiety, which was a success rate of 98%. In this patient the persistent reflux resolved by 16 months postoperatively on conservative treatment. There were no intraoperative complications and all patients resumed spontaneous voiding before discharge home. No postoperative obstruction of the involved kidney or de novo contralateral reflux developed in any patient. Conclusions: Common sheath ureteral reimplantation with extravesical detrusorrhaphy can be successfully used to treat vesicoureteral reflux in uncomplicated duplicated collecting systems. This approach demonstrates excellent long-term surgical outcome with minimal perioperative morbidity. abstract_id: PUBMED:8632597 Common sheath reimplantation yields excellent results in the treatment of vesicoureteral reflux in duplicated collecting systems. Purpose: We evaluated our 10-year experience with the surgical treatment of vesicoureteral reflux in uncomplicated duplicated collecting systems. Materials And Methods: Between 1984 and 1994, 54 refluxing renal units (8 bilateral) in 37 female and 9 male patients required surgery. Patient age ranged from 7 months to 17 years (average 4.9 at surgery). Postoperative followup (average 14.2 months) included voiding cystourethrography and renal sonography or excretory urography. Results: Common sheath ureteral reimplantation via an intravesical approach was performed in 48 of the 54 refluxing renal units. Of the remaining 6 renal units detrussorrhaphy was performed in 4, and ureteroureterostomy combined with ureteral reimplantation and partial lower pole nephrectomy were done in 1 each. Two treated renal units had persistent postoperative vesicoureteral reflux, which resolved after subureteral polytetrafluoroethylene (Teflon) injection. No renal had postoperative hydronephrosis. Contralateral reflux was identified in 1 patient who underwent unilateral reimplantation. Our overall success rate was 96% for the surgical correction of vesicoureteral reflux in uncomplicated duplicated collecting systems. Common sheath reimplantation had a 98% success rate. Conclusions: Although a duplicated collecting system increases the risk for surgical treatment, the presence of a duplication anomaly does not adversely affect surgical outcome. Modifications of procedures commonly performed in the surgical treatment of single system reflux to accommodate common sheath reimplantation have excellent surgical results with minimal morbidity. abstract_id: PUBMED:30007499 Robot-assisted laparoscopic common sheath ureteral reimplantation in duplex ureters: LUAA technique tips for optimal outcomes. Objective: To describe surgical tips for robot-assisted laparoscopic (RAL) common sheath ureteral reimplantation using the previously reported LUAA technique in patients with duplex renal collecting systems, and to assess the efficacy of this technique by reviewing operative outcomes in a single-institution case series. Methods: We retrospectively reviewed a consecutive series of patients with duplicated collecting systems and vesicoureteral reflux (VUR) who underwent RAL extravesical common sheath ureteral reimplantation at a single medical center from 2010 to 2017. We included all duplex ureters requiring antireflux surgery. A standardized technique (LUAA) was used in each case, with additional tips used as described herein: a) careful common sheath ureteral mobilization, b) wide detrusorotomy with adequate detrusor flap elevation from the mucosa, and c) appropriate tension during tunnel detrusorraphy. Patient demographics, perioperative data, and follow-up imaging were reviewed. Radiographic resolution was defined as absence of VUR on voiding cystourethrogram (VCUG) performed 4 months postoperatively. Only patients undergoing ureteral reimplantation for VUR were included. Those with ureterovesical junction obstruction were excluded. Results: A total of 13 patients underwent RAL common sheath ureteral reimplantation, with three bilateral cases for a total of 16 duplicated ureters reimplanted. Mean age at surgery was 3.64 ± 1.04 years. Mean operative time was 125.1 ± 30.73 min in unilateral cases and 200 ± 51.18 in bilateral cases. Complete resolution of VUR was achieved in 14 of 16 ureters (87.50%). There were no high-grade complications (IIIV on the Clavien-Dindo scale). There were two grade II complications in our cohort, with a mean follow-up of 17.18 ± 18.41 months. Conclusions: We report a radiographic success rate of 87.50% for RAL common sheath ureteral reimplantation in children with duplicated ureters, using the LUAA technique with several additional technical considerations. In our experience, careful mobilization of the ureter to preserve vascularity, a wide detrusorotomy to adequately elevate the detrusor flaps, and appropriate tension during detrusorraphy are essential components to optimize outcomes and prevent complications. abstract_id: PUBMED:27775275 Laparoscopic extravesical ureteral reimplantation following Lich-Gregoire technique. Medium-term prospective study Objectives: To evaluate our results of Laparoscopic extravesical ureteral reimplantation according to Lich-Gregoire technique in the of vesicoureteral reflux treatment. Material And Methods: Between August 2007 and January 2013, 108 renal units in 80 patients were treated. There were 23 patients with double renal system; in 5 cases were associated with obstruction, and 3 with completely impaired function of the upper pole and ipsilateral ureterocele presence. Results: Laparoscopic extravesical ureteral reimplantation was performed in all cases. The mean age was 52 months; The mean operative time was 70 minutes in unilateral and 144 minutes in bilateral. Three laparoscopic heminephroureterectomies were performed at the same time, with incision of ureterocele. The average hospital stay was 27 hours. Complete resolution was 96.5%. Mean follow-up was 27 months. Conclusion: Laparoscopic extravesical ureteral reimplantation following Lich-Gregoire technique is effective against the unilateral, bilateral and associated with double renal system vesicoureteral reflux. It's possible to perform others procedures in a securely and concomitantly way (eg., heminephroureterectomy). A shorter hospital stay, more comfort and quicker recovery are achieved, with similar succes rates to open surgery. abstract_id: PUBMED:24841534 Robot-assisted laparoscopic ureteric reimplantation: extravesical technique. Objectives: To describe our standardised approach to performing robot-assisted extravesical ureteric reimplantation. Patients And Methods: A total of 29 children, with high grade (III-V) vesico-ureteric reflux (VUR) underwent robot-assisted extravesical ureteric reimplantation between September 2010 and September 2013. Follow-up renal ultrasonography was performed at 1 month and 3 months and a voiding cysto-urethrogram (VCUG) was obtained at 4 months to assess VUR resolution. Results: The mean (range) patient age at the time of surgery was 5.38 (3.0-10.0) years. Postoperative VCUG showed complete resolution of VUR in 32/40 ureters (80%). Of the remaining refluxing ureters, downgrading of VUR on VCUG was shown in 7/8 ureters (87.5%). The mean (range) length of hospital stay was 1.8 (1-3) days. Conclusions: In conclusion, robot-assisted extravesical ureteric reimplantation is technically feasible with acceptable resolution of VUR. abstract_id: PUBMED:34564839 Extravesical Common Sheath Ureteral Reimplantation Versus Intravesical Techniques for Refluxing Duplex Systems in Children. Purpose: Retrospective comparative study of the efficacy of extravesical non-dismembered common sheath ureteral reimplantation (ECSR) versus intravesical common sheath ureteral reimplantation (ICSR) techniques for the correction of vesicoureteral reflux (VUR) in complete duplex systems. Material And Methods: Between 2010 and 2019, ECSR was performed in 38 children (8 bilaterally), and the mean ages at presentation and at surgery were 31 and 57 months, respectively. The ICSR technique was performed in 25 units (25 patients). Voiding cystography and ultrasound of the kidney and bladder were performed 3 and 12 months postoperatively. We analyzed the surgical outcomes for both groups. Results: The mean follow-up times for the ECSR and ICSR groups were 15 and 18 months, respectively. The success rate of the ECSR group was 93.5% at 3 months, improving to 95.7% at an average of one year; the rate of the ICSR group was 96% at 3 months and was the same after one year, with no significant difference between the two groups (p = .66). Postoperative complications were compared in the ECSR and ICSR groups: transient contralateral VUR was seen in 5 renal units versus 4, de novo hydronephrosis was seen in 3 units versus 2, and UTIs were observed in 3 patients versus 4. Conclusion: Both (ECSR) and (ICSR) surgeries are highly successful for the correction of VUR in uncomplicated complete duplex systems. The results of the extravesical approach are comparable with those of the intravesical technique with less morbidity and a shorter hospital stay. Thus, ECSR is our preferred technique when open surgical repair is indicated. ICSR should be reserved for complicated duplex systems necessitating concomitant reconstructive surgery. abstract_id: PUBMED:36340705 Extravesical vs. intravesical ureteric reimplantation for primary vesicoureteral reflux: A systematic review and meta-analysis. Purpose: This study aims to compare the outcomes of extravesical (EVUR) and intravesical (IVUR) ureteric reimplantation for primary vesicoureteral reflux (VUR) via systematic review and meta-analysis. Methods: Literature review from Medline, Embase, and Cochrane since inception to March 2022 was performed. Meta-analysis was conducted on eligible randomized controlled trials (RCT) and observational cohort studies (OCS) comparing outcomes between EVUR and IVUR. Results: Twelve studies were included, comprising 577 patients (778 ureters) operated by EVUR and 395 patients (635 ureters) by IVUR. Pre-operative VUR grade, postoperative VUR persistence and hydronephrosis was not statistically significant. EVUR had shorter operative time [mean differences (MD) -22.91 min; 95% confidence interval (CI), -44.53 to -1.30, P = 0.04] and hospital stay (MD -2.09 days; 95% CI, -2.82 to -1.36, P &lt; 0.00001) compared to IVUR. Bilateral EVUR had higher risk of postoperative acute urinary retention (ARU) (8.1%) compared to bilateral IVUR (1.7%) (OR = 4.40; 95% CI, 1.33-14.58, P = 0.02). No patient undergoing unilateral EVUR or IVUR experienced ARU. Conclusion: Both EVUR and IVUR are equally effective in correcting primary VUR. Operative time and hospital stay are shorter after EVUR compared to IVUR. However, bilateral EVUR is associated with higher risk of postoperative ARU. abstract_id: PUBMED:32980963 Robot-assisted laparoscopic extravesical versus conventional laparoscopic extravesical ureteric reimplantation for pediatric primary vesicoureteric reflux: a systematic review and meta-analysis. To perform a systematic review and meta-analysis comparing the outcomes of robotic-assisted laparoscopic extravesical ureteric reimplantation (RALUR) with conventional laparoscopic extravesical ureteric reimplantation (LEVUR) for primary vesicoureteric reflux in children. We searched the databases to identify all papers of RALUR and LEVUR between 2001 and 2020. Systematic review was performed to identify patient data, age, reflux grades, laterality, duration of surgery, time to discharge, success rate and complications. Heterogeneity was reported with I2 statistics and publication bias was assessed by Doi curve and Luis Furuya-Kanamori index. Pooled data from both groups were compared with Student's t test and Fisher's exact test, wherever appropriate. From a total of 43 articles screened, 28 articles were included (18 RALUR and 10 LEVUR). The I2 statistics for RALUR and LEVUR showed heterogeneity of 86% and 25%, respectively. Both groups had comparable minor publication bias. RALUR had higher proportion of grade 5 VUR (p &lt; 0.001) and bilateral reimplantations (p &lt; 0.001). The success rate of RALUR was significantly lower than that of LEVUR (97.6% vs. 93.4%, p = 0.0018). RALUR took a significantly longer duration for surgery compared to LEVUR, both for unilateral and bilateral cases (p &lt; 0.001). The complication rate was not significantly different: 6.6% for RALUR and 5.35% for LEVUR (p = 0.32). The most common complication in both groups was post-operative urinary retention in bilateral cases. Articles on LEVUR reported more consistent success. RALUR series had higher proportion of grade 5 cases and bilateral reimplantations. RALUR reported longer operative time and lower success compared to LEVUR, with a complication rate comparable to LEVUR. abstract_id: PUBMED:30984718 The Robot-Assisted Extravesical Anti-reflux Surgery: How We Overcame the Learning Curve. Management of vesicoureteral reflux (VUR) has evolved over the past several decades, with a trend toward a decrease in surgical management. In spite of this, ureteral reimplantation remains a commonly performed procedure by pediatric urologists in selected cases. Although the basic tenets of the ureteral reimplant procedure remain the same, the extra- vs. intravesical approach, and the traditional open vs. minimally invasive approach remain the primary options to correct reflux. Considering the advantages conferred by the robotic surgery platform, many leading centers have preferentially adopted robot-assisted laparoscopic extravesical anti-reflux surgery, or in common surgical parlance, the robot-assisted laparoscopic ureteral reimplantation (RALUR), over pure laparoscopic or open approaches. Predicated on our experience of performing over 170 cases of RALUR, we have made technical modifications which we posit reduce the morbidity of the procedure while offering acceptable outcomes. This review highlights the evolution and establishment of RALUR as a standardization of care in the surgical management of VUR at our institution. In particular, we emphasize the technical nuances and specific challenges encountered through the learning curve in hopes of facilitating this process for others. Answer: Yes, common sheath extravesical reimplantation is an effective technique to correct reflux in duplicated collecting systems. Studies have shown that this technique has high success rates and is associated with minimal perioperative morbidity. Specifically, one study reported an overall success rate of 94.7% at 3 months and 98.9% at 12 months for duplicated collecting systems, which was comparable to the success rate for single collecting systems (95.1% at 3 months and 98.5% at 12 months) (PUBMED:14501656). Another study demonstrated a success rate of 98% with extravesical common sheath detrusorrhaphy in treating vesicoureteral reflux in duplicated collecting systems, with no intraoperative complications and no postoperative obstruction or de novo contralateral reflux (PUBMED:11743342). Additionally, a 10-year experience with common sheath reimplantation via an intravesical approach showed a 96% success rate for the surgical correction of vesicoureteral reflux in uncomplicated duplicated collecting systems (PUBMED:8632597). Furthermore, robot-assisted laparoscopic common sheath ureteral reimplantation using the LUAA technique in patients with duplex renal collecting systems has been reported to have a radiographic success rate of 87.50%, with careful mobilization of the ureter, wide detrusorotomy, and appropriate tension during detrusorraphy being essential components to optimize outcomes (PUBMED:30007499). Another study comparing extravesical non-dismembered common sheath ureteral reimplantation (ECSR) with intravesical common sheath ureteral reimplantation (ICSR) found that both techniques were highly successful for the correction of vesicoureteral reflux in complete duplex systems, with success rates of 93.5% at 3 months and 95.7% at one year for ECSR, and 96% at 3 months and one year for ICSR (PUBMED:34564839). In summary, the evidence suggests that common sheath extravesical reimplantation is a highly effective technique for correcting reflux in duplicated collecting systems, with success rates comparable to those for single systems and minimal perioperative morbidity.
Instruction: Extended field intensity modulated radiation therapy for gynecologic cancers: Is the risk of duodenal toxicity high? Abstracts: abstract_id: PUBMED:25532491 Extended field intensity modulated radiation therapy for gynecologic cancers: Is the risk of duodenal toxicity high? Purpose: There have been conflicting reports regarding the incidence of duodenal toxicity in patients receiving intensity modulated radiation therapy (IMRT) with an extended field covering the para-aortic (PA) lymph nodes for gynecologic cancers. We reviewed our experiences and rates of duodenal toxicity in patients treated with extended field IMRT. Methods And Materials: Patients with either cervical or endometrial cancer who were treated with IMRT to the PA nodes for involved lymph nodes or for prophylactic intent between 2005 and 2013 were included. For prophylactic intent, the radiation dose to the PA nodes was 45 Gy in 25 fractions. For involved lymph nodes, a boost was delivered to the gross disease with a 0.7-cm expansion, with editing for critical structures. The entire duodenum was retrospectively contoured on all patients from the gastric outlet to the jejunal transition. Results: We identified 76 eligible patients with endometrial and cervical cancer. The PA region was treated prophylactically in 46.1% (n = 35) and for involved PA lymph nodes in 53.9% (n = 41). The duodenum was contoured on all patients with a median volume of 83.2 cm(3) (range, 21.2-174.9 cm(3)). The mean volume of duodenum receiving 55 Gy (V55) for those treated prophylactically and for involved PA nodes was 0 cm(3) and 0.8 cm(3) (range, 0-10.6 cm(3)), respectively (P = .014). Specifically, no patient had a V55 &gt;15 cm(3). The mean V40 was 28.3 cm(3) (range, 0-77.3 cm(3)) and 41.4 (range, 0-90.0 cm(3)), respectively (P = .016). The mean dose delivered to 2 cm(3) of the duodenum was 34.9 Gy (range, 0-52.3 Gy) and 50.1 Gy (range, 31.3 - 58.3 Gy), respectively. Grade 3 acute gastrointestinal toxicity was recorded in 3.9% (n = 3) of patients. Conclusions: In our experience, the treatment of PA lymph nodes using an IMRT technique is associated with a low duodenal toxicity profile and there has been no high-grade late duodenal toxicity. abstract_id: PUBMED:16730136 Preliminary outcome and toxicity report of extended-field, intensity-modulated radiation therapy for gynecologic malignancies. Purpose: The aim of this article is to report a preliminary analysis of our initial clinical experience with extended-field intensity-modulated radiotherapy for gynecologic malignancies. Methods And Materials: Between November 2002 and May 2005, 13 women with gynecologic malignancies were treated with extended-field radiation therapy. Of the women, 7 had endometrial cancer, 4 cervical cancer, 1 recurrent endometrial cancer, and 1 suspected cervical cancer. All women underwent computed tomography planning, with the upper vagina, parametria, and uterus (if present) contoured within the CTV. In addition, the clinical target volume contained the pelvic and presacral lymph nodes as well as the para-aortic lymph nodes. All acute toxicity was scored according to the Common Terminology Criteria for Adverse Events (CTCAE v 3.0). All late toxicity was scored using the Radiation Therapy Oncology Group late toxicity score. Results: The median follow-up was 11 months. Extended-field intensity-modulated radiation therapy (IMRT) for gynecologic malignancies was well tolerated. Two patients experienced Grade 3 or higher toxicity. Both patients were treated with concurrent cisplatin based chemotherapy. Neither patient was planned with bone marrow sparing. Eleven patients had no evidence of late toxicity. One patient with multiple previous surgeries experienced a bowel obstruction. One patient with bilateral grossly involved and unresectable common iliac nodes experienced bilateral lymphedema. Extended-field-IMRT achieved good local control with only 1 patient, who was metastatic at presentation, and 1 patient not able to complete treatment, experiencing in-field failure. Conclusions: Extended-field IMRT is safe and effective with a low incidence of acute toxicity. Longer follow-up is needed to assess chronic toxicity, although early results are promising. abstract_id: PUBMED:24411609 Dosimetric predictors of duodenal toxicity after intensity modulated radiation therapy for treatment of the para-aortic nodes in gynecologic cancer. Purpose: To determine the incidence of duodenal toxicity in patients receiving intensity modulated radiation therapy (IMRT) for treatment of para-aortic nodes and to identify dosimetric parameters predictive of late duodenal toxicity. Methods And Materials: We identified 105 eligible patients with gynecologic malignancies who were treated with IMRT for gross metastatic disease in the para-aortic nodes from January 1, 2005, through December 31, 2009. Patients were treated to a nodal clinical target volume to 45 to 50.4 Gy with a boost to 60 to 66 Gy. The duodenum was contoured, and dosimetric data were exported for analysis. Duodenal toxicity was scored according to Radiation Therapy Oncology Group criteria. Univariate Cox proportional hazards analysis and recursive partitioning analysis were used to determine associations between dosimetric variables and time to toxicity and to identify the optimal threshold that separated patients according to risk of toxicity. Results: Nine of the 105 patients experienced grade 2 to grade 5 duodenal toxicity, confirmed by endoscopy in all cases. The 3-year actuarial rate of any duodenal toxicity was 11.7%. A larger volume of the duodenum receiving 55 Gy (V55) was associated with higher rates of duodenal toxicity. The 3-year actuarial rates of duodenal toxicity with V55 above and below 15 cm(3) were 48.6% and 7.4%, respectively (P&lt;.01). In Cox univariate analysis of dosimetric variables, V55 was associated with duodenal toxicity (P=.029). In recursive partitioning analysis, V55 less than 13.94% segregated all patients with duodenal toxicity. Conclusions: Dose-escalated IMRT can safely and effectively treat para-aortic nodal disease in gynecologic malignancies, provided that care is taken to limit the dose to the duodenum to reduce the risk of late duodenal toxicity. Limiting V55 to below 15 cm(3) may reduce the risk of duodenal complications. In cases where the treatment cannot be delivered within these constraints, consideration should be given to other treatment approaches such as resection or initial chemotherapy. abstract_id: PUBMED:26715873 Para-aortic and pelvic extended-field radiotherapy for advanced-stage uterine cancer: dosimetric and toxicity comparison between the four-field box and intensity-modulated techniques. Background: In patients with advanced-stage endometrial carcinoma (eca), extended-field radiotherapy (efrt) is traditionally delivered by the 3-dimensional conformal (3d-crt) 4-field box technique. In recent years, the use of intensity-modulated radiotherapy (imrt) in gynecologic cancers has increased. We compared the delivery of efrt by the 3d-crt and contemporary imrt techniques. Methods: After surgical staging and adjuvant chemotherapy in 38 eca patients, efrt was delivered by either imrt or 3d-crt. Doses to the organs at risk, side effects, and outcomes were compared between the techniques. Results: Of the 38 eca patients, 33 were stage iiic, and 5 were stage ivb. In the imrt group, maximal doses to rectum, small intestine, and bladder were significantly higher, and mean dose to bladder was lower (p &lt; 0.0001). Most acute gastrointestinal, genitourinary, and hematologic side effects were grade i or ii and were comparable between the groups. In long-term follow-up, only grade 1 cystitis at 3 months was statistically higher in the imrt patients. No grade iii or iv gastrointestinal or genitourinary toxicities were observed. No statistically significant differences in overall and disease-free survival or recurrence rates were observed between the techniques. Conclusions: In advanced eca patients, imrt is a safe and effective technique for delivering efrt to the pelvis and para-aortic region, and it is comparable to the 3d-crt 4-field box technique in both side effects and efficacy. For centres in which imrt is not readily available, 3d-crt is a valid alternative. abstract_id: PUBMED:33541379 Dosimetric predictors of nephrotoxicity in patients receiving extended-field radiation therapy for gynecologic cancer. Purpose: We sought dosimetric predictors of a decreasing estimated glomerular filtration rate (eGFR) in gynecological oncology patients receiving extended-field radiation therapy (EFRT). Materials And Methods: Between July 2012 and April 2020, 98 consecutive cervical or endometrial cancer patients underwent EFRT or whole-pelvis radiation therapy (WPRT) with concurrent cisplatin chemotherapy in our institution. To explore the effect of concurrent cisplatin chemotherapy on renal function, the renal function of the WPRT patients was examined. Of the 98 patients, 34 cervical or endometrial cancer patients underwent EFRT including extended-field intensity-modulated radiation therapy (EF-IMRT) and 64 cervical cancer patients underwent WPRT with cisplatin. Of the 34 EFRT patients, 32 underwent concurrent cisplatin chemotherapy. Excluding patients exhibiting recurrences within 6 months, 31 EFRT patients were analyzed in terms of the dose-volume kidney histograms (the percentages of kidney volumes receiving 12, 16, 20, and 24 Gy) and the post- to pre-treatment eGFR ratios. We calculated Pearson correlation coefficients between the renal dose volume and the percentage eGFR reductions of the 31 EFRT patients, and those treated via EF-IMRT. Renal dose constraint significance was evaluated using the Mann-Whitney U test. Results: The eGFR value after WPRT with cisplatin remained largely unchanged for 12 months, unlike that after EFRT. In EFRT patients, a strong correlation was evident between the KV20Gy dose and the post- to pre-treatment eGFR ratio (correlation coefficients - 0.80 for all patients and - 0.74 for EF-IMRT patients). In EF-IMRT patients, the kidney volume receiving 20 Gy tended to correlate negatively with the eGFR reduction. The Mann-Whitney U test showed that patients with KV20Gy values &lt; 10% retained significantly better renal function than did patients with KV20Gy values &gt; 10% (P = 0.002). Conclusions: Imposition of a severe kidney dose constraint during EF-IMRT may reduce nephrotic toxicity. Future prospective investigations of kidney-sparing EF-IMRT are required. abstract_id: PUBMED:24416580 The role of intensity modulated radiotherapy in gynecological radiotherapy: Present and future. Aim: This manuscript reviews the English language literature on the use of intensity modulated radiation therapy (IMRT) for gynecologic malignancies, focusing on the treatment cervical cancer. Background: Radiation therapy plays a key role in both definitive and adjuvant treatment of these patients, although efforts continue to minimize acute and chronic toxicity. IMRT is an attractive option because of the potential to dose escalate to the target while sparing organs at risk. Methods And Materials: The English language literature was reviewed for relevant studies. Results: Multiple heterogeneous studies have showed dosimetric and clinical benefits with reduction in acute and late gastrointestinal, genitourinary and hematologic toxicity, especially in the post hysterectomy scenario and for dose escalation to para-aortic nodes. Consensus is evolving regarding necessary margins and target delineation in the context of organ movement and tumor shrinkage during the course of radiotherapy. Protocols with daily soft-tissue visualization are being investigated. Conclusions: Consistency in approach and reporting are vital in order to acquire the data to justify the considerable increased expense of IMRT. abstract_id: PUBMED:20724189 Which intensity modulated radiation therapy? From "step and shoot" to volumetric modulated arc therapy, point of view of the radiation oncologist Intensity modulated radiation therapy (IMRT) offers optimal dosimetric and clinical results in terms of acute toxicity, allows augmenting the dose to the target volumes and therefore, appears promising for local control and disease-free survival. However, several pitfalls to this treatment are to be considered, namely a long treatment time and a high number of monitor unit (MU) required. The dosimetric results of the volumetric modulated arctherapy gives at least similar target coverage and preservation of organs at risk, while significantly reducing the number of required MUs and the overall treatment time. This has a potential impact on the treatment quality and the potential risk of secondary cancers. Volumetric modulated arctherapy allows implementation of stereotactic radiation therapy and complex treatments previously considered not feasible with IMRT. The future will involve this technology of high precision to determine the dose and to the target in real time using the image-guided radiotherapy. Tools combining these two methods are in development. abstract_id: PUBMED:20605763 Intensity modulated radiation therapy Intensity modulated radiation therapy enables to increase the dose delivered to the target volume while avoiding the irradiation of organs at risk and normal tissues. Indications of such technique are mainly proposed in the treatment of prostatic carcinoma, head and neck cancers and gyneacological pelvic malignancies. The different mandatory steps for implementing these treatments require rigorous quality assurance program in order to validate indications, definitions of target volumes and constraints to be respected for normal tissues. Nowadays its efficiency and results are well-known for high dose irradiation of prostatic cancer and to prevent xerostomia in patients treated for head and neck cancers. The training of the whole team involved in the implementation of these treatments is required. It is a basic requirement to treat an increasing number of patients who will benefit from this technological improvement. abstract_id: PUBMED:20731296 Intensity modulated radiation therapy technique in the treatment of gynecologic malignancies Objective: A description of intensity modulated radiation therapy (IMRT) technique and its implementation in the treatment of gynecologic malignancies. Subject: A review article. Setting: Department of Oncology and Radiotherapy, University Hospital in Hradec Králové. Subject And Method: General explanation of IMRT priciples, its benefits and limitations, and a review of published data about its utilization in the treatment of endometrial, cervical, and vulvar carcinoma. Conclusion: IMRT represents an accessible and highly conformal external beam radiothrapy technique, which enables a significant sparing of healthy tissue with consequent reduction of radiation morbidity in comparison with other conventional and conformal techniques. A feasibilty of dose excalation with preservation of low toxicity is another advandage of IMRT. It can be utilized in the treatment of most frequent gynecologic tumors, especially in endometrial, cervical, and vulvar carcinoma. abstract_id: PUBMED:23279854 Small bowel toxicity in pelvic radiotherapy for postoperative gynecological cancer: comparison between conformal radiotherapy and intensity modulated radiotherapy. Aims: To compare the dosimetric advantages of pelvic intensity modulated radiotherapy (IMRT) with three-dimensional conformal radiotherapy (3D CRT) in small bowel dose reduction and dosimetrically correlate the clinical benefit, if any. Methods: This retrospective study included 60 patients with gynecological cancers treated postoperatively with radiotherapy to the whole pelvis. Radiation Therapy Oncology Group (RTOG) contouring guidelines were used for contouring the pelvic nodal stations. All plans were generated using Plato Sunrise treatment planning system. The RTOG acute morbidity scoring criteria were used to define acute small bowel toxicity in our patients. Treatment was delivered using the Elekta Precise system. Patients were followed up twice weekly while on treatment. Statistical analyses (Spearman's rho and Kruskal-Wallis test) were performed using SPSS software (vers. 15). Results: In all, 65% were postoperative endometrium carcinoma and 35% were postoperative cervix carcinoma patients; 34 patients were treated with IMRT and 26 patients underwent 3D CRT. In all, 28 patients experienced ≥ grade 2 small bowel toxicity (eight in the IMRT group). IMRT was able to significantly reduce the dose to bowel for doses above 30 Gy. Small bowel toxicity showed no correlation with doses below 30 Gy. The volume of the small bowel receiving more than 35 Gy of radiation was a significant predictor of the need for medication to control diarrhea during radiotherapy. Conclusion: Use of IMRT resulted in lower doses to the small bowel, resulting in lesser toxicity, and translated to the better tolerability of pelvic radiotherapy. Answer: Based on the provided abstracts, the risk of duodenal toxicity from extended field intensity modulated radiation therapy (IMRT) for gynecologic cancers appears to be low. One study reviewed experiences and rates of duodenal toxicity in patients treated with extended field IMRT and found that the treatment of para-aortic (PA) lymph nodes using an IMRT technique is associated with a low duodenal toxicity profile, with no high-grade late duodenal toxicity observed (PUBMED:25532491). Another study reported that extended-field IMRT for gynecologic malignancies was well tolerated, with a low incidence of acute toxicity and promising early results for chronic toxicity (PUBMED:16730136). However, it is important to note that dosimetric predictors of duodenal toxicity have been identified, and care must be taken to limit the dose to the duodenum to reduce the risk of late duodenal toxicity. Specifically, limiting the volume of the duodenum receiving 55 Gy (V55) to below 15 cm^3 may reduce the risk of duodenal complications (PUBMED:24411609). Additionally, another study comparing extended-field radiotherapy delivered by IMRT versus the 3D conformal technique found that IMRT is a safe and effective technique for delivering extended-field radiotherapy to the pelvis and para-aortic region, comparable to the 3D conformal technique in terms of side effects and efficacy (PUBMED:26715873). In conclusion, while there is a potential risk of duodenal toxicity with extended field IMRT for gynecologic cancers, careful planning and dose constraints can minimize this risk, and the overall incidence of high-grade duodenal toxicity appears to be low.
Instruction: Inguinal hernia repair: can one identify the three main nerves of the region? Abstracts: abstract_id: PUBMED:26291254 Inguinal hernia repair: can one identify the three main nerves of the region? Objective: To identify the nerves in the groin during inguinal hernia repair by inguinotomy. Methods: We conducted a prospective, sequenced, non-randomized study comprising 38 patients undergoing inguinal hernia repair with placement of polypropylene mesh. Results: The male patients were 36 (94.7%), with a mean age and standard deviation of 43.1 ± 14.5, body mass index of 24.4 ± 2.8. Comorbidities were hypertension in two (5.2%), smoking in 12 (31.5%) and obesity in two (5.2%). The hernia was located only on the right in 21 (55.2%) patients, only on the left in 11 (28.9%), and was bilateral in six (15.7%) patients. Prior hernia repair was present in seven (18.4%) patients. The identification of the three nerves during operation was made in 20 (52.6%) patients, the ilioinguinal nerve and the iliohypogastric nerve were identified in 33 (86.8%), and the genital nerve branch of the genitofemoral nerve, in 20 (52.6%). Resection of at least one of the nerves was performed in seven (18.4%) cases, two iliohypogastric nerves and five ilioinguinal nerves. The average operating time was 70.8 ± 18.2 minutes. The hospital stay was 1.42 ± 1.18 days. Ten patients (26.3%) returned to physical activity around the first postoperative visit, and 37 (97.3%) in the last. The follow-up time was 95.6 ± 23.5 days. The inability to identify the ilioinguinal nerve was associated with previous repair (p = 0.035). Conclusion: The identification of the three nerves during inguinal hernia surgery has been described in more than half of the cases and prior repair interfered with the identification of ilioinguinal nerve. abstract_id: PUBMED:22165058 A short postgraduate anatomy course may improve the junior surgical residents' anatomy knowledge for the nerves of the inguinal region. Background And Aim: Inguinal hernia repair is one of the most common operations in a junior surgical resident's postgraduate training. Short recall courses can improve junior residents' anatomy knowledge and results in better surgical outcomes. We aimed to investigate the effect of a short course on anatomical competency during inguinal hernia repairs. Methods: During the first 25 inguinal hernia repairs, two junior residents were asked to identify iliohypogastric, ilioinguinal, and genital branch of genitofemoral nerves. Then, the residents were given a short recall course by anatomists. Afterwards, the participants were taken into an in-vivo anatomy test again. The same parameters were recorded in another 25 inguinal hernia repairs. In addition to the nerve identification records, case characteristics [body mass index (BMI &lt; or = 25 vs. &gt;25), hernia type (indirect vs. direct), and anesthesia used (general or regional vs. local)] were recorded. Results: Anatomy education had a clear impact on the correct identification rates for the iliohypogastric and ilioinguinal nerves. The rates increased from 70% to 90% and above. Correct identification rate for the three nerves together significantly increased from 16 to 52% following anatomy education (P = 0.006). All three nerves were identified with significantly higher success rates after anatomy education. The increase in the success rate for identification of the genital branch of genitofemoral nerve was 4-fold. Conclusions: Short anatomy courses in specific subjects for junior surgical residents given by formal anatomists may be effective during postgraduate education. The benefit obtained in the present study for the inguinal region nerves may be expanded to more important anatomical structures, such as the recurrent laryngeal nerve in a thyroidectomy, or more complex subjects. abstract_id: PUBMED:12197017 Anatomical variations of the inguinal nerves and risks of injury in 110 hernia repairs. The aim of this study was to identify the anatomical variations in the inguinal course of the ilioinguinal nerve (IIN) and the iliohypogastric nerve (IHN) during operative repair of inguinal hernias. A consecutive series of 110 primary inguinal hernias were repaired by the mesh technique. Particular attention was paid to early identification and recording of the course of both the IIN and IHN and preserving them throughout the operative procedure. The course of both nerves was found to be consistent with that described in anatomical texts in only 46 of 110 explorations (41.8%). The course of one or both nerves was found to be a variant in the other 64 of 110 (58.2%) explorations and often rendered them susceptible to injury. These variations included: (1) acute infero-lateral angulation of the IIN at its exit behind the superficial inguinal ring (SIR) fibers in 20 of 64 cases; (2) similar direction of the IIN but in a plane superficial to the external oblique aponeurosis (EOA) and proximal to the SIR in 18 of 64 cases; (3) a single stem for both nerves over the spermatic cord in 24 of 64 cases, with variation in the subsequent course; (4) absence of one or both nerves in 8 of 64 cases; (5) accessory IIN or IHN in 3 of 64 cases; and (6) aberrant origin of the IIN from the genitofemoral nerve (GFN) in 2 of 64 cases. None of 68 of 105 patients seen at 3 months postoperatively (compliance rate 65%) complained of sensory disturbances or pain in a dermatome distribution of the IIN or IHN. It is concluded that anatomical variations in the inguinal course of the IIN and the IHN are extremely common but readily identifiable. Their early identification and preservation is likely to abolish, or considerably decrease, the incidence of postoperative sensory changes and/or neuralgia pain. abstract_id: PUBMED:19617174 The ilioinguinal and iliohypogastric nerves. The anatomic bases in preventing postoperative neuropathies after appendectomy, inguinal herniorraphy, caesareans Study Aim: To identify danger areas in surgical incisions (appendectomy, inguinal and Pfannenstiel incisions) by mapping the course of ilioinguinal and iliohypogastric nerves. Material And Methods: The courses of the ilioinguinal and iliohypogastric nerves from 37 unembalmed adult cadavers were mapped from their lateral emergence on the internal obliquus nerve to their midline termination in reference to anatomic landmarks (anterior superior iliac spine, midline, upper border of the pubic symphysis). With use of a mapping technique, the standard courses of both nerves were compared with sites of standard abdominal surgical incisions. Results: Sixty seven ilioinguinal and sixty four iliohypogastric nerves were identified and mapped. On average the ilioinguinal nerve perforated the internal obliquus muscles 3.30 cm medial and 3.27 inferior to the anterior superior iliac spine, and terminated its course 2.50 cm lateral to the midline and 1.92 cm superior to the upper border of the pubic symphysis. On average the iliohypogastric nerve perforated the internal obliquus muscles 2.30 cm medial and 1.20 cm inferior to the anterior superior iliac spine, and terminated its course 3.10 cm lateral to the midline and 4,80 cm superior to the upper border of the pubic symphysis. Conclusion: Surgical incisions performed below the level of the anterior superior iliac spine carry the risk of injury to the ilioinguinal and iliohypogastric nerves. abstract_id: PUBMED:18783670 Virtual simulation of the human inguinal region At the present time the development of computers allows access to some three-dimensional models (3D) of physiological or pathological human situations obtained from patients with different medical or surgical problems. These models have a wide variety of applications: knowledge of the pathogenic mechanisms, help in diagnosis, planning of surgical strategies, training of surgical residents, teaching of medicine and surgery, design of devices and materials for use in medical and surgical practice or even distance interventions. Our objective is to design a 3D model of the inguinal region with the purpose of improving knowledge of the pathogenic mechanisms of inguinal hernia, the planning of surgical strategies and the teaching of the surgery of these hernias. They can also be used study the mechanical response of biomaterials into the abdominal wall. abstract_id: PUBMED:36875277 Nerve identification in open inguinal hernioplasty: A meta-analysis. Objectives: In open inguinal hernioplasty, three inguinal nerves are encountered in the surgical field. It is advisable to identify these nerves as careful dissection reduces the chances of debilitating post-operative inguinodynia. Recognizing nerves during surgery can be challenging. Limited surgical studies have reported on the identification rates of all nerves. This study aimed to calculate the pooled prevalence of each nerve from these studies. Material And Methods: We searched PubMed, CENTRAL, CINAHL, ClinicalTrials.gov and Research Square. We selected articles that reported on the prevalence of all three nerves during surgery. A meta-analysis was performed on the data from eight studies. IVhet model from the software MetaXL was used for preparing the forest plot. Subgroup analysis was performed to understand the cause of heterogeneity. Results: The pooled prevalence rates for Ilioinguinal nerve (IIN), Iliohypogastric nerve (IHN), and genital branch of genitofemoral nerve (GB) were 84% (95% CI 67-97%), 71% (95% CI 51-89%) and 53% (95% CI 31-74%), respectively. On subgroup analysis, the identification rates were higher in single centre studies and studies with a single primary objective as nerve identification. The heterogeneity was significant in all pooled values, excluding the subgroup analysis of IHN identification rates in single-centre studies. Conclusion: The pooled values indicate low identification rates for IHN and GB. Significant heterogeneity and large confidence intervals reduce the importance of these values as quality standards. Better results are observed in single-centre studies and studies which are focused on nerve identification. abstract_id: PUBMED:24941716 Effects of variable courses of inguinal nerves on pain in patients undergoing Lichtenstein repair for inguinal hernia: preliminary results. Background: Problems due to damage to ilioinguinal and iliohypogastric nerves which have many variations following surgery for inguinal hernia cause additional work leave and delay in return to daily life. We aimed to compare outcomes of nerve identification and preservation with a careful dissection during Lichtenstein repair of inguinal hernia between normal courses of inguinal based nerves and variable courses of inguinal based nerves. Methods: This is a prospective study and 116 adult male patients with primary inguinal hernia were operated between December 2009 and June 2010. The patients with a normal nerve trace identified on exploration were assigned into Group Normal Course and those with variable nerve course were assigned into Group Variable Course. These two groups were compared in terms of demographic features, preoperative and postoperative variables, return to work and daily routines, duration of fulfilling personal needs, visual analogue scores before and one and six months after surgery, four-point verbal-rank scale scores, numbness and patient satisfaction. Results: Out of 116 patients, 70 (60.3%) had variable courses of the nerves and were assigned into the Group Variable Course and 46 (39.7%) had normal courses of the nerves and were assigned into Group Normal Course. A single stem for both nerves over spermatic cord (21.6%) and acute infero-lateral angulation of the Ilioinguinal nerve in close contact with and parallel to the Superficial Inguinal Ring fibers at exit (15.5%) were the most observed variations. Duration of surgery was significantly longer in Group Variable Course (p &lt;0.001). Numbness was also slightly higher in this group one month after surgery. This difference nearly disappeared six months after surgery. Conclusion: Although careful and gentle exploration increases the duration of surgery and early neuropraxia, identification and preservation of nerves during surgery for inguinal hernia help to achieve similar outcomes in both patients with a normal course of nerves and those with a variable course of nerves. abstract_id: PUBMED:20145966 Influence of nerve identification and the resection of nerves 'at risk' on postoperative pain in open inguinal hernia repair. Background: Surgical strategy regarding nerve identification and resection in relation to chronic postoperative pain remains controversial. A central question is whether nerves in the operation field, when identified, should be preserved or resected. In the present study, the hypotheses that the identification and consequent resection of nerves 'at risk' have no influence on postoperative pain has been tested. Methods: A single-centre study was conducted in 525 patients undergoing Lichtenstein hernioplasty. One surgeon (364 operations, Group A) consequently resected nerves 'at risk' for being injured and nine surgeons (161 operations, Group B) adhered to the general routine of nerve preservation. All cases were ambulatory surgery on anaesthetised patients and the groups were similar with regard to age, body mass index (BMI) and preoperative pain. Self-reported pain at 3 months was recorded on a 10-box visual analogue scale (VAS). The identification and resection of nerves were continuously registered. Statistical calculations were performed with Fisher's exact test and ordinal logistic regression. Results: There was no significant difference in the number of identified nerves in the two groups of patients (iliohypogastricus, P = 0.555; ilioinguinalis, P = 0.831; genital branch, P = 0.214). However, the number of resected nerves was significantly higher in Group A for the iliohypogastric nerve, P &lt; 0.001, but not for ilioinguinalis, P = 0.064, and genital branch, P = 0.362. Non-identification of the ilioinguinal nerve correlated to the highest level of self-reported postoperative pain at 3 months. Patients in Group A, who had nerves 'at risk' resected from the operation field, reported significantly less postoperative pain at 3 months, P = 0.007. Conclusion: This register study confirms the importance of nerve identification. Nerve resection strategy with the consequent removal of nerves 'at risk' gives a significantly better outcome in Lichtenstein hernioplasty. abstract_id: PUBMED:33090551 The most commonly injured nerves at surgery: A comprehensive review. Iatrogenic nerve injury during surgery is a major source of concern for both patients and surgeons. This study aimed to identify the nerves most commonly injured during surgery, along with the commonly associated operative procedures. A literature search was conducted using the PubMed database to identify nerves commonly injured during surgery, along with the surgical procedure associated with the injury. The following 11 nerves, ranked in order with their associated surgical procedures, were found to be the most commonly injured: (a) intercostobrachial nerve in axillary lymph node dissections and transaxillary breast augmentations, (b) vestibulocochlear nerve in cerebellopontine tumor resections and vestibular schwannoma removals, c) facial nerve in surgeries of the inner ear and cheek region, (d) long thoracic nerve in axillary lymph node dissections, (e) spinal accessory nerve in surgeries of the posterior triangle of the neck and cervical lymph node biopsies, (f) recurrent laryngeal nerve in thyroid surgeries, (g) genitofemoral nerve in inguinal hernia and varicocele surgeries, (h) sciatic nerve in acetabular fracture repairs and osteotomies, (i) median nerve in carpal tunnel release surgeries, (j) common fibular nerve in varicose vein and short saphenous vein surgeries, and (k) ulnar nerve in supracondylar fracture surgeries. Although the root cause of iatrogenic nerve injury differs for each nerve, there are four unifying factors that could potentially decrease this risk for all peripheral nerves. These four influencing factors include knowledge of potential anatomical variations, visual identification of at-risk nerves during the procedure, intraoperative nerve monitoring, and expertise of the surgeon. abstract_id: PUBMED:23808236 Radical operations of inguinal hernias with the temporary translocation of inguinal nerves This article presents the results of anatomic researches of the innervations of the inguinal area, performed for studying an arrangement of the main nerves of the inguinal area in relation to the operation access and the area of plasty of the posterior wall of the inguinal canal. The method of temporary translocation of inguinal nerves is developed for their preservation at radical operations of inguinal hernias. Long-term experience of surgical treatment of inguinal hernias with the temporary translocation of inguinal nerves is summarized. Answer: Inguinal hernia repair involves the identification of three main nerves in the region: the ilioinguinal nerve, the iliohypogastric nerve, and the genital branch of the genitofemoral nerve. The ability to identify these nerves during surgery is crucial as it can influence postoperative outcomes, including the risk of chronic pain and sensory disturbances. Studies have shown that the identification of all three nerves can be achieved in a significant number of cases. For instance, one study reported that the three nerves were identified in 52.6% of patients undergoing inguinal hernia surgery, with the ilioinguinal nerve and the iliohypogastric nerve being identified in 86.8% of cases, and the genital branch of the genitofemoral nerve in 52.6% of cases (PUBMED:26291254). However, previous hernia repair was associated with difficulty in identifying the ilioinguinal nerve. Anatomical variations of the inguinal nerves are common and can affect the success rate of nerve identification. One study found that the course of the nerves was consistent with anatomical texts in only 41.8% of cases, with variations in the remaining cases that could make the nerves susceptible to injury (PUBMED:12197017). Educational interventions, such as short postgraduate anatomy courses, have been shown to improve the identification rates of these nerves. After such a course, the correct identification rates for the iliohypogastric and ilioinguinal nerves increased significantly, and the overall correct identification rate for all three nerves together increased from 16% to 52% (PUBMED:22165058). A meta-analysis also highlighted the variability in nerve identification rates, with pooled prevalence rates for the ilioinguinal nerve, iliohypogastric nerve, and genital branch of the genitofemoral nerve being 84%, 71%, and 53%, respectively. The study suggested that better results are observed in single-centre studies and studies focused on nerve identification (PUBMED:36875277). In conclusion, while it is possible to identify the three main nerves of the inguinal region during hernia repair, the success rate can be influenced by factors such as anatomical variations, previous surgeries, and the level of anatomical knowledge and surgical expertise.
Instruction: Do health and medical workforce shortages explain the lower rate of rural consumers' complaints to Victoria's Health Services Commissioner? Abstracts: abstract_id: PUBMED:16313531 Do health and medical workforce shortages explain the lower rate of rural consumers' complaints to Victoria's Health Services Commissioner? Objective: To identify which explanations account for lower rural rates of complaint about health services--(i) fear of consequences where there is little choice of alternative provider; (ii) a higher complaint threshold for rural consumers; (iii) lack of access to complaint mechanisms; or (iv) reduced access to services about which to complain. Design: Ecological study incorporating consumer complaint, population and workforce distribution data sources. Setting: All health care providers practising in Victoria. Participants: De-identified records of all closed consumer complaints made to the Health Services Commissioner, Victoria, between March 1988 and April 2001 by Victorian residents (13 856 records). Main Outcome Measures: Differences in the percentage of under-representation in complaint rates in total and for each of four categories of health services providers for different size communities. Results: No consistent relationship was observed between community size and either degree of under-representation of complaints against any category of provider, or the proportion of serious or substantial complaints. Rural under-representation was highest (41%) for dentists, the provider category with the lowest proportion working in rural areas (17%), and lowest (18%) for hospitals, with the highest representation in rural areas (28% of beds). More rural complaints were about access issues (10.7% rural and 8.4% metropolitan). Conclusions: Reduced opportunity to use health services due to rural health and medical workforce shortages was the best-supported explanation for the lower rural complaint rate. Workforce shortages impact on the quality of rural health services and on residents' opportunities to improve their health status. abstract_id: PUBMED:8073159 Role of the Health Services Commissioner in the state of Victoria in Australia In 1988 a Health Commissioner (HSC) was established in Victoria, Australia. The role of the HSC is to apply the concept of social justice in the area of health care issues. The HSC is independent of the normal ministerial machinery of government and reports directly to the Victorian Parliament. The HSC is able to investigate complaints about all aspects of health care provision and attempts to settle disputes by conciliation rather than by recourse to the more cumbersome and expensive option of litigation. The HSC is also empowered to make recommendations which would reduce complaints in the future. Significant progress has been made and now similar schemes based on the successful Victorian model are being established elsewhere. At the same time financial constraints in Victoria are preventing the HSC from achieving its full potential as an agency to promote quality health care. abstract_id: PUBMED:16255710 How do rural consumers contribute to quality assurance of health services? Objective: To investigate the reasons for complaint or non-complaint by rural consumers of health services. Design: Qualitative study using focus group discussion of hypothetical scenarios. Setting: Selected rural communities in the Loddon-Mallee region of north-western Victoria. Participants: Sixty volunteer participants in eight focus groups recruited through advertising. Main Outcome Measure: Issues and themes concerning circumstances leading to, and factors inhibiting, complaints about a health service and awareness of complaints mechanisms. Results: Compared with residents of larger towns, those of small communities were more likely to report they would complain to the local provider, whereas those in larger towns were more likely to mention Hospital Boards or the Commissioner. Deterrents to making complaints included the lack of services, scepticism about the role of complaints in bringing about change and an attitude that it was more appropriate to try to fix the problem than complain about it. Lack of awareness of appropriate complaint mechanisms which feed into quality assurance processes was also identified. Conclusions: Previously documented lower complaint rates from rural consumers can not be taken to mean greater satisfaction with health services. abstract_id: PUBMED:31502344 Is rural mental health workforce a policy imperative? An available and effective rural mental health workforce is critical to the provision of contemporary mental health care. During the last 5 years new mental health plans and strategies have been released every state and territory of Australia. This policy analysis examines the extent to which workforce, and particularly rural workforce issues are considered in these policies. The analysis revealed that rural workforce issues receive scant attention in state and territory mental health plans. Rural Australians comprise 28% of the total population, yet rural workforce issues are canvased, on average only 6% of the time general workforce issues are addressed. National mental health workforce plans do focus on the rural workforce, but these are not referenced in the state or territory strategies or workforce plans. Given the rural mental health workforce shortages, and consumer challenges in accessing rural mental health services, more planning and consideration to supporting and developing a rural workforce appears warranted. abstract_id: PUBMED:32796738 Context Matters: Findings from a Qualitative Study Exploring Service and Place Factors Influencing the Recruitment and Retention of Allied Health Professionals in Rural Australian Public Health Services. Chronic health workforce shortages significantly contribute to unmet health care needs in rural and remote communities. Of particular and growing concern are shortages of allied health professionals (AHPs). This study explored the contextual factors impacting the recruitment and retention of AHPs in rural Australia. A qualitative approach using a constructivist-interpretivist methodology was taken. Semi-structured interviews (n = 74) with executive staff, allied health (AH) managers and newly recruited AHPs working in two rural public health services in Victoria, Australia were conducted. Data was coded and categorised inductively and analysed thematically. The findings suggest that to support a stable and sustainable AH workforce, rural public sector health services need to be more efficient, strategic and visionary. This means ensuring that policies and procedures are equitable and accessible, processes are effective, and action is taken to develop local programs, opportunities and supports that allow AH staff to thrive and grow in place at all grade levels and life stages. This study reinforces the need for a whole-of-community approach to effectively support individual AH workers and their family members in adjusting to a new place and developing a sense of belonging in place. The recommendations arising from this study are likely to have utility for other high-income countries, particularly in guiding AH recruitment and retention strategies in rural public sector health services. Recommendations relating to community/place will likely benefit broader rural health workforce initiatives. abstract_id: PUBMED:18482116 Review of small rural health services in Victoria: how does the nursing-medical division of labour affect access to emergency care? Aims: This paper is based on a review of the Australian and International literature relating to the nursing-medical division of labour. It also explores how the division of labour affects patient access to emergency care in small rural health services in Victoria, Australia. Background: The paper describes the future Australian health workforce and the implications for rural Victoria. The concept of division of labour and how it relates to nursing and medicine is critically reviewed. Two forms of division of labour emerge - traditional and negotiated division of labour. Key themes are drawn from the literature that describes the impact of a traditional form of division of labour in a rural context. Methods: This paper is based on a review of the Australian and international literature, including grey literature, on the subject of rural emergency services, professional boundaries and roles, division of labour, professional relationships and power and the Australian health workforce. Results: In Australia, the contracting workforce means that traditional divisions of labour between health professionals cannot be sustained without reducing access to emergency care in rural Victoria. A traditional division of labour results in rural health services that are vulnerable to slight shifts in the medical workforce, unsafe services and recruitment and retention problems. A negotiated form of division of labour provides a practical alternative. Conclusion: A division of labour that is negotiated between doctors and nurses and supported by a legal and clinical governance framework, is needed to support rural emergency services. The published evidence suggests that this situation currently does not exist in Victoria. Strategies are offered for creating and supporting a negotiated division of labour. Relevance To Clinical Practice: This paper offers some strategies for establishing a negotiated division of labour between doctors and nurses in rural emergency care. abstract_id: PUBMED:18844530 Enhancing scope of practice for the second level nurse: a change process to meet growing demand for rural health services. This article reports on Australian research, funded by the Victorian Department of Human Services Nurse Policy Branch and conducted in two rural Victorian health services. Predicted workforce shortages in the health professions and accompanying growth in demand for services have set the scene for exploring new ways of working. Focused on Victoria's second level nurse, the division two registered or enrolled nurse, this research explored opportunities to enhance scope of practice as one mechanism to meet anticipated workforce demand and expectations. The study revealed a lack of collective wisdom as to how scope of practice may be enhanced, and an accompanying lack of organisational readiness to facilitate the broad changes needed to implement a different practice role. In response to these findings, the SCOPED model was developed as a tool to assist health services explore and implement an enhanced scope of practice for the second level nurse. abstract_id: PUBMED:31219003 Connecting Rural and Global Health Education for Workforce Development. Healthcare workforce shortages are continuing to increase worldwide with more profound deficits seen in rural communities in both developed and developing countries. These deficits impede progress towards heath equity and global health initiatives including the 2030 Sustainable Development Goals. Medical training has supported the idea that having a rural background influences future practice in rural settings. With a majority of global health experiences taking place in rural settings, there is an opportunity for health profession programs to take advantage of expanding global health education to encourage future practice in rural settings and address inequalities in workforce distribution. abstract_id: PUBMED:21280971 Creating a sustainable and effective mental health workforce for Gippsland, Victoria: solutions and directions for strategic planning. Introduction: The Gippsland region of Victoria, in common with other Australian rural and regional areas, is experiencing a shortage of qualified mental health professionals. Attracting mental health professionals to work in such areas is a major concern for service providers, policy-makers and rural/regional communities. Previous studies have focused on identifying factors contributing to the maldistribution of the health workforce, principally medical practitioners. Recent reviews have highlighted the strengths and weaknesses of evidence for the effectiveness of initiatives designed to address workforce shortages in underserved locations. The reported study sought the views of mental health organisation leaders from Gippsland to identify current approaches and potential solutions to the challenges of workforce recruitment, retention and training. A key goal of the study was to inform a strategic regional approach to the development of a more sustainable and effective mental health workforce. Methods: Investigators conducted semi-structured individual interviews with 26 administrators, managers and senior clinicians from public and private sector mental health organisations throughout Gippsland. Thematic content analysis of the transcribed interviews identified current approaches and potential solutions to the recruiting, retaining and training problems in the region. Results: The study categorised solutions as focusing on factors external or internal to organisations. Solutions external to organisations included efforts to enhance the pool of available workers, improve intra-sectoral collaboration and cross-sectoral linkages, make funding more flexible, and to institute a contemporary curriculum and take innovative pedagogical approaches to training. Internal solutions included the need for strong leadership and quality organisational culture, flexible and adaptable approaches to meeting individual worker and community needs, promoting the organisation and local area and adopting models of care. Informants indicated that individual organisations are limited in their capacity to address recruitment, retention and training issues and highlighted the potential benefits of a regional mental health workforce recruitment, retention and training strategy. Conclusion: This study provides a unique insight into the creative approaches adopted by managers in Gippsland to recruit and retain mental health workers. It brings a new perspective to the literature due to the reflections of managers, grounded as they are in day-to-day work. abstract_id: PUBMED:22356526 Rural mental health: implications for telepsychiatry in clinical service, workforce development, and organizational capacity. In Hawai'i, rural residents suffer disproportionately from poor health and mental health outcomes. Hawai'i's island geography makes rural health service disparities especially compelling. Physician workforce shortages are projected to increase, despite 30 years of programs aimed at recruiting physicians to rural areas. Telepsychiatry has been shown to be a feasible way to provide a variety of health services to individuals living in rural areas with limited access to healthcare. The University of Hawai'i Rural Health Collaboration (UHRHC) was established by the Department of Psychiatry to address the need for workforce development and rural access to mental health services across the State of Hawai'i by using telepsychiatry. Partnerships with community health clinics have been formed to provide patient care and consultation-liaison services through telepsychiatry technology. In addition, UHRHC focuses on workforce development in its residency training curriculum by utilizing a service-learning approach to rural mental health. Evaluation of these efforts is currently underway, with preliminary evidence suggesting that UHRHC is a promising strategy to increase access to critical mental health services and reduce health disparities in rural Hawai'i. Answer: Yes, health and medical workforce shortages appear to be a significant factor explaining the lower rate of rural consumers' complaints to Victoria's Health Services Commissioner. The ecological study incorporating consumer complaint, population, and workforce distribution data sources found that reduced opportunity to use health services due to rural health and medical workforce shortages was the best-supported explanation for the lower rural complaint rate (PUBMED:16313531). Workforce shortages impact the quality of rural health services and residents' opportunities to improve their health status, which likely contributes to fewer complaints being lodged by rural consumers. Additionally, the study noted that more rural complaints were about access issues, which further supports the connection between workforce shortages and complaint rates.
Instruction: Is there a link between pharmaceutical consumption and improved health in OECD countries? Abstracts: abstract_id: PUBMED:11151308 Is there a link between pharmaceutical consumption and improved health in OECD countries? Objective: The objective of this study was to determine whether there is a measurable health return associated with high pharmaceutical consumption in a sample of developed countries. Design And Setting: The study focused on the production of health, disaggregating healthcare into pharmaceuticals and other healthcare. We controlled for wealth and lifestyle factors. The sample consisted of 21 Organization for Economic Cooperation and Development (OECD) countries and the measure of pharmaceutical consumption used was the best available for a large number of OECD countries. We proxied health with life expectancies at birth, at age 40, and age 60. Main Outcome Measures And Results: Pharmaceutical consumption had a positive and statistically significant effect on remaining life expectancy at age 40 and 60 years (significant at the 0.10 and 0.05 level, respectively, based on a 2-tailed test), although the effect on life expectancy at birth was small and not significant. Sensitivity analysis showed that these results were generally robust. A significant effect of pharmaceutical consumption on infant mortality was not demonstrated and results of the infant mortality model were very sensitive to small changes. Conclusions: Increased pharmaceutical consumption helps improve mortality outcomes, especially for those at middle age and older. abstract_id: PUBMED:15660475 The effects of pharmaceutical consumption and obesity on the quality of life in the organization of economic cooperation and development (OECD) countries. Objective: The objective of this study is to validate our earlier work on life expectancy with more recent data and, more importantly, to extend it to examine quality of life, not only the length of life. Design And Setting: The analysis focuses on the production of health, disaggregating healthcare into pharmaceutical consumption and other healthcare. Going beyond our earlier work, measures of health include life expectancy and disability-adjusted life expectancy (DALE). Also, we consider the impact of obesity. The sample was 18 Organization of Economic Cooperation and Development (OECD) countries. The measure of pharmaceutical consumption is the best that is available for these countries. Main Outcome Measures And Results: Confirming our earlier work, pharmaceutical consumption has a positive and statistically significant effect on life expectancy at 40 and 60 years (significant at the 0.05 level, based on a two-tailed test). The effects are slightly larger than in the earlier work. Turning to DALE, pharmaceutical consumption has a positive and statistically significant effect at birth and at 60 years (significant at the 0.05 and 0.01 levels, respectively), based on a two-tailed test. The effects on DALE are larger than the effects on life expectancy. Conclusions: Increased pharmaceutical consumption helps improve quality of life, as well as life expectancy. abstract_id: PUBMED:30958724 Pharmaceutical expenditure and total health-care expenditure in OECD countries and China: bidirectional Granger causality on the basis of health level. Background: Previous studies have ignored the bidirectional causality and the heterogeneity of health level. This study explored the bidirectional causality between pharmaceutical expenditure (PE) and total health-care expenditure (Total HCE) considering the health level for effective health investment and national health improvement.Methods: Based on the panel data on PE and Total HCE in 32 OECD countries and China, the Granger causality test was applied. Countries were divided into low- and high-health-level (LH and HH) groups according to their life expectancies.Results: A 1% increase in the growth rate of per capita PE boosted a 0.11% increase in that of per capita Total HCE of the following year in HH group, whereas no such causality existed in LH group. A 1% increase in the growth rate of per capita Total HCE boosted a 0.46% increase in that of the per capita PE of the following year in LH group, whereas no such causality existed in HH group.Conclusion: High-health-level countries should continue to increase the health investment by promoting rational PE growth such as expanding insurance coverage for innovative medicine for health improvement. Low-health-level countries should take optimizing policy to increase health investment in pharmaceutical for medicine accessibility and national health. abstract_id: PUBMED:14997690 Health expenditure trends in OECD countries, 1990-2001. This article presents data on health care spending for 30 OECD countries from OECD Health Data 2003, the latest edition of OECD's annual data collection on health systems across industrialized countries. OECD data show health care expenditures as a proportion of gross domestic product at an all-time high, due to both increased expenditures and overall economic slow-down. The article discusses similarities and differences across countries in how health care expenditures are funded and how the health care dollar is spent among types of services. abstract_id: PUBMED:28819772 Convergence and determinants of health expenditures in OECD countries. This study examines the trend and determinants of health expenditures in OECD countries over the 1975-2004 period. Based on recent developments in the economic growth literature we propose and test the hypothesis that health care expenditures in countries of similar economic development level may converge. We hypothesise that the main drivers for growth in health care costs include: aging population, technological progress and health insurance. The results reveal no evidence that health expenditures among OECD countries converge. Nevertheless, there is evidence of convergence among three sub-groups of countries. We found that the main driver of health expenditure is technological progress. Our results also suggest that health care is a (national) necessity, not a luxury good as some other studies in this field have found. abstract_id: PUBMED:16459005 The pervasiveness of pharmaceutical expenditure inertia in the OECD countries. This paper constructs and estimates an economic model for testing statistically the strength of possible 'expenditure inertia' as a plausible reason for rising drug expenditures of the Organization for Economic Cooperation and Development (OECD) countries. The ethical drugs sector in the OECD health care systems is increasingly targeted as the major culprit in the rising cost. Using multiple regression analysis, and the maximum likelihood estimation method, the data of each country (taken from OECD Health Data, 1997) were first tested for functional form optimality with the Box-Cox power family transformations model. Drug expenditure elasticities, at data means, were computed using each country's optimal regression model estimates. The results indicate that the traditionally fitted a priori limited functional form models (e.g., linear, log-log) are not globally consistent with data across countries. The effect of a one-period lagged real per-capita drug expenditure (capturing inertia or habit persistence) on current period real per-capita prescription expenditure is statistically significant in most countries. Pharmaceutical demands are inelastic, and tend to behave like a necessity, as expected. Since the significant effects of economic, demographic, and other drivers of high drug spending differ across countries, country-specific implications and policy suggestions for cost controls ought to differ. abstract_id: PUBMED:29978727 How Does Stronger Protection of Intellectual Property Rights Affect National Pharmaceutical Expenditure? An Analysis of OECD Countries. Intellectual property rights (IPR) protection for pharmaceuticals has been comprehensive and strict since the establishment of the World Trade Organization in 1995 and the subsequent implementation of the TRIPS Agreement. This study investigated the relationship between the level of IPR and national pharmaceutical expenditure using panel data of 22 OECD countries from 1970 to 2009. The patent index was used to measure the level of national protection for IPR along with other covariates: GDP per capita, the percentage of population aged over 65, number of doctors, proportion of public financing among total pharmaceutical expenditure, under-5 mortality, price index, and period indicators. The regression analysis results showed that the level of IPR protection was significantly correlated with pharmaceutical spending even after controlling for various factors that affect pharmaceutical expenditure. The results were consistent in OLS regression and GLS regression. However, the effect of IPR was stronger and more significant in countries with a relatively small-sized pharmaceutical market than in those with big market. Many developed countries incur a financial burden due to rapidly growing pharmaceutical expenditure; therefore, the results of this study present the possibility that stronger IPR would produce welfare loss in developed countries. abstract_id: PUBMED:34641978 Investigation the determinants of pharmaceutical expenditure share of GDP in Iran and selected OECD countries. Aim: This study estimated the GDP share of pharmaceuticals in Iran based on the drivers of pharmaceutical expenditure and compared it with that of 31 members of the Organisation for Economic Cooperation and Development (OECD). Subject And Methods: The factors contributing to pharmaceutical expenditure were identified through literature review and studied by 8 experts to classify the factors. Then, using the panel data method, a model was built to estimate the GDP share of pharmaceutical expenditure based on the extracted factors of the selected countries in Iran's model. To explain the observed differences, several regression analyses were performed based on cross-sectional data. The analyses were performed using EVIEWS software, version 10. Results: The explanatory variables for the selected countries in the panel model (R2 = 0.98) were specified. Government health expenditure (β = 0.1432), the share of generic drugs (β = - 0.0143), gross domestic product (GDP) per capita (β = - 0.0058) and the rate of disability-adjusted life-years (DALY) (β = 0.0028) contributed most to pharmaceutical expenditure. In comparison, in the Iranian estimation model (R2 = 0.84), government health expenditure (β = 0.0536) and the share of generic drugs (β = 0.0369) had a significant impact on pharmaceutical expenditure. In the estimation model with more estimators for Iran (R2 = 0.99), government health expenditure (β = 0.1694), disease prevalence (β = 0.0537), the share of generic drugs (β = 0.0102), the DALY rate (β = 0.0039), GDP per capita (β = - 0.0033), and the drug price index (β = 0.0007) contribute most to pharmaceutical expenditure. Conclusion: In the models of the study, factors related to the structure of the healthcare system and the pharmaceutical system contributed most to pharmaceutical expenditure as a share of GDP. Moreover, disease profiles show its predictive role in the second model for Iran. abstract_id: PUBMED:11481789 Health expenditure trends in OECD countries, 1970-1997. This article provides an overview of current trends in health expenditures in 29 OECD countries and recent revisions of OECD health accounts. U.S. health expenditures are compared with those of other OECD countries. The interactions of cost-containment measures with changes in the public-private mix of financing and in the composition of health care spending are discussed. abstract_id: PUBMED:25662194 Financial impact of the GFC: health care spending across the OECD. Since the onset of the global financial crisis (GFC), health spending has slowed markedly or fallen in many OECD countries after years of continuous growth. However, health spending patterns across the 34 countries of the OECD have been affected to varying degrees. This article examines in more detail the observed downturn in health expenditure growth, analysing which countries and which sectors of health spending have been most affected. In addition, using more recent preliminary data for a subset of countries, this article tries to shed light on the prospects for health spending trends. Given that public sources account for around three-quarters of total spending on health on average across the OECD, and, in an overall context of managing public deficits, the article focuses on the specific areas of public spending that have been most affected. This study also tries to link the observed trends with some of the main policy measures and instruments put in place by countries. The investigation finds that while nearly all OECD countries have seen health spending growth decrease since 2009, there is wide variation as to the extent of the slowdown, with some countries outside of Europe continuing to see significant growth in health spending. While all sectors of spending appear to have been affected, initial analysis appears to show the greatest decreases has been experienced in pharmaceutical spending and in areas of public health and prevention. Answer: Yes, there is evidence of a link between pharmaceutical consumption and improved health in OECD countries. Studies have found that pharmaceutical consumption has a positive and statistically significant effect on remaining life expectancy at age 40 and 60 years, although the effect on life expectancy at birth was small and not significant (PUBMED:11151308). Further research confirmed these findings and extended them to show that pharmaceutical consumption also has a positive and statistically significant effect on disability-adjusted life expectancy (DALE) at birth and at 60 years, indicating that increased pharmaceutical consumption helps improve quality of life, as well as life expectancy (PUBMED:15660475). Additionally, the bidirectional causality between pharmaceutical expenditure and total health-care expenditure was explored, considering the health level. It was found that in high-health-level countries, a growth in per capita pharmaceutical expenditure could lead to an increase in total health-care expenditure in the following year, suggesting that rational pharmaceutical expenditure growth, such as expanding insurance coverage for innovative medicine, could promote health improvement (PUBMED:30958724). However, it is important to note that the relationship between pharmaceutical consumption and health outcomes can be complex and may vary by country, as indicated by the heterogeneity in the determinants of pharmaceutical expenditure among different OECD countries (PUBMED:34641978). Additionally, the financial impact of pharmaceutical expenditure and the strength of intellectual property rights protection can also affect national pharmaceutical spending, with implications for both health outcomes and economic welfare (PUBMED:29978727). Overall, the evidence suggests that there is a measurable health return associated with high pharmaceutical consumption in developed countries, particularly in terms of life expectancy and quality of life for middle-aged and older populations.
Instruction: Can an endocytoscope system (ECS) predict histology in neoplastic lesions? Abstracts: abstract_id: PUBMED:17554643 Can an endocytoscope system (ECS) predict histology in neoplastic lesions? Background And Study Aims: An endocytoscope system (ECS) has recently been developed with the possibility of super-high magnification of gastrointestinal mucosa, thus allowing in vivo imaging of living cells. The aim of the present study was to assess the potential of ECS in the prediction of histology in both normal gastrointestinal mucosa and neoplastic lesions. Patients And Methods: In total, 76 patients (57 men, 19 women; age range 37-86 years) with neoplastic lesions in the esophagus, stomach, or colon were enrolled into the study and underwent esophagogastroduodenoscopy or colonoscopy. After staining with 1% methylene blue, the mucosa was examined with the ECS probe (x 450 and x 1100 magnification), and video sequences were recorded on video disk. Biopsies from the examined areas were taken for histology and served as the gold standard. The endocytoscope video sequences were evaluated by two blinded pathologists. Finally the results were compared with those resulting from the evaluation of an experienced endoscopist who was aware of the macroscopic endoscopic pictures and the endocytoscope image results. Results: A total of 25 patients with esophageal lesions, 28 patients with colonic lesions, and 23 patients with gastric lesions were examined. The sensitivity and specificity for the evaluation of the blinded pathologists was 81% and 100%, respectively, in the esophagus, 56% and 89% in the stomach, and 79% and 90% in the colon. If an endoscopist evaluated the endocytoscopic pictures in combination with the macroscopic endoscopic images sensitivity and specificity increased significantly. Conclusions: First experiences with ECS show good sensitivity rates even by blinded assessment for esophageal and colonic lesions. Sensitivity for neoplastic lesions in the stomach is lower because of gastric mucous secretion. Combining the endoscopic and cytoscopic appearance of the lesion may further enhance the diagnostic value of the method. abstract_id: PUBMED:34304331 Multicenter prospective in vivo study of an endocytoscope system (ECS) for superficial esophageal cancer. Background: Endocytoscope systems (ECS) can visualize cellular nuclei of the mucosa of the gastrointestinal tract and are predicted to provide real-time microscopic diagnosis. However, their practical diagnostic performance remains unclear. Therefore, we conducted a multicenter prospective study to evaluate the visualization of superficial esophageal neoplasm in vivo using an ECS, and its diagnostic capability. Methods: The study target was histologically confirmed squamous cell carcinoma (SCC) and high-grade intraepithelial neoplasia (HGIN). An integrated ECS was used to obtain ECS images. In each patient, three ECS images of cancerous and corresponding noncancerous regions were selected for evaluation. A pathological review board of five certified pathologists made the final diagnosis of the images. The primary endpoint was the sensitivity of ECS diagnosis by pathologists. Results: ECS images of 68 patients were assessed: 42 lesions were mucosal SCC, 13 were submucosal SCC, and 13 were HGIN. The rate of assessable images was 96% (95% CI 87.6-99.1). The sensitivity of ECS diagnosis by pathologists was 88% (95% CI 77.2-94.5). Conclusions: ECS can provide high-quality images of cancerous lesions and a high diagnostic accuracy by pathologists, and could be useful for real-time endoscopic histological diagnosis of SCC and HGIN. Trial Registration: The UMIN Clinical Trials Registry Identification Number: 000004218. abstract_id: PUBMED:20976836 Real-time histology with the endocytoscope. Endoscopic Imaging has progressed tremendously over the last few decades. Novel imaging technologies such as high-resolution and high-magnification white light endoscopy, narrow band imaging, optimal band imaging, autoflourescence imaging and optical coherence tomography not only aid the endoscopist in detecting malignant or pre-malignant lesions but also assist in predicting histology. Recently, the introduction of Endocytoscopy (EC) and Confocal Endomicroscopy has taken us into a new realm of diagnostic endoscopy. With the ability to magnify up to 1000 ×, cellular structures can be visualized in real-time. This advance in technology could potentially lead to a paradigm shift negating the need to obtain biopsies. EC is, however, still in the early stages of development and further research needs to be carried out before it can be accepted as standard practice. This review will focus on the diagnostic utility of the Endocytoscope. abstract_id: PUBMED:38164300 Accuracy of Proton Magnetic Resonance Spectroscopy in Distinguishing Neoplastic From Non-neoplastic Brain Lesions. Objective: To evaluate the advantage of a combination of magnetic resonance spectroscopy (MRS) and magnetic resonance imaging (MRI) over MRI in the diagnosis of intracranial mass lesions to differentiate between neoplastic and non-neoplastic lesions and compare them with histopathology and clinical data as gold standard. Methodology: This was a descriptive cross-sectional study conducted at the Department of Radiology, Apollo Hospital located in Jubilee Hills, Hyderabad. In the present study, a total of 60 patients of all ages with brain masses found through MRI with positive clinical symptoms, regardless of gender, were included. We also involved patients with non-brain cancers suspected of spreading to the brain. Result: MRI identified 63% of lesions as neoplastic and 37% as non-neoplastic. Combining MRI and MRS increased accuracy, with 65% of the lesions diagnosed as neoplastic and 35% as non-neoplastic, demonstrating that MRS significantly enhances diagnostic precision compared to MRI alone. Conclusion: This study aimed to see how combining MRI and MRS helps diagnose brain masses, comparing with histopathology as the gold standard. MRI alone identified 63% as neoplastic, but MRI with MRS improved accuracy (65%). MRI sensitivity was 87.80%, but combined with MRS, it increased to 92.68%. Thus, the study concluded that the combination of MRI and MRS is more accurate than MRI alone. abstract_id: PUBMED:35118252 Thymic lesions of the paediatric age group: a comprehensive review of non-neoplastic and neoplastic etiologies. Thymus is an important part of the immune system of the paediatric age group. Thymic lesions are rare causes of anterior mediastinal pathology. Their occurrence in children is rarer, nevertheless, knowledge about their pathologies help in clinching the correct diagnosis. All lesions both non-neoplastic as well as neoplastic are found in children, however, they differ from adults not only in the clinical presentation but also in the pathogenesis and natural history of the disease. Thymic hyperplasia may not actually be true if age is taken into consideration. Similarly, a type B thymoma may be mistakenly diagnosed as a lymphoblastic lymphoma especially on small biopsies. This review has been carried out after extensive search of literature using the Google and Pubmed search engines. An attempt has been made to comprehensively cover all aspects of thymic pathology of the paediatric age group having historical significance as well as the recent updates. abstract_id: PUBMED:26837288 Neoplastic Lesions of the Cervix. This review presents a discussion of the gross and microscopic features, diagnosis, differential diagnosis, and prognosis of neoplastic lesions of the cervix. Biomarkers are discussed for each entity presented - cervical intraepithelial neoplasia, squamous carcinoma, glandular neoplasms, adenocarcinoma in situ, adenosquamous carcinoma, and others. abstract_id: PUBMED:27061229 Role of brush cytology in the diagnosis of neoplastic and non-neoplastic upper gastrointestinal lesions. Objective: To evaluate the utility of brush cytology (BC) in diagnosing non-neoplastic and neoplastic lesions of the upper gastrointestinal (GI) tract. Methods: The study comprised 106 brushings from 104 patients with clinically suspected lesions in the upper GI tract. Diagnoses were rendered on correlating brush cytological findings with clinicoradiological findings and compared with endoscopic biopsies (EB). An attempt was made to grade oesophageal squamous cell carcinomas (SCC) and to subcategorise gastric adenocarcinomas (AC). The sensitivity, specificity and predictive values of BC were calculated with respect to clinico-radiological outcome. Results: Malignant and benign lesions on BC comprised 74.5% and 25.5%, respectively, and on histology, 64.2% and 28.3%, respectively. Twenty-one of 32 (65.6%) gastric ACs were accurately sub-typed (16 intestinal and five diffuse) and 25/36 (69.4%) oesophageal SCCs (7, 15 and three well, moderately and poorly differentiated, respectively) were accurately graded. On correlation with the final clinico-radiological diagnosis, there was one false-positive and no false-negative cases, respectively, on BC; three false-negative and no false-positive cases, respectively, on EB; eight EB were non-representative in which a diagnosis was not rendered, out of which seven were malignant. Thus, BC had a sensitivity and specificity of 100% and 96.4%, respectively, with a positive predictive value (PPV), negative predictive value (NPV) and diagnostic accuracy (DA) of 98.7%, 100% and 99.1% (P ˂ 0.05), respectively. BC was superior to EB in detecting Candida species and EB in detecting Helicobacter pylori. Conclusions: BC is a useful, routinely applicable and reliable test for diagnosing upper GI lesions. abstract_id: PUBMED:26602569 Non-neoplastic pancreatic lesions that may mimic malignancy. The widespread use of abdominal ultrasound (US), computed tomography (CT), and magnetic resonance imaging (MRI) has resulted in an increased identification of asymptomatic pancreatic lesions. Preoperative diagnoses of pancreatic lesions can be difficult. Solid and cystic lesions and anatomic variants of normal can all mimic tumor clinically and radiologically. Newer imaging modalities have increased the likelihood of the accurate diagnosis of non-neoplastic pancreatic disease, however, despite the many advances; it still remains a challenge to differentiate rarer non-neoplastic entities and inflammatory masses from adenocarcinoma, preoperatively. Adding to the challenge is the fact that a variety of inflammatory, solid and cystic non-neoplastic lesions have significant clinical and radiological overlap with malignancies. About 5-10% of pancreatectomies performed with the primary clinical diagnosis of pancreatic carcinoma are later proved to be essentially non-neoplastic lesions. It is vital to include these non-neoplastic entities in the differential diagnosis while working up abnormal clinical and radiological pancreatic findings because it may drastically alter therapeutic options for the patients. The significance of recognizing these lesions preoperatively is to help to guide the clinical decision-making process and the avoidance of an unnecessary pancreatectomy. Examples of such entities include chronic pancreatitis, sarcoidosis, intrapancreatic accessory spleen (IPAS), lymphoid hyperplasia, lipomatous pseudohypertrophy (LPH), lymphangioma, lymphoepithelial cyst (LEC) and endometriosis. abstract_id: PUBMED:31360281 The spectrum of non- neoplastic skin lesions in Ibadan, Nigeria: a histopathologic study. Introduction: Non-neoplastic skin lesions constitute the majority of skin diseases. There is a paucity of histopathology studies of non-neoplastic skin diseases in Nigeria and the West Africa sub-region in general. This is because the dermato-pathology sub-specialty is poorly developed. Therefore, the main aim of this study is to determine the spectrum of histologically diagnosed non-neoplastic skin lesions in Ibadan, Nigeria. Methods: This is a retrospective study. All non-neoplastic skin lesions diagnosed in the Department of Pathology, University College Hospital, Ibadan over a five year period. (January 2006 to December 2010) was reviewed. The lesions were classified into eight groups according to the International Classification of Diseases (ICD)-10 of skin and subcutaneous disorders. The main classes include Dermatitis/Eczema, Papulosquamous disorders, Infectious disorders, Connective tissue diseases, Bullous disorders, Naevi/Developmental lesions, Granulomatous lesions, keratinizing disorders and other categories/Miscellaneous group. Results: A total of 209 non-neoplastic skin lesions comprised 1.3% of all surgical pathology specimen received within the study period. The modal age group was 20-29. The Dermatitis/Eczema group has the highest frequency of 87 cases representing 41.6% of cases, papulosquamous disorders 39 (18.7%), infectious disorders 37 (17.7%), bullous disorders 11 (5.3%) and connective tissue disorder 9 (4.3%). Chronic non-specific dermatitis was the commonest specific diagnosis comprising 60 cases (28.7%) of all the skin diseases. The other common specific skin lesions were lichen planus/lichenoid dermatitis 27(12.9% of 209 cases), verruca vulgaris 25 (12% of 209 cases). Conclusion: The number of histologically diagnosed non-neoplastic skin lesions is relatively small. There is a very wide spectrum of non-neoplastic skin lesions diagnosed within this period. There is a need for a specific diagnosis considering the high frequency of chronic non-specific dermatitis. abstract_id: PUBMED:36579207 Role of MRSI Major Metabolite Ratios in Differentiating Between Intracerebral Ring-Enhancing Neoplastic and Non-Neoplastic Lesions, High-Grade Gliomas and Metastases, and High-Grade and Low-Grade Gliomas. Introduction The purpose of this study was to determine whether multi-voxel magnetic resonance spectroscopic imaging (MRSI) can differentiate between intracranial neoplastic and non-neoplastic and between neoplastic ring-enhancing lesions (RELs) based on differences in major metabolite ratios in their enhancing and peri-enhancing regions. Methods In a prospective observational study involving patients with an intracerebral RELs, MRSI using the two-dimensional multi-voxel point-resolved spectroscopy (PRESS) chemical-shift imaging (CSI) sequence at an echo time (TE) of 135 milliseconds (ms) was performed on a total of 38 patients. Of 38 lesions, 23 (60.5%) were neoplastic and 15 (39.5%) were non-neoplastic. Of the 23 neoplastic lesions, 12 were high-grade gliomas (HGGs), seven were metastases, and four were low-grade gliomas (LGGs). Major metabolite ratios, i.e., choline-to-N-acetylaspartate (Cho/NAA), choline-to-creatine (Cho/Cr), and N-acetylaspartate-to-creatine (NAA/Cr), were calculated in the enhancing and peri-enhancing regions of the RELs. A Mann-Whitney U test was run to determine differences in metabolite ratios at different voxel locations between neoplastic versus non-neoplastic lesions, HGGs versus metastatic lesions, and HGGs versus LGGs. A receiver operating characteristic (ROC) curve analysis was performed to derive cut-off values for Cho/NAA and NAA/Cr ratios in the enhancing and peri-enhancing portions of the lesions. Results The sensitivity, specificity, positive predictive value, and negative predictive value for categorizing an REL in either neoplastic or non-neoplastic lesions using MRSI with magnetic resonance imaging (MRI) were 91.3%, 73.3%, 84%, and 84.6%, respectively. There was a statistically significant difference between Cho/NAA (p = 0.006) and NAA/Cr (p = 0.021) ratios in the enhancing region of 23 neoplastic and 15 non-neoplastic lesions. In the voxel placed in the peri-enhancing portions, the differences between Cho/Cr ratios were just significant (p = 0.047). A cut-off score of Cho/NAA &gt;1.67 in the enhancing regions gave a sensitivity of 82.6% and specificity of 60%. The cut-off score for NAA/Cr of &lt;0.80 in the enhancing regions showed a sensitivity and specificity of 60.9% and 86.7%, respectively. Of the 23 neoplastic lesions, 12 HGGs and seven metastases were differentiated using the Cho/NAA ratio in the peri-enhancing region with a cut-off value of 1.21, sensitivity of 100%, and specificity of 85%. A cut-off value of Cho/Cr ≥1.45 in the peri-enhancing regions showed a sensitivity of 83% and a specificity of 71.4%. For discriminating between 12 HGGs and four LGGs both from the 23 neoplastic REL group, using the cut-off score for Cho/NAA in the enhancing portions ≥4.16 showed a sensitivity of 0.75 and specificity of 100%. In the peri-enhancing regions, a cut-off score of ≥2.07 provided a sensitivity and specificity of 83% and 100%, respectively. Conclusion Conventional MRI sometimes poses a diagnostic challenge in distinguishing between neoplastic and non-neoplastic lesions and other neoplastic RELs. Interpreting MRSI findings by comparing the major metabolite ratios in the enhancing and peri-enhancing regions of these lesions may enable distinction between the two. Answer: Yes, an endocytoscope system (ECS) can predict histology in neoplastic lesions. Studies have shown that ECS, which allows for super-high magnification of gastrointestinal mucosa and in vivo imaging of living cells, can be used to assess the histology of both normal and neoplastic gastrointestinal mucosa. In a study involving 76 patients with neoplastic lesions in the esophagus, stomach, or colon, ECS demonstrated good sensitivity rates for esophageal and colonic lesions even when evaluated by blinded pathologists. The sensitivity and specificity were 81% and 100%, respectively, for esophageal lesions, and 79% and 90% for colonic lesions. However, the sensitivity for gastric lesions was lower, attributed to gastric mucous secretion. When an experienced endoscopist evaluated the endocytoscopic pictures in combination with macroscopic endoscopic images, the sensitivity and specificity increased significantly (PUBMED:17554643). Furthermore, a multicenter prospective study evaluating the visualization of superficial esophageal neoplasm in vivo using ECS found that the sensitivity of ECS diagnosis by pathologists was 88%. This indicates that ECS can provide high-quality images of cancerous lesions and achieve high diagnostic accuracy, suggesting its usefulness for real-time endoscopic histological diagnosis of squamous cell carcinoma (SCC) and high-grade intraepithelial neoplasia (HGIN) (PUBMED:34304331). Overall, these findings suggest that ECS has the potential to predict histology in neoplastic lesions, particularly in the esophagus and colon, and could be a valuable tool for real-time histological assessment during endoscopic examinations.
Instruction: Does technique or material used affect bladder tissue reactions when injecting teflon or silicone paste? Abstracts: abstract_id: PUBMED:8693631 Does technique or material used affect bladder tissue reactions when injecting teflon or silicone paste? Objectives: Submucosal injections of polytetrafluoroethylene (PTFE) and polydimethylsiloxane (PDMS) paste continue to be used for the correction of vesicoureteral reflux and urinary incontinence. Potential distant particle migration and foreign body tissue reactions are thought to be significantly affected by the technique of injection and the paste material used. A study to determine if injection technique, paste material, or pretreatment of the injection site significantly affects local tissue reaction was performed on 44 New Zealand white rabbits. Methods: Animals were randomly assigned to undergo four distinct injection treatments into the submucosa or bladder muscularis. Prior to injecting 0.1 cc of the PTFE or PDMS paste, some sites were pretreated with 0.1 cc saline to separate tissue planes, epinephrine to cause vasoconstriction, or doxycycline to provide sclerosis. Pretreatment controls were also performed without injecting PTFE or PDMS paste. Animals were killed, and quadrant bladder biopsies were performed at 1 hour (9), 1 day (9), 7 days (10), 1 month (8), and 6 months (8). A single pathologist, blinded to the injection treatments, performed a histologic evaluation to determine bleb location and the degree of inflammation, fibrosis, and epithelial necrosis. Results: Seven of the 12 submucosal PTFE injection treatments were found on biopsy to be primarily within the muscularis, compared to 1 of 9 with PDMS paste. Three of the 24 PTFE injections could not be found at autopsy. Of these, two were injected into the muscularis, and all were discovered at least 1 month following injection. Particles were present in all 22 PDMS injection sites that were retrieved. Pretreatments, especially with doxycycline, resulted in significant epithelial necrosis at 1 and 7 days. At 1 and 6 months there were no differences in inflammation or fibrosis between PTFE and PDMS or any pretreatment combinations with saline or epinephrine. Conclusions: Correctly injected, silicone (PDMS) paste appears most likely to remain in the submucosal space. Pretreatment injections may cause early epithelial necrosis. PTFE and PDMS elicit similar foreign body reactions over time. abstract_id: PUBMED:2607168 Experimental study of submucous vesical injections in rats. Teflon versus silicone Correction of vesicoureteric reflux by endoscopic subureteric injection of polytef paste is an expending procedure. But polytef past is thick, hard to inject and can migrate at distant sites of injection. We have tried a silicone paste on experimental studie observes tissue reaction at the site of injection (bladder) and in pelvic nodes, kidneys, liver, lungs, brain. 22 rats were injected: 8 with polytef paste, 14 with silicone paste. The study demonstrated that polytef paste sub mucosal injection is followed by migration nodes and local granulomatous reaction. The silicone paste is easier to inject; none particules of silicone were found in nodes nor in others visceria; granulomatous reaction and fibrous capsule around foreign body were smaller. abstract_id: PUBMED:2980235 Structural and ultrastructural modifications following transurethral injection of teflon paste. Structural and ultrastructural tissue modifications caused by polytetrafluoroethylene (teflon) paste injection in incontinent male patients have been studied. The most considerable observations are: a diffuse interstitial splitting of teflon paste in particles of about 3-100 microns. Given that the tissue reaction is limited to a modest infiltration of lymphocytes and monocytes and a slight increase in the collagen fibres, it is concluded that the teflon paste injection is well tolerated by the tissues. abstract_id: PUBMED:2530362 Bladder neck suspension material investigated in a rabbit model. Different principles for providing stable and durable suspension of the bladder neck and proximal urethral tissue are employed in endoscopic bladder neck suspension for female stress urinary incontinence. Six different anchor materials, some of which are currently used as tissue bolsters (anchors) for endoscopic bladder neck suspension, were implanted in the abdominal wall muscle of 45 rabbits. Tension on the anchors was measured and maintained by springs positioned between pairs of anchors. At two months evaluation, vascular graft and loops of suture retained 40% of the starting tension, while two sizes of screw anchors lost all tension. Silicone pads and kink free silicone tubing rapidly pulled through the tissue in 83% and 100% of the cases, respectively. Local tissue reaction with scar formation, physical characteristics of the anchor material (silicone, stainless steel, dacron or polypropolene) and cross sectional area of the anchor were identified as the important factors for anchor stabilization of tissue which has been repositioned under tension. abstract_id: PUBMED:2719557 Results of implantation of teflon paste into the vocal cord according to the data of light and electron microscopy Light and electron microscopy was used to study 5 observations of the tissue response to the teflon paste implanted into the vocal cords to modulate the vocal function. The study showed that teflon was not an absolutely inert substance, it produced inflammation as giant cell granuloma of foreign bodies. Giant cells of foreign bodies (GCFB) enclosing and resorbing the particles of teflon were found to have all the ultrastructural signs of actively phagocytizing cells presenting with various differentiation. The ultrastructure of GCFB provides support for the fact that the cells making up polykaryons belong to the system of mononuclear phagocytes. abstract_id: PUBMED:9153105 Comparison of soft tissue response in rabbits following laryngeal implantation with hydroxylapatite, silicone rubber, and Teflon. This study evaluates the soft tissue response in rabbits following laryngeal implantation for medialization using hydroxylapatite prostheses, carved silicone rubber prostheses, and injectable Teflon. Sixteen rabbits underwent left recurrent laryngeal nerve section for denervation and laryngeal implantation with hydroxylapatite. At 1, 3, 6, and 12 months, 4 animals were painlessly sacrificed and processed for histology. Similarly, animals were implanted with carved silicone rubber prostheses or with Teflon injected through a flap in the thyroid lamina for comparison at 1, 3, and 6 months. In animals implanted with hydroxylapatite, histologic findings include limited acute inflammatory response, thin fibrous encapsulation, and osteogenesis in the region of the fenestra, with lamellar bone bridging the space between the implant and thyroid lamina. With silicone rubber prostheses, there is a limited inflammatory response and fibrous encapsulation of the implant without evidence of osteogenesis. Animals implanted with Teflon demonstrated a classic foreign body reaction with multinucleated giant cells, granuloma formation, and migration of Teflon into surrounding muscle. With respect to soft tissue response, both hydroxylapatite and silicone rubber are less reactive than Teflon. The osteogenesis observed in the presence of hydroxylapatite increases implant stability and minimizes the risk of migration. Conversely, the presence of bone growth may limit the reversibility of medialization procedures performed with hydroxylapatite. abstract_id: PUBMED:2653551 Endoscopic correction of vesico-ureteric reflux by subureteric Teflon injection: follow-up ultrasound and voiding cystography. The technique of subureteric Teflon injection for the correction of vesico-ureteric reflux was first described by Puri and O'Donnell in 1984. The Teflon granuloma that occurs after the injection anchors the submucosal ureter, thereby preventing reflux. This paper reviews the ultrasound and cystographic findings in 88 patients with 115 refluxing ureters followed-up for periods ranging from 3 months to 3 years. Reflux disappeared in 91 ureters (79.1%) and the degree of reflux improved in 22 ureters (19.1%). In this follow-up study, the size of the granuloma on ultrasound was assessed in relation to the number of injections, the total volume of Polytef paste used and the outcome of the procedure. A well circumscribed granuloma was seen at the orifices of 92 ureters. There was no ureteric granuloma evident in 23 ureters and Teflon plaques were found at the trigone in six patients. Complications were rare. One large granuloma caused severe obstruction requiring re-implantation. The size of the granuloma appears to depend on individual tissue reaction rather than on the other factors. The presence or the size of the granuloma bears no direct relationship to the success of the procedure. abstract_id: PUBMED:9444705 Soft tissue response to polytetrafluoroethylene and silicone rubber in humans: morphological and immunohistochemical observations. The objective of this study was by morphological and immunohistochemical means to investigate the cellular tissue response to the alloplastic materials polytetrafluoroethylene (PTFE polymer), and soft and hard silicone rubber over time. In seven healthy volunteers implants made of Proplast-Teflon, and soft and hard silicone were inserted subcutaneously in unloaded areas in the iliac crest region. After 1, 2, 4, 12, and 26 weeks, respectively, the implants with surrounding soft tissue were removed en bloc for histological and immunohistochemical examination using a panel of antibodies to various leukocyte markers. The tissue reaction to the various alloplastic materials varied greatly with the focus on macrophage and giant cell reactions and eventual formation of a peri-implant fibrous capsule. The most extensive changes developed next to porous Proplast, both with respect to degree of changes and endurance of tissue reaction. Less intense reactions were seen, in decreasing order, to soft silicone, Teflon, and hard silicone. The study gave no clues to a toxic, allergic, or traditional immunological pathogenesis of the tissue reaction induced by the test materials. abstract_id: PUBMED:24039333 Evaluation of the biocompatibility of silicone gel implants - histomorphometric study. Conflict Of Interest: NONE DECLARED Breast implants are medical devices that are used to augment breast size or to reconstruct the breast following mastectomy or to correct a congenital abnormality. Breast implants consist of a silicone outer shell and a filler (most commonly silicone gel or saline). Approximately 5 to 10 million women worldwide have breast implants. Histomorphometric study to evaluate the biological tissue compatibility of silicone implants suitable for plastic surgery and the adverse effects and risks of this material. Thirty Wistar white rats received subcutaneous implants and the revestiment of silicone gel Silimed ®®, and randomized into six groups of five animals each, according to the type of implanted material and the time of sacrifice. Eight areas of 60.11mm2 corresponding to the obtained surgical pieces were analyzed, counting mesenchymal cells, eosinophils, and foreign body giant cells, observing an acceptable biocompatibility in all implants, for subsequent statistical analysis by Tukey test. Silicone gel showed inflammation slightly greater than for other groups, with tissue reactions varying from light to moderate, whose result was the formation of a fibrous capsule around the material, recognized by the organism as a foreign body. Despite frequent local complications and adverse outcomes, this research showed that the silicone and top layer presented an acceptable chronic inflammatory reaction, which did not significantly differ from the control group. In general, it is possible to affirm that silicone gel had acceptable levels of biocompatibility, confirmed the rare presence of foreign body giant cells, and when of the rupture, formed a fibrous capsule around the material, separating the material of the organism. abstract_id: PUBMED:403487 Tissue response to silicone rubber when used as a root canal filling. To test the tissue compatibility of silicone rubber when it is used as a root canal filler, excess material was intentionally forced into the apical tissues in primates. The tissue response was one of general acceptance, with the usual response being fibrotic encapsulation. A low degree of inflammation was noted. Further studies are in progress. Answer: Yes, both the technique and material used affect bladder tissue reactions when injecting Teflon (polytetrafluoroethylene, PTFE) or silicone (polydimethylsiloxane, PDMS) paste. A study on New Zealand white rabbits found that the technique of injection significantly influenced the location of the injected material, with PTFE more likely to be found within the muscularis rather than the intended submucosal space, compared to PDMS paste. Additionally, pretreatment injections, particularly with doxycycline, caused significant epithelial necrosis at early time points. Over time, however, PTFE and PDMS elicited similar foreign body reactions, regardless of pretreatment with saline or epinephrine (PUBMED:8693631). An experimental study on rats demonstrated that PTFE paste injections were followed by particle migration to nodes and a local granulomatous reaction, whereas silicone paste was easier to inject, did not result in particle migration to nodes or other viscera, and induced a smaller granulomatous reaction and fibrous capsule around the foreign body (PUBMED:2607168). Other studies have shown that PTFE paste can cause a modest infiltration of lymphocytes and monocytes and a slight increase in collagen fibers, indicating that it is generally well tolerated by tissues (PUBMED:2980235). However, Teflon has also been associated with a classic foreign body reaction with multinucleated giant cells, granuloma formation, and potential migration into surrounding muscle (PUBMED:9153105). In contrast, silicone rubber implants have been associated with a limited inflammatory response and fibrous encapsulation without evidence of osteogenesis (PUBMED:2719557). Overall, the material used for injections can lead to different degrees and types of tissue reactions, and the technique of injection can influence the localization and potential migration of the material, affecting the tissue response in the bladder (PUBMED:8693631; PUBMED:2607168; PUBMED:2980235; PUBMED:9153105; PUBMED:2719557).
Instruction: Can a deprivation index be used legitimately over both urban and rural areas? Abstracts: abstract_id: PUBMED:24929662 Can a deprivation index be used legitimately over both urban and rural areas? Background: Although widely used, area-based deprivation indices remain sensitive to urban-rural differences as such indices are usually standardised around typical urban values. There is, therefore, a need to determine to what extent available deprivation indices can be used legitimately over both urban and rural areas. Methods: This study was carried out in Brittany, France, a relatively affluent region that contains deep rural areas. Among the 1,736 residential census block groups (IRIS) composing the Brittany region, 1,005 (57.9%) are rural. Four deprivation indices were calculated: two scores (Carstairs and Townsend) developed in the UK and two more recent French measures (Havard and Rey). Two standardisation levels were considered: all of the IRIS and only the urban IRIS of the region. Internal validity (Kappa coefficients and entropy values) and external validity (relationship with colorectal cancer screening [CCS] attendance) were investigated. Results: Regardless of the deprivation measure used, wealthy areas are mostly clustered in the West and at the outskirts of major towns. Carstairs and Rey scores stand out by all evaluation criteria, capturing both urban and rural deprivation. High levels of agreements were found across standardisation levels (κ = 0.96). The distributions of deprivation scores were balanced across urban and rural areas, and high Shannon entropy values were observed in the capital city (≥0.93). Similar and significant negative trends were observed between CCS attendance and both deprivation indices, independent of the degree of urbanisation. Conclusions: These results provide support, despite potential sociological objections, for the use of a compromise index that would facilitate comparisons and interpretations across urban and rural locations in public health research. abstract_id: PUBMED:30528071 The development of an index of rural deprivation: A case study of Norfolk, England. Geographical deprivation indices such as the English Index of Multiple Deprivation (IMD) have been widely used in healthcare research and planning since the mid-1980s. However, such indices normally provide a measure of disadvantage for the whole population and can be inflexible to adaptation for specific geographies or purposes. This can be an issue, as the measurement of deprivation is subjective and situationally relative, and the type of deprivation experienced within rural areas may differ from that experienced by urban residents. The objective of this study was to develop a Rural Deprivation Index (RDI) using the English county of Norfolk as a case study, but with a view to adopting a flexible approach that could be used elsewhere. It is argued that the model developed in this research gives clarity to the process of populating an index and weighting it for a specific purpose such as rural deprivation. This is achieved by 'bundling' highly correlated indicators that are applicable to both urban and rural deprivation into one domain, and creating a separate domain for indicators relevant to the setting of interest, in this case rural areas. A further domain is proposed to account for population differences in rural areas. Finally, a method was developed to measure variability in deprivation within small areas. The RDI results in more rural areas in Norfolk falling in the most deprived quintile, particularly those classified as 'Rural town and fringe in sparse settings'; these areas also have high levels of heterogeneity of deprivation when using the variability measure created. This model proposed has the potential to provide a starting point for those who wish to create a summary deprivation measure taking into account rurality, or other local geographic factors, and as part of a range of approaches that can be used to allocate, or apply for, resources. abstract_id: PUBMED:11027953 Deprivation and poor health in rural areas: inequalities hidden by averages. Poor health and social deprivation scores in 570 wards in East Anglia, UK, were much less associated in rural than in urban areas. The deprivation measure most closely related to poor health in the least accessible rural wards was male unemployment, but use of this measure did not remove the urban-rural gradient of association strength. Neither did replacing wards by smaller enumeration districts as the units of analysis. The differences between urban and rural correlations were removed by restricting the comparison to wards with the same unemployment range and combining pairs of rural wards with similar deprivation values. Apparent differences between rural and urban associations are therefore not due to the choice of deprivation indices or census areas but are artifacts of the greater internal variability, smaller average deprivation range and smaller population size of rural small areas. Deprived people with poor health in rural areas are hidden by favourable averages of health and deprivation measures and do not benefit from resource allocations based on area values. abstract_id: PUBMED:18810884 Differences in mortality between rural and urban areas in England and Wales, 2002-04. This article examines differences in mortality between rural and urban areas in England and Wales in the years 2002-04 using the Rural and Urban Area Classification 2004. The analysis includes adjustment using the Index of Multiple Deprivation 2004 and Welsh Index of Multiple Deprivation 2005 to investigate whether mortality differences between rural and urban areas could be explained by differences in the distribution of deprivation. abstract_id: PUBMED:8281087 Deprivation, low birth weight, and children's height: a comparison between rural and urban areas. Objective: To compare proportions of low birthweight babies and mean heights of schoolchildren between rural and urban areas at different levels of social deprivation. Design: Cross sectional population based study classifying cases by Townsend material deprivation index of enumeration district of residence and by rural areas, small towns, and large towns. Setting: Northumberland Health District. Subjects: 18,930 singleton infants delivered alive during January 1985 to September 1990 and resident in Northumberland in October 1990; 9055 children aged 5 to 8 1/2 years attending Northumberland schools in the winter of 1989-90. Main Outcome Measures: Odds ratios for birth weight less than 2800 g; difference in mean height measured by standard deviation (SD) score. Results: Between the most deprived and most affluent 20% of enumeration districts the odds ratio for low birth weight adjusted for rural or urban setting was 1.71 (95% confidence interval 1.51 to 1.93) and the difference in mean height -0.232 SD score (-0.290 to -0.174). Between large towns and rural areas the odds ratio for low birth weight adjusted for deprivation was 1.37 (1.23 to 1.53) and the difference in mean height -0.162 SD score (-0.214 to -0.110). Results for small towns were intermediate between large towns and rural areas. Conclusions: Inequalities in birth weight and height exist in all rural and urban settings between deprived and affluent areas. In addition, there is substantial disadvantage to living in urban areas compared with rural areas which results from social or environmental factors unrelated to current levels of deprivation. abstract_id: PUBMED:20531365 Variations in life expectancy between rural and urban areas of England, 2001-07. Background: This study was part of a wider project commissioned by the Department for Environment, Food and Rural Affairs (Defra) to examine inequalities in health outcomes in rural areas. It investigated variations in life expectancy at birth between rural and urban areas of England, taking the effect of deprivation into account. The study aimed to produce results which provide specific evidence of the needs of rural communities, as they have often been overlooked in previous research. Methods: The Rural and Urban Area Classification (RUAC) 2004 and the Index of Multiple Deprivation (IMD) 2007 were used to categorise area types at the Lower Super Output Area (LSOA) level. Population and mortality data used were produced by the Office for National Statistics (ONS). Abridged life tables were constructed to calculate period life expectancy at birth for males and females, for the years 2001 to 2007 combined. Confidence intervals (95%) were also produced. Results: For the 2001-07 period, life expectancy at birth in England was 76.9 years for males and 81.3 years for females. However, when deprivation was examined, results between the most deprived and least deprived quintiles varied by 7.8 years for men and 5.4 years for women.Overall, life expectancy was higher in rural areas than in urban areas. Deprivation had a considerable impact on the results and wide inequalities were evident, particularly in men and in urban areas. In both area types, males living in the less deprived quintiles had similar life expectancies to females living in the more deprived quintiles.Within rural area types, life expectancy was higher in village and dispersed settlements than in town and fringe areas. There were large differences between the fourth and fifth (most deprived) quintiles in village and dispersed settlements, which shows that there may be acute pockets of deprivation within this area type that need to be addressed.In terms of sparsity, there was little difference in life expectancy between densely and less densely populated localities within rural and urban areas. However, variations were observed when deprivation was taken into account and greater differences were evident in less sparse areas than in sparse areas. Conclusions: There were clear inequalities in life expectancy between rural and urban areas in England. There were also intricate differences within area types, which can be overlooked when only examining differences between them. The results were consistent with the findings of previous studies and demonstrated that it is important to examine differences in life expectancy in both area and deprivation contexts. abstract_id: PUBMED:12123438 Deprivation and stillbirth risk in rural and urban areas. The objective of the study was to investigate whether stillbirth risk was higher, and the effect of deprivation on inequality in stillbirth risk more marked, in rural than in urban areas. We carried out a cohort study of all 280 757 singleton births to mothers domiciled in Cumbria, north-west England, 1950-92. After allowing for individual social class and community deprivation, the risk of stillbirth was lower outside urban centres both during 1950-65 (OR = 0.91, 95% CI 0.84, 1.00) and during 1966-92 (OR = 0.82, 95% CI 0.73, 0.92). In earlier years, unsupported mothers in rural areas and mothers living in remote areas were particularly at risk. Urban/rural differences in risk were not explained by individual social class, community deprivation or overcrowding and have persisted over a 40-year time period. abstract_id: PUBMED:16088374 Deprivation: different implications for forensic psychiatric need in urban and rural areas. Background: Ecological relationships between deprivation and forensic psychiatric admission rates may differ in urban and rural areas. Aims: The aim of the study was to compare the relationship between material deprivation and forensic admission rates in rural and urban areas for a whole-national service in Ireland over a 3-year period. Method: All Irish forensic admissions from 1997 to 1999 were allocated to the appropriate small area. Material deprivation scores were calculated from census data. Mean annual admission rates and Bayesian standardised forensic admission ratios for small areas were aggregated by material deprivation score and population density. Results: At small area level, there were significant non-linear increases in forensic admissions with increasing deprivation. The increases in urban areas (population density &gt;10/hectare) were absent in less densely populated areas. Conclusions: Deprivation alone may not be the key factor in predicting forensic service utilisation. Factors associated with specifically urban deprived areas may be of greater relevance in planning services. abstract_id: PUBMED:14512237 Rural/urban differences in the association between deprivation and healthcare utilisation. Whilst associations between inequalities in healthcare utilisation and socio-economic deprivation are well established in the UK it is argued that deprivation indices, such as the Townsend index, remain insensitive to rural/urban differences. This study examines how Townsend and its components differ in their association with healthcare utilisation across the rural/urban spectrum of a large health region. Our research was carried out in the West Midlands National Health Service region (population 5.3 million), comprising of a similar geographical population diversity to that of the United Kingdom (UK) using Hospital Episode Statistics (1994/5-1998/9) and 1991 census socio-demographic data. Retrospective multilevel multivariate models compare three ward-level healthcare utilisation measures (standardised episode-, admission-, and bed-rates) in relation to the Townsend index of material deprivation, its components, and four rural/urban characteristics (population density, population potential, electoral ward area and perimeter size). The associations between outcomes and Townsend were generally not attenuated by the rural/urban characteristics. The constituent component of car-ownership was similarly unperturbed, whereas population potential significantly perturbed the home-ownership model and overcrowding was significantly perturbed by all four rural/urban characteristics considered. A deprivation index may encapsulate different meanings to that of its components when used to assess variations in healthcare utilisation. Constituent components may yield considerable perturbation in relation to healthcare utilisation across the rural/urban spectrum, whilst the composite measure does not. In particular, and contrary to anecdotal opinion, car-ownership and unemployment (as recorded in the 1991 UK census) exhibited a stable relationship across different rural/urban areas with respect to healthcare utilisation. abstract_id: PUBMED:33607097 Analysis of the impact of heat waves on daily mortality in urban and rural areas in Madrid. The objective of this study was to analyze and compare the effect of high temperatures on daily mortality in the urban and rural populations in Madrid. Data were analyzed from municipalities in Madrid with a population of over 10,000 inhabitants during the period from January 1, 2000 to December 31, 2020. Four groups were generated: Urban Metropolitan Center, Rural Northern Mountains, Rural Center, and Southern Rural. The dependent variable used was the rate of daily mortality due to natural causes per million inhabitants (CIE-X: A00-R99) between the months of June and September for the period. The primary independent variable was maximum daily temperature. Social and demographic "context variables" were included: population &gt;64 years of age (%), deprivation index and housing indicators. The analysis was carried out in three phases: 1) determination of the threshold definition temperature of a heat wave (Tumbral) for each study group; 2) determination of relative risks (RR) attributable to heat for each group using Poisson linear regression (GLM), and 3) calculation of odds ratios (OR) using binomial family GLM for the frequency of the appearance of heat waves associated with context variables. The resulting percentiles (for the series of maximum daily temperatures for the summer months) corresponding to Tthreshold were: 74th percentile for Urban Metropolitan Center, 76th percentile for Southern Rural, 83rd for Rural Northern Mountains and 98th percentile for Center Rural (98). Greater vulnerability was found for the first two. In terms of context variables that explained the appearance of heat waves, deprivation index level, population &gt;64 years of age and living in the metropolitan area were found to be risk factors. Rural and urban areas behaved differently, and socioeconomic inequality and the composition of the population over age 64 were found to best explain the vulnerability of the Rural Center and Southern Rural zones. Answer: Yes, a deprivation index can be used legitimately over both urban and rural areas, but it may require careful consideration and adaptation to accurately reflect the different types of deprivation experienced in these settings. The study carried out in Brittany, France, found that certain deprivation indices, such as the Carstairs and Rey scores, were effective in capturing both urban and rural deprivation, with high levels of agreement across standardisation levels and significant negative trends observed between colorectal cancer screening attendance and both deprivation indices, independent of the degree of urbanisation (PUBMED:24929662). However, the development of a Rural Deprivation Index (RDI) in Norfolk, England, highlighted the need for a flexible approach that accounts for the specific types of deprivation experienced in rural areas, suggesting that traditional indices may not fully capture the nuances of rural deprivation (PUBMED:30528071). Research in East Anglia, UK, indicated that poor health and social deprivation scores were less associated in rural than in urban areas, suggesting that deprived individuals with poor health in rural areas might be hidden by favourable averages and not benefit from resource allocations based on area values (PUBMED:11027953). Moreover, studies have shown that there are differences in mortality between rural and urban areas in England and Wales, and these differences could not be fully explained by the distribution of deprivation, indicating that other social or environmental factors may contribute to the disadvantage of living in urban areas (PUBMED:18810884, PUBMED:8281087). In terms of life expectancy, rural areas in England had higher life expectancy than urban areas, with deprivation having a considerable impact on the results, demonstrating the importance of examining life expectancy in both area and deprivation contexts (PUBMED:20531365). In summary, while a deprivation index can be used over both urban and rural areas, it may require modification or the development of specific indices to accurately reflect the unique aspects of deprivation in each setting. The use of such indices can facilitate comparisons and interpretations across locations in public health research, but it is crucial to ensure that they are sensitive to the differences between urban and rural deprivation (PUBMED:24929662, PUBMED:30528071).
Instruction: Does participation in organized sports predict future physical activity for adolescents from diverse economic backgrounds? Abstracts: abstract_id: PUBMED:19237113 Does participation in organized sports predict future physical activity for adolescents from diverse economic backgrounds? Purpose: To examine cross-sectional and longitudinal associations between socioeconomic status (SES), gender, sports participation and moderate-to-vigorous physical activity (MVPA) in adolescents. Methods: Project EAT (Eating Among Teens), a population-based longitudinal study followed a socioeconomically and ethnically diverse sample of 1709 adolescents in 1998-1999 (Time 1) and 2003-2004 (Time 2). Mixed model regression analyses were used to examine longitudinal trends in MVPA as a function of SES and previous sports involvement. Results: For both genders, participation in organized sports and weekly hours of MVPA were positively associated with SES. On average, MVPA decreased between high school and young adulthood for both genders. Adolescents who participated in sports during high school showed a steeper decline in weekly hours of MVPA than their non-sports-participating counterparts. SES had a significant moderating effect on the change in MVPA over time for boys who participated in organized sports, with low SES boys showing a steeper decline in MVPA between time periods than higher SES boys. Although on average, a statistically significant difference in MVPA between previous sports participants and nonparticipants remained at Time 2, for all SES groups and both genders, the gap between hours of MVPA was either overcome or significantly narrowed by young adulthood. Conclusions: Increased dependence on organized sports for MVPA may be insufficient to meet the needs of youth following high school, especially for low SES youth. Designing physical activity promotions that reach and address the unique needs of lower SES youth and families is a public health priority. abstract_id: PUBMED:34150481 Does organized sports participation in childhood and adolescence positively influence health? A review of reviews. Sports participations have the potential for both positive and negative health outcomes. We hence aimed (i) to assess systematically reviewed associations between organized sports participation in children and adolescents and their health, and (ii) to assess qualitative syntheses of experiences among children and adolescents concerning organized sports participation and health. A search was undertaken in April 2020 across the databases MEDLINE, EMBASE, APA PsycInfo (Ovid), Scopus, SPORTDiscus (EBSCOhost), and specialized databases for reviews. The recommended Joanna Briggs Institute approach to critical appraisal, study selection, data extraction, and data synthesis was used. Systematic reviews were included that covered children and/or adolescents aged ≤ 18 years, and adult participants with retrospective exposure to-or experiences of-organized sports participation before the age of 19 and examining health outcomes and experiences (Prospero protocol CRD 42020206677). Five reviews based mainly on cross-sectional data, two mainly on longitudinal data, and one on experimental studies were included. A causal relationship of moderate-to-high level of evidence between organized sports participation and moderate crude weight reduction accentuated by diet control and team sports was identified. Evidence of causal relationships between sports participation and reduced anxiety, and depression and increased physical activity was at a moderate level. Evidence of causal relationships between sports participation and health was of low-to-moderate level concerning obesity status (inconclusive), bone health (positive), and psychological and social health (positive and negative). Causal relationships between organized child and adolescent sports participation and health remain uncertain. Experimental and well-conducted longitudinal primary studies are highly warranted. abstract_id: PUBMED:31179262 Associations of Participation in Organized Sport and Self-Organized Physical Activity in Relation to Physical Activity Level Among Adolescents. Although physical activity level (PAL) is positively correlated with adolescents' health, many adolescents do not fulfill recommendations for physical activity. This study examines the associations of organized sport and self-organized physical activity, with PAL among adolescents. Participants were 301 adolescents (12-13 year-olds). The adolescents wore accelerometers for 1 week according to international standards, and reported their participation in organized sport and self-organized physical activity in a questionnaire. The results showed that the level of participation in organized sport was positively associated with the adolescents' total PAL, while there was no significant association between time spent in self-organized physical activity and adolescents' daily minutes of moderate and vigorous physical activity. In addition, boys who participated &lt;3 h per week (or not at all) in organized sport stood out with the lowest fulfillment of recommended PAL. Our findings underline the critical importance of getting adolescents, especially boys, to participate in organized sport and not to drop out from organized sport during adolescence. abstract_id: PUBMED:35718651 Physical Activity, Sports Participation, and Psychosocial Health in Adolescents With Hearing Loss. Purpose: This study examined the association between hearing status (i.e., adolescents with and without hearing loss) and physical activity and sports participation. Secondarily, we explored the association between physical activity and sports participation and psychosocial outcomes among adolescents with hearing loss. Methods: Analyses included 29,034 adolescents (52.1% male, 13.8 ± 2.3 (M ± SD) years) from the combined 2018-2019 National Survey of Children's Health. Adolescents were grouped by hearing status. Adjusted logistic regression models assessed physical activity level (i.e., 0, 1-3, 4-6, and 7 days/week) and sports participation (i.e., participation in sports within the past 12 months) by hearing status. Secondary analyses examined associations between physical activity and sports participation with psychosocial outcomes among adolescents with hearing loss adjusting for relevant confounders. Results: Relative to their hearing peers, adolescents with hearing loss (n = 359) were 40% [adjusted odds ratio (AOR), 0.60; 95% confidence interval (CI), 0.44, 0.81], 43% [AOR, 0.57; 95% CI, 0.41, 0.80], and 33% [AOR, 0.67; 95% CI, 0.47, 0.95] less likely to engage in 1-3 days/week of physical activity, 4-6 days/week of physical activity, and meet physical activity guidelines, respectively. Further, adolescents with hearing loss were 31% [AOR, 0.69; 95% CI, 0.55, 0.85] less likely to participate in sports. Sports participation, but not physical activity, was associated with a significant reduction in the likelihood of experiencing adverse psychosocial outcomes among adolescents with hearing loss (p's &lt; .05). Discussion: Sports participation, but not physical activity, was associated with attenuated likelihood of experiencing adverse psychosocial outcomes in adolescents with hearing loss, suggesting unique characteristics of sports participation confer protection of psychosocial health. Increasing access to and reducing barriers to engagement in sports should be prioritized to improve psychosocial health in adolescents with hearing loss. abstract_id: PUBMED:37772531 Trends in physical activity in adolescents participating and not participating in organized team or individual physical activity. Introduction And Objective: Participation of adolescents in organized physical activity (PA), together with physical education, is essential for the development of a habit of regular PA, achievement of PA recommendations and adoption of physical literacy. The aim of the study is to identify the trends and differences in PA levels between adolescents participating in organized team PA, organized individual PA, and those not participating in organized PA. Another aim is to formulate proposals to eliminate the negative effects of the pandemic on adolescents' participation in organized PA. Material And Methods: The research took place between 2012-2021 in 51 secondary schools and involved a total of 1,202 boys and 1,561 girls. Participation in organized PA was examined using the 'Sports preferences survey'. Weekly PA was investigated by means of the 'International Physical Activity Questionnaire-Long Form'. Results: The results confirmed the negative trend in participation in organized PA. However, this was predominantly caused by the negative effects of the pandemic in 2020-2021. Boys participating in organized team and individual PA had more school PA, recreation PA, vigorous PA, moderate PA and overall PA during the week, compared with boys not participating in organized PA (girls had more vigorous PA, moderate PA and overall PA). Conclusions: Participation in organized team PA and individual PA plays an important role, primarily in supporting vigorous PA but also in overall weekly PA among boys and girls. The negative effects of the pandemic should be eliminated by increasing adolescents' participation in various forms of organized PA with an emphasis on organized individual PA which can be regularly pursued, even under restrictive measures at home, in the countryside or in nature. abstract_id: PUBMED:35627770 Physical Activity and Sports Participation among Adolescents: Associations with Sports-Related Knowledge and Attitudes. The impact of physical activity and sport on the development of adolescents' knowledge and attitudes has not been studied sufficiently. We assume that young people with more knowledge of sports will continue to be physically active on their own. The aim of this study is to identify the differences in the sports and physical activity-related knowledge and attitudes of adolescents who engage in organised sports training and those who do not. A total of 1033 6th-grade students from (aged 12.7 ± 0.4 years) 52 schools in Estonia were involved in the study. Logistic regression analysis was used to examine the associations between sports-related knowledge and attitudes towards students participating in organised sports training. The main reasons which hindered adolescents from being active were a lack of suitable equipment, being weaker than others, and laziness. Knowledge of physical activity, higher perceived benefits, and lower barriers to physical activity were the predictors contributing to adolescents' participation in sports training. The results demonstrated that physically active students have better knowledge of sports and physical activity. Students participating in organised sports training have strong positive benefits from being regularly physically active and feel that sports training maintains their health. abstract_id: PUBMED:34869148 Sociodemographic Correlates of Organized Sports Participation in a Sample of Middle School Students in China. Background: Organized sport participation (OSP) is considered as one method with the potential to increase overall physical activity (PA) levels in young people. It is essential to understand the correlates of OSP to inform future PA interventions. Purpose: This study aimed to explore the sociodemographic correlates of OSP among middle school students from the Nanjing City of China. Methods: A total of 7,097 adolescents (50.1% girls) aged 12-15 years from Nanjing, China, were recruited in this survey. Self-reported data on sex, grade, race, residence areas, proficient sport skills, and parental highest education were obtained. OSP was assessed by the question asked in the questionnaire on whether participants were involved in any "sports club or team" with the binary answer options of "yes" and "no," for statistical analysis purposes. Generalized linear models were used to determine the correlates of OSP. Results: Only 16.6% reported participating in any organized sport over the past whole year, while boys (OR = 1.34, 95% CI: 1.18-1.53) and 7th graders (OR = 1.40, 95% CI: 1.18-1.65) were more likely to participate in organized sport. Adolescents being Han ethnicity were less likely to either participate in organized sport (OR = 0.60, 95% CI: 0.40-0.92), or masterless (one or two) proficient sport skills [OR (one) = 0.27, 95%CI: 0.20-0.37; OR (two) = 0.43, 95% CI: 0.36-0.50]. Besides, both residence area and parental highest education were not significantly associated with OSP among the participating adolescents. Conclusion: The current study confirmed that only one-sixth of adolescents participate in the organized sport over the past year. At-risk population subgroups include girls, older adolescents, being Han ethnicity, and those proficient in fewer sport skills. Sex, grades, race, and proficient sport skills were significant correlates of OSP. School, community, and families need to provide more resources and support for disadvantaged populations in OSP. abstract_id: PUBMED:33924245 Continued Participation of Israeli Adolescents in Online Sports Programs during the COVID-19 Pandemic Is Associated with Higher Resilience. Background: Coronavirus disease 2019 (COVID-19) has forced adolescents to adapt rapidly to a new reality of physical and social distancing, while introducing a range of new sources of stress and adversity. Our primary aim was to study the relationship between adolescents' resilience and their participation in online sports programs during the COVID-19 pandemic lockdown period. Our secondary aims were to assess the associations between the organized sports programs' determinants and resilience. Methods: Online surveys designed to examine resilience, lifestyle, psychosocial health and characteristics of the organized sports programs were administered to 473 adolescents who were enrolled in organized sports programs before the COVID-19 pandemic. Results: Adolescents who continued to participate in online structured programs during the lockdown period were significantly more resilient and physically active, had higher self-related health, satisfaction with life, and ability to cope during the pandemic, compared to those who did not participate. Relationships with the adult instructor and levels of physical activity were the most important factors of the programs that were associated with resilience. Conclusions: Participation of adolescents in sports programs is an important resource associated with higher levels of resilience. Youth programs should continue their activities during globally challenging times, such as the COVID-19 pandemic. abstract_id: PUBMED:30781609 Participation in Organized Sports and Self-Organized Physical Activity: Associations with Developmental Factors. Engagement in organized sports is associated with developmental factors, such as, healthy growth, cognitive abilities, psychological well-being and lower substance use. Research also suggest that the spontaneous free play that characterises self-organized physical activity (PA) provides young people with opportunities to learn social skills, such as self-regulation and conflict-resolution skills. We assessed associations between participation in the two activity types and several demographics along with developmental factors (e.g., body mass index (BMI)). Data was from a representative sample of 2060 students attending 38 schools in Norway (mean age (Mage) = 15.29, standard deviation (SD) = 1.51; 52% females). Results indicated that while engagement in organized sports was more related to developmental factors, relative to self-organized PA, engaging concurrently in both activities for at least an hour a week was more developmentally beneficial than engaging only in one for the same amount of time. Thus, PA programmes for students will enhance their effectiveness if they focus on structured activities but also self-organized activities where students can coordinate themselves. abstract_id: PUBMED:26928601 Identifying profiles of actual and perceived motor competence among adolescents: associations with motivation, physical activity, and sports participation. The present study identified adolescents' motor competence (MC)-based profiles (e.g., high actual and low perceived MC), and accordingly investigated differences in motivation for physical education (PE), physical activity (PA) levels, and sports participation between profiles by using regression analyses. Actual MC was measured with the Körperkoordinationstest für Kinder. Adolescents (n = 215; 66.0% boys; mean age = 13.64 ± .58 years) completed validated questionnaires to assess perceived MC, motivation for PE, PA-levels, and sports participation. Actual and perceived MC were only moderately correlated and cluster analyses identified four groups. Two groups of overestimators (low - overestimation, average - overestimation) were identified (51%), who particularly displayed better motivation for PE when compared to their peers who accurately estimated themselves (low - accurate, average - accurate). Moreover, adolescents with low actual MC, but high perceived MC were significantly more active than adolescents with low actual MC who accurately estimated themselves. Results pointed in the same direction for organised sports participation. Underestimators were not found in the current sample, which is positive as underestimation might negatively influence adolescents' motivation to achieve and persist in PA and sports. In conclusion, results emphasise that developing perceived MC, especially among adolescents with low levels of actual MC, seems crucial to stimulate motivation for PE, and engagement in PA and sports. Answer: Participation in organized sports has been associated with various outcomes related to future physical activity (PA) among adolescents from diverse economic backgrounds. However, the relationship is complex and influenced by multiple factors. For adolescents from higher socioeconomic status (SES), participation in organized sports and moderate-to-vigorous physical activity (MVPA) are positively associated, but this association does not necessarily predict future physical activity. Adolescents who participated in sports during high school showed a steeper decline in weekly hours of MVPA than their non-sports-participating counterparts, and the gap in MVPA between sports participants and nonparticipants narrowed by young adulthood. SES had a moderating effect on the change in MVPA over time for boys, with low SES boys showing a steeper decline in MVPA than higher SES boys (PUBMED:19237113). The evidence of causal relationships between sports participation and health outcomes, including future physical activity levels, is of low-to-moderate level, with some positive associations identified for bone health, psychological, and social health, but the evidence is inconclusive for obesity status (PUBMED:34150481). Participation in organized sport is positively associated with adolescents' total PAL, and boys who participated less than 3 hours per week in organized sport had the lowest fulfillment of recommended PAL. This suggests that organized sport plays a critical role in maintaining adolescents' physical activity levels (PUBMED:31179262). For adolescents with hearing loss, sports participation, but not physical activity, was associated with a reduced likelihood of experiencing adverse psychosocial outcomes, suggesting that sports participation may have unique benefits for psychosocial health (PUBMED:35718651). The negative trend in participation in organized PA, exacerbated by the pandemic, highlights the need for interventions to increase adolescents' participation in organized PA, which supports vigorous PA and overall weekly PA among boys and girls (PUBMED:37772531). Adolescents who engage in organized sports training have better knowledge of sports and physical activity, and this participation is associated with positive benefits and attitudes towards being regularly physically active (PUBMED:35627770). In summary, while participation in organized sports is associated with higher levels of physical activity during adolescence, its predictive value for future physical activity varies and is influenced by SES, gender, and other sociodemographic factors. It is important to design interventions that address the unique needs of lower SES youth to ensure the benefits of organized sports participation are accessible to all adolescents (PUBMED:19237113).
Instruction: Clinical guidelines online: do they improve compliance? Abstracts: abstract_id: PUBMED:22874224 An engine for compliance checking of clinical guidelines. Although, clinical guidelines are regarded as best practices for clinicians, clinician activities are not always compliant with guideline recommendations. This paper aims to improve clinician compliance with guidelines. We have developed an engine to automatically report three non-compliance situations: 1) guideline recommendations exist, and the clinician performed some activities, but not according to the guidelines; 2) guideline recommendations exist, but the clinician did nothing; 3) guideline recommendations do not exist, but the clinician performed some activities. In particular, we highlight effective time for compliance checking, as well as membership, numeric relationships, concept subsumption and contextualization. We deployed our engine to a clinical setting involving the daily care routine of diabetes patients, and generated non-compliance reports for pilot users. abstract_id: PUBMED:33998499 Compliance with the clinical guidelines for managing asthma by primary care physicians: An ambispective observational study. Objective: To assess the degree of compliance with the recommendations of the 2009 and 2015 versions of the Spanish guidelines for managing asthma (Guía Española para el Manejo del Asma [GEMA]) and the effect of this compliance on controlling the disease. Material And Methods: We conducted an observational ambispective study between September 2015 and April 2016 in which 314 primary care physicians and 2864 patients participated. Results: Using retrospective data, we found that 81 of the 314 physicians (25.8%; 95% CI 21.3-30.9) stated that they complied with the GEMA2009 recommendations. At the start of the study, 88 of the 314 physicians (28.0%; 95% CI 23.4-33.2) complied with the GEMA2015 recommendations. Poorly controlled asthma (OR, 0.19; 95% CI 0.13-0.28) and persistent severe asthma at the start of the study (OR, 0.20; 95% CI 0.12-0.34) were negatively associated with having well-controlled asthma by the end of the follow-up. In contrast, compliance with the GEMA2015 recommendations was positively associated with a greater likelihood that the patient would have well-controlled asthma by the end of the follow-up (OR, 1.70; 95% CI 1.40-2.06). Conclusions: Low compliance with the clinical guidelines for managing asthma is a common problem among primary care physicians. Compliance with these guidelines is associated with a better asthma control. Actions need to be taken to improve primary care physician compliance with the asthma management guidelines. abstract_id: PUBMED:32151431 Compliance with the clinical guidelines for managing asthma by primary care physicians: An ambispective observational study. Objective: To assess the degree of compliance with the recommendations of the 2009 and 2015 versions of the Spanish guidelines for managing asthma (Guía Española para el Manejo del Asma [GEMA]) and the effect of this compliance on controlling the disease. Material And Methods: We conducted an observational ambispective study between September 2015 and April 2016 in which 314 primary care physicians and 2864 patients participated. Results: Using retrospective data, we found that 81 of the 314 physicians (25.8%; 95% CI 21.3-30.9) stated that they complied with the GEMA2009 recommendations. At the start of the study, 88 of the 314 physicians (28.0%; 95% CI 23.4-33.2) complied with the GEMA2015 recommendations. Poorly controlled asthma (OR, 0.19; 95% CI 0.13-0.28) and persistent severe asthma at the start of the study (OR, 0.20; 95% CI 0.12-0.34) were negatively associated with having well-controlled asthma by the end of the follow-up. In contrast, compliance with the GEMA2015 recommendations was positively associated with a greater likelihood that the patient would have well-controlled asthma by the end of the follow-up (OR, 1.70; 95% CI 1.40-2.06). Conclusions: Low compliance with the clinical guidelines for managing asthma is a common problem among primary care physicians. Compliance with these guidelines is associated with better asthma control. Actions need to be taken to improve primary care physician compliance with the asthma management guidelines. abstract_id: PUBMED:15205774 Compliance with infection control guidelines Compliance is the degree to which advice is adhered to. It can be assumed that compliance rates are considerably less than 100% for nearly all infection control guidelines. Factors that may influence compliance rates can be found on different levels of the health care system. Health care workers may adhere to guidelines because they believe that a measure is effective, they receive positive or negative feedback, or because they follow role models. The institution is responsible for providing the structural conditions that allow easy adherence. Incentives for the institution are regulations that have been made mandatory by law or financial considerations. Finally, guidelines must be accepted by society to be widely adhered to. Compliance can be measured by direct or indirect surveillance. If it is found to be under the expected value, it should be improved using interventional programs. Long-lasting effects on compliance rates can be achieved by using a multifactorial approach that considers all levels of the health care system. The goal of the program should be clearly defined and accepted by all parties. Especially the institution should highly value the goal. Strategies that may be used to improve compliance include educational programs for health care workers, provision of better structures by the institution, e. g., administrative sanction or rewarding, or education of patients to influence the expectations of society. As long as there are no striking new interventions to protect the patients, the improvement of compliance with existing guidelines remains the most important measure to reduce rates of nosocomial infection. abstract_id: PUBMED:18806326 Compliance with clinical practice guidelines. Compliance with clinical practice guidelines is a challenging topic because it depends on a variety of factors, some related to guidelines themselves, some related to users, and some to the implementation context. Among the former are guideline quality, purpose and implementation modality. Among the user-related factors are attitude to behavioural changes, authority interventions to foster adherence and eventually the type of users (general practitioners, hospital professionals, home caregivers, patients, etc.). Context is also crucial because organisational issues, such as lack of resources, can hamper guideline implementation and sometimes the original guideline intention is overridden by the guideline adaptation to a certain setting. This chapter analyses these factors and discusses their implications for the development of computerised decision support systems. Moreover, it gives examples of non-compliance detection and analysis in a specific real-world computerised guideline implementation, facing both methodological and practical issues. abstract_id: PUBMED:30173561 Compliance with clinical guidelines and adherence to antiretroviral therapy among patients living with HIV. Objective: Evaluation of provider compliance with antiretroviral (ARV) treatment guidelines and patient adherence to ARVs is important for HIV care quality assessment; however, there are few current real-world data for guideline compliance and ARV adherence in the US. This study evaluated provider compliance with US Department of Health and Human Services (DHHS) guidelines and patient adherence to ARVs in a US population of patients with HIV.Methods: This was a retrospective claims study of adults with HIV-1 receiving ARV treatment between January 2010-December 2014. Follow-up began at first ARV treatment and ended at health plan disenrollment or study end. ARV regimens for treatment-naïve patients were categorized as "preferred/recommended", "alternative", or "non-preferred/recommended/alternative" according to DHHS guidelines. ARV adherence was evaluated using proportion of days covered (PDC) and medication possession ratio (MPR).Results: The analysis included 25,320 patients (84.4% male, mean age 45.3 years) and 39,071 regimens. Preferred/recommended regimens were most common during each study year, but the proportion of non-preferred/recommended/alternative regimens was substantial (15.9-20.6%). Only 53.6% of patients had optimal adherence by PDC ≥0.95, and 57.9% by MPR ≥0.95. Guideline non-compliance and sub-optimal adherence were more prevalent among female vs male patients (22.6% vs 14.8% [in 2014] and 65.9% vs 53.7%, respectively).Conclusions: Provider non-compliance with DHHS guidelines and sub-optimal ARV adherence among patients with HIV remain common in real-world practice, particularly for female patients. Healthcare providers should follow the latest clinical guidelines to ensure that patients receive recommended therapy, and address non-adherence when selecting ARV regimens. abstract_id: PUBMED:27799173 Compliance with evidence-based clinical management guidelines in bleeding trauma patients. Background: In 2007, the multidisciplinary European Task Force for Advanced Bleeding Care in Trauma published guidelines for the management of the bleeding trauma patient. The present study aimed to assess compliance with the European guidelines during the first 24 h in a level I trauma centre and to determine whether compliance impacts mortality. Methods: This was a retrospective study of consecutive bleeding trauma patients referred to a university hospital in France between 2010 and 2014. A reference document was developed on the basis of the European guidelines to transform the guidelines pragmatically into 22 objectively measurable criteria. We measured per-patient and per-criterion compliance rates and assessed the impact of guideline compliance on mortality. Results: A total of 121 bleeding trauma patients were included. The median (interquartile range) per-patient compliance rate was 75 (65-82)% and the per-criterion compliance rate 64 (57-81)%. Mortality rates were 18 and 32% at 24 h and 30 days, respectively. After adjusting for injury severity, per-patient compliance rates were associated with decreased mortality at 24 h (odds ratio per 10% increase in patient compliance score, 0.43; 95% confidence interval 0.26-0.71; P = 0.0001) and at 30 days (odds ratio per 10% increase in patient compliance score, 0.47; 95% confidence interval 0.31-0.72; P = 0.0004). Conclusions: We found that compliance with protocols based on European guidelines impacts trauma outcome, because patient compliance was associated with survival. Further work is needed to improve adherence to these guidelines, with ongoing monitoring to ensure best practice and optimal patient outcome. abstract_id: PUBMED:34142315 Compliance with evidence-based clinical guidelines in the management of acute biliary pancreatitis: the MANCTRA-1 study protocol. Despite existing evidence-based practice guidelines for the management of biliary acute pancreatitis (AP), the clinical compliance with recommendations is overall poor. Studies in this field have identified significant discrepancies between evidence-based recommendations and daily clinical practice. The most commonly reported gaps between clinical practice and AP guidelines include the indications for CT scan, need and timing of artificial nutritional support, indications for antibiotics, and surgical/endoscopic management of biliary AP. The MANCTRA-1 (coMpliAnce with evideNce-based cliniCal guidelines in the managemenT of acute biliaRy pancreAtitis) study is aiming to identify the areas for quality improvement that will require new implementation strategies. The study primary objective is to evaluate which items of the current AP guidelines are commonly disregarded and if they correlate with negative clinical outcomes according to the different clinical presentations of the disease. We attempt to summarize the main areas of sub-optimal care due to the lack of compliance with current guidelines to provide the basis for introducing a number of bundles in AP patients' management to be implemented during the next years. The MANCTRA-1 study is an international multicenter, retrospective cohort study with the purpose to assess the outcomes of patients admitted to hospital with a diagnosis of biliary AP and the compliance of surgeons worldwide to the most up-to-dated international guidelines on biliary AP. ClinicalTrials.Gov ID Number: NCT04747990, Date: February 23, 2021. Protocol Version V2.2. abstract_id: PUBMED:28578710 Compliance with Clostridium difficile treatment guidelines: effect on patient outcomes. Guidelines for the severity classification and treatment of Clostridium difficile infection (CDI) were published by Infectious Diseases Society of America (IDSA)/Society for Healthcare Epidemiology of America (SHEA) in 2010; however, compliance and efficacy of these guidelines has not been widely investigated. This present study assessed compliance with guidelines and its effect on CDI patient outcomes as compared with before these recommendations. A retrospective study included all adult inpatients with an initial episode of CDI treated in a single academic center from January 2009 to August 2014. Patients after guideline publication were compared with patients treated in 2009-2010. Demographic, clinical, and laboratory data were collected to stratify for disease severity. Outcome measures included compliance with guidelines, mortality, length of stay (LOS), and surgical intervention for CDI. A total of 1021 patients with CDI were included. Based upon the 2010 guidelines, 42 (28·8%) of 146 patients treated in 2009 would have been considered undertreated, and treatment progressively improved over time, as inadequate treatment decreased to 10·0% (15/148 patients) in 2014 (P = 0·0005). Overall, patient outcomes with guideline-adherent treatment decreased CDI attributable mortality twofold (P = 0·006) and CDI-related LOS by 1·9 days (P = 0·0009) when compared with undertreated patients. Compliance with IDSA/SHEA guidelines was associated with a decreased risk of mortality and LOS in hospitalized patients with CDI. abstract_id: PUBMED:26393002 Primary school compliance with school canteen guidelines in Fiji and its association with student obesity. Setting: Childhood obesity is of growing public health concern in Fiji. The study setting was primary schools in Fiji's Western Division. Objective: 1) To assess primary schools' compliance with national school canteen guidelines, 2) to understand reasons for non-compliance, and 3) to assess the relationship between compliance with the guidelines and students' body mass index (BMI). Design: Cross-sectional analysis of data collected in 2010 by public health dieticians of the Ministry of Health on annual visits to primary schools. Results: Among 230 schools, 33 (14%) had no canteen data. Of the 197 schools with data, only 31 (16%) were fully compliant with national school canteen guidelines, while the remaining 166 (84%) did not fully comply with the guidelines. This was irrespective of school location or whether the canteen was school or commercially operated. In a random sample (n = 44 schools), overweight and obesity were more common among children in non-compliant schools than in fully compliant schools (40% vs. 32%, P &lt; 0.001). Conclusion: Most primary schools in Fiji's Western Division did not comply with school canteen guidelines, which is worrying given the increasing rates of overweight children. Given the association between non-compliance and student overweight/obesity, further action is needed to ensure that these guidelines are implemented. Answer: The available abstracts suggest that clinical guidelines can improve compliance when they are effectively implemented and adhered to by healthcare providers. For instance, a study on the management of asthma by primary care physicians found that compliance with the Spanish guidelines for managing asthma (GEMA) was associated with better asthma control, indicating that adherence to clinical guidelines can lead to improved patient outcomes (PUBMED:33998499, PUBMED:32151431). Similarly, compliance with evidence-based clinical management guidelines in bleeding trauma patients was associated with decreased mortality, suggesting that following protocols based on guidelines can impact patient survival (PUBMED:27799173). However, the abstracts also highlight that compliance with clinical guidelines is not always optimal. For example, a study on the management of acute biliary pancreatitis noted that clinical compliance with recommendations is generally poor, with significant discrepancies between evidence-based recommendations and daily clinical practice (PUBMED:34142315). Another study on the treatment of Clostridium difficile infection found that compliance with guidelines improved over time and was associated with a decreased risk of mortality and length of stay in hospitalized patients, indicating that adherence to guidelines can improve patient outcomes (PUBMED:28578710). Moreover, a study on primary school compliance with school canteen guidelines in Fiji showed that most schools did not comply with the guidelines, which was associated with higher rates of overweight and obesity among students, further emphasizing the importance of guideline adherence for health outcomes (PUBMED:26393002). In addition to the direct impact on patient care, the development of an engine for compliance checking of clinical guidelines has been proposed to automatically report non-compliance situations, which could potentially improve clinician compliance with guidelines (PUBMED:22874224). Overall, the evidence from the abstracts suggests that clinical guidelines have the potential to improve compliance and patient outcomes when they are adhered to, but there are challenges in achieving high levels of compliance, and efforts are needed to improve adherence to guidelines in practice.
Instruction: Do surgical oncologists achieve lower rates of local-regional recurrence in node positive breast cancer treated with mastectomy alone? Abstracts: abstract_id: PUBMED:11519060 Do surgical oncologists achieve lower rates of local-regional recurrence in node positive breast cancer treated with mastectomy alone? Background And Objective: Adjuvant radiotherapy for node positive breast cancer postmastectomy has been recommended by two previously published randomized controlled trials (RCT). The local-regional recurrence rates in the control arms, however, were considered by some critics to be excessive (&gt; 25% at 10 years). Inadequate surgery, as evidenced by the low number of axillary nodes reported, may have resulted in the high local-regional recurrence rates, allowing for the benefits seen with radiotherapy. Fellowship trained surgical oncologists might provide "better quality" surgery, resulting in lower recurrence rates and thus making adjuvant radiotherapy unnecessary. Our objective was to establish the local-regional control rate postmastectomy in node positive breast cancer patients operated on by surgical oncologists, and to determine if treatment recommendations from previous RCTs are generalizable. Methods: Node positive stage IIb and IIIa breast cancer patients treated with mastectomy at the Medical College of Virginia Hospitals by surgical oncologists, without adjuvant radiotherapy, and entered into adjuvant chemotherapy trials between 1978 and 1993 were identified retrospectively. Pathology and follow-up records were reviewed. Results: One hundred and thirty-seven patients were identified. A median of 18 axillary nodes was reported with a median of 4 positive nodes. The locoregional recurrence at 10-years was 27% (95% confidence interval, 19-35%). Conclusion: Despite some evidence of "better quality" surgery, there was no clinically significant difference in the local-regional recurrence rate in this case series compared to controls in two previous RCTs. Recommendations for postmastectomy radiotherapy should be considered for node positive breast cancers, even if operated upon by surgical oncologists. abstract_id: PUBMED:2009056 Local-regional breast cancer recurrence following mastectomy. Local-regional recurrence patterns were investigated in 1392 patients with breast cancer. Primary treatment for all patients included a mastectomy. Nine hundred seventeen patients had negative nodes and did not receive systemic therapy. Four hundred seventy-five patients had node metastases and were randomized to receive different combinations of chemoendocrine therapy. Follow-up ranged between 5 and 16 years. Two hundred thirty (25.8%) node-negative patients have had recurrences, with the initial recurrence being local-regional in 9.2%. Two hundred forty-two (50.9%) node-positive patients have had recurrences, with the initial recurrence being local-regional in 17.1%. Larger tumors and more extensive node involvement were associated with more first local-regional recurrences. The relative percent of first local-regional recurrence among patients in whom cancer recurred was similar for node-negative and node-positive patients (35.4% and 33.5%, respectively). In 63.6% of patients in whom cancer recurred, first local-regional recurrence were distant. Larger tumors, more extensive node involvement, and a shorter disease-free interval after mastectomy were associated with more rapid appearance of distant recurrence among these patients. abstract_id: PUBMED:20945319 Local, regional, and systemic recurrence rates in patients undergoing skin-sparing mastectomy compared with conventional mastectomy. Background: Although the use of SSM is becoming more common, there are few data on long-term, local-regional, and distant recurrence rates after treatment. The purpose of this study was to examine the rates of local, regional, and systemic recurrence, and survival in breast cancer patients who underwent skin-sparing mastectomy (SSM) or conventional mastectomy (CM) at our institution. Methods: Patients with stage 0 to III unilateral breast cancer who underwent total mastectomy at our center from 2000 to 2005 were included in this study. Kaplan-Meier curves were calculated, and the log-rank test was used to evaluate the differences between overall and disease-free survival rates in the 2 groups. Results: Of 1810 patients, 799 (44.1%) underwent SSM and 1011 (55.9%) underwent CM. Patients who underwent CM were older (58.3 vs 49.3 years, P&lt;.0001) and were more likely to have stage IIB or III disease (53.0% vs 31.8%, P&lt;.0001). Significantly more patients in the CM group received neoadjuvant chemotherapy and adjuvant radiation therapy (P&lt;.0001). At a median follow-up of 53 months, 119 patients (6.6%) had local, regional, or systemic recurrences. The local, regional, and systemic recurrence rates did not differ significantly between the SSM and CM groups. After adjusting for clinical TNM stage and age, disease-free survival rates between the SSM and CM groups did not differ significantly. Conclusions: SSM is an acceptable treatment option for patients who are candidates for immediate breast reconstruction. Local-regional recurrence rates are similar to those of patients undergoing CM. Cancer 2011. © 2010 American Cancer Society. abstract_id: PUBMED:34126377 Internal mammary node irradiation in node-positive breast cancer treated with mastectomy and taxane-based chemotherapy. Background: It is important to continually reevaluate the risk/benefit calculus of internal mammary node irradiation (IMNI) in the era of modern systemic therapy. We aimed to investigate the effect of IMNI on survival in node-positive breast cancer treated with mastectomy and anthracycline plus taxane-based chemotherapy. Methods: We analyzed 348 patients who underwent mastectomy and anthracycline plus taxane-based chemotherapy for node-positive breast cancer between January 2006 and December 2011. All patients received postoperative radiation therapy (RT) with IMNI (n = 105, 30.2%) or without IMNI (n = 243, 69.8%). The benefit of IMNI for disease-free survival (DFS) and overall survival (OS) was evaluated using multivariate analysis and inverse probability of treatment weighting (IPTW) to adjust for unbalanced covariates between the groups. Results: After a median follow-up of 95 months, the 10-year locoregional recurrence-free survival rate, DFS, and OS in all patients were 94.8%, 77.4%, and 86.2%, respectively. The IPTW-adjusted hazard ratio (HR) for the association of IMNI (vs. no IMNI) with DFS and OS was 0.208 (95% confidence intervals (CI) 0.045-0.966) and 0.460 (95% CI, 0.220-0.962), respectively. In multivariate analysis, IMNI was a favorable factor for DFS (HR, 0.458; P = 0.021) and OS (HR 0.233, P = 0.018). Conclusions: IMNI was associated with improved DFS and OS in node-positive patients treated with mastectomy, post-mastectomy RT, and taxane-based chemotherapy, although the rate of locoregional recurrence was low. abstract_id: PUBMED:3175044 Isolated local-regional recurrence following mastectomy for adenocarcinoma of the breast treated with radiation therapy alone or combined with surgery and/or chemotherapy. The results of radiation therapy alone or combined with surgery and/or chemotherapy are reported for 47 patients who presented with local and/or regional recurrence without evidence of distant metastases following initial management of adenocarcinoma of the breast with radical or modified radical mastectomy (43) or simple mastectomy (4). Patients were treated between October 1964 and March 1983 at the University of Florida; all have a 2-year minimum follow-up and 42/47 (89%) have had follow-up for greater than or equal to 5 years. The overall actuarial local-regional control rates were 80% at 2 years, 68% at 5 years, and 61% at 10 years. The 5-year actuarial local-regional control rates by site and extent of disease were as follows: single chest wall nodule, 92%; multiple chest wall nodules, 49%; regional lymph nodes, 66%; and multiple sites, 64%. The 5- and 10-year actuarial determinate disease-free survival rates for all patients were 41 and 17%, respectively. The 5- and 10-year actuarial survival rates for all patients were 50 and 34%, respectively. abstract_id: PUBMED:37427099 Could axillary lymph node dissection be omitted in the mastectomy patient with tumor positive sentinel node? Background: Recent data from the ACOSOG Z0011 trial suggest that axillary lymph node dissection (ALND) may not be necessary for patients with positive sentinel lymph node biopsy (SLNB) receiving breast-conserving surgery (BCS) with irradiation. However, consensus statements and guidelines have recommended that patients undergoing mastectomy with tumor-positive sentinel node undergo completion ALND. In this study, we compared the locoregional recurrence rate of patients with tumor-positive sentinel nodes among three groups: mastectomy with SLNB, mastectomy with ALND and BCS with SLNB. Method: We identified 6,163 women with invasive breast cancer who underwent surgical resection at our institution between January 2000 and December 2011. Clinicopathologic data obtained from the prospectively collected medical database were analyzed retrospectively. Among the patients with sentinel node positive, mastectomy with SLNB was performed in 39 cases, mastectomy with ALND in 181 cases, and BCS with SLNB in 165 cases. The primary end point was the loco-regional recurrence rate. Results: Clinicopathologic characteristics were similar among the groups. There were no cases of loco-regional recurrence in the sentinel groups. At a median follow-up of 61.0 months (last follow-up May 2013), the loco-regional recurrence rate of each group was 0% for BCS with SLNB and mastectomy with SLNB only, and 1.7% for mastectomy with ALND (p=0.182). Conclusion: In our study, there was no significant difference in loco-regional recurrence rates between groups. This result lends weight to the argument that SLNB without ALND may be a reasonable management for selected patients with appropriate surgery and adjuvant systemic therapy. abstract_id: PUBMED:3899246 The prediction of local or regional recurrence after simple mastectomy for operable breast cancer. Both the histological grade of the primary tumour and lymph node status have been found to contribute significantly towards the development of a local or regional recurrence after simple mastectomy for operable breast cancer. No other factor, from a series of seven studied, has been found to be of independent significance. A small group of patients with grade III tumours, lymph node positive at mastectomy, has been identified in whom more than 40 per cent of all symptomatic local or regional recurrences occurred. The chance a patient in this group has of developing a local or regional recurrence requiring treatment within 4 years approaches 50 per cent. abstract_id: PUBMED:9315067 Loco-regional recurrence following mastectomy for early breast carcinoma: efficacy of radiotherapy at the time of recurrence. This study aims to define the risk factors for loco-regional relapse following mastectomy, and to assess the efficacy of radiotherapy at the time of relapse. To achieve this 272 patients with loco-regional relapse treated at a single institution with modified radical or radical mastectomy were reviewed. Tumour size, axillary node involvement and tumour grade were found to be significant risk factors for loco-regional recurrence of disease. Radiotherapy given at the time of relapse controlled disease in 61% of cases, compared with 34% of patients treated with systemic treatment only. Altogether, 146 (54%) of the 269 evaluable patients with local failure had uncontrolled disease at the same site, either at the time of death or at the date last seen. The result of this retrospective study showed that delayed radiotherapy was effective in controlling the disease in patients with developing loco-regional relapses. However, as adjuvant radiotherapy reduces the incidence of local disease recurrence it should be recommended to patients considered to be at high risk of local relapse following mastectomy; namely those with tumours bigger than 5 cm with four or more positive axillary nodes. abstract_id: PUBMED:17368813 Loco-regional recurrence in 2064 patients with breast cancer treated with mastectomy without adjuvant radiotherapy. Introduction: We investigated the incidence of loco-regional recurrence in a sub-group of patients who underwent mastectomy without adjuvant radiotherapy to evaluate the effect of each specific clinical or pathological parameter that could be associated with a higher local relapse rate. Patients And Methods: Two thousand and sixty-four patients were treated from January 1971 to December 2003 at the University of Florence. Results: At the time of analysis 18.3% of patients (378/2064) had isolated loco-regional failures. Univariate analysis showed an association of borderline statistical significance with pathological tumour size. Elderly age at diagnosis had a low incidence of local recurrence but the results did not reach statistical significant. The number of positive axillary lymph node did not show any influence for local recurrence. Conclusion: In our series we noted a higher relapse rate only related to the pathological tumour size without any correlation with number of positive axillary nodes. Radiotherapy after mastectomy still remains controversial, but in our series the number of positive axillary lymph node did not seem enough to justify adjuvant treatment. abstract_id: PUBMED:11720664 Recurrence rates in patients with central or retroareolar breast cancers treated with mastectomy or lumpectomy. Background: Although breast conservation with lumpectomy and radiation treatment has become a commonly used treatment for breast cancer, there are little data to support the use of lumpectomy for central and retroareolar breast cancers. In this study, we investigate the local and distant recurrence rates of patients with central or retroareolar breast cancers treated with lumpectomy compared with mastectomy. Methods: This study provides a retrospective analysis of 99 patients, from 1981 to 2000, with central or retroareolar breast cancers treated with mastectomy or lumpectomy to determine the frequency of local and distant recurrence. The mastectomy and lumpectomy patients were compared with respect to recurrence and other prognostic factors including: tumor location, tumor size, axillary nodal status, and final surgical margins. Results: The overall frequency of local recurrence was 5 of 99 (5.0%) in the entire group, 3 of 67 (4.5%) and 2 of 32 (6.3%) of patients who underwent mastectomy and lumpectomy, respectively (P &gt;0.99). Overall, 3 patients experienced a distant recurrence as a first event, with 2 patients (3.0%) in the mastectomy group and 1 patient (3.1%) in the lumpectomy group (P &gt;0.99). The type of surgical management was not statistically significant related to either local or distant disease recurrence, with median time to local recurrence of 3.0 years for the mastectomy patients and 5.0 years for lumpectomy patients. Of the patients with central tumors who underwent mastectomy 2 of 42 (4.8%) developed local recurrences compared with those who had a lumpectomy, 1 of 21 (4.8%). Similarly for retroareolar tumors, the local recurrence rate was 1 of 25 (4.0%) for patients undergoing mastectomy and 1 of 11 (9.1%) for those undergoing lumpectomy (P &gt;0.99). Conclusions: In this study there was no significant difference in local or distant failure rates of those patients with central or retroareolar tumors treated with mastectomy versus lumpectomy. We conclude lumpectomy to be a reasonable treatment option for selected patients with central or retroareolar breast cancers. Answer: No, surgical oncologists do not achieve lower rates of local-regional recurrence in node-positive breast cancer treated with mastectomy alone. A study conducted on node-positive stage IIb and IIIa breast cancer patients treated with mastectomy by surgical oncologists, without adjuvant radiotherapy, showed a 10-year locoregional recurrence rate of 27%, which is not clinically significantly different from the control arms of two previous randomized controlled trials (RCTs) that recommended adjuvant radiotherapy postmastectomy (PUBMED:11519060). Despite some evidence of "better quality" surgery, indicated by a median of 18 axillary nodes reported with a median of 4 positive nodes, the local-regional recurrence rate was still high, suggesting that recommendations for postmastectomy radiotherapy should be considered for node-positive breast cancers, even if operated upon by surgical oncologists (PUBMED:11519060).
Instruction: Do patients who deliver at term after being hospitalized for preterm contractions have an increased risk for obstetrical complications? Abstracts: abstract_id: PUBMED:23529685 Do patients who deliver at term after being hospitalized for preterm contractions have an increased risk for obstetrical complications? Objectives: To determine the obstetrical complications and perinatal outcomes of patients with recurrent episodes of preterm contractions (PTC) that eventually delivered at term compared to those who delivered preterm. Methods: A retrospective study evaluating pregnancy complications and adverse perinatal outcomes of patients with recurrent episodes of PTC (three or more) was conducted. A comparison was made between those who delivered preterm to those who eventually delivered at term. Results: Deliveries occurred between the years 1989 and 2009. During the study period, there were 1,897 singleton deliveries at term and 393 preterm singleton deliveries of patients who were previously hospitalized with PTCs. Patients who delivered at term were significantly more likely to be in their first pregnancy and to be primiparous. Patients in the study group were less likely to have had fertility treatments, a history of miscarriage, a higher incidence of one previous hospitalization but lower rates of multiple hospitalizations for PTC. Patients who delivered at term had a significantly lower rate of severe preeclampsia as well as cesarean delivery and a shorter hospital stay than those who delivered preterm. At term, an increased incidence of small for gestational age (SGA) neonates was noted compared to patients who delivered prematurely (10.07 vs. 5.6 %; P = 0.005). Conclusion: Patients with symptoms of preterm labor may require further surveillance, not only because of their risk to progress to preterm delivery, but also because they are at an increased risk for delivering an SGA neonate at term. abstract_id: PUBMED:35671781 Are infants born after an episode of suspected preterm labor at risk of attention deficit hyperactivity disorder? A 30-month follow-up study. Background: An episode of suspected preterm labor may be by itself a pathologic event that may alter the normal course of pregnancy and the offspring's neurodevelopment. Certainly, the association between preterm birth and neurodevelopmental disorders can only be partially explained by the immaturity of the nervous system, as evidenced by the increased risk of attention deficit hyperactivity disorder in late-preterm infants without any neurologic alteration. Objective: This study aimed to examine whether infants born after suspected preterm labor may be at an increased risk of developing attention deficit hyperactivity disorder. Moreover, potential obstetrical, perinatal, and psychosocial risk factors associated with attention deficit hyperactivity disorder in this population are examined. Study Design: A prospective cohort study of 120 mother-infant pairs was conducted from the moment the mothers received a diagnosis of suspected preterm labor until the infants' 30 months of life. Infants were divided according to the prematurity status: full-term infants born after a suspected preterm labor (n=28; born at ≥37 weeks of gestation), late-preterm infants (n=56; born between 32 and &lt;37 weeks of gestation), very-preterm infants (n=36; born before &lt;32 weeks of gestation). At-term infants born without obstetric complications served as a control group (n=46). Infants' attention deficit hyperactivity disorder symptoms were assessed at the age of 30 months. Furthermore, obstetrical, perinatal, and psychosocial risk factors were recorded. Results: All groups of infants born after a suspected preterm labor showed more attention deficit hyperactivity disorder symptoms at the age of 30 months than the control group. Concretely, very-preterm infants showed higher restless or impulsive behaviors, whereas full-term infants born after a suspected preterm labor and late-preterm infants showed higher emotional lability behaviors. Among potential risk factors, male sex and maternal experience of posttraumatic stress symptoms predicted the severity of attention deficit hyperactivity disorder symptoms in infants born after a suspected preterm labor. Conclusion: Infants born after a suspected preterm labor had a higher risk of developing attention deficit hyperactivity disorder symptoms, including those born at term. Infants born after a suspected preterm labor showed a distinctive phenotype and shared specific risk factors suggesting that they conform an undescribed population at risk of attention deficit hyperactivity disorder. abstract_id: PUBMED:35995363 Developing a risk profile for spontaneous preterm birth and short interval to delivery among patients with threatened preterm labor. Background: Threatened preterm birth is the most common reason for antepartum hospitalization in the United States, accounting for approximately 50% of these admissions. However, fewer than 10% of patients with inpatient evaluation for signs or symptoms of preterm labor ultimately deliver before term. Objective: This study aimed to generate predictive models to assess the risk of preterm delivery and time to delivery based on clinical signs and symptoms of patients evaluated in our institution for preterm labor concerns. Study Design: This was a retrospective cohort study of singleton pregnancies evaluated for signs and/or symptoms of preterm labor, including contractions, abdominal pain, vaginal bleeding, and short cervix, between 22 0/7 and 33 6/7 weeks of gestation. Inpatient evaluations were classified by patient presentation: (1) symptomatic with cervical findings (transvaginal cervical length of &lt;2.5 cm or cervical dilation of ≥2.0 cm), (2) asymptomatic with cervical findings, and (3) symptomatic without cervical findings. The primary outcomes included incidence of spontaneous preterm birth and interval from presentation to delivery, compared between groups. The risk of preterm delivery was evaluated using log-binomial regression, and presentation to delivery timing was assessed by survival analysis and Cox proportional hazards modeling. Results: Of 631 patients with preterm labor concerns, 96 (16%) were symptomatic with cervical findings on evaluation, 51 (8%) were asymptomatic with cervical findings, and 466 (76%) were symptomatic without cervical findings. The occurrence of preterm birth was significantly higher among symptomatic patients with cervical findings (49%) than among those with cervical findings alone (31%) or symptoms alone (11%) (P&lt;.0001). In addition, symptomatic patients with cervical findings were significantly more likely to deliver within 48 hours (20%), 1 week (30%), 2 weeks (33%), and 1 month (43%) of presentation than patients with cervical findings alone (2%, 2%, 6%, and 10%, respectively) or symptoms alone (0.4%, 1%, 1.5%, and 5%, respectively) (P value for trend&lt;.0001). Adjusted for gestational age at presentation and previous preterm birth, the overall risk of preterm delivery was significantly higher among patients with symptoms and cervical findings than among patients with cervical findings alone (relative risk, 2.81; 95% confidence interval, 1.74-4.54) or symptoms alone (relative risk, 4.39; 95% confidence interval, 3.16-6.09). Adjusted for the same variables, symptomatic patients with cervical findings were also at higher risk of delivery over time after assessment than patients with cervical findings alone (hazard ratio, 2.06; 95% confidence interval, 1.47-2.90) or symptoms alone (hazard ratio, 2.16; 95% confidence interval, 1.74-2.70). The negative predictive value of these models suggested that only 1% of patients with isolated symptoms or cervical findings are at risk of preterm delivery within 1 week of initial presentation. Conclusion: Symptomatic patients with cervical findings suggestive of preterm labor were at the greatest risk of preterm birth and a shorter interval from presentation to delivery. The study findings supported a risk profile that may facilitate the selection of patients most appropriate for admission and targeted management. Nonetheless, as nearly 50% of patients meeting this risk profile subsequently deliver at term, future research is needed to identify which of these patients will require intervention. abstract_id: PUBMED:26874238 Preterm uterine contractions ultimately delivered at term: safe but not out of danger. Objectives: Patients with pregnancies complicated with premature uterine contractions (PMC), but delivered at term are considered as false preterm labor (PTL), and represent a common obstetric complication. We aimed to assess obstetric and neonatal outcomes of pregnancies complicated with PMC, but delivered at term, as compared to term normal pregnancies. Study Design: Obstetric, maternal and neonatal outcomes of singleton pregnancies complicated with PMC between 24-33(6)/7 weeks (PMC group), necessitating hospitalization and treatment with tocolytics and/or steroids, during 2009-2014, were reviewed. The study group included only cases who eventually delivered ≥37 weeks, which were compared to a control group of subsequent term singleton deliveries who had not experienced PMC during pregnancy. Neonatal adverse composite outcome included: phototherapy, RDS, sepsis, blood transfusion, cerebral injury, NICU admission. Results: The PMC group (n=497) was characterized by higher rates of nulliparity (p=0.002), infertility treatments (p=0.02), and polyhydramnios (p&lt;0.001), as compared to controls (n=497). Labor was characterized by higher rates of instrumental deliveries (p=0.03), non-reassuring fetal heart rate tracings (p&lt;0.001) prolonged third stage of labor (p=0.04), and increased rate of post-partum maternal anemia (Hb&lt;8g/dL) p=0.004, in the PMC group as compared to controls. Neonates in the PMC groups had lower birth weights compared to controls, 3149g±429 vs. 3318g±1.1, p&lt;0.001, respectively. By logistic regression analysis, PMC during pregnancy was independently associated with neonatal birth-weight &lt;3rd percentile (adjusted OR 4.6, 95% CI 1.5-13.7). Conclusions: Pregnancies complicated with PMC, even-though delivered at term, entail adverse obstetric and neonatal outcomes, and may warrant continued high risk follow up. abstract_id: PUBMED:30684071 Obstetrical complications of thin endometrium in assisted reproductive technologies: a systematic review. Purpose: The aim of the systematic review is to describe the obstetrical complications associated with pregnancies in the context of a thin endometrial lining. Methods: Systematic review of the literature. Results: Patients who conceive in the setting of a thin endometrium have a significantly increased risk of early pregnancy loss, namely miscarriage and ectopic pregnancy. These patients also have a twofold increase in low birth weight and preterm delivery, as well as a significantly higher risk of intrauterine growth restriction and composite adverse perinatal outcomes. Conclusions: In addition to the lower probability of conception, a thin endometrium in assisted reproductive technologies appears to be associated with both early and late pregnancy complications. These pregnancies thus warrant special attention and close follow-up from obstetricians. abstract_id: PUBMED:28181672 Endometriosis increases the risk of obstetrical and neonatal complications. Introduction: The objective of this study was to assess obstetrical complications and neonatal outcomes in women with endometriosis as compared with women without endometriosis. Material And Methods: National cohort including all delivering women and their newborns in Denmark 1997-2014. Data were extracted from the Danish Health Register and the Medical Birth Register. Logistic regression analysis provided odds ratios (OR) with 95% confidence intervals (CI). Sub-analyses were made for primiparous women with a singleton pregnancy and for women with endometriosis who underwent gynecological surgery before pregnancy. Results: In 19 331 deliveries, women with endometriosis had a higher risk of severe preeclampsia (OR 1.7, 95% CI 1.5-2.0), hemorrhage in pregnancy (OR 2.3, 95% CI 2.0-2.5), placental abruption (OR 2.0, 95% CI 1.7-2.3), placenta previa (OR 3.9, 95% CI 3.5-4.3), premature rupture of membranes (OR 1.7, 95% CI 1.5-1.8), and retained placenta (OR 3.1, 95% CI 1.4-6.6). The neonates had increased risks of preterm birth before 28 weeks (OR 3.1, 95% CI 2.7-3.6), being small for gestational age (OR 1.5, 95% CI 1.4-1.6), being diagnosed with congenital malformations (OR 1.3, 95% CI 1.3-1.4), and neonatal death (OR 1.8, 95% CI 1.4-2.1). Results were similar in primiparous women with a singleton pregnancy. Gynecological surgery for endometriosis before pregnancy carried a further increased risk. Conclusion: Women with endometriosis had a significantly higher risk of several complications, such as preeclampsia and placental complications in pregnancy and at delivery. The newborns had increased risk of being delivered preterm, having congenital malformations, and having a higher neonatal death rate. Pregnant women with endometriosis require increased antenatal surveillance. abstract_id: PUBMED:22311578 The relationship of socioeconomic status to preterm contractions and preterm delivery. Spontaneous preterm labor precedes approximately 50% of preterm births. One to 10% of pregnant women are hospitalized for preterm labor. This study examines the relationship of socioeconomic indicators, family income, education and type of insurance, with preterm contractions and subsequent preterm delivery. Data were from the pregnancy risk assessment monitoring system on 107,926 women who had singleton births during 2000-2002. Data on preterm contractions, family income, and type of insurance during pregnancy were from the maternal questionnaire. Maternal education and gestational age were derived from birth certificate data. Predicted marginal probabilities from logistic regression models were used to calculate the adjusted cumulative incidence and cumulative risk ratio of preterm contractions and preterm delivery. Median annual household income was approximately $30,000. More than one-fourth (28.1 95% CI: 27.7, 28.6) of women experienced preterm contractions, and these women were 3 times as likely (18 vs. 5%) to deliver preterm as women without preterm contractions. Only 58% of women who delivered preterm reported contractions. Lower income and Medicaid-paid care were independently associated with an increased risk of preterm contractions but not with preterm delivery. The association of lower income and Medicaid enrollment with preterm contractions but not preterm delivery suggests that SES is associated with the initiation of the pathway to spontaneous preterm delivery rather than access to or the success of interventions to prevent delivery following the onset of contractions. abstract_id: PUBMED:32524595 Risk factors for spontaneous preterm delivery. Background: There is a substantial variation in rates of preterm delivery between different parts of the world. The understanding of these variations, as well as the biological mechanisms behind spontaneous preterm delivery, is limited. Although the benefit of antenatal interventions has been shown to be limited, using well-known risk factors for spontaneous preterm delivery to select the correct pregnant women for targeted interventions is important from both a medical and caregiving perspective. Objective: To provide an introduction to a substantial research area dealing with risk factors of spontaneous preterm delivery. Methods: Risk factors in this review were classified as demographical, obstetrical, and gynecological and those related to the current pregnancy according to high-quality evidence of recent literature. Results And Conclusion: An introduction to a substantial research area in maternal and fetal medicine was provided that might help clinicians to better understand the risk factors related to preterm delivery and select the correct pregnant women for targeted interventions. abstract_id: PUBMED:33558813 Risk factors and short-term complications of high-grade intraventricular hemorrhages in preterm neonates in training hospitals of Alborz. Objectives: The aim of this study is to determine risk factors and short-term complications of high-grade intraventricular hemorrhages (IVHs) in preterm neonates. Other topics of investigation include the increase in complications of IVH with its severity and the effect of IVH risk factors on the severity of IVH. Materials & Methods: We conducted a retrospective case-control study of 436 consecutive preterm neonates with high-grade (3, 4) IVHs admitted in training hospitals of Alborz University in Karaj, Iran, from 2012 to 2017. The risk factors and short-term complications were assessed and analyzed in the subjects by SPSS 19. Results: Out of 10 000 eligible neonates, we identified 1203 premature infants with IVH. A total of 436 infants with IVH grades 3 and 4 were allocated to the case group. The control group consisted of 767 infants with IVH grades 1 and 2. This study revealed that the most common risk factors of IVH include lack of corticosteroid use in 67.2%, low Apgar score in 10%, and surfactant use in 5.7% of the patients. Ten percent (31 cases) had short-term complications (18 hydrocephalus and 13 death cases). Male gender (P = .006) and lower gestational age (P = .0001) contributed to higher grades of IVH. Conclusion: According to the results obtained in this study, it may be concluded that the lack of corticosteroid use is the most common risk factor for IVH, and short-term complications may be seen in one-tenth of the cases. abstract_id: PUBMED:24965989 Maternal risk factors and obstetric complications in late preterm prematurity. Objective: Late preterm prematurity has been related to poorer neonatal outcomes. However, research has focused on the neonatal outcomes of late preterm infants, maternal characteristics of these births have been less evaluated. The aim of the study was to compare maternal risk factors and obstetric complications in late preterm births (LPTB) and term births. These factors were also assessed comparing spontaneous LPTB with medically-indicated LPTB. Study Design: We conducted a retrospective cohort study with two groups. All singleton LPTB occurred at our University Hospital between January 1, 2009 and December 31, 2010 were included in the first cohort (n=171). A comparison cohort of term births was configured in a ratio 2:1 (n=342). Well-dated pregnancies without congenital malformations, congenital infections or chromosome abnormalities were eligible. LPTB were classified into two groups, spontaneous LPTB and medically-indicated LPTB following delivery indications. Statistical analysis of categorical variables was performed using either χ(2) or Fisher's exact. Continuous variables were compared using the Student's t-test. Results: Women with LPTB had more medical conditions than women with term births (29% vs 15.7%; P=0.002). Prior preterm births (9.7% vs 2%; P&lt;0.001), prior adverse obstetric outcomes (6.9% vs 2.3%; P&lt;0.001), and obstetric complications were also more frequent in LPTB than in term births. However, no differences were found in maternal medical conditions when spontaneous LPTB and medically-indicated LPTB were compared. Women with medically-indicated LPTB were older (33.69 vs 31.07; P=0.003) and mainly nulliparous (75.8% vs 49.4%; P=0.002). Obstetric complications were more frequent in medically-indicated LPTB than in spontaneous LPTB. Conclusions: Maternal risk factors and obstetric complications are significantly higher in LPTB than in term births. These factors should be considered to identify women at risk for either spontaneous or medically-indicated LPTB. Answer: Yes, patients who deliver at term after being hospitalized for preterm contractions (PTC) do have an increased risk for obstetrical complications. A retrospective study found that patients with recurrent episodes of PTC who eventually delivered at term had a significantly lower rate of severe preeclampsia and cesarean delivery compared to those who delivered preterm. However, they had an increased incidence of small for gestational age (SGA) neonates compared to patients who delivered prematurely (PUBMED:23529685). Another study on pregnancies complicated with premature uterine contractions (PMC) but delivered at term, as compared to term normal pregnancies, showed that the PMC group had higher rates of nulliparity, infertility treatments, polyhydramnios, instrumental deliveries, non-reassuring fetal heart rate tracings, prolonged third stage of labor, and increased rate of post-partum maternal anemia. Neonates in the PMC group also had lower birth weights compared to controls, and PMC during pregnancy was independently associated with neonatal birth-weight below the 3rd percentile (PUBMED:26874238). These findings suggest that pregnancies with a history of PTC or PMC, even when delivered at term, entail adverse obstetric and neonatal outcomes and may warrant continued high-risk follow-up.
Instruction: Chronic periaortitis with thoracic aorta and epiaortic artery involvement: a systemic large vessel vasculitis? Abstracts: abstract_id: PUBMED:26106209 Chronic periaortitis with thoracic aorta and epiaortic artery involvement: a systemic large vessel vasculitis? Objective: Chronic periaortitis (CP) is a rare disease characterized by fibro-inflammatory tissue surrounding the abdominal aorta and the iliac arteries. Anecdotal reports have shown that CP may also involve other vascular districts, particularly the thoracic aorta. The aim of this study was to investigate the thoracic aorta and epiaortic artery involvement in CP. Methods: Patients were eligible if they had undergone imaging studies assessing inflammatory involvement of the thoracic aorta and its major branches (e.g. contrast CT, MRI or PET-CT). We explored the patterns of thoracic vessel involvement and compared the clinical characteristics of patients with and without thoracic disease. Where available, we also reviewed the thoracic vascular/perivascular tissue biopsies. Results: Of 153 CP patients seen between 1999 and 2012, 77 were eligible. Of these, 28 (36%) had thoracic involvement: 15 (54%) had thoracic periaortitis, with 7 also showing epiaortic artery involvement; 6 (21%) had periaortitis surrounding a thoracic aortic aneurysm, 2 of them with epiaortic artery involvement; 7 (25%) had a thoracic aortic aneurysm without periaortitis. Patients with thoracic disease were more frequently female (P = 0.01), were older (P = 0.001) and had a higher frequency of pain and constitutional symptoms (P = 0.02). Thoracic (peri)vascular biopsies revealed adventitial and peri-adventitial fibro-inflammatory patterns similar to those observed in abdominal CP. Conclusion: In about one-third of patients, CP also involves the thoracic aorta and the epiaortic arteries, which supports the hypothesis of a systemic inflammatory disease of the large arteries. abstract_id: PUBMED:34113183 Diagnostic Approaches for Large Vessel Vasculitides. The large vessel vasculitides comprise giant cell arteritis (GCA), Takayasu arteritis (TAK), and chronic periaortitis. The diagnostic approach to these conditions involves the correct use and interpretation of clinical criteria, imaging techniques, and, in case of GCA, temporal artery biopsy. Ultrasound, magnetic resonance imaging (MRI), and computed tomography (CT) reveal a homogeneous, concentric, thickening of the arterial wall. MRI and CT may also reveal aneurysms and stenoses. 18F-Fluorodeoxyglucose (FDG)-PET shows increased FDG uptake of inflamed artery walls delineating increased metabolic activity. Ultrasound, FDG-PET, and MRI are the recommended imaging techniques in GCA and TAK. In patients with a high suspicion of GCA who present with visual disturbances, initiation of high-dose intravenous corticosteroids should not be delayed by imaging. Extracranial large vessel vasculitis may be confirmed by all three modalities, particularly by FDG-PET in case of atypical clinical pictures. In this article, we review the role of the GCA and TAK ACR classification criteria, temporal artery biopsy, conventional angiography, ultrasound, MRI, magnetic resonance angiography (MRA), CT angiography (CTA), and FDG-PET in the diagnostic approach of large vessel vasculitis. abstract_id: PUBMED:21124080 Chronic periaortitis: a large-vessel vasculitis? Purpose Of Review: Chronic periaortitis is characterized by a fibro-inflammatory process spreading from the abdominal aorta and the iliac arteries. Originally, chronic periaortitis was considered a localized inflammatory response to severe aortic atherosclerosis. However, subsequent studies have shown that chronic periaortitis may also involve other arteries and present with features of auto-immune diseases. This article reviews the issue of large-vessel involvement in chronic periaortitis and its implications in the pathogenesis and nosography of the disease. Recent Findings: In many reports, chronic periaortitis has been shown to involve not only the aorto-iliac axis but also other vascular segments such as the thoracic aorta, the proximal epiaortic arteries, the coronary, renal, and mesenteric arteries. Thoracic aorta involvement may manifest as thoracic periaortitis with or without aneurysmal dilatation, or simply as thoracic aorta aneurysm. Thoracic periaortitis can also be a feature of the so-called IgG4-related systemic disease, with which chronic periaortitis may sometimes be associated. Histopathologic studies of chronic periaortitis show adventitial inflammation and fibrosis, vasculitis of vasa vasorum, and adventitial lymphoid follicles with germinal centers, suggesting that chronic periaortitis could be a primary aortitis. Genetic studies have demonstrated an association with HLA-DRB1*03, a marker of auto-immunity, and with the CCR5Δ32 polymorphism, which has been mapped to a Th2 response. Taken together, these findings support the notion of a primary inflammatory or immune-mediated disorder. Summary: Chronic periaortitis is an inflammatory or immune-mediated disorder characterized histopathologically by adventitial inflammation and clinically by variable involvement of different arteries, mainly of large caliber. These findings raise the issue of whether chronic periaortitis should be considered a large-vessel vasculitis. abstract_id: PUBMED:21586202 The pathology of large-vessel vasculitides. Vasculitis affecting large elastic arteries, including the aorta and major proximal branches, encompasses various diseases including Takayasu arteritis, giant cell (or temporal) arteritis, and tertiary syphilis, but also may occur as a rare complication of Behçet's disease, rheumatoid arthritis, sarcoidosis, Cogan syndrome, Kawasaki disease, ankylosing spondylitis, systemic lupus erythematosus and Wegener's granulomatosis. Recent reports have also established a link between inflammatory abdominal aortic aneurysm as well as lymphoplasmacytic thoracic aortitis with an overabundance of IgG4-producing plasma cells and the burgeoning constellation of 'Hyper-IgG4' syndromes. This review focuses on morphologic aspects of large-vessel vasculitis pathology associated with giant cell arteritis, Takayasu arteritis, idiopathic or isolated aortitis, lymphoplasmacytic thoracic and ascending aortitis, and the inflammatory aneurysm/retroperitoneal fibrosis syndrome. abstract_id: PUBMED:36286971 Non-infectious diseases of the aorta and large arteries This article describes the various forms of inflammatory lesions of the aorta and large arteries, including chronic periaortitis, as well as the diagnostic methods are considered. Large vessel vasculitis represent the most common entities, however, there is also an association with other rheumatological or inflammatory diseases, drug-induced or paraneoplastic entities. Instrumental imaging modalities play an important role in the diagnosis. abstract_id: PUBMED:15818649 Positron emission tomography (PET): evaluation of chronic periaortitis. Objective: To evaluate the presence and extent of large-vessel inflammation in patients with chronic periaortitis (CP) using (18)F-fluorodeoxyglucose-positron emission tomography (FDG-PET). Methods: A consecutive case series consisting of 7 patients with CP seen over a 3-year period and a control group of 14 patients with malignancy were evaluated with FDG-PET. For every case we selected 2 age- and sex-matched controls who underwent PET imaging for malignancy. The diagnosis of CP was made by means of computed tomography. PET imaging was performed at diagnosis before therapy was started. Measurement of vascular uptake was graded using a 4-point semiquantitative scale. Results: All patients had evidence of grade 2+ or 3+ vascular uptake in the abdominal aorta and/or iliac artery. No controls showed vascular uptake greater than 1+. Vascular uptake in the thoracic aorta and/or in its branches was seen in 3 (43%) of 7 patients. Vascular uptake in abdominal aorta and/or iliac artery was observed in patients with CP but not in controls (100% versus 0%). There was also a significantly more frequent FDG uptake in the large thoracic arteries in case-patients compared with controls (43% versus 0%; P = 0.03). Conclusion: FDG-PET scan shows in patients with CP the presence of a large-vessel vasculitis involving abdominal aorta and common iliac arteries, which in some patients is also extended to thoracic aorta and/or its branches. abstract_id: PUBMED:28332442 Ultrasound imaging in the diagnosis of large vessel vasculitis. Nowadays noninvasive vascular imaging has an important role in the diagnostic work-up of the large vessel vasculitides (LVV), most importantly giant cell arteritis (GCA) and Takayasu arteritis. Among the imaging modalities available, ultrasound (US) has several important advantages, including low costs, rapid and repetitive availability without exposure to radiation, and high spatial resolution for assessment of large and medium-sized arteries. Therefore, US can be regarded the first line imaging method in suspected LVV. In patients with suspected GCA, US can replace temporal artery biopsy in certain clinical scenarios, and the application of US early in the diagnostic work-up of suspected GCA in specialized fast track clinics has been suggested to reduce the rate of visual ischaemic complications and associated costs. In other LVV such as Takayasu arteritis and chronic periaortitis, the diagnostic accuracy in comparison to other noninvasive imaging methods has not been formally tested but can be considered to be excellent. However, quality of US is highly dependent on the operator's experience, and assessment of the thoracic aorta which is frequently involved in GCA and TA is limited. The role of US in the follow-up of LVV under treatment is unclear. In view of the promising data supporting its value on the one hand and several uncertainties and controversies on the other hand, the present review article provides a comprehensive overview on current evidence for the application of US in the diagnosis and follow up of LVV. Recent multicentre study results and emerging trends such as the application of compression sonography in the diagnosis of GCA and the use of contrast enhanced ultrasound in disease activity assessment in Takayasu arteritis are discussed. abstract_id: PUBMED:27098921 Kidney involvement in medium- and large-vessel vasculitis. Medium- and large-vessel vasculitides (MVV and LVV, respectively) comprise a heterogeneous group of disorders whose common denominator is the inflammatory involvement of vessels of medium and large size. This disease spectrum includes giant-cell arteritis and Takayasu's arteritis, which typically affect the aorta and its main branches, and Kawasaki's disease and polyarteritis nodosa, which involve medium-sized arteries. Chronic periaortitis, characterized by a perivascular fibro-inflammatory reaction affecting the abdominal aorta and the periaortic tissue, frequently has a systemic distribution, involving other segments of the aorta and its major branches, and could thus be included in this group. Unlike small-vessel vasculitides, MVV and LVV do not cause glomerulonephritis, although glomerular immune-mediated lesions and tubulo-interstitial nephritis occur with varying frequency. However, MVV and LVV can often involve the renal artery and its branches, causing a wide array of lesions that range from renal artery stenosis to intra-renal vasculitis causing renal ischaemia/infarction, microaneurysms and haemorrhage. This review focuses on renal involvement in MVV and LVV and underlines why renal abnormalities in these syndromes should not be overlooked. abstract_id: PUBMED:27428181 Large vessel involvement by IgG4-related disease. Objectives: IgG4-related disease (IgG4-RD) is an immune-mediated fibroinflammatory condition that can affect multiple organs and lead to tumefactive, tissue-destructive lesions. Reports have described inflammatory aortitis and periaortitis, the latter in the setting of retroperitoneal fibrosis (RPF), but have not distinguished adequately between these 2 manifestations. The frequency, radiologic features, and response of vascular complications to B cell depletion remain poorly defined. We describe the clinical features, radiology findings, and treatment response in a cohort of 36 patients with IgG4-RD affecting large blood vessels. Methods: Clinical records of all patients diagnosed with IgG4-RD in our center were reviewed. All radiologic studies were reviewed. We distinguished between primary large blood vessel inflammation and secondary vascular involvement. Primary involvement was defined as inflammation in the blood vessel wall as a principal focus of disease. Secondary vascular involvement was defined as disease caused by the effects of adjacent inflammation on the blood vessel wall. Results: Of the 160 IgG4-RD patients in this cohort, 36 (22.5%) had large-vessel involvement. The mean age at disease onset of the patients with large-vessel IgG4-RD was 54.6 years. Twenty-eight patients (78%) were male and 8 (22%) were female. Thirteen patients (36%) had primary IgG4-related vasculitis and aortitis with aneurysm formation comprised the most common manifestation. This affected 5.6% of the entire IgG4-RD cohort and was observed in the thoracic aorta in 8 patients, the abdominal aorta in 4, and both the thoracic and abdominal aorta in 3. Three of these aneurysms were complicated by aortic dissection or contained perforation. Periaortitis secondary to RPF accounted for 27 of 29 patients (93%) of secondary vascular involvement by IgG4-RD. Only 5 patients demonstrated evidence of both primary and secondary blood vessel involvement. Of those treated with rituximab, a majority responded positively. Conclusions: IgG4-RD is a distinctive, unique, and treatable cause of large-vessel vasculitis. It can also involve blood vessels secondary to perivascular tumefactive lesions. The most common manifestation of IgG4-related vasculitis is aortitis with aneurysm formation. The most common secondary vascular manifestation is periaortitis with relative sparing of the aortic wall. Both primary vasculitis and secondary vascular involvement respond well to B cell depletion therapy. abstract_id: PUBMED:29105322 Clinical characteristics and outcomes of 61 patients with chronic periaortitis including IgG4-related and non-IgG4-related cases. Aim: Chronic periaortitis (CP) is a disease characterized by a fibro-inflammatory periaortic cuff and adventitia-predominant fibrosis. CP encompasses idiopathic retroperitoneal fibrosis and inflammatory abdominal aortic aneurysm (AAA), and recent studies have documented overlap between CP and immunoglobulin G4-related disease (IgG4-RD). This study aimed to investigate clinical characteristics and treatment outcomes of patients with CP. Method: CP patients were identified by retrospective review of 1245 patients with International Classification of Diseases 10th edition code of aortitis or aortic disease. Patients were further classified into IgG4-related and non-IgG4-related CP according to the criteria proposed by a Japanese study. Results: A total of 61 CP patients were identified. Patients showed a male predominance (70%) with median age of 61 at diagnosis. The abdominal aorta was most commonly involved (84%), while the thoracic aorta was affected in 46% of patients. Twenty-three (38%) patients had accompanying aortic aneurysm. Approximately 60% of patients achieved remission without further relapse during the course. Ten patients were classified as IgG4-related and 25 as non-IgG4-related. There was no significant difference in clinical features and outcomes between groups, with the exception of older age and greater pancreas involvement in IgG4-related patients. Conclusion: We documented 61 CP patients including 10 IgG4-related cases. CP involved the abdominal aorta in most patients and the thoracic aorta in approximately 50% of patients. IgG4-related CP patients were older and had greater pancreas involvement, but disease outcomes appeared to be similar between IgG4-related and non-IgG4-related CP. Answer: Chronic periaortitis (CP) is a rare disease characterized by fibro-inflammatory tissue surrounding the abdominal aorta and the iliac arteries. However, studies have shown that CP can also involve other vascular districts, including the thoracic aorta and epiaortic arteries, suggesting it may be a systemic inflammatory disease of the large arteries (PUBMED:26106209). This involvement of the thoracic aorta and epiaortic arteries was observed in about one-third of CP patients, with thoracic (peri)vascular biopsies revealing adventitial and peri-adventitial fibro-inflammatory patterns similar to those observed in abdominal CP (PUBMED:26106209). The diagnostic approaches for large vessel vasculitides, which include CP, involve the use of imaging techniques such as ultrasound, MRI, CT, and FDG-PET, which can reveal thickening of the arterial wall, aneurysms, stenoses, and increased metabolic activity indicative of inflammation (PUBMED:34113183). FDG-PET, in particular, has been used to evaluate the presence and extent of large-vessel inflammation in CP patients, showing significant vascular uptake in the abdominal aorta, iliac artery, thoracic aorta, and its branches (PUBMED:15818649). CP has been associated with features of autoimmune diseases and may involve arteries such as the thoracic aorta, proximal epiaortic arteries, coronary, renal, and mesenteric arteries. Histopathologic studies have shown adventitial inflammation and fibrosis, vasculitis of vasa vasorum, and adventitial lymphoid follicles with germinal centers, supporting the notion of CP as a primary inflammatory or immune-mediated disorder (PUBMED:21124080). Genetic studies have also demonstrated an association with markers of autoimmunity, further supporting this hypothesis (PUBMED:21124080). In summary, the evidence suggests that CP, with its involvement of the thoracic aorta and epiaortic arteries, and its histopathological and genetic features, could be considered a systemic large vessel vasculitis (PUBMED:26106209; PUBMED:21124080).
Instruction: Family history of suicide, female sex, and childhood trauma: separate or interacting risk factors for attempts at suicide? Abstracts: abstract_id: PUBMED:16223424 Family history of suicide, female sex, and childhood trauma: separate or interacting risk factors for attempts at suicide? Objective: Female sex, childhood trauma, and a family history of suicidal behavior are three well established risk factors for attempting suicide. However, interactions between these three factors in attempting suicide have been little studied. Method: One thousand eight hundred and eighty-nine abstinent substance dependent patients were interviewed about their lifetime and family history of suicidal behavior and completed the Childhood Trauma Questionnaire (CTQ). Gender, family history of suicidal behavior, and CTQ scores--and their interaction--were examined in relation to suicidal behavior. Results: Each of the three risk factors was associated with at least a doubling of the risk for an attempt at suicide. There were no significant interactions in relation to the risk of making an attempt. However, female sex and higher levels of childhood trauma each discriminated patients at risk for both a younger age of first attempting suicide and for making more attempts. Conclusion: Female sex, childhood trauma, and a family history of suicidal behavior are each independent, and non-interacting, risk factors for attempting suicide. Additionally, female sex and high childhood trauma are independent risk factors for both an early onset of first attempting suicide and for making more attempts. abstract_id: PUBMED:25259671 The additive effect on suicidality of family history of suicidal behavior and early traumatic experiences. Family history of suicidal behavior and personal history of childhood abuse are reported risk factors for suicide attempts and suicide completion. We aim to quantify the additive effect of family history of suicidal behavior and different subtypes of childhood abuse on suicidal behavior. We examined a sample of 496 suicide attempters, comparing individuals with family history of suicidal behavior and personal history of childhood (physical or sexual) abuse, individuals with family history of suicidal behavior only, individuals with history of early traumatic experiences only, and individuals with none of these two risk factors with regards to suicidal features. An additive effect was found for the age at the first attempt in suicide attempters with both family history of suicidal behavior and either physical or sexual abuse. No significant interactions were found between family history of suicidal behavior and childhood trauma in relation to any characteristics of suicidal behavior. Subjects presenting family history of suicidal behavior and childhood abuse attempt suicide earlier in life than subjects with just one or none of them, particularly if they were sexually abused. Other suicidality indexes were only partially or not associated with this combination of risk factors. A careful assessment of patients with both family history of suicidal behavior and childhood abuse could help to prevent future suicide attempts, particularly in young people. abstract_id: PUBMED:31105902 Childhood maltreatment and risk for suicide attempts in major depression: a sex-specific approach. Background: Childhood maltreatment increases the risk of suicide attempts in the general population, possibly having similar effects among patients with major depressive disorder (MDD). The few studies that have addressed this association have been restricted to specific populations (e.g. treatment-resistant depression, personality disorders) and have rarely taken sex into account. Objective: To examine the impact of childhood maltreatment on suicide attempts among MDD patients above and beyond other risk factors and potential confounders, while considering potential sex-specific effects. Methods: The study assessed 165 patients with a principal diagnosis of MDD. Neurological alterations, psychiatric comorbidities, and drug abuse were reasons for exclusion. Logistic regressions using the whole sample, and divided by sex, were run to test the association between childhood maltreatment and history of suicide attempts, controlling for symptom severity, comorbidities, and treatment-resistant depression. Results: There was a significant and clinically relevant association between childhood maltreatment and history of suicide attempts in the total sample. Patients with childhood maltreatment were 3.01 times more likely to present a history of suicide attempts than patients without childhood maltreatment. A family history of psychiatric disorders also contributed to the variance of attempted suicide, but its interaction with childhood maltreatment was not statistically significant. When testing the model separately, the effect of childhood maltreatment on suicide attempts remained for females, whereas for males, age of MDD onset and Childhood Trauma Questionnaire minimization-denial scale were predictive variables. Conclusions: Childhood maltreatment is a clear predictor of suicidal behaviour among MDD patients, and this effect remains significant after controlling for potential confounders. Also, the sex of patients emerges as a relevant factor that may model the mechanisms underlying the prediction of suicide attempts. Since suicide is the main cause of premature death among MDD patients, interventions targeting childhood maltreatment should be included in preventive and clinical strategies. abstract_id: PUBMED:1328663 Long-term follow-up of unsuccessful violent suicide attempts: risk factors for subsequent attempts. Study Population: Of 9046 consecutive trauma admissions, all suicide attempts (n = 156) were identified: 38 patients (24%) died in hospital; 118 (76%) were discharged and received long-term follow-up (mean = 2.8 years). Factors assessed included suicidal ideation and planning, reason for attempt; number of attempts, methods, dates of prior and subsequent attempts; psychiatric diagnoses, substance abuse history, treatment and medication compliance, hospitalizations, incidence of family depression and suicide; education level, job history, and living conditions. Results: 104 (88%) patients were interviewed and 14 (12%) were lost to follow-up. Seventy-seven of the patients (74%) used guns in their attempt; their mean ISS was 14.2. Seven (6.7%) made repeat suicide attempts (all unsuccessful). Late mortality was 7% (one related to index suicide, five to chronic illness, one to motor vehicle crash). Most patients (96%) had psychiatric diagnoses at discharge, 77 of 93 (83%) had diagnosed depression. Sixty-six percent (69 of 104) had histories of alcohol abuse, 42% (42 of 101) histories of drug abuse. Thirty-five percent (34 of 96) were noncompliant with psychiatric follow-up and 70% (16 of 23) were noncompliant with alcohol abuse treatment. Conclusions: (1) Repeat attempts were rare (7%) after failed suicide attempts. (2) No late deaths resulted from repeat suicide attempts. (3) Risk factors associated with repeat attempts were younger age (p = 0.002), prior attempts (p = 0.02), family history of suicide (p = 0.03), schizophrenia (p = 0.005), and not living at home (p = 0.04). (4) Identifying patients with these risk factors, ensuring that they receive inpatient alcohol abuse treatment, along with sustained psychiatric treatment and help in maintaining home environments, may prevent repeat suicide attempts. abstract_id: PUBMED:20174518 Profile of risk factors associated with suicide attempts: A study from Orissa, India. Context: Periodic systematic profiling of suicidal risk factors in developing countries is an established need. Aims: It was intended to study the risk factors associated with suicide attempts in Orissa, one of the most economically compromised states of India. Settings And Design: Cross-sectional study in a general hospital. Materials And Methods: Consecutive 149 suicide attempters were evaluated for psychosocial, situational, and clinical risk factors using the Risk Rescue Rating scale, Suicide Prevention Center scale, Lethality of Suicide Rating scale, and Presumptive Stressful Life Event scale. They were compared with healthy and psychiatric controls who had never attempted suicide. Statistical Analysis: Chi-square for comparison of categorical variables, t-tests for comparison of means. Results: The male-to-female ratio was closer to one in adults and around 1:3 in adolescents. Younger age, lower-middle economic group, rural background, unemployed, school educated were more represented in this study. Compared to the controls, significantly more number of attempters had a family history of psychiatric illness and suicide, childhood trauma, medical consultation within one month, had experienced stressful life events and had expressed suicidal ideas. In a considerable proportion of attempts, risk was high and rescuability least; 59.1% had more than 50% chance of death. Suicide potential was high in almost half the cases. More than 80% of all attempters had psychiatric disorder; however, only 31.5% had had treatment. Factors like middle age, family history of psychiatric disorders, past psychiatric history, current psychiatric illness, communication of suicidal ideas, the use of physical methods, and high potential attempts, differentiated repeaters significantly from the first-timers. Major physical illness, family and marital conflicts, financial problems, and failure in examinations were more frequent life events. Childhood trauma, noted in around 40% of the attempters, was considerably associated with adolescent suicide attempts. Conclusions: Modifiable risk factors identified in this study have preventive implications. abstract_id: PUBMED:26995241 Neutrophil-to-lymphocyte ratio predicting suicide risk in euthymic patients with bipolar disorder: Moderatory effect of family history. Background: Neutrophil-to-lymphocyte ratio (NLR) has been independently related to bipolar disorder (BD) and factors associated with suicidal risk. The aim of our study was to explore the relationship between NLR and suicide risk in euthymic BD patients. We also sought to propose a model of interaction between NLR and stress-diathesis factors, leading to suicidal risk in BD. Methods: The study group consisted of 83 patients diagnosed with BD (36 suicide attempters; 47 suicide non-attempters), compared to the healthy control group (n=73) and matched according to age, gender, and body mass index (BMI). NLR was measured according to the complete blood count. Mood symptoms have been assessed by Young Mania Rating Scale and Montgomery-Asberg Depression Rating Scale. Early trauma and acute stress were evaluated by Early Trauma Inventory Self Report-Short Form and List of Threatening Experiences Questionnaire, respectively. Suicide risk has been assessed by Suicide Behaviors Questionnaire-Revised (SBQ-R). Results: Significant correlation was found between NLR and SBQ-R score. The main effects of suicide attempts on NLR, after covarying for confounders, were observed, indicating increased NLR in BD suicide attempters compared to healthy controls. We found significant moderatory effects of family history on NLR relationship to suicidal risk, with NLR being significant positive predictor of suicidal risk only in the patients with positive family history of suicide attempts. Conclusions: The results suggest an enhancing effect of positive family history of suicide attempts on predictive effect of NLR on suicide risk. Our data support the idea that immune markers can predict suicide attempt risk in BD, but only in the subpopulation of BD patients with family history of suicide attempts. This could lead to prevention in suicide behavior in the patient population at particular risk of suicide. abstract_id: PUBMED:34873738 Risk factors for suicide attempts among U.S. military veterans: A 7-year population-based, longitudinal cohort study. Background: Population-based data on risk factors for suicide attempts among veterans remains limited. Methods: A national probability sample of 2307 veterans was followed over the course of four timepoints spanning seven years to examine how a range of baseline risk factors predict incident suicide attempt. Suicide attempt data were aggregated into a single follow-up timepoint. Results: Sixty-two veterans (3.1%) reported attempting suicide during the 7-year period. The strongest risk factors for suicide attempts were higher baseline levels of loneliness, lower baseline levels of adaptive psychosocial traits (e.g., dispositional gratitude), baseline thoughts of self-harm, and greater post-baseline trauma exposures (12.3%-41.3% of explained variance). Veterans with multiple co-occurring risk factors were at greatest risk for attempts; of veterans with 0, 1, 2, 3, and all 4 of these factors, the predicted probability of suicide attempt was 2.0%, 5.3%, 13.5%, 30.4%, and 55.0%, respectively. Conclusions: Baseline loneliness, dispositional gratitude, thoughts of self-harm, and new-onset traumas emerged as the strongest risk factors for suicide attempts among veterans, underscoring the potential importance of targeting these factors in prevention efforts. Veterans with multiple co-occurring risk factors have substantially greater risk for suicide attempts, suggesting that examination of multiple coinciding vulnerability factors may help improve suicide risk prediction models. abstract_id: PUBMED:17453699 Risk factors for suicide attempts among alcohol dependent patients. Suicidal behavior is a common and important problem among alcohol dependent patients. The study was designed to examine risk factors for attempting suicide in 499 alcohol dependent patients. Those who had attempted suicide (N = 198) were more likely to be female, report a family history of suicidal behavior, report more childhood trauma, report greater levels of aggressive behavior, began heavy drinking earlier, and were more likely to have received antidepressant medication. Logistic regression analysis showed that gender, family history, and childhood sexual abuse history made significant and independent contributions to the risk of a suicide attempt. Thus, developmental, personality, family history, social, and comorbidity risk factors may differentiate alcohol dependent patients who have attempted suicide from those who have not. abstract_id: PUBMED:20943272 Combination of family history of suicidal behavior and childhood trauma may represent correlate of increased suicide risk. Background: There is a need to try to identify patients at highest risk for suicidal behavior. A family history of suicidal behavior (FHS) and childhood trauma are two important risk factors for suicidal behavior. It was therefore decided to combine them and examine if the combination would identify patients at even increased risk for suicidal behavior. Methods: Two hundred and eighty one substance dependent patients with a FHS completed the Childhood Trauma Questionnaire (CTQ) and were interviewed about their lifetime history of suicidal behavior. Patients with the combination of a FHS and CTQ score above the mean were examined and compared with FHS patients with a CTQ score below the mean. Results: One hundred and two of the 129 (79.1%) FHS patients with a CTQ score above the mean had attempted suicide. Thirty five of the 40 female (87.5%) FHS patients with a CTQ score above the mean had attempted suicide .Patients with a CTQ score above the mean were found significantly more among FHS patients who had attempted suicide than among FHS patients who had never attempted. FHS attempters with a CTQ score above the mean had a significantly earlier age of first attempting and had made more attempts than FHS attempters with a CTQ score below the mean. Limitations: Childhood trauma data derived from self-report questionnaire. No consistent collateral information about FHS. Conclusion: The combination of a FHS and childhood trauma may represent a correlate of increased risk of attempting suicide, attempting earlier, and making more attempts. abstract_id: PUBMED:22842027 Suicidal phenotypes associated with family history of suicidal behavior and early traumatic experiences. Background: Family history of suicidal behavior and personal history of childhood trauma are risk factors for suicidal behaviors. We hypothesize that subjects with any of these risk factors will show differential features and that subjects with both of them will display more severe phenotypes. Methods: This study compares three groups of suicide attempters (n=878): subjects with a family history of suicidal behavior and a personal history of early traumatic experiences, subjects with a family history of suicidal behavior or a personal history of early traumatic experiences, and subjects with neither of these two risk factors, with regards to psychopathology, personality traits and suicidal behavior. Results: Subjects with a family history of suicidal behavior and childhood trauma were younger at their first suicide attempt and made more frequent, severe and violent attempts when compared with the other groups. Differences in number and precocity of attempts remained after adjustments in a multinomial regression model. Finally, personality profiles were also substantially different in the group with higher impulsiveness, novelty seeking, affective lability and hopelessness. Limitations: The information provided by subjects regarding childhood abuse and family history of suicidal behavior was not confirmed by other sources. Conclusions: Suicide attempters with a family history of suicidal behavior and childhood trauma show specific characteristics that might be used to prevent future suicidal behaviors in this population. Both risk factors should be routinely investigated when assessing the suicidal risk of a patient. Answer: Family history of suicide, female sex, and childhood trauma are each independent risk factors for attempting suicide, but they do not appear to interact significantly with each other in relation to the risk of making an attempt. Studies have shown that each of these factors is associated with at least a doubling of the risk for an attempt at suicide. Female sex and higher levels of childhood trauma specifically have been found to discriminate patients at risk for both a younger age of first attempting suicide and for making more attempts (PUBMED:16223424). Furthermore, an additive effect was found for the age at the first attempt in suicide attempters with both family history of suicidal behavior and either physical or sexual abuse, with those presenting both risk factors attempting suicide earlier in life, particularly if they were sexually abused (PUBMED:25259671). In the context of major depressive disorder (MDD), childhood maltreatment is a clear predictor of suicidal behavior, and this effect remains significant after controlling for potential confounders. The sex of patients emerges as a relevant factor that may model the mechanisms underlying the prediction of suicide attempts, with childhood maltreatment having a significant impact on suicide attempts in females (PUBMED:31105902). Additionally, the combination of a family history of suicidal behavior and childhood trauma may represent a correlate of increased risk of attempting suicide, attempting earlier, and making more attempts (PUBMED:20943272). In summary, while family history of suicide, female sex, and childhood trauma are each strong, independent risk factors for suicide attempts, their combination, particularly family history of suicidal behavior and childhood trauma, may represent an increased risk for suicidal behavior, including an earlier onset and higher frequency of attempts (PUBMED:16223424; PUBMED:25259671; PUBMED:31105902; PUBMED:20943272).
Instruction: Existing instruments for assessing physician communication skills: are they valid in a computerized setting? Abstracts: abstract_id: PUBMED:23623463 Existing instruments for assessing physician communication skills: are they valid in a computerized setting? Objectives: This study aims to highlight the differences in physicians' scores on two communication assessment tools: the SEGUE and an EMR-specific communication skills checklist. The first tool ignores the presence of the EMR in the exam room and the second, though not formally validated, rather focuses on it. Methods: We use the Wilcoxon Signed Ranks Test to compare physicians' scores on each of the tools during 16 simulated medical encounters that were rated by two different raters. Results: Results show a significant difference between physicians' scores on each tool (z=-3.519, p&lt;0.05 for the first rater, and z=-3.521, p&lt;0.05 for the second rater), while scores on the EMR-specific communication skills checklist were significantly and consistently lower. Conclusion: These results imply that current communication assessment tools that do not incorporate items that are relevant for communication tasks during EMR use may produce inaccurate results. Practice Implications: We therefore suggest that a new instrument, possibly an extension of existing ones, should be developed and empirically validated. abstract_id: PUBMED:12463821 Computerized physician order entry and communication: reciprocal impacts. Participant observation, focus group and oral history techniques were used to collect data from four distinctly different sites across the U.S. Data were examined initially to identify success factors for computerized physician order entry (CPOE) implementation. These data, reexamined for communication issues, revealed significant impacts on communication channels and relationships unanticipated by the implementers. Effects on physician-nurse interactions, pharmacy roles, and patient communications that vary by time and location were noted. The importance of robust bi-directional information channels between administration and staff was demonstrated to be potentially "mission-critical." The recommendation for implementers is "Plan to be surprised." Careful planning and pre-work are important but, no matter how much an institution prepares for the upheaval of CPOE, unforeseen consequences are inevitable. The presence of a prepared and capable implementation support group is essential. abstract_id: PUBMED:9887849 Patient-physician communication assessment instruments: 1986 to 1996 in review. This paper provides a comprehensive review and comparison of instruments used to assess patient-doctor interaction over the past decade. Instruments were identified from papers indexed by MEDLINE from 1986 to 1996 using the medical subject headings 'physician-patient relations', 'physician-patient communication', and 'education, medical', as well as requests for instruments currently in use by colleagues. Each instrument was reviewed under the following categories: name of the instrument; description; number of items; reliability; validity; current use and special notes. Overall, 44 instruments were obtained and reviewed. Of these, 21 were used in only one published study each in the last decade and 15 have never been validated. While most instruments have been shown to be reliable (usually inter-rater reliability), very few instruments have been directly compared with another instrument designed to assess patient-doctor interactions. We suggest that further validation of existing instruments and incorporation of assessment of non-verbal communication between the patient and the doctor are needed. abstract_id: PUBMED:28762538 Assessing patient-centred communication in teaching: a systematic review of instruments. Context: Patient-centred communication is a key component of patient centredness in medical care. Therefore, adequate education in and assessment of patient-centred communication skills are necessary. In general, feedback on communication skills is most effective when it is provided directly and is systematic. This calls for adequate measurement instruments. Objectives: The aim of this study was to provide a systematic review of existing instruments that measure patient centredness in doctor-patient communication and can be used to provide direct feedback. Methods: A systematic review was conducted using an extensive validated search strategy for measurement instruments in PubMed, EMBASE, PsycINFO and CINAHL. The databases were searched from their inception to 1 July 2016. Articles describing the development or evaluation of the measurement properties of instruments that measure patient centredness (by applying three or more of the six dimensions of a published definition of patient centredness) in doctor-patient communication and that can be used for the provision of direct feedback were included. The methodological quality of measurement properties was evaluated using the COSMIN checklist. Results: Thirteen articles describing 14 instruments measuring patient centredness in doctor-patient communication were identified. These studies cover a wide range of settings and patient populations, and vary in the dimensions of patient centredness applied and in methodological quality on aspects of reliability and validity. Conclusions: This review gives a comprehensive overview of all instruments available for the measurement of patient centredness in doctor-patient communication that can be used for the provision of direct feedback and are described in the literature. Despite the widely felt need for valid and reliable instruments for the measurement of patient-centred communication, most of the instruments currently available have not been thoroughly investigated. Therefore, we recommend further research into and enhancement of existing instruments in terms of validity and reliability, along with enhancement of their generalisability, responsiveness and aspects of interpretability in different contexts (real patients, simulated patients, doctors in different specialties, etc.). Comprehensibility and feasibility should also be taken into account. abstract_id: PUBMED:35394959 Physician-Nurse Communication Surrounding Computerized Physician Order Entry Systems From Social and Technical Perspective: An Ethnographic Study. Although computerized physician order entry systems improve order transmission and patient safety, overdependence on these systems can impede users' communication. This ethnographic study explored physician-nurse communication surrounding computerized physician order entry systems using a sociotechnical framework. Fieldwork conducted in a tertiary teaching hospital comprised 89 hours of participant observation, and individual semistructured interviews were held with seven nurses and five physicians. In addition, documents and artifacts were collected. Three core themes emerged. First, computerized physician order entry quality-related issues undermined the work efficiency of physicians and nurses. Specifically, usability was error prone because of cognitive overload, and the system was unable to perform relevant traces and raise alerts, demonstrating poor interoperability. Second, social factors, including insufficient training, unclear responsibilities, and a lack of awareness concerning interdisciplinary communication, compounded communication problems. Last, environmental factors, including noncoterminous spaces and times and insufficient technical support, impeded the resolution of communication problems. Technical and social contextual factors relating to computerized physician order entry systems jointly affected physician-nurse communication. Cognitive issues and insufficient alerts impacted work efficiency the most and were compounded by contextual individual- and team-related factors and environmental factors. Therefore, improved functions of computerized physician order entry systems and interprofessional communication training are required to optimize technical and social aspects of physician-nurse communication. abstract_id: PUBMED:15739134 Assessing communication competence: a review of current tools. Background: The assessment of communication competence has become a major priority of medical educational, policy, and licensing organizations in the United States and Canada. Multiple tools are available to assess communication competence, but there are few studies that compare the tools. Methods: A consensus panel of six family medicine educators evaluated 15 instruments measuring the physician-patient interview. The primary evaluation criteria came from the Kalamazoo Consensus Statement (KCS), which derived from a multidisciplinary panel of experts that defined seven essential elements of physician-patient communication. We evaluated psychometric properties of the instruments and other assessment criteria felt to be important to family physicians (exploring family issues, interview efficiency, and usability/practicality). Results: Instruments that received the highest ratings on KCS elements were designed for faculty raters and varied in their practicality/usability ratings and psychometric properties. Few instruments were rated high on psychometric properties or exploring family issues. Conclusions: The process successfully reviewed and provided a framework for assessing communication skills instruments. There is a need to expand the study, including use of a larger cohort of reviewers to provide more validity to the results and minimize potential biases. abstract_id: PUBMED:16514937 Using a computerized sign-out system to improve physician-nurse communication. Background: Communication problems among health care workers are a common, preventable source of hospital-related morbidity and mortality. Internal medicine residents at Jacobi Medical Center (Bronx, NY) began using an electronic sign-out program that had been incorporated into the computerized medical record. This new system had been developed to improve the quality of information transfer between cross-covering residents. Eighteen months later, a pilot study was initiated to explore the potential benefits of offering inpatient nurses access to this sign-out data. Methods: Nursing staff members were provided electronic access to the residents' sign-out information. Nurses received printouts of the computerized sign-outs at the start of each shift and were asked to use the sign-out program as a basis for their care plans and nursing change-of-shift "report." Results: The 19 (of 20) nurses who completed the survey agreed that using the resident sign-out program positively affected their ability to care for their patients. In addition, the intervention improved nurses' understanding of the patients' reason for admission, helped to improve communication between physicians and nurses, and raised nursing morale. Discussion: Incorporation of a housestaff electronic sign-out system into nursing daily workflow demonstrated multiple benefits and facilitated the transfer of valuable patient information from housestaff to nurses. abstract_id: PUBMED:21299599 Attention to gender in communication skills assessment instruments in medical education: a review. Context: Gender is increasingly regarded as an important factor in doctor-patient communication education. This review aims to assess if and how gender is addressed by current assessment instruments for communication skills in medical education. Methods: In 2009 at Radboud University Nijmegen Medical Centre, an online search was conducted in the bibliographic databases PubMed, PsycINFO and ERIC for references about communication skills assessment instruments designed to be completed by trained faculty staff and used in medical education. The search strategy used the following search terms: 'consultation skills'; 'doctor-patient communication'; 'physician-patient relations'; 'medical education'; 'instruments'; 'measurement', and 'assessment'. Papers published between January 1999 and June 2009 were included. The assessment instruments identified were analysed for gender-specific content. Results: The search yielded 21 communication skills assessment instruments. Only two of the 17 checklists obtained explicitly considered gender as a communication-related issue. Only six of 21 manuals considered gender in any way and none gave specific details to explain which aspects of communication behaviour should be assessed with regard to gender. Conclusions: Very few communication assessment instruments in medical education focus on gender. Nevertheless, interest exists in using gender in communication skills assessment. The criteria for and purpose of assessing gender in communication skills in medical education are yet to be clarified. abstract_id: PUBMED:36921469 Validity and reliability of standardized instruments measuring physician communication and interpersonal skills in video-recorded medical consultations - A systematic review. Objective: To identify standardized instruments measuring physician communication and interpersonal skills based on video-recorded consultations. Methods: We searched electronic databases for primary studies published from 1950 to 2022. Eligible studies had to report the validation of standardized instruments dedicated to the assessment of physician interpersonal skills based on video-recorded consultations with adult patients. Results: Of 7155 studies retrieved, 13 primary studies involving nine standardized instruments were included. The median number of physicians and participants was 23 (range, 1-200) and 71 (range, 1-950), respectively. Seven out of nine instruments were multidimensional and comprised a median number of 23 items (range, 7-95). The conceptual framework was reported for two instruments only. Content analysis identified 12 key behaviors with substantial overlap across instruments. The Four Habits Coding Scheme (4-HCS) yielded satisfactory validity and reliability while the evidence on psychometric properties was limited for other instruments. Conclusion: Limited evidence supports the psychometric attributes for most of the published standardized instruments dedicated to assessing physician communication and interpersonal skills. Practice Implications: Although the decision to use a specific instrument depends on the study aims, the 4-HCS appears to be the most reliable instrument for assessing physician communication and interpersonal skills based on video-recorded consultations. abstract_id: PUBMED:33035705 Measuring the quality of patient-physician communication. Patient-physician communication is an often overlooked yet a very important aspect of providing medical care. Positive patient-physician quality of communication within discourse has an influence on various aspects of a consultation such as a patient's treatment adherence to prescribed medical regimen and their medical care outcome. As few reference standards exist for exploring semantics within the patient-physician setting and its effects on personalized healthcare, this paper presents a study exploring three methods to capture, model and evaluate patient-physician communication among three distinct data-sources. We introduce, compare and contrast these methods for capturing and modeling patient-physician communication quality using relatedness between discourse content within a given consultation. Results are shown for all three data-sources and communication quality scores among physicians recorded. We found our models demonstrate the ability to capture positive communication quality between both participants within a consultation. We also evaluate these findings against self-reported questionnaires highlighting various aspects of the consultation and rank communication quality among seventeen physicians who consulted amid one-hundred and thirty-two patients. Answer: The existing instruments for assessing physician communication skills have been called into question regarding their validity in a computerized setting. A study highlighted differences in physicians' scores on two communication assessment tools: the SEGUE and an EMR-specific communication skills checklist. The SEGUE tool does not account for the presence of an Electronic Medical Record (EMR) in the exam room, while the EMR-specific checklist, though not formally validated, focuses on it. The study found a significant difference between physicians' scores on each tool, with scores on the EMR-specific checklist being consistently lower. This suggests that current communication assessment tools that do not incorporate items relevant for communication tasks during EMR use may produce inaccurate results (PUBMED:23623463). Furthermore, the literature indicates that while there are multiple tools available to assess communication competence, few studies compare these tools, and many have not been validated. There is a need for further validation of existing instruments and incorporation of assessment of non-verbal communication (PUBMED:12463821, PUBMED:9887849). A systematic review of instruments measuring patient-centered communication in teaching also found that most of the instruments currently available have not been thoroughly investigated, and there is a need for further research into and enhancement of existing instruments in terms of validity and reliability (PUBMED:28762538). Additionally, the impact of computerized systems on communication has been noted, with studies showing that computerized physician order entry (CPOE) systems can have significant impacts on communication channels and relationships, often unanticipated by implementers (PUBMED:12463821). An ethnographic study also found that technical and social contextual factors relating to CPOE systems jointly affected physician-nurse communication, indicating the need for improved functions of CPOE systems and interprofessional communication training (PUBMED:35394959). In conclusion, while there are instruments available for assessing physician communication skills, their validity in a computerized setting is questionable, and there is a clear need for the development of new instruments or the adaptation of existing ones to better reflect the realities of modern medical practice where EMRs and other computerized systems are integral to the clinical workflow (PUBMED:23623463).
Instruction: Blood loss during radical retropubic prostatectomy: is preoperative autologous blood donation indicated? Abstracts: abstract_id: PUBMED:8709311 Blood loss during radical retropubic prostatectomy: is preoperative autologous blood donation indicated? Purpose: We assessed the requirement for transfusion of allogeneic blood in a contemporary series of patients who did not deposit autologous blood before radical retropubic prostatectomy. Materials And Methods: After a policy was adopted in which preoperative autologous blood was not donated, 124 consecutive patients underwent radical retropubic prostatectomy. Type and screen for allogeneic blood were routinely available but neither hemodilation nor a cell saver was used. Results: Mean intraoperative blood loss was 579 cc and mean postoperative serum hematocrit was 33%. Only 3 patients (2.4%) required blood products due to intraoperative blood loss (2) and postoperative bleeding from a duodenal ulcer (1). Conclusions: Transfusion of blood products was required in a small percentage of our patients even without autologous blood donation. Therefore, the overall cost of care is decreased but, more importantly, the potential risks associated with autologous or allogeneic blood transfusion are eliminated. abstract_id: PUBMED:9111627 Autologous blood donation prior to anatomical radical retropubic prostatectomy: is it necessary? Objectives: To determine if autologous blood donation prior to anatomical radical retropubic prostatectomy, given current improvements in surgical technique, is necessary. Methods: The medical records of 200 consecutive patients undergoing radical retropubic prostatectomy for clinically localized prostate cancer were reviewed with regard to (1) preoperative hematocrit (HCT); (2) estimated blood loss (EBL); (3) postoperative HCT prior to discharge; (4) number of units of autologous blood donated; and (5) number of units of autologous and homologous blood transfused. In addition, the charges associated with autologous blood donation were determined via telephone interview with 14 blood donation centers across the United States. Results: Overall, 189 patients (95%) did not require a homologous blood transfusion. Sixty-four patients (32%) donated autologous units and 136 patients (68%) did not. Of the patients who had donated, only 17 (27%) received their blood back, and none (0%) received any homologous blood. Eleven (8%) of the 136 nondonors received a blood transfusion. The autologous donors, in comparison with nondonors, were found to have a significantly lower preoperative HCT (mean +/- standard deviation: 40 +/- 4.0% versus 42 +/- 2.9%, P &lt; 0.05). However, there was no statistically significant difference in the mean EBL between the two groups, autologous donors versus nondonors (771 +/- 370 versus 737 +/- 425 cc, P = 0.23). The autologous donors had a smaller mean change in HCT versus the nondonors (-9.3 +/- 5.1% versus -11.2 +/- 4.4%, P &lt; 0.05), reflecting an increased willingness to transfuse patients who have autologous units available. With regard to cost, patients, on average, can expect to be charged as much as $745 per unit of autologous blood donated. Conclusions: These findings suggest that preoperative blood donation prior to radical prostatectomy may not be necessary, because 95% of the patients did not require a homologous blood transfusion. In addition, autologous blood donation can be associated with substantial costs in both time and money. Thus, autologous donation should be left as an option for the patient and should not be considered routine practice. abstract_id: PUBMED:9800890 Utilization of autologous blood donation during radical retropubic prostatectomy. We investigated the utilization patterns of autologous blood donation for radical retropubic prostatectomy (RRP) during a 6-year period. A total of 225 patients electing RRP with blood donation were identified for analysis. Group 1 consisted of 113 men who had an RRP from 1990 to 1993. Group 2 consisted of 112 men who had an RRP from 1993 to 1995. Charts were reviewed for the number of units transfused, number of autologous units donated, and operative blood loss. More patients autodonated blood in the later group (84% vs. 75%). Technical improvements and experience have significantly decreased blood loss and the need for transfusions (69% vs. 96% in the early group). In the more current series, only 14% of patients who autodonated blood required homologous transfusion vs. 42% in the earlier group. An increase in the amount of wasted blood (42% vs. 16% in the early group) also was noted. The 4-unit donors had the lowest homologous transfusion rate in both series (group 1 = 21%, group 2 = 5%); the 2-unit donors had the lowest units wasted per person (0.74). In addition, the 2-unit donors maintained a low homologous transfusion rate of 16%. These data suggest that 2 units of autologous blood donation has a reduced risk of homologous blood transfusion while the amount of autologous blood wasted is minimized. abstract_id: PUBMED:7500469 Radical retropubic prostatectomy: limited benefit of autologous blood donation. Purpose: We determine whether autologous blood donation significantly decreases the need for homologous transfusions after radical prostatectomy. Materials And Methods: The effects of estimated blood loss and autologous donation on the rate of homologous transfusions were analyzed in 3 groups of 100 consecutive patients treated between 1983 and 1992. Results: Overall, donors were less likely than nondonors to receive homologous blood. As median estimated blood loss decreased from 1,200 to 800 cc from groups 1 to 3 (p &lt; 0.05), the incidence of nondonors requiring homologous blood decreased from 62 to 11% and that of autologous units transfused decreased from 96 to 19%. Conclusions: With decreasing blood loss, safe but stringent criteria for transfusion and improved safety of the blood supply, autologous donation is an inefficient method to lower the slight risk of complications following homologous transfusion during radical prostatectomy. abstract_id: PUBMED:12010315 Efficacy of preoperative donation of autologous blood in patients undergoing radical retropubic prostatectomy. Background: Radical retropubic prostatectomy (RRP) has resulted in substantial blood loss and the frequent need for homologous blood transfusion. In this study, the efficacy of autologous blood transfusion, from medical and financial perspectives, was evaluated in patients undergoing RRP. Methods: Between 1994 and 2000, 80 patients with localized prostate cancer underwent RRP in our institute. Based on informed consent, preoperative donation of autologous blood (PDA) was performed in 65 out of 80 patienets. Four or six units were donated during the first 3 years; however, donation units were reduced to a maximum of 4 units since 1997 onwards. The discard rate of donated blood and frequency of homologous transfusion were examined. Changes of hematocrit (Ht) and hemoglobin (Hb) levels through donation and surgery and important factors that may affect postoperative levels of Ht and Hb were evaluated in 56 patients receiving 4-unit donations. Results: Overall, 2 or 4 units of donated blood were discarded in four patients and homologous transfusion was required in two patients. In 56 patients receiving 4-unit donation, the mean Ht level at predonation was 43.3%. Following donation, this decreased to 35.7%. The administration of recombinant human erythropoietin (rHuEpo) relieved declining Ht levels following donation, but changes in Ht levels after surgery were minor. Important factors related to postoperative decline of Ht and Hb levels were operative time and blood loss. Conclusions: The program of 4-unit PDA can be performed safely without rHuEpo injection, and it is useful to reduce the risk of requiring homologous transfusion. However, more efficient programs to relieve patient burden and to reduce medical cost are needed. abstract_id: PUBMED:10414739 Use of preoperative autologous blood donation in patients undergoing radical retropubic prostatectomy. Objectives: To evaluate the appropriateness of autologous blood (AB) transfusion during radical retropubic prostatectomy in relation to the cardiopulmonary risk of the patient. Methods: We reviewed the medical records of 100 patients with American Society of Anesthesiologists status I, II, or III who underwent radical retropubic prostatectomy under general or combined general and epidural anesthesia. All patients had donated 2 units (U) of autologous blood, received 0, 1, or 2 U of autologous blood perioperatively, and received no allogeneic blood. Patients were placed in three cardiopulmonary risk groups on the basis of risk factors or documented cardiopulmonary disease. The low-risk group was assigned a target discharge hematocrit of 24% or less; moderate-risk, 25% to 28%; and high-risk, 29% or greater. The appropriateness of transfusion was determined by whether patients' hematocrit was in their group's preassigned range at discharge. Results: On the basis of discharge hematocrit, significantly more low-risk patients underwent inappropriate transfusion than moderate-risk (64% versus 26%, P = 0.006) or high-risk (64% versus 13%, P = 0.001) patients. Seventy-five AB units were discarded and at least 53 U were inappropriately transfused. We found an increase in the number of units of autologous blood transfused when a larger estimated blood loss was reported (P &lt; 0.001). The estimated charge for the units discarded and inappropriately transfused exceeded $12,000. Conclusions: Sixty-four percent of autologous blood units were discarded or inappropriately transfused during radical retropubic prostatectomy. Transfusion of autologous blood was not governed by cardiopulmonary risk stratification. If the decision to transfuse had been based on cardiopulmonary risk factors instead of estimated blood loss, fewer patients would have received autologous blood. abstract_id: PUBMED:1741903 Preoperative donation of autologous blood and plasma The risk and side effects of homologous blood transfusion led to an increased interest in autologous transfusion concepts. Whereas peri- and intraoperative procedures are clinical methods, preoperative autologous blood donation belongs into the responsibility of a transfusion service in order to ensure qualified separation into blood components as well as correct storage and handling. Special knowledge in transfusion medicine is necessary for the application of preoperative autologous plasmapheresis. On the other hand, its use by clinicians under adequate controlled conditions seems to be rather safe. Although autologous transfusion includes the lowest risk, side reactions cannot be completely excluded. There is also a risk of secondary bacterial contamination. Side effects due to the specific composition of the various blood components, to alterations during storage and to the way of application have to be considered, too. Severe haemolytic transfusion reactions are mostly caused by exchange of blood samples, patients or blood units. They are to be expected in autologous blood transfusion as well. In addition, there is an increased risk of infection when autologous blood components are exchanged accidentally. There is only an indication for autologous blood if the total risk of autologous donation as well as transfusion in the individual case is lower than the statistical risk of homologous transfusion. Autologous blood donation should be offered to all patients undergoing elective surgery where blood is likely to be required and when blood donation is practicable. Preoperative autologous plasmapheresis is indicated in elective surgery if intra- and perioperative autotransfusion methods shall be used and a blood loss of at least 1.5 liters can be expected. There are a lot of organizational problems in autologous blood transfusion which can be solved by adequate information of all persons involved and close cooperation between clinicians and transfusion service. The high standard which was developed for homologous transfusion in the last 15 years has to be maintained for autologous blood transfusion, too. abstract_id: PUBMED:15922898 Autologous blood donation. Preoperative autologous blood donation has become accepted as a standard practice in elective surgery. Subsequent improvements in blood safety and evolving surgical techniques resulting in less blood loss have caused a national decline in preoperative autologous blood donation by approximately 50%. Nevertheless, the continuing emergence of new pathogens and the potential for severe blood inventory shortages continue to give preoperative autologous blood donation an important role in blood conservation strategies. abstract_id: PUBMED:15196325 Autologous blood donation. Although preoperative autologous blood donation is employed in elective surgery, this is declining because of the increasingly safe allogeneic blood supply. However, it continues to be used because of the public's perception of allogeneic blood risks and increasing blood shortages. Patients may donate a unit of blood (450 +/- 45 ml) as often as twice weekly, up to 72 hours before surgery. Preoperative autologous blood is most beneficial in procedures that cause significant blood loss. It has been determined that preoperative autologous blood donation is poorly cost-effective; the use of this procedure must be based on evidence that it is safe and of value for the patient. abstract_id: PUBMED:31114721 Overtransfusion of Autologous Blood Identifies Opportunities for Improving Patient Blood Management. Background Preoperative autologous blood donation (PABD) has been declining in use nationally. A subset of patients scheduled for elective surgery, however, continue to be offered and choose this option. Our study aimed to understand the current impact of PABD before scheduled surgical procedures. Study design and methods A retrospective review was conducted in a single large academic center. Medical records associated with autologous units received in the transfusion service over a two-year period (1/1/2016-12/31/2017) were reviewed. Demographics, units donated, units transfused, wastage, pre-donation hemoglobin (Hb), pre-operative Hb, estimated blood loss (EBL), and clinical specialty were collected. Results During the study period, 118 patients underwent PABD, donating a total of 141 autologous red blood cell units. Patients who donated autologous units and were subsequently transfused had lower pre-donation Hb compared to patients who were not transfused (13.3 ± 1.4 g/dL vs. 14.3 ± 1.5 g/dL, p=0.004). Pre-operative Hb was lower than pre-donation Hb among both groups (12.1 ± 1.2 g/dL for patients receiving transfusion; 12.9 ± 1.5 g/dL, p=0.011 for patients who were not transfused). The majority of PABD patients (71%) had an estimated blood loss of less than 500 mL. Wastage rate of autologous units was 67%. PABD was disproportionately associated with a minority of surgeons and clinical services. Conclusion Within our institution, PABD is heavily used amongst a small subset of physicians across multiple surgical specialties and is associated with lower pre-operative Hb, tendency towards overtransfusion, and high rates of wastage of donated units. Our findings reinforce reports of inefficiencies in patient blood management and increased risks to patient health associated with PABD. Answer: Preoperative autologous blood donation (PABD) for radical retropubic prostatectomy has been a subject of debate due to evolving surgical techniques and improvements in blood safety. Studies have shown that the requirement for transfusion of allogeneic blood in patients undergoing radical retropubic prostatectomy is low, with only a small percentage of patients requiring blood products even without autologous blood donation (PUBMED:8709311). Additionally, 95% of patients did not require a homologous blood transfusion, suggesting that preoperative blood donation may not be necessary (PUBMED:9111627). Technical improvements and experience have significantly decreased blood loss and the need for transfusions over time. For instance, in a study, only 14% of patients who autodonated blood required homologous transfusion in a more current series compared to 42% in an earlier group (PUBMED:9800890). Furthermore, with decreasing blood loss, safe but stringent criteria for transfusion, and improved safety of the blood supply, autologous donation is considered an inefficient method to lower the slight risk of complications following homologous transfusion during radical prostatectomy (PUBMED:7500469). The efficacy of autologous blood transfusion from medical and financial perspectives has also been evaluated, with findings indicating that a program of 4-unit PDA can be performed safely without recombinant human erythropoietin (rHuEpo) injection and is useful to reduce the risk of requiring homologous transfusion. However, there is a need for more efficient programs to relieve patient burden and reduce medical costs (PUBMED:12010315). Moreover, a significant number of autologous blood units were found to be discarded or inappropriately transfused during radical retropubic prostatectomy, and transfusion of autologous blood was not governed by cardiopulmonary risk stratification (PUBMED:10414739). The wastage rate of autologous units is high, and PABD is disproportionately associated with a minority of surgeons and clinical services, leading to overtransfusion and inefficiencies in patient blood management (PUBMED:31114721).
Instruction: Is fetal hyperechoic bowel on second-trimester sonogram an indication for amniocentesis? Abstracts: abstract_id: PUBMED:8164918 Is fetal hyperechoic bowel on second-trimester sonogram an indication for amniocentesis? Objective: To determine the clinical significance of hyperechoic bowel seen sonographically in second-trimester fetuses. Methods: Fifty fetuses (0.6%) with echogenic bowel were identified sonographically from a population of 8680 consecutive second-trimester fetuses over 21 months. The fetal bowel was considered hyperechoic if its echogenicity was similar to that of surrounding bone. Follow-up was obtained through medical record review. Results: Twenty-nine of 50 fetuses (58%) were normal; eight (16%) were aneuploid, including six Down syndrome, one trisomy 13, and one Turner syndrome. All eight fetuses with aneuploidy had sonographic anomalies in addition to the echogenic bowel. Eight of 50 fetuses (16%) were growth-retarded, and five others (10%) had normal karyotypes but are still undelivered. Among the eight growth-retarded fetuses, there were five intrauterine or neonatal deaths, one elective abortion, and two survivors. In addition, the six fetuses with Down syndrome and echogenic bowel represented 12.5% of all second-trimester Down syndrome fetuses karyotyped in our laboratory during the study period. Combining results from the present study (six Down syndrome fetuses) with three studies from the literature (21 additional Down syndrome fetuses), a total of 27 fetuses with echogenic bowel and Down syndrome were identified, 11 (40.7%) of whom had no other sonographic findings. We calculate that if 1,000,000 second-trimester fetuses were scanned, 5105 would have hyperechoic bowel as the only finding, of whom 71 would have Down syndrome and 5034 would not. The risk of Down syndrome in fetuses with isolated hyperechoic bowel is, therefore, 71 in 5105 or 1.4%. Conclusion: The finding of isolated hyperechoic bowel in the second trimester should prompt genetic counseling and consideration of karyotypic analysis. abstract_id: PUBMED:3511420 Ultrasonic assessment of fetal response to second-trimester amniocentesis. Twenty-three patients undergoing second-trimester amniocentesis were ultrasonically monitored for 30 minutes before and 30 minutes after the amniocentesis. Fetal movements were recorded as movements of limbs, trunk, or combined limb-trunk movements. Fetal heart rates were determined at ten-minute intervals before and after the amniocentesis. No significant increase in total fetal movements, limb movements, trunk movements, or heart rate were found. A significant increase in combined fetal movements was observed. The authors conclude that amniocentesis in the second trimester elicits a change in fetal movement pattern. abstract_id: PUBMED:16533652 The role of the second trimester genetic sonogram in screening for fetal Down syndrome. The Genetic Sonogram is an ultrasound examination done on second trimester fetuses that not only evaluates the fetus for structural malformations, but also searches for the sonographic markers of fetal Down syndrome. The main markers that comprise the genetic sonogram include the nuchal fold, short femur and humerus, pyelectasis, hyperechoic bowel, echogenic intracardiac focus, and any major abnormality. The absence of any marker on a second trimester scan conveys a 60-80% reduction in prior risk of Down syndrome based on advanced maternal age or serum screen risk. The presence of sonographic markers, either singly or in combination, will raise the baseline risk of Down syndrome using likelihood ratios calculated for each individual marker. Using this approach, approximately 75% of fetuses with Down syndrome can be identified by modifying the patient's baseline risk according to the results of the ultrasound. The second trimester scan will likely continue to play an important role in the future in the detection of aneuploidy. abstract_id: PUBMED:7877208 Fetal intraluminal gastric masses after second trimester amniocentesis. Eight instances of homogeneous, well-defined echoes within the fetal stomach were identified on routine second trimester detailed scan over a 7 month period, a prevalence of 1 in 287 or 0.35%. This finding was significantly more frequent in women who had cytogenetic amniocentesis than in those who had not had the procedure (4 in 266 [1.5%] versus 4 in 2031 [0.2%], respectively; P &lt; 0.01). Indirect signs of intra-amniotic bleeding, such as particles in the amniotic fluid, chorioamniotic separation, or hyperechogenic bowel, were present in four cases. The association between echogenic material within the fetal stomach and cytogenetic material within the fetal stomach and cytogenetic amniocentesis is discussed. abstract_id: PUBMED:8649704 The use of second-trimester genetic sonogram in guiding clinical management of patients at increased risk for fetal trisomy 21. Objective: To test the efficacy of ultrasound in detecting fetuses with trisomy 21. Methods: From November 1, 1992, to December 31, 1995, a second-trimester genetic sonogram was offered to all women with singleton fetuses at increased risk (at least 1:274) for trisomy 21, who had either declined genetic amniocentesis or chose to have a sonogram before deciding whether to undergo an amniocentesis. In addition to standard fetal biometry, the following ultrasound markers for aneuploidy were evaluated: structural anomalies (including face, hands, and cardiac [four-chamber view and outflow tracts]), short femur, short humerus, pyelectasis, nuchal fold thickening, echogenic bowel, choroid plexus cysts, hypoplastic middle phalanx of the fifth digit, wide space between the first and second toes, and two-vessel umbilical cord. Outcome information included the results of genetic amniocentesis, if performed, or the results of postnatal pediatric assessment and follow-up. Results: Five hundred seventy-three patients had a genetic sonogram between 15 and 23 weeks' gestation: 378 patients had advanced maternal age (at least 35 years), 141 had abnormal serum biochemistry, and 54 had both. The majority (495, or 86.3%) had a normal genetic sonogram (absence of abnormal ultrasound markers); 51 (9%) had one marker present, and 27 (4.7%) had two or more markers present. Outcome was obtained on 422 patients (the remaining were ongoing pregnancies or were lost to follow-up). Twelve of 14 fetuses with trisomy 21, one fetus with trisomy 13, and one fetus with triploidy had two or more abnormal ultrasound markers present; one fetus with trisomy 21 had one abnormal marker and one had a completely normal ultrasound. When one or more abnormal ultrasound markers were present, the sensitivity, specificity, and positive and negative predictive values for trisomy 21 were 92.8%, 86.7%, 19.4%, and 99.7%, respectively. When two or more abnormal ultrasound markers were present, the corresponding values were 85.7%, 96.8%, 48%, and 99.5%. In the study population, the amniocentesis rate was 12.7% overall and 17.3% in cases with known outcome. Conclusion: Second-trimester genetic sonogram may be a reasonable alternative for patients at increased risk for fetal trisomy 21 who wish to avoid amniocentesis. In experienced hands, this approach may result in a high detection rate of trisomy 21 (93%), with an amniocentesis rate of less than 20%. abstract_id: PUBMED:21052703 Fetal loss associated with second trimester amniocentesis. Objective: To evaluate the fetal loss rate associated with second trimester amniocentesis. Methods: All cases of pregnant women ≥35 years old with gestational age beyond 16 weeks between 1997 and 2006 were included in this study. The study group consisted of all cases that the patients decided to have second trimester genetic amniocentesis. The control group was a randomized selection of pregnant women who refused any procedures. The ratio of the study case to the control case was 2:1. The fetal loss rates after amniocentesis before 24, and 28 complete weeks and the pregnancy outcome were analyzed. Results: 2,990 cases in the study group and 1,495 cases in the control group were used for this analysis. The mean maternal age in the study group and control group were 36.89 ± 1.63 and 36.78 ± 1.59 years old, respectively (P = 0.239). The procedure-related fetal loss before 24 and 28 complete weeks were 0.17 and 0.50%, respectively. The most common presenting symptom before fetal loss in the study cases was abdominal pain. Almost all of the cases leading to abortion had symptoms initiated after 48 h post procedure. Significantly higher chances of abortion occurred in pregnant women ≥41 years old (P = 0.008). Conclusion: The procedure-related fetal loss rates in pregnant women ≥35 years old after amniocentesis before 24 and 28 complete weeks were 0.17 and 0.50%, respectively. The certain factors influencing the risk of fetal loss might be independent of the amniocentesis procedure. abstract_id: PUBMED:7258256 Amniocentesis in the second trimester: the effect on fetal movement. Linear array real-time ultrasound was utilized to observe gross fetal body movements before and after second-trimester amniocentesis. No significant differences were found between the two observation periods. All pregnancies continued to term, and no evidence of fetal trauma was found at delivery. abstract_id: PUBMED:19127782 Outcome of second trimester amniocentesis in twin pregnancies at Songklanagarind Hospital. Objective: To evaluate the outcome of genetic amniocentesis in twin gestations at Songklanagarind Hospital. Material And Method: This was a descriptive study that included all women with twin pregnancies who had a second trimester amniocentesis for chromosome study at the Maternal Fetal Medicine Unit, Department of Obstetrics and Gynecology, Songklanagarind Hospital from January 1998 through June 2006 to assess the outcome including risk of fetal loss in such cases. Results: Advanced maternal age was the most common indication for amniocentesis. The success rate of cell culture was 100%. The fetal loss within 14 days after the procedure was 1.4%. Conclusion: The tendency of fetal loss after amniocentesis in twin pregnancies was higher than in singletons. Pre-procedure counseling personnel should be aware of this potential outcome, and be sure to inform the involved parents prior to the procedure. abstract_id: PUBMED:17238217 Age-specific risk of fetal loss post second trimester amniocentesis: analysis of 5043 cases. Objectives: To assess the risk of fetal loss attributable to second trimester amniocentesis in singleton pregnancies through a cross-sectional study. Methods: Records of 5043 consecutive second trimester amniocentesis, performed by a single operator between 1997 and 2003, were analyzed. Fetal loss post amniocentesis was calculated by grouping pregnant women in age classes and assessing observed/expected (O/E) rate. Results: Total fetal losses were 40 (0.81%): 33 cases (0.67%) occurred before the 24th week, 37 cases (0.76%) before the 28th gestational week, and 3 cases (0.06%) after the 28th week of pregnancy. An age-dependent increase of the rate of fetal loss, not statistically significant (Chi-Square = 0.349, p = 0.505) was observed. The total O/E ratio values did not show any statistically significant risk (O/E ratio = 1.25, CI = 0.86-1.64). The analysis of the single age classes did not detect any statistical significance. The excess fetal loss rate associated with amniocentesis was 0.16%. Conclusions: No effect of the 2nd trimester amniocentesis was noted on fetal loss. abstract_id: PUBMED:22256466 Outcome of second trimester amniocentesis in singleton pregnancy at Songklanagarind Hospital. Objective: To evaluate the outcome of genetic amniocentesis in singleton gestation at Songklanagarind Hospital. Material And Method: This was a descriptive study that included all singleton pregnant women who had a second trimester amniocentesis for chromosome studies at the Maternal Fetal Medicine Unit, Department of Obstetrics and Gynecology, Songklanagarind Hospital between January 1998 and 2006 to assess the outcome including risk of fetal loss in such cases. Results: Advanced maternal age was the most common indication for amniocentesis. Amniocentesis after positive screening aneuploidy test increased. The fetal loss within 14 days after the procedure was 0.12% (10/8,073). Leakage of amniotic fluid occurred 0.1% (5/8,073) but only one case aborted. Fever occurred in two cases and a case of chorioamnionitis was diagnosed. Conclusion: The rate of fetal loss within 14 days after amniocentesis in singleton pregnant women was lower than the authors' previous 10 years. The anxiety of the family and difficulty of counseling to the family will be reduced because of a lower complication rate. Moreover the pregnant women who have a high degree of anxiety for genetic abnormalities are a likely choice for genetic amniocentesis if the risk rate regarding the patient's age, ultrasonographic finding, or maternal serum screening is around 0.1%. Answer: Fetal hyperechoic bowel observed on a second-trimester sonogram can be an indication for amniocentesis, especially when considering the potential association with chromosomal abnormalities such as Down syndrome. A study found that among fetuses with hyperechoic bowel, 16% were aneuploid, including cases of Down syndrome, trisomy 13, and Turner syndrome. All aneuploid fetuses had additional sonographic anomalies besides the echogenic bowel. The study also noted that isolated hyperechoic bowel could indicate a 1.4% risk of Down syndrome, suggesting that the finding should prompt genetic counseling and consideration of karyotypic analysis (PUBMED:8164918). The role of the second-trimester genetic sonogram, which includes the assessment of hyperechoic bowel among other markers, is to screen for fetal Down syndrome. The presence of sonographic markers can raise the baseline risk of Down syndrome, and the absence of markers can reduce the risk. Approximately 75% of fetuses with Down syndrome can be identified by modifying the patient's baseline risk according to the ultrasound results (PUBMED:16533652). Therefore, while hyperechoic bowel on its own may not be a definitive indication for amniocentesis, it is a marker that, when combined with other factors or sonographic findings, may lead to the recommendation for further genetic testing, including amniocentesis, to determine the presence of chromosomal abnormalities. It is important for patients to receive genetic counseling to understand the implications of the sonographic findings and to make informed decisions regarding amniocentesis (PUBMED:8164918).
Instruction: Does acute aortic dissection display seasonal variation? Abstracts: abstract_id: PUBMED:19683997 Does acute aortic dissection display seasonal variation? Background: The relation between cardiovascular diseases and the seasons is well known; however, only a few reports have addressed the seasonal aspects of acute aortic dissections. We investigated whether presentation of acute aortic dissection has monthly/seasonal variations. Methods: From February 1985 to January 2006, 165 consecutive patients with aortic dissection admitted to our institution were reviewed. During this period, regional monthly atmospheric pressure data were supplied by the state's meteorological service. The mean and SD of atmospheric pressure data were analyzed statistically. Results: The frequency of acute aortic dissection was found to be significantly higher during winter versus other seasons (P = .041). A relatively high positive correlation was found between the incidence of acute aortic dissection and the mean atmospheric pressure (P = .037). The study confirmed monthly variation with a peak in January. In winter, the frequency of acute aortic dissection was higher in male than in female patients. Conclusions: This study demonstrates that the occurrence of acute aortic dissection has significant seasonal/monthly variations. Thus, these observations may be a guide for prevention of acute aortic dissections by structuring treatment approaches with consideration given to the times of the year that patients are most vulnerable. abstract_id: PUBMED:37209929 Impact of seasonal variation on hospital admission and in-hospital mortality of acute cardiovascular diseases: a contemporary nationwide database study. Purpose: Whether acute cardiovascular diseases (CVDs), such as acute heart failure (AHF), acute myocardial infarction (AMI), and acute aortic dissection (AAD), have distinct seasonal variations in the number of hospitalizations and in-hospital mortality was investigated using a nationwide database in Japan. Methods: The hospitalized patients with AHF, AMI, and AAD between April 2012 and March 2020 were identified. Multilevel mixed-effects logistic regression was conducted and adjusted odds ratio (aOR) was calculated. Also, the Poisson regression model was conducted to calculate the peak-to-trough ratio (PTTR) with peak month. Results: Patients identified were as follows: 752,434 AHF patients (median age, 82 years; male, 52.2%), 346,110 AMI patients (median age, 71 years; male, 72.2%), and 118,538 AAD patients (median age, 72 years; male, 58.0%). The monthly proportion of hospitalized patients was the highest in winter and the lowest in summer in all three diseases. Based on aOR, 14-day mortality was the lowest in spring for AHF, summer for AMI, and spring for AAD. Furthermore, the PTTRs with peak month were 1.24 for AHF in February, 1.34 for AMI in January, and 1.33 for AAD in February. Conclusions: A clear seasonal pattern was observed in the number of hospitalizations and in-hospital mortality of all acute CVDs, independent of confounders. abstract_id: PUBMED:16021847 Does circadian and seasonal variation in occurrence of acute aortic dissection influence in-hospital outcomes? The risk of acute aortic dissection (AAD) exhibits chronobiological variations with peak onset in the morning and in winter. However, it is not known whether the time of day or season of the year of the AAD affects clinical outcomes. We studied 1,032 patients enrolled in the International Registry of Acute Aortic Dissection from January 1997 to December 2001. For circadian and seasonal analysis, the time and date of symptom onset were available for 741 and 1,007 patients, respectively, and were grouped into four 6h periods (morning, afternoon, evening, and night) and four seasons (winter, spring, summer, and autumn). The chi2 test for goodness of fit was used to evaluate non-uniformity of the time of day and time of year for critical in-hospital clinical events, including death. While highest incidence of AAD occurred in the morning and winter, clinical events (including mortality) were similar during the four different periods of the 24 h (chi2 = 1.9, p = 0.60) and seasonal (chi2 = 1.2, p = 0.75) periods. abstract_id: PUBMED:10481426 Seasonal variation in the incidence of acute aortic dissection in Yokohama In order to determine any possible influence of climatic variables on the incidence of acute aortic dissection (AAD), we reviewed all patients with a diagnosis of AAD treated at Yokohama City University Urafune Hospital between January 1994 and December 1997 retrospectively. One hundred and eight patients with AAD were admitted into the hospital during the period. Forty-two patients (39%) developed AAD during the winter, when the lowest temperature was below 6 degrees C. Thirty-one patients (29%) developed AAD in May or October, when the lowest temperature was almost 15 degrees C. We suspect that the incidence of AAD was influenced not only by the atmospheric temperature but also by other factors, for example, the change of the atmospheric pressure. abstract_id: PUBMED:16394282 Seasonal variation in mortality from myocardial infarction and haemopericardium. A postmortem study. Background: Seasonal variation in the incidence of and mortality from myocardial infarction (MI) has been well recognised for many years. Haemopericardium (HP) is usually a fatal complication of MI. No data exist in the literature with regard to the seasonal variation in mortality from HP. Aims: To perform a necropsy based study to confirm seasonal variation in mortality from MI in a London population and to determine whether seasonal variation in mortality from HP can be established. Methods: Postmortem causes of death issued by several pathologists, working in two public London mortuaries over a five year period from 1999 to 2004 were analysed. Deaths caused by HP secondary to traumatic or iatrogenic causes were specifically excluded, as were deaths caused by HP secondary to bicuspid aortic valve and aortic dissection. The results were subdivided into winter (1 November to 31 March) and summer (1 April to 31 October). Results: In total, there were 2266 cases of MI and 135 cases of HP. Significantly more deaths from HP (83 of 135; 61.5%; p = 0.004) and MI (1051 of 2266; 46.4%; p = 0.016) occurred in the five month winter period. Furthermore, there was a significantly higher incidence of HP compared with MI during the winter (83/1051; 7.9%) than the summer (52/1215; 4.3%; p&lt;0.001). There was no significant difference in the age or sex of patients dying in either the winter or summer. Conclusion: There is seasonal variation in mortality from both MI and HP in the London population, as confirmed by a postmortem study. abstract_id: PUBMED:29404804 Variation in emergency department use of computed tomography for investigation of acute aortic dissection. Introduction: Acute aortic dissection (AAD) is a life-threatening condition making early diagnosis critical. Although 90% present with acute pain, the myriad of associated symptoms can make diagnosis a challenge. Our objective was to assess how we are using computed tomography to rule out acute aortic dissection specifically rate of ordering, diagnostic yield, and variation in practice. Methods: We included consecutive adult patients presenting to two tertiary academic care emergency departments over one calendar year presenting with non-traumatic chest, back, abdominal, or flank pain. Primary outcome was rate of CT thorax/abdomen ordered to rule out AAD. Secondary outcome was variation in CT ordering, measured comparing number of CTs ordered per physician. Sample size of 12 per group was calculated based on an expected delta in mean CT ordered of 5 and a within group SD of 3. Results: Thirty-one thousand two hundred one patients presented with truncal pain during the study period, 22,729 were included (mean 47 years, SD 18.5 years, 56.2% female); prevalence of AAD (N = 4) was 0.02%. CT was ordered to rule out AAD in 175 (0.7%) patients (mean 62 years, SD 16.5, 50.6% female). Significant variation between physicians ordering was found, with individual physicians ordering varying from 0.6 to 12%. Conclusions: Current rate of imaging for acute aortic dissection is low and potentially inefficient, with a large variation in practice. These findings suggest potential for more standardized and efficient use of CT for the diagnosis of acute aortic dissection. abstract_id: PUBMED:28865779 Chronobiology of Acute Aortic Syndromes. Acute aortic syndromes are highly morbid conditions that require prompt diagnosis and management. Aortic dissections have rhythmic patterns, with notable peaks at certain points in every 24 hours as well as weekly and seasonal variations. Several retrospective studies have assessed the chronobiology of acute aortic dissections and there seems to be a winter seasonal peak and morning daily peak in incidence. Although the pathophysiology of this chronobiology is unclear, there are several environmental and physiologic possibilities. This article reviews the major studies examining the chronobiology of acute aortic dissection, and summarizes some theories on the pathophysiology of this phenomenon. abstract_id: PUBMED:25902880 Circadian variation of acute aortic dissection. Acute aortic dissection (AAD) is a life-threatening cardiovascular disease with high mortality. Hypertension is a well known risk factor of AAD. There have been previous reports about the association between circadian variation of blood pressure (BP) and cardiovascular events. However, little is known about the association between the onset-time of AAD and circadian variation of BP. The purpose of this study was to clarify the characteristics of circadian variation of BP in AAD and its relation to the onset-time of this disease. This study included type B spontaneous AAD patients who were referred to our institution and treated conservatively between January 2008 and June 2013. Patients with type A AAD, secondary to trauma, and type B AAD which preceded surgical intervention were excluded. Data were retrospectively collected from the hospital medical records. Sixty-eight patients with type B AAD were enrolled. The distribution of the circadian pattern in the study patients was as follows: extreme-dipper, 0% (none); dipper, 20.6% (n = 14); nondipper, 50% (n = 34); riser, 29.4% (n = 20). Non-dipper and riser patterns were more frequently observed compared with other population studies reported previously. Moreover, no patient in the dipper group had night-time onset while 31.5% of the patients in the absence of nocturnal BP fall group (non-dipper and riser) did (P = 0.01). Absence of a nocturnal BP fall was frequently seen in AAD patients. Absence of a nocturnal BP fall may be a risk factor of AAD. Circadian variation of BP may also affect the onset-time of type B AAD. abstract_id: PUBMED:37029426 Seasonal and daily variations in the occurence and outcomes of acute Stanford type A dissections: a retrospective single-center study. Background: To investigate the seasonal, monthly, and daily distributions of the incidence of Stanford type-A acute aortic dissection (TAAAD) and identify seasonality in the duration of hospital stay and in-hospital mortality of TAAAD in south-eastern China. Methods: We enrolled patients diagnosed with TAAAD between 1 June 2017 and 31 May 2021. Participants were divided into seasonal, monthly, and daily groups according to the need for analysis. Analysis of variance was applied to compare the number of TAAAD in different seasons, months, and days. χ2 test was used to compare in-hospital mortality among the four groups. Non-parametric methods were used for all comparisons of the duration of hospital stay. Univariate logistic and multivariable logistic regression analyses were performed to assess the duration of hospital stay. Results: Of the 485 patients, 154 were diagnosed in winter (31.8%), 115 in spring (23.7%), 73 in summer (15.1%), and 143 in autumn (29.5%). The daily, monthly, and seasonal distributions of TAAAD were significantly different (P = 0.04, P &lt; 0.01, and P &lt; 0.01, respectively). This study did not identify any significant decrease in maximal, mean, or minimum temperatures between the three days before TAAAD and the day of TAAAD. No seasonal variations associated with in-hospital mortality was observed (P = 0.89). However, significant differences were observed in the seasonal distribution of the duration of hospital stay for TAAAD [winter was 17.0 (4.0-24.0) days, spring was 20.0 (14.0-29.0) days, summer was 20.0 (12.5-31.0) days, and autumn was 20.0 (13.0-30.0) days, P &lt; 0.01]. Multiple factor analysis showed that winter was the independent risk factor for the increased duration of hospital stay. The odds ratio for winter was 2.21 (1.46, 3.33, P &lt; 0.01). Conclusions: Our study confirmed that the incidence of TAAAD exhibits seasonal, monthly, and daily variations in south-eastern China. Moreover, the daily incidence of TAAAD is higher on weekdays than that on weekends. abstract_id: PUBMED:34496919 Effects of seasonal and climate variations on in-hospital mortality and length of stay in patients with type A aortic dissection. Objective: To investigate the effects of seasonal and climatic changes on postoperative in-hospital mortality and length of stay (LOS) in patients with type A acute aortic dissection (AAD). Methods: Patients undergoing implantation of the modified triple-branched stent graft to replace the descending aorta in addition to aortic root reconstruction for type A AAD in our hospital from January 2016 to December 2019 were included. Relevant data were retrospectively collected and analyzed. Results: A total of 404 patients were included in our analyses. The multivariate unconditional logistic regression analysis showed that patients admitted in autumn (OR 4.027, 95% CI 1.023-17.301, P = 0.039) or with coronary heart disease (OR 8.938, 95% CI 1.991-29.560, P = 0.049) were independently associated with an increased risk of postoperative in-hospital mortality. Furthermore, patients admitted in autumn (OR 5.956, 95% CI 2.719-7.921, P = 0.041) or with hypertension (OR 3.486, 95% CI 1.192-5.106, P = 0.035) were independently associated with an increased risk of longer LOS. Conclusion: Patients admitted in autumn or with coronary heart disease are at higher risk of in-hospital mortality following surgery for type A AAD. Also, patients admitted in autumn or with hypertension have a longer hospital LOS. In the autumn of the temperature transition, we may need to strengthen the management of medical quality after surgery for type A AAD. Answer: Yes, acute aortic dissection (AAD) does display seasonal variation. Several studies have found that the incidence of AAD is higher during certain times of the year, particularly in the winter months. For instance, one study found a significantly higher frequency of acute aortic dissection during winter compared to other seasons, with a peak in January, and a positive correlation between the incidence of AAD and mean atmospheric pressure (PUBMED:19683997). Another study using a nationwide database in Japan also observed that the monthly proportion of hospitalized patients for AAD was highest in winter and lowest in summer (PUBMED:37209929). Similarly, a retrospective single-center study in south-eastern China confirmed that the incidence of Stanford type-A acute aortic dissection exhibits seasonal, monthly, and daily variations, with a higher incidence in winter (PUBMED:37029426). Furthermore, a study that reviewed patients with AAD treated at Yokohama City University Urafune Hospital suggested that the incidence of AAD might be influenced by atmospheric temperature and possibly other factors such as changes in atmospheric pressure (PUBMED:10481426). Chronobiological studies have also indicated that AAD has rhythmic patterns with a winter seasonal peak and a morning daily peak in incidence (PUBMED:29404804). Another study found no significant seasonal variation in in-hospital mortality for AAD, but did note the highest incidence of AAD in the morning and winter (PUBMED:16021847). In summary, the evidence from multiple studies supports the conclusion that acute aortic dissection has a seasonal variation, with a higher incidence during the winter months.
Instruction: Does resting two-dimensional echocardiography identify patients with ischemic cardiomyopathy and low likelihood of functional recovery after coronary revascularization? Abstracts: abstract_id: PUBMED:15238824 Does resting two-dimensional echocardiography identify patients with ischemic cardiomyopathy and low likelihood of functional recovery after coronary revascularization? Objective: To evaluate the potential of a simple and widely available technique as two-dimensional (2D) echocardiography to identify patients with ischemic cardiomyopathy and low likelihood of functional recovery after coronary revascularization. Methods: Two-dimensional echocardiography and radionuclide ventriculography (RNV) were performed before coronary revascularization in 94 patients with ischemic cardiomyopathy. Left ventricular ejection fraction (LVEF) was measured by RNV. Regional wall motion abnormalities, wall motion score index, end-diastolic wall thickness (EDWT), left ventricular (LV) volumes and LV sphericity index were assessed in the echocardiographic images. RNV was repeated 9-12 months after revascularization to assess LVEF change; an improvement &gt;or=5% was considered clinically significant. Results: Nine hundred and ninety-nine segments were severely dysfunctional; 149 out of 999 (15%) had an EDWT &lt;or=6 mm. A severe enlargement of the end-diastolic volume index (EDVI) (&gt;or=100 ml/ml) and of the end-systolic volume index (&gt;or=80 ml) was present in 32 (34%) and 21 (22%) patients, respectively. A spherical shape of the LV was observed in 35 (37%) patients. LVEF after revascularization increased in 30 out of 94 patients (32%) from 30+/-8% to 39+/-9% (P&lt;0.0001). On multivariate analysis, the EDVI was the only predictor of no recovery in LVEF [odds ratio, 1.06, confidence interval (CI), 1.04-1.1, P&lt;0.0001]. The cut-off value of EDVI &gt;or=90 ml/ml accurately identified patients that virtually never recover. Post-operatively, LVEF increased in three out of 42 (7%, 95% CI 0-15%) patients with EDVI &gt;or=90 ml/ml as compared to 27 out of 52 (52%) patients with EDVI&lt;90 ml/ml (P&lt;0.0001). Conclusions: In patients with ischemic cardiomyopathy and severe LV enlargement, improvement of LVEF after revascularization is unlikely to occur. Conversely, in patients with relatively preserved LV size, a higher likelihood of functional recovery may be anticipated. abstract_id: PUBMED:7586253 Comparison of myocardial contrast echocardiography and low-dose dobutamine stress echocardiography in predicting recovery of left ventricular function after coronary revascularization in chronic ischemic heart disease. Background: Dobutamine stress echocardiography (DSE) and myocardial contrast echocardiography (MCE) can predict recovery of left ventricular function after myocardial infarction. DSE also has been shown to predict left ventricular functional recovery after revascularization in chronic ischemic heart disease, whereas MCE has not been evaluated in such patients. This study was performed to compare DSE and MCE in the prediction of left ventricular functional recovery after revascularization in patients with chronic ischemic heart disease. Methods And Results: MCE and DSE were performed in 35 patients with chronic coronary artery disease and significant wall motion abnormalities (mean ejection fraction, 0.36 +/- 0.09). Regional wall motion was scored by use of a 16-segment model wherein 1 = normal or hyperkinetic, 2 = hypokinetic, 3 = akinetic, and 4 = dyskinetic. Each segment was evaluated for contractile reserve by DSE and perfusion by MCE. Revascularization (coronary artery bypass graft [n = 13] and percutaneous transluminal coronary angioplasty [n = 10]) was successful in 23 patients. Follow-up echocardiograms were done to assess wall motion 30 to 60 days later. In 238 segments with resting wall motion abnormalities, perfusion was more likely to present than contractile reserve (97% versus 91%, P &lt; .02). Revascularization resulted in functional recovery in 77 of 95 hypokinetic segments (81%) but only 18 of 57 akinetic segments (32%, P &lt; .0001). DSE and MCE were not significantly different in predicting functional recovery of hypokinetic segments. In akinetic segments, DSE and MCE had similar sensitivities (89% versus 94%, respectively) and negative predictive values (93% and 97%, respectively) in predicting functional recovery. However, DSE had a higher specificity (92% versus 67%, P &lt; .02) and positive predictive value (85% versus 55%, P &lt; .02) than MCE in predicting functional recovery. Conclusions: Both contractile reserve by DSE and perfusion by MCE are predictive of functional recovery in hypokinetic segments after coronary revascularization in patients with chronic coronary revascularization in patients with chronic coronary artery disease. In akinetic segments, myocardial perfusion by MCE may exist in segments that do not recover contractile function after revascularization. Thus, contractile reserve during low-dose dobutamine infusion is a better predictor of functional recovery after revascularization in akinetic segments than perfusion. abstract_id: PUBMED:7743617 Dobutamine echocardiography predicts improvement of hypoperfused dysfunctional myocardium after revascularization in patients with coronary artery disease. Background: In patients with coronary artery disease, dysfunctional hypoperfused myocardium at rest may represent either necrotic or viable hibernating myocardium. The accuracy of inotropic stimulation in identifying hypoperfused, reversibly dysfunctional myocardium has not been extensively investigated. Methods And Results: Eighteen patients with stable chronic coronary artery disease underwent, while off drugs, quantitative 201Tl single-photon emission computed tomography after rest injection (2 to 3 mCi), two-dimensional echocardiography at rest and during dobutamine (5 to 10 micrograms/kg per minute i.v.), and radionuclide angiography. Single-photon emission computed tomography and echocardiography at rest were repeated 34 +/- 10 days after coronary revascularization, and radionuclide angiography was repeated 45 +/- 13 days after revascularization. Resting hypoperfusion was defined as 201Tl uptake &lt; 80% of maximal activity. Systolic function was scored from 1 (normal) to 4 (dyskinesia), and functional improvement was defined as a score change &gt; 1 grade. Of 79 dysfunctional hypoperfused segments, 48 (61%) improved function after revascularization. In 42 (88%) of these latter segments, function had improved during dobutamine. Conversely, systolic function after revascularization did not improve in 31 segments, and in 27 (87%), it had not improved during dobutamine. Functional improvement after revascularization was observed in 42 (91%) of 46 segments manifesting an improvement during dobutamine as opposed to 6 (18%) of 33 segments that did not improve during dobutamine. Resting 201Tl uptake (% of maximal activity) before revascularization (65 +/- 9%) significantly increased at follow-up in segments where function improved (70 +/- 12%, P &lt; .005), whereas it did not change significantly in segments with unchanged systolic function after revascularization (from 57 +/- 13% to 60 +/- 17%, P = NS). In 10 patients with prerevascularization ejection fraction &lt; 45%, left ventricular ejection fraction significantly increased from 36 +/- 7% before revascularization to 42 +/- 7% at follow-up (P &lt; .05). Conclusions: Inotropic stimulation using dobutamine echocardiography identifies hypoperfused reversibly dysfunctional myocardium. Functional improvement during dobutamine is highly predictive of improvement after revascularization. abstract_id: PUBMED:18067761 Real-time myocardial contrast echocardiography can predict functional recovery and left ventricular remodeling after revascularization in patients with ischemic heart disease. Background: Previous studies showed that preservation of microvascular integrity after myocardial ischemia was associated with myocardial viability. Real-time myocardial contrast echocardiography (RT-MCE) is a promising modality for non-invasive evaluation of microcirculation perfusion. Thus, it provides a unique tool to detect myocardial viability. We sought in this study to investigate the role of RT-MCE in predicting left ventricular (LV) functional recovery and remodeling after revascularization in patients with ischemic heart disease. Methods: Thirty-one patients with ischemic heart disease and resting regional LV dysfunction were included. LV volume, global and regional function were evaluated by echocardiography before and 6 - 9 months after revascularization. RT-MCE was performed before revascularization using low mechanical index power modulation imaging. Myocardial contrast opacification of dysfunctional segments was scored on a 3-point scale and mean contrast score in dysfunctional segments was calculated. Patients were divided into 2 groups according to mean contrast score in dysfunctional segments: group A, patients with mean contrast score = 0.5 (n = 19); group B, patients with mean contrast score &lt; 0.5 (n = 12). Results: Wall motion improvement was found to be 94.5%, 45.5% and 16.1% respectively (P &lt; 0.01) in homogenous, patchy and absent contrast opacification segments. At baseline, there was no significant difference in LV volume and global function between the two groups. After revascularization, group B had significantly larger LV end-diastolic volume (LVEDV) and LV end-systolic volume (LVESV), lower LV ejection fraction (LVEF) and higher wall motion score index (WMSI) than those of group A (all P &lt; 0.05). Revascularization was followed by significant improvement of LV volume and recovery of global LV function in group A (all P &lt; 0.01); however, in group B, after revascularization, deterioration of LVEDV (P &lt; 0.05) was observed, moreover LVESV, WMSI and LVEF did not change significantly. Conclusions: The maintenance of myocardial microcirculation detected by RT-MCE can predict functional recovery and LV remodeling after revascularization in patients with ischemic heart disease, which might be helpful in clinical decision-making and risk stratification. abstract_id: PUBMED:14969609 Comparison of functional recovery of mildly hypokinetic versus severely dysfunctional left ventricular segments after revascularization in patients with ischemic cardiomyopathy. Dysfunctional left ventricular (LV) segments showing contractile reserve during dobutamine stress echocardiography (DSE) are considered viable myocardium; functional recovery is expected after revascularization. Many segments, however, particularly mildly hypokinetic segments, do not recover. The reason for this failure is unknown. Two-dimensional echocardiography at rest and low-high-dose DSE were performed before revascularization in 114 consecutive patients with ischemic cardiomyopathy. Two-dimensional echocardiography at rest was repeated after 9 to 12 months. Segmental function was scored by a 5-point grading score. Functional recovery after revascularization was assessed in mildly hypokinetic (score 2, group I) and severely dysfunctional segments (score 3 to 5, group II). For each segment, functional recovery was defined as an improvement in functional score of &gt; or =1 grade compared with the baseline score at rest. During low-dose DSE (up to 10 microg/kg/min), 183 group I segments (68%) and 438 group II (39%) segments had contractile reserve (p &lt;0.0001). However, functional recovery was observed less frequently in group I segments (41%) than in group II segments (55%) with contractile reserve (p &lt;0.005). During high-dose DSE (up to 40 microg/kg/min), in the group I segments with contractile reserve at the low dose, the sustained improvement pattern (indicating subendocardial scar) was prevalent (73%). After revascularization, 73% of segments with sustained improvement did not recover. Conversely, the biphasic response (indicating ischemically jeopardized myocardium) was observed only in 27% of group I segments. Functional recovery occurred in 39 of these segments (78%) (p &lt;0.001 vs sustained improvement). Hence, mildly hypokinetic segments probably indicate the presence of subendocardial scars, and may explain the failure in functional recovery after revascularization. abstract_id: PUBMED:12804724 Individual prediction of functional recovery after coronary revascularization in patients with ischemic cardiomyopathy: the scar-to-biphasic model. Currently, the prediction of improvement of left ventricular (LV) ejection fraction (EF) after revascularization in patients with ischemic cardiomyopathy relies only on viable myocardium extent, whereas both the amount of viable and scar tissue may be important. A model was developed, based on the amount of viable and nonviable myocardium, to predict functional recovery. Viable and scarred myocardium was defined by dobutamine stress echocardiography (DSE) in 108 consecutive patients. LVEF before and 9 to 12 months after revascularization was assessed by radionuclide ventriculography; an improvement of &gt; or =5% was considered significant. In the 1,089 dysfunctional segments (63%), DSE elicited biphasic response in 216 segments (20%), sustained improvement in 205 (19%), worsening in 43 (4%), and no change in 625 (57%). LVEF improved in 39 patients (36%). Only the numbers of biphasic and scar segments were predictors of improvement or no improvement of LVEF (odds ratio 1.5, 95% confidence interval 1.2 to 1.7, p &lt;0.0001 for biphasic segments; odds ratio 0.8, 95% confidence interval 0.7 to 0.9, p &lt;0.0005 for scarred segments). The sustained improvement and worsening pattern were not predictive of improvement or no improvement. A regression function, based on the number of scar and biphasic segments, showed that the likelihood of recovery was 85% in patients with extensive biphasic tissue and no scars and 11% in patients with extensive scars and no biphasic myocardium. Patients with a mixture of scar and biphasic tissue had an intermediate likelihood of improvement (50%). In patients with ischemic cardiomyopathy and a mixture of viable and nonviable tissue, both numbers of viable and nonviable segments should be considered to accurately predict functional recovery after revascularization. abstract_id: PUBMED:28833528 Intraoperative assessment of transient and persistent regional left ventricular wall motion abnormalities in patients undergoing coronary revascularization surgery using real time three-dimensional transesophageal echocardiography: A prospective observational study. Objective: To evaluate the feasibility of intraoperative real time three-dimensional echocardiography (RT3DE) for identification and quantification of transient and persistent regional wall motion abnormalities (RWMAs) in patients with ischemic cardiomyopathy undergoing coronary artery bypass grafting (CABG). Design: A prospective observational study SETTING: Single-center study in an academic tertiary care hospital PARTICIPANTS: A series of 42 patients undergoing elective CABG over a 2-year period were included. Intervention: After induction of anesthesia, a comprehensive transesophageal echocardiography (TEE) examination was performed to evaluate regional wall motion using two-dimensional wall motion score index (WMSI) and RT3D echocardiographic parameters at three specific time points during the operative phase. Measurements And Main Results: The 3D assessment of LV function was based on the quantification of change in LV chamber volume over time from each segment excursion. Patients were divided into two groups and subgroups based on TEE findings. There was significant mechanical dyssynchrony in patients with RWMAs (WMSI &gt; 1; systolic dyssynchrony index [SDI] = 7.0 ± 3.66) as compared to the patients having normal wall motion (WMSI = 1; SDI = 2.0 ± 0.95; P = .001). Patients with contractile dysfunction were found to have low values of segmental excursion and high values of negative excursion on parametric imaging. Persistent RWMAs due to hibernating myocardium showed significant resolution of mechanical dyssynchrony after revascularization. Parametric imaging could detect transient RWMAs due to stunning and graft dysfunction. Early activating segments (EAS) on "timing bull's-eye" may represent hypercontractile segments and may influence inotrope administration. Conclusion: The RT3DE is a valuable modality for precise quantification of regional wall motion during revascularization procedure. abstract_id: PUBMED:12354702 Perfusion and contractile reserve in chronic dysfunctional myocardium: relation to functional outcome after surgical revascularization. Background: Chronic dysfunctional but viable myocardium may exhibit contractile reserve and/or intact perfusion. Segments with intact perfusion without contractile reserve are frequently observed inpatients with ischemic cardiomyopathy. The clinical relevance of this observation is unclear; in particular, the functional outcome after revascularization is unknown. Thus, contractile reserve (using low-dose dobutamine echocardiography) and perfusion (using resting (99m)Tc tetrofosmin) were evaluated in 114 patients with ischemic cardiomyopathy and the findings were related to functional outcome (9 to 12 months after revascularization). Methods And Results: Patients (n=114) with ischemic cardiomyopathy undergoing surgical revascularization were evaluated for perfusion (using (99m)Tc tetrofosmin) and contractile reserve (using low-dose dobutamine echocardiography). Contractile function (two-dimensional echocardiography) was assessed before and 9 to 12 months after revascularization. In the 1 336 dysfunctional segments, perfusion was preserved in 51% of the segments and contractile reserve in 31% (P&lt;.05); 47% of the segments with perfusion did not exhibit contractile reserve. The majority (66%) of segments with recovery of function postrevascularization had intact perfusion and contractile reserve; the majority (58%) of segments without functional recovery lacked both perfusion and contractile reserve. Interestingly, 22% of segments with functional recovery and 25% of segments without functional recovery showed intact perfusion without contractile reserve. Conclusion: Segments with intact perfusion/contractile reserve have a high likelihood of recovery of function postrevascularization; segments without contractile reserve/perfusion have a low likelihood of recovery and segments with intact perfusion without contractile reserve have an intermediate likelihood of recovery. abstract_id: PUBMED:12411906 Perioperative tissue Doppler echocardiography and bypass graft flowmetry in patients undergoing coronary revascularization: predictive power for late recovery of regional myocardial function. To assess if recovery of regional myocardial function can be predicted by perioperative Doppler tissue echocariography, 20 patients undergoing coronary artery bypass graft surgery (mean 4.1 grafts) had serial transthoracic and transesophageal echocardiography. Systolic velocities were lower in basal posterior and anterior segments, and higher in the midanterior septum, at intraoperative transesophageal echocardiography compared with preoperative transthoracic echocardiography (-45%, -30%, +18%, respectively), but comparable elsewhere. After bypass, velocities were increased in midposterior, basal lateral, basal anterior, and midanterior septal segments (+41%, 25%, 27%, 44%, respectively, P &lt;.05). Increased velocities in circumflex segments at 6 weeks (midposterior, basal lateral, and midlateral +54%, 45%, 39%, respectively, P &lt;.05) were not predicted by perioperative changes or related to graft flow (transit-time flowmetry; R -0.09, -0.14, and -0.32, respectively, not significant). Myocardial velocities measured by transthoracic and transesophageal echocardiography are not comparable in segments with different angles of insonation. Perioperative changes in resting myocardial systolic velocity are highly variable and do not predict late recovery. abstract_id: PUBMED:22379128 Persistent dysfunction of viable myocardium after revascularization in chronic ischaemic heart disease: implications for dobutamine stress echocardiography with longitudinal systolic strain and strain rate measurements. Aims: Studies of functional recovery after revascularization in chronic coronary artery disease are contradictory and mark a lack of knowledge of persistent dysfunction in the non-scarred myocardium. Based on tissue Doppler-derived regional longitudinal systolic strain and strain rate (SR), both at rest and during dobutamine stress (DS), we assessed to what extent ischaemia-related reduced myocardial function would recover after revascularization in hearts with predominantly viable myocardium. Methods And Results: Reference peak systolic strain and SR values were determined from tissue Doppler imaging in 15 healthy volunteers. Fifty-seven patients scheduled for coronary artery bypass grafting (CABG), with an average ejection fraction of 49%, underwent pre-operative magnetic resonance imaging (MRI) with late enhancement, resting echocardiography, and DS echocardiography (DSE), with assessment of systolic strain and SR and post-systolic strain (PSS). Eight to 10 months after CABG, myocardial function was reassessed. Forty per cent of all segments had reduced longitudinal systolic strain pre-operatively despite only 1.4% of segments with transmural infarctions on MRI. After revascularization, 38% of prior dysfunctional segments improved their resting strain, whereas 72% were improved by DS. Positive resting systolic strain indicated the absence of significant scar tissue. Resting systolic strain and DS strain responses were good prognosticators for functional improvement with areas under the receiver operating characteristic curve of 0.753 (0.646-0.860) and 0.790 (0.685-0.895), respectively. Conclusion: Persistently reduced longitudinal function was observed in more than half of pre-operatively viable but dysfunctional segments after CABG. We propose that such a functional impairment marks a regional remodelling process not amendable to re-established blood flow. Answer: Yes, resting two-dimensional echocardiography can identify patients with ischemic cardiomyopathy and a low likelihood of functional recovery after coronary revascularization. A study found that in patients with ischemic cardiomyopathy and severe left ventricular (LV) enlargement, improvement of LV ejection fraction (LVEF) after revascularization is unlikely to occur. Conversely, in patients with relatively preserved LV size, a higher likelihood of functional recovery may be anticipated. The end-diastolic volume index (EDVI) was the only predictor of no recovery in LVEF, and a cut-off value of EDVI ≥90 ml/ml accurately identified patients that virtually never recover (PUBMED:15238824). Additionally, other studies have shown that contractile reserve by dobutamine stress echocardiography (DSE) and perfusion by myocardial contrast echocardiography (MCE) are predictive of functional recovery in hypokinetic segments after coronary revascularization in patients with chronic coronary artery disease. In akinetic segments, contractile reserve during low-dose dobutamine infusion is a better predictor of functional recovery after revascularization than perfusion (PUBMED:7586253). Dobutamine echocardiography has also been found to predict improvement of hypoperfused dysfunctional myocardium after revascularization in patients with coronary artery disease (PUBMED:7743617). Real-time myocardial contrast echocardiography (RT-MCE) has been shown to predict functional recovery and LV remodeling after revascularization in patients with ischemic heart disease, indicating that the maintenance of myocardial microcirculation detected by RT-MCE can be predictive (PUBMED:18067761). In summary, resting two-dimensional echocardiography, particularly when combined with other modalities such as DSE, MCE, and RT-MCE, can be a useful tool in identifying patients with ischemic cardiomyopathy who have a low likelihood of functional recovery after coronary revascularization.
Instruction: Analysis of third body particles generated during total knee arthroplasty: is metal debris an issue? Abstracts: abstract_id: PUBMED:18255297 Analysis of third body particles generated during total knee arthroplasty: is metal debris an issue? Purpose: There is a concern amongst knee surgeons that during total knee arthroplasty (TKA) certain particles could be generated which might become left inside the knee, and which could become a possible cause of third body wear. Especially during the sawing process, when an oscillating saw is used through slots in a cutting block, there is a potential cause for metal debris generation. So far however, no data exist to substantiate and quantify the amount of metal debris left in situ after TKA, and it was the purpose of this study to evaluate this. Methods: In seven consecutive patients undergoing primary cemented TKA, we measured the amount of debris that was present at the end of the operation immediately before closure. The debris was retrieved using pulsed irrigation with 1 l of normal saline, which was then collected and centrifuged. Obtained pellets were subsequently washed and chemically treated to quantify the different components present in the debris. Results: Overall an average amount of 134.9 mg (range 73.6-198.0 mg) debris was retrieved. The debris contained on average 75.8 mg of bone particles (range 41.2-109.3 mg), 57.2 mg (range 31.2-83.9 mg) of cement particles, and 1.96 mg (range 0-7.2 mg) of metal particles. On average the total amount of debris consisted for 56.5% of bony fragments, 42% cement fragments and 1.5% metal particles. Conclusion: Even despite precise surgical technique including pressurized irrigation, a significant amount of debris is still present after TKA. The debris consists predominantly of bone and cement particles. The presence of metal debris is limited, and contributes only 1.5% to the total amount. abstract_id: PUBMED:33955669 Arthroscopy-Assisted Treatment of Wear Debris Pseudotumor After Total Knee Arthroplasty: A Case Report. Background: Wear debris pseudotumors are a rare complication after total knee arthroplasty (TKA) and have seldom been reported in the recent literature. There is no consensus on the best therapeutic method, but the high quality curative treatment, safe, low invasive treatments are required for the patients. Case Presentation: In this paper, we present the case of a 74-year-old man with a wear debris pseudotumor after TKA with symptoms of severe pain and functional disability of his right knee. X-ray examination showed that the medial compartment of the right knee was narrowing. Magnetic resonance imaging (MRI) and Doppler ultrasound both revealed a polycystic mass at the posteromedial side of the patient's right knee. Considering the bad health condition and the minimally invasive surgery requirement of this senior patient, arthroscopic knee debridement and percutaneous cystic mass suction were carried out simultaneously. Video arthroscopy of the right knee showed visible inflammatory soft tissue, obvious polyethylene fragments, wear of the polyethylene prothesis, and a broken polyethylene insert. The intraarticular polyethylene wear debris was removed as much as possible, and inflammatory soft tissue was debrided and sent for pathology. Postoperative pathology showed polyethylene debris in the soft tissue with an apparent multinucleated giant cell response, which was consistent with foreign body granuloma. All clinical manifestation was improved and Lysholm scores were significantly better at one year with this treatment, increasing from 32 points to 71 points. Conclusion: After two years of follow-up, the patient's knee joint was significantly relieved from soreness and pain, and walking was not significantly restricted. Our treatment could not address the root cause of the wear debris pseudotumor, which was due to prosthesis failure, but sometimes, such an approach is the safest, most economical, and most effective choice for patients who are intolerant to reoperation. abstract_id: PUBMED:16195061 Metal debris from bony resection in knee arthroplasty--is it an issue? Background: Metal particles are generated during bone preparation in knee arthroplasty. These particles may produce third-body wear, or may have a role in osteolysis. Knowledge of their characteristics may help in the development of methods to reduce the amount of metal debris during bone cutting procedures. Material And Methods: We performed bony resection of the distal femur and proximal tibia on 15 pig knees, simulating a total knee arthroplasty (TKA). Metal debris was collected from the saw blades, cutting blocks and bone surfaces and cleaned for microanalysis. Results: The average loss of metal from the saw blades was 1.13 mg. The average volume of a wear particle was 3.4 x 10(-16) m(3). From this, it was estimated that approximately 500,000 particles are released from the saw blade alone. Material analysis of the particles indicated that the majority originated from the metallic cutting guides, suggesting that many millions of wear particles would be generated during the surgical procedure. Two particle shapes predominated: platelet shape and ploughed shape. Interpretation: Wear particles are produced during resection for a TKA. These may enter the artificial articulation and cause accelerated wear and macrophage activation. Redesign of cutting blocks and saw blades may reduce the amount of debris produced during surgery. abstract_id: PUBMED:32168765 Third Body Wear of UHMWPE-on-PEEK-OPTIMA™. PEEK-OPTIMA™ is being considered as an alternative to cobalt chrome (CoCr) in the femoral component of total knee replacements. To date, investigations of ultra-high molecular weight polyethylene (UHMWPE)-on-PEEK have shown an equivalent wear rate to conventional implant materials under standard conditions. In this study, the third body wear performance of UHMWPE-on-PEEK was directly compared to UHMWPE-on-CoCr in a series of pin-on-plate studies using two approaches for third body damage. Damage simulation with particles of bone cement showed a significant (p &lt; 0.001), four-fold increase in the mean surface roughness of PEEK plates compared to CoCr. However, wear simulation against the damaged plates showed no significant difference in the wear of UHMWPE pins against the different materials (p = 0.59), and a polishing effect by the pin against the PEEK plates was observed. Scratching PEEK and CoCr counterfaces with a diamond stylus to create scratches representative of severe third body damage (4 µm lip height) resulted in a significantly higher (p = 0.01) wear of UHMWPE against CoCr compared to PEEK and again, against PEEK plates, polishing by the UHMWPE pin led to a reduction in scratch lip height. This study shows that in terms of its wear performance under third body wear/damage conditions, UHMWPE-on-PEEK differs from conventional knee replacement materials. abstract_id: PUBMED:25112210 Synovial fluid differential cell count in wear debris synovitis after total knee replacement. Background: Determining the cause of synovitis following total knee arthroplasty (TKA) can be challenging. The differential diagnoses include infection, hemarthrosis, instability, crystalline disease, wear debris or idiopathic causes. Wear particle synovitis can mimic periprosthetic infection with symptoms of pain and effusion. Radiographs and physical exam are often inconclusive in differentiating the two. Synovial fluid analysis is routinely used in evaluating periprosthetic infections. We examined the association between synovial white blood cell count and differentials, and polyethylene wear and osteolysis, to see if fluid analysis can aid in establishing the diagnosis of wear particle synovitis. Methods: A cell count and differential was obtained from synovial fluid samples from 54 TKAs undergoing revision for aseptic failure. Explanted polyethylene inserts were analyzed for linear and volumetric wear, oxidation (ketone peak height), and damage features. Analysis was performed to assess the relationship between cell counts and polyethylene wear indicators as well as severity of intra-operative and radiographic osteolysis. Results: Total and percent mononuclear (monocyte and lymphocyte) cell counts were found to be elevated in the presence of documented wear debris synovitis and an association was suggested between their levels and maximum ketone levels. Conclusion: The present study implies that the differential cell count of knee fluid can help distinguish wear debris from infection as a source of synovitis following TKA and identifies the value of the mononuclear cell count as a possible tool to assess abnormal wear rates of the polyethylene insert. Further research into identifying the exact role of monocytes in the wear debris synovitis and osteolytic pathways is warranted. Level Of Evidence: Level II, diagnostic study. abstract_id: PUBMED:23417520 The influence of third-body particles on wear rate in unicondylar knee arthroplasty: a wear simulator study with bone and cement debris. The reduced intraoperative visibility of minimally invasive implanted unicondylar knee arthroplasty makes it difficult to remove bone and cement debris, which have been reported on the surface of damaged and retrieved bearings. Therefore, the aim of this study was to analyze the influence of bone and cement particles on the wear rate of unicompartmental knee prostheses in vitro. Fixed bearing unicompartmental knee prostheses were tested using a knee-wear-simulator according to the ISO standard 14243-1:2002(E) for 5.0 million cycles. Afterwards bone debris (particle size 671 ± 262 μm) were added to the test fluid in a concentration of 5 g/l for 1.5 million cycles, followed by 1.5 million cycles blended with cement debris (particle size 644 ± 186 μm) in the same concentration. Wear rate, knee-kinematics and wear-pattern were analyzed. The wear rate reached 12.5 ± 1.0 mm³/million cycles in the running-in and decreased during the steady state phase to 4.4 ± 0.91 mm³/million cycles. Bone particles resulted in a wear rate of 3.0 ± 1.27 mm³/million cycles with no influence on the wear rate compared to the steady state phase. Cement particles, however, lead to a significantly higher wear rate (25.0 ± 16.93 mm³/million cycles) compared to the steady state phase. The careful removal of extruded cement debris during implantation may help in reducing wear rate. Bone debris are suggested to have less critical influence on the prostheses wear rate. abstract_id: PUBMED:10065721 Relationship between wear debris particles and polyethylene surface damage in primary total knee arthroplasty. To study relationships between design and wear, particles were isolated from tissues around 75 failed primary total knee prostheses, and particle size and concentration were compared with the extent and distribution of wear on superior and inferior polyethylene surfaces of the corresponding implants. Twenty-four heat-pressed, posterior cruciate ligament (PCL)-retaining (group I: PCA knee), 19 non-heat-pressed, PCL-retaining with relatively flat surfaces (group II: AMK, Kinematic, Microloc, Ortholoc), 14 non-heat-pressed, PCL-retaining with relatively congruent surfaces (group III: Townley Arizona), and 18 PCL-resected implants (group IV: Insall-Burstein, Total Condylar) were evaluated. Groups III and IV had fewer particles, smaller particles, and less surface damage than groups I and II. Implants with third-body wear had relatively small debris particles; those with thinner polyethylene had more severe surface delamination and more wear between liner and metal base. Groups I and II had more rotationally asymmetric wear patterns than groups III and IV. The size and concentration of debris particles might reflect, in part, different dominant wear mechanisms among implants of different designs. abstract_id: PUBMED:35057319 Characteristics of Particles and Debris Released after Implantoplasty: A Comparative Study. Titanium particles embedded on peri-implant tissues are associated with a variety of detrimental effects. Given that the characteristics of these detached fragments (size, concentration, etc.) dictate the potential cytotoxicity and biological repercussions exerted, it is of paramount importance to investigate the properties of these debris. This study compares the characteristics of particles released among different implant systems (Group A: Straumann, Group B: BioHorizons and Group C: Zimmer) during implantoplasty. A novel experimental system was utilized for measuring and collecting particles generated from implantoplasty. A scanning mobility particle sizer, aerodynamic particle sizer, nano micro-orifice uniform deposit impactor, and scanning electron microscope were used to collect and analyze the particles by size. The chemical composition of the particles was analyzed by highly sensitive microanalysis, microstructures by scanning electron microscope and the mechanical properties by nanoindentation equipment. Particles released by implantoplasty showed bimodal size distributions, with the majority of particles in the ultrafine size range (&lt;100 nm) for all groups. Statistical analysis indicated a significant difference among all implant systems in terms of the particle number size distribution (p &lt; 0.0001), with the highest concentration in Group B and lowest in Group C, in both fine and ultrafine modes. Significant differences among all groups (p &lt; 0.0001) were also observed for the other two metrics, with the highest concentration of particle mass and surface area in Group B and lowest in Group C, in both fine and ultrafine modes. For coarse particles (&gt;1 µm), no significant difference was detected among groups in terms of particle number or mass, but a significantly smaller surface area was found in Group A as compared to Group B (p = 0.02) and Group C (p = 0.005). The 1 first minute of procedures had a higher number concentration compared to the second and third minutes. SEM-EDS analysis showed different morphologies for various implant systems. These results can be explained by the differences in the chemical composition and microstructures of the different dental implants. Group B is softer than Groups A and C due to the laser treatment in the neck producing an increase of the grain size. The hardest implants were those of Group C due to the cold-strained titanium alloy, and consequently they displayed lower release than Groups A and B. Implantoplasty was associated with debris particle release, with the majority of particles at nanometric dimensions. BioHorizons implants released more particles compared to Straumann and Zimmer. Due to the widespread use of implantoplasty, it is of key importance to understand the characteristics of the generated debris. This is the first study to detect, quantify and analyze the debris/particles released from dental implants during implantoplasty including the full range of particle sizes, including both micro- and nano-scales. abstract_id: PUBMED:11307140 Paratibial cyst associated with wear debris after total knee arthroplasty. We present a case in which a synovial cyst arose from the proximal tibia and expanded in the calf of a patient after total knee arthroplasty. A cystogram showed a direct communication between the joint cavity and the cyst, apparently associated with a screw that penetrated the tibial cortex. Histologic examination of the cyst showed an inflammatory reaction, including macrophages, foreign body giant cells, and metal and polyethylene particles. To our knowledge, this is the first case report illustrating a paraosseous cyst that developed after total knee arthroplasty. Wear debris from the total knee prosthesis may have been responsible for this unusual cyst. abstract_id: PUBMED:24891003 Effect of microseparation and third-body particles on dual-mobility crosslinked hip liner wear. Large heads have been recommended to reduce the risk of dislocation after total hip arthroplasty. One of the issues with larger heads is the risk of increased wear and damage in thin polyethylene liners. Dual-mobility liners have been proposed as an alternative to large heads. We tested the wear performance of highly crosslinked dual-mobility liners under adverse conditions simulating microseparation and third-body wear. No measurable increase in polyethylene wear rate was found in the presence of third-body particles. Microseparation induced a small increase in wear rate (2.9mm(3)/million cycles). A finite element model simulating microseparation in dual-mobility liners was validated using these experimental results. The results of our study indicate that highly crosslinked dual-mobility liners have high tolerance for third-body particles and microseparation. Answer: The analysis of third body particles generated during total knee arthroplasty (TKA) indicates that metal debris is not a significant issue compared to other types of debris. In a study measuring the amount of debris present at the end of TKA operations, it was found that the debris consisted predominantly of bone and cement particles, with metal particles contributing only 1.5% to the total amount of debris. On average, 56.5% of the debris was bony fragments, and 42% was cement fragments (PUBMED:18255297). However, the presence of metal debris, although limited, is still a concern as it may produce third-body wear or have a role in osteolysis. During bone preparation in knee arthroplasty, metal particles are generated, and it is estimated that approximately 500,000 particles are released from the saw blade alone. The majority of these particles originate from the metallic cutting guides, suggesting that many millions of wear particles could be generated during the surgical procedure (PUBMED:16195061). In terms of wear performance under third body wear/damage conditions, ultra-high molecular weight polyethylene (UHMWPE)-on-PEEK differs from conventional knee replacement materials. When damaged with bone cement particles, PEEK plates showed a significant increase in mean surface roughness compared to cobalt chrome (CoCr) plates. However, there was no significant difference in the wear of UHMWPE pins against the different materials, and a polishing effect by the pin against the PEEK plates was observed (PUBMED:32168765). In conclusion, while metal debris is generated during TKA, its presence is relatively small compared to other types of debris such as bone and cement particles. Nonetheless, the potential for metal debris to contribute to third-body wear and osteolysis remains a concern, and efforts to reduce the amount of metal debris during bone cutting procedures are warranted.
Instruction: Reduced Face Preference in Infancy: A Developmental Precursor to Callous-Unemotional Traits? Abstracts: abstract_id: PUBMED:25526972 Reduced Face Preference in Infancy: A Developmental Precursor to Callous-Unemotional Traits? Background: Children with callous-unemotional (CU) traits, a proposed precursor to adult psychopathy, are characterized by impaired emotion recognition, reduced responsiveness to others' distress, and a lack of guilt or empathy. Reduced attention to faces, and more specifically to the eye region, has been proposed to underlie these difficulties, although this has never been tested longitudinally from infancy. Attention to faces occurs within the context of dyadic caregiver interactions, and early environment including parenting characteristics has been associated with CU traits. The present study tested whether infants' preferential tracking of a face with direct gaze and levels of maternal sensitivity predict later CU traits. Methods: Data were analyzed from a stratified random sample of 213 participants drawn from a population-based sample of 1233 first-time mothers. Infants' preferential face tracking at 5 weeks and maternal sensitivity at 29 weeks were entered into a weighted linear regression as predictors of CU traits at 2.5 years. Results: Controlling for a range of confounders (e.g., deprivation), lower preferential face tracking predicted higher CU traits (p = .001). Higher maternal sensitivity predicted lower CU traits in girls (p = .009), but not boys. No significant interaction between face tracking and maternal sensitivity was found. Conclusions: This is the first study to show that attention to social features during infancy as well as early sensitive parenting predict the subsequent development of CU traits. Identifying such early atypicalities offers the potential for developing parent-mediated interventions in children at risk for developing CU traits. abstract_id: PUBMED:34424454 Examining Links Between Infant Parasympathetic Regulation during the Still-Face Paradigm and Later Callous-Unemotional Traits. Although research suggests that callous-unemotional (CU) traits are underpinned by deficits in social affiliation and reduced sensitivity to threat, there has been little investigation of the biophysiological regulatory mechanisms underlying these processes in infancy. The current study uses data from the Durham Child Health and Development Study (DCHD; n = 206) to examine whether and how the combination of infants' behavioral reactivity and levels of respiratory sinus arrhythmia (RSA), an indicator of parasympathetic nervous system functioning, during the still-face episode of the still-face paradigm at 6 months differentiates risk for CU traits and oppositional defiant behaviors (ODD) at age 3 years, as well as whether these relations vary by children's attachment security. Results indicate that reduced negative affect during the still-face episode at 6 months predicts higher CU traits (B = -0.28, β = -0.27, p = 0.003) and ODD (B = -0.35, β = -0.24, p = 0.007) at 3 years. Results also show that comparatively lower RSA, i.e. engaged parasympathetic system, predicts higher CU traits (B = -0.10, β = -0.34, p = 0.013), but not ODD. Tests of moderation suggest the combination of blunted negative affect but comparatively lower RSA levels during a social stressor constitutes risk for later CU traits for children who are also insecurely attached (simple slope = -0.70, t = 2.88, p = 0.006 at -1 SD). Findings contribute to our understanding of the complex and interactive risk processes which precede the development of CU traits. abstract_id: PUBMED:26254018 National multi-cohort time trends in adolescent risk preference and the relation with substance use and problem behavior from 1976 to 2011. Aims: Preference for risky activities is an important developmentally graded predictor of substance use. Population-level trends in adolescent risk preference, as well as the way in which risk preference may be a conduit to risk behavior, have never been documented. The present study examines population-level trends in risk preference among U.S. high school seniors for the 36 years from 1976 to 2011, as well as trends in the association between risk preference and substance use and other problem behaviors. Methods: Data were drawn from yearly nationally representative cross-sectional surveys of US high school seniors (N=91,860). Risk preference was measured consistently with two items. Marijuana and cocaine use, binge drinking, and conduct problems were assessed. Trends were tested using JoinPoint software. Results: The mean level of reported risk preference among US 12th graders has increased over time, especially in the 1980s. For example, the proportion of high school females who reported enjoying activities that were "a little dangerous" more than doubled, from 4.9% in 1976 to 10.8% in 1988. While risk preference reports among adolescent males leveled off in 1992, risk preference reports among females show a continued positive overall slope through 2011. The magnitude of the association between risk preference and marijuana use has increased over time. Conclusions: Reported preference for risky activities has increased among adolescents in the US, especially among young women. Reported risk preference is increasingly associated with a higher use of marijuana. Our findings argue for the importance of placing risk preference within a multi-level framework that attends to historical variation. abstract_id: PUBMED:33316763 Eye gaze patterns and functional brain responses during emotional face processing in adolescents with conduct disorder. Background: Conduct disorder (CD) is characterized by severe aggressive and antisocial behavior. Initial evidence suggests neural deficits and aberrant eye gaze pattern during emotion processing in CD; both concepts, however, have not yet been studied simultaneously. The present study assessed the functional brain correlates of emotional face processing with and without consideration of concurrent eye gaze behavior in adolescents with CD compared to typically developing (TD) adolescents. Methods: 58 adolescents (23CD/35TD; average age = 16 years/range = 14-19 years) underwent an implicit emotional face processing task. Neuroimaging analyses were conducted for a priori-defined regions of interest (insula, amygdala, and medial orbitofrontal cortex) and using a full-factorial design assessing the main effects of emotion (neutral, anger, fear), group and the interaction thereof (cluster-level, p &lt; .05 FWE-corrected) with and without consideration of concurrent eye gaze behavior (i.e., time spent on the eye region). Results: Adolescents with CD showed significant hypo-activations during emotional face processing in right anterior insula compared to TD adolescents, independent of the emotion presented. In-scanner eye-tracking data revealed that adolescents with CD spent significantly less time on the eye, but not mouth region. Correcting for eye gaze behavior during emotional face processing reduced group differences previously observed for right insula. Conclusions: Atypical insula activation during emotional face processing in adolescents with CD may partly be explained by attentional mechanisms (i.e., reduced gaze allocation to the eyes, independent of the emotion presented). An increased understanding of the mechanism causal for emotion processing deficits observed in CD may ultimately aid the development of personalized intervention programs. abstract_id: PUBMED:24839886 Atypical neural responses during face processing in female adolescents with conduct disorder. Objective: Conduct disorder (CD) in females is associated with negative adult outcomes including mental health problems and personality disorders. Although recent neuroimaging studies have reported changes in neural activity during facial emotion processing in males with CD or callous-unemotional (CU) traits, there have been no neuroimaging studies specifically assessing females with CD. We addressed this gap by investigating whether female adolescents with CD show atypical neural activation when processing emotional or neutral faces. Method: We acquired functional magnetic resonance imaging (fMRI) data from 20 female adolescents with CD and 20 female control participants while they viewed angry, sad, and neutral faces. Results: An omnibus group (CD, control) by facial emotion (angry, sad, neutral) analysis of variance (ANOVA) revealed main effects of facial emotion in superior temporal cortex, fusiform gyrus, ventrolateral prefrontal cortex and insula, and main effects of group in medial orbitofrontal cortex (OFC) and right anterior insula. Female participants with CD showed reduced medial OFC and increased anterior insula responses relative to healthy controls. There were no significant group × facial emotion interactions. Lifetime CD symptoms were negatively correlated with amygdala, superior temporal cortex, fusiform gyrus, and dorsolateral prefrontal cortex activity for the contrast "all-faces versus fixation." CU traits were negatively correlated with fusiform gyrus activity for the contrast sad versus neutral faces. Conclusion: Females with CD showed atypical neural activation during the processing of all facial expressions, irrespective of valence. Our results demonstrate that severity of CD symptoms and CU traits is important in explaining abnormal patterns of neural activity. abstract_id: PUBMED:28646974 Testing the feasibility of a briefer school-based preventive intervention with aggressive children: A hybrid intervention with face-to-face and internet components. This study describes the results from a feasibility study of an innovative indicated prevention intervention with hybrid face-to-face and web-based components for preadolescent youth. This intervention includes a considerably briefer set of face-to-face sessions from the evidence-based Coping Power program and a carefully integrated internet component with practice and teaching activities and cartoon videos for children and for parents. The Coping Power - Internet Enhanced (CP-IE) program introduces a set of cognitive-behavioral skills in 12 small group sessions for children delivered during the school day and 7 group sessions for parents. Eight elementary schools were randomly assigned to CP-IE or to Control, and six children at each school were identified each year based on 4th grade teacher ratings of aggressive behavior. Path analyses of teacher-rated disruptive behavior outcomes for 91 fifth grade children, across two annual cohorts, indicated Control children had significantly greater increases in conduct problem behaviors across the 5th grade year than did CP-IE children. This much briefer version of Coping Power provided beneficial preventive effects on children's behavior in the school setting similar to the effects of the longer version of Coping Power. The website materials appeared to successfully engage children, and parents' use of the website predicted children's changes in conduct problems across the year. abstract_id: PUBMED:25956290 DRD4 Genotype and the Developmental Link of Peer Social Preference with Conduct Problems and Prosocial Behavior Across Ages 9-12 Years. The peer environment is among the most important factors for children's behavioral development. However, not all children are equally influenced by their peers, which is potentially due to their genetic make-up. The dopamine receptor D4 gene (DRD4) is a potential candidate gene that may influence children's susceptibility to the peer environment. In the present study, we explored whether variations in the DRD4 gene moderated the association between children's social standing in the peer group (i.e., social preference among classmates) with subsequent conduct problems and prosocial behavior among 405 (51% females) elementary school children followed annually throughout early adolescence (ages 9-12 years). The behavioral development of children with and without the DRD4 7-repeat allele was compared. The results indicated that children who had higher positive social preference scores (i.e., who were more liked relative to disliked by their peers) showed less conduct problem development in subsequent years relative to children who had lower positive social preference scores. In contrast, children who had more negative preference scores (i.e., who were more disliked relative to liked among peers) showed more conduct problem development in subsequent years, relative to children who had less negative preference scores. However, these effects only occurred when children had a 7-repeat allele. For children who did not have a 7-repeat allele, the level of social preference was not associated with subsequent conduct problems. No evidence for gene-environment interaction effects for prosocial behavior was found. The implications for our understanding of conduct problem development and its prevention are discussed. abstract_id: PUBMED:28095001 Callousness and affective face processing in adults: Behavioral and brain-potential indicators. The investigation of callous-unemotional (CU) traits has been central to contemporary research on child behavior problems, and served as the impetus for inclusion of a specifier for conduct disorder in the latest edition of the official psychiatric diagnostic system. Here, we report results from 2 studies that evaluated the construct validity of callousness as assessed in adults, by testing for affiliated deficits in behavioral and neural processing of fearful faces, as have been shown in youthful samples. We hypothesized that scores on an established measure of callousness would predict reduced recognition accuracy and diminished electocortical reactivity for fearful faces in adult participants. In Study 1, 66 undergraduate participants performed an emotion recognition task in which they viewed affective faces of different types and indicated the emotion expressed by each. In Study 2, electrocortical data were collected from 254 adult twins during viewing of fearful and neutral face stimuli, and scored for event-related response components. Analyses of Study 1 data revealed that higher callousness was associated with decreased recognition accuracy for fearful faces specifically. In Study 2, callousness was associated with reduced amplitude of both N170 and P200 responses to fearful faces. Current findings demonstrate for the first time that callousness in adults is associated with both behavioral and physiological deficits in the processing of fearful faces. These findings support the validity of the CU construct with adults and highlight the possibility of a multidomain measurement framework for continued study of this important clinical construct. (PsycINFO Database Record abstract_id: PUBMED:32814089 Chronic risperidone administration leads to greater amphetamine-induced conditioned place preference. Risperidone is an atypical antipsychotic drug used increasingly in children to manage symptoms of ADHD and conduct disorder. In rats, developmental risperidone administration is accompanied by increased locomotor activity during adulthood, as well as heightened sensitivity to the locomotor stimulating effects of amphetamine. This study compared sensitivity to the rewarding effects of amphetamine, as measured by conditioned place preference (CPP), between groups of rats administered chronic risperidone (3.0 mg/kg, s.c.) during development (postnatal days 14-42) or adulthood (postnatal days 77-105). Locomotor activity in a novel test cage and amphetamine-induced CPP were measured beginning three and four weeks, respectively, after the final risperidone injection. Female rats administered risperidone early in life were more active than any other group tested. Previous risperidone administration enhanced amphetamine CPP regardless of sex, and this effect appeared more prominent in the developmentally treated group. The density of forebrain dopamine transporters, a primary target of amphetamine, was also quantified in rats administered risperidone early in life and found to be reduced in the medial anterior, posterior, and ventral caudate nucleus. These results suggest that chronic risperidone treatment modifies later locomotor activity and sensitivity to the reinforcing effects of amphetamine, perhaps via a mechanism related to decreased forebrain dopamine transporter density. abstract_id: PUBMED:12656849 Risk factors for major injuries to the face and teeth. Hyperactivity in children was significantly associated with the occurrence of major injuries affecting the face and/or teeth. Other important risk factors included being male, from lower social class and single-parent households, and scoring high for conduct disorder. The behavioural risk factors were significantly more common in children living in families receiving benefits, lower social classes and non-nuclear families. Answer: The research suggests that reduced face preference in infancy may indeed be a developmental precursor to callous-unemotional (CU) traits. A study found that infants who showed lower preferential tracking of a face with direct gaze at 5 weeks were more likely to exhibit higher CU traits at 2.5 years of age. Additionally, higher levels of maternal sensitivity predicted lower CU traits in girls, but not boys, indicating that early parenting characteristics play a role in the development of these traits (PUBMED:25526972). Another study examining infant parasympathetic regulation during the Still-Face Paradigm found that reduced negative affect during the still-face episode at 6 months predicted higher CU traits at 3 years. The study also found that lower levels of respiratory sinus arrhythmia (RSA), an indicator of parasympathetic nervous system functioning, predicted higher CU traits, particularly for children who were also insecurely attached (PUBMED:34424454). These findings are consistent with other research on the development of CU traits and conduct disorder (CD). For example, studies have shown that adolescents with CD exhibit atypical neural activation during emotional face processing and spend less time focusing on the eye region of faces, which may be related to deficits in emotion processing (PUBMED:33316763; PUBMED:24839886). Overall, the evidence supports the notion that early indicators such as reduced attention to faces and impaired affective and physiological responses during social interactions may be precursors to the development of CU traits. Identifying these early atypicalities offers the potential for developing targeted interventions for children at risk of developing CU traits (PUBMED:25526972; PUBMED:34424454).
Instruction: Mesh repair for postoperative wound dehiscence in the presence of infection: is absorbable mesh safer than non-absorbable mesh? Abstracts: abstract_id: PUBMED:17551808 Mesh repair for postoperative wound dehiscence in the presence of infection: is absorbable mesh safer than non-absorbable mesh? Objective: In patients with postoperative wound dehiscence in the presence of infection, extensive visceral oedema often necessitates mechanical containment of bowel. Prosthetic mesh is often used for this purpose. The aim of the present study was to assess the safety of the use of non-absorbable and absorbable meshes for this purpose. Method: All patients that had undergone mesh repair of abdominal wound dehiscence between January 1988 and January 1998 in the presence of intra-abdominal infection were included in a retrospective cohort study. All surviving patients had physical follow-up in February 2001. Result: Eighteen patients were included in the study. Meshes consisted of polyglactin (n = 6), polypropylene (n = 8), polyester (n = 1), or a combination of a polypropylene mesh with a polyglactin mesh on the visceral side (n = 3). All patients developed complications, consisting mainly of mesh infection (77%), intra-abdominal abscess (17%), enterocutaneous fistula (17%), or mesh migration through the bowel (11%). Mesh removal was necessary in eight patients (44%). Within four months postoperatively, six patients (33%) had died because of progressive abdominal sepsis. The incidence of progressive abdominal sepsis was significantly higher in the group with absorbable polyglactin mesh than in the group with nonabsorbable mesh (67 vs. 11%, p = 0.02) After a mean follow-up of 49 months, 63% of the surviving patients had developed incisional hernia. Absorbable meshes did not yield better outcomes than nonabsorbable meshes in terms of complications and mortality rate. Conclusion: Synthetic graft placement in the presence of intra-abdominal infection has a high risk of complications, regardless of whether absorbable (polyglactin) or non-absorbable mesh material (polypropylene or polyester) is used, and should be avoided if possible. abstract_id: PUBMED:36541911 Emergency groin hernia: outcome after mesh and non-mesh repair. Background: Emergency inguinal and femoral hernia repair can be done by suture or mesh repair, there is still scepticism around using mesh. We aim to evaluate the usage of mesh and the outcome of emergency groin hernia repair after mesh and suture repair. Methods: Retrospective cohort study of adult patients who underwent emergency inguinal and femoral hernia repair from 1st January 2018 to 31st July 2020. Electronic data and case notes were reviewed and outcome data were collected. Results: Eighty-nine emergency groin hernia repairs were carried out. Sixty-two were males, 60 inguinal hernia and 29 femoral hernia. Median age was 72 years (range 20-95). 74 (83.1%) were primary hernia and 15 (16.9%) recurrent hernias. 67 (75.3%) mesh and 22 (24.7%) suture repairs were carried out. Eleven cases required bowel resection and of those 10 had suture repair. Inguinal hernia was more likely to have mesh repair as compared to femoral (P-value 0.002). Median length of stay was significantly lower in mesh group 2 days (1-5 IQR) versus 7.5 days (5-11 IQR) in suture repair group (P-value &lt;0.0001). Five cases (6.74%) had wound complications (3 wound infections, 2 haematoma). With median 20 months (range 6-36 months) follow-up, 1 recurrence each in both mesh and suture repair groups, no mesh infection and 2 (2.2%) 30-day mortality recorded. Wound infection, recurrence and reoperation were not statistically different in two groups. Conclusion: Emergency groin hernia are amenable to mesh repair and in case of bowel resection or gross contamination non- mesh repair is recommended. abstract_id: PUBMED:35233657 Emergent and urgent ventral hernia repair: comparing recurrence rates amongst procedures utilizing mesh versus no mesh. Background: The decision for emergent and urgent ventral hernia repair (VHR) is driven by acute symptomatology, concern for incarceration and strangulation, and perforation. Although mesh has been established to reduce hernia recurrences, the potential for mesh complications may impact the decision for utilization in emergent repairs. This study evaluates hernia repair outcomes in the emergent setting with/without mesh. Methods: An IRB-approved review of NSQIP and retrospective chart review data of emergent/urgent VHRs performed between 2013 and 2017 was conducted at a single academic institution. Six-month postoperative emergency department and surgery clinic visits, hospital readmissions, and hernia recurrences were recorded. Patients were grouped based on mesh utilization. Perioperative and outcome variables were compared using Chi-square, Fisher's exact, and t-tests. Results: Among 94 patients, 41 (44%) received mesh; 53 (56%) did not. Synthetic mesh was used in 27 cases (65.9%); bioresorbable or biologic mesh was used in 14 cases (34.1%). ASA class (p = 0.016) was higher in the no-mesh group, as were emergent vs. urgent cases (p ≤ 0.001). Preoperative SIRS/Sepsis, COPD, and diabetes were increased in the no-mesh group. Hernia recurrence was significantly higher in the no-mesh group vs. the mesh group (24.5% vs. 7.3%, p = 0.03). No difference was found in wound complications between groups. ED visits occurred almost twice as often in the mesh group (42% vs. 23%, p = 0.071). Postoperative surgery clinic visits were more frequent among the mesh group (&amp;gt; 1 visit 61% vs. 24%, p = 0.004). Conclusions: Mesh-based hernia repairs in the urgent/emergent patient population are performed in fewer than half of patients in our tertiary care referral center. Repairs without mesh were associated with over a three-fold increase in recurrence without a difference in the risk of infectious complications. Efforts to understand the rationale for suture-based repair compared to mesh repair are needed to reduce hernia recurrences in the emergent population. abstract_id: PUBMED:32385706 Poly-4-hydroxybutyrate (Phasix™) mesh onlay in complex abdominal wall repair. Background: Poly-4-hydroxybutyric acid (P4HB, Phasix™) is a biosynthetic polymer that degrades by hydrolysis that can be woven into a mesh for use in soft tissue reinforcement. Herein, we describe our initial experience performing complex abdominal wall repair (CAWR) utilizing component separation and P4HB mesh as onlay reinforcement. Methods: All patients undergoing CAWR between June 2014 and May 2017 were followed prospectively for postoperative outcomes. Only those patients who underwent components separation with primary repair of the fascial edges followed by onlay of P4HB mesh were included in this study. Results: 105 patients (52 male, 53 female; mean age 59.2 years, range 22-84) met inclusion criteria. Mean BMI was 29.1 (range 16-48); 52% patients had prior attempted hernia repair, most with multiple medical comorbidities (71% of patients with ASA 3 or greater). 30% of cases were not clean at the time of repair (CDC class 2 or greater). Median follow-up was 36 months (range 9-63). Eighteen patients (17%) developed a hernia recurrence ranging from 2 to 36 months postoperatively. Five (5%) patients developed a localized superficial infection treated with antibiotics, three (2.8%) required re-operation for non-healing wounds, and six (6%) patients developed seroma. Conclusions: These data demonstrate a relatively low rate of hernia recurrence, seroma, and other common complications of CAWR in a highly morbid patient population. Importantly, the rate of mesh infection was low and no patients required complete mesh removal, even when placed into a contaminated or infected surgical field. abstract_id: PUBMED:26779487 Evidence for Replacement of an Infected Synthetic by a Biological Mesh in Abdominal Wall Hernia Repair. Introduction: The incidence of deep infection using a synthetic mesh in inguinal hernia repair is low and reported to be well below 1%. This is in contrast to incisional hernia surgery where the reported incidence is 3% respective 13% comparing laparoscopic to open mesh repair reported in a Cochrane review. Main risk factors were long operation time, surgical site contamination, and early wound complications. An infected mesh can be preserved using conservative treatment were negative pressure wound therapy (VAC(®)) could play an important role. If strategy fails, the mesh needs to be removed. This review aims to look at evidence for situations were a biological mesh would work as a replacement of a removed infected synthetic mesh. Materials And Methods: A literature search of the Medline database was performed using the PubMed search engine. Twenty publications were found relevant for this review. Results: For studies reviewed three options are presented: removal of the infected synthetic mesh alone, replacement with either a new synthetic or a new biological mesh. Operations were all performed at specialist centers. Removal of the mesh alone was an option limited to inguinal hernias. In ventral/incisional hernias, the use of a biological mesh for replacement resulted in a very high recurrence rate, if bridging was required. Either a synthetic or a biological mesh seems to work as a replacement when fascial closure can be achieved. Evidence is though very low. Conclusion: When required, either a synthetic or a biological mesh seems to work as a replacement for an infected synthetic mesh if the defect can be closed. It is, however, not recommended to use a biological mesh for bridging. Mesh replacement surgery is demanding and is recommended to be performed in a specialist center. abstract_id: PUBMED:25448661 Standard polypropylene mesh vs lightweight mesh for Lichtenstein repair of primary inguinal hernia: a randomized controlled trial. Purpose: The aim of the present clinical study was to compare early and late outcomes after inguinal hernia repair with the heavyweight mesh (HW) and lightweight mesh (LW) during a 3 year follow-up period. Methods: 226 patients were randomized into LW and HW mesh groups, both of which underwent unilateral primary inguinal hernia repair via the Lichtenstein technique. Wound complications (infection, hematoma, seroma), hernia recurrence, pain and feeling of foreign body in inguinal area were determined in patients. Pain was measured by visual analogue scale. Results: No statistical difference has been found between LW and HW groups by wound complication (P = 0.80). One case of hernia recurrence has been mentioned in both groups one year after hernioplasty. But there was no detectable difference between the two groups. No significant difference has been found between LW and HW groups by frequency of chronic pain 7 days, 1 and 3 months, 1, 2, and 3 years after surgery. As for the feeling of foreign body in groin it is similar in both groups after 1 and 3 months. Level of feeling of foreign body was significantly lower in LW group 1, 2, and 3 years after surgery, than in HW group (P = 0.03, P = 0.02, P = 0.02, respectively). Conclusion: Our research shows no significant difference in wound complications, hernia recurrence and chronic pain after Lichtenstein hernioplasty, by using of LW and HW meshes. The usage of the LW mesh was associated with less feeling of foreign body than that of the HW mesh, what can be considered as prevalence of LW mesh hernioplasty. abstract_id: PUBMED:32495044 Re-do surgery after prosthetic abdominal wall repair: intraoperative findings of mesh-related complications. Introduction: Mesh repair is one of the most popular technique for the treatment of abdominal wall hernias, resulting in lower recurrence rates. However, it is associated with a high risk of mesh-related complications. The aim of the present study is to assess the impact of biomaterials on the intra-abdominal organs, in terms of adhesions and visceral complications, in a series of patients undergoing re-do surgery at our abdominal wall unit. Materials And Methods: We reviewed the clinical records of 301 patients who undergone laparotomy between June 2008 and May 2018, selecting 67 patients with one or more previous prosthetic abdominal wall repair (AWR). Results: The average number of previous repairs was 1.6 with a mean time interval of 66 months from the last repair. Clinical presentation included hernia recurrence (69%), mesh infection (26%), infection and recurrence (10%), and fistula (1%). Adhesions were intraoperatively observed in all patients, except for eight cases. Mesh was completely removed in 43 patients, partially in four. Postoperative complications were observed in 39% of cases, including wound dehiscence, hematoma, seroma, and mesh infection. Conclusions: Long-term implant results in abdominal wall repair and are not completely known, and literature is still lacking on this topic. Re-do surgery for subsequent pathological events may represent a way to increase our knowledge. abstract_id: PUBMED:25549956 Repair of a coccygeal hernia with a biological mesh. Coccygeal herniation after coccygectomy is rare. Little is known about the management of this complication. We present a case of a 44 year old women with a coccygeal herniation 7 years after coccygectomy. She was treated two times for an infected pilonidal sinus with incision and drainage. After the last incision and drainage she had complaints of a painful swelling in the sacral area and difficulty with evacuation of her stools. A defaecography showed a coccygeal herniation. An additional MRI of the pelvic region showed a defect with a diameter of approximately 38mm. We performed a hernia repair with a biological mesh (Strattice™ surgical mesh, LifeCell Corporation USA) via a sacral approach. Her recovery was complicated by a small wound dehiscence without clinical signs of infection. The sacral wound healed per secundam. Her complaints had completely disappeared. A defaecography 2 months after surgery showed no residual herniation. To our knowledge, it is the first reported case of a coccygeal hernia repair with a biological Strattice™ surgical mesh. abstract_id: PUBMED:27728955 Laparoscopic Stapled Sublay Repair With Self-Gripping Mesh: A Simplified Technique for Minimally Invasive Extraperitoneal Ventral Hernia Repair. Introduction: Minimally invasive laparoscopic and robotic techniques for ventral hernia repair have evolved to achieve the benefits and minimize the limitations of both the open Rives-Stoppa sublay mesh repair and laparoscopic intraperitoneal onlay mesh (IPOM) repair. By combining the principles of a retromuscular repair with the benefits of a minimally invasive approach, these techniques attempt to decrease recurrence, increase functionality, exclude mesh from the viscera, limit infection and wound complications, and minimize pain. The difficult ergonomics, challenging dissection, and extensive suturing make traditional laparoscopic sublay repair technically challenging and has led to increased robotic utilization to overcome these limitations. We describe a laparoscopic extraperitoneal sublay mesh repair technique using an endoscopic stapler to facilitate reapproximation of the linea alba and creation of the retromuscular space, and self-gripping mesh to position and fixate the prosthetic. Materials And Methods: Between January and June 2016, 10 patients with midline ventral and incisional hernias underwent laparoscopic extraperitoneal stapled sublay mesh repair with self-gripping mesh. Three of these cases included a laparoscopic posterior component separation with myofascial release of the transversus abdominis muscle to facilitate midline closure. Intraoperative and perioperative complications, early recurrence, pain, and narcotic usage were measured. Results: There were no significant intraoperative complications or conversions to open surgery. Patients were discharged at 1.2 days on average. Early postoperative complications included a hernia site seroma in one patient, which resolved without intervention. There were no early postoperative infections or recurrences. Compared with traditional laparoscopic IPOM repair, there was less acute postoperative pain and use of analgesics. Conclusions: Laparoscopic extraperitoneal stapled sublay mesh repair is a safe and effective method for the treatment of medium- to large-sized ventral and incisional hernias. This extraperitoneal stapled approach using self-gripping mesh facilitates a minimally invasive sublay repair and abdominal wall reconstruction using traditional laparoscopic tools. abstract_id: PUBMED:34722625 Long-Term Follow-Up of Lichtenstein Repair of Inguinal Hernia in the Morbid Patients With Self-Gripping Mesh (ProgripTM). Objective: This study aimed to demonstrate the safety and the efficacy of the self-gripping mesh (ProgripTM) for inguinal hernia repair in morbid patients of the higher American Society of Anesthesiologists (ASA) classification (ASA III and IV). The incidence of chronic pain, postoperative complications, and hernia recurrence was evaluated. Methods: Data were collected retrospectively from the files of the patient and were analyzed for 198 hernias in 147 patients. All the patients included in this study had undergone inguinal hernia repair by Lichtenstein approach with the self-gripping mesh (ProgripTM) in the same clinical center. Preoperative, perioperative, and postoperative data were collected and a long-term follow-up of 31.8 ± 19.5 m (5-60 m) was performed. Complications, pain scored on a 0-10 numeric rating scale (NRS), and hernia recurrence were assessed. Results: During the past 5 years, 198 hernias in 147 patients were repaired with the Lichtenstein procedure with the self-gripping mesh (ProgripTM). The majority of the patients were high level of the ASA classification (ASA III and IV) (95.9%), with ASA III (10.2%) and IV (85.7%). The mean operation time was 71.2 ± 23.8 min. The mean length of postoperative stay was 2.5 ± 2.1 days. There were no intraoperative complications. About 14 cases (7.1%) suffered from postoperative surgical wound complications, which were limited to the skin and subcutaneous tissue and were cured with the conservative methods successfully; there was no mesh infection, the acute postoperative pain was low or mild [visual analog scale (VAS) score ≤ 4] and the chronic postoperative pain was reported in three patients (1.5%) and tolerable, hernia recurrence (femoral hernia recurrence) occurred in one patient half a year after during the follow-up period. Conclusion: This study demonstrated the advantages of the self-gripping mesh in hernia repair of the high-risk patients with inguinal hernia (ASA III and IV) by Lichtenstein procedure under local anesthesia. Answer: The use of mesh in postoperative wound dehiscence, particularly in the presence of infection, is a complex issue. According to the study by PUBMED:17551808, the use of both absorbable (polyglactin) and non-absorbable (polypropylene or polyester) meshes in the context of intra-abdominal infection carries a high risk of complications. The study found that all patients developed complications, with mesh infection being the most common. Notably, the incidence of progressive abdominal sepsis was significantly higher in the group with absorbable polyglactin mesh compared to the group with non-absorbable mesh (67% vs. 11%, p = 0.02). This suggests that absorbable mesh may not be safer than non-absorbable mesh in the presence of infection, and in fact, may be associated with a higher rate of severe infection leading to sepsis. Furthermore, the study concluded that synthetic graft placement in the presence of intra-abdominal infection should be avoided if possible, due to the high risk of complications regardless of the type of mesh material used. It is important to note that this study is specific to the context of abdominal wound dehiscence with concurrent intra-abdominal infection, and the findings may not be generalizable to other types of hernia repairs or clinical scenarios without infection. In summary, based on the evidence from the study with PUBMED:17551808, absorbable mesh does not appear to be safer than non-absorbable mesh for postoperative wound dehiscence in the presence of infection, and its use may be associated with a higher risk of progressive abdominal sepsis.
Instruction: Prophylactic antiemetics in oral and maxillofacial surgery: a requiem? Abstracts: abstract_id: PUBMED:9842837 Prophylactic oral antiemetics for preventing postoperative nausea and vomiting: granisetron versus domperidone. Unlabelled: In this prospective, randomized, double-blinded study, we evaluated the efficacy of the oral antiemetics, granisetron and domperidone, for the prevention of postoperative nausea and vomiting (PONV) in 100 women undergoing major gynecologic surgery. Patients received either granisetron 2 mg or domperidone 20 mg (n = 50 in each group) orally 1 h before surgery. Standardized anesthetic techniques and postoperative analgesia regimens were used. Complete response (defined as no PONV and no administration of rescue antiemetic medication) for 0-3 h after anesthesia was 88% with granisetron and 52% with domperidone; the corresponding incidence for 3-24 h after anesthesia was 86% and 48% (P &lt; 0.05). No clinically important adverse events due to the drugs were observed in any of the groups. In conclusion, the efficacy of preoperative oral granisetron is superior to that of domperidone for the prevention of PONV after major gynecologic surgery. Implications: We compared the efficacy of granisetron and domperidone administered orally for the prevention of postoperative nausea and vomiting in women undergoing gynecologic surgery. Preoperative oral granisetron was more effective than domperidone. abstract_id: PUBMED:19686923 Prophylactic antiemetics in oral and maxillofacial surgery: a requiem? Purpose: To determine the incidence of postoperative nausea and vomiting (PONV) after oral and maxillofacial surgical procedures and to evaluate the rationale behind prophylactic antiemetic medications. Materials And Methods: A total of 167 patients, irrespective of age and gender, undergoing oral and maxillofacial surgical procedures under general anesthesia/dissociative anesthesia, were included. Risk factors associated with PONV such as gender, type of anesthetic agent used, nature of surgical procedure, surgical approach used, and duration of surgery and postoperative use of opioids were assessed. A "watch and wait" policy was adopted in all cases of recorded PONV with gastric lavage (GL) to be performed in patients with more than 2 episodes of PONV in the 6-hour postoperative period. The efficacy of such an intervention was also assessed. Antiemetic medications were given in only those cases which did not respond favorably to GL. A chi(2) test was performed using SPSS software (Chicago, IL) to determine statistical significance. Results: Of the 167 patients included, 19 patients experienced episodes of PONV. GL was performed in 3 patients, and all showed cessation of emesis after this intervention. No antiemetic medications were administered. A significant association was observed between PONV and female gender, duration of surgery, type of anesthetic agent used, and specific surgical procedures such as oncologic and temporomandibular joint surgeries. The role of surgical approach and the use of opioids in the postoperative period on the incidence of PONV were found to be insignificant. Conclusions: Information regarding the incidence of PONV after oral and maxillofacial surgical procedures remains scanty. We conclude that there does not appear to be a rationale for the prophylactic administration of antiemetic drugs in such surgical procedures. A watch-and-wait policy and simple GL may provide significant relief. Antiemetic medications are to be considered only in case of non-responders and intractable PONV. abstract_id: PUBMED:26635998 Phenothiazine vs 5HT3 antagonist prophylactic regimens to prevent Post-Anesthesia Care Unit rescue antiemetic: an observational study. Purpose: Our practitioners are asked to consider a patient's postoperative nausea and vomiting (PONV) risk profile when developing their prophylactic antiemetic strategy. There is wide variation in employed strategies, and we have yet to determine the most effective PONV prophylactic regimen. The objective of this study is to compare prophylactic antiemetic regimens containing: phenothiazines to 5HT3 antagonists for effectiveness at reducing the incidence of Post-Anesthesia Care Unit (PACU) rescue antiemetic administration. Methods: This is an observational study of 4,392 nonsmoking women who underwent general anesthesia for breast surgery from 1/1/2009 through 6/30/2012. Previous history of PONV or motion sickness (HxPONV/MS) and the use of PACU opioids were recorded. Prophylactic antiemetic therapy was left to the discretion of the anesthesia care team. We compared phenothiazines and 5HT3 antagonists alone and with a glucocorticoid to determine the most effective treatment regimen in our practice for the prevention of the administration of PACU rescue antiemetics. Results: Patients who received a phenothiazine regimen compared to a 5HT3 antagonist regimen were less likely to have an antiemetic administered in the PACU (p=0.0100) and this significant difference in rates holds in a logistic regression model adjusted for HxPONV/MS and PACU Opioid use (p=0.0103). Conclusions: Based on our findings our clinicians are encouraged to administer a combination of a phenothiazine and a glucocorticoid in female, nonsmoking surgical breast patients for the prevention of PACU rescue antiemetic administration. abstract_id: PUBMED:16178752 The utility of antiemetics in the prevention and treatment of postoperative nausea and vomiting in patients scheduled for laparoscopic cholecystectomy. Postoperative nausea and vomiting (PONV) are distressing and frequent adverse events of anesthesia and surgery, with a relatively high incidence after laparoscopic cholecystectomy. Numerous antiemetics have been studied for the prevention and treatment of PONV in patients scheduled for laparoscopic cholecystectomy. Traditional antiemetics, including anticholinergics (e.g., scopolamine), antihistamines (e.g., dimenhydrinate), phenothiazines (e.g., promethazine), butyrophenones (e.g., droperidol), and benzamide (e.g., metoclopramide), are used for the control of PONV. The available nontraditional antiemetics for the prophylaxis against PONV are dexamethasone and propofol. Serotonin receptor antagonists (ondansetron, granisetron, tropisetron, dolasetron, and ramosetron), compared with traditional antiemetics, are highly efficacious for PONV. The prophylactic ondansetron, granisetron, tropisetron, and dolasetron in antiemetic efficacy are comparable. Ramosetron is effective for the long-term prevention of PONV. None of the available antiemetics is entirely effective, perhaps because most of them act through the blockade on one type of receptor. There is a possibility that combined antiemetics with different sites of activity would be more effective than one drug alone for the prophylaxis against PONV. Combination antiemetic therapy is often effective for the prevention of PONV following laparoscopic cholecystectomy. The efficacy of a combination of serotonin receptor antagonists (ondansetron and granisetron) and droperidol is superior to monotherapy with a serotonin receptor antagonist or droperidol. Similarly, adding dexamethasone to ondansetron or granisetron improves antiemetic efficacy in PONV. Knowledge regarding antiemetics is necessary to completely prevent and treatment of PONV in patients scheduled for laparoscopic cholecystectomy. abstract_id: PUBMED:3578726 Vomiting after ophthalmic surgery. Effects of intra-operative antiemetics and postoperative oral fluid restriction. The usefulness of intra-operative antiemetics and postoperative oral fluid restriction in the prevention of vomiting following anaesthesia for ophthalmic surgery, was studied in 200 patients. They were allocated into four groups of 50 and given either saline (as control), droperidol, metoclopramide or prochlorperazine. Oral intake was restricted postoperatively in half of the patients of each group. Anaesthesia comprised morphine and atropine premedication and a halothane, nitrous oxide and oxygen spontaneous breathing technique. No significant beneficial effects resulted from intra-operative antiemetics; vomiting incidences of 26% after saline and droperidol, 28% after metoclopramide and 14% after prochlorperazine were observed. Younger patients and females vomited most frequently. Restriction of oral fluids did not decrease the incidence of vomiting but demonstrated that approximately half of those patients who vomit do so with their first postoperative oral intake. Vomiting was observed more frequently after non intra-ocular surgery than after intra-ocular surgery (37% cf. 16%, p less than 0.01) and postoperative analgesics were required by more non intra-ocular patients than by intra-ocular patients (25% cf. 5%, p less than 0.001). Squint patients vomited most frequently (48%) and most frequently required postoperative analgesia (35%). abstract_id: PUBMED:9793814 Prophylactic antiemetics for laparoscopic cholecystectomy: a comparison of perphenazine, droperidol plus ondansetron, and droperidol plus metoclopramide. Study Objective: To compare the prophylactic administration of ondansetron plus droperidol, droperidol plus metoclopramide, and perphenazine to determine effects on postoperative nausea, vomiting, and sedation after laparoscopic cholecystectomy. Design: Prospective, randomized, double-blind study. Setting: University medical center. Patients: 212 ASA physical status I and II adults presenting for laparoscopic cholecystectomy. Interventions: Patients were randomly assigned to receive one of three prophylactic antiemetic drug combinations: ondansetron 4 mg plus droperidol 0.625 mg (Group OD), droperidol 0.625 mg plus metoclopramide 10 mg (Group DM), or perphenazine 5 mg (Group P). Study drugs were administered intravenously after induction of general anesthesia. Measurements And Main Results: The groups were similar with respect to gender, age, weight, duration of surgery, numbers of patients receiving intraoperative atropine or ephedrine, number admitted overnight, and time to discharge home. Patients in Group P used lower total doses of opioids than did patients in Group OD. There were no significant differences in postoperative nausea, pain, or sedation scores, in numbers of patients requiring antiemetics (Group OD, 13 of 66; Group DM, 15 of 66; Group P, 14 of 68), or in numbers of patients vomiting, either in hospital or during the first postoperative day. Conclusions: These three drug regimens are equivalent for antiemetic prophylaxis before laparoscopic cholecystectomy. abstract_id: PUBMED:20403532 Prophylactic antiemetics in oral and maxillofacial surgery: a requiem?--a response. N/A abstract_id: PUBMED:18426627 Preoperatively administered ramosetron oral disintegrating tablets for preventing nausea and vomiting associated with patient-controlled analgesia in breast cancer patients. Background And Objectives: The purpose of this study was to investigate the acceptability and therapeutic efficacy of a preoperative single administration of long-acting 5-hydroxytryptamine type 3 (5-HT 3) receptor antagonist in an orally disintegrating tablet formulation, ramosetron, in breast cancer patients. Methods: Two hundred and forty women, ASA I-II, aged 24-60 yr, undergoing elective breast cancer surgery, were randomized. A standardized anaesthetic technique was used. Patients were assigned to receive one of three treatment regimens (n = 80 in each group): no prophylactic antiemetics (Group A), single prophylactic intravenous injection of ramosetron 0.1 mg at the completion of surgery (Group B) or preoperatively oral administration of 0.1 mg of ramosetron (Group C). Episodes of nausea and vomiting, the use of rescue antiemetic treatment, degree of pain, adverse events and level of satisfaction were recorded. Results: The overall incidence of nausea and vomiting during the first 24 h after the recovery in Groups B (27.8%) and C (25%) was decreased significantly compared with Group A (75.3%). The frequency of the use of rescue antiemetics was significantly lower in Group C (5.0%) compared with Groups A (53.2%) and B (15.2%). The patients in Group C were more satisfied with control of postoperative nausea and vomiting than others. Conclusion: Preoperative oral administration of ramosetron at a dose of 0.1 mg is an acceptable and effective way of reducing the incidence of postoperative nausea and vomiting in breast cancer patients. abstract_id: PUBMED:31573486 Propofol Drug Shortage Associated With Worse Postoperative Nausea and Vomiting Outcomes Despite a Mitigation Strategy. Drug shortages negatively affect patient care and outcomes. Postoperative nausea and vomiting (PONV) can be mitigated using risk assessment and prophylaxis. A 2012 propofol shortage provided an opportunity to study the impact of using prophylactic antiemetics and changing the technique from a propofol infusion to inhaled agents in an ambulatory surgery setting. We retrospectively collected data for 2,090 patients regarding PONV risk factors, anesthetic management, and PONV outcomes for periods before, during, and after the shortage. Patients during the propofol shortage experienced a higher incidence of PONV (11% vs 5% before the shortage), greater need for rescue antiemetics (3% vs 1%), and longer duration of stay (mean [SD] = 124 [115] minutes vs 118 [108] minutes). More patients in this group reported PONV at home (14% vs 7%), and 2 required unplanned admission or return to the hospital. During the shortage, patients had a 2-fold increase in the odds of PONV when adjusted for all risk factors. Antiemetics moderated the association between gender and PONV but did not change the effect of the shortage. Findings suggest that despite mitigation efforts, the inability to use propofol infusion was associated with worse PONV outcomes. abstract_id: PUBMED:20844345 A review of granisetron, 5-hydroxytryptamine3 receptor antagonists, and other antiemetics. Nausea and vomiting are 2 of the most upsetting adverse reactions of chemotherapy. Current guidelines propose 5-hydroxytryptamine3 (5-HT3) receptor antagonists as a pharmacologic intervention for acute and delayed nausea and vomiting [chemotherapy-induced nausea and vomiting (CINV)] associated with moderately and highly emetogenic chemotherapy. Meanwhile, both postoperative nausea and vomiting (PONV) and postdischarge nausea and vomiting are challenging situations after surgeries and procedures. Prophylactic and therapeutic combinations of antiemetics are recommended in patients at high risk of suffering from PONV and postdischarge nausea and vomiting. Granisetron (Kytril) is a selective 5-HT3 receptor antagonist that does not induce or inhibit the hepatic cytochrome P-450 system in vitro. There are also 4 other antagonists of 5-HT3 receptor (dolasetron, ondansetron, palonosetron, and tropisetron) being metabolized via the CYP2D6 and are subject to potential genetic polymorphism. The launch of a new class of antiemetics, the substance P/neurokinin1 receptor antagonists, was attributed to the scientific update on the central generator responsible for emesis and role of substance P. There has been mounting interest in exploring integrative medicine, either acupuncture or acustimulation of P6 (Nei-Kuwan), to complement the western medicine for prevention and management of nausea and vomiting. The potential application of cannabinoids, either alone or in combination with other agents of different mechanism, could contribute further to improve outcome in CINV. Implementation of future treatment guidelines for more effective management of CINV and PONV could certainly improve the efficacy and outcome of cancer and postoperative care. Answer: The question "Prophylactic antiemetics in oral and maxillofacial surgery: a requiem?" refers to the debate on whether routine use of prophylactic antiemetics is justified in oral and maxillofacial surgery to prevent postoperative nausea and vomiting (PONV). Based on the provided abstracts, there is evidence suggesting different perspectives on this issue. One study (PUBMED:19686923) concluded that there does not appear to be a rationale for the prophylactic administration of antiemetic drugs in oral and maxillofacial surgical procedures. The study adopted a "watch and wait" policy and found that simple gastric lavage (GL) provided significant relief for PONV, with antiemetic medications being considered only in case of non-responders and intractable PONV. This suggests that routine prophylactic antiemetics may not be necessary in all cases of oral and maxillofacial surgery. In contrast, other studies highlight the effectiveness of various antiemetic regimens in preventing PONV in different surgical contexts. For instance, a study comparing granisetron and domperidone found that preoperative oral granisetron was more effective than domperidone for preventing PONV after major gynecologic surgery (PUBMED:9842837). Another observational study suggested that a combination of a phenothiazine and a glucocorticoid was effective in preventing PACU rescue antiemetic administration in nonsmoking female patients undergoing breast surgery (PUBMED:26635998). Additionally, a study on laparoscopic cholecystectomy found that combination antiemetic therapy was often effective for the prevention of PONV following the procedure (PUBMED:9793814). The utility of antiemetics in preventing PONV is also supported by a study that found preoperative oral administration of ramosetron to be an acceptable and effective way of reducing the incidence of PONV in breast cancer patients (PUBMED:18426627). In summary, while one study suggests that prophylactic antiemetics may not be necessary in oral and maxillofacial surgery and that a "watch and wait" approach could be sufficient (PUBMED:19686923), other studies demonstrate the effectiveness of prophylactic antiemetics in preventing PONV in various surgical contexts (PUBMED:9842837, PUBMED:26635998, PUBMED:9793814, PUBMED:18426627). The decision to use prophylactic antiemetics should be based on individual patient risk factors, the type of surgery, and the anesthetic regimen employed.
Instruction: Can a pain management and palliative care curriculum improve the opioid prescribing practices of medical residents? Abstracts: abstract_id: PUBMED:12213144 Can a pain management and palliative care curriculum improve the opioid prescribing practices of medical residents? Background: Although opioids are central to acute pain management, numerous studies have shown that many physicians prescribe them incorrectly, resulting in inadequate pain management and side effects. We assessed whether a case-based palliative medicine curriculum could improve medical house staff opioid prescribing practices. Design: Prospective chart review of consecutive pharmacy and billing records of patients who received an opioid during hospitalization before and after the implementation of a curricular intervention, consisting of 10 one-hour case-based modules, including 2 pain management seminars. Measurements: Consecutive pharmacy and billing records of patients who were cared for by medical residents (n = 733) and a comparison group of neurology and rehabilitative medicine patients (n = 273) that received an opioid during hospitalization in 8-month periods before (1/1/97 to 4/30/97) and after (1/1/99 to 4/30/99) the implementation of the curriculum on the medical service were reviewed. Three outcomes were measured: 1) percent of opioid orders for meperidine; 2) percent of opioid orders with concomminant bowel regimen; and 3) percent of opioid orders using adjuvant nonsteroidal anti-inflammatory drugs (NSAIDs). Main Results: The percentage of patients receiving meperidine decreased in the study group, but not in the comparison group. The percentages receiving NSAIDs and bowel medications increased in both groups. In multivariate logistic models controlling for age and race, the odds of an experimental group patient receiving meperidine in the post-period decreased to 0.55 (95% confidence interval [95% CI], 0.32 to 0.96), while the odds of receiving a bowel medication or NSAID increased to 1.48 (95% CI, 1.07 to 2.03) and 1.53 (95% CI, 1.01 to 2.32), respectively. In the comparison group models, the odds of receiving a NSAID in the post-period increased significantly to 2.27 (95% CI, 1.10 to 4.67), but the odds of receiving a bowel medication (0.45; 95% CI, 0.74 to 2.00) or meperidine (0.85; 95% CI, 0.51 to 2.30) were not significantly different from baseline. Conclusions: This palliative care curriculum was associated with a sustained (&gt;6 months) improvement in medical residents' opioid prescribing practices. Further research is needed to understand the changes that occurred and how they can be translated into improved patient outcomes. abstract_id: PUBMED:34292770 Opioid epidemic and prescribing in hospice and palliative care: a review of the literature. Background: Rising rates of opioid abuse worldwide have led to the implementation of policies to curb opioid prescribing. It is unknown what impact these policies have on prescribing within the setting of hospice and palliative care. Objectives: To determine the current state of the science of opioid prescribing in hospice and palliative care in relation to the opioid epidemic and associated policies. Methods: A systematic integrative literature review was conducted using the Cumulative Index of Nursing and Allied Health Literature (CINAHL), PubMed, ProQuest Central and SCOPUS. Results: Most of the existing literature examines physician perspectives related to opioid prescribing in primary care settings. Ample evidence exists that policies can and do affect rates of opioid prescribing in specialties outside of hospice and palliative care. There is limited evidence to suggest how these policies affect opioid prescribing in hospice and palliative care. However, the available evidence suggests that opioids are necessary in hospice and palliative care in order to manage pain. Conclusion: Further research is necessary to examine the possible negative impact of the opioid epidemic on opioid prescribing in hospice and palliative care. abstract_id: PUBMED:37951062 An Institutional Curriculum for Opioid Prescribing Education: Outcomes From 2017 to 2022. Introduction: Prescription opioids, including those prescribed after surgery, have greatly contributed to the US opioid epidemic. Educating opioid prescribers is a crucial component of ensuring the safe use of opioids among surgical patients. Methods: An annual opioid prescribing education curriculum was implemented among new surgical prescribers at our institution between 2017 and 2022. The curriculum includes a single 75-min session which is comprised of several components: pain medications (dosing, indications, and contraindications); patients at high risk for uncontrolled pain and/or opioid misuse or abuse; patient monitoring and care plans; and state and federal regulations. Participants were asked to complete an opioid knowledge assessment before and after the didactic session. Results: Presession and postsession assessments were completed by 197 (89.6%) prescribers. Across the five studied years, the median presession score was 54.5%. This increased to 63.6% after completion of the curriculum, representing a median relative knowledge increase of 18.2%. The median relative improvement was greatest for preinterns and interns (18.2% for both groups); smaller improvements were observed for postgraduate year 2-5 residents (9.1%) and advanced practice providers (9.1%). On a scale of 1 to 10 (with 5 being comfortable), median (interquartile range) self-reported comfort in prescribing opioids increased from 3 (2-5) before education to 5 (4-6) after education (P &lt; 0.001). Conclusions: Each year, the curriculum substantially improved provider knowledge of and comfort in opioid prescribing. Despite increased national awareness of the opioid epidemic and increasing institutional initiatives to improve opioid prescribing practices, there was a sustained knowledge and comfort gap among new surgical prescribers. The observed effects of our opioid education curriculum highlight the value of a simple and efficient educational initiative. abstract_id: PUBMED:27559255 Specialist Pediatric Palliative Care Prescribing Practices: A Large 5-year Retrospective Audit. Introduction: There is a gradual increasing trend in childhood cancers in India and pediatric palliative care in India is an emerging specialty. Prescribing pain and symptom control drugs in children with cancer requires knowledge of palliative care formulary, dosing schedules, and prescription guidelines. This study is a retrospective audit of prescribing practices of a specialist palliative care service situated in a tertiary cancer center. Methods: A total of 1135 medication records of children receiving specialist pediatric palliative care services were audited for 5 years (2010-2014) to evaluate prescribing practices in children with advanced cancer. Results: A total of 51 types of drugs were prescribed with an average of 4.2 drugs per prescription. 66.9% of the prescriptions had paracetamol, and 33.9% of the prescriptions had morphine. Most common nonsteroidal anti-inflammatory drugs prescribed was ibuprofen (23.9%), and more than 50% of the prescriptions had aperients. The most commonly prescribed aperient was a combination of liquid paraffin and sodium-picosulfate. Dexamethasone was prescribed in 51.9% of patients and in most cases this was part of oral chemotherapy regimen. Generic names in prescription were used only in 33% of cases, and adverse effects of the drugs were documented in only 9% of cases. In 25% of cases, noncompliance to the WHO prescription guidelines was seen, and patient compliance to prescription was seen in 40% of cases. Conclusions: Audit of the prescribing practices in specialist pediatric palliative care service shows that knowledge of pediatric palliative care formulary, rational drug use, dosing, and prescribing guidelines is essential for symptom control in children with advanced life-limiting illness. Noncompliance to WHO prescribing guidelines in one fourth of cases and using nongeneric names in two-thirds of prescription indicates poor prescribing practices and warrants prescriber education. Prescription noncompliance by almost two-thirds of patients is alarming and necessitates intense patient, family, and caregiver education and empowerment. abstract_id: PUBMED:33647421 Feasibility of Safe Opioid Prescribing in Outpatient Palliative Care: A Quality Improvement Project. Background: No guidelines for safe opioid prescribing in palliative care exist, which contributes to limited monitoring of opioid misuse in palliative care. Measures: Feasibility of a safe opioid prescribing standard operating protocol (SOP) was determined by assessing the percentage of patients in an outpatient cancer center who completed each component of a five-component SOP. Intervention: A five-component SOP included: risk stratification for misuse, consent form, prescription drug monitoring program review, urine drug testing, and Naloxone for high-risk individuals. Outcomes: After one year, compliance rates on four of the of the five-component SOP were greater or equal to 93%. Naloxone co-prescription for high-risk patients never reached over 78%, largely due to clinical decision not to co-prescribe if transition to hospice was imminent. Conclusions/lessons Learned: Safe opioid prescribing measures are feasible in outpatient palliative care and can facilitate identification of individuals at risk for opioid misuse and prompt early interventions for misuse. abstract_id: PUBMED:28705485 Trainees as Agents of Change in the Opioid Epidemic: Optimizing the Opioid Prescription Practices of Surgical Residents. Objective: Opioid abuse has become an epidemic in the United States, causing nearly 50,000 deaths a year. Postoperative pain is an unavoidable consequence of most surgery, and surgeons must balance the need for sufficient analgesia with the risks of overprescribing. Prescribing narcotics is often the responsibility of surgical residents, yet little is known about their opioid-prescribing habits, influences, and training experience. Design: Anonymous online survey that assessed the amounts of postoperative opioid prescribed by residents, including type of analgesia, dosage, and number of pills, for a series of common general surgery procedures. Additional questions investigated influences on opioid prescription, use of nonnarcotic analgesia, degree of engagement in patient education on opioids, and degree of training received on analgesia and opioid prescription. Setting: Accreditation Council for Graduate Medical Education accredited general surgery program at a university-based tertiary hospital. Participants: Categorical and preliminary general surgery residents of all postgraduate years. Results: The percentage of residents prescribing opioids postprocedure ranged from 75.5% for incision and drainage to 100% for open hernia repair. Residents report prescribing 166.3 morphine milligram equivalents of opioid for a laparoscopic cholecystectomy, yet believe patients will only need an average of 113.9 morphine milligram equivalents. The most commonly reported influences on opioid-prescribing habits include attending preference (95.2%), concern for patient satisfaction (59.5%), and fear of potential opioid abuse (59.5%). Only 35.8% of residents routinely perform a narcotic risk assessment before prescribing and 6.2% instruct patients how to properly dispose of excess opioids. More than 90% of residents have not had formal training in best practices of pain management or opioid prescription. Conclusion And Relevance: Surgical trainees are relying almost exclusively on opioids for postoperative analgesia, often in excessive amounts. Residents are heavily influenced by their superiors, but are not receiving formal opioid-prescribing education, pointing to a great need for increased resident education on postoperative pain and opioid management to help change prescribing habits. abstract_id: PUBMED:30911556 Prescribing and Administration of Opioid Analgesics in Residents of Aged Care Facilities. Objective: Optimal pain management in residents requires an understanding of prescribing and administration of opioid analgesics. This study aimed to describe opioid administration for elderly residents in a selection of aged care facilities. Methods: A cross-sectional audit of analgesics administration was conducted for all 458 residents of three-aged care facilities on May 1, 2017. The facilities (each ~160 beds) represented a geographically diverse area in Perth, Australia, and varying service levels by nurse practitioners. Data were accessed using the iCare® platform and transcribed into a customized database. Data were reported descriptively, with relationships between categorical variables examined using the Chi-square analysis. Main outcome measures in the study were the prevalence of administration of opioids compared to that of nonopioid analgesics and no analgesics on the audit date, and characteristics of the opioid prescriptions (the type of prescriber and nursing/care staff involved in the dose administration). Findings: Of 458 residents, 95 (20.7%) received an opioid analgesic on the audit date; 231 had also received a nonopioid analgesic. The most common opioid (34 residents) was a brand of oxycodone hydrochloride and naloxone hydrochloride as 10/5 mg tablets. There was no significant tendency for opioid prescribing by classification of the prescriber, nor for any category of nursing/care staff to administer the particular types of analgesics. Conclusion: The tendency for prescribing of opioids showed no significant among the prescribers. Finally, the administration of opioids was predominantly by caregivers. This represents the first step in a program of activity to ensure the quality use of potent analgesics in an aged care provider network. abstract_id: PUBMED:33844624 Naloxone Prescribing and Education in Outpatient Pain Management and Palliative Care. Over the past two decades, opioid use and overdose have increased substantially. Naloxone, an opioid overdose reversal agent, has been one of many risk mitigation strategies for preventing mortality due to overdose. Most literature describing naloxone utilization has been about populations of illicit drug users and patients in hospitals, primary care, and pharmacies. There is limited information regarding naloxone prescribing and training for opioid users in specialty pain management clinics. Furthermore, there are no known publications concerning patients receiving palliative care services and overdose prevention. Pain and palliative care patients are commonly at risk of opioid overdose. In an interdisciplinary outpatient pain and palliative care clinic, pharmacists implemented naloxone prescribing and education. Eleven patients at increased risk for overdose were prescribed naloxone and educated on overdose risk factors, recognition, and management. Seven patients reported picking up their naloxone prescription from the pharmacy, and none reported using it within two weeks of the initial education. This intervention was deemed successful within the clinic, but small sample size and the pharmacist role may not be replicable within other pain and palliative care settings. It encourages further research of overdose risk and prevention in pain management and palliative care. abstract_id: PUBMED:34991556 A focus on the future of opioid prescribing: implementation of a virtual opioid and pain management module for medical students. Background: The United States opioid epidemic is a devastating public health crisis fueled in part by physician prescribing. While the next generation of prescribers is crucial to the trajectory of the epidemic, medical school curricula designated to prepare students for opioid prescribing (OP) and pain management is often underdeveloped. In response to this deficit, we aimed to investigate the impact of an online opioid and pain management (OPM) educational intervention on fourth-year medical student knowledge, attitudes, and perceived competence. Methods: Graduating students completing their final year of medical education at Sidney Kimmel Medical College of Thomas Jefferson University were sent an e-mail invitation to complete a virtual OPM module. The module consisted of eight interactive patient cases that introduced topics through a case-based learning system, challenging students to make decisions and answer knowledge questions about the patient care process. An identical pre- and posttest were built into the module to measure general and case-specific learning objectives, with responses subsequently analyzed using the Wilcoxon matched-pairs signed-rank test. Results: Forty-three students (19% response rate) completed the module. All median posttest responses ranked significantly higher than paired median pretest responses (p &lt; 0.05). Comparing the paired overall student baseline score to module completion, median posttest ranks (Mdn = 206, IQR = 25) were significantly higher than median pretest ranks (Mdn = 150, IQR = 24) (p &lt; 0.001). Regarding paired median Perceived Competence Scale metrics specifically, perceived student confidence, capability, and ability in opioid management increased from "disagree" (2) to "agree" (4) (p &lt; 0.001), and student ability to meet the challenge of opioid management increased from "neither agree nor disagree" (3) to "agree" (4) (p &lt; 0.001). Additionally, while 77% of students reported receiving OP training in medical school, 21% reported no history of prior training. Conclusion: Implementation of a virtual, interactive module with clinical context is an effective framework for improving the OPM knowledge, attitudes, and perceived competence of fourth-year medical students. This type of intervention may be an important method for standardizing and augmenting the education of future prescribers across multiple institutions. abstract_id: PUBMED:33150202 Teaching Pain Management in Serious Illness in the Era of the Opioid Epidemic: A Team-Based Intervention. Introduction: Despite the prevalence of pain in patients with serious illness, recent guidelines for opioid prescribing practices have largely excluded palliative care patients. In lieu of such guidelines, many have recommended adapting risk mitigation strategies from the chronic pain arena for palliative care and oncology populations. Teaching interventions are needed to demonstrate how these methods can be applied to patients with serious illness. Methods: We developed a teaching intervention for fourth-year medical students to improve knowledge about safe opioid prescribing practices in palliative care patients and emphasized both effective and safe pain management. A secondary aim of the intervention was to demonstrate how a palliative care interdisciplinary team works together to care for a complex patient near the end of life. The intervention lasted 1 hour and consisted of an interdisciplinary case presentation as well as a slide presentation. Results: Twenty-two medical students attended the session over 2 years. After the intervention, medical students better understood risk mitigation strategies and felt more strongly that opioids can be a useful tool in treating pain for patients with serious illness. Students' familiarity with palliative care interdisciplinary roles also improved after the intervention. Discussion: This session was a useful part of a palliative care 2-week classroom elective and was well received by students. The development of a survey tool that assesses student attitudes around effective and safe pain management in patients with serious illness may be of use to others who teach pain management in palliative care populations. Answer: Yes, a pain management and palliative care curriculum can improve the opioid prescribing practices of medical residents. A study assessing the impact of a case-based palliative medicine curriculum, which included pain management seminars, on medical house staff opioid prescribing practices found that after the curricular intervention, there was a significant decrease in the percentage of patients receiving meperidine and an increase in the percentages receiving NSAIDs and bowel medications. The curriculum was associated with a sustained improvement in medical residents' opioid prescribing practices, suggesting that further research is needed to understand the changes and how they can be translated into improved patient outcomes (PUBMED:12213144). Additionally, an institutional curriculum for opioid prescribing education implemented among new surgical prescribers showed substantial improvement in provider knowledge of and comfort in opioid prescribing, highlighting the value of educational initiatives in this area (PUBMED:37951062). Furthermore, a teaching intervention for fourth-year medical students aimed at improving knowledge about safe opioid prescribing practices in palliative care patients demonstrated that students better understood risk mitigation strategies and the usefulness of opioids in treating pain for patients with serious illness after the intervention (PUBMED:33150202). These findings collectively indicate that educational interventions and curricula focused on pain management and palliative care can effectively enhance the opioid prescribing practices of medical residents and other healthcare providers.
Instruction: Prolonged follow-up after paediatric pyeloplasty: are repeat scans necessary? Abstracts: abstract_id: PUBMED:17535274 Prolonged follow-up after paediatric pyeloplasty: are repeat scans necessary? Objective: To establish the value of repeat renal scans to assess the relative renal function after successful paediatric pyeloplasty. Patients And Methods: Renal scans were assessed from 138 patients after successful pyeloplasty; the relative renal function before and immediately after pyeloplasty were compared, as was split renal function at 3.5 and 5.5 years after surgery. Results: With a few exceptions the relative renal function before and after pyeloplasty was stable. Repeat renal scans 3.5 and 5.5 years after surgery also showed stable split renal function, even if the renal function was already diminished. Conclusions: Repeat renal scans are not necessary after successful pyeloplasty in children, as most operated renal units showed stable renal function for 5-7 years. Further studies are needed to investigate whether renal function changes further after puberty. abstract_id: PUBMED:36670025 A systematic review of point-of-care abdominal ultrasound scans performed by general surgeons in paediatric patients. Background/purpose: Bedside point-of-care ultrasound scans are a cheap, quick and safe diagnostic tool. There is increasing evidence for the use of point-of-care surgeon-performed ultrasound scans in adults, however there are fewer studies of its use in children. This systematic review aims to provide an up-to-date summary of the evidence behind surgeon-performed ultrasound scans in paediatric surgery. Methods: The PubMed database was used to conduct this systematic review between the dates 1 Jan 1980 to 1 June 2020 (last search: 1 June 2020). Seven primary research studies were included in this review. Results: There is good evidence for the use of ultrasound scans in appendicitis and hypertrophic pyloric stenosis. Training times are easily achievable and transferable within a surgical department. Conclusions: Although the use of surgeon-performed bedside ultrasound scans has been described in appendicitis and hypertrophic pyloric stenosis, more research is required to embed this into clinical practice, particularly in low volume centres such as district general hospitals. A robust training programme is also recommended to incorporate ultrasound scans into clinical practice. abstract_id: PUBMED:29104682 The Necessity of Follow-Up Brain Computed-Tomography Scans: Is It the Pathology Itself Or Our Fear that We Should Overcome? Aim: This study aimed to make a retrospective analysis of pediatric patients with head traumas that were admitted to one hospital setting and to make an analysis of the patients for whom follow-up CT scans were obtained. Methods: Pediatric head trauma cases were retrospectively retrieved from the hospital's electronic database. Patients' charts, CT scans and surgical notes were evaluated by one of the authors. Repeat CT scans for operated patients were excluded from the total number of repeat CT scans. Results: One thousand one hundred and thirty-eight pediatric patients were admitted to the clinic due to head traumas. Brain CT scan was requested in 863 patients (76%) in the cohort. Follow-up brain CT scans were obtained in 102 patients. Additional abnormal finding requiring surgical intervention was observed in only one patient (isolated 4th ventricle hematoma) on the control CTs (1% of repeat CT scans), who developed obstructive hydrocephalus. None of the patients with no more than 1 cm epidural hematoma in its widest dimension and repeat CT scans obtained 1.5 hours after the trauma necessitated surgery. Conclusion: Follow-up CT scans changed clinical approach in only one patient in the present series. When ordering CT scan in the follow-up of pediatric traumas, benefits and harms should be weighted based upon time interval from trauma onset to initial CT scan and underlying pathology. abstract_id: PUBMED:30368599 Long-term mortality and functional outcome after prolonged paediatric intensive care unit stay. We performed a retrospective, observational study of patients who had spent &gt; 14 days in the paediatric intensive care unit (PICU) of our hospital from 2011 to 2013. Specifically, long-term mortality, functional outcome, and PICU resource occupancy were examined. All prolonged-stay patients in our study were &lt; 15 years of age. Favourable outcomes were defined as a Pediatric Overall Performance Category (POPC) score of 1-2, and unfavourable outcomes as a POPC score of 3-6 or death. During the study period, there were 1082 PICU admissions involving 805 patients, 111 (13.8%) of whom had one or more prolonged PICU stays. Among these patients, 100 (90%) survived to PICU discharge and 92 (83%) survived to hospital discharge. At the 3-year follow-up, the survival rate was 75% (77/102; nine patients were lost to follow-up) and the favourable outcome rate was 43% (44/102) (57% among survivors). Prolonged PICU-stay patients accounted for 50.5% of the PICU patient-days. Extremely prolonged stays (≥ 28 days) correlate with low favourable outcome rates (P = 0.03), but did not correlate with mortality rates (P = 0.16).Conclusion: Although prolonged PICU-stay patients utilized many PICU resources, most survived at least 3 years, and &gt; 50% of the survivors had a favourable functional outcome (POPC score). What is Known: • The number of patients with prolonged paediatric intensive care unit (PICU) stays is increasing. • These patients utilize many resources and are at high risk for mortality and disabilities. What is New: • Although prolonged-stay patients accounted for 50% of PICU patient-days, their 3-year survival rate and favourable functional outcome rate (based on Pediatric Overall Performance Category scores) were relatively high. • Extremely prolonged stays (≥ 28 days) correlate with low favourable functional outcomes but not with mortality. abstract_id: PUBMED:22970059 Multislice CT of the head and body routine scans: Are scanning protocols adjusted for paediatric patients? Purpose: To investigate whether the multislice CT scanning protocols of head, chest and abdomen are adjusted according to patient's age in paediatric patients. Materials And Methods: Multislice CT examination records of paediatric patients undergoing head, chest and abdomen scans from three public hospitals during a one-year period were retrospectively reviewed. Patients were categorised into the following age groups: under 4 years, 5-8 years, 9-12 years and 13-16 years, while the tube current was classified into the following ranges: &lt; 49 mA, 50-99 mA, 100-149 mA, 150-199 mA, &gt; 200 mA and unknown. Results: A total of 4998 patient records, comprising a combination of head, chest and abdomen CT scans, were assessed, with head CT scans representing nearly half of the total scans. Age-based adjusted CT protocols were observed in most of the scans with higher tube current setting being used with increasing age. However, a high tube current (150-199 mA) was still used in younger patients (0-8 years) undergoing head CT scans. In one hospital, CT protocols remained constant across all age groups, indicating potential overexposure to the patients. Conclusion: This analysis shows that paediatric CT scans are adjusted according to the patient's age in most of the routine CT examinations. This indicates increased awareness regarding radiation risks associated with CT. However, high tube current settings are still used in younger patient groups, thus, optimisation of paediatric CT protocols and implementation of current guidelines, such as age-and weight-based scanning, should be recommended in daily practice. abstract_id: PUBMED:31656474 Patterns of paediatric emergency admissions and predictors of prolonged hospital stay at the children emergency room, University of Calabar Teaching Hospital, Calabar, Nigeria. Background: There is a high prevalence of paediatric emergency cases in less developed countries. However, prolonged hospital stay at emergency units may further overstretch the facilities. Objective: To assess the patterns of presentations, services offered and predictors of a prolonged stay at the Children Emergency Room of a tertiary hospital in Southern Nigeria. Methods: This prospective cross-sectional, study was conducted at the University of Calabar Teaching Hospital, Nigeria from 1st January-31st December 2014. Socio-demographic and clinical characteristics of consecutively recruited children (n=633) were recorded in a proforma. Binary logistic regression was conducted to determine predictors of prolonged stay (&gt;72 hours). Result: The median age of participants was 2 (1 - 4.6) years. Three-fifths of children were admitted at off-hours and the commonest symptom was fever (73.9%). About 16.4% (95%CI:13.6% - 19.4%, n= 103/633) of the children had prolonged stay while those with sepsis had the longest mean stay (65.5±72.1 hours). Children admitted on account of Sickle cell disease (OR:11.2, 95%CI:1.3-95.1, P-value = 0.03), Malaria (OR:10.7, 95%CI:1.4-82.5, P-value = 0.02) or sepsis (OR:10.5, 95%CI:1.3 - 82.7, P-value = 0.03) had higher odds of prolonged hospital stay. There was no significant difference in hospital stay among children admitted by the consultant as compared to other health personnel (P-value = 0.08). Conclusion: Prevention and proper management of Sickle cell disease and malaria reduces paediatric hospital stay in our environment. Paediatric emergency medicine should be re-organized to cater for high volume of off-hour admissions. abstract_id: PUBMED:26036205 Evaluating an outreach service for paediatric burns follow up. Complications following paediatric burns are well documented and care needs to be taken to ensure the appropriate follow up of these patients. Historically this has meant follow up into adulthood however this is often not necessary. The centralisation of burns services in the UK means that patients and their parents may have to travel significant distances to receive this follow up care. To optimise our burns service we have introduced a burns outreach service to enable the patients to be treated closer to home. The aim of this study is to investigate the impact of the introduction of the burns outreach service and within this environment define the optimum length of time needed to follow up these patients. A retrospective analysis was carried out of 100 consecutive paediatric burns patients who underwent surgical management of their burn. During the follow up period there were 43 complications in 32 patients (32%). These included adverse scarring (either hypertrophic or keloid), delayed healing (taking &gt;1 month to heal) and contractures (utilising either splinting or surgical correction). Fifty-nine percent of these complications occurred within 6 months of injury and all occurred within 18 months. Size of burn was directly correlated to the risk of developing a complication. The outreach service reduced the distance the patient needs to travel for follow up by more than 50%. There was also a significant financial benefit for the service as the follow up clinics were on average 50% cheaper with burns outreach than burns physician. Burns outreach is a feasible service that not only benefits the patients but also is cheaper for the burns service. The optimum length of follow up for paediatric burns in 18 months, after which if there have not been any complications they can be discharged. abstract_id: PUBMED:32596172 Investigation of the Paediatric Tracheostomy Decannulation: Factors Affecting Outcome. Introduction: Evidence for factors determining paediatric tracheostomy decannulation vary extensively; therefore, this prospective observational study aimed to investigate these factors. Materials And Methods: In total, 67 consecutive paediatric patients (&lt;12 years old) who referred to the Department of Otolaryngology, (Postgraduate Institute Medical Education and Research),(Chandigarh), India, for decannulation were included and evaluated for contributing factors in this study. Parental counselling was performed, and informed consents were obtained from them. The patients underwent detailed work up including X-rays of airway/soft tissue neck (STN) and endoscopic assessment under anaesthesia for evaluating airway patency. Decannulations were attempted post assessment and followed up one month to classify decannulation as success or failure regarding the removal of the tracheostomy tube. Results: Totally, 61 patients out of 67 cases were successfully decannulated, whereas six children failed the decannulation. Moreover, the duration of tracheostomy (Pearson's Chi-square 35.330, P=0.013), indication of tracheostomy (Pearson's Chi-square 21.211, P=0.000), STN X-Ray (Chi-square 43.249, P=0.000), and bronchoscopic findings (Chi-square 67.000, P=0.000) were significantly associated with the outcome of decannulation. However, decannulation outcome had no significant correlation with various factors, such as the duration of intubation preceding tracheostomy, duration of ventilation, tracheal swabs, and antibiotic therapy. Conclusion: The STN X-ray is an independent predictor, and it is recommended for paediatric tracheostomy decannulation. Moreover, bronchoscopic assessment should be performed in children having doubtful infra-stomal airway. Duration of tracheostomy significantly affects decannulation outcome. However, intubation duration preceding tracheostomy and duration of assistive ventilation have no direct effects on the outcome of decannulation. In children, gradual decannulation should be preferred and one month follow up is adequate for deciding decannulation outcome. abstract_id: PUBMED:23845570 Radiation from CT scans in paediatric trauma patients: Indications, effective dose, and impact on surgical decisions. Objectives: The purpose of this study was to determine the effective dose of radiation due to computed tomography (CT) scans in paediatric trauma patients at a level 1 Canadian paediatric trauma centre. We also explored the indications and actions taken as a result of these scans. Patients And Methods: We performed a retrospective review of paediatric trauma patients presenting to our centre from January 1, 2007 to December 31, 2008. All CT scans performed during the initial trauma resuscitation, hospital stay, and 6 months afterwards were included. Effective dose was calculated using the reported dose length product for each scan and conversion factors specific for body region and age of the patient. Results: 157 paediatric trauma patients were identified during the 2-year study period. Mean Injury Severity Score was 22.5 (range 12-75). 133 patients received at least one CT scan. The mean number of scans per patient was 2.6 (range 0-16). Most scans resulted in no further action (56%) or additional imaging (32%). A decision to perform a procedure (2%), surgery (8%), or withdrawal of life support (2%) was less common. The average dose per patient was 13.5mSv, which is 4.5 times the background radiation compared to the general population. CT head was the most commonly performed type of scan and was most likely to be repeated. CT body, defined as a scan of the chest, abdomen, and/or pelvis, was associated with the highest effective dose. Conclusions: CT is a significant source of radiation in paediatric trauma patients. Clinicians should carefully consider the indications for each scan, especially when performing non-resuscitation scans. There is a need for evidence-based treatment algorithms to assist clinicians in selecting appropriate imaging for patients with severe multisystem trauma. abstract_id: PUBMED:34842633 Congenital Adrenal Hyperplasia: Diagnostic Pitfalls in Prolonged Neonatal Jaundice. Congenital Adrenal Hyperplasia (CAH) is a genetic disorder that leads to cortisol deficiency. However, prolonged neonatal jaundice is a rare presentation of CAH. The pathophysiology of hyperbilirubinemia in CAH is still ill-defined. Plausible causes are related to the synthesis of bile, maturity of the liver and adrenal function. This case reported a neonate who presented with severe prolonged jaundice that lasted for more than a month. A short Synacthen test confirmed diagnosis of CAH. He was started on steroid replacement. He had regular follow-up under paediatric endocrinologist and primary care physician for long-term monitoring and overall health care. This case demonstrates the importance of recognizing the clinical and biochemical features of CAH for early detection and referral. Long-term follow-up and monitoring is necessary due to the risk of complications and side effects of medications. This is the first case of CAH presented with persistent hyperbilirubinemia to be reported from Malaysia. The case describes the difficult workup that has been encountered in the patient's care and management. Answer: Based on the findings from the study with PUBMED:17535274, repeat renal scans are not necessary after successful paediatric pyeloplasty in children, as most operated renal units showed stable renal function for 5-7 years. The study compared the relative renal function before and immediately after pyeloplasty, as well as split renal function at 3.5 and 5.5 years after surgery, and found that with a few exceptions, the relative renal function was stable. Therefore, the study concludes that further studies are needed to investigate whether renal function changes after puberty, but in the meantime, repeat scans do not appear to be necessary for follow-up in the majority of cases.
Instruction: Dry hot air: is it a reliable method of disinfection for urological endoscopes? Abstracts: abstract_id: PUBMED:19616280 Dry hot air: is it a reliable method of disinfection for urological endoscopes? Preliminary study. Objectives: To evaluate dry hot air effect as a reliable method of disinfection of urological endoscopes. Methods: Two groups of urological endoscopic instruments (2 cystoscopic sheaths, 2 bridges, and 2 working elements of resectoscope and urethrotome for each group), after being sterilized by full immersion in 2% glutaraldehyde solution for 10 hours, were contaminated by immersion in suspensions of Staphylococcus saprophyticus (3 x 10(3) CFU/mL), Escherichia coli (28 x 10(2) CFU/mL), and Candida albicans (27 x 10(2) CFU/mL). Group I was exposed to disinfection by full immersion in 2% glutaraldehyde solution for 20 minutes. Group II was exposed to dry hot air obtained by using a hair dryer for 3 minutes. The surfaces and lumens of the instruments of both groups were washed with specific volumes of saline. The surface and luminal specimens were collected separately and used for culture on different media specific for these organisms. Colony-forming units were estimated. Results: In group I, the cultures isolated from the instrument surface washes were negative but those isolated from the luminal injected saline were all positive for S. saprophyticus (60 CFU/mL), E. coli (50 CFU/mL), and C. albicans (50 CFU/mL). In group II, the cultures isolated from both the instrument surface and luminal saline washes were negative for all 3 above-mentioned microorganisms. Conclusions: Hot dry air by using a hair dryer is an inexpensive, rapid, easy, easily available, and effective method of disinfection compared with glutaraldehyde solution. abstract_id: PUBMED:19033060 Sterilization and disinfection of endoscopes in urology Sterilization and disinfection of endoscopes take account of the risk of transmitted infections and nosocomial infections. These risks are ruled by legal texts. Urology is a high risk speciality. The material which is used must be single use or at least sterilisable (18min at 134 degrees C). Flexible endoscopes are sensitive to high temperatures and needs disinfection, and immediate use. These steps are subjected to quality control rules and marking. abstract_id: PUBMED:8934166 Deficiencies of automatic endoscopic reprocessors: a method to achieve high-grade disinfection of endoscopes. Background: We show that disinfection using the automatic endoscopic reprocessor is not complete and propose a method for high-grade disinfection of endoscopes. Methods: We used an automatic endoscopic reprocessor, Pyser System 83, and 2% glutaraldehyde. After each endoscopic procedure, the endoscopes were divided into three groups. Endoscopes in group A were washed only by the reprocessor. Group B endoscopes were washed by the reprocessor after the connectors were soaked in glutaraldehyde for 5 minutes. The channels, valves, connecting sections of group C endoscopes, and the connectors of the machine were sprayed with glutaraldehyde before machine-washing. Swabs were taken from all 13 parts of each endoscope and machine for microbiologic culture. Results: Six endoscopes were positive, cumulatively, for bacterial contamination in group A. Among group B endoscopes, one remained contaminated. No endoscope was positive in group C. The difference between group A and C was statistically significant (p &lt; .05). Conclusions: Machine washing by automatic endoscopic reprocessors may not achieve complete disinfection. Additional procedures are necessary. High-grade disinfection of the connectors is critical. Disinfection of the interface between the connectors is important. abstract_id: PUBMED:23578363 Evaluation of a storage cabinet for heat-sensitive endoscopes in a clinical setting. Background: In most countries, endoscopes must be disinfected or fully reprocessed before the beginning of each session, even if they were cleaned and disinfected after their last use. Several storage cabinets for heat-sensitive endoscopes (SCHE) are commercially available. They are designed to maintain the microbiological quality of reprocessed endoscopes for a predefined period of time validated by the SCHE manufacturer. Use of an SCHE increases the acceptable storage time before it is necessary to re-disinfect the endoscope. Aim: To evaluate the efficacy of an SCHE (DSC8000, Soluscope, SAS Marseilles, France) in a clinical setting. Method: The microbiological quality of endoscopes was assessed after 72 h of storage in an SCHE (Group I), and compared with the microbiological quality of endoscopes stored for 72 h in a clean, dry, dedicated cupboard without morning disinfection (Group II) and the microbiological quality of endoscopes stored for 72 h in a clean, dry, dedicated cupboard with morning disinfection (Group III). Forty-one endoscopes in each group were sampled for microbiological quality. Endoscope contamination levels were analysed according to guidelines published by the National Technical Committee on Nosocomial Infection in 2007. Findings/conclusion: Use of an SCHE helps to maintain the microbiological quality of endoscopes, provided that staff members are well trained and all practices are framed by a proven quality assurance process. abstract_id: PUBMED:15229429 Bacteriologic testing of endoscopes after high-level disinfection. Background: There are no definitive data available concerning microbiologic safety of prolonged endoscope storage after reprocessing and disinfection. This study evaluated the durability of high-level disinfection of endoscopes stored in a dust-proof cabinet for 5 days. Methods: Three different types of endoscopes (upper endoscopes, duodenoscopes, colonoscopes) were tested. After completion of the endoscopic procedure, endoscopes were subjected to an initial decontamination, followed by manual cleaning with the endoscope immersed in detergent. The endoscopes then were placed in an automatic reprocessor that provides high-level disinfection. They then were stored by hanging in a dust-proof cabinet. Bacteriologic samples were obtained from the surface of the endoscopes, the openings for the piston valves, and the accessory channel daily for 5 days, and by flush-through (combined with brushing) from the accessory channels after 5 days of storage. Samples were cultured for all types of aerobic and anaerobic bacteria, including bacterial spores, and for Candida species. Results: For all assays, all endoscopes were bacteria-free immediately after high-level disinfection. Only 4 assays (of 135) were positive during the subsequent 5-day assessment (skin bacteria cultured from endoscope surfaces). All flush-through samples were sterile. Conclusions: When endoscope reprocessing guidelines are strictly observed and endoscopes are stored in appropriate cabinets for up to 5 days, reprocessing before use may not be necessary. abstract_id: PUBMED:10670140 Regulatory aspects of disinfection of endoscopes The circular on the sterilization and the law of july 1, 1998 enact clearly that the medical devices that support steam sterilization must be sterilized with an organization of sterilization that ensure quality. Endoscope that enter normally sterile tissues should be subjected to a sterilization procedure before each use; if this is not feasible, they should receive high-level disinfection to destroy bacterial spores. The endoscope must be immersed for at least one hour in aqueous solution of 2% glutaraldehyde. This lengthened duration of processing must be integrate in the organization of medical department. The endoscopes not penetrating in a sterile cavity are disinfected with manual processing according to the protocol of the circular of april 1996, or with automated endoscope reprocessing machine. The different types of automated machine used to wash and disinfect endoscope must now answer to criterion concerning their design described in the circular of july 15, 1998. A significant work has to be made in hospitals to conform automated machine and procedure, to control the risk dependent with their use. abstract_id: PUBMED:31241788 Guidelines for standardizing cleansing and disinfection of gastrointestinal endoscopes. As part of the activities toward standardizing endoscopy procedures, the Japan Gastroenterological Endoscopy Society has prepared guidelines for cleansing and disinfection of gastrointestinal endoscopes. The environment of gastrointestinal endoscopy differs between Japan and advanced Western countries. In advanced Western countries, gastrointestinal endoscopy is performed almost exclusively at specialized facilities, where strict provisions are observed for cleansing and disinfecting endoscopes. In Japan, however, gastrointestinal endoscopy is performed even in small clinics, and the annual number of gastrointestinal endoscopy cases is enormous. In addition, the method for cleansing and disinfecting endoscopes differs among hospitals. Although there is a distinct lack of evidence for how gastrointestinal endoscopes are cleaned and disinfected, it is necessary to standardize the method for doing so to advance the field of endoscopic medicine. abstract_id: PUBMED:10477367 Cleaning and disinfection of gastrointestinal endoscopes. Comparative analysis of two disinfectants. Unlabelled: Gastrointestinal endoscopy can lead to infectious complications, and endoscopes must be disinfected to prevent them. Aim: to evaluate three methods of disinfection: 1) usual cleaning technique and immersion in glutaraldehyde phenolate (GP); 2) meticulous cleaning and immersion in GP, and 3) meticulous cleaning and immersion in hydrogen peroxide. Method: thirty endoscopes (15 gastroscopes and 15 colonoscopes) were disinfected with each method. Samples were taken following endoscopic exploration, after cleaning and after disinfection, and were cultured. The number of positive culture (cfu/ml &gt; 1) was counted. Results: the rate of contamination of endoscopes did not decrease significantly after cleaning with method 1 (66 vs 60%), but did decrease with method 2 (38 vs 16%) and method 3 (53 vs 17%). The contamination rate after cleaning was significantly lower with methods 2 and 3 (p &lt; 0.005). This rate was also lower after disinfection (p &lt; 0.025). Method 3 achieved 0% contamination following disinfection. Conclusions: the greatest decrease in contamination rate was achieved with conscientious cleaning followed by disinfection. Both disinfectants yielded similar results, although hydrogen peroxide produced a higher level of disinfection. abstract_id: PUBMED:10573955 Guidelines for cleaning and disinfection/sterilization of endoscopes Endoscopy is a diagnostic and therapeutic method which is being increasingly used in various fields of medicine, especially in minimal invasive surgery. During the endoscopic procedure, endoscopes are contaminated with patient's microbial flora. After each procedure and before the next patient, endoscope should be reprocessed in a way to be safe from post-procedural infection. Endoscopes are divided in two categories (the borders between them are not always clear-cut): high-risk category endoscopes which enter the sterile tissue, and medium-risk category which come in contact with mucosal surface. High-risk endoscopes should be sterilized or high-level disinfected, and medium-risk should be high-level disinfected. The first and the most important step in endoscope reprocessing is thorough manual cleaning of all parts of dismantled endoscope and of all channels in water and (enzymatic) detergent. The second step is disinfection of endoscope fully immersed in 2% glutaraldehyde for 20 minutes at room temperature. The third step is thorough rinsing in sterile water or tap water followed by 70% ethanol, depending on the next endoscopic procedure. Steps 2-4 can be done in the machine. During endoscopy as well as during endoscope reprocessing, strict preventive measures should be followed for health care workers protection. abstract_id: PUBMED:658032 Experiences and problems in the disinfection of fibre endoscopes. The problem of the disinfection of fibre endoscopes is to achieve a good efficiency together with easy handling. There is not yet a disinfectant which shows a sufficient germicidal effect after a short time of contact with the fibre endoscope and which is harmless to the patient and medical staff. This paper deals with a two-stage disinfecting procedure in which the instrument undergoes a short intermediate disinfection using polyvidoneiodine (Betaisodona) between two investigations and a main disinfection at the end of a series of investigations using glutardialdehyde (Cidex). The measures lead to a sufficient reduction of the number of germs. The importance of the disinfection of all parts of the endoscopes, especially the channels and the accessories (water bottles etc.) is pointed out. Answer: Based on the preliminary study described in the abstract from PUBMED:19616280, dry hot air appears to be a reliable method of disinfection for urological endoscopes. In the study, two groups of urological endoscopic instruments were contaminated with various microorganisms and then subjected to different disinfection methods. Group I was disinfected with full immersion in 2% glutaraldehyde solution for 20 minutes, while Group II was exposed to dry hot air using a hair dryer for 3 minutes. The results showed that in Group I, cultures isolated from the luminal injected saline were all positive for the microorganisms, indicating that the disinfection was not completely effective. However, in Group II, the cultures isolated from both the instrument surface and luminal saline washes were negative for all three microorganisms tested, suggesting that the dry hot air method was effective in disinfecting the instruments. The study concluded that hot dry air using a hair dryer is an inexpensive, rapid, easy, easily available, and effective method of disinfection compared with glutaraldehyde solution. Therefore, based on this preliminary study, dry hot air can be considered a reliable method of disinfection for urological endoscopes. However, it is important to note that this is a preliminary study, and further research may be needed to confirm these findings and to establish dry hot air as a standard practice for disinfecting urological endoscopes.
Instruction: Endofibrosis in athletes: is a simple bedside exercise helpful or sufficient for the diagnosis? Abstracts: abstract_id: PUBMED:19638821 Endofibrosis in athletes: is a simple bedside exercise helpful or sufficient for the diagnosis? Objective: Ankle-to-brachial index (ABI) can be easily performed by all physicians. The Ruffier-Dickson (RD) test is an easy procedure to attain moderate exercise at the bedside for physicians who do not have an ergometer. Design: Retrospective analysis. Setting: Tertiary care, institutional practice. Patients: Fifty-three asymptomatic athletes and 22 patients suffering from unilateral pain due to histologically proven exercise-induced arterial endofibrosis (EIAE). Intervention: Brachial and ankle systolic blood pressures were measured in the supine position on the suspected leg in EIAE or left leg in controls, at rest (rest) and at the first minute of the recovery from incremental maximal cycle ergometer exercise (maxCE) and Ruffier-Dickson (RD) exercise tests. Main Outcome Measures: Comparison of ABI(rest), ABI(maxCE), and ABI(RD) in discriminating patients from normal subjects, using receiver operating characteristics (ROC) curves. Results: Area (+/-SE of area) of ROC curve was 0.76 +/- 0.06 for ABI(rest), 0.83 +/- 0.05 for ABI(RD) (nonsignificant from rest), and 0.99 +/- 0.01 for ABI(maxCE) (P &lt; 0.01 from ABI(RD) and P &lt; 0.001 from ABI(rest)). An ABI(maxCE) below 0.48 was 100% specific and 80% sensitive for EIAE. For the RD test, a 100% negative predictive value was only attained for postexercise ABI values higher than 0.92. Conclusion: ABI after maximal cycle ergometer exercise is more accurate than ABI after an RD test to search for unilateral EIAE in athletes. abstract_id: PUBMED:27209899 Diagnosis and Management of Iliac Artery Endofibrosis: Results of a Delphi Consensus Study. Objective: Iliac endofibrosis is a rare condition that may result in a reduction of blood flow to the lower extremity in young, otherwise healthy individuals. The data to inform everyday clinical management are weak and therefore a Delphi consensus methodology was used to explore areas of consensus and disagreement concerning the diagnosis and management of patients with suspected iliac endofibrosis. Methods: A three-round Delphi questionnaire approach was used among vascular surgeons, sports physicians, sports scientists, radiologists, and clinical vascular scientists with experience of treating this condition to explore diagnosis and clinical management issues for patients with suspected iliac artery endofibrosis. Analysis is based on 18 responses to round 2 and 14 responses to round 3, with agreement reported when 70% of respondents were in agreement. Results: Initially there was agreement on the typical symptoms at presentation and the need for an exercise test in the diagnosis. Round 3 clarified that duplex ultrasound was a useful tool in the diagnosis of endofibrosis. There was consensus on the most appropriate type of surgery (endarterectomy and vein patch) and that endovascular interventions were inadvisable. The final round helped to inform aspects of the natural history and post-operative surveillance. Progression of the disease was likely with continued exercise but cessation may prevent progression. Surveillance after surgery is generally recommended yearly with at least a clinical assessment. Conclusions: There is broad agreement about the presenting symptoms and the investigations required to confirm (or exclude) the diagnosis of iliac endofibrosis. There was consensus on the surgical approach to repair. Disagreement existed about the specific diagnostic criteria that should be applied during non-invasive testing and about post-operative care and resumption of exercise. abstract_id: PUBMED:34746530 Bicycle exercise ankle brachial index recovery time as a novel metric for evaluating the hemodynamic significance of external iliac endofibrosis in competitive cyclists. Subtle radiographic findings can increase the challenge of diagnosing external iliac artery endofibrosis. We evaluated a new metric, the bicycle exercise ankle brachial index recovery time (BART), in a cohort of cyclists with symptomatic external iliac artery endofibrosis. BART was defined as the time required in minutes for the ankle brachial index to return to 0.9 after a period of exercise. Surgical correction resulted in an improvement in BART postoperatively (4.5 ± 4.1 vs 9.1 ± 4.3 minutes; P &lt; .001), with improved values correlating with better patient satisfaction. Documentation of the BARTs before and after surgical treatment provides an additional measure of postoperative hemodynamic improvement. abstract_id: PUBMED:36382323 Endofibrosis as a Causative Agent of the Peripheral Artery Disease: A Report of Two Cases for Professional Cyclists. Endofibrosis is a rare disease that predominantly affects athletes and is caused by a gradual occlusion of the (usually iliac) artery due to a thickening of the intima. From our experience, we report in this article two cases with the entity of endofibrosis in females around 30 years old. The first case presented with acute limb ischemia, and the second one was with pain in the leg during exercise. In addition, both cases are professional cyclists. They were eventually diagnosed with endofibrosis and underwent surgical procedures. They are now pursuing their professional career successfully. Last but not least, endofibrosis might be classified as an occupational disease, particularly, in the case of professional athletes or cyclists. abstract_id: PUBMED:32930655 Exercise induced leg pain due to endofibrosis of external iliac artery. Due to monotonous movement patterns, muscular hypertrophy, and increased cardiac output peripheral vasculature of athletes are subject to extreme stresses during athletic performance. Individuals suffering from exercise induced non-traumatic lower leg pain may display underlying vascular pathology such as external iliac artery endofibrosis. Therefore, it is essential in the course of to discriminate vascular from non-vascular findings and prime the correct diagnostic path within the course of clinical examination. In this regard, interdisciplinary thinking and profound knowledge in exercise-associated pathologies of the musculoskeletal, nervous and vascular system is indispensable. Consequently, provocation testing displays an indispensable diagnostic tool and has to be continued until symptoms occur, or complete exhaustion is attained. Finally, selective assessment of conservative and surgical treatment options, as well as its ethical evaluation, are of major importance in order to protect, preserve and promote the health and physical integrity of our patients who are keen to perform. abstract_id: PUBMED:37705566 External Iliac Artery Endofibrosis: A Discussion on Two Unique Cases. Iliac artery endofibrosis (IAE), as the name suggests, involves subintimal fibrosis of the iliac artery. IAE is most commonly associated with competitive athletics, particularly cycling, and remains a rather underappreciated diagnosis in the clinical setting. We present two unique and distinct presentations of IAE in competitive athletes. The first case involves a 38-year-old male cyclist who initially presented with complaints of a bulge at the right groin and acute onset monoplegia and paresthesia associated with exertion of the right lower extremity. This patient was referred to vascular surgery and underwent right common iliac artery and proximal common femoral artery endarterectomy with patch angioplasty and Fogarty embolectomy. Case 2 involves a 50-year-old female triathlete who presented with left lower extremity claudication of a more chronic course, with symptoms beginning approximately four years prior. The pain radiated to her upper thigh and was associated with exertion, restricting her exercise tolerance and return to training. After a diagnosis of IAE was made, she was referred to vascular surgery for a left iliofemoral bypass. abstract_id: PUBMED:22977777 The effect of bedside exercise program on stroke patients with Dysphagia. Objective: To examine the effects of a bedside exercise program on the recovery of swallowing after a stroke. Method: Fifty stroke patients with dysphagia (&lt;6 months post-stroke) were enrolled and classified into two groups, the experimental (25 subjects) and control groups (25 subjects). The control group was treated with conventional swallowing therapy. The experimental group received additional bedside exercise training, which consisted of oral, pharyngeal, laryngeal, and respiratory exercises, 1 hour per day for 2 months, and they were instructed regarding this program through the nursing intervention. All patients were assessed for their swallowing function by Videofluoroscopic Swallowing Study (VFSS), using the New VFSS scale, the level of functional oral intake, the frequency of dysphagia complications, the presence (or not) of tube feeding, the mood state and quality of life before the treatment and at 2 months after the treatment. Results: After 2 months of treatment, the experimental group showed a significant improvement in the swallowing function at the oral phase in the New VFSS Scale than that of the control group (p&lt;0.05). Further, they also showed less depressive mood and better quality of life than the control group. However, there was no significant change in the incidence of dysphagia complication and the presence (or not) of tube feeding between the two groups. Conclusion: Bedside exercise program showed an improvement of swallowing function and exhibited a positive secondary effect, such as mood state and quality of life, on subacute stroke patients with dysphagia. For improvement of rehabilitation results on subacute stroke patients with dysphagia, this study suggests that additional intensive bedside exercise would be necessary. abstract_id: PUBMED:26140352 Endofibrosis: an unusual cause of leg pain in an athlete. Endofibrosis of the external iliac artery can occur as a rare complication of high endurance physical activities, particularly cycling, running, and rowing. Symptoms mimic claudication and typically occur with vigorous exercise and improve with rest. Patients can experience thigh pain, swelling and paresthesias in the affected extremity. The diagnosis of endofibrosis is often delayed given the patient's overall normal appearance and lack of significant medical history. This report outlines the presentation and evaluation of a case of endofibrosis of the external iliac artery in an otherwise healthy 42-year-old healthy woman. Her diagnosis was delayed for two years before appropriate diagnostic testing and subsequent surgical intervention allowed for return to a normal, active lifestyle. abstract_id: PUBMED:27178720 Color Doppler Ultrasound Imaging in the Assessment of Iliac Endofibrosis. Endofibrosis (EF) of the iliac arteries is a flow-limiting condition typically seen in highly trained endurance athletes. Thirty-seven athletes (74 limbs) were referred to our department with suspected EF. All patients had a pre- and postexercise color Doppler ultrasound (CDU) of the iliac arteries. Doppler waveform and peak systolic velocity (PSV) and end-diastolic velocity (EDV) were assessed pre- and postexercise. Endofibrosis was diagnosed with CDU in 24 athletes (29 limbs). Arterial wall and course abnormalities were detected at rest in 20 (67%) symptomatic limbs of athletes with and 4 (22%) symptomatic limbs of athletes without EF. Postexercise abnormal waveforms of the stenotic/damped type were seen in the iliac arteries in all 29 limbs of athletes diagnosed with EF. These waveform changes were accompanied by high PSV (&gt;350 cm/s) and EDV (&gt;150 cm/s), with (n = 10; 34%) or without (n = 19; 66%) the evidence of reduced arterial lumen caliber. Color Doppler ultrasound can be used to detect EF. abstract_id: PUBMED:31780606 Functional magnetic resonance angiography in the diagnosis of iliac artery endofibrosis in an endurance runner. A 45-year-old woman was presented with a 2-year history of left lower limb claudication symptoms occurring only during long-distance running. Multimodal imaging with exercise duplex ultrasonography and magnetic resonance angiogram confirmed the presence of flow-limiting stenoses in the left external iliac artery consistent with a diagnosis of left external iliac artery endofibrosis. She successfully underwent a left external iliac endarterectomy with vein patch repair and returned to full physical activity soon after. A year following full recovery from her original operation, she presented with similar symptoms on the right side and was managed in a similar manner. This report illustrates an unusual case of bilateral iliac artery endofibrosis occurring in an older endurance runner. We also present a novel diagnostic modality of pre-exercise and postexercise magnetic resonance angiography for iliac endofibrosis. Answer: The diagnosis of endofibrosis in athletes, particularly those suffering from exercise-induced arterial endofibrosis (EIAE), can be challenging. The ankle-to-brachial index (ABI) is a simple non-invasive test that can be performed at the bedside and has been used to aid in the diagnosis of EIAE. A retrospective analysis compared ABI measurements at rest, after maximal cycle ergometer exercise (ABI(maxCE)), and after the Ruffier-Dickson (RD) exercise test (ABI(RD)) in athletes with unilateral pain due to histologically proven EIAE and asymptomatic athletes. The study found that ABI after maximal cycle ergometer exercise was more accurate than ABI after an RD test in detecting unilateral EIAE in athletes. An ABI(maxCE) below 0.48 was 100% specific and 80% sensitive for EIAE, whereas a 100% negative predictive value for the RD test was only attained for postexercise ABI values higher than 0.92 (PUBMED:19638821). While the RD test is an easy procedure that can be performed at the bedside without an ergometer, it was not as accurate as ABI measurements following maximal cycle ergometer exercise. Therefore, while a simple bedside exercise like the RD test may be helpful, it may not be sufficient on its own for the diagnosis of endofibrosis in athletes. More accurate diagnostic methods, such as maximal cycle ergometer exercise testing, duplex ultrasound, and functional magnetic resonance angiography, are often necessary to confirm the diagnosis (PUBMED:27209899, PUBMED:27178720, PUBMED:31780606). In summary, while simple bedside exercises can be helpful in the initial assessment of athletes with suspected endofibrosis, they are not typically sufficient for a definitive diagnosis. More comprehensive testing is usually required to accurately diagnose and manage this condition in athletes.
Instruction: Does multicomponent physical exercise with simultaneous cognitive training boost cognitive performance in older adults? Abstracts: abstract_id: PUBMED:33011095 Cognitive Function Improvements Mediate Exercise Intervention Effects on Physical Performance in Acutely Hospitalized Older Adults. Objectives: An individualized, multicomponent exercise program is effective to reverse the functional and cognitive decline that frequently occur during acute care hospitalization in older patients. The aim was to determine whether improvements in cognition mediate improvements in physical function in acutely hospitalized older patients. Design: A single-center, single-blind randomized clinical trial. Setting And Participants: Acute care for elderly (ACE) unit in a tertiary public hospital in Navarre (Spain). Hospitalized patients were randomly assigned to an exercise intervention (n = 185) or usual-care group (n = 185). The intervention consisted of a multicomponent exercise-training program performed during 5 to 7 consecutive days (2 sessions/day). The usual-care group received habitual hospital care, which included physical rehabilitation when needed. Measures: The main endpoints were changes in cognitive function assessed by the Mini-Mental State Examination test and verbal fluency ability, and changes in physical performance by the Short Physical Performance Battery from baseline to discharge. Mediation regression models were generated using ordinary least squares with the PROCESS version 3.2 to determine links between exercise-induced improvements. Results: Mediation regression model analysis indicated a significant and direct beneficial effect of physical exercise on physical function (β = 2.14; P &lt; .0001), and a significant indirect effect of global cognitive function on the direct effect (indirect effect = 0.26; 0.10 to 0.49). Verbal fluency ability also had an indirect effect (0.32; 0.16 to 0.53) on the positive effect of exercise-training on physical function. Conclusions And Implications: Cognitive function enhancements mediate physical function improvements in acutely hospitalized older adults after an individualized, multicomponent exercise-training program. ClinicalTrials.gov registration (NCT02300896). abstract_id: PUBMED:35447987 Acute Effect of a Simultaneous Exercise and Cognitive Task on Executive Functions and Prefrontal Cortex Oxygenation in Healthy Older Adults. The rapid increase in population aging and associated age-related cognitive decline requires identifying innovative and effective methods to prevent it. To manage this socio-economic challenge, physical, cognitive, and combined stimulations are proposed. The superiority of simultaneous training compared to passive control and physical training alone seems to be an efficient method, but very few studies assess the acute effect on executive function. This study aimed to investigate the acute effect of simultaneous physical and cognitive exercise on executive functions in healthy older adults, in comparison with either training alone. Seventeen healthy older adults performed three experimental conditions in randomized order: physical exercise, cognitive exercise, and simultaneous physical and cognitive exercise. The protocol involved a 30 min exercise duration at 60% of theoretical maximal heart rate or 30 min of cognitive exercise or both. Executive functions measured by the Stroop task and pre-frontal cortex oxygenation were assessed before and after the intervention. We found a main effect of time on executive function and all experimental condition seems to improve inhibition and flexibility scores (&lt;0.05). We also found a decrease in cerebral oxygenation (Δ[HbO2]) in both hemispheres after each intervention in all cognitive performance assessed (p &lt; 0.05). Simultaneous physical and cognitive exercise is as effective a method as either physical or cognitive exercise alone for improving executive function. The results of this study may have important clinical repercussions by allowing to optimize the interventions designed to maintain the cognitive health of older adults since simultaneous provide a time-efficient strategy to improve cognitive performance in older adults. abstract_id: PUBMED:28587957 Role of physical exercise on cognitive function in healthy older adults: A systematic review of randomized clinical trials. Cognitive impairment has a harmful effect on quality of life, is associated with functional limitations and disability in older adults. Physical activity (PA) has shown to have beneficial effects on cognition but the results and conclusions of randomized controlled trials (RCTs) are less consistent. Update of knowledge was necessary to examine the effects on cognitive function of new training modalities developed in recent years, such as multicomponent exercise training. Therefore, the purpose of this review was to examine the role of multicomponent training versus aerobic or resistance training alone on cognition in healthy older adults (&gt;65 years) without known cognitive impairment. The mean differences (MD) of the parameters from pre-intervention to post-intervention between groups were pooled using a random-effects model. Twenty-one RCTs published between 2002 and 2016 were included. Multicomponent exercise training may have the most positive effects on cognitive function in older adults. The small number of included studies and the large variability in study populations, study design, exercise protocols, adherence rates and outcome measures complicate the interpretation of the results and contribute to discrepancies within the exercise research literature. abstract_id: PUBMED:37845603 Dual task measures in older adults with and without cognitive impairment: response to simultaneous cognitive-exercise training and minimal clinically important difference estimates. Background: Responsiveness and minimal clinically important difference (MCID) are critical indices to understand whether observed improvement represents a meaningful improvement after intervention. Although simultaneous cognitive-exercise training (SCET; e.g., performing memory tasks while cycling) has been suggested to enhance the cognitive function of older adults, responsiveness and MCID have not been established. Hence, we aimed to estimate responsiveness and MCIDs of two dual task performance involving cognition and hand function in older adults with and without cognitive impairment and to compare the differences in responsiveness and MCIDs of the two dual task performance between older adults with and without cognitive impairment. Methods: A total of 106 older adults completed the Montreal Cognitive Assessment and two dual tasks before and after SCET. One dual task was a combination of Serial Sevens Test and Box and Block Test (BBT), and the other included frequency discrimination and BBT. We used effect size and standardized response mean to indicate responsiveness and used anchor- and distribution-based approaches to estimating MCID ranges. When conducting data analysis, all participants were classified into two cognitive groups, cognitively healthy (Montreal Cognitive Assessment ≥ 26) and cognitively impaired (Montreal Cognitive Assessment &lt; 26) groups, based on the scores of the Montreal Cognitive Assessment before SCET. Results: In the cognitively healthy group, Serial Seven Test performance when tasked with BBT and BBT performance when tasked with Serial Seven Test were responsive to SCET (effect size = 0.18-0.29; standardized response mean = 0.25-0.37). MCIDs of Serial Seven Test performance when tasked with BBT ranged 2.09-2.36, and MCIDs of BBT performance when tasked with Serial Seven Test ranged 3.77-5.85. In the cognitively impaired group, only frequency discrimination performance when tasked with BBT was responsive to SCET (effect size = 0.37; standardized response mean = 0.47). MCIDs of frequency discrimination performance when tasked with BBT ranged 1.47-2.18, and MCIDs of BBT performance when tasked with frequency discrimination ranged 1.13-7.62. Conclusions: Current findings suggest that a change in Serial Seven Test performance when tasked with BBT between 2.09 and 2.36 corrected number (correct responses - incorrect responses) should be considered a meaningful change for older adults who are cognitively healthy, and a change in frequency discrimination performance when tasked with BBT between 1.47 and 2.18 corrected number (correct responses - incorrect responses) should be considered a meaningful change for older adults who are cognitively impaired. Clinical practitioners may use these established MCIDs of dual tasks involving cognition and hand function to interpret changes following SCET for older adults with and without cognitive impairment. Trial Registration: NCT04689776, 30/12/2020. abstract_id: PUBMED:35881301 Effect of simultaneous exercise and cognitive training on executive functions, baroreflex sensitivity, and pre-frontal cortex oxygenation in healthy older adults: a pilot study. Aging is characterized by cognitive decline affecting daily functioning. To manage this socio-economic challenge, several non-pharmacological methods such as physical, cognitive, and combined training are proposed. Although there is an important interest in this subject, the literature is still heterogeneous. The superiority of simultaneous training compared to passive control and physical training alone seems clear but very few studies compared simultaneous training to cognitive training alone. The aim of this pilot study was to investigate the effect of simultaneous exercise and cognitive training on several cognitive domains in healthy older adults, in comparison with either training alone. Thirty-five healthy older adults were randomized into one of three experimental groups: exercise training, cognitive training, and simultaneous exercise and cognitive training. The protocol involved two 30-min sessions per week for 24 weeks. Cognitive performance in several domains, pre-frontal cortex oxygenation, and baroreflex sensitivity were assessed before and after the intervention. All groups improved executive performance, including flexibility or working memory. We found a group by time interaction for inhibition cost (F(2,28) = 6.44; p &lt; 0.01) and baroreflex sensitivity during controlled breathing (F(2,25) = 4.22; p = 0.01), the magnitude of improvement of each variable being associated (r = -0.39; p = 0.03). We also found a decrease in left and right pre-frontal cortex oxygenation in all groups during the trail making test B. A simultaneous exercise and cognitive training are more efficient than either training alone to improve executive function and baroreflex sensitivity. The results of this study may have important clinical repercussions by allowing to optimize the interventions designed to maintain the physical and cognitive health of older adults. abstract_id: PUBMED:33249177 Combined physical and cognitive training for older adults with and without cognitive impairment: A systematic review and network meta-analysis of randomized controlled trials. Combining physical exercise with cognitive training is a popular intervention in dementia prevention trials and guidelines. However, it remains unclear what combination strategies are most beneficial for cognitive and physical outcomes. We aimed to compare the efficacy of the three main types of combination strategies (simultaneous, sequential or exergaming) to either intervention alone or control in older adults. Randomized controlled trials of combined cognitive and physical training were included in multivariate and network meta-analyses. In cognitively healthy older adults and mild cognitive impairment, the effect of any combined intervention relative to control was small and statistically significant for overall cognitive (k = 41, Hedges' g = 0.22, 95 % CI 0.14 to 0.30) and physical function (k = 32, g = 0.25, 95 % CI 0.13 to 0.37). Simultaneous training was the most efficacious approach for cognition, followed by sequential combinations and cognitive training alone, and significantly better than physical exercise. For physical outcomes, simultaneous and sequential training showed comparable efficacy as exercise alone and significantly exceeded all other control conditions. Exergaming ranked low for both outcomes. Our findings suggest that simultaneously and sequentially combined interventions are efficacious for promoting cognitive alongside physical health in older adults, and therefore should be preferred over implementation of single-domain training. abstract_id: PUBMED:34868877 Simultaneous Exercise and Cognitive Training in Virtual Reality Phase 2 Pilot Study: Impact on Brain Health and Cognition in Older Adults. Background: Aerobic exercise and environmental enrichment have been shown to enhance brain function. Virtual reality (VR) is a promising method for combining these activities in a meaningful and ecologically valid way. Objective: The purpose of this Phase 2 pilot study was to calculate relative change and effect sizes to assess the impact of simultaneous exercise and cognitive training in VR on brain health and cognition in older adults. Methods: Twelve cognitively normal older adults (64.7±8.8 years old, 8 female) participated in a 12-week intervention, 3 sessions/week for 25-50 minutes/session at 50-80% HRmax. Participants cycled on a custom-built stationary exercise bike while wearing a VR head-mounted display and navigating novel virtual environments to train spatial memory. Brain and cognitive changes were assessed using MRI imaging and a cognitive battery. Results: Medium effect size (ES) improvements in cerebral flow and brain structure were observed. Pulsatility, a measure of peripheral vascular resistance, decreased 10.5% (ES(d) = 0.47). Total grey matter volume increased 0.73% (ES(r) = 0.38), while thickness of the superior parietal lobule, a region associated with spatial orientation, increased 0.44% (ES(r) = 0.30). Visual memory discrimination related to pattern separation showed a large improvement of 68% (ES(ηp2) = 0.43). Cognitive flexibility (Trail Making Test B) (ES(r) = 0.42) and response inhibition (ES(W) = 0.54) showed medium improvements of 14% and 34%, respectively. Conclusions: Twelve weeks of simultaneous exercise and cognitive training in VR elicits positive changes in brain volume, vascular resistance, memory, and executive function with moderate-to-large effect sizes in our pilot study. abstract_id: PUBMED:25773610 Cognitive plasticity in older adults: effects of cognitive training and physical exercise. Cognitive training, physical activity, and exercise have often been reported to improve cognitive performance in older adults. This paper reviews some seminal and recent studies using these approaches to improve cognition and physical functioning in healthy older adults and in patients suffering from non-neurological chronic medical conditions. Results from cognitive training studies suggest that despite performance improvement in trained tasks, transfer effects appeared very limited. Surprisingly though, computerized dual-task training has been shown to improve balance and postural control in tests of physical functioning, suggesting that broad transfer can sometimes be observed. Physical exercise intervention studies generally found significant and large improvements in physical capacity, in some cognitive domains, and in quality of life. The benefits seem to be equivalent between frail and nonfrail participants. Overall, results reviewed here support the notion that cognitive plasticity for attentional control, as induced by cognitive training or physical activity and exercise, is preserved in late adulthood. Moreover, results of studies with patients at risk of cognitive decline also suggest that cognitive training and exercise interventions are promising nonpharmaceutical tools to help improve cognition in older at-risk individuals. abstract_id: PUBMED:32803232 Synergistic Effects of Cognitive Training and Physical Exercise on Dual-Task Performance in Older Adults. Background: Studies report benefits of physical exercise and cognitive training to enhance cognition in older adults. However, most studies did not compare these interventions to appropriate active controls. Moreover, physical exercise and cognitive training seem to involve different mechanisms of brain plasticity, suggesting a potential synergistic effect on cognition. Objective: This study investigated the synergistic effect of cognitive training and aerobic/resistance physical exercise on dual-task performance in older adults. Intervention effects were compared to active controls for both the cognitive and the exercise domain. Method: Eighty-seven older adults completed one of 4 different combinations of interventions, in which computer lessons was active control for cognitive training and stretching/toning exercise control for aerobic/resistance training: (a) cognitive dual-task training and aerobic/resistance training (COG+/AER+), (b) computer lessons and aerobic/resistance training (COG-/AER+), (c) cognitive dual-task training and stretching/toning exercises (COG+/AER-), and (d) computer lessons and stretching/toning exercises (COG-/AER-). The primary outcome was performance in an untrained transfer dual task. Stepwise backward removal regression analyses were used to predict pre- versus post-test changes in groups that have completed the dual-task training, aerobic/resistance or both interventions. Results: Participation in AER+ did not predict improvement in any dual-task outcomes. Participation in COG+ predicted reduction in dual-task cost and participation in COG+/AER+ predicted reduction in task-set cost. Discussion: Results suggest that the combination of cognitive and physical training protocols exerted a synergistic effect on task-set cost which reflects the cost of maintaining multiple response alternatives, whereas cognitive training specifically improved dual-task cost, which reflects the ability of synchronizing concurrent tasks. abstract_id: PUBMED:36550323 Aerobic exercise and dual-task training combination is the best combination for improving cognitive status, mobility and physical performance in older adults with mild cognitive impairment. Aim: The aim of the present study was to investigate the effects of different exercise combinations on cognitive status, muscle strength of lower extremities, mobility, physical performance, mood and quality of life in older adults with mild cognitive impairment (MCI). Methods: A total of 48 older adults with MCI were randomly assigned to four groups: (1) aerobic plus lower extremity strengthening exercises (AG), (2) dual-task training plus lower extremity strengthening exercises (DG), (3) aerobic exercise, dual-task training and lower extremity strengthening exercises (ADG), (4) solely lower extremity strengthening exercises (CG). Patients' cognitive status, lower extremity muscle strength, balance, mobility, activities-specific balance confidence, functional exercise capacity, physical performance, mood and quality of life were evaluated. Results: In all three intervention groups, there was a significant improvement in cognitive status, balance, mobility, activities-specific balance confidence, physical performance, mood and quality of life (p &lt; 0.05). The most remarkable change was observed in the ADG on cognitive status, mobility and physical performance parameters (p &lt; 0.05). In addition, the most significant improvement in balance parameters was recorded both in the DG and ADG (p &lt; 0.05). Besides, the highest increase in functional exercise capacity was detected both in the AG and ADG (p &lt; 0.05). On the other hand, both exercise combinations were superior to the control group in terms of improving mood and quality of life (p &lt; 0.05). Conclusion: The trial results proved that aerobic exercise and dual-task training is the best combination for improving cognitive status, mobility and physical performance in older adults with MCI. Answer: Yes, multicomponent physical exercise with simultaneous cognitive training has been shown to boost cognitive performance in older adults. Studies have found that combining physical exercise with cognitive training can lead to improvements in cognitive function, including executive functions, memory, and cognitive flexibility (PUBMED:35447987; PUBMED:37845603; PUBMED:35881301; PUBMED:33249177; PUBMED:36550323). For instance, a study found that simultaneous physical and cognitive exercise was as effective as either physical or cognitive exercise alone for improving executive function in healthy older adults (PUBMED:35447987). Another study reported that simultaneous exercise and cognitive training in virtual reality led to positive changes in brain volume, vascular resistance, memory, and executive function with moderate-to-large effect sizes (PUBMED:34868877). Furthermore, a systematic review and network meta-analysis of randomized controlled trials indicated that simultaneous training was the most efficacious approach for cognition, followed by sequential combinations and cognitive training alone, and significantly better than physical exercise alone for cognitive outcomes (PUBMED:33249177). Another study suggested that the combination of cognitive and physical training protocols exerted a synergistic effect on task-set cost, reflecting the cost of maintaining multiple response alternatives, whereas cognitive training specifically improved dual-task cost, reflecting the ability to synchronize concurrent tasks (PUBMED:32803232). Additionally, a study on older adults with mild cognitive impairment found that a combination of aerobic exercise and dual-task training was the best for improving cognitive status, mobility, and physical performance (PUBMED:36550323). Overall, the evidence suggests that multicomponent physical exercise with simultaneous cognitive training can indeed boost cognitive performance in older adults, potentially offering a time-efficient strategy to improve cognitive health in this population.
Instruction: Use of blood transfusion at the end of life: does it have any effects on survival of cancer patients? Abstracts: abstract_id: PUBMED:24935379 Use of blood transfusion at the end of life: does it have any effects on survival of cancer patients? Background: Treatment of anemia is an important issue in the palliative care setting. Blood transfusion is generally used for this purpose in supportive care. However the place of blood transfusion in terminally ill cancer cases is less far established. Objective: We aimed to outline the use of transfusions and to find the impact of blood transfusion on survival in patients with advanced cancer and very near to death. Design: Patients dying in 2010-2011 with advanced cancer were included in the study. We retrospectively collected the data including age, type of cancer, the duration of last hospitalisation, ECOG performance status, Hb levels, transfusion history of erythrocytes and platelets, cause and the amount of transfusion. The anaemic patients who had transfusion at admission were compared with the group who were not transfused. Survival was defined as the time between the admission of last hospitalisation period and death. Results: Three hundred and ninety eight people with solid tumours died in 2010-2011 in our clinic. Ninety percent of the patients had anemia at the time of last hospitalisation. One hundred fifty three patients had erythrocyte transfusion at admission during the last hospitalisation period (38.4%). In the anaemic population the duration of last hospitalisation was longer in patients who had erythrocyte transfusion (15 days vs 8 days, p&lt;0.001). Conclusions: Patients who had blood transfusion at the end of life lived significantly longer than the anaemic patients who were not transfused. This study remarks that blood transfusions should not be withheld from terminal cancer patients in palliative care. abstract_id: PUBMED:32927824 Preference for Aggressive End-of-Life Care among Advanced Cancer Patients in Wuhan, China: A Cross-Sectional Study. Life-sustaining treatments (LSTs) and end-of-life (EOL) care's goal for prolonging one's life are defined as aggressive EOL care among critically ill patients. They have limited effects and add unnecessary financial burden to advanced cancer patients. A questionnaire survey was conducted to collect information on demographics, disease conditions, preference for LSTs, and goal of EOL care among advanced cancer patients of comprehensive grade-A tertiary hospitals in Wuhan, mainland China. Most patients preferred to accept LSTs when they were in a critical condition, including cardiopulmonary resuscitation (89.9%), mechanical ventilation support (85.7%), nasogastric tube feeding (84.1%), blood transfusion (89.8%), general surgery (87.5%), and hemodialysis (85.8%). Most (88%) preferred prolonging life as the goal of EOL care. Logistic regression showed common influencing factors were participants who completed junior high/high school or below and were financially adequate had higher reference for aggressive EOL care. Patients whose physician had accurately disclosed prognosis; however, showed a decrease trend for aggressive EOL care. Most advanced cancer patients preferred to accept aggressive EOL care. Discussions about prognosis disclosure among physicians and patients should be improved. Education about LSTs' limitations and comfort-oriented care's benefits should be promoted among the advanced cancer patients in mainland China. abstract_id: PUBMED:20042443 When blood is not an option: factors affecting survival after the use of a hemoglobin-based oxygen carrier in 54 patients with life-threatening anemia. Background: In consenting Jehovah's Witness patients and others for whom blood is contraindicated or not available, hemoglobin-based oxygen carrier (HBOC)-201 may enable survival in acutely anemic patients while underlying conditions are treated. Methods: Survival factors were identified in a multicenter, unblinded series of severely anemic "compassionate use" patients receiving available standard treatment plus consultant-supported HBOC-201 administration by novice users. Predictors of outcome were sought and compared between survivors and nonsurvivors. A compound variable, hemoglobin-duration deficit product was used to describe the interactive clinical effects of severity and duration of anemia. Mortality,correlations between patient characteristics, and survival to hospital discharge were determined from patient records. Results: Fifty-four patients (median age 50 years) with life-threatening anemia (median hemoglobin concentration at time of request = 4 g/dL) received 60 to 300 g HBOC-201.Twenty-three patients (41.8%) were discharged. Intraoperative blood loss (45%), malignancy(18%), and acute hemolysis (13%) were the prevailing reasons for anemia. Time from onset of anemia (&lt; or = 8 g/dL) to HBOC-201 infusion was shorter for survivors than nonsurvivors (3.2 vs 4.4 days, P = 0.027). Mean hemoglobin levels before HBOC-201 infusion in survivors and nonsurvivors were 4.5 and 3.8 g/dL, respectively (P = 0.120). No serious adverse event was attributed to HBOC-201. The hemoglobin-duration deficit product separated survivors from nonsurvivors. Cancer and renal disease were associated with nonsurvival. Conclusion: Earlier, compared with later, administration by inexperienced users of HBOC-201 to patients with anemia was associated with improved chances of survival of acutely bleeding and hemolyzing patients. Survival was more likely if the duration and magnitude of low hemoglobin was minimized before treatment with HBOC-201. abstract_id: PUBMED:15221916 Use of autologous instead of allogeneic blood transfusion during esophagectomy prolongs disease-free survival among patients with recurrent esophageal cancer. Background And Objectives: A substantial body of evidence suggests that allogeneic blood transfusion increases the rate of recurrence of resected malignancies. The present study was conducted with the aim of understanding better the clinical characteristics of recurrent esophageal cancer and determining whether any survival advantage is conferred by transfusing autologous instead of allogeneic blood during the esophagectomy for the original malignancy. Methods: We retrospectively analyzed 123 patients who received blood transfusion while undergoing esophagectomy for thoracic esophageal cancer between January 1991 and February 1998. We focused on those patients in whom the malignancy recurred. Of them, 23 patients received allogeneic blood and 18 received autologous blood. Compared were the clinico-pathological factors influencing prognosis as well as the disease-free survival periods and the period of survival after recurrence of the cancer. Results: The clinico-pathological factors that influenced prognosis were similar in the two groups. There was also no significant difference in the rate at which the esophageal cancer recurred, or in survival time once it had recurred. On the other hand, disease-free survival prior to recurrence was significantly prolonged in the autologous blood transfusion group. Conclusion: Use of autologous instead of allogeneic blood prolongs disease-free survival of esophageal cancer patients. abstract_id: PUBMED:26914833 Intensity of end-of-life care for patients with myelodysplastic syndromes: Findings from a large national database. Background: As the population ages, the prevalence of myelodysplastic syndromes (MDS) will increase, and many patients with MDS will require end-of-life (EOL) care. Little is known about the intensity of EOL care received by patients with these malignancies. Methods: Using the Surveillance, Epidemiology, and End Results-Medicare database and standard EOL quality measures, we assessed the prevalence and predictors of intensive care unit (ICU) admission in the last 30 days of life, chemotherapy in the last 14 days of life, and hospice enrollment among MDS patients who were 65 years old or older and died between 2006 and 2011. Results: Of 6,955 patients, 28% were admitted to the ICU and 7% received chemotherapy near the EOL, while 49% enrolled in hospice. In multivariable models, patients dependent on red blood cell or platelet transfusions at the EOL were less likely to enroll in hospice (odds ratio [OR], 0.69; 95% confidence interval [CI], 0.61-0.78). Nonwhite patients were less likely to enroll in hospice (OR, 0.77; 95% CI, 0.67-0.89) and more likely to be admitted to the ICU near the EOL (OR, 1.19; 95% CI, 1.03-1.38). Finally, the prevalence of hospice enrollment increased in later years (P &lt; .001). Conclusions: The intensity of EOL care for patients with MDS varies but is potentially suboptimal with respect to the traditional measure of hospice use. The lower odds of enrollment for transfusion-dependent patients suggest that the current hospice model, which largely disallows transfusions, may not be meeting the palliative needs of this population. abstract_id: PUBMED:8760635 Massive transfusion in cancer surgery. A study of the survival of 21 patients In a retrospective study over 5 years, the evolution of 21 patients who received a massive blood transfusion during a carcinological surgery was analyzed. In this type of surgery, the frequency of massive blood transfusion is 0.28% and affects 2.5% of the patients transfused. In half of the cases, surgery was performed to resect an ovarian cancer. The overall death rate, related to the importance of the blood transfusion, is high: 38% (8 patients out of 21). Among the patients who survived (13), the actuarial survival is low: 10 deaths from local or metastatic disease with a mean survival of 12 months, suggesting that the use of massive blood transfusion was related to the tumoral aggressiveness and to the severity of the deepseated malignant disease. abstract_id: PUBMED:25976505 The predicting value of postoperative body temperature on long-term survival in patients with rectal cancer. This study aimed to assess the association between postoperative body temperature and prognosis in patients with rectal cancer. Five hundred and seven patients with stage I to III rectal cancers were enrolled in the current study. Basal body temperature (BBT, measured at 6 am) and maximal body temperature (MBT) on each day after surgery were analyzed retrospectively. Patients were divided into two equal groups according to the median of BBT and MBT at each day. The primary end points were disease-free survival (DFS) and overall survival (OS). The univariate and multivariate analyses showed that patients with low D0-MBT (&lt;37.4 °C) had lower 3-year DFS [adjusted hazard ratio (HR) 1.56 (95 % CI 1.08-2.24, P = 0.017)] as well as OS [adjusted HR 1.72 (95 % CI 1.05-2.82, P = 0.032)] rate as compared to those with high D0-MBT (&gt;37.4 °C). In the subset of 318 patients with T3 stage tumor and the subgroup of 458 patients without blood transfusion as well, low D0-MBT continues to be an independent predictor of DFS/OS with an adjusted HR equal to 1.48 (95 % CI 1.02-2.24, P = 0.046)/1.68 (95 % CI 1.04-2.99, P = 0.048) and 1.45 (95 % CI 1.02-2.13, P = 0.048)/1.59 (95 % CI 1.01-2.74, P = 0.049), respectively. In addition, we found that patients have higher risk of 1-year recurrence if those were exhibiting low preoperative BBT (&lt;36.6 °C) (17 vs. 10 %, P = 0.034). Low body temperature (D0-MBT &lt; 37.4 °C) after surgery was an independent predictor of poor survival outcomes in patients with rectal cancer. abstract_id: PUBMED:24558067 Adverse tumor biology associated with mesenterico-portal vein resection influences survival in patients with pancreatic ductal adenocarcinoma. Background: Although pancreatoduodenectomy (PD) with mesenterico-portal vein resection (VR) can be performed safely in patients with resectable pancreatic ductal adenocarcinoma (PDAC), the impact of this approach on long-term survival is controversial. Patients And Methods: Analyses of a prospectively collected database revealed 122 consecutive patients with PDAC who underwent PD with (PD+VR) or without (PD-VR) VR between January 2004 and May 2012. Clinical data, operative results, and survival outcomes were analysed. Results: Sixty-four (53 %) patients underwent PD+VR. The majority (84 %) of the venous reconstructions were performed with a primary end-to-end anastomosis. Demographic and postoperative outcomes were similar between the two groups. American Society of Anesthesiologists (ASA) score, duration of operation, intraoperative blood loss, and blood transfusion requirement were significantly greater in the PD+VR group compared with the PD-VR group. Furthermore, the tumor size was larger, and the rates of periuncinate neural invasion and positive resection margin were higher in the PD+VR group compared with the PD-VR group. Histological venous involvement occurred in 47 of 62 (76 %) patients in the PD+VR group. At a median follow-up of 29 months, the median overall survival (OS) was 18 months for the PD+VR group, and 31 months for the PD-VR group (p = 0.016). ASA score, lymph node metastasis, neurovascular invasion, and tumor differentiation were predictive of survival. The need for VR in itself was not prognostic of survival. Conclusions: PD with VR has similar morbidity but worse OS compared with a PD-VR. Although VR is not predictive of survival, tumors requiring a PD+VR have more adverse biological features. abstract_id: PUBMED:25669088 Potential influence of pre and intraoperative factors on postoperative recurrence and survival in patients undergoing radical resection of esophageal cancer Background: Several papers report that preoperative and intraoperative factors influence postoperative recurrence of malignancy. The purpose of this study is to define which factors affect the recurrence and survival of patients after the surgical resection of the esophageal cancer. Methods: Ninety five patients underwent complete elective resection of the esophageal cancer. All patients were without preoperative chemotherapy and radiotherapy. We extracted 12 parameters, and cox regression analyses were used to assess the relation of 12 factors and the outcomes of patients. The 12 factors included preoperative factors (age, sex, weight stage of cancer, ASA PS, serum creatinine and total bilirubin), intraoperative variables (duration of anesthesia, blood transfusion, fluid balance, hypotensive episodes) and surgical Apgar score. Hypotensive episodes were defined as the systolic pressure lower than 70 mmHg occurring from the introduction of anesthesia to the end of anesthesia. Results: Hypotensive episodes and blood transfusion significantly affected 1 year cancer specific survival. Stage of cancer and blood transfusion affected 5 year cancer specific survival. Conclusions: We found that intraoperative hypotension affected 1-year cancer specific survival; however, the stage of cancer affected long-term survival instead of intraoperative factors. A low 5-year survival rate in esophageal cancer may have affected this result. abstract_id: PUBMED:18709505 Postoperative fever and survival in patients after open resection for colorectal cancer: a long-term follow-up study of 2,311 prospectively enrolled patients. Purpose: Two reports on the impact of postoperative fever on survival after surgery in patients with colorectal cancer yielded contradictory results. Our study examined possible associations between postoperative fever and long-term survival of patients who underwent resection of colorectal cancer. Methods: We investigated 2,311 consecutive patients who underwent elective open colorectal resection for primary colorectal cancer at a single institution between 1995 and 1998. The primary end points were cancer-specific and overall survival. Multiple covariate impact of risk factors on survival rates was assessed by Cox regression analysis. Results: A total of 252 patients (12.2 percent) developed postoperative fever. The most important independent risk factor for postoperative fever was postoperative morbidity (odds ratio, 4.9; 95 percent confidence interval, 3.7-6.6) followed by blood transfusion (1.7; 1.2-2.2), Stage IV disease (1.6; 1.1-2.2), male gender (1.4; 1.0-1.9), and rectal cancer (1.4; 1.0-1.8). Cox regression modeling indicated that stage, histology, tumor location, and blood transfusion were statistically significant covariate predictors for cancer-specific survival. Postoperative fever was not independently associated with cancer-specific or overall survival. Conclusions: This study did not support the hypothesis that postoperative fever is an independent prognostic factor after colorectal resection for primary colorectal cancer. Answer: The use of blood transfusion at the end of life in cancer patients appears to have varying effects on survival, depending on the context and the specific patient population. A study that included patients dying with advanced cancer found that those who received erythrocyte transfusions at admission during their last hospitalization period lived significantly longer than anemic patients who were not transfused, suggesting that blood transfusions should not be withheld from terminal cancer patients in palliative care (PUBMED:24935379). This indicates a potential positive effect on survival for patients receiving transfusions at the end of life. However, the overall picture is complex. For instance, in the context of esophageal cancer, the use of autologous blood transfusion during esophagectomy was associated with prolonged disease-free survival compared to allogeneic blood transfusion, which suggests that the type of blood transfusion might influence long-term outcomes (PUBMED:15221916). In contrast, a study on massive transfusion during cancer surgery reported a high death rate related to the extent of blood transfusion, with a low actuarial survival among those who survived the initial surgery, indicating that massive transfusion might be related to tumor aggressiveness and severity of the disease (PUBMED:8760635). Furthermore, in the case of myelodysplastic syndromes (MDS), patients dependent on red blood cell or platelet transfusions at the end of life were less likely to enroll in hospice care, which could suggest that the need for transfusions may affect the type of end-of-life care received (PUBMED:26914833). Additionally, preferences for aggressive end-of-life care, including blood transfusions, were common among advanced cancer patients in Wuhan, China, although those who had their prognosis accurately disclosed by their physician showed a decreased trend for aggressive end-of-life care (PUBMED:32927824). In summary, while blood transfusions at the end of life can extend survival in certain cancer patients, the impact on survival may be influenced by factors such as the type of transfusion, the underlying aggressiveness of the disease, and the preferences and prognostic understanding of the patients.
Instruction: Nuclear morphometry in columnar cell lesions of the breast: is it useful? Abstracts: abstract_id: PUBMED:23132980 Columnar cell lesions of the breast - significant or not? Unlabelled: Columnar cell lesions (CCL) of the breast range from the innocuous CAPSS (Columnar alteration with prominent apical snouts and secretions) to DCIS. Their precancerous potential poses diagnostic and therapeutic riddles, especially for those with mammographic microcalcifications. Aim: To document the incidence of columnar cell lesions in the breast, to study their morphologic spectrum and to study the significance of their association with benign and malignant breast lesions. Material And Methods: We studies 100 consecutive breast specimens. These specimens included biopsies, lumpectomies and mastectomies. Columnar cell lesions were classified as per Schnitt criteria. Associated lesions in the adjacent tissue were also studied. Results: Of the 100 samples (43 malignant and 57 benign), 37 cases showed CCL: CAPSS 23, columnar cell hyperplasia-11 and Columnar cell hyperplasia with atypia-3. The commonest associated lesion was fibrocystic change (n=54). Their incidence in benign and malignant lesions was 42% and 30% respectively, with no statistical significance in the difference. Lesions with atypia were seen only in malignant cases. Conclusion: Columnar cell lesions are common, and often overlooked. These lesions are seen in association with both benign and malignant breast lesions. It is only the presence of atypia in CCL that warrants a special mention and necessitates follow up, and resection if indicated. abstract_id: PUBMED:37378137 Significance of Nuclear Morphometry in Breast Lesions: A Cross-Sectional Study. Background Fine-needle aspiration cytology (FNAC) is one of the reliable methods in diagnosing breast cancers. Morphometric studies are done in benign and malignant neoplasms of various organs by using software, which measures cellular, cytoplasmic, and nuclear parameters. Nuclear parameters define the behavior of the neoplasm. This study aims to evaluate nuclear morphometry parameters in aspirated smears of breast lesions and determine the association between cytological findings with nuclear morphometry parameters. Methodology It's a retrospective cytology study from July 2020 to June 2022 conducted at a tertiary health care center in Kolar, Karnataka, India. The FNAC smears of breast mass were analyzed cytologically and were subjected to nuclear morphometry study. Nuclear parameters such as nuclear area, nuclear perimeter, nuclear Feret diameter, minimum Feret, and shape factor were captured in Zen software (Zeiss, Oberkochen, Germany) and ImageJ software (National Institutes of Health, Bethesda, MD, USA; Laboratory for Optical and Computational Instrumentation [LOCI], University of Wisconsin-Madison, Madison, WI, USA). The association between nuclear morphometric findings and cytological findings was noted. A descriptive statistical analysis was done. Results Sixty cases of mass in the breast were considered for the study of which 37 cases were benign and 23 were malignant. Nuclear morphometry parameters such as nuclear area, nuclear perimeter, nuclear Feret diameter, minimum Feret, and shape factor for benign breast lesions were 25.16 ± 3.2 µm2, 21.58 ± 1.89 µm, 6.5 ± 0.94 µm, 4.87 ± 0.50 µm, and 0.92 ± 0.02, respectively, and for malignant breast cases were 46.57 ± 12.24 µm2, 27.53 ± 3.26 µm, 10.08 ± 1.18 µm, 6.49 ± 0.88 µm, and 0.93 ± 0.01, respectively. The association of all nuclear parameters between benign and malignant lesions was statistically significant (P = 0.001). Conclusions Nuclear morphometric study in breast lesions is a concept that supplements FNAC findings in differentiating benign from malignant lesions. abstract_id: PUBMED:16603646 Nuclear morphometry in columnar cell lesions of the breast: is it useful? Aims: To evaluate the nuclear morphometric features of breast columnar cell lesions (CCLs) observed on mammotome core biopsies, to determine if there are significant measurable differences between those with atypia and those without. Correlation with follow-up open excision specimens was made. Methods: Mammotome core biopsies performed on patients that contained CCLs were derived from the departmental case files. Histological material was reviewed and foci of CCLs demarcated for nuclear morphometric assessment, which was accomplished using an imaging system. Nuclear parameters studied were nuclear area and perimeter, circularity factor and feret's diameter. Statistical analysis used the GraphPad Prism software, with p&lt;0.05 indicating significance. Results: On examination of core biopsies of 40 patients with CCLs, 8 lesions were benign, 4 showed atypical lobular hyperplasia, 8 showed CCLs with nuclear atypia, 19 disclosed atypical ductal hyperplasia (ADH) and 1 showed ductal carcinoma in situ (DCIS). The nuclear area, perimeter and feret's diameter of CCLs with atypia were significantly greater than those without (p = 0.04, 0.03 and 0.019, respectively), whereas no difference was observed in the circularity factor. Follow-up open excision biopsy specimens in 24 patients showed upgrading to DCIS in 40% of cases diagnosed initially with ADH on core biopsy compared with 20% of CCLs with atypia. Conclusions: Nuclear morphometry in CCLs confirms nuclear size as the key parameter in the assessment of nuclear atypia. Whether it can be potentially used as an adjunctive tool depends on the establishment of appropriate cut-offs. abstract_id: PUBMED:27026270 Does flat epithelial atypia have rounder nuclei than columnar cell change/hyperplasia? A morphometric approach to columnar cell lesions of the breast. Columnar cell lesions of the breast encompass columnar cell change/hyperplasia (CCC/CCH) and flat epithelial atypia (FEA). These have attracted researchers because emerging data suggest that FEA may represent the earliest histologically detectable non-obligate precursor of breast cancer. However, it is occasionally difficult to distinguish FEA from CCC/CCH because of similar histology. Although the nuclei of FEA are frequently described as relatively round compared with those of CCC/CCH, there are few morphometric studies to support this statement. The aim of this study was to provide objective data as to the nuclear shape in columnar cell lesions. As a shape descriptor, we adopted ellipticity that is defined by the formula 2b/2a, where a is the length of the long axis of the ellipse and b is the length of the short axis. Contrary to circularity, ellipticity reflects the overall configuration of an ellipse irrespective of surface irregularity. Our image analysis included generating whole slide images, extracting glandular cell nuclei, measuring nuclear ellipticity, and superimposing graded colors based on execution of results on the captured images. A total of 7917 nuclei extracted from 22 FEA images and 5010 nuclei extracted from 13 CCC/CCH images were analyzed. There was a significant difference in nuclear roundness between FEA and CCC/CCH with mean ellipticity values of 0.723 and 0.679, respectively (p &lt; 0.001, Welch's t test). Furthermore, FEA with malignancy had significantly rounder nuclei than FEA without malignancy (p &lt; 0.001). Our preliminary results suggest that nuclear ellipticity is a key parameter in reproducibly classifying columnar cell lesions of the breast. abstract_id: PUBMED:23776836 Significance of nuclear morphometry in benign and malignant breast aspirates. Background: Breast carcinoma is one of the most common cancers occurring in the female population world-wide. Normal cells gradually transform to form the cancer cells through several stages. Nuclear changes occurring during these transformational steps need to be assessed objectively. Hence nuclear morphometry can be used as a diagnostic tool. Aim: To compare the nuclear morphometric parameters of benign and malignant breast aspirates. Study Design: Cytology was used to categorize aspirates from the breast lumps in to malignant (30 cases), and benign (30 cases). Nuclear parameters were calculated using the Image J 1.44C morphometric software. Several nuclear size parameters were analyzed. Results: The nuclear area, perimeter, diameter, compactness, and concave points were found to be statistically significant (P &lt; 0.05) parameters in differentiating benign, and malignant aspirates. Conclusion: Nuclear morphometry was thus, a useful objective tool in the differentiating benign, and malignant breast lesions. abstract_id: PUBMED:29118474 Cytological Evaluation of Thyroid Lesions by Nuclear Morphology and Nuclear Morphometry. Introduction: Fine needle aspiration (FNA) of the thyroid gland is an effective diagnostic method. The Bethesda system for reporting thyroid cytopathology classifies them into six categories and gives implied risk for malignancy and management protocol in each category. Though the system gives specific criteria, diagnostic dilemma still exists. Using nuclear morphometry, we can quantify the number of parameters, such as those related to nuclear size and shape. The evaluation of nuclear morphometry is not well established in thyroid cytology. Objective: To classify thyroid lesions on fine needle aspiration cytology (FNAC) using Bethesda system and to evaluate the significance of nuclear parameters in improving the prediction of thyroid malignancy. Materials And Methods: In the present study, 120 FNAC cases of thyroid lesions with histological diagnosis were included. Computerized nuclear morphometry was done on 81 cases which had confirmed cytohistological correlation, using Aperio computer software. One hundred nuclei from each case were outlined and eight nuclear parameters were analyzed. Results: In the present study, thyroid lesions were common in female with M: F ratio of 1:5 and most commonly in 40-60 yrs. Under Bethesda system, 73 (60.83%) were category II; 14 (11.6%) were category III, 3 (2.5%) were category IV, 8 (6.6%) were category V, and 22 (18.3%) were category VI, which were malignant on histopathological correlation. Sensitivity, specificity, and diagnostic accuracy of Bethesda reporting system are 62.5, 84.38, and 74.16%, respectively. Minimal nuclear diameter, maximal nuclear diameter, nuclear perimeter, and nuclear area were higher in malignant group compared to nonneoplastic and benign group. Conclusion: The Bethesda system is a useful standardized system of reporting thyroid cytopathology. It gives implied risk of malignancy. Nuclear morphometry by computerized image analysis can be utilized as an additional diagnostic tool. abstract_id: PUBMED:36005185 Are Columnar Cell Lesions the Earliest Non-Obligate Precursor in the Low-Grade Breast Neoplasia Pathway? Columnar cell lesions (CCLs) of the breast comprise a spectrum of morphologic alterations of the terminal duct lobular unit involving variably dilated and enlarged acini lined by columnar epithelial cells. The World Health Organization currently classifies CCLs without atypia as columnar cell change (CCC) and columnar cell hyperplasia (CCH), whereas flat epithelial atypia (FEA) is a unifying term encompassing both CCC and CCH with cytologic atypia. CCLs have been increasingly recognized in stereotactic core needle biopsies (CNBs) performed for the assessment of calcifications. CCLs are believed to represent the earliest non-obligate precursor of low-grade invasive breast carcinomas as they share molecular alterations and often coexist with entities in the low-grade breast neoplasia pathway. Despite this association, however, the risk of progression of CCLs to invasive breast carcinoma appears low and may not exceed that of concurrent proliferative lesions. As the reported upgrade rates of pure CCL/FEA when identified as the most advanced high-risk lesion on CNB vary widely, the management of FEA diagnosed on CNB remains controversial. This review will include a historical overview of CCLs and will examine histologic diagnostic criteria, molecular alterations, prognosis and issues related to upgrade rates and clinical management. abstract_id: PUBMED:28182069 Role of morphometry in the cytological differentiation of benign and malignant thyroid lesions. Context: Thyroid nodules represent a common problem, with an estimated prevalence of 4-7%. Although fine needle aspiration cytology (FNAC) has been accepted as a first line diagnostic test, the rate of false negative reports of malignancy is still high. Nuclear morphometry is the measurement of nuclear parameters by image analysis. Image analysis can merge the advantages of morphologic interpretation with those of quantitative data. Aims: To evaluate the nuclear morphometric parameters in fine needle aspirates of thyroid lesions and to study its role in differentiating benign from malignant thyroid lesions. Material And Methods: The study included 19 benign and 16 malignant thyroid lesions. Image analysis was performed on Giemsa-stained FNAC slides by Nikon NIS-Elements Advanced Research software (Version 4.00). Nuclear morphometric parameters analyzed included nuclear size, shape, texture, and density parameters. Statistical Analysis: Normally distributed continuous variables were compared using the unpaired t-test for two groups and analysis of variance was used for three or more groups. Tukey or Tamhane's T2 multiple comparison test was used to assess the differences between the individual groups. Categorical variables were analyzed using the chi square test. Results And Conclusion: Five out of the six nuclear size parameters as well as all the texture and density parameters studied were significant in distinguishing between benign and malignant thyroid lesions (P &lt; 0.05). Cut-off values were derived to differentiate between benign and malignant cases. abstract_id: PUBMED:24267518 Do columnar cell lesions exist in the male breast? Aims: In females, columnar cell lesions (CCLs) have been recognized as putative precursor lesions of low-grade breast cancer, but their role in male breast carcinogenesis is as yet unclear. Methods And Results: We reviewed surgical resections from males with breast cancer (n = 89), gynaecomastia (n = 20) and normal breast specimens from autopsies (n = 5) for the presence of CCL. In addition, we performed immunohistochemistry for cytokeratin 5/6 (CK5/6), CK14 and oestrogen receptor alpha (ER). In 19 of 89 resections (two DCIS cases and 17 invasive carcinoma), some individual ducts were found to contain cells with snouts on the luminal border but lacking further typical columnar cell lesion features. We mainly found three-layered ductal epithelium, characteristic for gynaecomastia and confirmed by immunohistochemistry. Moreover, we found a few ducts in male breast cancer sections that were clonally negative for basal cytokeratins. Conclusion: We found no lesions with convincing CCL morphology at the periphery of invasive male breast cancers, in gynaecomastia or in normal male breast specimens. Although we cannot completely exclude the existence of CCLs in the male breast, these lesions seem to be very uncommon and are therefore unlikely to play a major role in male breast carcinogenesis. abstract_id: PUBMED:19918332 Columnar cell lesions and pseudoangiomatous hyperplasia like stroma: is there an epithelial-stromal interaction? The significance of association between cancer and its microenvironment has been increasingly recognized. It has been shown in animal models that interaction between neoplastic epithelial cells and adjacent stroma can modulate tumor behavior. Carcinoma associated stromal cells can transform normal epithelial cells into neoplastic cells. In breast, columnar cell lesions are non-obligate precursors of low grade ductal carcinoma in situ. Columnar cell lesions can be seen intimately associated with PASH-like-stroma, a lesion we termed as CCPLS. Our aim is to investigate epithelial-stromal interactions in CCPLS and compare them to PASH without columnar cell lesions in breast core needle biopsies. Normal terminal duct lobular unit (TDLU) epithelium was seen in association with columnar cell lesions as well as PASH. Eight (8) cases of each category were examined by a panel of immunostains: CD117 (C-kit), CD34, CD105, bFGF, AR, ER-beta, MIB-1. We observed a markedly decreased expression of c-kit in columnar cell lesions compared to TDLU-epithelium. CD105 showed a quantitative increase in activated vessels in CCPLS compared to PASH. A subset of CCPLS and PASH were androgen receptor positive. A strong nuclear positivity for ER-beta is observed in the epithelium and stroma of all CCPLS cases. We conclude that (1) activated blood vessels predominate in CCPLS; (2) A molecular alteration is signified by c-kit loss in columnar cell lesions; (3) ER-beta and androgen receptor positivity indicate CCPLS are hormonally responsive lesions. Our study suggests an intimate vascular and hormone dependent epithelial-stromal interaction exists in CCPLS lesions. Answer: Nuclear morphometry, which involves the measurement of nuclear parameters such as size, shape, and density, has been studied in columnar cell lesions (CCLs) of the breast to determine its utility in differentiating between benign and malignant lesions. Several studies have investigated the significance of nuclear morphometry in breast lesions, including CCLs. One study evaluated the nuclear morphometric features of breast CCLs observed on mammotome core biopsies to determine if there were significant measurable differences between those with atypia and those without. The study found that the nuclear area, perimeter, and Feret's diameter of CCLs with atypia were significantly greater than those without, suggesting that nuclear size is a key parameter in the assessment of nuclear atypia (PUBMED:16603646). Another study aimed to provide objective data on the nuclear shape in columnar cell lesions, adopting ellipticity as a shape descriptor. The study found a significant difference in nuclear roundness between flat epithelial atypia (FEA) and columnar cell change/hyperplasia (CCC/CCH), with FEA having rounder nuclei. This suggests that nuclear ellipticity is a key parameter in reproducibly classifying CCLs of the breast (PUBMED:27026270). Furthermore, a study that compared the nuclear morphometric parameters of benign and malignant breast aspirates found that several nuclear size parameters, including nuclear area, perimeter, and diameter, were statistically significant in differentiating between the two, indicating that nuclear morphometry is a useful objective tool in this differentiation (PUBMED:23776836). In conclusion, nuclear morphometry appears to be a useful adjunctive tool in the evaluation of columnar cell lesions of the breast. It can help in distinguishing between benign and malignant lesions by providing objective measurements of nuclear parameters, which are indicative of the presence of atypia and potentially malignant changes. However, the establishment of appropriate cut-offs and further validation in clinical practice is necessary to determine its full utility (PUBMED:16603646; PUBMED:27026270; PUBMED:23776836).
Instruction: Early nonbronchoscopic bronchoalveolar lavage: predictor of ventilator-associated pneumonia? Abstracts: abstract_id: PUBMED:29793870 Cell Count Analysis from Nonbronchoscopic Bronchoalveolar Lavage in Preterm Infants. Objectives: To establish the reference values, diagnostic accuracy, and effect of various factors on cell count in intubated preterm neonates subjected to nonbronchoscopic bronchoalveolar lavage. Study Design: This prospective, cross-sectional, blinded study included preterm neonates ventilated for any reason who underwent nonbronchoscopic bronchoalveolar lavage if they had not previously received postnatal antibiotics or steroids. Lavage was performed before surfactant replacement, if any. A gentle ventilation policy was applied. Pneumonia was diagnosed using clinical criteria, without considering cell count. Investigators performing cell counts were blinded to the clinical data. Results: There were 276 neonates enrolled; 36 had congenital or ventilator-associated pneumonia. In the 240 noninfected babies, median neutrophil count increased significantly after the first 2 days of ventilation (day 1, 2 cells per field [IQR, 0.0-9.5 cells per field]; day 2, 2 cells per field [IQR, 0-15 cells per field]; day 3, 20 cells per field [IQR, 2-99 cells per field]; day 4, 15 cells per field [IQR, 2-96 cells per field]; P &lt; .0001). No significant difference was seen over time in infected babies. Multivariate analysis indicated pneumonia (standardized β = 0.134; P = .033) and the time spent under mechanical ventilation before nonbronchoscopic bronchoalveolar lavage as factors significantly influencing neutrophil count (standardized β = 0.143; P = .027). Neutrophil count was correlated with the duration of ventilation (rho = 0.28; P &lt;.001). Neutrophil counts were higher in infected (24 cells/field [IQR, 5-78] cells/field) than in noninfected babies (4 cells/field [IQR, 1-24 cells/field]; P &lt;.001) and had an moderate reliability for pneumonia within the first 2 days of ventilation (area under the curve, 0.745; (95% CI, 0.672-0.810; P = .002). Conclusions: We provide reference values for airway neutrophil counts in ventilated preterm neonates. Bronchoalveolar lavage neutrophils significantly increase after 2 days of ventilation. Neutrophil count has moderate accuracy to diagnose pneumonia, but only within the first 2 days of ventilation. abstract_id: PUBMED:16698740 Diagnosis of ventilator-associated pneumonia: focus on nonbronchoscopic techniques (nonbronchoscopic bronchoalveolar lavage, including mini-BAL, blinded protected specimen brush, and blinded bronchial sampling) and endotracheal aspirates. The ideal diagnostic approach for ventilator-associated pneumonia currently is based on invasive procedures to obtain respiratory tract cultures. Given the lack of consensus and relatively poor acceptance of full bronchoscopic bronchoalveolar lavage (BAL) and protected specimen brush (PSB), less invasive procedures have been developed. We review the nonbronchoscopic procedures (nonbronchoscopic bronchoalveolar lavage, including mini-BAL, blinded protected specimen, and blinded bronchial sampling) and endotracheal aspiration. We provide a critique of the methods used, the types of catheters inserted, and the sample collection methods. Most studies were flawed in that antibiotic use before initiation of the procedure was not controlled. The variability of both the methods and the criteria for the gold standard in the numerous investigations show that these procedures are neither standardized nor proven to be accurate and often did not improve management. Pending future studies, use of endotracheal aspirates without the use of quantitation seems to be a reasonable approach for clinicians who are not committed to an invasive procedure. abstract_id: PUBMED:23354237 Early nonbronchoscopic bronchoalveolar lavage: predictor of ventilator-associated pneumonia? Background: Ventilator-associated pneumonia is a problem in trauma and emergency general surgery patients. Our hospital-acquired infection prevention committee approved the use of early nonbronchoscopic bronchoalveolar lavage (screening-BAL) in the surgical intensive care unit (SICU) to identify ventilated patients with bronchiolar bacteria before 48 hours. We reviewed the results of this quality improvement initiative. Methods: All ventilated patients in the SICU (March 2011 to June 2012) underwent a screening-BAL 36 hours to 48 hours after intubation; quantitative culture results (&gt;5 × 10(4) colony-forming unit per milliliter) were used to identify positive specimens. Clinical pneumonia was defined as clinical pulmonary infection score greater than 6 with a subsequent positive diagnostic-BAL result. Sequential organ failure assessment scores were averaged for the first 48 hours in the SICU. Continuous and dichotomous data were compared, and a multivariate regression analysis was performed on the screening-BAL and pneumonia results. Results: Screening-BALs were performed in 150 patients (99 trauma and 51 emergency general surgery patients), 72 of these specimens had positive findings. Fifty-three clinical pneumonias were diagnosed, and 45 (positive predictive value, 0.85) identified the same organism as the screening-BAL. Clinical pneumonia developed in eight patients with a negative screening-BAL (negative predictive value, 0.85). Antibiotic therapy at the time of the screening-BAL was associated with a negative screen (odds ratio, 0.44; p = 0.026). Pneumonia developed on median postintubation Day 4 (2-15 days) in patients with a positive screening-BAL results as compared with day 7.5 in the patients with a negative screening-BAL results (3.5-15 days; p = 0.007). Field intubation is an independent risk factor (odds ratio, 3.5; p = 0.004). Conclusion: Positive screening-BAL results in trauma and emergency general surgery patients are associated with the development of ventilator-associated pneumonia by the same organism and may play a role in identifying patients at risk for pneumonia. Further studies must be conducted to evaluate the role of screening-BAL in this patient population. Level Of Evidence: Diagnostic/prognostic study, level III. abstract_id: PUBMED:11292122 Ventilator associated pneumonia: quality of nonbronchoscopic bronchoalveolar lavage sample affects diagnostic yield. The importance of predefined criteria for acceptable samples of respiratory therapists obtained lower respiratory samples were studied, using a nonbronchoscopic bronchoalveolar lavage (BAL) protocol for ventilated patients in the intensive care unit. Therapists were instructed and asked to follow guidelines for obtaining samples. Over one year, 219 samples were obtained by respiratory therapists. Of these, 115 were considered to be adequate samples using the following criteria: 60 mL of instilled volume, at least 5 mL of fluid aspirated, specimens sent for semiquantitative culture, a differential cell count of &lt;5% bronchial epithelial cells. Overall, 52 samples grew one or more pathogen at &gt;10,000 colony forming units (cfu).mL(-1) of BAL. The most common pathogen was Staphylococcus aureus (S. aureus) (11 samples), although Gram-negative bacilli were the single pathogen in 21 specimens. Of the 115 acceptable samples, 40 (35%) grew &gt; or =1 pathogen at &gt;10,000 cfu.mL(-1). For the 80 not acceptable samples which were sent for appropriate culture, 12 (15%) grew &gt;10,000 cfu.mL(-1) BAL. This difference was significant (Chi-squared=9.44, p&lt;0.01). Nonbronchoscopic bronchoalveolar lavage can be safely performed by respiratory therapists'. The authors recommend that a protocol be used to evaluate the quality of a bronchoalveolar lavage sample in the same manner sputum samples are screened prior to interpretation. abstract_id: PUBMED:16374157 Role of nonbronchoscopic lavage for investigating alveolar inflammation and permeability in acute respiratory distress syndrome. Objective: Nonbronchoscopic bronchoalveolar lavage is often used as an alternative to bronchoscopic bronchoalveolar lavage in the diagnosis of ventilator-associated pneumonia. We have previously reported an improved safety profile for nonbronchoscopic lavage in patients with lung injury, suggesting that this may be a better technique in this patient group. The objective of this study was to determine whether nonbronchoscopic lavage could be used as an alternative to bronchoscopic lavage for the assessment of alveolar permeability and inflammation in patients at risk for acute respiratory distress syndrome (ARDS) or with ARDS. Design: Prospective randomized crossover trial. Patients: Intubated patients with ARDS or at risk of ARDS. Interventions: Bronchoscopic and nonbronchoscopic lavage in the same patient, 15 mins apart. Measurements And Main Results: Twenty-one patients with ARDS and 20 patients at risk of ARDS were recruited and underwent nonbronchoscopic and bronchoscopic lavage in randomized order. Despite similar volumes of lavage fluid, nonbronchoscopic lavage had fewer cells and an increased ratio of bronchial epithelial cells to macrophages. Although average concentrations of myeloperoxidase and total protein, the protein permeability index, and the epithelial-lining fluid volume were similar with the two techniques and demonstrated moderate linear associations, Bland and Altman analysis revealed poor comparability, with substantial side-to-side variability and wide 95% limits of agreement. Furthermore, unlike bronchoscopic lavage, nonbronchoscopic lavage was unable to differentiate between patients with ARDS and those at risk of ARDS. Conclusions: Nonbronchoscopic lavage is not comparable to bronchoscopic lavage and as such cannot be used as an alternative to bronchoscopic lavage for assessing alveolar inflammation in patients with ARDS. abstract_id: PUBMED:12840790 Nonbronchoscopic evaluation of ventilator-associated pneumonia. The method to diagnose ventilator-associated pneumonia (VAP) ranges from physical examination to invasive procedures. In this section, the use of nonbronchoscopic techniques are reviewed. These include the traditional methods of reviewing the patient's clinical presentation and chest roentgenogram. Although these techniques are not specific, they are useful in helping the clinician in determining when to proceed with more tests to diagnose pneumonia. Methods for sampling lower respiratory tract secretions in an intubated patient include endotracheal aspirates as well as deeper samples obtained by protected brush or a nonbronchoscopic bronchoalveolar lavage catheter. Respiratory cultures can be useful in determining what is causing the pneumonia. However, the distinction between colonization and true infection is best understood if the samples are handled so that semiquantitative culture results can be reported. By using this approach, the physician should be able to better direct therapy and reduce mortality from pneumonia in the intensive care unit (ICU). abstract_id: PUBMED:17172064 Nonbronchoscopic bronchoalveolar lavage for diagnosing ventilator-associated pneumonia in newborns. The appropriate treatment of ventilator-associated pneumonia (VAP) must be based on accurate diagnosis, which can be done by microbiological examination of the samples obtained from the respiratory tract by nonbronchoscopic bronchoalveolar lavages (NB-BAL). This study was designed to determine the effectiveness of NB-BAL in diagnosing VAP in newborns. Two hundred and seven NB-BAL samples were obtained from 145 intubated neonates for microbiologic and cytologic evaluation of the distal airway. The NB-BAL samples were processed for microscopic quantification of the polymorphonuclear cells (PMN) containing intracellular bacteria (ICB) and quantitative culture (positive threshold, 10(5) cfu/ml). VAP was defined as a new, progressive, or persistent (&gt;24 hrs) infiltrate on the chest radiograph, with two or more of the following criteria: (a) macroscopically purulent tracheal secretions, (b) fever or hypothermia, (c) leukocytosis or leukopenia, and (d) worsening of respiratory status with a Pa O2/F IO2 ratio of &lt;240. Colonization was defined as mechanical ventilation for more than seven days, no signs of infection, and isolation of the same bacteria species in two previously obtained NB-BAL samples. Of the 145 neonates, 40 (27.5%) were infected and 12 (8.3%) were colonized. Forty-four patients (30%) developed VAP according to diagnostic categories based on clinical and radiologic criteria. Forty newborns with VAP (90%) had positive NB-BAL culture. The sensitivity, specificity, and positive and negative predictive values of NB-BAL fluid culture for VAP diagnosis were 90%, 90%, 70%, and 97%, respectively. The percentage of ICB was significantly higher in newborns with VAP. The presence of ICB in 2% or more on Giemsa-stained smears corresponded to a sensitivity of 94%, specificity of 83%, positive predictive value of 94%, and negative predictive value of 83%. The sensitivity and specificity of combination of ICB and NB-BAL quantitative culture in diagnostic samples were 94% and 90%, respectively. The positive and negative predictive values were 71% and 98%. In our study, the presence of leukocytes in the NB-BAL fluid smear of infants with VAP was higher than that of the colonized babies (84%, 26%). This difference was statistically significant (p &lt; 0.0001). The sensitivity and specificity of PMNs in NB-BAL fluid for the diagnosis were 86% and 75%, respectively, and the positive and negative predictive values were 89% and 69%. We conclude that NB-BAL lavage is well tolerated and clinically useful in mechanically ventilated newborns. These results suggest that NB-BAL fluid microscopic examination and cultures can offer a sensitive and specific means to diagnose VAP in newborns and may provide relevant information about the causative pathogens. abstract_id: PUBMED:33413988 Safety and clinical application of nonbronchoscopic bronchoalveolar lavage in preterm neonates with clinical ventilator-associated pneumonia. Background: The safety and clinical application of nonbronchoscopic bronchoalveolar lavage (NB-BAL) in preterm neonates with ventilator-associated pneumonia (VAP) have not been fully investigated, and limited data on the feasibility of this method are available. Methods: Premature infants with clinically suspected VAP between October 2017 and June 2019 were enrolled, and NB-BAL was performed. The tolerance and safety of NB-BAL were prospectively recorded during the procedure, and the clinical applications of NB-BAL were observed. Results: A total of 46 NB-BAL procedures were performed in 31 neonates with clinically suspected VAP. The median (interquartile range) gestational age and birth body weight were 28.7 (26.7-31.3) weeks and 1055.0 (817.0-1475.0) grams, respectively. Overall, all episodes of the procedure were well tolerated, with only 9 (19.5%) episodes showing transient desaturation and one patient (2.2%) showing bradycardia during the NB-BAL procedure. There were no impairments in arterial blood gas, cardiopulmonary parameters or respiratory severity scores after NB-BAL. No significant complications occurred in any of the patients who received NB-BAL. No chronic comorbidities affected the safety and clinical application of NB-BAL in these mechanically ventilated preterm neonates. NB-BAL yielded a diagnosis in 32 (69.6%) of these VAP episodes. Staphylococcus aureus was the most common isolated bacterium and accounted for 7 (15.2%) confirmed cases of VAP in our study, followed by polymicrobial microorganisms (n = 6, 13.0%). The appropriate antibiotics were prescribed and modified according to the NB-BAL results in 25 (54.3%) cases of VAP. Conclusions: NB-BAL is a safe and clinically applicable method for determining the etiology and diagnosis of VAP in the NICU, even in extremely preterm neonates with major chronic comorbidities. Further studies to investigate the diagnostic accuracy and impact of NB-BAL on VAP treatment in neonates are warranted in the future. abstract_id: PUBMED:10579277 Blind protected specimen brush and bronchoalveolar lavage in ventilated children. Objective: To determine whether nonbronchoscopic protected specimen brush (PSB) and bronchoalveolar lavage (BAL) are contributive for diagnosing ventilator-associated pneumonia in mechanically ventilated children. Design: Prospective study. Setting: Fifteen-bed pediatric intensive care unit in a university hospital. Patients: A total of 103 mechanically ventilated children, ranging in age from 7 days to 8.8 yrs, most with a high clinical suspicion for bacterial pneumonia. Interventions: All the children underwent nonbronchoscopic PSB and BAL. Nonbronchoscopic PSB was performed with a plugged double-sheathed brush and BAL with a double-lumen plugged catheter. Endotracheal secretions and blood cultures were also collected. Open-lung biopsy was performed for any child who died within 7 days after the inclusion in the study, according to the parental consent. Measurements And Main Results: The PSB specimens were submitted for bacteriologic quantitative culture (positive threshold, 10(3) colony-forming units [cfu]/mL). The BAL samples were processed for microscopic quantification of the polymorphonuclear cells containing intracellular bacteria (positive threshold, 1%) and quantitative culture (positive threshold, 10(4) cfu/mL). According to diagnostic categories based on clinical, biological, radiologic, and pathologic criteria, 29 children had bacterial pneumonia and 64 did not Ten children were classified as having an uncertain status. Of the 29 children with bacterial pneumonia, 26 (90%) met one of the following three criteria: a) PSB specimen culture, &gt; or =10(3) cfu/mL; b) intracellular bacteria in cells retrieved by BAL, &gt; or =1%; and c) BAL fluid culture, &gt; or =10(4) cfu/mL. In contrast, 56 (88%) of the 64 patients without pneumonia did not. Conclusion: The results of this study indicate the following: a) nonbronchoscopic PSB and BAL were feasible in a large population of mechanically ventilated children; b) nonbronchoscopic techniques were contributive for diagnosing ventilator-associated pneumonia in children; and c) a combined diagnostic approach, using nonbronchoscopic PSB and BAL, was superior to using either test alone. abstract_id: PUBMED:8858032 Comparison of nonbronchoscopic techniques with bronchoscopic brushing in the diagnosis of ventilator-associated pneumonia. Objective: To determine the diagnostic accuracy of quantitative cultures obtained via nonbronchoscopic protected specimen brushing (PSB) and nonbronchoscopic bronchoalveolar lavage (BAL) compared with quantitative cultures obtained by bronchoscopic PSB in surgical patients suspected of ventilator-associated pneumonia. Design: Prospective, crossover controlled study of 15 ventilated surgical intensive care unit patients in a university teaching hospital. Methods: Fifteen consecutive ventilated patients suspected of ventilator-associated pneumonia on the basis of leukocytosis, purulent sputum, and appearance of chest roentgenogram were enrolled. All patients underwent nonbronchoscopic PSB and BAL followed by bronchoscopic PSB. The duration of each procedure was noted. Culture results were considered positive only if greater than 10(4) colony forming units (CFU) per milliliter were present. Main Results: Perfect concordance was noted between bronchoscopic PSB and nonbronchoscopic BAL (kappa = 1.0). The concordance between bronchoscopic PSB and nonbronchoscopic PSB was 93% ((kappa = 0.86). The nonbronchoscopic procedures were performed in significantly less time than the bronchoscopic procedure. Conclusion: Nonbronchoscopic PSB and BAL provide similar microbiologic data to bronchoscopic PSB in the diagnosis of ventilator-associated pneumonia while shortening procedure time significantly. Answer: The study by PUBMED:23354237 investigated the use of early nonbronchoscopic bronchoalveolar lavage (screening-BAL) as a predictor of ventilator-associated pneumonia (VAP) in trauma and emergency general surgery patients. The results indicated that positive screening-BAL results were associated with the development of VAP by the same organism and could identify patients at risk for pneumonia. The study found that a positive screening-BAL had a positive predictive value of 0.85 for subsequent clinical pneumonia, and a negative screening-BAL had a negative predictive value of 0.85. This suggests that early nonbronchoscopic BAL can be a useful predictor of VAP in this patient population. Additionally, PUBMED:17172064 reported that nonbronchoscopic bronchoalveolar lavage (NB-BAL) is effective in diagnosing VAP in newborns, with high sensitivity and specificity when combined with microscopic examination and quantitative culture of the lavage fluid. The study found that NB-BAL fluid culture had a sensitivity of 90%, specificity of 90%, and positive and negative predictive values of 70% and 97%, respectively, for VAP diagnosis. Furthermore, PUBMED:33413988 demonstrated that NB-BAL is a safe and clinically applicable method for determining the etiology and diagnosis of VAP in neonatal intensive care units, even in extremely preterm neonates with major chronic comorbidities. The study showed that NB-BAL yielded a diagnosis in 69.6% of VAP episodes, and appropriate antibiotics were prescribed and modified according to the NB-BAL results in 54.3% of cases. In summary, early nonbronchoscopic bronchoalveolar lavage can be a predictor of ventilator-associated pneumonia, particularly when positive results are obtained. It has been shown to be a safe and effective diagnostic tool in both adult and neonatal populations, with the potential to guide appropriate antibiotic therapy.
Instruction: Does glycoprotein IIIa gene (Pl(A)) polymorphism influence clopidogrel resistance? Abstracts: abstract_id: PUBMED:17432927 Does glycoprotein IIIa gene (Pl(A)) polymorphism influence clopidogrel resistance? : a study in older patients. Background: Clopidogrel is a potent antiplatelet drug used for secondary prevention after ischaemic cardiovascular or cerebrovascular events. In patients with aspirin (acetylsalicylic acid) intolerance or resistance, it is used as monotherapy. Recent data report that Pl(A) polymorphism of the glycoprotein IIIa gene may account for differences in aspirin-induced antiplatelet effects. An increased degree of platelet reactivity was also reported in Pl(A2) carriers compared with Pl(A1/A1) patients after administration of a clopidogrel 300mg loading dose. Objectives: The aim of this study was to assess the modulatory effect of the Pl(A2) allele on platelet aggregation in patients taking long-term clopidogrel. M ETHODS: The prevalence of the Pl(A2) allele was assessed in 38 (21 males, 17 females; mean age 63 +/- 13 years) clopidogrel-resistant and 59 (26 males, 33 females; mean age 63 +/- 11 years) clopidogrel-responsive patients. The polymerase chain reaction-restriction fragment length polymorphism method was utilised to evaluate Pl(A) polymorphism. A Carat TX4 optical platelet aggregometer (Carat Diagnostics Ltd, Budapest, Hungary) was used to measure 5 and 10 micromol/L adenosine diphosphate-induced platelet aggregation. Results: Significantly more patients were taking combination antiplatelet therapy in the clopidogrel-resistant group than in the clopidogrel-responsive group (50% vs 30%, respectively). The prevalence of the Pl(A2) allele did not differ significantly between the two groups (0.09 vs 0.13), even after adjustment for combination therapy and various risk factors. Conclusions: Our results show that carriers of the Pl(A2) allele do not have an increased risk of clopidogrel resistance. These findings and data from our previous studies suggest that patients with a Pl(A2) allele homozygosity may benefit from antiplatelet therapy based on clopidogrel rather than aspirin. abstract_id: PUBMED:23917001 Relationship between human platelet antigen-1 gene polymorphism and clopidogrel resistance in patients with coronary artery disease Objectives: It has been proposed that human platelet antigen-1 (HPA-1) gene polymorphism is associated with coronary artery disease (CAD) and affects platelet function. We aimed to investigate the distribution of HPA gene polymorphism between angiographic CAD and a control group and the relation between HPA gene polymorphism and platelet aggregation. Study Design: The study population consisted of 94 patients with angiographic CAD and 115 patients without angiographic CAD. Platelet aggregation was measured with impedance aggregometry on the fifth day of percutaneous coronary intervention (PCI). Platelet aggregation &gt;480 AU*min was defined as the clopidogrel resistance group. Blood samples were obtained from all participants at discharge for investigating HPA-1 gene polymorphism. Results: There was no significant difference in the distribution of HPA-1 gene polymorphism between the control and CAD groups (78.7% vs. 78.1% for A allele and 21.3% vs. 21.9% for B allele, p=NS). The analysis between groups with and without clopidogrel resistance revealed no significant difference in the distribution of HPA-1A and HPA-1B alleles between the groups (A allele 78.7% vs. 78.9% and B allele 21.3% vs. 21.1%, p=NS). In the CAD group, there were no significant differences in platelet aggregation between HPA-1A and HPA-1B alleles (294±240 vs. 259±261 AU*min, p=NS). Conclusion: The distribution of HPA-1 gene polymorphism was not different in CAD patients compared to the control group. HPA-1 gene polymorphism was not associated with platelet aggregation or clopidogrel resistance assessed by impedance aggregometry in the CAD group. abstract_id: PUBMED:17184645 Dose-related efficacy of aspirin after coronary surgery in patients With Pl(A2) polymorphism (NCT00262275). Background: To evaluate the impact of the genetic polymorphisms affecting aspirin response using platelet aggregation and the response to different aspirin doses after cardiopulmonary bypass, we performed a subanalysis of the results from a randomized trial evaluating low- and medium-dose aspirin and clopidogrel. Methods: Blood was collected from consenting patients and DNA extracted. Polymerase chain reaction and restriction fragment length polymorphism analysis was performed to detect Pl(A2), C807T, and A842/C50T polymorphisms. Aspirin efficacy was assessed using light transmission platelet aggregometry, and reported as percentage aggregation and EC50 concentrations using the technique of Born. Results: Of 90 patients, 80 consented to further genetic testing, of whom 63 patients were randomly assigned to medium- (325 mg) or low-dose (100 mg) aspirin. The Pl(A2), C807T, and A842/C50T gene frequencies were 30%, 66%, and 21%, respectively, with no identifiable differences in the baseline platelet aggregation. Postoperatively, after 5 days of aspirin, platelet aggregation was consistently but not significantly impaired with Pl(A2) and A842/C50T carriers and consistently but not significantly improved with C50T carriers. An interaction term was identified on percentage aggregation and EC50 using epinephrine. The interaction coefficient describes a higher aggregation of 19% (95% confidence interval: 2 to 36; p = 0.03) and less inhibition with an EC50 of -2.07 (-4.19 to 0.04; p = 0.06) in patients who were both Pl(A2) positive and receiving low-dose aspirin. Conclusions: Genetic polymorphisms that affect the response to aspirin are common. The impaired response of persons with the Pl(A2) polymorphism to aspirin may be dose related, with significant improvement observed in patients using medium- rather than low-dose aspirin. abstract_id: PUBMED:24437181 Resistance to clopidogrel and polymorphism of P2RY12 and GPIIIA genes in patients with chronic coronary heart disease We studied the association between clopidogrel resistance, H1/NH2 polymorphism of the P2RY12 gene and T156C polymorphism of the GpIIIa gene in residents of Western Siberia suffering chronic CHD. It was shown that the occurrence of H1 and H2 haplotypes of the P2RY12 gene and 1565T and 1565C alleles of the GpIIIa gene was similar to that reported for European populations. Patients showing variable platelet response to the inhibitory action of clopidogrel were not significantly different in terms of P2RY12 and GpIIIa genotype distribution. To conclude, the study revealed no association between the risk of clopidogrel resistance and the presence of polymorphic variants of platelet receptor genes P2RY12 and GpIIIa. abstract_id: PUBMED:15813672 Platelet glycoprotein IIb/IIIa polymorphism and coronary artery disease: implications for clinical practice. Membrane glycoprotein (GP) IIb/IIIa plays a major role in platelet function; indeed it enables stimulated platelets to bind fibrinogen and related adhesive proteins, a process that is considered key in the development of thrombosis. The gene encoding GPIIIa (ITGB3, also known as GP3A) shows a common platelet antigen polymorphism [PL(A1)/PL(A2); expressed by alleles ITGB3*001 and ITGB3*002] that was variably associated with vascular disease. In 1996, the presence of the PL(A2) allele (ITGB3*001) was first reported to increase the risk of coronary heart disease. Shortly after, the interest in this study was increased by the publication of a case report on the death from myocardial infarction of an Olympic athlete who was found to be homozygous for the PL(A2) allele. Overviews of the published studies on the PL(A1)/PL(A2) polymorphism and coronary risk suggest an influence of the PL(A2) allele on the clinical phenotype and the interaction with other environmental factors. In particular, the strongest effect of the ITGB3 PL(A2) allele was expressed on the risk of occlusion after revascularization procedures, mainly after stent implantation, a condition in which platelet activation is more important as compared with other stenotic mechanisms. In the future, the identification of patients who are particularly responsive to GPIIb/IIIa antagonist therapy (e.g. those with the PL(A2) allele) might help to improve the treatment efficacy in this relatively small population. In a mechanism possibly unrelated to its effect on platelet reactivity to aggregating stimuli, the presence of the PL(A2) allele might influence the antiaggregatory effect of platelet inhibitory drugs such as aspirin (acetylsalicylic acid), clopidogrel, and GPIIb/IIIa antagonists. Although interesting, current data does not yet have direct clinical implications for patient risk identification and drug therapy tailoring. Larger studies are necessary to define the role of the PL(A2) allele in more homogeneous groups where platelet GPIIb/IIIa activation might be particularly relevant. abstract_id: PUBMED:31007424 Aspirin and Clopidogrel Resistance in Indian Patients with Ischemic Stroke and its Associations with Gene Polymorphisms: A Pilot Study. Introduction: Antiplatelet resistance is one of the urgent issues in current stroke care. One-third to one-half of the patients who experience a recurrent stroke is already on antiplatelet medications. We studied resistance to aspirin and clopidogrel in Indian stroke patients and its association with gene polymorphisms. Methods: Platelet function testing by light transmission aggregometry was performed on 65 patients with ischemic stroke who were stable on dual antiplatelet therapy (clopidogrel 75 mg OD and aspirin 75 mg OD) along with 65 age-matched controls. Aspirin resistance was considered as mean platelet aggregation ≥70% with 10 μM adenosine diphosphate (ADP) and ≥20% with 0.75 mM arachidonic acid. Clopidogrel resistance was defined as &lt;10% decrease from the baseline in platelet aggregation in response to ADP 10 μM and semi-response as &lt;30% decrease from the baseline. Polymorphisms CYP2C19 * 2 and GPIIb/IIIa (PLA1/A2) were genotyped by polymerase chain reaction-restriction fragment length polymorphism. Results: We found 64.6% (42/65) patients with inadequate response to clopidogrel (15.4% [10/65] resistant and 49.2% [32/65] semi-responders) and 4.6% (3/65) patients with inadequate response to aspirin (3.1% [2/65] resistant and 1.5% [1/65] semi-responder). The frequency of CYP2C19*2 mutant genotype was significantly higher in clopidogrel nonresponders compared to responders (P = 0.014). Clopidogrel nonresponsiveness was much higher in small vessel stroke. Conclusion: Unlike aspirin, a high proportion of nonresponders to clopidogrel was identified. In an interim analysis on 65 Indian patients, a significant association was found between CYP2C19*2 and clopidogrel nonresponsiveness. abstract_id: PUBMED:15166949 PlA polymorphism and platelet reactivity following clopidogrel loading dose in patients undergoing coronary stent implantation. The PlA polymorphism (Leu33Pro) of the platelet glycoprotein (GP) IIIa gene has been suggested to play an important role in coronary thrombosis. In vitro studies have shown differences for this polymorphism in platelet sensitivity towards antiplatelet drugs (aspirin and abciximab), suggesting a pharmacogenetic modulation. The aim of the study was to assess the modulatory effect of the PlA polymorphism on clopidogrel-induced antiplatelet effects in 38 patients undergoing coronary stent implantation receiving a 300 mg clopidogrel loading-dose. Platelet reactivity was assessed as GPIIb/IIIa activation and P-selectin expression in platelets stimulated with 2 micromol/l adenosine diphosphate using whole blood flow cytometry. The distribution of the homozygous PlA1/A1 and heterozygous PlA1/A2 genotypes were 74 and 26%, respectively. PlA2 carriers had a higher degree of GPIIb/IIIa activation (P = 0.05) and P-selectin expression (P = 0.02) during the overall study time course and a lower antiplatelet effect to a 300 mg clopidogrel loading-dose up to 24 h following intervention (P &lt; 0.05). In conclusion, the Pl polymorphism of the GPIIIa gene modulates platelet reactivity towards clopidogrel front loading in patients undergoing coronary stenting. This suggests the need for individualized antithrombotic regimens to optimally inhibit platelet reactivity. abstract_id: PUBMED:23834376 The PlA1/A2 polymorphism of glycoprotein IIIa in relation to efficacy of antiplatelet drugs: a systematic review and meta-analysis. Aim: The PlA1/A2 polymorphism of glycoprotein IIIa (GPIIIa) has been associated with both antiplatelet drug resistance and increased cardiovascular events. The aim of this study was to conduct the first meta-analysis investigating the association between carriage of the PlA2 allele and resistance to currently licensed antiplatelet drugs. Methods: Electronic databases (MEDLINE and EMBASE) were searched for all articles evaluating genetic polymorphisms of GPIIIa. For studies where antiplatelet resistance was measured using validated techniques, pooled odds ratios (ORs) were calculated using fixed effects and random effects models. Results: Sixteen studies were eligible for statistical analysis and included 1650 PlA1 homozygous subjects and 668 carriers of the PlA2 allele. For carriers of the PlA2 allele, OR 0.924 (n = 2318; 95% CI 0.743, 1.151; P = 0.481) was observed for resistance to any antiplatelet drug, OR 0.862 (n = 2085; 95% CI 0.685, 1.086; P = 0.208) for resistance to aspirin and OR 1.429 (n = 233; 95% CI 0.791, 2.582; P = 0.237) for resistance to clopidogrel. In the aspirin cohort, sub-group analysis revealed no statistical association in either healthy subjects or those with cardiovascular disease. PlA2 carriage was marginally associated with aspirin sensitivity using the fixed effects model when identified by the PFA-100 assay (n = 1151; OR 0.743, 95% CI 0.558, 0.989; P = 0.041) but with significant heterogeneity (I(2) = 55%; P = 0.002). Significance was lost with analysis using a random effects model. Conclusions: The totality of published data does not support an association between carriage of the PlA2 allele and antiplatelet drug resistance. Significant heterogeneity indicates the need for larger studies using validated and standardized assays. abstract_id: PUBMED:21901881 Resistance to antiplatelet drugs in patients with cerebrovascular disorders This review concerns clinical and laboratory resistance to antiplatelet drugs (aspirin and clopidogrel) in patients with cerebrovascular disorders. Results of certain clinical trials showed that laboratory resistance to antiaggregants is associated with recurrent thromboembolic vascular events. The commonest causes of aspirin resistance are production of arachidonic acid metabolites via the lipoxygenase pathway, poor compliance with the treatment, polymorphism of the genes encoding for cyclooxygenase and glycoprotein (GP) IIb/IIIa, endothelial dysfunction. The causes of clopidogrel resistance include inadequate doses of the drug, its low absorption, poor compliance with the treatment, polymorphism of ADP receptors, GP IIb/IIIa and cytochrome P450 genes, acute coronary syndrome and stroke, metabolic syndrome. Therapeutic efficacy of antiaggregants can be improved by increasing their doses, using membranotropic agents, correcting endothelial dysfunction, etc. Because the apparent variability of antiplatelet drug resistance is currently due to the use of different test-systems by different authors, the evaluation of individual sensitivity to a given drug showing laboratory resistance and the choice of alternative therapy are thus far possible only in the framework of clinical studies. Large-scale prospective multicenter trials of antiplatelet drug resistance are needed along with research for better understanding mechanisms of individual platelet sensitivity and resistance to antiaggregants and developing efficacious methods for their correction. abstract_id: PUBMED:19526385 Role of PLA2 polymorphism on clinical events after percutaneous coronary intervention. Objectives: We examined the relationship between the PLA2 polymorphism of the platelet GPIIIa receptor and major adverse cardiac events (MACE) after percutaneous coronary intervention (PCI). Background: PLA2 polymorphism has been associated with increased thrombosis and myocardial infarction. The association of PlA2 with MACE post-PCI has not been determined. Methods: 200 patients with normal baseline CKMB undergoing non-urgent PCI for symptomatic coronary artery disease were tested for the PLA2 polymorphism and followed for 1 year while on aspirin and clopidogrel. MACE were recorded and adjudicated by an independent, blinded committee. Results: Baseline demographic and lesion characteristics, platelet aggregation, activated clotting time and use of GP Ilb/llla blockers were similar between the 2 groups. The normal (A1A1), heterozygous (A1A2), and homozygote (A2A2) variants were found in 144 (72%), 55 (27.5%), and 1 (0.5%) patients, respectively. The presence of the PLA2 genetic polymorphism had no influence on 1-year MACE: 7.1% for the A1A1 group versus 6.5% for the A1A2 group (P=NS). The rate of any CKMB elevation post-PCI was 39% vs. 38% respectively (P=NS). Conclusion: In this study, the GPIIIa PlA2 polymorphism was frequent (27.5%), but the homozygous variant was very infrequent (0.5%). The presence of PLA2 had no influence on peri-procedural or one-year clinical outcomes. Answer: The influence of glycoprotein IIIa gene (Pl(A)) polymorphism on clopidogrel resistance has been investigated in several studies, but the results are not entirely consistent. Some studies suggest that the Pl(A2) allele may modulate platelet reactivity and affect the antiplatelet effects of clopidogrel. For instance, one study found that Pl(A2) carriers had a higher degree of platelet activation and a lower antiplatelet effect to a 300 mg clopidogrel loading dose up to 24 hours following coronary stenting (PUBMED:15166949). Another study indicated that the impaired response of persons with the Pl(A2) polymorphism to aspirin may be dose-related, with significant improvement observed in patients using medium- rather than low-dose aspirin (PUBMED:17184645). However, other studies have not found a significant association between the Pl(A) polymorphism and clopidogrel resistance. One study showed that carriers of the Pl(A2) allele do not have an increased risk of clopidogrel resistance, suggesting that patients with Pl(A2) allele homozygosity may benefit from antiplatelet therapy based on clopidogrel rather than aspirin (PUBMED:17432927). Similarly, another study found no significant difference in the distribution of HPA-1 gene polymorphism between control and coronary artery disease (CAD) groups, nor was the polymorphism associated with platelet aggregation or clopidogrel resistance (PUBMED:23917001). Additionally, a systematic review and meta-analysis concluded that the totality of published data does not support an association between carriage of the Pl(A2) allele and antiplatelet drug resistance (PUBMED:23834376). In conclusion, while there is some evidence to suggest that the Pl(A) polymorphism may influence the response to clopidogrel, the overall data is mixed and does not definitively support a strong association between Pl(A) polymorphism and clopidogrel resistance. Further research with larger, more homogeneous study populations and standardized assays may be necessary to clarify the role of this genetic polymorphism in clopidogrel resistance.
Instruction: Is Pertussis Infection Neglected in China? Abstracts: abstract_id: PUBMED:27223467 Is Pertussis Infection Neglected in China? Evidence from a Seroepidemiology Survey in Zhejiang, an Eastern Province of China. Background: The resurgence of pertussis has occurred in many countries. However, the epidemiological profiles of pertussis cannot be well understood by the current surveillance system in China. This study was designed to investigate the age specific serologic evidence of antibodies against pertussis, and to offer information regarding the existence of pertussis infection in Zhejiang Province, China. Methods: A cross-sectional serosurvey was carried out in 6 counties of Zhejiang Province during September and October of 2014. The immunoglobulin G-pertussis toxin (IgG-PT) levels were measured quantitatively with a commercially available enzyme-linked immunosorbent assay (ELISA). The antibody activities were expressed in the Food and Drug Administration (FDA)-U/ml and a level ≥30 FDA-U/ml was considered seropositive. An IgG-PT &gt;80 FDA-U/ml indicated recent pertussis infection if the patient had not received immunization with the pertussis vaccine within the last year. Results: The mean IgG-PT seropositivity rate among the 2107 subjects was 33.32% with a geometric mean concentration of 17.73 (95% confidence interval: 16.90-18.60) FDA-U/ml. The difference in the seropositivity rates reached significant means among the different age groups (waldχ2 = 198.41, P&lt;0.0005), and children aged 3 years had the highest percentage (63.24%) of undetectable IgG-PT level. Of the 1707 subjects ≥3 years of age, 169 (9.90%) had evidence of a recent infection. The highest proportion of IgG-PT levels ≥80 FDA-U/ml was found in ≥60 years age group followed by 11-15 and 16-25 years age groups. Conclusions: This study indicates the rather lower IgG-PT level sustained 1 year after the acellular pertussis vaccine booster dose, and substantial proportion of population susceptibility to pertussis in Zhejiang Province, China. Moreover, pertussis infection is not uncommon; it was estimated that 10% of subjects were recently infected approximately within the last 100 days. We highly suggest that the surveillance capacity should be strengthened and consider introducing booster dose that protect against pertussis in 6 years old children. abstract_id: PUBMED:37637457 Severe problem of macrolides resistance to common pathogens in China. With the widespread use of macrolide antibiotics in China, common pathogens causing children's infections, such as Streptococcus pneumoniae, Streptococcus (including Group A streptococcus, Group B streptococcus), Staphylococcus aureus, Bordetella pertussis, and Mycoplasma pneumoniae, have shown varying degrees of drug resistance. In order to provide such problem and related evidence for rational use of antibiotics in clinic, we reviewed the drug resistance of common bacteria to macrolides in children recent 20 years. abstract_id: PUBMED:30994096 Resurgence of Pertussis Infections in Shandong, China: Space-Time Cluster and Trend Analysis. Although vaccination is effective in preventing infection, pertussis remains endemic worldwide, including China. To lead better targeted prevention strategies, we examined dynamics of spatial and temporal patterns of pertussis transmission in Shandong, China, from 2009 to 2017. We used space-time cluster analysis, logistic regression analysis, and regression tree model to detect the changes in spatial patterns of pertussis infections in Shandong Province, China, between periods (2009-2011, 2012-2014, and 2015-2017). The yearly pertussis incidence rates dramatically increased by 16.8 times from 2009 to 2017. Shifting patterns of peaks of pertussis infections were observed over both time (from June-July to August-September) and space (from Linyi to Jinan), with increasing RR from 4.1 (95% CI: 2.3-7.4) (2009-2011) to 6.1 (95% CI: 5.6-6.7) (2015-2017) and obvious coincidence of peak time. West Shandong had larger odds of increased infections over the study period (odds ratio: 1.52 [95% CI: 1.05-2.17]), and pertussis had larger odds of spreading to east (odds ratio: 2.32 [95% CI: 1.63-3.31]) and north (odds ratio: 1.69 [95% CI: 1.06-2.99]) over time. Regression tree model indicated that the mean difference in yearly average pertussis incidence between 2009-2011 and 2015-2017 increased by more than 4-fold when the longitudes of counties are &lt; 118.0°E. The geographic expansion of pertussis infection may increase the risk of epidemic peaks, coinciding with increased infections in the future. The findings might offer evidence for targeting preventive measures to the areas most in need to minimize the impact of the disease. abstract_id: PUBMED:33034224 The epidemic of erythromycin-resistant Bordetella pertussis with limited genome variation associated with pertussis resurgence in China. Background: The resurgence of Bordetella pertussis infections leading to whooping cough is a concern in many parts of the world. The number of pertussis cases in China has increased significantly since 2013. Research Design And Methods: In this study, whole-genome sequencing analysis was performed for 388 B. pertussis strains isolated in China from the 1970s to 2018, combining 594 published strains from around the world. Results: This study revealed that lineage V diverged about 50 years ago in China, while lineage IV is dominant in the other countries. It also revealed that the erythromycin-resistant sub-lineages Va, Vb, and Vc with limited genomic variation emerged 11 ~ 12 years ago. These three sub-lineages were identified after the co-purified acellular vaccines (cp-ACVs) completely replaced the previous whole cell vaccines (WCVs) after the national immunization program of 2012. It suggests that the cp-ACVs cannot induce immunity that is potent enough to restrict the spread of the lineage V, antibiotic abuse further favors the spread of this lineage in China. Conclusions: These findings demand a reassessment of the immunization strategy and development of new vaccines in China to stop the resurgence and drug resistance of B. pertussis. abstract_id: PUBMED:35746480 Seroprevalence of Pertussis in Adults at Childbearing Age Pre- and Post- COVID-19 in Beijing, China. The number of reported pertussis cases has significantly decreased during the coronavirus disease 2019 (COVID-19) pandemic under the influence of strict public health measures in many countries including China. This study evaluated the prevalence of serum anti-pertussis toxin (anti-PT) IgG antibodies in adults at childbearing age pre- and post- COVID-19 in Beijing, China. Altogether, 2021 serum samples collected from individuals aged 20 to 39 years who attended an annual health examination at the Sixth Medical Center of PLA General Hospital, Beijing, in 2018~2020 were measured by ELISA. The median concentration of anti-PT IgG antibodies among participants in 2020 (2.96 IU/mL) was significantly lower than that in 2018 (3.27 IU/mL) (p = 0.011) and in 2019 (3.24 IU/mL) (p = 0.014). The percentage of participants with anti-PT IgG antibodies higher than 40 IU/mL (indicating a pertussis infection within the past few years) was 1.79% (9/503) in 2018, 2.04% (15/735) in 2019 and 1.66% (13/783) in 2020, respectively. The corresponding numbers of the non-detectable (&lt;5 IU/mL) rate of anti-PT IgG antibodies were 66.60%, 65.99% and 70.24%. Our results showed that there was a significant difference between true and reported incidence rates even during the COVID-19 pandemic. The proportion of adults at childbearing age without pertussis-specific antibodies is high, suggesting that booster vaccinations in adults should be considered in this country. abstract_id: PUBMED:22892100 The seroepidemiology of immunoglobulin G antibodies against pertussis toxin in China: a cross sectional study. Background: Pertussis is a reported vaccine-preventable respiratory disease in China. Because the routine laboratory methods for diagnosis are not in use, the reported cases are mainly in infants with classical paroxysmal cough and the true incidence related to pertussis is most likely under estimated. In China, however, few studies have attempted to address this issue. The purpose of this cross sectional study was to estimate the incidence rates using the method of sero-epidemiology of immunoglobulin (Ig) G antibodies against pertussis toxin (PT) among healthy populations in China. Methods: Blood samples were obtained from 1313 healthy individuals aged 0 to 95 years in Guangdong province of China throughout September 2010. Serum IgG antibodies against PT were determined by commercial ELISA kits. Subjects with concentration of anti-PT IgG higher than 30 IU/mL were indicated to have recent Bordetella pertussis infection, if they have not received a booster dose of pertussis vaccine within one year. Results: Of the 1313 study subjects, 117 (8.91%) were found to have anti-PT antibodies higher than 30 IU/mL. The estimated incidence of recent infection was thus 9395 per 100,000 for individuals older than 7 years. Peaks of the estimated incidence rate of recent infection were found to be 11561 per 100,000 in age group of 41-50 years and 11428 per 100,000 in the group aged 13-19 years. Conclusions: Our study indicated that B.pertussis infections are considerablely common, particularly in adolescents and adults in China. The study also stresses the importance of laboratory diagnosis for pertussis and employment of booster dose of pertussis vaccine in adolescents and adults in this country. abstract_id: PUBMED:24018405 Seroprevalence of pertussis in China: need to improve vaccination strategies. Pertussis remains an important cause of infant death worldwide and is an ongoing public health concern even in countries with high vaccination coverage. A cross-sectional seroepidemiological study was undertaken to estimate true incidence rates and gain further insight into the epidemiology and burden of pertussis in China. During 2011, a total of 1080 blood samples were obtained from healthy individuals between 0 and 86 y of age in Zhengzhou, Central China. Serum IgG antibodies against pertussis toxin (PT) and filamentous hemagglutinin (FHA) were measured quantitatively using ELISA. The results showed that the geometric mean titers of PT and FHA IgG were 6.48 IU/mL (95% CI: 5.70-7.41 IU/mL) and 11.39 IU/mL (95% CI: 10.22-12.87 IU/mL) among subjects less than 4 y of age, indicating that pertussis antibody levels were low despite high vaccination coverage. Of the 850 subjects≥4 y of age, 56 (6.6%) had anti-PT IgG titers above 30 IU/mL, and 11 (1.3%) had antibodies titers above 80 IU/mL. The estimated age-specific incidence of infection with B. pertussis revealed a peak incidence in the 31 to 40 y age group, followed by the 41 to 60 y age group. Taken together, these results indicate that pertussis is common in Chinese subjects in Zhengzhou, especially in adults, suggesting that the disease burden is underestimated in China. Therefore, our study stresses the importance of strengthening the diagnostic capacity and improving surveillance system for delineating current epidemiological profiles of pertussis. Most importantly, it may be advisable to re-evaluate the current Chinese pertussis immunization schedule and implement to booster doses for older children, adolescents and adults. abstract_id: PUBMED:34656079 Whole genome sequence of a non-toxigenic Corynebacterium diphtheriae strain from a hospital in southeastern China. Background: Sporadic cases of infection with non-toxigenic Corynebacterium diphtheriae (C. diphtheriae) isolates have been reported in regions covered by the Diphtheria-Tetanus-Pertussis vaccine, but no information describing the whole genome of non-toxigenic strains collected in China is available. Therefore, in this work, the complete genome of a non-toxigenic strain of C. diphtheriae from a hospital located in southeastern China was performed. Results: This non-toxigenic isolate belonged to the belfanti biotype and possessed a unique ST (assigned as ST799 in pubMLST). ErmX was present in the genome sequence and this isolate owned the resistance to erythromycin and clindamycin. Genes coding for virulence factors involved in adherence, iron-uptake and regulation of diphtheria toxin were also found. Two genes were involved in the interaction between pathogen and host. The phylogenetic analysis revealed that this newly isolated strain was similar to the strain NCTC10838, CMCNS703 and CHUV2995. Conclusion: Non-toxigenic C. diphtheriae strain contained virulence factors, thus it is able to cause an infectious disease, aspect that could be clarified by performing the whole genome sequencing analysis. abstract_id: PUBMED:24308586 Clustered cases of Bordetella pertussis infection cause high levels of IgG antibodies against pertussis toxin in adolescents in Gaobeidian city, China. SUMMARY In a large serosurvey performed in Gaobeidian city, China, 1032 children and adolescents aged 3-18 years were divided into three age groups: 3-6, 7-11, and 12-18 years. The geometric mean of IgG antibodies against pertussis toxin (anti-PT IgG) varied from 3.47 IU/ml (3-6 years) to 8.71 IU/ml (12-18 years) and the seroprevalence (≥ 40 IU/ml) varied from 4.48% (7-11 years) to 11.76% (12-18 years). Infections were clustered in three secondary schools (schools 1-3). However, excluding secondary schools 1-3, the IgG PT levels, seroprevalence and rates of recent infections (&gt;100 IU/ml) in secondary schools 4-7 (where infections were not clustered) were very close to those in the 3-6 and 7-11 years age groups. This suggests that the high seroprevalence and anti-PT IgG levels in adolescents in Gaobeidian city, China were caused by the clustered cases in certain secondary schools. Our results also suggested that pertussis was still circulating in China, and the reported incidence might be underestimated. abstract_id: PUBMED:11023089 Adult immunization--a neglected issue in Southeast Asia. Adult immunization is a neglected and underpublicised issue in Southeast Asia. Vaccine-preventable diseases cause unnecessary morbidity and mortality among adults in the region, while inadequate immunization results in unnecessary costs, including those associated with hospitalization, treatment, and loss of income. Childhood vaccination coverage is high for the EPI diseases of diphtheria, tetanus and pertussis; however, unvaccinated, undervaccinated, and aging adults with waning immunity remain at risk from infection and may benefit from vaccination. Catch-up immunization is advisable for adults seronegative for hepatitis B virus, while immunization against the hepatitis A and varicella viruses may benefit those who remain susceptible. Among older adults, immunization against influenza and pneumococcal infections is likely to be beneficial in reducing morbidity and mortality. Certain vaccinations are also recommended for specific groups, such as rubella for women of child-bearing age, typhoid for those travelling to high-endemicity areas, and several vaccines for high-risk occupational groups such as health care workers. This paper presents an overview of a number of vaccine-preventable diseases which occur in adults, and highlights the importance of immunization to protect those at risk of infection. Answer: Yes, pertussis infection appears to be neglected in China. Evidence from various studies indicates that despite high vaccination coverage, pertussis remains a significant public health concern in the country. A seroepidemiology survey in Zhejiang, an eastern province of China, revealed a substantial proportion of the population susceptible to pertussis, with an estimated 10% of subjects recently infected, suggesting that the surveillance capacity should be strengthened and booster doses considered for children aged 6 years (PUBMED:27223467). Additionally, the widespread use of macrolide antibiotics in China has led to varying degrees of drug resistance in common pathogens, including Bordetella pertussis (PUBMED:37637457). The resurgence of pertussis infections in Shandong, China, was analyzed through space-time cluster and trend analysis, showing a dramatic increase in yearly incidence rates and shifting patterns of infection peaks over time and space (PUBMED:30994096). Whole-genome sequencing analysis of B. pertussis strains in China revealed the emergence of erythromycin-resistant sub-lineages with limited genomic variation, suggesting that the current acellular pertussis vaccines may not be potent enough to restrict the spread of these lineages, and that antibiotic abuse may further favor their spread (PUBMED:33034224). Seroprevalence studies in adults at childbearing age pre- and post-COVID-19 in Beijing showed a significant proportion of adults without pertussis-specific antibodies, indicating that booster vaccinations should be considered (PUBMED:35746480). Other seroepidemiological studies across China have consistently indicated that B. pertussis infections are common, particularly in adolescents and adults, and that the disease burden is likely underestimated (PUBMED:22892100, PUBMED:24018405). These findings underscore the need for improved vaccination strategies and highlight the importance of laboratory diagnosis for pertussis (PUBMED:24018405). In summary, the evidence suggests that pertussis infection is indeed neglected in China, with a need for improved surveillance, diagnosis, and vaccination strategies to address the underestimation of the disease burden and the emergence of drug-resistant strains.
Instruction: Does utilization of screening mammography explain racial and ethnic differences in breast cancer? Abstracts: abstract_id: PUBMED:16618951 Does utilization of screening mammography explain racial and ethnic differences in breast cancer? Background: Reasons for persistent differences in breast cancer mortality rates among various racial and ethnic groups have been difficult to ascertain. Objective: To determine reasons for disparities in breast cancer outcomes across racial and ethnic groups. Design: Prospective cohort. Setting: The authors pooled data from 7 mammography registries that participate in the National Cancer Institute-funded Breast Cancer Surveillance Consortium. Cancer diagnoses were ascertained through linkage with pathology databases; Surveillance, Epidemiology, and End Results programs; and state tumor registries. Participants: 1,010,515 women 40 years of age and older who had at least 1 mammogram between 1996 and 2002; 17,558 of these women had diagnosed breast cancer. Measurements: Patterns of mammography and the probability of inadequate mammography screening were examined. The authors evaluated whether overall and advanced cancer rates were similar across racial and ethnic groups and whether these rates were affected by the use of mammography. Results: African-American, Hispanic, Asian, and Native American women were more likely than white women to have received inadequate mammographic screening (relative risk, 1.2 [95% CI, 1.2 to 1.2], 1.3 [CI, 1.2 to 1.3], 1.4 [CI, 1.3 to 1.4], and 1.2 [CI, 1.1 to 1.2] respectively). African-American women were more likely than white, Asian, and Native American women to have large, advanced-stage, high-grade, and lymph node-positive tumors of the breast. The observed differences in advanced cancer rates between African American and white women were attenuated or eliminated after the cohort was stratified by screening history. Among women who were previously screened at intervals of 4 to 41 months, African-American women were no more likely to have large, advanced-stage tumors or lymph node involvement than white women with the same screening history. African-American women had higher rates of high-grade tumors than white women regardless of screening history. The lower rates of advanced cancer among Asian and Native American women persisted when the cohort was stratified by mammography history. Limitations: Results are based on a cohort of women who had received mammographic evaluations. Conclusions: African-American women are less likely to receive adequate mammographic screening than white women, which may explain the higher prevalence of advanced breast tumors among African-American women. Tumor characteristics may also contribute to differences in cancer outcomes because African-American women have higher-grade tumors than white women regardless of screening. These results suggest that adherence to recommended mammography screening intervals may reduce breast cancer mortality rates. abstract_id: PUBMED:37354321 Disparities in screening mammography utilization among Hispanic women in rural Texas from 2002 to 2018. Purpose: To examine the trends of racial/ethnic and urban-rural disparities in screening mammography use with a focus on Hispanic women in rural Texas, as well as to further investigate barriers to mammography screening practices. Methods: A serial cross-sectional study was conducted on screening mammography including eligible female respondents (≥ 40 years) from the Texas Behavioral Risk Factor Surveillance System survey from 2002 to 2018. Findings: Weighted descriptive analyses showed persistent racial/ethnic and urban-rural disparities in mammography screening rates among eligible women (≥ 40 years) in Texas. Overall, the mammography screening rates for women in rural areas were significantly lower than women in urban areas with a mean rate of 64.09% versus 70.89% (p &lt; 0.001). Rural Hispanic women had the lowest mean mammography screening rate (55.98%) among all eligible women which is 16.27% below the mean mammography screening rate of non-Hispanic white women in urban areas. Weighted logistic regression model revealed that women with no health insurance or primary care providers were 52% (95% Confidence Interval [CI] 0.36-0.63, p &lt; 0.001) or 54% (95% CI 0.35-0.6, p &lt; 0.001) less likely having an up-to-date mammography screening compared with women with health insurance or primary care providers, respectively. Conclusions: Our study demonstrated significant and persistent racial and urban-rural disparities in screening mammography utilization among Hispanic women compared with non-Hispanic white women from 2002 to 2018. Healthcare access is a major contributor to these disparities. It highlights the need for wide-scale interventions from public health and policymakers targeting under screened racial minorities and rural regions population to promote screening mammography services among disadvantaged population. abstract_id: PUBMED:24667037 Ethnic differences and predictors of colonoscopy, prostate-specific antigen, and mammography screening participation in the multiethnic cohort. Purpose: Given the relation between screening and improved cancer outcomes and the persistence of ethnic disparities in cancer mortality, we explored ethnic differences in colonoscopy, prostate-specific antigen (PSA), and mammography screening in the Multiethnic Cohort Study. Methods: Logistic regression was applied to examine the influence of ethnicity as well as demographics, lifestyle factors, comorbidities, family history of cancer, and previous screening history on self-reported screening participation collected in 1999-2002. Results: The analysis included 140,398 participants who identified as white, African American, Native Hawaiian, Japanese American, US born-Latino, or Mexican born-Latino. The screening prevalences overall were mammography: 88% of women, PSA: 45% of men, and colonoscopy: 35% of men and women. All minority groups reported 10-40% lower screening utilization than whites, but Mexican-born Latinos and Native Hawaiian were lowest. Men were nearly twice as likely to have a colonoscopy (OR=1.94, 95% CI=1.89-1.99) as women. A personal screening history, presence of comorbidities, and family history of cancer predicted higher screening utilization across modalities, but to different degrees across ethnic groups. Conclusions: This study confirms previously reported sex differences in colorectal cancer screening and ethnic disparities in screening participation. The findings suggest it may be useful to include personal screening history and family history of cancer into counseling patients about screening participation. abstract_id: PUBMED:14535597 Racial/ethnic differences in the self-reported use of screening mammography. The efficacy of mammography in reducing breast cancer mortality among women 50-69 years of age has been demonstrated in randomized controlled studies, but many women, especially ethnic minorities, have not been receiving regular mammographic screening. The current study investigated racial/ethnic differences in mammography use and their association with demographic characteristics and other factors. The study population consisted of 4,444 women aged 40 years and older who participated in the 1996 Medical Expenditure Panel Survey. Outcome measures studied included the self-reporting of mammography within the past two years and past year. Multivariate logistic regression modeling was used to examine the effect of race while controlling for other factors. In the univariate analysis, there was virtually no difference between white, black, and Hispanic women in mammography rates within either one or two years. However, multivariate logistic regression suggested that both blacks and Hispanics were more likely than whites to have received recent mammography, as black women were 31% and Hispanic women were 43% more likely than white women to have had a mammogram within the previous two years. Our results suggest that white women are no longer more likely to receive periodic screening mammography than black and Hispanic women, and in fact, might even be less likely to undergo the procedure. This reversal might indicate, at least in part, that programs and other activities to promote screening mammography among ethnic minority women have been successful and should now be expanded to include other women. abstract_id: PUBMED:27825351 Determining the spatial heterogeneity underlying racial and ethnic differences in timely mammography screening. Background: The leading cause of cancer death for women worldwide continues to be breast cancer. Early detection through timely mammography has been recognized to increase the probability of survival. While mammography rates have risen for many women in recent years, disparities in screening along racial/ethnic lines persist across nations. In this paper, we argue that the role of local context, as identified through spatial heterogeneity, is an unexplored dynamic which explains some of the gaps in mammography utilization by race/ethnicity. Methods: We apply geographically weighted regression methods to responses from the 2008 Public Health Corporations' Southeastern Household Health Survey, to examine the spatial heterogeneity in mammograms in the Philadelphia metropolitan area. Results: We find first aspatially that minority identity, in fact, increases the odds of a timely mammogram: 74% for non-Hispanic Blacks and 80% for Hispanic/Latinas. However, the geographically weighted regression confirms the relation of race/ethnicity to mammograms varies by space. Notably, the coefficients for Hispanic/Latinas are only significant in portions of the region. In other words, the increased odds of a timely mammography we found are not constant spatially. Other key variables that are known to influence timely screening, such as the source of healthcare and social capital, measured as community connection, also vary by space. Conclusions: These results have ramifications globally, demonstrating that the influence of individual characteristics which motivate, or inhibit, cancer screening may not be constant across space. This inconsistency calls for healthcare practitioners and outreach services to be mindful of the local context in their planning and resource allocation efforts. abstract_id: PUBMED:38416907 Screening Mammography Utilization in the United States. Breast cancer is the second leading cause of cancer mortality in adult women in the United States. Screening mammography reduces breast cancer mortality between 22% and 48%; however, screening mammography remains underutilized. Screening mammography utilization data are available from insurance claims, electronic medical records, and patient self-report via surveys, and each data source has unique benefits and challenges. Numerous barriers exist that adversely affect the use of screening mammography in the United States. This article will review screening mammography utilization in the United States, explore factors that impact utilization, and briefly discuss strategies to improve utilization. abstract_id: PUBMED:37967477 Rural racial disparities and barriers in mammography utilization among Medicare beneficiaries in Texas: A longitudinal study. This study examined rural racial/ethnic disparities in long-term mammography screening practices among Medicare beneficiaries. A retrospective longitudinal study was conducted using 100% Texas Medicare data for women aged 65-74 who enrolled in Medicare between 2010-2013. Of the 114,939 eligible women, 21.2% of Hispanics, 33.3% of non-Hispanic Blacks (NHB), and 38.4% non-Hispanic Whites (NHW) in rural areas were regular users of mammography, compared to 33.5%, 44.9%, and 45.3% of their counterparts in urban areas, respectively. Stratification analyses showed rural Hispanics and NHB were 33% (95% CI, 25% - 40%) and 22% (95% CI, 6% - 36%) less likely to be regular users of mammography compared to their urban counterparts. Major barriers to routine mammography screening included the lack of a primary care provider, frequent hospitalization, and comorbidity. The findings of this study highlight the importance of addressing rural racial disparities in mammography utilization among older women to ensure equitable screening practices for all populations. abstract_id: PUBMED:33792894 Racial differences in no-show rates for screening mammography. Background: Differences in utilization of screening mammography partly explain the increased breast cancer mortality observed in African American (AA) women compared with non-Hispanic White women. However, the contribution of noncompliance from women who do not come for their scheduled screening mammography appointment (ie, no-shows) is unknown. The purpose of this study was to investigate racial differences in no-show rates for screening mammography. Methods: Women scheduled for routine screening mammograms between January 2018 and March 2018 were identified from the Joanne Knight Breast Health Center at Siteman Cancer Center in St. Louis, Missouri. Using a case-control design, this study retrospectively identified patients who no-showed for their mammograms (cases) and randomly sampled an equal number of patients who completed their mammograms (controls). These participants were compared by race. The main outcome measure was whether AA race was associated with no-shows for screening mammography. Results: During the study period, 5060 women were scheduled for screening mammography, and 316 (6.2%) did not keep their appointment (ie, they no-showed). Women who no-showed were more likely to be AA than women who kept their appointment (odds ratio, 2.64; 95% confidence interval, 1.90-3.67). Even after adjustments for marital status, insurance type, and place of residence, AA race was still significantly associated with no-shows for screening mammography. Conclusions: This study identified a no-show rate of 6.2% for screening mammography at the authors' institution. Women who no-showed were more likely to be AA than women who completed their mammogram even after adjustments for multiple factors. These data can be leveraged for future studies aimed at improving mammography attendance rates among AA women. abstract_id: PUBMED:28647544 Racial and ethnic disparities among state Medicaid programs for breast cancer screening. Breast cancer screening by mammography has been shown to reduce breast cancer morbidity and mortality. The use of mammography screening though varies by race, ethnicity, and, sociodemographic characteristics. Medicaid is an important source of insurance in the US for low-income beneficiaries, who are disproportionately members of racial or ethnic minorities, and who are less likely to be screened than women with higher socioeconomic statuses. We used 2006-2008 data from Medicaid claims and enrollment files to assess racial or ethnic and geographic disparities in the use of breast cancer screening among Medicaid-insured women at the state level. There were disparities in the use of mammography among racial or ethnic groups relative to white women, and the use of mammography varied across the 44 states studied. African American and American Indian women were significantly less likely than white women to use mammography in 30% and 39% of the 44 states analyzed, respectively, whereas Hispanic and Asian American women were the minority groups most likely to receive screening compared with white women. There are racial or ethnic disparities in breast cancer screening at the state level, which indicates that analyses conducted by only using national data not stratified by insurance coverage are insufficient to identify vulnerable populations for interventions to increase the use of mammography, as recommended. abstract_id: PUBMED:21246529 Comorbidities and mammography use interact to explain racial/ethnic disparities in breast cancer stage at diagnosis. Background: Interactions with comorbidity burden and comorbidity-related care have not been examined as potential explanations for racial/ethnic disparities in advanced-stage breast cancer at diagnosis. Methods: The authors used linked Surveillance, Epidemiology, and End Results-Medicare data to determine whether comorbidity burden and comorbidity-related care are associated with stage at diagnosis, whether these associations are mediated by mammography use, and whether they explain racial/ethnic disparities. Stage at diagnosis and mammography use were analyzed in multivariate regression models, adjusting for comorbidity burden and comorbidity-race interactions among 118,742 women diagnosed with breast cancer during 1993 to 2005. Results: Mammography utilization was higher among women with ≥ 3 stable comorbidities than among those without comorbidities. Advanced stage at diagnosis was associated with black race (odds ratio [OR], 1.8; 95% confidence interval [CI], 1.6-1.8), Hispanic ethnicity (OR, 1.3; 95% CI, 1.2-1.5), unstable comorbidity, and age ≥ 80 years. Mammography was protective in all racial/ethnic groups, but neither mammography use (OR, 0.3; 95% CI, 0.3-0.3 and OR, 0.2; 95% CI, 0.2-0.2 for women with 1 and ≥ 2 prior mammograms, respectively) nor overall physician service use (OR, 0.7; 95% CI, 0.7-0.8 for women with ≥ 16 visits) explained the association between race/ethnicity and stage at diagnosis. The black/white OR fell to 1.2 (95% CI, 0.9-1.5) among women with multiple stable comorbidities who received ≥ 2 screening mammograms, and 1.0 (95% CI, 0.8-1.3) among mammography users with unstable comorbidities. Conclusions: Comorbidity burden was associated with regular mammography and earlier stage at diagnosis. Racial/ethnic disparities in late stage disease were reduced among women who received both regular mammograms and comorbidity-related care. Answer: Yes, the utilization of screening mammography does explain some of the racial and ethnic differences in breast cancer outcomes. Studies have shown that African-American, Hispanic, Asian, and Native American women are more likely than white women to have received inadequate mammographic screening, which may contribute to the higher prevalence of advanced breast tumors among African-American women (PUBMED:16618951). Additionally, disparities in mammography screening rates persist, with rural Hispanic women in Texas having significantly lower screening rates compared to non-Hispanic white women in urban areas (PUBMED:37354321). Ethnic disparities in screening participation have also been reported, with all minority groups having lower screening utilization than whites, and Mexican-born Latinos and Native Hawaiians being the lowest (PUBMED:24667037). However, another study suggested that black and Hispanic women were more likely than white women to have received recent mammography, indicating a possible reversal in previous trends where white women were more likely to receive periodic screening mammography (PUBMED:14535597). Spatial heterogeneity also plays a role in explaining gaps in mammography utilization by race/ethnicity, with the influence of individual characteristics varying across different regions (PUBMED:27825351). Moreover, rural racial disparities in mammography utilization among Medicare beneficiaries in Texas have been highlighted, with Hispanics and non-Hispanic Blacks in rural areas being less likely to be regular users of mammography compared to their urban counterparts (PUBMED:37967477). Racial differences in no-show rates for screening mammography have been identified, with African American women more likely to miss their scheduled mammography appointments (PUBMED:33792894). State-level disparities in the use of breast cancer screening among Medicaid-insured women have also been observed, with African American and American Indian women less likely to use mammography compared to white women in many states (PUBMED:28647544). Finally, interactions between comorbidity burden and mammography use have been examined, with comorbidity burden associated with regular mammography and earlier stage at diagnosis. Racial/ethnic disparities in late-stage disease were reduced among women who received both regular mammograms and comorbidity-related care (PUBMED:21246529). In conclusion, the utilization of screening mammography is a significant factor in explaining racial and ethnic differences in breast cancer outcomes, with disparities in access, adherence, and follow-up care contributing to these differences.
Instruction: Is interleukin-1 genotyping useful for the clinical management of patients with atrophic body gastritis? Abstracts: abstract_id: PUBMED:18047563 Is interleukin-1 genotyping useful for the clinical management of patients with atrophic body gastritis? Background: Atrophic body gastritis patients are at increased risk for gastric cancer. IL-1B/IL-1RN polymorphisms have been associated with gastric cancer susceptibility. The relationship between these polymorphisms and the long-term outcome of atrophic body gastritis patients is not known. Aim: To investigate whether the genotyping of IL-1B-511/IL-1RN polymorphisms is useful to characterize atrophic body gastritis patients at increased risk for gastric neoplasms. Methods: IL-1B-511/IL-1RN polymorphisms were compared between 110 atrophic body gastritis patients and 110 age- and gender-matched controls, and patients were followed up (median 4.1 years) according to a cohort study design. Results: Genotype frequencies of IL-1B-511/IL-1RN were similar between patients and controls. Atrophic body gastritis patients harbouring the wild type of IL-1B-511/IL-1RN polymorphisms were not different from those harbouring the proinflammatory pattern as far as regards gender, age, gastric cancer family history and metaplastic atrophy. Sixteen atrophic body gastritis patients developed a gastric neoplastic lesion at follow-up: eight were IL-1B-511-T carriers and eight were IL-1RN-allele-2 carriers. Harbouring the proinflammatory genotypes was not significantly associated with developing gastric neoplastic lesions. Conclusions: In atrophic body gastritis patients, IL-1B-511 and IL-1RN polymorphisms do not appear to be associated either with specific clinical, biochemical or histological features or with the development of gastric neoplastic lesions at long-term follow-up. abstract_id: PUBMED:12588289 Ratio between serum IL-8 and pepsinogen A/C: a marker for atrophic body gastritis. Background And Aims: Elevated serum gastrin and a low pepsinogen A/C ratio are well-recognized markers for atrophic body gastritis (ABG). We have shown that the presence of body atrophy is also associated with elevated serum pro-inflammatory cytokines. This study tested the hypothesis that serum cytokines provide additional information to gastrin and pepsinogens in screening for ABG. Methods: Two hundred and twenty-six consecutive patients were investigated on referral for upper gastrointestinal endoscopy: 150 were patients with gastro-oesophageal reflux disease, receiving acid inhibitory medication either with proton pump inhibitors (n = 113) or with histamine2-receptor antagonists (n = 37), and 76 were nontreated controls, who had normal endoscopic findings. Gastric mucosal biopsies were sampled for histological examination (Sydney classification). Serum samples were analyzed for gastrin, chromogranin A (CgA), and pepsinogens A and C by RIA, and for the interleukins (IL)-1beta, IL-6, and IL-8 by ELISA. Results: Subjects with ABG had significantly higher serum gastrin (P &lt; 0.01) and serum CgA (P &lt; 0.01) levels and significantly lower pepsinogen A/C ratios (P &lt; 0.001) than those without ABG. Additionally, serum IL-1beta, IL-6 and, especially, IL-8 levels were significantly higher in the subjects with than in those without ABG (P &lt; 0.0001, for all cytokines). To optimize the detection of body atrophy we defined the ABG index: the ratio between the simultaneously measured IL-8 and pepsinogen A/C. The area under the ROC curve for the ABG index was significantly greater than that for serum gastrin and for serum pepsinogen A/C alone (0.91 +/- 0.029 vs. 0.72 +/- 0.042, and vs. 0.83 +/- 0.031, P = 0.018 and P = 0.049). Using the ABG index at a cut-off value of 1.8 pg mL-1, 91% of the cases were classified correctly. Conclusions: The ratio between serum IL-8 and pepsinogen A/C accurately predicts the presence of ABG. We therefore propose the ABG index as a noninvasive screening test for ABG in population-based studies. abstract_id: PUBMED:19030199 Relation of atrophic gastritis with Helicobacter pylori-CagA(+) and interleukin-1 gene polymorphisms. Aim: To determine the association of Helicobacter pylori (H pylori) CagA(+) infection and pro-inflammatory polymorphisms of the genes interleukin (IL)-1RN and IL-1B with the risk of gastric atrophy and peptic ulcers in a dyspeptic population in Costa Rica, a country with high incidence and mortality of gastric cancer. Methods: Seven biopsy specimens, a fasting blood sample and a questionnaire concerning nutritional and sociodemographic factors were obtained from 501 consecutive patients who had undergone endoscopy for dyspeptic symptoms. A histopathological diagnosis was made. Pepsinogen concentrations were analyzed by enzyme linked immunosorbent assay (ELISA). Infection with H pylori CagA(+) was determined by serology and polymerase chain reaction (PCR). IL-1B and IL-1RN polymorphisms genotyping was performed by PCR-restriction fragment length polymorphism (PCR-RFLP) and PCR respectively. Results: In this dyspeptic population, 86% were H pylori positive and of these, 67.8% were positive for CagA. Atrophic antral gastritis (AAG) was associated with CagA(+) status [odd ratio (OR) = 4.1; P &lt; 0.000] and fruit consumption (OR = 0.3; P &lt; 0.00). Atrophic body gastritis (ABG) was associated with pepsinogen PGI/PGII &lt; 3.4 (OR = 4.9; P &lt; 0.04) and alcohol consumption (OR = 7.3; P &lt; 0.02). Duodenal ulcer was associated with CagA(+) (OR = 2.9; P &lt; 0.04) and smoking (OR = 2.4; P &lt; 0.04). PGI &lt; 60 microg/L as well as PGI/PGII &lt; 3.4 were associated with CagA(+). Conclusion: In a dyspeptic population in Costa Rica, H pylori CagA(+) is not associated with ABG, but it is a risk factor for AAG. The pro-inflammatory cytokine polymorphisms IL-1B + 3945 and IL-1RN are not associated with the atrophic lesions of this dyspeptic population. abstract_id: PUBMED:12105837 Interleukin 1beta polymorphisms increase risk of hypochlorhydria and atrophic gastritis and reduce risk of duodenal ulcer recurrence in Japan. Background & Aims: Interleukin-1 beta (IL-1beta) polymorphisms are associated with increased risk of gastric cancer in whites. This study aimed to examine effects of these polymorphisms on gastric acid secretion, atrophic gastritis, and risk of peptic ulcer in Japan. Methods: We determined IL-1B-511/-31 and IL-1RN genotypes and measured gastric juice pH, serum pepsinogen (PG) I and II levels, and gastritis and atrophy scores in Helicobacter pylori-positive patients with gastritis only, gastric ulcers, or duodenal ulcers (DUs), and H. pylori-negative controls. Results: In the H. pylori-positive group, subjects with the proinflammatory IL-1B-511 T/T genotype had the highest atrophy and gastritis scores, the highest median gastric juice pH, and the lowest median serum PG I/PG II ratios. Although gastric juice pH significantly increased and serum PG I and PG I/PG II ratios significantly decreased in the IL-1B-511 T/T genotype group with age, no such age-dependent changes were observed in the C/C genotype group. Changes in the C/T genotype group were intermediate. In the H. pylori-negative group, the IL-1 loci had no effect on any of the physiologic or morphologic parameters. Carriage of IL-1RN allele 2 significantly protected against DU disease while the IL-1B-511 T/T genotype significantly protected against DU recurrence in patients older than 60 years. Conclusions: Proinflammatory IL-1beta polymorphisms are associated with hypochlorhydria and atrophic gastritis in Japan. The effects are dependent on H. pylori infection and become more significant with advancing age. This may explain the high incidence of gastric cancer in Japan and also the age-dependent decrease in DU recurrence in infected subjects. abstract_id: PUBMED:16405550 Association of interleukin-1beta gene polymorphisms with precancerous gastric lesions in African Americans and Caucasians. Objective: Interleukin-1beta plays an important role in inflammation and gastric physiology. Polymorphisms of the IL1B gene have been associated with gastric atrophy and increased cancer risk, especially in Helicobacter pylori-infected subjects. The aim of this study was to evaluate the relationship between IL1B and IL1 receptor antagonist gene polymorphisms and the risk of multifocal atrophic gastritis in African Americans and Caucasians. Methods: Genomic DNA was extracted from gastric biopsies of 269 adult outpatients (172 African Americans and 97 Caucasians) undergoing diagnostic upper gastrointestinal endoscopy. Histological diagnosis was evaluated according to the updated Sydney System and H. pylori status was assessed by Steiner silver stain. Polymorphisms of the IL1B gene (-511, -31, and +3954) and the IL1 receptor antagonist were investigated by PCR-RFLP. Logistic regression models were used to identify variables associated with multifocal atrophic gastritis in terms of odds ratios and 95% confidence intervals. Results: Considering subjects with normal histology and nonatrophic gastritis as controls, a significant association was found between IL1B+3954T carrier and multiatrophic gastritis (OR 2.23, 95% CI 1.28, 3.88). Analyses stratified by ethnic group demonstrated similar associations in both African Americans (OR 2.23, 95% CI 1.14, 4.37) and Caucasians (OR 2.04, 95% CI 0.74, 5.65). A positive but not significant association was found between the allele 2 of the IL1RN and the presence of multifocal atrophic gastritis. The remaining proinflammatory polymorphisms were not associated with this precancerous lesion. Conclusions: Our results suggest that the presence of IL1B+3954T allele is a risk marker for multifocal atrophic gastritis in the population studied. abstract_id: PUBMED:15038084 Helicobacter pylori-related cytokines influence gastric acid secretion and gastric mucosal inflammation in gastroduodenal ulcers Helicobacter pylori(H. pylori) is an important pathogenic factor for gastroduodenal ulcers and gastric cancer. The level of gastric acid output may influence the outcome of those diseases. With low acid output, H. pylori can spread to the corpus of the stomach, resulting in progression to atrophic gastritis. It may cause an increased risk of gastric cancer and ulcer. In contrast, with high output, H. pylori is confined in the gastric antrum, which develops antrum-predominant gastritis. This may contribute to an increased risk of duodenal ulcer. It is well known that inflammatory cytokines including interleukin (IL)-1 beta, IL-8 and tumor necrosis factor alpha modulate gastric acid secretion. Therefore, the host immune response by the cytokines may cause these disparate pathways in gastric acid secretion. abstract_id: PUBMED:15247172 Cytokine gene polymorphisms influence mucosal cytokine expression, gastric inflammation, and host specific colonisation during Helicobacter pylori infection. Background And Aims: Recent studies linked cytokine gene polymorphisms to H pylori related gastric cancer development. The current study evaluated the role of cytokine gene polymorphisms for mucosal cytokine expression, the gastric inflammatory response, and bacterial colonisation during H pylori infection. Patients And Methods: In 207 H pylori infected patients with chronic gastritis, polymorphisms at different loci of the interleukin (IL)-10, IL-1B, IL-1 receptor antagonist (IL-1RN), tumour necrosis factor (TNF)-A, and interferon (IFN)-G genes were genotyped by polymerase chain reaction (PCR), restriction fragment length polymorphism (RFLP) analysis, and allelic discriminating TaqMan PCR. Mucosal cytokine mRNA copy numbers were determined by real time quantitative PCR. Presence of bacterial virulence factors was investigated by cagA, vacAs1/2, and babA2 PCR. Biopsies were assessed with regard to the degrees of granulocytic/lymphocytic infiltration and the presence of intestinal metaplasia (IM) and atrophic gastritis (AG). Results: Proinflammatory IL-1 polymorphisms (IL-1RN*2(+)/IL-1B-511T/-31C(+)) were associated with increased IL-1beta expression, more severe degrees of inflammation, and an increased prevalence of IM and AG. Carriers of the IL-10-1082G/-819C/-592C alleles (GCC haplotype) had higher mucosal IL-10 mRNA levels than ATA haplotype carriers and were associated with colonisation by more virulent cagA(+), vacAs1(+), and babA2(+) H pylori strains. The TNF-A-307(G/A) and IFN-G+874(A/T) polymorphisms did not influence mucosal cytokine expression or the inflammatory response to H pylori. Conclusions: Cytokine gene polymorphisms influence mucosal cytokine expression, gastric inflammation, and the long term development of precancerous lesions in H pylori infection. Host polymorphisms are associated with certain bacterial strain types, suggesting host specific colonisation or adaptation. These findings contribute to the understanding of the complex interplay between host and bacterial factors involved in the development of gastric pathology. abstract_id: PUBMED:12441323 Helicobacter pylori and interleukin 1 genotyping: an opportunity to identify high-risk individuals for gastric carcinoma. Background: Both Helicobacter pylori genotype and host genetic polymorphisms play a role in determining the clinical consequences of H. pylori infection. We investigated whether there are any combinations of bacterial and host genotypes that are particularly associated with the occurrence of gastric carcinoma. Methods: Genotypic variations in virulence-associated genes of H. pylori vacA (s and m regions) and cagA were determined in 221 subjects with chronic gastritis and 222 patients with gastric carcinoma by polymerase chain reaction (PCR)-line probe assay. Polymorphisms in the human interleukin 1 beta (IL-1B) gene (IL-1B-511*C or IL-1B-511*T) and in the IL-1 receptor antagonist gene (IL-1RN intron 2 variable number of tandem repeats) were evaluated by PCR and single-strand conformation polymorphism analysis. All statistical tests were two-sided. Results: Infection with vacAs1-, vacAm1-, and cagA-positive strains of H. pylori was associated with an increased risk for gastric carcinoma, with odds ratios (ORs) of 17 (95% confidence interval [CI] = 7.8 to 38), 6.7 (95% CI = 3.6 to 12), and 15 (95% CI = 7.4 to 29), respectively. IL-1B-511*T carriers (IL-1B-511*T/*T or IL-1B-511*T/*C) homozygous for the short allele of IL-1RN (IL-1RN*2/*2) had an increased gastric carcinoma risk (OR = 3.3, 95% CI = 1.3 to 8.2). For each combination of bacterial/host genotype, the odds of having gastric carcinoma were greatest in those with both bacterial and host high-risk genotypes: vacAs1/IL-1B-511*T carrier (OR = 87, 95% CI = 11 to 679), vacAm1/IL-1B-511*T carrier (OR = 7.4, 95% CI = 3.2 to 17), cagA-positive/IL-1B-511*T carrier (OR = 25, 95% CI = 8.2 to 77), vacAs1/IL-1RN*2/*2 (OR = 32, 95% CI = 7.8 to 134), vacAm1/IL-1RN*2/*2 (OR = 8.8, 95% CI = 2.2 to 35), and cagA-positive/IL-1RN*2/*2 (OR = 23, 95% CI = 7.0 to 72). Conclusion: Combined bacterial/host genotyping may provide an important tool in defining disease risk and targeting H. pylori eradication to high-risk individuals. abstract_id: PUBMED:16146197 Relationship between gastric mucosal IL-8 levels and histological gastritis in patients with Helicobacter pylori infection. To determine the role of host immune responses in H. pylori infection, we examined the relationship between gastric mucosal IL-8 levels and histological gastritis in patients with H. pylori infection. Biopsy tissue obtained from 99 patients were homogenizedand mucosal IL-8 levels measured by ELISA. The gastric mucosal IL-8 levels in both the antrum and corpus were higher in patients with H. pylori than in H. pyloi negativepatients. IL-8 levels in the corpus but not the antrum correlated with the severity of the atrophy. The IL-1B polymorphism had no influence on the degree of IL-8 production. These findings indicate that IL-8 production is independent of IL-1B polymorphisms and IL-8 may play an important role in the development of atrophic gastritis. abstract_id: PUBMED:11866488 Atrophy-metaplasia-dysplasia-carcinoma sequence in the stomach: a reality or merely an hypothesis? The results of recent investigations have suggested that the old hypothesis of an atrophy-metaplasia-dysplasia-carcinoma sequence in the stomach needs to be qualified. The most common cause of intestinal metaplasia is Helicobacter pylori gastritis. The consequence of this intestinal metaplasia is focal atrophy. Helicobacter pylori infection may also trigger an autoimmune gastritis of the corpus mucosa, with atrophy and intestinal metaplasia. Most intestinal metaplasias are only 'paracancerous' but not 'precancerous' lesions. Diffuse gastric carcinomas, such as the signet ring cell carcinoma, arise independently of intestinal metaplasia. Histogenetically, numerous carcinomas of the stomach are primarily of the gastric type, and may secondarily change into the intestinal type.High-grade intra-epithelial neoplasias (dysplasias) detected during the biopsy-based diagnostic work-up appear to be a marker for carcinoma and must, therefore, be removed endoscopically. The detection of intestinal metaplasia in routinely obtained biopsy material is subject to sampling error and is, therefore, not a suitable marker for an increased risk of a gastric carcinoma developing. As an alternative, the concept of gastritis of the carcinoma phenotype, which is more frequently found in early gastric carcinomas and in the relatives of gastric carcinoma patients, has been developed. In this concept, the diffuse parameters of grade and activity of the gastritis in the antrum and corpus, which are independent of sampling error, are subjected to a comparative analysis. A risk gastritis of the carcinoma phenotype is diagnosed when the grade and activity of the gastritis in the corpus are at least equally as pronounced as in the antrum. Currently, this concept is being tested in a prospective ongoing study. Future studies must show whether, and if so which, immunohistochemical or molecular-genetically detectable changes can be applied as risk markers in the diagnostic work-up. Helicobacter pylori eradication probably does not lead to complete regression of the intestinal metaplasia and ensuing focal atrophy. However, eradication of H. pylori does lead to the normalization of changes that can lead to mutations of the stem cells of the gastric mucosa (free radicals, nitric oxide, cell proliferation and vitamin C secretion). Answer: Interleukin-1 genotyping does not appear to be significantly useful for the clinical management of patients with atrophic body gastritis in terms of predicting the development of gastric neoplastic lesions. A study investigating the genotyping of IL-1B-511/IL-1RN polymorphisms in atrophic body gastritis patients found that these polymorphisms were not associated with specific clinical, biochemical, or histological features, nor with the development of gastric neoplastic lesions at long-term follow-up (PUBMED:18047563). Therefore, based on this study, IL-1 genotyping would not provide additional information that could be used to characterize atrophic body gastritis patients at increased risk for gastric neoplasms, which limits its utility in clinical management for this purpose.
Instruction: Does cyclosporine achieve a real advantage for treatment of idiopathic nephrotic syndrome in children? Abstracts: abstract_id: PUBMED:17447154 Does cyclosporine achieve a real advantage for treatment of idiopathic nephrotic syndrome in children? A long-term efficacy and safety study. Background: Cyclosporine (CsA) was found to be efficient in decreasing proteinuria in both steroid-dependent and steroid-resistant nephrotic patients. We aimed to explore the potential long-term benefits and hazards of CsA and their predictors among a large group of nephrotic patients. Methods: In this retrospective analysis, we included 197 pediatric patients with idiopathic nephrotic syndrome (INS) of whom 103 were steroid dependent and 94 steroid resistant. Results: CsA induced complete remission in 132 (67%) and partial response in 13 (6.6%). Cyclosporine was received for a period of 22.16 +/- 12.21 months. Univariate analysis showed that the response to CsA was significantly better in steroid-dependent children, in minimal change disease (MCD), diffuse mesangial proliferative glomerulonephritis (DMP) and focal segmental glomerulosclerosis (FSGS) than in other pathological lesions and in those who had lower quantities of pretreatment proteinuria. Only the prior response to steroids and concomitant use of ketoconazole with CsA were valid predictors for better response to CsA with multivariate analysis. Discontinuation of the drug in 40 patients resulted in relapse in 26 patients while the remaining 14 patients maintained remission. Renal dysfunction developed in 18 patients of whom 12 recovered completely on drug discontinuation. Thirty-seven patients developed hypertension. Multivariate analysis showed that all side-effects were significantly more prevalent in CsA-resistant patients. Conclusion: CsA is effective and well tolerated in the long-term treatment of INS in children, however two thirds of cases showed relapse after CsA discontinuation. abstract_id: PUBMED:29775447 Mycophenolate mofetil (MMF) as the first choice immunosuppressive drug in treatment of steroid-dependent nephrotic syndrome in children Most children diagnosed with nephrotic syndrome show favourable response to corticosteroid therapy, nonetheless 30% of patients have frequent relapses or a steroid-dependent course of disease. Cyclophosphamide, cyclosporin A or MMF are being used in treatment of steroid-dependent nephrotic syndrome in search of a drug with highest long-term effectiveness and least amount of side effects. Aim: The aim of study was to assess of the efficacy of MMF as the first choice immunosuppressive drug in children with nephrotic syndrome after determining a steroid-dependency. Materials And Methods: 23 children with steroid-dependent nephrotic syndrome were enrolled in the study. Mean age at disease onset was 3.8 years. Mean disease duration time before introducing MMF was 21.3 months. Mean treatment time with MMF was 23.6 months. Patients previously treated with immunosuppressive drug, except for prednisone were excluded from the study. Results: Per year of treatment with MMF 56,5% of patients had not more than 1 relapse of the disease, 5 patients had more than 1, but less than 2 relapses. After the mean time of 23.6 months MMF treatment was discontinued in 15 patients (62,5%). 11 patients (48%) from that group significantly benefited from treatment in the form of no further relapses, defer of steroid-dependence or the possibility to reduce the dose of corticosteroids to minimal. Conclusions: MMF has advantage over cyclophosphamide and calcineurin inhibitors in reference to side effect profile, especially glomerular filtration markers, hypertension and frequent drug dependency. Treatment with MMF is effective in maintaining long-term remission and enables the reduction of cumulative steroid dose. Regarding nearly 50% of patients with benefits after MMF treatment and good treatment tolerance, it seems justified to introduce MMF as the first choice immunosuppressive drug in patients with steroiddependent nephrotic syndrome. abstract_id: PUBMED:36518356 Rituximab, Mycophenolic Acid, and Calcineurin Inhibitors Achieve Long-Term Remission in Pediatric Focal Segmental Glomerulosclerosis with Steroid-Resistant and Frequently Relapsing Nephrotic Syndrome: A Report of Two Cases. Studies investigating the effect of rituximab in children with nephrotic syndrome (NS) due to focal segmental glomerulosclerosis (FSGS) have reported conflicting results, with some concluding that patients may require additional immunosuppressive therapy to achieve and/or maintain long-term remission. We report successful treatment of pediatric FSGS with rituximab infusions, followed by maintenance immunosuppression with mycophenolic acid (MPA) and a calcineurin inhibitor (CNI) in 1 patient with refractory steroid-resistant NS (SRNS), and one with frequently relapsing NS (FRNS). Case 1 is a patient with refractory SRNS due to FSGS. MPA and tacrolimus induced complete remission within 6 months following rituximab treatment. Remission was maintained for over 2 years, and the patient's kidney function and body height also returned to normal ranges within this time. Case 2 is a patient with FRNS due to FSGS, who was treated with rituximab followed by MPA and cyclosporine, which successfully prevented relapses for 18 months, that is, at the end point of the observation. Our case report demonstrates that rituximab and a combination of CNIs and MPA can be effective in achieving complete remission in pediatric refractory SRNS and sustaining remission in pediatric FSGS with FRNS and SRNS for several years. This treatment regimen has the advantage of eliminating the need for long-term high-dose steroid treatments, allowing 1 patient to achieve normal growth and recover from other adverse steroid effects. abstract_id: PUBMED:24602468 Treatment of FSGS in Children. Focal segmental glomerulosclerosis (FSGS) is a pathologic condition that represents many disease entities. The goals of therapy are to cure the disease. When this is not possible, the secondary goals are to reduce proteinuria to avoid the complications of nephrotic syndrome and to delay progression of kidney disease. Proteinuria remission is one of the most important independent predictors of kidney survival. Children with FSGS who do not achieve partial or complete remission have a 50% risk of progression to ESRD within 5 years whereas those who enter complete remission have a 5-year kidney survival rate of 90%. Treatment of idiopathic FSGS commonly involves immune-based and nonimmunologic therapy options. This manuscript will review the current state of FSGS therapy for children. abstract_id: PUBMED:29546599 Idiopathic membranous nephropathy in children treated with rituximab: report of two cases. Background: Idiopathic membranous nephropathy is an uncommon cause of nephrotic syndrome in children and can present treatment challenges. The current treatment options of steroids and cyclophosphamide, cyclosporine, or mycophenolate require prolonged treatment durations and the associated side effects may result in nonadherence in children, especially in adolescents. Case-diagnosis: We report two adolescent patients with idiopathic membranous nephropathy with nephrotic range proteinuria and elevated anti-phospholipase A2 receptor levels who did not achieve remission with steroids and were later treated with rituximab. Both patients received two doses of rituximab and responded with remission. In addition, anti-PLA2R antibody levels normalized and/or significantly improved. Conclusions: Rituximab seems to be a safe and effective treatment option in children with idiopathic membranous nephropathy due to anti-PLA2R. Further studies are needed to evaluate this effectiveness. abstract_id: PUBMED:22087197 Treatment of steroid-resistant pediatric nephrotic syndrome. Children who suffer from steroid-resistant nephrotic syndrome (SRNS) require aggressive treatment to achieve remission. When intravenous high-dose methylprednisolone fails, calcineurin inhibitors, such as cyclosporine and tacrolimus, are used as the first line of treatment. A significant number of patients with SRNS progress to end-stage renal disease if remission is not achieved. For these children, renal replacement therapy can also be problematic; peritoneal dialysis may be accompanied by significant protein loss through the peritoneal membrane, and kidney allograft transplantation may be complicated by recurrence of SRNS. Plasmapheresis and rituximab were initially used for treatment of recurrent SRNS after transplantation; these are now under consideration as rescue therapies for refractory SRNS. Although the prognosis of SRNS is complicated and unfavorable, intensive treatment in the early stages of the disease may achieve remission in more than half of the patients. Therefore, timely referral of pediatric SRNS patients to pediatric nephrology specialists for histological and genetic diagnosis and treatment is highly recommended. abstract_id: PUBMED:29421622 Treatment of idiopathic nephrotic syndrome in children The annual incidence of the nephrotic syndrome has been estimated to be 1-3 per 100,000 children&lt;16 year of age. In children, the most common cause of nephrotic syndrome is idiopathic nephrotic syndrome (INS). INS is defined by the presence of proteinuria and hypoalbuminemia and by definition is a primary disease. Renal biopsy study shows non-specific histological abnormalities of the kidney including minimal changes, focal and segmental glomerular sclerosis, and diffuse mesangial proliferation. Steroid therapy is applied in all cases of INS. Renal biopsy is usually not indicated before starting corticosteroid therapy. The majority of patients (80-90%) are steroid-responsive. Children with INS who do not achieve a complete remission with corticosteroid therapy commonly present focal and segmental glomerular sclerosis and require treatment with calcineurin inhibitors (cyclosporine or tacrolimus), mycophenolate mofetil or rituximab, plus renin-angiotensin system blockade. In this article we review the recent accepted recommendations for the treatment of children with INS. abstract_id: PUBMED:32469166 Annexin V in children with idiopathic nephrotic syndrome treated with cyclosporine A. Background: Treatment with cyclosporine A (CsA), a calcineurin inhibitor, is effective in children with difficult idiopathic nephrotic syndrome (INS). Prolonged CsA treatment can result in several adverse effects, the most significant being nephrotoxicity (CsAN). The plasma and urine levels of the proteins annexin V (AnV) and uromodulin (UM) were investigated in order to assess their usefulness as indicators of early-stage CsAN. Uromodulin is considered a distal tubular damage marker. Annnexin V is present in the distal tubules. Objectives: To measure AnV in children with INS receiving CsA treatment and to assess the usefulness of this biomarker for monitoring CsAN and as an indicator of changes in the distal tubules of the nephron. Material And Methods: The prospective study included 30 patients with INS and 22 controls. Plasma and urinary AnV levels were measured 3 times: before CsA treatment, and after 6 and 12 months of therapy. The AnV levels were compared to those of UM. Results: The urinary AnV and UM levels were significantly higher in the INS patients before CsA therapy in comparison to the reference group. A progressive increase of urinary AnV was observed after 6 and 12 months of therapy. Urinary UM only increased after 6 months. No significant correlations were found between plasma and urinary concentrations of the proteins studied. Conclusions: The increased urinary excretion of AnV in children with INS receiving CsA treatment may suggest its usefulness as an early marker of subclinical CsAN. Annexin V seems to be a more sensitive indicator of tubular damage in the course of CsA therapy than UM, though large, multicenter studies are needed. abstract_id: PUBMED:22976259 Mycophenolate mofetil for treatment of idiopathic nephrotic syndrome in children. Introduction: Management of frequently relapsing steroid-responsive or steroid-resistant idiopathic nephrotic syndrome (NS) in children has been a clinical challenge for pediatric nephrologists. In addition, adverse effects of long-term corticosteroids and cyclosporine administration emerge seeking a safe and effective treatment. The purpose of this study was to evaluate the safety and efficacy of mycophenolate mofetil (MMF) in these patients. Materials And Methods: This study reviewed the outcomes of children with frequently relapsing or steroid-resistant idiopathic NS who were treated with MMF. Results: A total of 36 patients (23 boys and 13 girls) were included. Their mean age at the time of diagnosis of NS was 61.94 ± 43.9 months. Of the children, 91.6% of those who had frequent relapses and 8.3% of those with steroid-resistant NS responded to MMF significantly (P &lt; .001), with no significant association between age and gender with response to MMF. The treatment was well tolerated with no significant complications. Conclusions: In children with frequently relapsing NS, MMF was a safe and useful drug for maintaining remission, while it was of low value in children with steroid-resistant NS. abstract_id: PUBMED:20427873 Treatment of steroid resistant nephrotic syndrome in children. Achieving remission in children with steroid-resistant nephrotic syndrome (SRNS) could be difficult. Many immunosuppressive drugs are used with variable success rates. We have studied the response of children with SRNS who presented to our pediatric's renal unit between 2002 and 2007 to various modalities of therapy. We included patients with no response to prednisolone (60 mg/M2/day) after four weeks of therapy; all the patients had renal biopsy and followup duration for at least one year. We excluded patients with congenital nephrotic syndrome, lupus, or sickle cell disease. There were 31 (23 girls and 8 boys with F:M = 2.9:1; the mean age at presentation was 4.2 +/- 3.2) children who fulfilled the inclusion criteria. The mean duration of follow up was 3.1 +/- 1.6 years. Twenty children (65%) achieved partial (6 children) or complete (14 children) remission. There were 16 children treated with cyclophosphamide either oral or intravenous, and only 4 of them (25%) achieved remission. Seven children received oral chlorambucil, and only2 of them (28.5%) achieved remission; none of the children experienced side effects. Fifteen children received cyclosporine, and only eight of them (53%) achieved remission. Six children developed gum hypertrophy and one had renal impairment, which was reversible after discontinuing the drug. Mycophonelate mofetil (MMF) was used as the last option in 5 children, and 2 of them achieved complete remission. One child developed a systemic cytomegalovirus (CMV) infection which indicated discontinuing the drug. Fourteen (45%) children needed more than one immunosuppressive therapy. Three children progressed to end stage renal failure and required dialysis. We conclude that SRNS in children is a difficult disease with significant morbidity. However, remission is achievable with cyclosporine and other immunosuppressive agents. Treatment should be individualized according to the underlying histopathology, and clinical and social conditions of the children. Answer: Cyclosporine (CsA) has been found to be effective in the treatment of idiopathic nephrotic syndrome (INS) in children, particularly for those who are steroid-dependent or steroid-resistant. A retrospective analysis including 197 pediatric patients with INS showed that CsA induced complete remission in 67% and partial response in 6.6% of the cases. The response to CsA was significantly better in steroid-dependent children and in those with certain pathological lesions such as minimal change disease (MCD), diffuse mesangial proliferative glomerulonephritis (DMP), and focal segmental glomerulosclerosis (FSGS). However, two-thirds of cases showed relapse after CsA discontinuation, and side effects such as renal dysfunction and hypertension were more prevalent in CsA-resistant patients (PUBMED:17447154). Comparatively, mycophenolate mofetil (MMF) has been suggested as a first choice immunosuppressive drug in children with steroid-dependent nephrotic syndrome due to its favorable side effect profile and effectiveness in maintaining long-term remission. In a study with 23 children, nearly 50% benefited from MMF treatment with no further relapses or reduced corticosteroid doses (PUBMED:29775447). Additionally, rituximab combined with mycophenolic acid (MPA) and calcineurin inhibitors (CNIs) has been reported to achieve long-term remission in pediatric FSGS with steroid-resistant and frequently relapsing nephrotic syndrome. This combination has the advantage of eliminating the need for long-term high-dose steroid treatments (PUBMED:36518356). In conclusion, while CsA is effective and well-tolerated in the long-term treatment of INS in children, it is associated with a significant relapse rate after discontinuation and potential side effects. Alternative treatments such as MMF and rituximab in combination with MPA and CNIs may offer advantages in terms of side effect profiles and maintaining remission, suggesting that treatment should be individualized based on the patient's response and tolerance to the medication (PUBMED:17447154; PUBMED:29775447; PUBMED:36518356).
Instruction: Sexually transmitted infections among immigrants in Denmark. Is it a problem? Abstracts: abstract_id: PUBMED:11107979 Sexually transmitted infections among immigrants in Denmark. Is it a problem? Introduction: The aim of the study was to assess the incidence of sexually transmitted infections (STIs) in immigrants in Denmark. Material And Methods: Analysis of surveillance data from 1.8.1990-31.5.2000 for HIV infection and from 1.1.1994-31.5.2000 for gonorrhoea and syphilis. National data on the population mainly per 1.1.1998 were used. Results: Overall, 28% of the notified, newly diagnosed HIV-infected persons were foreigners, a proportion which was 18% for both cases with gonorrhoea and syphilis; 488 (64%) were from Africa, whereas 110 (14%) were from Europe. The estimated annual incidence of first-time diagnosed HIV infection was five times higher in immigrants than in native Danes (22.3 vs 4.1 per 10(5)), namely 161.8 per 10(5) in Africans and 5.6 per 10(5) in Europeans. Similar differences were found for gonorrhoea (6.87 vs 2.14 per 10(5)) and syphilis (1.66 vs 0.16 per 10(5)). In addition, the annual incidence of newly diagnosed HIV was three times higher in male and 16 times higher in female immigrants than in Danes. The incidence of both gonorrhoea and syphilis in Danes was five times higher in men than in women, which was also found for gonorrhoea in immigrants. In immigrants, no real difference in the incidence of HIV and syphilis was found between the two genders. Conclusion: The incidence of diagnosed HIV infection, gonorrhoea, and syphilis in Denmark is generally low, but fairly high in certain groups of immigrants. Information, early diagnosis and treatment are central elements in the prevention of STIs and should be adapted for new sub-populations. abstract_id: PUBMED:33624575 Infectious diseases among Ethiopian immigrants in Israel: a descriptive literature review. The mass immigration of Ethiopian Jews to Israel began in the 1980s. For most of these immigrants this was a time of sharp transition from a village life with very basic living conditions, in most cases without a regular supply of water, electricity, or healthcare services, to a modern Western urban society. The aim of this review was to search the medical literature on the characteristics of infectious diseases that are typical in Ethiopian immigrants (EI), using relevant keywords. There has been success in coping with diseases among EI, that are rare although recognized in Israel, in terms of screening and early identification. TB was common in Ethiopia over all the years of immigration to Israel. In contrast, HIV appeared in EI from 1999 when they had long stayovers in transition camps in Gondar and Addis Ababa where there was a high risk of infection with sexually transmitted diseases. There were often delays in diagnosing 'exotic' diseases that are endemic in Africa, but not well known in Israel, such as Onchocerciasis, Schistosomiasis, Strongyloidiasis, and Bartonella endocarditis, which cause severe morbidity and mortality among EI. We describe the effect on general morbidity in Israel, and how the healthcare services coped with these diseases, including obstacles, and failures and successes. In light of the ongoing immigration of African refugees to the United States and Europe, the Israeli experience can be of value to healthcare policy makers in developing strategies for the effective management by medical staff treating these immigrants. abstract_id: PUBMED:27410494 HIV Testing Among Black and Hispanic Immigrants in the United States. Late presentation is common among black and Hispanic US immigrants living with HIV. Little is known about HIV testing in this population because data are aggregated into racial and ethnic categories without regard to nativity. This study was undertaken to determine HIV testing patterns in these populations. We used data from the National Health Interview Survey (2007-2010), a nationally representative source of HIV testing data disaggregated by nativity. The sample consisted of 10,397 immigrants (83.9% Hispanic white, 13.1% non-Hispanic black, and 3.0% Hispanic black). The majority of participants were from the Caribbean, Central America, and Mexico (81.5%). Hispanic white immigrants were least likely to have undergone testing compared with non-Hispanic and Hispanic black immigrants (46.7% vs. 70.5% and 65.8%). Among immigrants with known risk factors or prior STDs, 59.2% and 74.8% reported previous HIV testing. Immigrants who had not recently talked to a healthcare provider were less likely to report testing: Hispanic white (AOR 0.65, 95% CI 0.58-0.72), non-Hispanic black (AOR 0.64, 95% CI 0.48-0.85), and Hispanic black (AOR 0.26, 95% CI 0.14-0.48). Only 17.2% of all immigrants intended to undergo HIV testing in the 12 months following participation in the survey. Among all three racial and ethnic groups, immigrants who reported a history of prior STDs were more likely to intend to test for HIV in the future. Many black and Hispanic immigrants to the United States have not undergone HIV testing. Interventions to increase access to HIV testing and awareness of transmission risk should be developed. abstract_id: PUBMED:30244601 Exploring barriers to consistent condom use among sub-Saharan African young immigrants in Switzerland. No study to date has focused on barriers to condom use specifically among young immigrants to Europe from sub-Saharan Africa. Based on a qualitative study in sociology, this paper explores generational differences in barriers to condom use between first-generation immigrants (born in Africa and arrived in Switzerland after age 10) and second-generation immigrants (born in Switzerland to two native parents or arrived in Switzerland before age 10). Results are based on in-depth, semistructured individual interviews conducted with 47 young women and men aged 18 to 25 to understand how individual, relational, and cultural dimensions influence sexual socialization and practices. Six main barriers to consistent condom use were identified: reduced pleasure perception, commitment and trust, family-transmitted sexual norms and parental control, lack of accurate knowledge on HIV transmission, lack of awareness about HIV in Switzerland, and gender inequalities. The three first barriers concerned both generations of immigrants, whereas the three last revealed generational differences. These findings can help sexual health providers identify social causes for young sub-Saharan immigrants not using condoms. The findings also highlight the necessity of offering accurate, accessible, and adapted information to all young immigrants, as well as the particular importance of addressing families' lack of discussions about sex, understanding the sexual norms transmitted by parents, and taking into consideration cultural differences among young people born in immigration countries. abstract_id: PUBMED:36626224 Vulnerability to HIV Infection Among International Immigrants in China: Cross-sectional Web-Based Survey. Background: The rising number of migrants worldwide, including in China given its recent rapid economic development, poses a challenge for the public health system to prevent infectious diseases, including sexually transmitted infections (STIs) caused by risky sexual behaviors. Objective: The aim of this study was to explore the risky sexual behaviors of international immigrants living in China to provide evidence for establishment of a localized public health service system. Methods: Risky sexual behaviors were divided into multiple sexual partners and unprotected sexual behaviors. Basic characteristics, sexual knowledge, and behaviors of international immigrants were summarized with descriptive statistics. Multivariate logistic regression analyses were used to identify factors associated with risky sexual behaviors, and the associations of demographic characteristics and risk behaviors with HIV testing and intention to test for HIV. Results: In total, 1433 international immigrants were included in the study, 61.76% (n=885) of whom had never heard of STIs, and the mean HIV knowledge score was 5.42 (SD 2.138). Overall, 8.23% (118/1433) of the participants had been diagnosed with an STI. Among the 1433 international immigrants, 292 indicated that they never use a condom for homosexual sex, followed by sex with a stable partner (n=252), commercial sex (n=236), group sex (n=175), and casual sex (n=137). In addition, 119 of the international immigrants had more than three sex partners. Individuals aged 31-40 years were more likely to have multiple sexual partners (adjusted odds ratio [AOR] 2.364, 95% CI 1.149-4.862). Married participants were more likely to have unprotected sexual behaviors (AOR 3.096, 95% CI -1.705 to 5.620), whereas Asians were less likely to have multiple sexual partners (AOR 0.446, 95% CI 0.328-0.607) and unprotected sexual behaviors (AOR 0.328, 95% CI 0.219-0.492). Women were more likely to have taken an HIV test than men (AOR 1.413, 95% CI 1.085-1.841). Those who were married (AOR 0.577, 95% CI 0.372-0.894), with an annual disposable income &gt;150,000 yuan (~US $22,000; AOR 0.661, 95% CI 0.439-0.995), considered it impossible to become infected with HIV (AOR 0.564, 95% CI 0.327-0.972), and of Asian ethnicity (AOR 0.330, 95% CI 0.261-0.417) were less likely to have an HIV test. People who had multiple sexual partners were more likely to have taken an HIV test (AOR 2.041, 95% CI 1.442-2.890) and had greater intention to test for HIV (AOR 1.651, 95% CI 1.208-2.258). Conclusions: International immigrants in China exhibit risky sexual behaviors, especially those aged over 30 years. In addition, the level of HIV-related knowledge is generally low. Therefore, health interventions such as targeted, tailored programming including education and testing are urgently needed to prevent new HIV infections and transmission among international immigrants and the local population. abstract_id: PUBMED:19272722 Development of actions for the prevention of HIV, hepatitis and sexually transmitted infections among immigrants consulting in the doctors of the world "Missions France" The mission France of Doctors of the World has for objective to facilitate the access to care and to rights in the common law system for vulnerable populations and to bring testimonies out. The objective of the project is to ensure daily actions of prevention: to bring people to screen for HIV and hepatitis as well as obtaining full access to treatment for populations consulting in the Reception centers for Care and Orientation (RCCO). The screening is proposed systematically to all new patients (90% of them are immigrants) after a medical consultation or a special prevention consultation. The prevalence of HIV, hepatitis B and C was respectively 15, 10.5, and 7 times higher than the national average among patients screened in 2007, The centers of Doctors of the World are privileged places to inform, prevent, offer screening, and bring healthcare to this population particularly exposed to risks. abstract_id: PUBMED:22273804 Sexual health knowledge of male and female Latino immigrants. To explore sexual health knowledge among Latino immigrants in a Southern U.S. city, we conducted 20 qualitative interviews, (10 Women and 10 Men). We explored knowledge and factors associated with sexual health among male and female Latino immigrants in a Southern U.S. city experiencing a major growth of Latino immigrants in the past 10 years. Both genders demonstrated limited knowledge of Human Papillomavirus (HPV) and risks of sexually transmitted infection (STI) acquisition. Neither gender perceived that they could have an asymptomatic STI, including HPV. Gender differences exist in sexual behaviors and perceptions of STI risk. Females indicated that female Latinas tend to have older sexual initiation age, fewer lifetime sexual partners and more pro-active sexual health than did Latinos when referring to Latino males. Consequently, male and female Latino immigrants could benefit from culturally relevant programs to alleviate STI disparities. abstract_id: PUBMED:13084060 The problem of venereal diseases among immigrants. N/A abstract_id: PUBMED:20157851 HIV/STI risk among male Mexican immigrants in Dallas, Texas: findings from a pilot study. Rates of HIV and STIs are higher among Latinos than the general U.S. population. A number of factors place Latino immigrants at particularly high risk. 128 male Mexican immigrants in Dallas, Texas completed personal interviews. We measured the prevalence of HIV/STI risk factors and identified sociodemographic and behavioral characteristics associated with higher risk. 9% of the total sample had 3 or more sexual partners in the past year. 5% had sex with a commercial sex worker (CSW). 11% had sex with another man. 11% had a previous STI diagnosis. Risk behaviors and STI history were more prevalent among men who had used illegal drugs or frequently consumed alcohol (18% of the sample) than among others. The overall prevalence of HIV/STI risk factors in this population was moderate. However, men who drank alcohol frequently and used illegal drugs were more likely than others to report engaging in behaviors that put them at risk for acquiring HIV/STI. abstract_id: PUBMED:19936924 Beliefs about the pap smear among Mexican immigrants. Few studies have examined Latinos' beliefs about the Pap smear or what uses they attribute to the procedure. We conducted qualitative interviews with 28 Mexican immigrant women and 23 Mexican immigrant men recruited through snowball sampling. We found that individuals learned about the Pap smear from a wide variety of sources and often understood the exam to be a screening test for sexually transmitted infections in general. They also related the need for Pap smears and the development of cervical cancer to high risk sexual behaviors. Finally, participants considered men to have a significant role as vectors for disease and as barriers to screening. Our results suggest that interventions to improve cervical cancer prevention among Mexican immigrants may be most effective if they include both men and women and if they recognize and address concerns about STI spread and prevention. Furthermore, interventions must recognize that even when women know how to prevent disease, they may feel disempowered with regard to making behavioral changes that will decrease their risk for STIs or cancer. Answer: The incidence of sexually transmitted infections (STIs) among immigrants in Denmark is a concern. Surveillance data from 1990 to 2000 for HIV infection and from 1994 to 2000 for gonorrhea and syphilis indicate that a significant proportion of newly diagnosed cases of these STIs were among foreigners. Specifically, 28% of newly diagnosed HIV-infected persons, 18% of gonorrhea cases, and 18% of syphilis cases were immigrants. The majority of these were from Africa (64%), with a smaller proportion from Europe (14%). The estimated annual incidence of first-time diagnosed HIV infection was notably higher in immigrants than in native Danes, with a particularly high rate among Africans (161.8 per 10^5) compared to Europeans (5.6 per 10^5). Similar disparities were observed for gonorrhea and syphilis. The study concluded that while the overall incidence of diagnosed HIV infection, gonorrhea, and syphilis in Denmark is low, it is relatively high in certain groups of immigrants. The authors suggest that information, early diagnosis, and treatment are central to the prevention of STIs and should be adapted for new sub-populations (PUBMED:11107979). This data indicates that STIs among immigrants in Denmark represent a public health issue that requires targeted interventions and adapted healthcare services to address the higher incidence rates in this population.
Instruction: Should prostate cancer be considered as a differential diagnosis in patients with osteolytic bone lesions? Abstracts: abstract_id: PUBMED:26744870 Should prostate cancer be considered as a differential diagnosis in patients with osteolytic bone lesions? Objective: Prostate cancer is the most frequently diagnosed cancer in men, as well as the second leading cause of death among cancers after lung cancer. In the United States, it is more prevalent in African-American men than in Caucasian men. Prostate cancer frequently metastasizes to the bone, with most of the reported lesions appearing to be osteoblastic on radiographs. Here, we describe an unusual presentation of metastatic prostate cancer with diffuse osteolytic bone lesions. Case Presentation: An 80-year-old previously healthy Hispanic man presented with worsening back pain, difficulty with ambulation, and bladder outlet obstruction. Physical examination was significant for spinal tenderness in the thoracic and lumbar region. Digital rectal examination was remarkable for asymmetric enlargement of the prostate with nodularity and firmness. Imaging studies revealed diffuse osteolytic lesions. His prostate-specific antigen was 562.8 ng/mL (normal 0-4). Prostate biopsy and imaging studies confirmed a diagnosis of metastatic prostate cancer. Conclusions: This case demonstrates that bone metastases of prostate cancer are not purely osteoblastic although most of the reported bone metastases in prostate cancer have been osteoblastic in nature. Therefore, clinicians are to consider metastatic prostate cancer as a differential diagnosis for patients with osteolytic bone lesions. abstract_id: PUBMED:31040743 Semiquantitative assessment of osteoblastic, osteolytic, and mixed lytic-sclerotic bone lesions on fluorodeoxyglucose positron emission tomography/computed tomography and bone scintigraphy. Bone scintigraphy is widely used to detect bone metastases, particularly osteoblastic ones, and F-18 fluorodeoxyglucose (FDG) positron emission tomography (PET) scan is useful in detecting lytic bone metastases. In routine studies, images are assessed visually. In this retrospective study, we aimed to assess the osteoblastic, osteolytic, and mixed lytic-sclerotic bone lesions semiquantitatively by measuring maximum standardized uptake value (SUVmax) on FDG PET/computed tomography (CT), maximum lesion to normal bone count ratio (ROImax) on bone scintigraphy, and Hounsfield unit (HU) on CT. Bone scintigraphy and FDG PET/CT images of 33 patients with various solid tumors were evaluated. Osteoblastic, osteolytic, and mixed lesions were identified on CT and SUVmax, ROImax, and HU values of these lesions were measured. Statistical analysis was performed to determine if there is a difference in SUVmax, ROImax, and HU values of osteoblastic, osteolytic, and mixed lesions and any correlation between these values. Patients had various solid tumors, mainly lung, breast, and prostate cancers. There were 145 bone lesions (22.8% osteoblastic, 53.1% osteolytic, and 24.1% mixed) on CT. Osteoblastic lesions had a significantly higher value of CT HU as compared to osteolytic and mixed lesions (P &lt; 0.01). There was no significant difference in mean ROImax and mean SUVmax values of osteolytic and osteoblastic bone lesions. There was no correlation between SUVmax and ROImax, SUVmax and HU, and ROImax and HU values in osteolytic, osteoblastic, and mixed lesions (P &gt; 0.05). Not finding a significant difference in SUVmax and ROImax values of osteoblastic, osteolytic, and mixed lesions and also lack of correlation between SUVmax, ROImax, and HU values could be due to treatment status of the bone lesions, size of the lesion, nonmetastatic lesions, erroneous measurement of SUVmax and ROImax, or varying metabolism in bone metastases originating from various malignancies. abstract_id: PUBMED:23273271 Cancer of the prostate presenting with diffuse osteolytic metastatic bone lesions: a case report. Unlabelled: Introduction: Prostate cancer is the second most common cancer in men and the fifth most common cancer worldwide. In the USA it is more common in African-American men than in Caucasian men. Prostate cancer frequently metastasizes to bone and the lesions appear osteoblastic on radiographs. Presentation with diffuse osteolytic bone lesions is rare. We describe an unusual presentation of metastatic prostate cancer with diffuse osteolytic bone lesions. Case Presentation: A 65-year-old Namibian man presented with anemia, thrombocytopenia and worsening back pains. In addition he had complaints of effort intolerance, palpitations, dysuria and mild symptoms of bladder outlet obstruction. On examination he was found to be anemic, had a swollen tender right shoulder joint and spine tenderness to percussion. On digital rectal examination he had asymmetrical enlargement of the prostate which felt nodular and hard with diffuse firmness in some parts. His prostate-specific antigen was greater than 100ng/mL and he had diffuse osteolytic lesions involving the right humerus, and all vertebral, femur and pelvic bones. His screen for multiple myeloma was negative and the prostate biopsy confirmed prostate cancer. Conclusion: Prostate cancer rarely presents with diffuse osteolytic bone lesions and should be considered in the differential diagnosis when evaluating male patients with osteolytic bone lesions. abstract_id: PUBMED:29264191 Specific bone region localization of osteolytic versus osteoblastic lesions in a patient-derived xenograft model of bone metastatic prostate cancer. Objective: Bone metastasis occurs in up to 90% of men with advanced prostate cancer and leads to fractures, severe pain and therapy-resistance. Bone metastases induce a spectrum of types of bone lesions which can respond differently to therapy even within individual prostate cancer patients. Thus, the special environment of the bone makes the disease more complicated and incurable. A model in which bone lesions are reproducibly induced that mirrors the complexity seen in patients would be invaluable for pre-clinical testing of novel treatments. The microstructural changes in the femurs of mice implanted with PCSD1, a new patient-derived xenograft from a surgical prostate cancer bone metastasis specimen, were determined. Methods: Quantitative micro-computed tomography (micro-CT) and histological analyses were performed to evaluate the effects of direct injection of PCSD1 cells or media alone (Control) into the right femurs of Rag2-/-γc-/- male mice. Results: Bone lesions formed only in femurs of mice injected with PCSD1 cells. Bone volume (BV) was significantly decreased at the proximal and distal ends of the femurs (p &lt; 0.01) whereas BV (p &lt; 0.05) and bone shaft diameter (p &lt; 0.01) were significantly increased along the femur shaft. Conclusion: PCSD1 cells reproducibly induced bone loss leading to osteolytic lesions at the ends of the femur, and, in contrast, induced aberrant bone formation leading to osteoblastic lesions along the femur shaft. Therefore, the interaction of PCSD1 cells with different bone region-specific microenvironments specified the type of bone lesion. Our approach can be used to determine if different bone regions support more therapy resistant tumor growth, thus, requiring novel treatments. abstract_id: PUBMED:25783519 11C-choline PET/CT identifies osteoblastic and osteolytic lesions in patients with metastatic prostate cancer. Aim: The aim of this study was to compare C-choline PET/CT, prostate-specific antigen (PSA), PSA kinetics, and C-choline uptake in recurrent metastatic prostate cancer patients with osteoblastic and osteolytic bone metastases. Patients And Methods: We retrospectively analyzed 140 patients with the following criteria: (a) positive bone lesions identified with C-choline PET/CT and validated as true positive by histology (14.2%), correlative imaging (33.4%), or clinical follow-up (52.4%); (b) after radical prostatectomy (67.9%) or primary radiotherapy (22.1%); (c) proven biochemical relapse with rising PSA levels; (d) no chemotherapy, zoledronic acid, or palliative bone external beam radiation therapy previously administrated during biochemical relapse; and (f) asymptomatic for bone pain. Lesions were categorized as osteoblastic, osteolytic, or bone marrow lesions. Patients were divided into osteoblastic and osteolytic patient groups. Results: C-Choline PET/CT detected oligometastatic bone disease (1-3 lesions) in 98 (70%) of the 140 patients and multiple bone lesions in 42 (30%) of the 140 patients. By per-lesion analysis of 304 lesions, there were 184 osteoblastic, 99 osteolytic, and 21 bone marrow lesions.By per-patient analysis, 97 (69.3%) of the 140 patients were in the osteoblastic group, whereas 43 (30.7%) of the 140 patients were in the osteolytic group. Statistically significant differences in SUVmax (P &lt; 0.001), fast PSA doubling time (P = 0.01), and PSA velocity (P = 0.01) were observed between osteoblastic (lower values) and osteolytic (higher values) groups. By multivariate analysis, fast PSA doubling time was a significant predictor for osteolytic lesions. Conclusions: We demonstrated differences in PSA kinetics and SUVmax between osteolytic and osteoblastic lesions. C-Choline PET/CT may identify patients that could benefit from early targeted therapies, depending on the type of bone lesions expressed. abstract_id: PUBMED:14659142 Rationale for the use of bisphosphonates in osteoblastic and osteolytic bone lesions. Various primary malignancies develop bone metastases, and the resultant skeletal complications cause significant morbidity/mortality in advanced cancer patients. Bone lesions associated with metastases are traditionally classified radiologically as either osteolytic or osteoblastic, and both types of lesions are associated with elevated levels of specific bone resorption markers. Some common aspects in the pathophysiology of bone lesions have prompted speculation that treatments for osteolytic metastases might also be effective for predominantly osteoblastic metastases, such as in prostate cancer. Potent osteoclast activity inhibitors, bisphosphonates have been successful in the treatment of osteolytic tumor bone disease. Zoledronic acid is the first bisphosphonate shown to have a direct clinical benefit in the treatment of osteoblastic bone metastases, reducing the number and rate of skeletal events in prostate cancer patients with metastatic bone disease. Moreover, the shorter, more convenient infusion time and similar safety profile of 4 mg zoledronic acid compared with 90 mg pamidronate presently make zoledronic acid the preferred therapy for treatment of bone metastases in patients with all types of advanced malignancy. abstract_id: PUBMED:22307378 Osteolytic and osteoblastic bone metastases: two extremes of the same spectrum? Normal bone development and maintenance are sustained through a balanced communication between osteoclasts and osteoblasts. Invasion of the bone compartment by cancer cells causes an imbalance in their activities and results in predominantly bone lysing or bone forming phenotypes depending on the origin of the cancer. Tumor-induced bone lesions usually exhibit disturbances of both cell types. Thus, osteoclast activity is activated in a predominantly osteoblastic lesion and vice versa. These cancer-induced bone responses favor the survival and growth of cancer cells in their new environment. Therapies that can restore the balance may limit the growth of cancer cells in the bone. The recent development of agents that target the osteolytic components of bone metastasis, including bisphosphonates and denosumab, showed promising results in osteolytic bone diseases such as multiple myeloma but were less effective in improving the osteoblastic bone disease found in prostate cancer. Thus, while osteolytic components are present in both osteoblastic and osteolytic bone lesions, inhibition of the osteolytic component is not sufficient to alter the vicious cycle leading to tumors with an osteoblastic phenotype. These observations suggest that osteolytic and osteoblastic bone metastases are not the same and tumor-induced osteoblastic and osteolytic activity play different roles in supporting their growth and survival. abstract_id: PUBMED:272985 Detection and differential diagnosis of bone lesions by scintigraphy. Bone scintigraphy using 99mTc-labeled phosphate compounds and the scintillation camera supplements X-ray diagnosis in an excellent manner, for localizing and determing the size and metabolic activity of inflammations, degenerations, and malignancies of the skeletal system. It is of prime importance in the early detection and staging of the disease, as well as in registering the course and response to therapy of osseous metastases. abstract_id: PUBMED:12562047 Broad clinical activity of zoledronic acid in osteolytic to osteoblastic bone lesions in patients with a broad range of solid tumors. Bone metastases are a common feature of a variety of solid tumors and are associated with substantial skeletal morbidity, including severe bone pain and pathologic fractures. Treatment with bisphosphonates, primarily pamidronate, is the current standard of care for patients with breast cancer and multiple myeloma who have predominantly osteolytic lesions. However, until recently no bisphosphonate had demonstrated efficacy in patients with osteoblastic lesions, which are common during the progression of prostate cancer and other solid tumors. Zoledronic acid, a potent, new-generation, nitrogen-containing bisphosphonate, has demonstrated significant benefits for patients with bone metastases resulting from a broad range of primary tumors, including multiple myeloma and breast, lung, kidney, and prostate cancers, and other solid tumors. Benefits include a decreased incidence of pathologic fractures and longer time to the first skeletal complication. Zoledronic acid is the first and only bisphosphonate to be proved effective in patients with all types of bone lesions, from osteolytic to osteoblastic, and therefore represents an important therapeutic advancement in the treatment of bone metastases. abstract_id: PUBMED:22684559 Effect of Id1 knockdown on formation of osteolytic bone lesions by prostate cancer PC3 cells in vivo. The formation of osteolytic bone lesions is a key process for osteolytic cancer to metastasize to the bone and is under the control of a set of transcription factors. Recently, the inhibitor of differentiation 1 (Id1) has been linked with angiogenesis, tumorigenesis, metastasis and bone formation. However, the function of Id1 during the process of bone destruction caused by cancer in vivo has not yet been elucidated. We, therefore, examined whether and how Id1 affects the ability of cancer to form osteolytic lesion in vivo. The study used a lentiviral vector overexpressing short hairpin RNA (shRNA) targeting Id1 gene. PC3 cells, a prostate cancer cell line, were transduced with Id1 shRNA or negative control (NC) shRNA before implantation in BALB/c mice. Cells were implanted in a tibial injection model. Tumor formation in bone was monitored by X-ray. The relationship between parathyroid hormone-related protein (PTHrP), an osteolytic factor, and Id1 was analyzed by using immunohistochemistry in tissue sections from osteolytic lesion of the BALB/c mice. Our results showed that Id1 shRNA delivery to PC3 cells by lentivirus caused efficient and stable Id1 gene silencing. In the intratibial model, PC3 cells produced primarily osteolytic lesions in the bone. Eleven of 14 mice in Id1 shRNA group but only 4 of 14 mice in the NC shRNA group developed osteolytic lesions with cortical destruction at 4th week. Mice treated with Id1 shRNA had larger tumor volume in the bone and larger cortical destruction. The expression of PTHrP protein in PC3 cells was not affected by Id1 knockdown in vivo. These results indicate that Id1 may down-regulate the ability of PC3 cells to form osteolytic lesions in vivo and the signal pathway needs to be further investigated. Answer: Yes, prostate cancer should be considered as a differential diagnosis in patients with osteolytic bone lesions. Although prostate cancer frequently metastasizes to the bone and typically presents with osteoblastic lesions on radiographs, there are cases where metastatic prostate cancer can present with diffuse osteolytic bone lesions (PUBMED:26744870; PUBMED:23273271). This unusual presentation has been documented in case reports, indicating that prostate cancer can indeed be part of the differential diagnosis when evaluating male patients with osteolytic bone lesions. It is important for clinicians to be aware of this possibility, as the assumption that prostate cancer only causes osteoblastic lesions could lead to a misdiagnosis or delayed diagnosis in cases where osteolytic lesions are present.
Instruction: Do Safe Radiographic Sacral Screw Pathways Exist in a Pediatric Patient Population and Do They Change With Age? Abstracts: abstract_id: PUBMED:26322470 Do Safe Radiographic Sacral Screw Pathways Exist in a Pediatric Patient Population and Do They Change With Age? Objectives: Iliosacral screw pathways in the first (S1) and second (S2) sacral segments are commonly used for adult pelvic ring stabilization. We hypothesize that radiographically "safe" pathways exist in pediatric patients. Setting: Academic level I Trauma Center. Patients: All patients between ages 2 and 16 years with a computed tomography scan including the pelvis obtained over a 6-week period (174 children, mean age 10.8 ± 3.9 years; 90 boys, 84 girls). Intervention: The width and height at the "constriction point" in 3 safe screw pathways were measured bilaterally by 3 orthopaedists (resident, trauma fellow, trauma attending). Pathways corresponding to: (1) an "iliosacral" screw at S1, a "trans-sacral trans-iliac" (TSTI) screw at S1, and a TSTI screw at S2. Main Outcome Measurements: (1) Mean width and height of pathways, (2) interrater reliability coefficient, (3) availability of pathways greater than 7 mm, (4) growth of pathways with age, (5) sacral morphology. Results: The interrater reliability coefficient was above 0.917 for all measurements. Radiographically safe pathways were available for 99%, 51%, and 89% of children for iliosacral screws at S1 (width 16.4 ± 2.8 mm, height 15.1 ± 3.3 mm), TSTI screws at S1 (width 7.2 ± 4.9 mm, height 8.3 ± 5.6 mm), and TSTI at S2 (width 9.3 ± 2.2 mm, height 11.5 ± 2.7 mm), respectively. Conclusions: Contrary to our hypothesis, almost all children aged 2-16 had a radiographically safe screw pathway for an iliosacral screw at S1, and most of the children had an available pathway for a TSTI screw at S2. However, only 51% had a pathway for a TSTI screw at S1. abstract_id: PUBMED:30828220 Safe corridor for iliosacral and trans-sacral screw placement in Indian population: A preliminary CT based anatomical study. Objectives: Nonsurgical management of unstable pelvic ring injuries is associated with poor outcomes. Posterior pelvic ring injuries include sacroiliac joint disruption and sacral fractures or a combination of the two. Morbidity is high in non-operatively managed patients. Screw fixation is being increasingly used to manage unstable posterior pelvic injuries. Limitations include a steep learning curve and potential for neurovascular injury. This is the first study in Indian population to describe the safe corridor for screw placement and check the feasibility of screw in both upper and lower sacral segments. Methods: This study involved retrospective analysis of 105 pelvic CT scans of patients admitted to the emergency department of a Level 1 trauma centre. Vertical height at the level of constriction (vestibule) of S1 and S2 was measured in coronal sections and anteroposterior width of constrictions was measured in axial sections. We created a trajectory for 7.3 mm cylinder keeping additional 2 mm free bony corridor around it and confirmed that bony limits were not breached in axial, coronal and sagittal sections. Whenever there was breach in bony limit we checked applicability of 6.5 mm screw. Results: The vertical height and anteroposterior width of vestibule/constriction of S1 was significantly higher in males, whereas S2 vestibule height and width were similar in males and females. Both male and female pelves were amenable to S1 Trans-sacral and S1 Iliosacral screw fixation with a 7.3 mm screw when a safe corridor of 2 mm was kept on all sides. However, when S2 segment was analysed, only 42.9% of male pelves and 25.7% of female pelves were amenable to insertion of trans-sacral 7.3 mm screw. Conclusion: An individualized approach is necessary and each patient's CT must be carefully studied before embarking on sacroiliac screw fixation in Indian population. abstract_id: PUBMED:25231682 Transsacral screw safe zone size by sacral segmentation variations. Variations in sacral segmentation may preclude safe placement of transsacral screws for posterior pelvis fixation. We developed a novel automated 3D technique to determine the safe zone size for transsacral screws in the upper two sacral segments in 526 adult pelvis computed tomography scans. Safe zone sizes were then compared by gender and sacral segmentation variations (number of neuroforamen and the presence/absence of lumbosacral transitional vertebrae, ± LSTV). Ten millimeters was used as the safety threshold for a large screw. 3 (0.6%), 366 (70%), and 157 (30%) sacra had 3, 4, or 5 neuroforamen, respectively. Eighty-eight (17%) were +LSTV. Safe zone size depended on gender, number of neuroforamen in -LSTV sacra and presence of LSTV (p &lt; 0.001) but not on the uni- or bilateral nature of the LSTV. 17% of -LSTV sacra were below the safety threshold in S1, 27% in S2, whereas 3% of +LSTV sacra were below in S1, 74% in S2. Of -LSTV sacra that cannot take an S1 screw safely, 77% can do so in S2, leaving only 4% of sacra that cannot accommodate a screw safely in either upper segment. The results demonstrate a predictable pattern of safe zone size based on gender and sacral segmentation variations. abstract_id: PUBMED:31818320 Feasibility of iliosacral screw placement in patients with upper sacral dysplasia. Background: Exact knowledge of the sacral anatomy is crucial for the percutaneous insertion of iliosacral screws. However, dysplastic anatomical patterns are common. In addition to a preoperative computed tomography (CT) analysis, conventional radiographic measures may help to identify upper sacral dysplasia and to avoid damage to surrounding structures. Aiming to further increase safety in percutaneous iliosacral screw placement in the presence of sacral dysmorphism, this study examined the prevalence of previously established radiographic signs and, in addition, defined the "critical SI angle" as a new radiographic criterion. Methods: Pelvic CT scans of 98 consecutive trauma patients were analysed. Next to assessment of established signs indicating upper sacral dysplasia, the critical sacroiliac (SI) angle was defined in standardized pelvic outlet views. Results: The critical SI angle significantly correlates with the presence of mammillary bodies and an intraarticular vacuum phenomenon. With a cut-off value of - 14.2°, the critical SI angle detects the feasibility of a safe iliosacral screw insertion in pelvic outlet views with a sensitivity of 85.9% and a specificity of 85.7%. Conclusions: The critical SI angle can support the decision-making when planning iliosacral screw fixation. The clinical value of the established signs of upper sacral dysplasia remains uncertain. abstract_id: PUBMED:27927507 Sacral-Alar-Iliac Fixation in Pediatric Deformity: Radiographic Outcomes and Complications. Study Design: Retrospective case series. Objectives: To assess the radiographic outcomes and complications of sacral-alar-iliac (SAI) fixation in children. Summary Of Background Data: Pelvic fixation in children undergoing spinal deformity surgery can be challenging because of complex anatomy, compound biomechanical forces at the lumbosacral junction, and poor bone quality. Methods: Radiographic and clinical records of 80 consecutive patients aged 18 years or younger who underwent posterior spinal fusion surgery with SAI fixation by one pediatric orthopedic surgeon and who had a minimum 2 years of follow-up (mean follow-up: 3.5 years, range, 2-7 years) were retrospectively reviewed. Changes in coronal curve magnitude and pelvic obliquity were assessed using Student t tests. Significance was set at a p value less than .05 for all analyses. Results: Pelvic obliquity correction averaged 77%, from 26 ± 13 degrees before surgery to 6 ± 4 degrees at final follow-up (p &lt; .001); 91% of the patients were corrected to a pelvic obliquity of less than 10 degrees. Coronal curve correction averaged 72%, from 78 ± 27 degrees before surgery to 22 ± 15 degrees at final follow-up (p &lt; .001). No patient had vascular or neurologic complications or died perioperatively. Twenty patients (25%) had radiographic evidence of implant-related problems, of which there were nine screw fractures (all in the neck of screws with ≤8-mm outer diameter). Six patients had symptomatic complications associated with SAI fixation (three patients with pseudarthrosis at the lumbosacral junction, all of whom required revision surgery, two patients with implant prominence, and one patient with pain related to implant). Other complications included seven instances of wound dehiscence, three deep wound infections, and one superficial wound infection. Conclusions: SAI fixation provided a low-profile alternative to iliac screws for the correction of pelvic obliquity. The largest possible diameter screws are recommended, ideally &gt;8 mm. Level Of Evidence: Level 4. abstract_id: PUBMED:31660021 Anatomical considerations of safe drilling corridor upper sacral segment screw insertion. The upper segment of sacrum is an important for screw insertions of unstable lumbosacral spine. Measurements of the S1-S2 as sacral wings, pedicles, sacral foraminas and sacral canal were taken from 87 sacrums. The mean depths of S1 pedicle and sacral wing were estimated as 25.8 ± 2.3 mm and 50.1 ± 1.7 mm, respectively. Angles screw trajectory of sacral pedicle anteromedial and sacral wing were measured as 29.6 ± 0.9° and 29.7 ± 2.1°, respectively. To avoid injury to the vascular structures anteriorly and nerve roots medially, depth and angle of screw trajectory is important for the entrance off pedicular screw placement to the S1. abstract_id: PUBMED:26855859 The feasibility and radiological features of sacral alar iliac fixation in an adult population: a 3D imaging study. Background: Surgical treatments for adult spinal deformities often include pelvic fixation, and the feasibility of sacral-2 alar iliac (S2AI) screw fixation has been shown previously. However, sometimes S2AI screw fixation cannot be applied due to the presence of an osteolytic lesion or trauma or because the biomechanical properties of only an S2AI screw is insufficient. Therefore, we questioned the feasibility of using sacral AI screws in other segments and determined whether S3AI and S4AI screws have the potential to be used for sacral fractures. The aim of this study was to investigate the feasibility and radiological features of sacral AI fixation in S1-S4 in an adult population using 3D imaging techniques. Methods: Computed tomography (CT) scans were taken of 45 patients and were imported into Mimics (Version 10.01, Materialise, Belgium) software to reconstruct the 3D digital images. Next, a cylinder (radius of 3.5 mm) was drawn to imitate the screw trajectory of a S1-4 AI screw, and every imitated screw in each segment was adjusted to a maximum upward and downward angle to acquire the feasible region. The parameters of the S1-4AI screw trajectories were measured. Results: Sacral AI screws could be successfully imitated using 3D digital imaging. The S4AI screw trajectory could be obtained in 19 of 45 patient images (42.2%), while the feasibility rates of S1AI, S2AI, and S3AI screw fixation were 100%, 100%, and 91.1% (41/45), respectively. The feasible regions of S1AI, S2AI, and S3AI screw trajectories were wide enough, while the adjustable angle of S4AI screws was very small. Conclusion: It is feasible to place S1-2AI screws in the entire adult population and S3-4AI screws in some of the adult population. Furthermore, our study suggested that 3D digital images are suitable to study the feasibility of new screw fixation. abstract_id: PUBMED:34798616 Anatomy of the L5 nerve root in the pelvis for safe sacral screw placement: a cadaveric study. Objective: Previous reports have focused on the complications of L5 nerve root injury caused by anterolateral misplacement of the S1 pedicle screws. Anatomical knowledge of the L5 nerve root in the pelvis is essential for safe and effective placement of the sacral screw. This cadaveric study aimed to investigate the course of the L5 nerve root in the pelvis and to clarify a safe zone for inserting the sacral screw. Methods: Fifty-four L5 nerve roots located bilaterally in 27 formalin-fixed cadavers were studied. The ventral rami of the L5 nerve roots were dissected along their courses from the intervertebral foramina to the lesser pelvis. The running angles of the L5 nerve roots from the centerline were measured in the coronal plane. In addition, the distances from the ala of the sacrum to the L5 nerve roots were measured in the sagittal plane. Results: The authors found that the running angles of the L5 nerve roots changed at the most anterior surface of the ala of the sacrum. The angles of the bilateral L5 nerve roots from the right and left L5 intervertebral foramina to their inflection points were 13.77° ± 5.01° and 14.65° ± 4.71°, respectively. The angles of the bilateral L5 nerve roots from the right and left inflection points to the lesser pelvis were 19.66° ± 6.40° and 20.58° ± 5.78°, respectively. There were no significant differences between the angles measured in the right and left nerve roots. The majority of the L5 nerves coursed outward after changing their angles at the inflection point. The distances from the ala of the sacrum to the L5 nerve roots in the sagittal plane were less than 1 mm in all cases, which indicated that the L5 nerve roots were positioned close to the ala of the sacrum and had poor mobility. Conclusions: All of the L5 nerve roots coursed outward after exiting the intervertebral foramina and never inward. To prevent iatrogenic L5 nerve root injury, surgeons should insert the S1 pedicle screw medially with an angle &gt; 0° toward the inside of the S1 anterior foramina and the sacral alar screw laterally with an angle &gt; 30°. abstract_id: PUBMED:36561220 Safe Corridor for Sacroiliac Screw Insertion Can Be Found Quickly Without the Use of the Lateral Sacral View. Background: Studies have proved that in addition to the inlet and outlet views, the intraoperative lateral sacral view is required to ensure the correct entry point, reduce operating time, and lower radiation exposure. Considering the complex anatomy of the sacrum, we showed a safe corridor for sacroiliac joint (SIJ) screw insertion that was accessible using only inlet and outlet fluoroscopic views. Methods: From 2013 to 2020, we enrolled 215 patients who underwent percutaneous SIJ screw insertion. Our experience in SIJ screw insertion is presented using only two views (inlet and outlet). We reported on the radiation exposure time, operating time, rate of screw malposition, neurologic injury, and revision surgery. Results: The screw malposition rate was 5.5%, including 11 foraminal perforations and one perforated anterior sacral cortex. Paresthesia after the surgery was observed in six patients (2.8%). No revision surgery or screw removal was performed. The radiation exposure and operation time for each screw were 21 ± 4.5 s and 13.5 min, respectively. Conclusion: The most anterior and the lowest part of the S1 vertebra can be easily found using intraoperative inlet and outlet views. It is a safe corridor for SIJ screw insertion with low radiation time, neurologic injury, and revision rates. abstract_id: PUBMED:37735320 Higher pelvic incidence values are a risk factor for trans-iliac trans-sacral screw malposition in sacroiliac complex fracture treatment. Background: Percutaneous iliosacral (IS) screw fixation and trans-iliac trans-sacral (TITS) screw fixation are clinically effective treatments of posterior pelvic sacroiliac fractures. In order to accurately assess the sacrum position relative to the pelvis, pelvic incidence (PI) is a commonly utilized radiographic parameter in sagittal view. This study aimed to investigate and compare the surgical outcomes and radiographic parameters of IS or TITS screw fixations for the treatment of posterior sacroiliac complex fractures with different PI values. Materials And Methods: The data on patients with posterior pelvic sacroiliac fractures who underwent percutaneous IS or TITS screw fixations, or both, at a single level I trauma center between January 2017 and June 2020 were reviewed. We documented the patient characteristics and fracture types, reviewed surgical records, and measured the radiographic parameters via plain films and multi-planar computed tomography (mpCT) images. Radiographic variations in PI, sacral slope, pelvic tilt, sacral dysmorphism, pelvic ring reduction quality, screw deviation angles, screw malposition grading, and iatrogenic complications were documented and analyzed. Results: A total of 85 patients were included, and 65 IS and 70 TITS screws were accounted for. Patients were divided into two groups according to screw fixation method and further divided into four sub-groups based on baseline PI values. The PI cutoff values were 49.85° and 48.05° in the IS and TITS screw groups, respectively, according to receiver operating characteristic analysis and Youden's J statistic. Smaller PI values were significantly correlated with sacral dysmorphism (p = 0.027 and 0.003 in the IS and TITS screw groups, respectively). Patients with larger PI values were at a significantly increased risk of screw malposition in the TITS screw group (p = 0.049), with no association in the IS screw group. Logistic regression confirmed that a larger PI value was a significant risk factor for screw malposition in the TITS screw group (p = 0.010). The post-operative outcomes improved from poor/fair (at 6 months) to good/average (at 12 months) based on the Postel Merle d'Aubigné and Majeed scores, with no significant differences between subgroups. Conclusions: Both percutaneous IS and TITS screw fixations are safe and effective treatments for posterior pelvic sacroiliac fractures. Due to the higher risk of screw malposition in patients with larger PI values, it is crucial to identify potential patients at risk when performing TITS screw fixation surgery. Level Of Evidence: Level III. Answer: Yes, safe radiographic sacral screw pathways do exist in a pediatric patient population, and they change with age. A study conducted at an academic level I Trauma Center involving 174 children aged 2 to 16 years found that radiographically safe pathways were available for 99% of children for iliosacral screws at S1, and for 89% of children for trans-sacral trans-iliac (TSTI) screws at S2. However, only 51% had a pathway for a TSTI screw at S1. The study also observed growth of pathways with age, indicating that the availability of safe pathways may increase as children grow (PUBMED:26322470). Another study involving a retrospective analysis of 105 pelvic CT scans from a Level 1 trauma center in the Indian population found that both male and female pelves were amenable to S1 Trans-sacral and S1 Iliosacral screw fixation with a 7.3 mm screw when a safe corridor of 2 mm was kept on all sides. However, for the S2 segment, only 42.9% of male pelves and 25.7% of female pelves were amenable to insertion of a trans-sacral 7.3 mm screw, suggesting that an individualized approach is necessary and each patient's CT must be carefully studied before embarking on sacroiliac screw fixation (PUBMED:30828220). These findings suggest that while safe radiographic pathways for sacral screw placement in pediatric patients do exist, the size and availability of these pathways can vary with age and between individuals. Therefore, careful preoperative planning and individualized assessment using imaging studies are crucial for determining the feasibility and safety of sacral screw placement in pediatric patients.
Instruction: Do ray cells provide a pathway for radial water movement in the stems of conifer trees? Abstracts: abstract_id: PUBMED:23347974 Do ray cells provide a pathway for radial water movement in the stems of conifer trees? Premise Of The Study: The pathway of radial water movement in tree stems presents an unknown with respect to whole-tree hydraulics. Radial profiles have shown substantial axial sap flow in deeper layers of sapwood (that may lack direct connection to transpiring leaves), which suggests the existence of a radial pathway for water movement. Rays in tree stems include ray tracheids and/or ray parenchyma cells and may offer such a pathway for radial water transport. This study investigated relationships between radial hydraulic conductivity (k(s-rad)) and ray anatomical and stem morphological characteristics in the stems of three conifer species whose distributions span a natural aridity gradient across the Cascade Mountain range in Oregon, United States. Methods: The k(s-rad) was measured with a high-pressure flow meter. Ray tracheid and ray parenchyma characteristics and water transport properties were visualized using autofluorescence or confocal microscopy. Key Results: The k(s-rad) did not vary predictably with sapwood depth among species and populations. Dye tracer did not infiltrate ray tracheids, and infiltration into ray parenchyma was limited. Regression analyses revealed inconsistent relationships between k(s-rad) and selected anatomical or growth characteristics when ecotypes were analyzed individually and weak relationships between k(s-rad) and these characteristics when data were pooled by tree species. Conclusions: The lack of significant relationships between k(s-rad) and the ray and stem morphologies we studied, combined with the absence of dye tracer in ray tracheid and limited movement of dye into ray parenchyma suggests that rays may not facilitate radial water transport in the three conifer species studied. abstract_id: PUBMED:34100071 Radial and axial water movement in adult trees recorded by stable isotope tracing. The capacity of trees to release water from storage compartments into the transpiration stream can mitigate damage to hydraulic functioning. However, the location of these 'transient' water sources and also the pathways of water movement other than vertical through tree stems still remain poorly understood. We conducted an experiment on two tree species in a common garden in eastern Australia that naturally grow in regions of high (Eucalyptus tereticornis, 'Red Gum') and low (Eucalyptus sideroxylon, 'Ironbark') annual precipitation rates. Deuterium-enriched water (1350% label strength) was directly introduced into the transpiration stream of three trees per species for four consecutive days. Subsequently, the trees were felled, woody tissue samples were collected from different heights and azimuthal positions of the stems, and stable isotope ratios were determined on the water extracted from all samples. The presence/absence of the tracer along the radial and vertical stem axes in combination with xylem hydraulic properties inferred from sapflow, leaf and stem water potentials, wood moisture contents and anatomical sapwood characteristics elucidated species-specific patterns of short-term stem water storage and movement. The distribution of water isotopes at natural abundance among woody tissues indicated systematic differences with highest values of sapwood water and lower values in inner bark and heartwood. Presence of tracer in water of the inner bark highlighted the importance of this tissue as capacitor. Although injected at the northern side of stems, tracer was also discovered at the southern side, providing empirical evidence for circumferential flow in sapwood, particularly of Ironbark. Greater vertical water transport in Red Gum compared with more radial and circumferential water transport in Ironbark were associated with species-specific sapwood anatomy. Our study highlights the value of combining information from stable isotope tracers and wood anatomy to investigate patterns of water transport and storage of tall trees in situ. abstract_id: PUBMED:25588734 Phloem as capacitor: radial transfer of water into xylem of tree stems occurs via symplastic transport in ray parenchyma. The transfer of water from phloem into xylem is thought to mitigate increasing hydraulic tension in the vascular system of trees during the diel cycle of transpiration. Although a putative plant function, to date there is no direct evidence of such water transfer or the contributing pathways. Here, we trace the radial flow of water from the phloem into the xylem and investigate its diel variation. Introducing a fluorescent dye (0.1% [w/w] fluorescein) into the phloem water of the tree species Eucalyptus saligna allowed localization of the dye in phloem and xylem tissues using confocal laser scanning microscopy. Our results show that the majority of water transferred between the two tissues is facilitated via the symplast of horizontal ray parenchyma cells. The method also permitted assessment of the radial transfer of water during the diel cycle, where changes in water potential gradients between phloem and xylem determine the extent and direction of radial transfer. When injected during the morning, when xylem water potential rapidly declined, fluorescein was translocated, on average, farther into mature xylem (447 ± 188 µm) compared with nighttime, when xylem water potential was close to zero (155 ± 42 µm). These findings provide empirical evidence to support theoretical predictions of the role of phloem-xylem water transfer in the hydraulic functioning of plants. This method enables investigation of the role of phloem tissue as a dynamic capacitor for water storage and transfer and its contribution toward the maintenance of the functional integrity of xylem in trees. abstract_id: PUBMED:26163488 Hydraulic functioning of tree stems--fusing ray anatomy, radial transfer and capacitance. Not long ago, textbooks on plant physiology divulged the view that phloem and xylem are separate transport systems with exclusive functions. Phloem was flowing downwards providing roots with carbohydrates. Xylem transported water upwards from roots to leaves. This simplified view has changed forever. Today we have a much-refined understanding of the complex transport mechanisms, regulatory functions and surprisingly ingenuous solutions trees have evolved to distribute carbohydrates and water internally to fuel growth and help mediate biotic and abiotic stresses. This review focuses on functional links between tissues of the inner bark region (i.e., more than just phloem) and the xylem, facilitated by radially aligned and interconnected parenchyma cells, called rays. Rays are usually found along the entire vertical axis of tree stems, mediating a number of transport processes. We use a top-down approach to unveil the role of rays in these processes. Due to the central role of rays in facilitating the coupling of inner bark and xylem we dedicate the first section to ray anatomy, pathways and control mechanisms involved in radial transport. In the second section, basic concepts and models for radial movement through rays are introduced and their impacts on water and carbon fluxes at the whole-tree level are discussed. This section is followed by a closer look at the capacitive function of composite tissues in stems where gradual changes in water potential generate a diurnal 'pulse'. We explain how this pulse can be measured and interpreted, and where the limitations of such analyses are. Towards the end of this review, we include a brief description of the role of radial transport during limited availability of water. By elucidating the strong hydraulic link between inner bark and xylem, the traditional view of two separate transport systems dissolves and the idea of one interconnected, yet highly segregated transport network for carbohydrates and water arises. abstract_id: PUBMED:24661964 Radial and vertical distributions of radiocesium in tree stems of Pinus densiflora and Quercus serrata 1.5 y after the Fukushima nuclear disaster. The radial and vertical distributions of radiocesium in tree stems were investigated to understand radiocesium transfer to trees at an early stage of massive contamination from the Fukushima nuclear disaster. A conifer species (Japanese red pine) and a broad-leaved species (Japanese konara oak) were selected to determine whether the radiocesium contamination pattern differs between species. Stem disks were collected at several heights and separated into outer bark, inner bark, and wood. The radiocesium concentration was the highest in the outer bark, followed by that in the inner bark and wood. The vertical distribution of the radiocesium concentration at each stem part differed between the species. The difference between species in radiocesium concentration of the outer bark could be explained by presence or absence of leaves at the time of the disaster. However, the reasons for the differences between species in the radiocesium concentration of the inner bark and wood are unclear. The radial distribution in the wood of the studied species showed a common pattern across stem disk heights and species. However, the radiocesium concentration ratio between sapwood and inner bark was significantly different between species. Although the radial contamination pattern in the wood was similar in the studied species during the early stage of contamination, the radiocesium transport pathway and allocation would be different between the species, and the contamination pattern will likely be different between the species at later stages. Continued investigations are important for understanding the radiocesium cycle and the accumulation of radiocesium in the tree stems of each species. abstract_id: PUBMED:24510216 How to catch the patch? A dendrometer study of the radial increment through successive cambia in the mangrove Avicennia. Background And Aims: Successive vascular cambia are involved in the secondary growth of at least 200 woody species from &gt;30 plant families. In the mangrove Avicennia these successive cambia are organized in patches, creating stems with non-concentric xylem tissue surrounded by internal phloem tissue. Little is known about radial growth and tree stem dynamics in trees with this type of anatomy. This study aims to (1) clarify the process of secondary growth of Avicennia trees by studying its patchiness; and (2) study the radial increment of Avicennia stems, both temporary and permanent, in relation to local climatic and environmental conditions. A test is made of the hypothesis that patchy radial growth and stem dynamics enable Avicennia trees to better survive conditions of extreme physiological drought. Methods Stem variations were monitored by automatic point dendrometers at four different positions around and along the stem of two Avicennia marina trees in the mangrove forest of Gazi Bay (Kenya) during 1 year. Key Results: Patchiness was found in the radial growth and shrinkage and swelling patterns of Avicennia stems. It was, however, potentially rather than systematically present, i.e. stems reacted either concentrically or patchily to environment triggers, and it was fresh water availability and not tidal inundation that affected radial increment. Conclusions: It is concluded that the ability to develop successive cambia in a patchy way enables Avicennia trees to adapt to changes in the prevailing environmental conditions, enhancing its survival in the highly dynamic mangrove environment. Limited water could be used in a more directive way, investing all the attainable resources in only some locations of the tree stem so that at least at these locations there is enough water to, for example, overcome vessel embolisms or create new cells. As these locations change with time, the overall functioning of the tree can be maintained. abstract_id: PUBMED:12647177 Axial and radial water transport and internal water storage in tropical forest canopy trees. Heat and stable isotope tracers were used to study axial and radial water transport in relation to sapwood anatomical characteristics and internal water storage in four canopy tree species of a seasonally dry tropical forest in Panama. Anatomical characteristics of the wood and radial profiles of sap flow were measured at the base, upper trunk, and crown of a single individual of Anacardium excelsum, Ficus insipida, Schefflera morototoni, and Cordia alliodora during two consecutive dry seasons. Vessel lumen diameter and vessel density did not exhibit a consistent trend axially from the base of the stem to the base of the crown. However, lumen diameter decreased sharply from the base of the crown to the terminal branches. The ratio of vessel lumen area to sapwood cross-sectional area was consistently higher at the base of the crown than at the base of the trunk in A. excelsum, F. insipida and C. alliodora, but no axial trend was apparent in S. morototoni. Radial profiles of the preceding wood anatomical characteristics varied according to species and the height at which the wood samples were obtained. Radial profiles of sap flux density measured with thermal dissipation sensors of variable length near the base of the crown were highly correlated with radial profiles of specific hydraulic conductivity (k(s)) calculated from xylem anatomical characteristics. The relationship between sap flux density and k(s) was species-independent. Deuterium oxide (D(2)O) injected into the base of the trunk of the four study trees was detected in the water transpired from the upper crown after only 1 day in the 26-m-tall C. alliodora tree, 2 days in the 28-m-tall F. insipida tree, 3 days in the 38-m-tall A. excelsum tree, and 5 days in the 22-m-tall S. morototoni tree. Radial transport of injected D(2)O was detected in A. excelsum, F. insipida and S. morototoni, but not C. alliodora. The rate of axial D(2)O transport, a surrogate for maximum sap velocity, was positively correlated with the predicted sapwood k(s) and with tree height normalized by the relative diurnal water storage capacity. Residence times for the disappearance of the D(2)O tracer in transpired water ranged from 2 days in C. alliodora to 22 days in A. excelsum and were positively correlated with a normalized index of diurnal water storage capacity. Capacitive exchange of water between stem storage compartments and the transpiration stream thus had a profound influence on apparent rates of axial water transport, the magnitude of radial water movement, and the retention time in the tree of water taken up by the roots. The inverse relationship between internal water exchange capacity and k(s) was consistent with a trade-off contributing to stability of leaf water status through highly efficient water transport at one extreme and release of stored water at the other extreme. abstract_id: PUBMED:11346941 Cambial reactivation in locally heated stems of the evergreen conifer Abies sachalinensis (Schmidt) masters. A study was made of cambial activity, the localization of storage starch around the cambium, and the localization and occurrence of microtubules in cambial cells from dormancy to reactivation in locally heated (22-26 degrees C) stems of the evergreen conifer Abies sachalinensis. Heating induced localized reactivation of the cambium in the heated portions of the stem. Erect ray cambial cells resumed cell division 1 d prior to the reactivation of fusiform cambial cells and procumbent ray cambial cells. The re-initiation of the division of fusiform cambial cells occurred first on the phloem side. During the heat treatment, the amount of storage starch decreased in procumbent ray cambial cells and in the phloem parenchyma adjacent to the cambium but increased in fusiform cambial cells. Preprophase bands of microtubules, spindle microtubules and phragmoplast microtubules were observed both in erect ray cambial cells and in procumbent ray cambial cells. By contrast, no evidence of the presence of such preprophase bands of microtubules was detected in fusiform cambial cells. The results suggest that the localized heating of stems of evergreen conifers might provide a useful experimental model system for studies of the dynamics of cambial reactivation in intact trees. abstract_id: PUBMED:28381902 Soil water availability and evaporative demand affect seasonal growth dynamics and use of stored water in co-occurring saplings and mature conifers under drought. High-resolution time series of stem radius variations (SRVs) record fluctuations in tree water status and temporal dynamics of radial growth. The focus of this study was to evaluate the influence of tree size (i.e., saplings vs. mature trees) and soil water availability on SRVs. Dendrometers were installed on Pinus sylvestris at an open xeric site and on Picea abies at a dry-mesic site, and the SRVs of co-occurring saplings and mature trees were analyzed during two consecutive years. The results revealed that irrespective of tree size, radial growth in P. sylvestris occurred in April-May, whereas the main growing period of P. abies was April-June (saplings) and May-June (mature trees). Linear relationships between growth-detrended SRVs (SSRVs) of mature trees vs. saplings and climate-SSRV relationships revealed greater use of water reserves by mature P. abies compared with saplings. This suggests that the strikingly depressed growth of saplings compared with mature P. abies was caused by source limitation, i.e., restricted photosynthesis beneath the dense canopy. In contrast, a tree size effect on the annual increment, SSRV, and climate-SSRV relationships was less obvious in P. sylvestris, indicating comparable water status in mature trees and saplings under an open canopy. The results of this study provided evidence that water availability and a canopy atmosphere can explain differences in temporal dynamics of radial growth and use of stem water reserves among mature trees and saplings. abstract_id: PUBMED:27861997 An empirical method that separates irreversible stem radial growth from bark water content changes in trees: theory and case studies. Substantial uncertainty surrounds our knowledge of tree stem growth, with some of the most basic questions, such as when stem radial growth occurs through the daily cycle, still unanswered. We employed high-resolution point dendrometers, sap flow sensors, and developed theory and statistical approaches, to devise a novel method separating irreversible radial growth from elastic tension-driven and elastic osmotically driven changes in bark water content. We tested this method using data from five case study species. Experimental manipulations, namely a field irrigation experiment on Scots pine and a stem girdling experiment on red forest gum trees, were used to validate the theory. Time courses of stem radial growth following irrigation and stem girdling were consistent with a-priori predictions. Patterns of stem radial growth varied across case studies, with growth occurring during the day and/or night, consistent with the available literature. Importantly, our approach provides a valuable alternative to existing methods, as it can be approximated by a simple empirical interpolation routine that derives irreversible radial growth using standard regression techniques. Our novel method provides an improved understanding of the relative source-sink carbon dynamics of tree stems at a sub-daily time scale. Answer: The question of whether ray cells provide a pathway for radial water movement in the stems of conifer trees has been investigated, but the evidence suggests that they may not play a significant role in facilitating radial water transport in conifers. A study that measured radial hydraulic conductivity (k(s-rad)) in the stems of three conifer species across an aridity gradient found that the k(s-rad) did not vary predictably with sapwood depth among species and populations. Additionally, dye tracer experiments showed that the dye did not infiltrate ray tracheids, and infiltration into ray parenchyma was limited. Regression analyses revealed inconsistent relationships between k(s-rad) and selected anatomical or growth characteristics, suggesting that rays may not facilitate radial water transport in the conifer species studied (PUBMED:23347974). In contrast, research on other tree species, such as Eucalyptus, has provided empirical evidence for circumferential flow in sapwood and highlighted the importance of inner bark as a capacitor for water storage and movement (PUBMED:34100071). Another study on Eucalyptus saligna showed that the majority of water transferred between phloem and xylem tissues is facilitated via the symplast of horizontal ray parenchyma cells, indicating that rays can play a role in radial water transfer in some species (PUBMED:25588734). Overall, while rays have been implicated in radial water movement in some tree species, the evidence from conifers specifically suggests that rays may not significantly contribute to radial water transport in these trees (PUBMED:23347974). Further research, particularly on a wider range of conifer species and under different environmental conditions, would be necessary to fully understand the role of ray cells in conifer water transport.
Instruction: The use of illustration to improve older adults' comprehension of health-related information: is it helpful? Abstracts: abstract_id: PUBMED:19286343 The use of illustration to improve older adults' comprehension of health-related information: is it helpful? Objective: To examine whether explanatory illustrations can improve older adults' comprehension of written health information. Methods: Six short health-related texts were selected from websites and pamphlets. Young and older adults were randomly assigned to read health-related texts alone or texts accompanied by explanatory illustrations. Eye movements were recorded while reading. Word recognition, text comprehension, and comprehension of the illustrations were assessed after reading. Results: Older adults performed as well as or better than young adults on the word recognition and text comprehension measures. However, older adults performed less well than young adults on the illustration comprehension measures. Analysis of readers' eye movements showed that older adults spent more time reading illustration-related phrases and fixating on the illustrations than did young adults, yet had poorer comprehension of the illustrations. Conclusion: Older adults might not benefit from text illustrations because illustrations can be difficult to integrate with the text. Practice Implications: Health practitioners should not assume that illustrations will increase older adults' comprehension of health information. abstract_id: PUBMED:34336769 Does the Internet Use Improve the Mental Health of Chinese Older Adults? Purpose: The Internet has become an important part of daily life. However, older adults in China remain digital refugees amid the rapid development of digital information technology. This study attempts to scientifically answer how Internet use affects the subjective welfare of older adults. Method: Using data from the 2014 and 2016 China Longitudinal Aging Social Survey (CLASS), a combination of ordinary least squares, ordered logit regression models, and propensity score matching (PSM) models were used to analyze the effects of Internet use on the mental health of Chinese older adults. Results: Our findings suggest that Internet use affects the mental health of older adults and increases the incidence of their depressive symptoms. These findings are robust to changing the key indicators, research method, and sample. Further heterogeneity analysis reveals that the negative effects on mental health are more evident for specific groups of older adults, such as those who are women, younger and middle-aged, high-income, non-rural Hukou, less educated, and living with others. Conclusions: Cultivating the ability of older adults to use the Internet and maintain a rational attitude while doing so can prevent its negative impact on their life satisfaction. Moreover, it can improve their attitudes toward using the technology and reduce their anxiety. abstract_id: PUBMED:30318383 Effects of presentation formats on consumers' performance and perceptions in the use of personal health records among older and young adults. Objective: To examine the effects of presentation formats on consumers' performance and perceptions in the use of personal health records (PHRs) and to compare the effects between older and young adults. Methods: A two-factor, mixed design experiment was conducted, where 36 older and 36 young participants were presented with hypothetical PHRs. The PHRs were randomly presented by six presentation formats: three tables and three line graphs, varying by visual cues. Participants' performance and perceptions in the use of PHRs were measured. Results: Presentation formats affected participants' performance and perceptions. Visual cue-enhanced line graphs and tables yielded less time and fewer errors, were considered easier and more useful in information comprehension, and resulted in higher confidence in correct comprehension of PHRs (all p's &lt; 0.01), compared with their basic counterparts. There were age-related differences in task performance (all p's &lt; 0.05), while young and older adults had similar perceptions for the six formats. Line graph with colored background was most preferred by both groups. Conclusion: Visual cue-enhanced presentation formats appear effective in facilitating comprehension and eliciting favorable perceptions of PHRs for both young and older adults. Practice Implications: Consumer-facing displays of PHRs could use visual cue-enhanced presentation formats to facilitate comprehension. abstract_id: PUBMED:33748338 Use of Health Information Varies by Region Among Older Adults in the U.S. Objective: To examine geographic variations in health information use among older adults in the United States. Methods: We compared 15,531 adults (age 45 and older) across four U.S. regions. Descriptive analyses were conducted to assess health information seeking and use by year. The relationship between health information seeking or use and regional changes were assessed using binomial logistic regression. Binomial models were adjusted by socio-demographics, chronic conditions, and health information sources. Magnitude and direction of relationships were assessed using adjusted odds ratios (aORs), 95% confidence intervals (CIs), and p-values. Results: Only the Northeast region showed increases in health information seeking (3.8%) and use (4.5%) among older adults. However adjusted models showed those living in the Northeast were 28% less likely to use health information to maintain their health and 32% less likely to use health information to treat illness. Conclusion: As a result of the current pandemic, older adults are facing a growing burden from health care expenses. Inability to gather and use health information for personal safety or self care can potentially increase inequalities in health, especially for older adults without personal health care providers. abstract_id: PUBMED:30958953 Preventive health services and physical activity improve health-related quality of life in Mexican older adults. Objective: To evaluate the combined use of IMSS preventive health services (PHS) and the practice of physical activity (PA) in order to determine their association with the perceived health-related quality of life (HRQoL) of older adults. Materials And Methods: A survey-based cross-sectional study was conducted at nine primary health care units (PHCUs) in Mexico City. The 36-Item Short-Form Health Survey evaluated the HRQoL, while PA and PHS use were assessed through self-report. Linear regression analyses were performed to determine the relationship between these three variables. Results: Of the 1 085 older adults who participated in the study, 36.8% used PHS, and 23.3% reported that they engage in PA. After controlling for possible effects due to sociodemographic and health differences, we found that participants who were physically active and used PHS reported better perceived HRQoL scores. Conclusions: The combined use of preventive practices such as the use of PHS and the practice of PA can help improve the perceived HRQoL of older adults. abstract_id: PUBMED:34794695 Alcohol Use Disorder in Older Adults. As the number of older adults worldwide continues to grow, we observe a proportional growth of substance use. Despite the myriad of complications alcohol use disorder (AUD) has on the body with regards to organ systems and mental health, the topic has been underresearched in the older adult population. Thus, it is important to create awareness about the growing problem of AUD among older adults. In this way, we can mitigate the long-term complications and side effects observed with alcohol abuse in this vulnerable population. abstract_id: PUBMED:34068702 The Relationship between Internet Use and Health among Older Adults in China: The Mediating Role of Social Capital. A growing academic attention has been paid to the health effects of Internet use among older adults. However, the relationship between Internet use and health among older adults in China remains to be studied further. On the one hand, existing research is still controversial on this issue. On the other hand, the underlying mechanism of how Internet use affects the health of older adults has not been fully explored. This article examined the relationship between Internet use and health among older adults with the mediating role of social capital in China based on the 2018 wave of China Health and Retirement Longitudinal Study (CHARLS). This study reveals that Internet use has a positive association with the health of older adults, and the positive effects of internet use among older adults are heterogeneous in age and residential location. In addition, this study also demonstrates that social capital plays a partial mediating role between Internet use and physical health among older adults. It is important for the government to take effective measures to expand Internet use and enhance social capital among older adults. abstract_id: PUBMED:36504989 Internet use and health status among older adults: The mediating role of social participation. Background: The acceleration of population aging and the arrival of the informatization make more and more older adults use the Internet, and its use is having an impact on their health. However, the relationship between internet use and the health of older adults and the mechanism of the effect of internet use on the health are not very clear. Methods: Multiple linear regression models to explore the correlation between internet use and health status in the 3,141 individuals aged ≥60 years were used. Propensity score matching (PSM) method was used to test the robustness of the regression results. In addition, sequential recursive models was used to examine the mediating effect of social participation on the relationship between internet use and health status. Results And Discussion: We found a significant positive relationship between internet use and health status, and social participation mediated the relationship between internet use and health status. In addition, the effect of internet use on health status was different among older adults in rural and urban areas. Conclusions: The development and application of internet products adapted to the development of an aging society should be accelerated to meet their needs for continued socialization. The forms and activities of social participation for the elderly groups should be enriched and public service internet usage training seminars should be conducted to improve internet skills. abstract_id: PUBMED:35679697 Quality of life in older adults with opioid use disorder: A scoping review. The increasing prevalence of Substance Use Disorder (SUD) and opioid use disorder (OUD) is part of a national health crisis and reflects an unfortunate trend among populations of older adults. Opioid Use Disorder and opioid-related mortalities are also rising among older adults following this trend. Compared to younger populations, the effect of SUD and OUD on quality of life (QOL) in older adults is complex and poorly understood. This scoping review explores how QOL has been evaluated in high-risk subpopulations of older adults with SUD, specifically OUD. The articles reviewed for this paper targeted studies measuring QOL in older adults with OUD. We uncovered a paucity of literature devoted to studying interventions to improve QOL in older adults with OUD. This review supports further research on clinical interventions targeting improving QOL for older adults with OUD. abstract_id: PUBMED:34866945 The Relationship Between Internet Use and Mental Health Among Older Adults in China: The Mediating Role of Physical Exercise. Objective: Affected by internet applications, mental health among older adults has become an important public health issue in China. This study aimed to use cross-sectional data to explore how internet use affects mental health among older adults in China, along with the moderating role of physical exercise in the above relationship. Methods: This study employed data from the Chinese General Social Survey (CGSS) conducted in 2017. In addition, OLS regression was adopted to explore the impacts of internet use on the mental health of older adults in China. Results: Results from the cross-sectional data model showed that internet use had a statistically significant and positive correlation with the mental health of older adults. The regression result of the full sample showed that the regression coefficient of the influence of internet use on the mental health of the elderly was 0.114, which was significant at the 1% level. Furthermore, this study used mediating analysis to explore the possible underlying mechanism by which internet use influenced mental health status through physical exercise. Physical exercise played a mediating role in the relationship between internet use and mental health; the ratio of the indirect effect to total effect mediated was 0.272. It is important to note that the effect size of the indirect effect through physical exercise was quite large. Conclusion: Our findings suggested that internet use was significantly associated with better mental health status for older Chinese residents, mediated by higher physical exercise frequency, which was consistent with China's recent efforts to promote health governance with the "Internet Plus Exercise" campaign. Consequently, the government must encourage the development of more internet applications suitable for the elderly to increase their exercise frequency and ultimately improve their mental health. These results also provided insights for clinical solutions. Answer: The use of illustrations to improve older adults' comprehension of health-related information does not appear to be universally helpful. A study examining whether explanatory illustrations can enhance older adults' understanding of written health information found that older adults performed less well than young adults on illustration comprehension measures. Despite spending more time reading illustration-related phrases and fixating on the illustrations, older adults had poorer comprehension of the illustrations. This suggests that older adults might not benefit from text illustrations because they can be difficult to integrate with the text. Therefore, health practitioners should not assume that illustrations will automatically increase older adults' comprehension of health information (PUBMED:19286343).
Instruction: The maintenance of traditions in marmosets: individual habit, not social conformity? Abstracts: abstract_id: PUBMED:19223965 The maintenance of traditions in marmosets: individual habit, not social conformity? A field experiment. Background: Social conformity is a cornerstone of human culture because it accelerates and maintains the spread of behaviour within a group. Few empirical studies have investigated the role of social conformity in the maintenance of traditions despite an increasing body of literature on the formation of behavioural patterns in non-human animals. The current report presents a field experiment with free-ranging marmosets (Callithrix jacchus) which investigated whether social conformity is necessary for the maintenance of behavioural patterns within groups or whether individual effects such as habit formation would suffice. Methods: Using a two-action apparatus, we established alternative behavioural patterns in six family groups composed of 36 individuals. These groups experienced only one technique during a training phase and were thereafter tested with two techniques available. The monkeys reliably maintained the trained method over a period of three weeks, despite discovering the alternative technique. Three additional groups were given the same number of sessions, but those 21 individuals could freely choose the method to obtain a reward. In these control groups, an overall bias towards one of the two methods was observed, but animals with a different preference did not adjust towards the group norm. Thirteen of the fifteen animals that discovered both techniques remained with the action with which they were initially successful, independent of the group preference and the type of action (Binomial test: exp. proportion: 0.5, p&lt;0.01). Conclusions: The results indicate that the maintenance of behavioural patterns within groups 1) could be explained by the first rewarded manipulation and subsequent habit formation and 2) do not require social conformity as a mechanism. After an initial spread of a behaviour throughout a group, this mechanism may lead to a superficial appearance of conformity without the involvement of such a socially and cognitively complex mechanism. This is the first time that such an experiment has been conducted with free-ranging primates. abstract_id: PUBMED:19924242 In-group conformity sustains different foraging traditions in capuchin monkeys (Cebus apella). Background: Decades of research have revealed rich cultural repertoires encompassing multiple traditions in wild great apes, a picture crucially complemented by experimental simulations with captive apes. Studies with wild capuchin monkeys, the most encephalized simian species, have indicated a New World convergence on these cultural phenomena, involving multiple traditions and tool use. However, experimental studies to date are in conflict with such findings in concluding that capuchins, like other monkeys, show minimal capacities for social learning. Methodology/principal Findings: Here we report a new experimental approach in which the alpha male of each of two groups of capuchins was trained to open an artificial foraging device in a quite different way, using either a slide or lift action, then reunited with his group. In each group a majority of monkeys, 8 of 11 and 13 of 14, subsequently mastered the task. Seventeen of the successful 21 monkeys discovered the alternative action to that seeded in the group, performing it a median of 4 times. Nevertheless, all 21 primarily adopted the technique seeded by their group's alpha male. Median proportions of slide versus lift were 0.96 for the group seeded with slide versus 0. 01 for the group seeded with lift. Conclusions/significance: These results suggest a striking effect of social conformity in learned behavioral techniques, consistent with field reports of capuchin traditions and convergent on the only other species in which such cultural phenomena have been reported, chimpanzees and humans. abstract_id: PUBMED:36892869 Awe and social conformity: Awe promotes the endorsement of social norms and conformity to the majority opinion. Given that awe experiences promote collective identity and decrease self-importance, we reasoned that they should lead individuals to be more prone to cherish social conformity value and to adopt conformity behaviors. In two online experiments (N = 593), compared to neutral and amusement emotional states, awe was found to drive individuals to value the respect of social norms in a greater extent (Experiment 1), and to lead individuals to conform to the majority opinion on an evaluative judgment task (Experiment 2). The present research provides the first empirical evidence of awe as leading to conformity and, although more research is needed, it offers important theoretical implications about the social function of awe as well as, more generally, the importance of emotions in social influence situations. (PsycInfo Database Record (c) 2023 APA, all rights reserved). abstract_id: PUBMED:30430321 Social Conformity in Autism. Humans are extremely susceptible to social influence. Here, we examine whether this susceptibility is altered in autism, a condition characterized by social difficulties. Autistic participants (N = 22) and neurotypical controls (N = 22) completed a memory test of previously seen words and were then exposed to answers supposedly given by four other individuals. Autistic individuals and controls were as likely to alter their judgements to align with inaccurate responses of group members. These changes reflected both temporary judgement changes (public conformity) and long-lasting memory changes (private conformity). Both groups were more susceptible to answers believed to be from other humans than from computer algorithms. Our results suggest that autistic individuals and controls are equally susceptible to social influence when reporting their memories. abstract_id: PUBMED:36173001 Conformity in mate choice, the overlooked social component of animal and human culture. Although conformity as a major driver for human cultural evolution is a well-accepted and intensely studied phenomenon, its importance for non-human animal culture has been largely overlooked until recently. This limited for decades the possibility of studying the roots of human culture. Here, we provide a historical review of the study of conformity in both humans and non-human animals. We identify gaps in knowledge and propose an evolutionary route towards the sophisticated cultural processes that characterize humanity. A landmark in the study of conformity is Solomon Asch's famous experiment on humans in 1955. By contrast, interest in conformity among evolutionary biologists has only become salient since the turn of the new millennium. A striking result of our review is that, although studies of conformity have examined many biological contexts, only one looked at mate choice. This is surprising because mate choice is probably the only context in which conformity has self-reinforcing advantages across generations. Within a metapopulation, i.e. a group of subpopulations connected by dispersing individuals, dispersers able to conform to the local preference for a given type of mate have a strong and multigenerational fitness advantage. This is because once females within one subpopulation locally show a bias for one type of males, immigrant females who do not conform to the local trend have sons, grandsons, etc. of the non-preferred phenotype, which negatively and cumulatively affects fitness over generations in a process reminiscent of the Fisher runaway process. This led us to suggest a sex-driven origin of conformity, indicating a possible evolutionary route towards animal and human culture that is rooted in the basic, and thus ancient, social constraints acting on mating preferences within a metapopulation. In a generic model, we show that dispersal among subpopulations within a metapopulation can effectively maintain independent Fisher runaway processes within subpopulations, while favouring the evolution of social learning and conformity at the metapopulation scale; both being essential for the evolution of long-lasting local traditions. The proposed evolutionary route to social learning and conformity casts surprising light on one of the major processes that much later participated in making us human. We further highlight several research avenues to define the spectrum of conformity better, and to account for its complexity. Future studies of conformity should incorporate experimental manipulation of group majority. We also encourage the study of potential links between conformity and mate copying, animal aggregations, and collective actions. Moreover, validation of the sex-driven origin of conformity will rest on the capacity of human and evolutionary sciences to investigate jointly the origin of social learning and conformity. This constitutes a stimulating common agenda and militates for a rapprochement between these two currently largely independent research areas. abstract_id: PUBMED:36287291 Brain Systems Underlying Fundamental Motivations of Human Social Conformity. From birth to adulthood, we often align our behaviors, attitudes, and opinions with a majority, a phenomenon known as social conformity. A seminal framework has proposed that conformity behaviors are mainly driven by three fundamental motives: a desire to gain more information to be accurate, to obtain social approval from others, and to maintain a favorable self-concept. Despite extensive interest in neuroimaging investigation of social conformity, the relationship between brain systems and these fundamental motivations has yet to be established. Here, we reviewed brain imaging findings of social conformity with a componential framework, aiming to reveal the neuropsychological substrates underlying different conformity motivations. First, information-seeking engages the evaluation of social information, information integration, and modification of task-related activity, corresponding to brain networks implicated in reward, cognitive control, and tasks at hand. Second, social acceptance involves the anticipation of social acceptance or rejection and mental state attribution, mediated by networks of reward, punishment, and mentalizing. Third, self-enhancement entails the excessive representation of positive self-related information and suppression of negative self-related information, ingroup favoritism and/or outgroup derogation, and elaborated mentalizing processes to the ingroup, supported by brain systems of reward, punishment, and mentalizing. Therefore, recent brain imaging studies have provided important insights into the fundamental motivations of social conformity in terms of component processes and brain mechanisms. abstract_id: PUBMED:35080146 Different drives of herding: An exploratory study of motivations underlying social conformity. We often align our behaviors, attitudes, and opinions in line with a majority of others, a phenomenon known as "social conformity." A seminal framework has proposed that conformity behaviors are mainly driven by three fundamental motives: a desire to gain more information to be accurate, to obtain social approval from others, and to maintain a favorable self-concept. However, previous studies usually have interpreted conformity behaviors as driven by one motive or another, largely ignoring the fact that human behaviors could be concurrently induced by multiple and even conflicting motivations. Adopting a typical conformity paradigm widely used in previous studies, we explored distinct and concurrent motives underlying the same conformity behavior, combining personality and individual differences with more nuanced analyses of observed conformity behaviors. Our findings provide novel evidence to show that three motivations exist within a single conformity behavior, suggesting that multiple motivations drive the conformity concurrently. These findings provide a potential solution for the extensive debate about what drives human social conformity and help to better understand the conformity behavior in daily life. abstract_id: PUBMED:14744228 Social influence: compliance and conformity. This review covers recent developments in the social influence literature, focusing primarily on compliance and conformity research published between 1997 and 2002. The principles and processes underlying a target's susceptibility to outside influences are considered in light of three goals fundamental to rewarding human functioning. Specifically, targets are motivated to form accurate perceptions of reality and react accordingly, to develop and preserve meaningful social relationships, and to maintain a favorable self-concept. Consistent with the current movement in compliance and conformity research, this review emphasizes the ways in which these goals interact with external forces to engender social influence processes that are subtle, indirect, and outside of awareness. abstract_id: PUBMED:30219289 Quantifying compliance and acceptance through public and private social conformity. Social conformity is a class of social influence whereby exposure to the attitudes and beliefs of a group causes an individual to alter their own attitudes and beliefs towards those of the group. Compliance and acceptance are varieties of social influence distinguished on the basis of the attitude change brought about. Compliance involves public, but not private conformity, while acceptance occurs when group norms are internalised and conformity is demonstrated both in public and in private. Most contemporary paradigms measuring conformity conflate compliance and acceptance, while the few studies to have addressed this issue have done so using between-subjects designs, decreasing their sensitivity. Here we present a novel task which measures compliance and acceptance on a within-subjects basis. Data from a small sample reveal that compliance and acceptance can co-occur, that compliance is increased with an increasing majority, and demonstrate the usefulness of the task for future studies of conformity. abstract_id: PUBMED:30498121 Cultural flies: Conformist social learning in fruitflies predicts long-lasting mate-choice traditions. Despite theoretical justification for the evolution of animal culture, empirical evidence for it beyond mammals and birds remains scant, and we still know little about the process of cultural inheritance. In this study, we propose a mechanism-driven definition of animal culture and test it in the fruitfly. We found that fruitflies have five cognitive capacities that enable them to transmit mating preferences culturally across generations, potentially fostering persistent traditions (the main marker of culture) in mating preference. A transmission chain experiment validates a model of the emergence of local traditions, indicating that such social transmission may lead initially neutral traits to become adaptive, hence strongly selecting for copying and conformity. Although this situation was suggested decades ago, it previously had little empirical support. Answer: The maintenance of traditions in marmosets appears to be more influenced by individual habit rather than social conformity. A field experiment with free-ranging marmosets (Callithrix jacchus) investigated whether social conformity was necessary for the maintenance of behavioral patterns within groups. The study found that after a training phase where marmoset groups were exposed to only one technique to obtain a reward, they reliably maintained the trained method over a period of three weeks, even after discovering an alternative technique. This suggests that the maintenance of behavioral patterns could be explained by the first rewarded manipulation and subsequent habit formation, rather than the need for social conformity as a mechanism. The results indicated that after an initial spread of a behavior throughout a group, the mechanism of individual habit formation may lead to a superficial appearance of conformity without the actual involvement of a socially and cognitively complex mechanism like social conformity (PUBMED:19223965).
Instruction: Reducing an already low dental diagnostic X-ray dose: does it make sense? Abstracts: abstract_id: PUBMED:23406732 Dental diagnostic X-ray exposure and risk of benign and malignant brain tumors. Background: This study evaluates the risk of benign brain tumors (BBTs) and malignant brain tumors (MBTs) associated with dental diagnostic X-ray, using a large population-based case-control study. Materials And Methods: We identified 4123 BBT cases and 16 492 controls without BBT (study 1) and 197 MBT cases and 788 controls without MBT (study 2) from Taiwan National Health Insurance claim data. The risks of both types of tumor were estimated in association with the frequency of received dental diagnostic X-ray. Results: The mean ages were ~44.2 years in study 1 and 40.6 years in study 2. Multivariable unconditional logistic regression analysis showed that the risk of BBT increases as the frequency of received dental diagnostic X-ray increases. The BBT odds ratio increased from 1.33 [95% confidence interval (CI) 1.22-1.44] for those with annual mean X-ray examination of less than one to 1.65 (95% CI 1.37-1.98) for those with three or more X-ray examinations, after controlling for comorbidities. No significant association was found between MBTs and dental diagnostic X-ray exposure. Conclusions: Exposure to dental diagnostic X-rays in oral and maxillofacial care increases the risk of BBTs, but not MBTs. abstract_id: PUBMED:26119214 Reducing an already low dental diagnostic X-ray dose: does it make sense? Comparison of three cost-utility analysis methods used to assess two dental dose-reduction measures. Objectives: To find a method that is suitable for providing an objective assessment of the cost effectiveness of a dose-reducing measure used for diagnostic dental X-ray exposures. Methods: Three cost-utility analysis (CUA) methods were evaluated by comparing their assessments of two dose-reduction measures, a rectangular collimator and the combination of two devices that reduce the radiation dose received during orthodontic lateral cephalography. The following CUA methods were used: (1) the alpha value (AV), a monetary valuation of dose reduction used in the nuclear industry; (2) the value of a statistical life for valuation of the reduction in stochastic adverse effects; and (3) the time-for-time method, based on the postulate that risk reduction is effective when the number of years of life gained is more than the years that an average worker must work to earn the costs of the risk-reducing measure. The CUA methods were used to determine the minimum number of uses that was required for the dose-reducing device to be cost effective. The methods were assessed for coherence (are comparable results achieved for comparable countries?) and adaptability (can the method be adjusted for age and gender of specific patient groups?). Results: The performance of the time-for-time method was superior to the other methods. Both types of dose-reduction devices tested were assessed as cost effective after a realistic number of uses with all three methods except low AVs. Conclusions: CUA for the methods of X-ray dose reduction can be performed to determine if investment in low dose reduction is cost effective. The time-for-time method proved to be a coherent and versatile method for performing CUA. abstract_id: PUBMED:36766499 Frequency of Dental X-ray Diagnostics in Children and Adolescents: What Is the Radiation Exposure? Children are exposed to ionizing radiation through radiographs during their development for various reasons. At present, there are no officially valid reference values for dental X-rays in children and adolescents for dental X-ray diagnostics. This study retrospectively examined 9680 extraoral dental radiographs in pediatric patients between 2002 and 2020. The aim was to analyze the radiation doses in pediatric patients, which indications were used, and whether there were specific age and gender differences. The evaluation showed that radiation doses were considered low, with dose area products of 2.2 cGy × cm2 for a lateral cephalogram, 14 cGy × cm2 for an orthopantomogram (OPG), and 45 cGy × cm2 for cone beam computer tomography (CBCT). This corresponds to an effective dose of 1.5 μSv for a lateral cephalogram, 7 μSv for an OPG, and 33.8 μSv for CBCT. Of the 9680 images, 78% were orthopantomograms, and only 0.4% were CBCT images. OPG has become more important over the years, as reflected in the indication. Approximately one-third of all extraoral exposures are orthodontic indications. Overall, the indications were similar for both genders. According to the dental indications, boys were X-rayed slightly more frequently than girls (54.5-45.5%). A future publication of dose guide values and corresponding guidelines is of high priority. abstract_id: PUBMED:12035954 A comparison of Kodak Ultraspeed and Ektaspeed plus dental X-ray films for the detection of dental caries. Background: Using the fastest dental X-ray film available is an easy way of reducing exposure to ionizing radiation. However, the diagnostic ability of fast films for the detection of proximal surface caries must be demonstrated before these films will become universally accepted. Methods: Extracted premolar and molar teeth were arranged to simulate a bitewing examination and radiographed using Ultraspeed and Ektaspeed Plus dental X-ray films. Three different exposure times were used for each film type. Six general dentists were used to determine the presence and depth of the decay in the proximal surfaces of the teeth radiographed. The actual extent of the decay in the teeth was determined by sectioning the teeth and examining them under a microscope. Results: There was no significant difference between the two films for the mean correct diagnosis. However, there was a significant difference between the means for the three exposure times used for Ultraspeed film. The practitioners used were not consistent in their ability to make a correct diagnosis, or for the film for which they got the highest correct diagnosis. Conclusions: Ektaspeed Plus dental X-ray film is just as reliable as Ultraspeed dental X-ray film for the detection of proximal surface decay. The effect of underexposure was significant for Ultraspeed, but not for Ektaspeed Plus. Patient exposure can be reduced significantly with no loss of diagnostic ability by changing from Ultraspeed X-ray film to Ektaspeed Plus X-ray film. abstract_id: PUBMED:36109403 Recognition of Digital Dental X-ray Images Using a Convolutional Neural Network. Digital dental X-ray images are an important basis for diagnosing dental diseases, especially endodontic and periodontal diseases. Conventional diagnostic methods depend on the experience of doctors, so they are highly subjective and consume more energy than other approaches. The current computer-aided interpretation technology has low accuracy and poor lesion classification. This study proposes an efficient and accurate method for identifying common lesions in digital dental X-ray images by a convolutional neural network (CNN). In total, 188 digital dental X-ray images that were previously diagnosed as periapical periodontitis, dental caries, periapical cysts, and other common dental diseases by dentists in Qilu Hospital of Shandong University were collected and augmented. The images and labels were inputted into four CNN models for training, including visual geometry group (VGG)-16, InceptionV3, residual network (ResNet)-50, and densely connected convolutional networks (DenseNet)-121. The average classification accuracy of the four trained network models on the test set was 95.9%, while the classification accuracy of the trained DenseNet-121 network model reached 99.5%. It is demonstrated that the use of CNNs to interpret digital dental X-ray images is an efficient and accurate way to conduct auxiliary diagnoses of dental diseases. abstract_id: PUBMED:32744623 INTRAORAL DENTAL X-RAY RADIOGRAPHY IN BOSNIA AND HERZEGOVINA: STUDY FOR REVISING DIAGNOSTIC REFERENCE LEVEL VALUE. This paper presents the study that would allow a revision of the currently valid diagnostic reference level in intraoral dental X-ray radiography in Bosnia and Herzegovina. The study was conducted for six procedures that are used to capture incisor, premolar and molar teeth, of both maxilla and mandible, in adults. Measurements were performed on 41 intraoral X-ray devices, 20 of which were systems with digital image receptor and 21 film-based X-ray systems. In this study, incident air kerma (Ki) and air kerma-area product (PKA) were used as patient dose descriptors. The third quartile (Q3) of the measured incident air-kerma values was used to compare with the current national diagnostic reference level (DRL) for intraoral dental procedures. The obtained results of Q3 for both types of devices, with film-based and digital image receptors, have shown that the third quartile values are lower (3.5 and 1.2 mGy, respectively) than the current national DRL (7.0 mGy). Hence, new data can be used to re-establish the diagnostic reference levels in intraoral dental X-ray examinations in Bosnia and Herzegovina. abstract_id: PUBMED:16482026 Diagnostic value of X-ray procedures in dental implantations The possibilities of modern X-ray procedures at different stages of dental implantation planning and consequent postoperative control were presented depending upon real clinical situation. abstract_id: PUBMED:3473039 Utilize your dental X-ray set better. The dental X-ray set should not be used in a stereotyped way, but should be regarded as an instrument with a wealth of potential. For its optimal utilization the operator should be familiar with the principles of projection geometry and of image formation. The diagnostic problem should be analysed so that the determination of projections and angulations of the X-ray beam will result in the maximum information. To ascertain true morphology and interpret the radiographic images three-dimensionally multiple views are required. Distortion is a disturbing factor in radiographic image formation and methods minimizing distortion are to be preferred. The benefits of occlusal radiography are emphasized. In order to limit the detrimental effects of ionizing radiation X-ray examinations should not be used unless the outcome represents a net benefit to the patient. However, it is wrong policy to restrict the number of radiographs in dental practice if substantial indications for radiography exist. abstract_id: PUBMED:37016185 Establishment of a local diagnostic reference level for dental intraoral bitewing X-rays. A state-based local diagnostic reference level (LDRL) for dental intraoral X-rays has been established. LDRL values of 2.0 mGy incident air kerma (IAK) and 57 mGy•cm2 air kerma-area product (KAP) for an adult posterior bitewing X-ray were determined based on 811 X-ray units surveyed. This IAK LDRL value is greater than those established in several other nations and regions around the world in similar studies. Analyses of radiographic technique and equipment usage are included to provide broad guidance as to ways that imaging could be optimised, such as in the selection of exposure factors, collimators, image receptors, and X-ray unit models. abstract_id: PUBMED:11836821 Are conventional dental x-ray films a thing of the past? Dental x-ray film is still the most important medium for the detection and storage of x-ray images. However, just as the use of computers cannot be stopped, digital techniques will take over the role of dental x-ray films in dental radiology. Answer: Reducing an already low dental diagnostic X-ray dose is a topic of discussion in the field of dental radiology. The rationale behind further dose reduction is to minimize the exposure to ionizing radiation, which is a known risk factor for various health issues, including the development of benign brain tumors (BBTs) as indicated by a study that found an increased risk of BBTs with increased frequency of dental diagnostic X-ray exposure (PUBMED:23406732). However, the question of whether it makes sense to reduce an already low dose is complex. On one hand, dental X-ray doses are considered low, with studies showing that the radiation doses from various dental X-ray diagnostics like lateral cephalograms, orthopantomograms (OPGs), and cone beam computed tomography (CBCT) are relatively low (PUBMED:36766499). On the other hand, there is a continuous effort to optimize radiation protection and ensure that doses are kept as low as reasonably achievable (ALARA principle). A study comparing three cost-utility analysis (CUA) methods to assess the cost-effectiveness of dental dose-reduction measures found that both types of dose-reduction devices tested were assessed as cost-effective after a realistic number of uses with all three methods except low alpha values (AVs). The time-for-time method was found to be superior and coherent, suggesting that it is possible to perform CUA for methods of X-ray dose reduction to determine if investment in low dose reduction is cost-effective (PUBMED:26119214). Given the potential risks associated with radiation exposure, even at low doses, and the availability of cost-effective dose-reduction measures, it does make sense to pursue further reductions in dental diagnostic X-ray doses. This approach aligns with the principle of minimizing patient exposure while maintaining diagnostic quality, as demonstrated by the successful use of faster dental X-ray films like Ektaspeed Plus, which reduces patient exposure without loss of diagnostic ability (PUBMED:12035954). Additionally, advancements in digital imaging and the use of convolutional neural networks (CNNs) for interpreting dental X-ray images can further enhance diagnostic accuracy and potentially reduce the need for repeated exposures (PUBMED:36109403). In conclusion, while dental diagnostic X-ray doses are already low, further reduction is justified and can be achieved through various means, including the use of dose-reduction devices, faster films, and digital imaging technologies, all of which can contribute to improved patient safety and cost-effectiveness.
Instruction: Can tactile plantar stimulation improve postural control of persons with superficial plantar sensory deficit? Abstracts: abstract_id: PUBMED:19225271 Can tactile plantar stimulation improve postural control of persons with superficial plantar sensory deficit? Background And Aims: Complex interactions between visual, vestibular and somatosensory information and the cerebellar system are involved in the maintenance of upright posture. Previous studies have shown that normal aging and pathologies may lead to deterioration of the control of upright standing posture. Methods: In order to investigate postural control during quiet standing in the elderly, the center of pressure (CoP) was analysed on two force platforms in three different groups. The first group was composed of 13 healthy older adults (O), the second consisted of 9 older adults with plantar sole deficit (OD), and the third (control) group was composed of 8 young healthy subjects (Y). All subjects were tested with eyes closed, before and after tactile plantar stimulation lasting ten minutes. Center of pressure displacements were analyzed in terms of sway area, mean velocity, and mean root mean square (RMS) along both mediolateral and antero-posterior axes. Results: Results showed that, before tactile plantar stimulation, the sway area and mean RMS were greater in O and OD subjects compared with Y ones. After tactile plantar stimulation, a decrease in the mean RMS was observed in OD subjects, this effect being significant only for the medio-lateral axis. Conclusions: These results suggest that application of tactile plantar stimulation may compensate a loss of superficial plantar sensitivity. abstract_id: PUBMED:34719316 Plantar sensation, plantar pressure, and postural stability alterations and effects of visual status in older adults. Purpose: Ageing leads to plantar sensation and pressure alterations and poor postural control. The aim of this study was to compare the plantar sensation and static plantar pressure distribution between young and older adults. A secondary aim was to investigate the effect of ageing and visual status on postural stability. Materials And Methods: Forty older subjects and 43 young adult individuals participated in the study. Plantar light touch sensation was evaluated using Semmes-Weinstein monofilaments. Static plantar pressure and postural stability were assessed with the WinTrack® Pedobarography device. Results: Plantar sensation thresholds of the older individuals were higher compared to the young in all plantar regions (p &lt; 0.001). The plantar contact area was greater in older individuals (p &lt; 0.001). Maximum plantar pressure of midfoot was higher and maximum plantar pressure of the rearfoot and whole foot was less in older individuals during quiet stance (p &lt; 0.05). The main effects of group and visual condition were significant for mean latero-lateral and antero-posterior sway speed with large effect sizes (p &lt; 0.05). Conclusions: The sensation of all plantar regions reduced, the rearfoot plantar pressure decreased, and the midfoot plantar pressure increased in older individuals compared to young. Postural stability was reduced in the older individuals, and their postural control was more affected by the eliminated visual information compared to the young. Increased plantar contact area and midfoot plantar pressure may be related to decreased MLA height in older individuals. Older individuals may need visual information more to maintain postural control because of reduced plantar sensation. abstract_id: PUBMED:30639034 How can the stimulation of plantar cutaneous receptors improve postural control? Review and clinical commentary. Postural control requires constant and subconscious postural sway to manage balance and achieve postural stability. These movements of regulation are based in particular on cutaneous plantar information. The foot constitutes a functional whole that participates in the mechanisms of postural control and regulation. It represents the direct interface between the body and the ground during quiet standing, and plantar cutaneous information contributes to postural control. Upright balance mechanically depends on the gravitational torque produced by the forces of gravity and reaction of the ground. In this context, the foot behaves like a sensory system for postural regulation whose objective is to maintain a state of stability within a changing and constraining environment. There is a relation between balance improvement and the facilitation of sensory feedback related to the activation of the plantar cutaneous mechanoreceptors. From a clinical point of view, the application of additional tactile cues may have therapeutic benefits in relation to fall prevention, or to improve specific types of chronic pain. abstract_id: PUBMED:27242490 How Plantar Exteroceptive Efficiency Modulates Postural and Oculomotor Control: Inter-Individual Variability. In a previous experiment, we showed that among young and healthy subjects, thin plantar inserts improve postural control and modify vergence amplitudes. In this experiment, however, significant inter-individual variability was observed. We hypothesize that its origin could be attributed to a different reliance upon feet cutaneous afferents. In order to test this hypothesis, we re-analyzed the data relative to 31 young (age 25.7 ± 3.8) and healthy subjects who participated in the first experiment after having classified them into two groups depending on their Plantar Quotient (PQ = Surface area of CoPfoam/Surface area of CoPfirm ground × 100). Foam decreases the information arising from the feet, normally resulting in a PQ &gt; 100. Hence, the PQ provides information on the weight of plantar cutaneous afferents used in postural control. Twelve people were Plantar-Independent Subjects, as indicated by a PQ &lt; 100. These individuals did not behave like the Normal Plantar Quotient Subjects: they were almost insensitive to the plantar stimulations in terms of postural control and totally insensitive in terms of oculomotor control. We conclude that the inter-individual variability observed in our first experiment is explained by the subjects' degree of plantar reliance. We propose that plantar independence is a dysfunctional situation revealing inefficiency in plantar cutaneous afferents. The latter could be due to a latent somatosensory dysfunction generating a noise which prevents the CNS from correctly processing and using feet somatosensory afferents both for balance and vergence control: Plantar Irritating Stimulus. Considering the non-noxious nature and prevalence of this phenomenon, these results can be of great interest to researchers and clinicians who attempt to trigger postural or oculomotor responses through mechanical stimulation of the foot sole. abstract_id: PUBMED:25242449 Impaired plantar sensitivity among the obese is associated with increased postural sway. Impaired foot plantar sensitivity has been hypothesized among individuals who are obese, and may contribute to their impaired balanced during quiet standing. The objective of this study was to investigate the effects of obesity on plantar sensitivity, and explore the relationship between plantar sensitivity and balance during quiet standing. Thirty-nine young adults from the university population participated in the study including 19 obese and 20 non-obese adults. Plantar sensitivity was measured as the force threshold at which an increasing force applied to the plantar surface of the foot was first perceived, and the force threshold at which a decreasing force was last perceived. Measurements were obtained while standing, and at two locations on the plantar surface of the dominant foot. Postural sway during quiet standing was then measured under three different sensory conditions. Results indicated less sensitive plantar sensitivity and increased postural sway among the obese, and statistically significant correlations between plantar sensitivity and postural sway that were characterized as weak to moderate in strength. As such, impaired plantar sensitivity among individuals who are obese may be a mechanism by which obesity degrades standing balance among these individuals. abstract_id: PUBMED:27760494 Influence of dental occlusion on postural control and plantar pressure distribution. Objective: The number of studies investigating correlations between the temporomandibular system and body posture, postural control or plantar pressure distribution is continuously increasing. If a connection can be found, it is often of minor influence or for only a single parameter. However, small subject groups are critical. This study was conducted to define correlations between dental parameters, postural control and plantar pressure distribution in healthy males. Methods: In this study, 87 male subjects with an average age of 25.23 ± 3.5 years (ranging from 18 to 35 years) were examined. Dental casts of the subjects were analyzed. Postural control and plantar pressure distribution were recorded by a force platform. Possible orthodontic and orthopedic factors of influence were determined by either an anamnesis or a questionnaire. All tests performed were randomized and repeated three times each for intercuspal position (ICP) and blocked occlusion (BO). For a statistical analysis of the results, non-parametric tests (Wilcoxon-Matched-Pairs-Test, Kruskall-Wallis-Test) were used. A revision of the results via Bonferroni-Holm correction was considered. Results: ICP increases body sway in the frontal (p ≤ 0.01) and sagittal planes (p ≤ 0.03) compared to BO, whereas all other 29 correlations were independent of the occlusion position. For both of the ICP or BO cases, Angle-class, midline-displacement, crossbite, or orthodontic therapy were found to have no influence on postural control or plantar pressure distribution (p &gt; 0.05). However, the contact time of the left foot decreased (p ≤ 0.001) while detecting the plantar pressure distribution in each position. Conclusions: Persistent dental parameters have no effect on postural sway. In addition, postural control and plantar pressure distribution have been found to be independent postural criteria. abstract_id: PUBMED:28722491 Comparative Effectiveness of Plantar-Massage Techniques on Postural Control in Those With Chronic Ankle Instability. Context: One of the major concerns after an acute lateral ankle sprain is the potential for development of chronic ankle instability (CAI). The existing research has determined that clinician-delivered plantar massage improves postural control in those with CAI. However, the effectiveness of self-administered treatments and the underlying cause of any improvements remain unclear. Objectives: To determine (1) the effectiveness of a self-administered plantar-massage treatment in those with CAI and (2) whether the postural-control improvements were due to the stimulation of the plantar cutaneous receptors. Design: Crossover study. Setting: University setting. Patients Or Other Participants: A total of 20 physically active individuals (6 men and 14 women) with self-reported CAI. Intervention(s): All participants completed 3 test sessions involving 3 treatments: a clinician-delivered manual plantar massage, a patient-delivered self-massage with a ball, and a clinician-delivered sensory brush massage. Main Outcome Measure(s): Postural control was assessed using single-legged balance with eyes open and the Star Excursion Balance Test. Results: Static postural control improved (P ≤ .014) after each of the interventions. However, no changes in dynamic postural control after any of the interventions were observed (P &gt; .05). No differences were observed between a clinician-delivered manual plantar massage and either a patient-delivered self-massage with a ball or a clinician-delivered sensory brush massage in any postural-control outcome. Conclusions: In those with CAI, single 5-minute sessions of traditional plantar massage, self-administered massage, and sensory brush massage each resulted in comparable static postural-control improvements. The results also provide empirical evidence suggesting that the mechanism for the postural-control improvements is the stimulation of the plantar cutaneous receptors. abstract_id: PUBMED:28535097 Comparative Effectiveness of Plantar-Massage Techniques on Postural Control in Those With Chronic Ankle Instability. Context: One of the major concerns after an acute lateral ankle sprain is the potential for development of chronic ankle instability (CAI). The existing research has determined that clinician-delivered plantar massage improves postural control in those with CAI. However, the effectiveness of self-administered treatments and the underlying cause of any improvements remain unclear. Objectives: To determine (1) the effectiveness of a self-administered plantar-massage treatment in those with CAI as well and (2) whether the postural-control improvements were due to the stimulation of the plantar cutaneous receptors. Design: Crossover study. Setting: University setting. Patients Or Other Participants: A total of 20 physically active individuals (6 men and 14 women) with self-reported CAI. Intervention(s): All participants completed 3 test sessions involving 3 treatments: a clinician-delivered manual plantar massage, a patient-delivered self-massage with a ball, and a clinician-delivered sensory brush massage. Main Outcome Measure(s): Postural control was assessed using single-legged balance with eyes open and the Star Excursion Balance Test. Results: Static postural control improved (P ≤ .014) after each of the interventions. However, no changes in dynamic postural control after any of the interventions were observed (P &gt; .05). No differences were observed between a clinician-delivered manual plantar massage and either a patient-delivered self-massage with a ball or a clinician-delivered sensory brush massage in any postural-control outcome. Conclusions: In those with CAI, single 5-minute sessions of traditional plantar massage, self-administered massage, and sensory brush massage each resulted in comparable static postural-control improvements. The results also provide empirical evidence suggesting that the mechanism for the postural-control improvements is the stimulation of the plantar cutaneous receptors. abstract_id: PUBMED:24704653 The effect of transcutaneous electrical nerve stimulation on postural sway on fatigued dorsi-plantar flexor. The application of transcutaneous electrical nerve stimulation (TENS) enhances muscle weakness and static balance by muscle fatigue. It was said that TENS affects decrease of the postural sway. On the other hand, the applications of TENS to separate dorsi-plantar flexor and the comparison with and without visual input have not been studied. Thus, the aim of this study was to compare the effects of TENS on fatigued dorsi-plantar flexor with and without visual input. 13 healthy adult males and 12 females were recruited and agreed to participate as the subject (mean age 20.5 ± 1.4, total 25) in this study after a preliminary research. This experiment was a single group repeated measurements design in three days. The first day, after exercise-induced fatigue, the standing position was maintained for 30 minutes and then the postural sway was measured on eyes open(EO) and eyes closed(EC). The second, TENS was applied to dorsi flexor in standing position for 30 minutes after conducting exercise-induced fatigue. On the last day, plantar flexor applied by TENS was measured to the postural sway on EO and EC after same exercise-induced fatigue. The visual input was not statistically difference between the groups. However, when compared of dorsi-plantar flexor after applied to TENS without visual input, the postural sway of plantar flexor was lower than the dorsi flexor (p&lt; 0.05). As the result, the application of TENS in GCM clinically decreases the postural sway with visual input it helps to stable posture control and prevent to falling down. abstract_id: PUBMED:36463613 Impact of plantar fasciitis on postural control and walking in young middle-aged adults. Purpose: To assess if plantar fasciitis has an impact on postural control and walking pattern from gait analysis across different experimental conditions. Methods: Thirty participants (n = 15 with plantar fasciitis) performed 5 different balance tasks on a force platform, and the center of pressure (COP) was computed for postural control analysis. Participants were also asked to walk at 3 different speeds on a gait analysis system to compute the spatial-temporal parameters. Clinical foot measurements (pain, mobility) were also collected through all participants. Results: Clinical foot measurements showed no significant difference between the two groups; except for pain palpation in plantar fasciitis group. Significant differences were observed between the two groups for COP area displacement sway (p &lt; 0.01; d = 0.08) and velocity (p = 0.022; d = 0.04), where the fasciitis group reported poorer postural control than control mainly during more challenging balance tasks (semi-tandem, unipodal). Plantar fasciitis group reported a decrease of gait velocity (p &lt; 0.01; d = 0.12), step length (p &lt; 0.01; d = 0.16) and step width (p &lt; 0.01; d = 0.18) when compared to the healthy group across walking speed tests. Conclusions: Individuals with plantar fasciitis report poor postural control and changes in walking pattern across three speeds performance. Answer: Yes, tactile plantar stimulation can improve postural control in persons with superficial plantar sensory deficit. A study that analyzed the center of pressure (CoP) in older adults with plantar sole deficit (OD) found that after tactile plantar stimulation, there was a significant decrease in the mean root mean square (RMS) for the medio-lateral axis, suggesting that tactile plantar stimulation may compensate for a loss of superficial plantar sensitivity (PUBMED:19225271). Additionally, the stimulation of plantar cutaneous receptors is known to contribute to postural control, and the application of additional tactile cues may have therapeutic benefits in relation to fall prevention or to improve specific types of chronic pain (PUBMED:30639034). Moreover, research has shown that clinician-delivered plantar massage, self-administered massage, and sensory brush massage each resulted in comparable static postural-control improvements in individuals with chronic ankle instability (CAI), providing empirical evidence that the mechanism for postural-control improvements is the stimulation of the plantar cutaneous receptors (PUBMED:28535097, PUBMED:28722491). Therefore, the evidence supports the notion that tactile plantar stimulation can be an effective intervention to improve postural control in individuals with deficits in plantar sensation.
Instruction: Access to paid in-home assistance among disabled elderly people: do Latinos differ from non-Latino whites? Abstracts: abstract_id: PUBMED:7604922 Access to paid in-home assistance among disabled elderly people: do Latinos differ from non-Latino whites? Objectives: The purpose of this study was to compare the national prevalences and predictors of paid in-home functional assistance among disabled Latino and non-Latino elderly people who receive such assistance. Methods: Data were derived from the 1988 wave of the National Center for Health Statistics Longitudinal Study on Aging and the 1988 Commonwealth Fund Commission Survey of Elderly Hispanics. Logistic regression was used to model paid care use and to calculate estimated probabilities of such use. Results: Among Latino and non-Latino Whites 74 years of age and older who received functional assistance, similar proportions used paid assistance. Predictors of paid care coincided with established models for non-Latino Whites only. Disabled Latinos had a lower estimated probability of using paid assistance when they were highly disabled and socially isolated but had a higher estimated probability when their children lived nearby. Conclusions: The effects of disability and social support differ among non-Latino White and Latino elderly people. Latino elderly people with high anticipated needs obtain less paid assistance than similar non-Latino Whites. In addition to a reduction in financial barriers, improving access to long-term care services requires addressing this diversity in service use patterns. abstract_id: PUBMED:1837037 Will paid home care erode informal support? One of the main barriers to the expansion of paid home care for the chronically disabled is the fear that policymakers have that it will cause friends and relatives to curtail their informal caregiving efforts. Using the first wave of the National Long-Term Care Survey, we examine whether the amount of paid home care used by disabled elderly persons had a significant influence on the amount of informal support they were receiving. Results from a two-stage least squares regression analysis suggest that the amount of informal home care received was not significantly affected by the level of formal care. This conclusion held for subgroups of formal care users most likely to exhibit substitution: those without cognitive problems, the disabled elderly with above average income, and persons who lived alone. Even the more severely disabled elderly, who are the target of most proposals to expand paid home care, did not substitute paid care for unpaid. Thus, our study suggests that an increase in paid home care will not erode informal support. abstract_id: PUBMED:25642078 The role of physical therapists in introducing assistive products for the home-bound elderly disabled. [Purpose] This study explored the roles of physical therapists (PTs) in introducing assistive products, which are essential for living securely and stably at home, and examined how PTs can fulfill these roles more efficiently and effectively. [Subjects and Methods] A questionnaire on introducing assistive products was administered to PTs working at randomly selected hospitals, health-care facilities for the elderly requiring long-term care, home-visit nursing stations, and other such facilities and to PTs providing physical therapy directly to the home-bound elderly disabled. The subjects of the study were 77 PTs who returned valid responses. [Results] For awareness of systems for assistive product's introduction, PTs were more aware of the system based on the Long-Term Care (LTC) Insurance Act than the system based on the Act on Welfare for the Home-Bound Elderly Disabled. For PTs handling assistive product's introduction for the home-bound elderly disabled, approximately 91% of the respondents answered that they had handled some cases of assistive product's introduction, and PTs with longer clinical experience had handled more assistive product's introduction cases. [Conclusion] The results demonstrated that PTs understand the work involved in introducing assistive products work well and that they handle it. The results, however, also suggested that educational and operational improvements are urgently required for PTs introducing assistive products essential for the lives of the home-bound elderly disabled. abstract_id: PUBMED:21432469 Depressive states and health problems in caregivers of the disabled elderly at home. This study investigated the prevalence of depression and health problems in caregivers for the disabled elderly at home, and identified factors associated with the risk of depression. Four hundred thirty-four caregivers for disabled elderly persons in a suburban city were surveyed by in-home interviews. Two hundred ten non-caregivers were also interviewed as an age and gender-matched control group. Overall, 16.8% of caregivers met the DSM-III-R criteria for major depression and 10.6% met the criteria for depressive disorders not otherwise specified, for a total of 27.4% meeting the criteria for affective disorders, while 8.6% of controls met the criteria for depressive disorders not otherwise specified. The prevalence of major depression in caregivers was associated with mental problems and ADL limitations of the elderly. Moreover, non-cooperation by other family members or relatives contributed to an increase in the prevalence of depression. Caregivers who met the criteria for major depression had a higher frequency and longer duration of upper respiratory infections than other caregivers or controls, and showed greater fatigue than the other caregivers or controls. Approximately 90% of caregivers who met the criteria for major depression had also insomnia. Overall, it was shown that caregivers who met the criteria for major depressin had not only emotional distress but also higher morbidity. abstract_id: PUBMED:1413781 Getting by at home. Community-based long-term care of Latino elders. Although evidence suggests that the morbidity and mortality of Latino elders (of any Hispanic ancestry) are similar to those of non-Latino whites, Latinos have higher rates of disability. Little is known about influences on the use of in-home health services designed to assist disabled Latino elders. We examine the effects of various cultural and structural factors on the use of visiting nurse, home health aide, and homemaker services. Data are from the Commonwealth Fund Commission's 1988 national survey of 2,299 Latinos aged 65 and older. Mexican-American elders are less likely than the average Latino to use in-home health services despite similar levels of need. Structural factors including insurance status are important reasons, but acculturation is not pertinent. Physicians should not assume that Latino families are taking care of their disabled elders simply because of a cultural preference. They should provide information and advice on the use of in-home health services when an older Latino patient is physically disabled. abstract_id: PUBMED:24259858 The Role of Physical Therapists in Living Environment Maintenance of the Home-bound Elderly Disabled. [Purpose] This study explored the roles of physical therapists (PTs) in living environment maintenance, which is essential for living securely and stably at home, and examines how physical therapists can fulfill these roles more efficiently and effectively. [Subjects and Methods] A questionnaire on living environment maintenance was given to PTs working at randomly selected hospitals, health care facilities for the elderly requiring long-term care, home-visit nursing stations, and other such facilities and directly providing physical therapy to the home-bound elderly disabled. The subjects of the study were 77 PTs who returned valid responses. [Results] For awareness of systems for living environment maintenance, PTs were more aware of the system based on the Long-Term Care (LTC) Insurance Act than the system based on the Act on Welfare for the Home-Bound Elderly Disabled. PTs who have worked at two or more types of medical, welfare, and intermediate institutions were more aware of such systems than PTs who have worked at only one type. For PTs handling living environment maintenance for the home-bound elderly disabled, approximately 80% of respondents answered that they have handled some living environment maintenance, and PTs with longer clinical experience have handled more living environment maintenance cases. [Conclusion] The results demonstrated that PTs understand their living environment maintenance work well and handle the work. The results, however, also suggested that educational and operational improvements are urgently required for PTs handling living environment maintenance essential for the lives of the home-bound elderly disabled. abstract_id: PUBMED:35883031 The multi-subject cooperation mechanism of home care for the disabled elderly in Beijing: a qualitative research. Background: Currently, population aging has been an obstacle and the spotlight for all countries. Compared with developed countries, problems caused by China's aging population are more prominent. Beijing, as a typical example, is characterized by advanced age and high disability rate, making this capital city scramble to take control of this severe problem. The main types of care for the disabled elderly are classified as home care, institutional care, and community care. With the obvious shortage of senior care institutions, most disabled elderly people are prone to choose home care. This kind of elderly care model is in line with the traditional Chinese concept and it can save the social cost of the disabled elderly to the greatest extent. However, home care for the disabled elderly is facing bumps from the whole society, such as lack of professional medical care, social support and humanistic care, and the care resources provided by a single subject cannot meet the needs of the disabled elderly. Objective: Based on the demands of the disabled elderly and their families, this study aims to explore the current status of home care service, look for what kind of care is more suitable for the disabled elderly, and try to find an appropriate elderly care mechanism which could meet the diverse needs of the disabled elderly. Methods: A total of 118 disabled elderly people and their primary caregivers were selected from four districts of Beijing by using multi-stage stratified proportional sampling method. A one-to-one and semi-structured in-depth qualitative interview were conducted in the study to find out the health status of the disabled elderly, the relationship between the disabled elderly and their primary caregivers, and utilization of elderly care resources, etc. The views of the interviewees were analyzed through the thematic framework method. All the methods were carried out in accordance with relevant guidelines and regulations. Results: The results showed that the average age of 118 disabled elderly is 81.38 ± 9.82 years; 86 (72.9%) are severe disability; 105 (89.0%)are plagued by chronic diseases; the average duration of disability is 5.63 ± 5.25 years; most of disabled elderly have 2 children, but the primary caregiver are their own partner (42, 35.6%), and there is an uneven sharing of responsibilities among the disabled elderly's offspring in the process of home care. The disabled elderly enjoy medical care services, rehabilitation training, daily health care, psychological and other demands. However, the disabled elderly and their families in Beijing face a significant financial burden, as well as physical and psychological issues. The care services provided by the government, family doctors, family members and social organizations fall far short of satisfying the diverse care needs of the disabled elderly. Conclusions: In order to effectively provide home care services for the disabled elderly, it is therefore necessary to establish a coordination mechanism of multiple subjects and give full play to the responsibilities of each subject. This study proposes a strengthening path for the common cooperation of multiple subjects, which taking specific responsibilities and participating in the home care for the disabled elderly: (1) The government should give full play to the top-level leading responsibilities and effectively implement people-oriented measures to the disabled elderly. (2) Family doctors strengthen their responsibilities as health gatekeepers and promote continuous health management of the disabled elderly. (3) Family members assume the main responsibility and provide a full range of basic care services. (4) Social forces promote supplementary responsibilities of public welfare and expand the connotation of personalized care services. (5) The disabled elderly should shoulder appropriate personal responsibility and actively cooperate with other subjects. abstract_id: PUBMED:11584887 Changes in home care use by disabled elderly persons: 1982-1994. Objectives: The use of, and changes in the use of. formal (paid) and informal care by elderly persons who are chronically disabled and living in the community was studied. Methods: Estimates were made of the sources and volume of home care use for persons reporting chronic disability in the 1982, 1989, and 1994 National Long Term Care Surveys. Comparisons were made across disability intensity, survey data, and age. Results: The analyses showed changes over time in the sources of home care services. Generally, the combined use of both paid and informal home health care increased, whereas the use of either source of care alone decreased. The amount spent on formal care increased with disability level and age. Discussion: Use of formal sources of care by community disabled elderly residents increased, likely because of changes in the Medicare home health care benefits in 1989. Increased use of home health care was associated with the concurrent use of informal care. abstract_id: PUBMED:35970249 Addressing the Home Care Shortage: Predictors of Willingness to Provide Paid Home Care in New York. Objectives: To determine the prevalence and predictors of willingness to consider becoming a paid home care worker. Design: Cross-sectional telephone-based survey study. Setting And Participants: 800 adult residents of New York State participating in the 2020 Empire State Poll, an annual survey conducted in English and Spanish using random-digit dialing. Methods: Willingness to consider working as a paid home care worker was analyzed as the main outcome. Survey questions also involved demographics and unpaid caregiving experience. We used multinomial logistic regression to examine associations between participant characteristics and willingness to be a paid caregiver. Results: Participants had a mean age of 47.7 years (95% CI 45.4-50.0). Demographic information included 51.1% female gender, 65.4% White race, 13.4% Black race, 6.4% Asian or Pacific Islander, 14.8% another race, 19.1% Hispanic/Latino ethnicity, and 43.0% household income below $50,000 per year. A weighted 25.4% of participants would consider becoming a paid home care worker. In an adjusted model, willingness to be a paid home care worker was associated with younger age [odds ratio (OR) 0.98, 95% CI 0.96-1.00, P = .032], female gender (OR 2.46, 95% CI 1.35-4.46, P = .003), Black or other race (OR 2.40, 95% CI 1.11-5.17, P = .026, and OR 3.13, 95% CI 1.30-7.54, P = .011, respectively), Hispanic ethnicity (OR 2.26, 95% CI 1.06-4.81, P = .035), household income below $50,000 per year (OR 2.00, 95% CI 1.03-3.88, P = .040), and having provided unpaid family caregiving (OR 2.08, 95% CI 1.15-3.76, P = .016). Conclusions And Implications: A quarter of New Yorkers would consider working as a paid home care worker. Willingness to consider this occupation was associated with the demographic characteristics disproportionately represented in the current home care workforce. Improved working conditions and compensation could help attract and retain home care workers, thereby addressing the rising need for home care in the United States. abstract_id: PUBMED:29127621 How Well is the Medical Home Working for Latino and Black Children? Objective To examine the benefits of having a medical home among Latino and Black school-aged children, both with and without special health care needs (CSHCN). Methods Data from the 2011-2012 National Survey of Children's Health (NSCH) were analyzed to examine the associations of preventive dental and medical care, unmet dental or medical care, or missed school days with having a medical home among Latino and Black children compared to White children. Multivariate logistic regression with survey weights was used to adjust for child, parent, home, and geographic characteristics and an interaction term to estimate differences in outcomes among Black or Latino children receiving care in a medical home compared to White children with a medical home. Results Approximately 35% of Latino CSHCN and Latino non-CSHCN ages 6-17 years of age had a medical home. In the adjusted model comparing the effectiveness of the medical home by race and ethnicity, Latino non-CSHCN compared to White non-CSHCN were associated with lower odds of having one or more preventive dental visit in the last 12 months (OR 0.66; 95% CI 0.46-0.95) and no other associations between having a medical home and outcomes were found among Latinos compared to Whites regardless of non-CSHCN or CHSCN status. Meanwhile, having a medical home among Black non-CHSCN and CHSCN, compared to their White counterparts, showed potential benefits in regards to unmet medical care needs after adjusting for covariates, (OR 0.15; 95% CI 0.06-0.35; OR 0.16; 95% CI 0.05-0.55). Conclusions Medical homes may not be effective in delivering health services to the majority of Latino children but provide some benefit to Black children with and without CSHCN. Alternatively, the medical home may function differently for Latinos due to the specific medical home components measured by NSCH. Answer: The study with PUBMED:7604922 investigated the access to paid in-home assistance among disabled elderly people, specifically comparing Latinos to non-Latino whites. The study found that among those 74 years of age and older who received functional assistance, similar proportions of Latino and non-Latino whites used paid assistance. However, the predictors of paid care use aligned with established models only for non-Latino whites. Disabled Latinos had a lower estimated probability of using paid assistance when they were highly disabled and socially isolated but had a higher estimated probability when their children lived nearby. The study concluded that the effects of disability and social support differ between non-Latino white and Latino elderly people, with Latino elderly with high anticipated needs obtaining less paid assistance than similar non-Latino whites. The study suggests that improving access to long-term care services for Latinos requires addressing the diversity in service use patterns, beyond just reducing financial barriers.
Instruction: Are internal derangement and osteoarthrosis linked to changes in clinical outcome measures of arthrocentesis of the temporomandibular joint? Abstracts: abstract_id: PUBMED:14586851 Are internal derangement and osteoarthrosis linked to changes in clinical outcome measures of arthrocentesis of the temporomandibular joint? Purpose: Arthrocentesis and hydraulic distention of the temporomandibular joint (TMJ) have been described as an effective modality in the treatment of patients with clinical findings consistent with the diagnosis of disc displacement without reduction. The purpose of this study was to investigate whether the magnetic resonance (MR) imaging variables of internal derangement (ID) and osteoarthrosis (OA) and the presence of capsulitis/synovitis (C/S) may be linked to changes in TMJ signs and symptoms associated with the performance of arthrocentesis and hydraulic distention. Patients And Methods: The study consisted of 27 TMJ pain patients, who were assigned a clinical unilateral TMJ diagnosis of ID type III (disc displacement without reduction). Bilateral sagittal and coronal MR images were obtained immediately preoperatively and at a 2-month follow-up to establish the presence or absence of ID and OA. TMJ pain level of function and mandibular range of motion were assessed preoperatively and compared with the respective 2-month follow-up findings. Results: The pretreatment data revealed a significant relationship between the clinical finding of TMJ pain and the MR imaging findings of ID (P &lt;.001), ID type (P &lt;.001), and OA (P &lt;.05). At 2-month follow-up, clinical evaluation showed a significant reduction in TMJ pain during function (P &lt;.001), a significant reduction in the clinical diagnoses of ID type III (P &lt;.05), and a significant increase in mandibular range of motion (P &lt;.05). There was no change in the prevalence rates of associated MR imaging diagnoses of TMJ ID and/or OA. Conclusion: The study findings suggest that clinical pain is related to TMJ-related MR imaging findings of ID and OA. These diagnoses proved not to be linked to changes in therapeutic outcome measures of TMJ pain and mandibular range of motion. abstract_id: PUBMED:11593796 Internal derangement of the temporomandibular joint Internal Derangement is one of the most common disorders of the Temporomandibular joint. It is defined as an abnormal anatomical relationship between the discus articularis and the other structures of the temporomandibular joint. More than 2/3 of patients with clinical symptoms of the temporomandibular joint have an internal derangement. The most frequent finding in internal derangement is an anterior dislocation of the disc with reduction after mouth opening. In more severe cases a complete anterior dislocation is found without reduction of the disc. Internal derangement is a risk factor for developing osteoarthritis with remodeling of the condylus and the fossa mandibularis. The most important modalities of radiologic diagnosis are arthrography, CT and MRI. The diagnosis of craniofacial dysfunction has been improved dramatically by modern imaging techniques, which have become essential tools for primary diagnostic and evaluation after conservative or surgical therapy. MRI has become the gold standard. abstract_id: PUBMED:27475509 Internal Derangement of the Temporomandibular Joint: New Perspectives on an Old Problem. Internal derangement is caused by loss of the structure and function of the intra-articular tissues, leading to a failure in the biomechanics of the temporomandibular joint. This tissue failure is usually caused by joint overload, leading to an inflammatory/degenerative arthropathy of the temporomandibular joint. The intra-articular changes associated with internal derangement of the temporomandibular joint can also be caused by a systemic arthropathy or a localized atypical arthropathy involving the temporomandibular joint. Clinicians must be diligent in establishing the correct diagnosis and cause of the internal derangement, which ultimately leads to the appropriate management of patients with these disorders. abstract_id: PUBMED:25465487 Tumor necrosis factor-alpha levels in the synovial fluid of patients with temporomandibular joint internal derangement. Purpose: The purpose of this study was to investigate the level of tumor necrosis factor-alpha (TNF-α) in the synovial fluid (SF) of patients with temporomandibular joint (TMJ) internal derangement and to show the relationship between the level of TNF-α and the severity of the disease. Materials And Methods: Arthrocentesis was performed on 32 female and five male patients (aged between 17 and 45) referred to our clinic with the complaint of TMJ pain and discomfort. TNF-α levels were determined in the SF samples obtained during arthrocentesis. As a measure of pain, visual analog scale (VAS) scores were also evaluated. Results: There were statistically significant differences in VAS scores between the stages. VAS scores were found to be elevated as the stage of disease progressed. Increased levels of TNF-α were found in progressive stages of internal derangement. Conclusion: In our study, both SF TNF-α levels and pain levels (VAS scores) were found to be increased in patients with internal derangement as the stage of the disease progresses. TNF-α might contribute to the pathogenesis of synovitis and the degeneration of the TMJ cartilage and bone. abstract_id: PUBMED:33675704 Temporomandibular Joint Arthrocentesis and Microfragmented Adipose Tissue Injection for the Treatment of Internal Derangement and Osteoarthritis: A Randomized Clinical Trial. Purpose: Internal derangement and osteoarthritis are the most common degenerative temporomandibular joint diseases and initial treatment for such conditions relies on arthrocentesis. Microfragmentation of adipose tissue has been proven in orthopedic literature to represent a more effective method to preserve stem cells, but no application has ever been reported in the temporomandibular joint. The purpose of this randomized clinical trial is to compare standard treatment conducted by injecting hyaluronic acid after the procedure to the new treatment relying upon microfragmented adipose tissue injection using the Lipogems technology. Materials And Methods: A randomized clinical trial was designed enrolling 20 patients in the control group receiving the standard treatment and 20 patients in the experimental group receiving microfragmented adipose tissue obtained through the Lipogems technology after arthrocentesis. Two main outcomes were defined, pain (visual analogic scale) and function (maximum interincisal opening). Both were measured in the immediate preoperative time, and 10 days, 1 month, and 6 months after the procedure. Results: In both groups, pain reduction and mouth opening significantly improved compared with the preoperative situation (P = .001). At 6-month follow-up, there was an almost statistically significant reduction of pain compared with preoperative visual analogic scale (P = .0546) and a statistically significant improvement of mouth opening (P = .0327). Overall, statistical analyses showed that the experimental group had a statistically significant superiority in the success rate of the procedure compared with the control group (P = .018). Conclusions: Preliminary results of this clinical trial show that the injection of microfragmented adipose tissue can significantly improve outcomes of pain and function compared with the standard treatment and encourage to pursue research on this topic. Further studies with a longer follow-up time are needed to evaluate the clinical stability of the achieved improvement in pain and function. abstract_id: PUBMED:29764480 A stepped approach for the management of symptomatic internal derangement of the temporomandibular joint. Background: Internal derangement is the clinical and pathological condition of disc displacement of the temporomandibular joint. Management of these cases involve conservative and surgical treatment options. Minimally invasive surgical procedures namely arthrocentesis and arthroscopy are promising techniques in the management of internal derangement. However patient selection algorithms, indications for minimally invasive procedures and details of the techniques should be further studied for safe and cost effective management of these cases. This manuscript aims to retrospectively analyze the significance of a stepped surgical treatment approach (arthrocentesis under local anaesthesia as the first line of treatment, followed by arthroscopic lysis and lavage under general anaesthesia in unresolving cases) of internal derangement with or without osteoarthritis. Methods: This is a retrospective cohort study. Case notes of 1414 patients that were managed with a standard protocol were reviewed. Appropriate inclusion and exclusion criteria were set. Thirty-three patients were eligible for inclusion. Parameters recorded were pain-free inter-incisal opening, spontaneous pain, pain on function, difficulty on chewing, and perceived disability on jaw movements. Pre-operative and post-operative (at the end of the follow up period) pain free maximum interincisal opening values were compared with paired t test and the subjective parameters were evaluated with Chisquare analysis. Treatment outcome and success rate according to American Association of Oral and Maxillofacial Surgeons were descriptively shown. Results: Interincisal opening values increased, and the number of patients with severe or medium rated subjective parameters were reduced at discharge. These improvements were found to be statistically significant. Clinical (Wilkes) staging of internal derangement pre-operatively and at discharge remained either unchanged or was lower. Treatment outcome and success according to American Association of Oral and Maxillofacial Surgeons criteria was 94%. Conclusion: The stepped approach for the management of symptomatic internal derangement with or without osteoarthritis is a successful treatment strategy with favourable therapeutic outcomes. abstract_id: PUBMED:31540790 Efficiency of arthrocentesis treatment for different temporomandibular joint disorders. This study was performed to determine whether arthrocentesis therapy has different outcomes in three groups of patients with different temporomandibular disorders (TMDs). A clinical trial was conducted including 45 patients with 45 unilaterally affected joints divided into three groups (n=15): osteoarthritis (OA), disc displacement with reduction (DDWR), and disc displacement without reduction (DDWoR). All patients underwent the same arthrocentesis treatment protocol. The outcome variables, including visual analogue scale evaluations and measurements of mandibular motion (in millimetres), were recorded at baseline and at 1 and 6 months postoperative. Inter-group assessments showed significant short-term differences in joint sounds (P=0.016) and significant long-term differences in masticatory efficiency (P=0.046) and protrusive movement (P=0.048). The estimation of mean changes between baseline and long-term follow-up revealed significant differences in joint sounds (P&lt;0.001), disruption in daily activities (P=0.002), maximum mouth opening (P=0.008), and protrusive movement (P=0.002) between the groups. Arthrocentesis therapy may be useful to improve clinical symptoms and range of mandibular movement in patients with all three types of TMD. However, the benefit of arthrocentesis may be greater for patients with DDWoR than for those in the other groups. abstract_id: PUBMED:28761275 Efficacy of arthrocentesis versus arthrocentesis with sodium hyaluronic acid in temporomandibular joint osteoarthritis: A comparison. Introduction: Temporomandibular joint osteoarthritis (TMJ OA) is a degenerative disease characterized by deterioration of articular tissue with concomitant osseous changes in the condyle and/or articular eminence, joint positive for TMJ noise with jaw movement or function, crepitus detected on palpation on opening, closing, right/left lateral, or protrusive movement. Hyaluronic acid (HA) is a polysaccharide of the family of glycosaminoglycans. HA has been shown to improve and restore normal lubrication in joint, provide nutrition to the avascular articulating disc, and stabilize the joint. Materials And Methods: Twenty patients with OA of TMJ with age limit between 18 and 60 years of age were enrolled in this study. Patients were randomly divided into two groups, in which one group received arthrocentesis only, and another group received arthrocentesis plus intra-articular injection of sodium HA (0.5 ml) in superior joint space in a cycle of 5 weekly arthrocentesis (one per week). Patients were followed at regular interval of 1st day, 5th day, 7th day, 4 weeks, 6 weeks, and 12 weeks. Assessment of clinical outcome was done in terms of reduction in pain (visual analog scale score), maximum mouth opening (MMO) in millimeters, painful/pain-free lateral or protrusive jaw movement, and clicking/crepitus in joint. Results: Significant reduction in pain was observed in both the groups. MMO, lateral and protrusive movements improved significantly in both groups; however, arthrocentesis with sodium HA was superior to arthrocentesis alone. Conclusion: Combination of arthrocentesis with HA injection showed much better outcome than arthrocentesis alone. abstract_id: PUBMED:32109596 Retrospective evaluation of the duration of arthrocentesis in the treatment of temporomandibular joint diseases. Objective: This study investigated the duration of arthrocentesis in treatment of patients with different diagnoses of temporomandibular disorders. Methods: This retrospective study evaluated the duration of arthrocentesis used for 65 patients who were diagnosed with osteoarthritis (OA), disc displacement with reduction (DDWR), or disc displacement without reduction (DDWoR), in accordance with the Research Diagnostic Criteria for Temporomandibular Disorders protocol. Results: Mean operation times were 423.79 (± 66.77) seconds in OA patients, 459.92 (± 56.67) seconds in DDWR patients, and 609.00 (± 106.88) seconds in DDWoR patients; these significantly differed among groups (P &lt; 0.001). In addition, post hoc analyses revealed statistically significant differences in mean operation times between DDWoR and OA (P&lt;0.05) and DDWoR and DDWR (P&lt;0.05). Conclusion: The appropriate duration of arthrocentesis may vary among joint disorders, and the duration of arthrocentesis in DDWoR treatment is longer than that in DDWR and OA treatments. In addition there was no relationship between the age or gender and operation time of the patients. Changes in anatomic structures due to temporomandibular diseases are presumed to influence the duration of arthrocentesis treatment. abstract_id: PUBMED:33730123 Arthrocentesis of the Temporomandibular Joint: Systematic Review and Clinical Implications of Research Findings. Aims: To review randomized clinical trials on arthrocentesis for managing temporomandibular disorders (TMD) and to discuss the clinical implications. Methods: On March 10, 2019, a systematic search of relevant articles published over the last 20 years was performed in PubMed, as well as in Scopus, the authors' personal libraries, and the reference lists of included articles. The focus question was: In patients with TMD (P), does TMJ arthrocentesis (I), compared to any control treatment (C), provide positive outcomes (O)? Results/conclusion: Thirty papers were included comparing TMJ arthrocentesis to other treatment protocols in patients with disc displacement without reduction and/or closed lock (n = 11), TMJ arthralgia and/or unspecific internal derangements (n = 8), or TMJ osteoarthritis (n = 11). In general, the consistency of the findings was poor because of the heterogenous study designs, and so caution is required when interpreting the meta-analyses. In summary, it can be suggested that TMJ arthrocentesis improves jaw function and reduces pain levels, and the execution of multiple sessions (three to five) is superior to a single session (effect size = 1.82). Comparison studies offer inconsistent findings, with the possible exception of the finding that splints are superior in managing TMJ pain (effect size = 1.36) compared to arthrocentesis, although this conclusion is drawn from very heterogenous studies (I2 = 94%). The additional use of cortisone is not effective for improving outcomes, while hyaluronic acid or platelet-rich plasma positioning may have additional value according to some studies. The type of intervention, the baseline presence of MRI effusion, and the specific Axis I diagnosis do not seem to be important predictors of effectiveness, suggesting that, as in many pain medicine fields, efforts to identify predictors of treatment outcome should focus more on the patient (eg, age, psychosocial impairment) than the disease. Answer: Internal derangement (ID) and osteoarthrosis (OA) of the temporomandibular joint (TMJ) are conditions that can affect the function and pain levels in the TMJ. Arthrocentesis and hydraulic distention have been used as treatment modalities for these conditions. A study investigated whether the magnetic resonance imaging (MRI) variables of ID and OA, as well as the presence of capsulitis/synovitis (C/S), may be linked to changes in TMJ signs and symptoms associated with the performance of arthrocentesis and hydraulic distention. The study found that while there was a significant relationship between clinical TMJ pain and the MRI findings of ID and OA before treatment, these diagnoses were not linked to changes in therapeutic outcome measures of TMJ pain and mandibular range of motion after treatment. At the 2-month follow-up, there was a significant reduction in TMJ pain during function and a significant increase in mandibular range of motion, but no change in the prevalence rates of associated MRI diagnoses of TMJ ID and/or OA (PUBMED:14586851). In summary, while ID and OA are related to clinical pain, they do not appear to be directly linked to changes in clinical outcome measures following arthrocentesis of the TMJ. This suggests that the effectiveness of arthrocentesis in reducing pain and improving mandibular function may not be dependent on the MRI findings of ID and OA.
Instruction: Is liver transplantation associated with decreased bone mass in climacteric women? Abstracts: abstract_id: PUBMED:22948507 Is liver transplantation associated with decreased bone mass in climacteric women? Purpose: To evaluate whether climacteric women undergoing liver transplantation had higher prevalence of decreased bone mass than those without any liver disease. Methods: A cross-sectional study with 48 women receiving follow-up care at a university hospital in Southeastern Brazil, from February 4th 2009 to January 5th 2011, was conducted. Of these women, 24 were 35 years or older and had undergone liver transplantation at least one year before study entry. The remaining 24 women had no liver disease and their ages and menstrual patterns were similar to those of transplanted patients. Laboratorial tests (follicle-stimulating hormone and estradiol) and bone density measurements of the lumbar spine and femur (equipment Hologic, Discovery WI) were performed. Statistical analysis was carried out by Fisher's exact test, simple Odds Ratio (OR), and multiple logistic regression. Results: Mean age of the women included in the study was 52.8 (± 10.7) years-old, 27.1% were premenopausal and 72.9% were peri/postmenopausal. Approximately 14.6% of these women exhibited osteoporosis and 35.4% had low bone mass. The following items were associated with decreased bone mass: being postmenopausal (OR=71.4; 95%CI 3.8 - 1,339.7; p&lt;0.0001), current age over 49 years-old (OR=11.4; 95%CI 2.9 - 44.0; p=0.0002), and serum estradiol levels lower than 44.5 pg/mL (OR=18.3; 95%CI 3.4 - 97.0; p&lt;0.0001). Having a history of liver transplantation was not associated with decreased bone mass (OR=1.4; 95%CI 0.4 - 4.3; p=0.56). Conclusion: Liver transplantation was not associated with decreased bone mass in this group of climacteric women. abstract_id: PUBMED:21620127 Decrease in bone mass in women after liver transplantation: associated factors. Background: In the future, an increasing number of female liver transplant recipients will reach the climacteric with osteoporosis as a common complication. We evaluated the factors associated with decreased bone mass among women after liver transplantation. Methods: A prospective, cross-sectional study of 23 outpatient transplant recipients followed from February 2009 to March 2010 included women of age ≥35 years after liver transplantations ≥1 year prior. We recorded patient histories, liver enzyme levels, as well as bone mineral densities measured at the lumbar spine and femur. Statistical analysis used Fisher's exact test, simple odds ratio (OR), and Spearman's rank correlation coefficient. Results: The mean patient age was 52.5 ± 11 years with 30.4% premenopausal, and 69.6% perimenopausal or postmenopausal. Approximately 21% showed osteoporosis and 35%, a low bone mass. Postmenopausal women: OR 69.0 (95% CI 2.89-1647.18; P&lt;.0001), aged ≥49 years: OR 13.33 (95% CI 1.78-100.15; P=.0123) and receiving a transplant after 44 years of age: OR 49.50 (95% CI 3.84-638.43; P&lt;.0001) were associated with a lower bone mass. Having undergone transplantation for more than 5.8 years lowered the risk of bone mass change: OR 0.11 (95% CI 0.02-0.78; P=.0361). Clinical and laboratory variables, including corticosteroid use, were not associated with decreased bone mass. Conclusion: Understanding the prevalence and factors associated with osteoporosis among female liver transplant recipients is important to enhance the strategies to diagnose and treat these women, seeking to improve their quality of life. abstract_id: PUBMED:10467609 Bone loss at the proximal femur and reduced lean mass following liver transplantation: a longitudinal study. The longevity of recipients of liver transplant may be compromised by spinal osteoporosis and vertebral fractures. However, femoral neck fractures are associated with a higher morbidity and mortality than spine fractures. As there is little information on bone loss at this clinically important site of fracture, the aim of this study was to determine whether accelerated bone loss occurs at the proximal femur following transplantation. Bone mineral density and body composition were measured at the femoral neck, lumbar spine and total body, using dual x-ray absorptiometry in 22 men and 19 women, age 46 +/- 1.4 y (mean +/- SEM) before and at a mean of 19 mo after surgery (range 3-44). Results were expressed in absolute terms (g/cm2) and as a z score. Before transplantation, z scores for bone mineral density were reduced at the femoral neck (-0.47 +/- 0.21 SD), trochanter (-0.56 +/- 0.19 SD), Ward's triangle (-0.35 +/- 0.14 SD), lumbar spine (-0.76 +/- 0.13 SD), and total body (-0.78 +/- 0.15 SD) (all P &lt; 0.01 to &lt; 0.001). Following transplantation, bone mineral density decreased by 8.0 +/- 1.7% at the femoral neck (P &lt; or = 0.01) and by 2.0 +/- 1.2% at the lumbar spine (P &lt; or = 0.05). Total weight increased by 12.2 +/- 2.3%, lean mass decreased by 5.7 +/- 1.4%, while fat mass increased from 24.1 +/- 2.0% to 35.1 +/- 1.8% (all P &lt; or = 0.001). Patients with end-stage liver disease have reduced bone mineral density. Liver transplantation is associated with a rapid decrease in bone mineral density at the proximal femur, further increasing fracture risk and a reduction in lean (muscle) mass, which may also predispose to falls. Prophylactic therapy to prevent further bone loss should be considered in patients after liver transplantation. abstract_id: PUBMED:11446565 Bone disease after liver transplantation: a long-term prospective study of bone mass changes, hormonal status and histomorphometric characteristics. After liver transplantation there is a high incidence of fractures, with important rates of bone loss during the first months. However, the long-term evolution of bone mass and metabolism parameters have been scarcely studied. In order to determine the incidence and risk factors involved in the development of skeletal fractures and to analyze the long-term evolution of bone mass, bone turnover and hormonal status after liver transplantation, a 3-year prospective study was performed in 45 patients following liver transplantation. Serum osteocalcin, parathyroid hormone (PTH), 25-hydroxyvitamin D (25-OH D) and testosterone levels (men), and bone mass at the lumbar spine and femur were measured before and sequentially at different time points during 3 years. Spinal X-rays were obtained during the first year. Histomorphometric analysis of bone biopsies obtained in 24 patients within the first 12 hours after surgery and 6 months after transplantation was performed. Fifteen patients (33%) developed fractures after liver transplantation, and pre-transplant risk factors for fractures were age and low bone mass (odd's ratio for osteoporosis, 95% confidence interval: 5.69, 1.32-24.53). Serum PTH, osteocalcin, 25-OH D, testosterone and creatinine levels increased after transplantation. Moreover, PTH correlated with creatinine and osteocalcin values. Bone mass decreased during the first 6 months and reached baseline values at the lumbar spine the second year, with posterior significant recovery at the femoral neck. Long term evolution of femoral neck BMD correlated with PTH levels. Six months after transplantation bone histomorphometric data showed an increase in bone formation parameters. After liver transplantation there is a high incidence of fractures, specially in elderly patients and those with osteoporosis. Bone mass decreased in the short-term period and improved, initially at the lumbar spine and later at the femur, according to histomorphometric evidences of an increase in bone formation. The increase in creatinine values induces a secondary hyperparathyroidism that influences the changes in femoral bone mass. Treatment of osteoporosis shortly after liver transplantation may be important in the prevention of bone fractures, particularly in patients with low bone mass. abstract_id: PUBMED:8155404 Rates of cancellous bone remodeling and turnover in osteopenia associated with primary biliary cirrhosis. Osteoporosis and fracturing are well-recognized manifestations of primary biliary cirrhosis (PBC), but the abnormalities of bone remodeling and turnover that result in bone loss are poorly understood. We used dynamic histomorphometric techniques to measure tissue level rates of cancellous bone resorption, formation, and turnover in 12 premenopausal women with PBC and in 12 normal premenopausal women. We compared these values with estimates of bone resorption and formation obtained concurrently in the same subjects by radiocalcium kinetics and biochemical markers. Rates of bone turnover were analyzed as a function of a risk score that reflects the severity of hepatic disease and cholestasis (Mayo proportional-hazards model). Positive correlations were observed between tissue level and whole skeletal estimates of bone remodeling. At the remodeling site (bone multicellular unit [BMU]), the depth of eroded lacunae was unaltered by PBC, but wall thickness was decreased. At the level of bone tissues, mean bone turnover was increased in PBC patients but varied widely and increased as hepatic disease and cholestasis worsened. We conclude that PBC causes a reduction in the volume of bone formed at the remodeling site and that the overall level of bone remodeling and turnover in PBC is strongly influenced by the severity of hepatic disease and cholestasis. We hypothesize that the rate of bone loss in PBC may be decreased by therapeutic agents that slow bone turnover, and that further bone loss may be halted by liver transplantation. abstract_id: PUBMED:10555591 Bone abnormalities in gastrointestinal and hepatic disease. Bone is a dynamic tissue that undergoes constant remodeling in response to local and environmental stimuli. Bone mass is maintained by this delicate equilibrium between bone formation and bone resorption. In growing children, the balance is tilted toward bone formation until peak bone mass is achieved in the second decade of life. Alterations in bone metabolism can result in decreased bone mass (osteopenia and osteoporosis) or impaired mineralization of the bone protein matrix (rickets and osteomalacia). Diseases of the alimentary tract such as celiac disease, inflammatory bowel diseases, gastrectomy, cholestatic liver diseases, liver transplantation, and hepatitis C can affect bone mineralization, remodeling, or bone mass. This article presents a summary of recent reports concerning bone disorders associated with disorders of the liver and gastrointestinal tract. abstract_id: PUBMED:15625431 Deficits in size-adjusted bone mass in children with Alagille syndrome. Objectives: To describe bone status in children with Alagille syndrome (AGS) and healthy control children adjusted for age, gender and height (HT), and to identify dietary intake and AGS-related factors associated with bone status. Methods: Prepubertal children with AGS and healthy controls comparable in age and ethnicity were evaluated. Subjects were &gt; or =4 years of age, prepubertal and had whole body (WB) and/or lumbar spine (LS) dual energy X-ray absorptiometry (DXA) scans of acceptable quality. Anthropometric (weight, HT), diet and AGS-specific data (e.g., coefficient of fat absorption, labs, liver transplantation) were also collected. Bone area (BA), bone mineral content (BMC) and HT were log transformed for best fit. Bone data were analyzed unadjusted, adjusted for gender, age and HT, and as HT-specific z-scores. Results: AGS and control groups were similar in age, pubertal status and ethnicity. Children with AGS were small-for-age, had decreased BA and BMC-for-age, and decreased WB BA and BMC-for-HT z-scores compared to healthy controls. Prevalence of low BMC-for-HT z-scores (&lt; -2) among AGS subjects was 20% for the WB and 39% for the LS. Bone mineralization was positively related to fat absorption but not dietary intake. Conclusions: Children with AGS have deficits in bone size and bone mass relative to body size. Modifiable factors, such as treatment of malabsorption should be explored as an early focus of AGS care to prevent bone fragility. abstract_id: PUBMED:20537977 Liver and bone. Osteoporosis is a frequent complication in patients with chronic liver disease, especially in end-stages and in cases with chronic cholestasis, hemochromatosis and alcohol abuse. The problem is more critical in transplant patients when bone loss is accelerated during the period immediately after transplantation, leading to a greater incidence of fractures. Advanced age, low body mass index and severity of the liver disease are the main risk factors for bone disease in patients with cholestasis. Mechanisms underlying osteoporosis in chronic liver disease are complex and poorly understood, but osteoporosis mainly results from low bone formation, related to the effects of retained substances of cholestasis, such as bilirubin and bile acids, or to the effects of alcohol on osteoblastic cells. Increased bone resorption has also been described in cholestatic women with advanced disease. Although there is no specific treatment, bisphosphonates associated with supplements of calcium and vitamin D are effective for increasing bone mass in patients with chronic cholestasis and after liver transplantation. The outcome in reducing the incidence of fractures has not been adequately demonstrated essentially because of the low number of patients included in the therapeutic trials. Randomized studies assessing bisphosphonates in larger series of patients, the development of new drugs for osteoporosis and the improvement in the management of liver transplant recipients may change the future. abstract_id: PUBMED:36845793 Parietal mass caused by a fish bone: case report. It is a great challenge to distinguish the parietal inflammation, centered on the foreign body that pierced the digestive tract and remained in the wall before surgery, because of its atypical clinical nature. Ingestion of foreign bodies is not uncommon. Fish bones are particularly notorious culprits; however, most will pass through the gastrointestinal tract uneventfully. Patients And Methods: The authors report a case of a patient who presented with periumbilical abdominal pain and a computed tomography (CT) scan that revealed the presence of periumbilical fat infiltration on a foreign body admitted on the Department of Digestive Cancer Surgery and Liver Transplantation, Casablanca, Morocco. An exploratory laparotomy revealed a parietal mass centered by a fish bone. Results: Accidental ingestion of foreign bodies is common in clinical practice. However, perforation of the intestine by a foreign body is less common because the majority of foreign bodies pass without incident into the feces and only 1% of them (the sharpest and most elongated objects) will perforate the gastrointestinal tract, usually at the level of the ileum.CT, especially multidetector CT, is considered the method of choice for preoperative diagnoses of ingested foreign bodies and their complications due to its high-quality multiplanar capabilities and high resolution.Foreign body ingestion usually goes unnoticed, but the complications of this incident can be severe. Conclusion: This case report highlights the fact that intestinal perforation caused by an ingested foreign body is a difficult diagnosis that should always be suspected in an attack of abdominal pain. Frequently, the clinical diagnosis is difficult, and recourse to imaging is sometimes necessary. Most of the time, the treatment is only surgical. abstract_id: PUBMED:1860685 Rates of vertebral bone loss before and after liver transplantation in women with primary biliary cirrhosis. Atraumatic fractures caused by osteoporosis may be a serious complication of primary biliary cirrhosis. Mean (+/- S.D.) bone mineral density in the lumbar spine in 210 ambulatory women with primary biliary cirrhosis was 1.02 +/- 0.19 gm/cm2, 7% lower than that in 139 age-matched normal women (after adjustment for age and body weight) (p less than 0.001). Bone mineral density in the lumbar spine was inversely related to a risk score index of liver disease severity (r = -0.29, p less than 0.001). The mean rate of bone loss in 105 of these 210 women was 2%/yr +/- 4%/yr, twice as great as in the 139 normal women (p less than 0.02). In 20 women with primary biliary cirrhosis followed up after orthotopic liver transplantation, bone mineral density in the lumbar spine decreased at 3 mo (p less than 0.01), and this decrease may have resulted in atraumatic fractures in 13 of them. Bone mineral density in the lumbar spine then increased (p less than 0.01) so that by 12 mo the median bone mineral density in the lumbar spine was similar to that before transplantation and by 24 mo it was 5% above it. Therefore we conclude that the progressive bone loss observed in primary biliary cirrhosis (which is further accentuated immediately after transplantation) may be halted, and the bone mass may be restored toward normal within 2 to 3 yr after orthotopic liver transplantation. Answer: Liver transplantation does not appear to be associated with decreased bone mass in climacteric women. A cross-sectional study with 48 women, including 24 who had undergone liver transplantation and 24 without any liver disease, found that being postmenopausal, current age over 49 years-old, and serum estradiol levels lower than 44.5 pg/mL were associated with decreased bone mass. However, having a history of liver transplantation was not associated with decreased bone mass (OR=1.4; 95%CI 0.4 - 4.3; p=0.56) (PUBMED:22948507). Another study also evaluated factors associated with decreased bone mass among women after liver transplantation and found that being postmenopausal, aged ≥49 years, and receiving a transplant after 44 years of age were associated with a lower bone mass. However, having undergone transplantation for more than 5.8 years lowered the risk of bone mass change, and clinical and laboratory variables, including corticosteroid use, were not associated with decreased bone mass (PUBMED:21620127). These findings suggest that while climacteric factors such as menopausal status and age are associated with decreased bone mass, liver transplantation itself is not a significant contributing factor to decreased bone mass in climacteric women.
Instruction: Medpor lower eyelid spacer: does it biointegrate? Abstracts: abstract_id: PUBMED:16855940 Correction of lower eyelid retraction with a porous polyethylene (Medpor) lower eyelid spacer--Medpor spacer in lower eyelid retraction Background: The correction of lower eyelid retraction remains a challenge with established techniques having disadvantages. A recently described alternative is implantation of an ultrathin high density porous polyethylene lower eyelid spacer (Medpor LES). We report our experience on implanting this Medpor LES, especially in patients with lower eyelid retraction due to Graves' orbitopathy. Patients And Methods: All patients receiving a Medpor LES between March 2003 and November 2004 in the Rotterdam Eye Hospital were included. Indications and preceding procedures as well as the degree of proptosis were noted. Preoperative and postoperative lower eyelid retraction were compared by measuring scleral show inferior to the limbus (LSS). Postoperative complications, recurrent retraction and secondary surgical procedures were recorded. Results: Out of 12 patients (16 eyelids) in whom a Medpor LES was inserted 8 patients suffered from Graves' orbitopathy. Mean follow-up was 7.5 months (range 4 - 11 months). Final cosmetic outcome was good in 8/16 eyelids and improved in 7/16 eyelids. Lower eyelid retraction (LSS) was reduced significantly (1.34 mm +/- 0.214 (mean +/- std. error of mean), p = 0.004). Complications included eyelid contour deformity (4/16 eyelids), remaining irritation of the eye (1/16) and problems in down gaze (4/16) as well as recurrent lower eyelid retraction (2/16) requiring further surgery in 3 of 11 patients. Conclusions: In selected patients, insertion of a Medpor lower eyelid spacer may be a good alternative to correct lower eyelid retraction. abstract_id: PUBMED:19229747 Medpor lower eyelid spacer: does it biointegrate? Purpose: To report the histopathologic findings of explanted Medpor lower eyelid spacers (LES) in complicated cases. Materials And Methods: Four cases of lower eyelid retraction due to thyroid orbitopathy (n = 2), facial nerve palsy (n = 1), and post-enucleation socket syndrome (n = 1) were treated with Medpor LES. Results: All implants were removed between 6 months to 2 years following their original insertion due to exposure, poor stability, or contour. Histopathology of the implants showed fibrosis and vascularization although clinically, at the time of removal, did not appear vascularized. In addition, immunohistochemistry was positive for Factor VIII related antigen and CD34, thus highlighting the presence of vessels in the pores and around the implant. Conclusion: To our knowledge, we are the first to report histopathologic findings of explanted high-density porous polyethylene implants from the lower eyelid in humans. Although this study shows that Medpor LES does biointegrate, we advocate using it sparingly due to associated complications such as exposure, poor stability, and contour. abstract_id: PUBMED:15317715 The use of porous polyethylene (Medpor) lower eyelid spacers in lid heightening and stabilisation. Background/aims: The management of lower eyelid retraction can be challenging, and established techniques to correct it are not always successful. Previous reports have suggested a role for the ultrathin high density porous polyethylene lower eyelid spacer (Medpor LES) in such patients. The authors report the experience of three surgeons implanting Medpor LES over 1 year, and ascertain whether such implants are a safe and effective alternative to autogenous spacers. Methods: A prospective, interventional, non-comparative case series of consecutive patients. Surgical indications for Medpor LES were noted. Preoperative and postoperative lower marginal reflex distance (L-MRD), vertical palpebral aperture (PA), lagophthalmos, and scleral show inferior to the limbus (LSS) were recorded, together with major and minor complications. Results: 32 patients (35 eyelids) had a Medpor LES inserted, 22/32 under local anaesthetic, and nine with adjunctive procedures. Mean follow up was 22 months (range 15-28 months). The Medpor LES was effective in reducing the palpebral aperture (p&lt;0.001) and lagophthalmos (p = 0.04) and raising the lower eyelid height by reducing both L-MRD (p = 0.006) and LSS (p&lt;0.001). However there were major complications in 7/32 patients and minor complications in 8/32, most requiring further surgery. Final outcome was good in 24/35 eyelids and satisfactory in 5/35. Conclusions: Despite a good or satisfactory final outcome in the majority of patients, the value of this technique is limited by complications, and should be reserved for those unsuitable for safer techniques. abstract_id: PUBMED:10749162 Porous polyethylene as a spacer graft in the treatment of lower eyelid retraction. Purpose: An experimental study was performed to: 1) assess the tolerance and incorporation of porous polyethylene (Medpor) in the posterior lamella of the rabbit lower eyelid; 2) analyze the effect of implant thickness on incorporation; 3) investigate the ability of conjunctiva to grow over vascularized Medpor and; 4) determine the effects of Medpor surface modification on biocompatibility and fibrovascularization. Methods: In phase I, 10 rabbit eyelids were operated on to analyze the effects of implant thickness and to develop the surgical technique used in phase II of the study. In phase II, 20 lower eyelids of 10 rabbits received 0.85-mm-thick Medpor grafts, each rabbit receiving both an uncoated implant and one coated with an immobilized collagen. Results: There were no extrusions in phase II. with a postoperative follow-up from 14 to 17 weeks. Fourteen of 20 eyelids had full-thickness conjunctival incisions or excisions placed over the Medpor implant to determine the growth potential of conjunctiva over a vascularized implant. All but one eyelid showed complete defect coverage, occurring in as little as 3 days. Histopathology indicated complete Medpor fibrovascularization as early as 4 weeks after implantation. Because neither coated nor uncoated implants extruded in phase II, no conclusions can be drawn regarding the efficacy of Medpor surface modification. Conclusion: Medpor was well tolerated in this soft tissue application, and it offers advantages over other graft materials. abstract_id: PUBMED:8725075 Use of porous plyethylene (Medpor) in maxillofacial surgery The use of alloplastic material in maxillofacial surgery is having a much larger diffusion in substituting missing bone parts and in incrementing those that already exist. Medpor, a substance composed of porous polyethylene, has been of relatively recent introduction in the surgical field. Due to its characteristics of biological compatibility, easy morphological adapatability and mantaining it's initial volume after beeing positioned, Medpor lends itself well as an alternative to autogenous grafts or other alloplastic materials. The authors consider twenty patients operated between May 1991 and September 1993 with Medpor grafts; seven of these were affected by microgenia, and were submitted to vertical increase of the chin after horizontal osteotomy by interposing Medpor. Two patients affected by hemifacial microsomia had been treated with grafts on the mandibular border in order to improve their facial symmetry. In three cases Medpor was used in patients affected by previous traumas: to reconstruct a fractured orbital floor with bone loss, the dorsum of the nose, and in order to regain protrusion of the cheekbone. In two patients affected by UCLP Medpor was used in sustaining the lateral region of the nose as well as in disguising hypoplasia of the upper jaw. In other two cases Medpor had been used to fill a depression created in the temporal region where the muscle had been rotated to create a flap; other uses of Medpor have been: as a filler of titanium mesh, used in the reconstruction of a nasal valve and in the camouflage of the upper jaw. The authors report complications only in three cases all involved the exposure or prominence of the Medpor grafts. In one case the prominence of the grafts which was positioned on the upper jaw appeared in the upper part of the fornix; in another case the exposure was post-traumatic and in the third case the exposure followed the dehiscence of the titanium prosthesis within which the Medpor had been positioned. Except for the last case, in which the patient died--for other reasons--healing of the graft exposure was obtained by a simple remodeling. abstract_id: PUBMED:19668503 Lower eyelid lengthening surgery targeting the posterior layer of the lower eyelid retractors via a transcutaneous approach. The lower eyelid retractors consist of double layers, the posterior layer of which is the main tractional component. Therefore, shortening of the posterior layer of the lower eyelid retractors causes lower eyelid retraction or cicatricial entropion. Based on this concept, we report a modified lower eyelid lengthening surgery involving complete recession of the posterior layer of the lower eyelid retractors by way of a transcutaneous approach that leaves the palpebral conjunctiva intact and inserts ear cartilage as a rigid spacer between the lower edge of the tarsal plate and the recessed anterior layer of the lower eyelid retractors. This procedure completely extirpated the preoperative maladjusted states of lower eyelid retraction and cicatricial entropion. Our procedure also prevented postoperative discomfort of the ocular surface due to the intact palpebral conjunctiva. As well, lower eyelid mobility and contour were good and within their respective permissible ranges. The lower eyelid lengthening surgery focusing on the posterior layer of the lower eyelid retractors using auricular cartilage via a transcutaneous approach is a useful procedure for lower eyelid retraction or cicatricial entropion. abstract_id: PUBMED:17505711 Use of Medpor spherical implant: analysis of 61 orbital surgeries Purpose: To analyze the use of Medpor biointegrated implant in surgeries of evisceration, enucleation or secondary implantation in anophthalmic sockets. Methods: The Medpor orbitary implant was used in 61 surgeries. In 18 cases of evisceration the implant's wrapping was not necessary. In 12 enucleation and secondary implantation surgeries the utilized material to wrap the implant was homologous duramater. In the other 31 surgeries, the implant wrapping was made from autologous tissue (sclera, dermis, orbicular muscle or auricular cartilage), utilizing the "cap" technique--covering only the anterior surface of Medpor. Results: The patients were operated from January 1998 to December 2004 with an average follow-up period of 30 months. In all cases the implant was well accepted, allowing for a good adaptation of the prosthesis. Late exposure of the implant occurred in two patients: the first was corrected by the replacement of the implant in the cavity, while in the second case the exposure was wrapped with a tarsus flap. Conclusions: In our experience the Medpor implant presents good acceptance, with a small exposure rate (only 2 cases), no infection or migration cases, and a good prosthesis adaptation in all patients. The use of autologous tissue to wrap the implant's anterior surface eliminates the inherent risk of using homologous tissues. abstract_id: PUBMED:16736619 Analysis of high density porous polyethylene (Medpor) orbital implant in 266 cases Objective: To evaluate the postoperative complications, the appearance and motility in patients who underwent placement of Medpor orbital implant after enucleation, or secondary implantion. Methods: A retrospective study was carried out on the 266 cases in whom a Medpor orbital implant was implanted from 2001 to 2004. Results: All the cases were followed up from 3 to 26 months. A total of 23 in 266 (8.6%) cases had documented postoperative complications: inflammatory granuloma in 2 cases (0.8%), and moderate or severe ptosis in 4 cases (1.5%), and remained upper eyelid notch with mild ptosis in 8 cases (3.0%), and lower eyelid laxity in 2 cases (0.8%). In 5 cases (1.9%), implant exposure developed: 3 cases were resolved with conservative therapy, and 2 cases with surgical revision of the socket. 2 cases experienced persistent pyogenic discharge in conjunctival sac, and the implant was took out because the discharge couldn't control with local and systemic antibiotic therapy. The motility is excellent in 148 cases and good in 11 cases after the primary obital implant, and excellent in 36 cases and good in 68 cases and bad in 3 case after the secondary implant. The difference between the primary and the secondary obital implant is obvious. Conclusions: It is considered that Medpor orbital implant is a good method of ocular plasty. Although there are some mild postoperative complications, but those can be resolved with conservative therapy or surgical revision. Medpor orbital implant has good histocompatibility, and the operative processes are convenient. Patients are satisfied with appearance and motility. So the Medpor orbital implant is one of the best selection in ocular socket plasty. abstract_id: PUBMED:27105804 Lower Eyelid Reconstruction. Lower eyelid defects are common, and a systematic approach to reconstruction of the lower eyelid is required. Attention to the bilaminar eyelid anatomy and canthal support structures, with efforts to maintain functionally important structures, such as the lacrimal canalicular system, is vital to appropriate lower eyelid reconstruction. Techniques of advancement and rotation flaps and grafting of skin and mucosa are mainstays of lower eyelid reconstruction. An appropriate armamentarium of techniques allows for optimal surgical results. abstract_id: PUBMED:24796200 Combination of high porous polyethylene lower eyelid spacers and lateral tarsal-strip procedure for reconstruction of eyelid closure function in paralytic lagophthalmus after facial palsy Objective: To evaluate the effectivness of using high porous polyethylene lower eyelid spacers (Medpor LES) combined with the lateral tarsal-strip procedure for reconstruction of the eyelid closure function in paralytic lagophthalmus after facial palsy. Methods: Between March 2008 and December 2012, 32 patients (32 eyelids) with hypophasis in facial palsy which all sorts of causes lead to were treated with Medpor LES combined with the lateral tarsal-strip procedure. Of 32 cases, 20 were male and 12 were female, aged from 20 to 72 years (mean, 46.8 years). The left eye was involved in 18 cases and the right eye in 14 cases with a disease duration of 1.5 months to 2 years (mean, 4.4 months). All the patients were shown as lower eyelid ectropion, lower eyelid retraction, exposure keratitis, and corneal ulcer, but no obvious upper eyelid retraction was observed. Results: Primary healing of incision was obtained in all patients. The edema time of the eyelid was from 5 days to 3 weeks (mean, 2 weeks). Conjunctival edema appeared in 4 cases after 2 weeks of operation, which was cured after expectant treatment. The patients were followed up 5-8 months (mean, 6 months). At 1 week, 3 and 6 months after operation, the height of palpebral fissure was reduced, and the degree of lagophthalmus and low eyelid retraction were significantly corrected (P &lt; 0.05), but no significant difference among different time points after operation (P &gt; 0.05). Outer canthus displacement occurred in 3 cases at 1 month postoperatively, and was cured after the lateral tarsal strip procedure. No lower eyelid ectropion, corneal exposure, or Medpor LES displacement and exposure occurred during follow-up. Conclusion: Medpor LES combined with the lateral tarsal-strip procedure has good effectiveness for reconstruction of eyelid closure function in most cases of paralytic lagophthalmus after facial palsy. Answer: Yes, the Medpor lower eyelid spacer does biointegrate. Histopathologic findings of explanted Medpor lower eyelid spacers (LES) in complicated cases showed fibrosis and vascularization, indicating biointegration. Although the implants did not appear vascularized clinically at the time of removal, immunohistochemistry was positive for Factor VIII related antigen and CD34, highlighting the presence of vessels in the pores and around the implant (PUBMED:19229747).
Instruction: Interpretation of bedside chest X-rays in the ICU: is the radiologist still needed? Abstracts: abstract_id: PUBMED:26316460 Interpretation of bedside chest X-rays in the ICU: is the radiologist still needed? Purpose: To compare diagnostic accuracy of intensivists to radiologists in reading bedside chest X-rays. Methods: In a retrospective trial, 33 bedside chest X-rays were evaluated by five radiologists and five intensivists with different experience. Images were evaluated for devices and lung pathologies. Interobserver agreement and diagnostic accuracy were calculated. Computed tomography served as reference standard. Results: Seniors had higher diagnostic accuracy than residents (mean-ExpB(Senior)=1.456; mean-ExpB(Resident)=1.635). Interobserver agreement for installations was more homogenously distributed between radiologists compared to intensivists (ExpB(Rad)=1.204-1.672; ExpB(Int)=1.005-2.368). Seniors had comparable diagnostic accuracy. Conclusion: No significant difference in diagnostic performance was seen between seniors of both disciplines, whereas the resident intensivists might still benefit from an interdisciplinary dialogue. abstract_id: PUBMED:27262231 Accuracy of remote chest X-ray interpretation using Google Glass technology. Objectives: We sought to explore the accuracy of remote chest X-ray reading using hands-free, wearable technology (Google Glass, Google, Mountain View, California). Methods: We compared interpretation of twelve chest X-rays with 23 major cardiopulmonary findings by faculty and fellows from cardiology, radiology, and pulmonary-critical care via: (1) viewing the chest X-ray image on the Google Glass screen; (2) viewing a photograph of the chest X-ray taken using Google Glass and interpreted on a mobile device; (3) viewing the original chest X-ray on a desktop computer screen. One point was given for identification of each correct finding and a subjective rating of user experience was recorded. Results: Fifteen physicians (5 faculty and 10 fellows) participated. The average chest X-ray reading score (maximum 23 points) as viewed through the Google Glass, Google Glass photograph on a mobile device, and the original X-ray viewed on a desktop computer was 14.1±2.2, 18.5±1.5 and 21.3±1.7, respectively (p&lt;0.0001 between Google Glass and mobile device, p&lt;0.0001 between Google Glass and desktop computer and p=0.0004 between mobile device and desktop computer). Of 15 physicians, 11 (73.3%) felt confident in detecting findings using the photograph taken by Google Glass as viewed on a mobile device. Conclusion: Remote chest X-ray interpretation using hands-free, wearable technology (Google Glass) is less accurate than interpretation using a desktop computer or a mobile device, suggesting that further technical improvements are needed before widespread application of this novel technology. abstract_id: PUBMED:31446493 Evaluation of a computer-aided method for measuring the Cobb angle on chest X-rays. Objectives: To automatically measure the Cobb angle and diagnose scoliosis on chest X-rays, a computer-aided method was proposed and the reliability and accuracy were evaluated. Methods: Two Mask R-CNN models as the core of a computer-aided method were used to separately detect and segment the spine and all vertebral bodies on chest X-rays, and the Cobb angle of the spinal curve was measured from the output of the Mask R-CNN models. To evaluate the reliability and accuracy of the computer-aided method, the Cobb angles on 248 chest X-rays from lung cancer screening were measured automatically using a computer-aided method, and two experienced radiologists used a manual method to separately measure Cobb angles on the aforementioned chest X-rays. Results: For manual measurement of the Cobb angle on chest X-rays, the intraclass correlation coefficients (ICC) of intra- and inter-observer reliability analysis was 0.941 and 0.887, respectively, and the mean absolute differences were &lt; 3.5°. The ICC between the computer-aided and manual methods for Cobb angle measurement was 0.854, and the mean absolute difference was 3.32°. These results indicated that the computer-aided method had good reliability for Cobb angle measurement on chest X-rays. Using the mean value of Cobb angles in manual measurements &gt; 10° as a reference standard for scoliosis, the computer-aided method achieved a high level of sensitivity (89.59%) and a relatively low level of specificity (70.37%) for diagnosing scoliosis on chest X-rays. Conclusion: The computer-aided method has potential for automatic Cobb angle measurement and scoliosis diagnosis on chest X-rays. These slides can be retrieved under Electronic Supplementary Material. abstract_id: PUBMED:36335600 Efficacy of chest X-rays after drain removal in adult and pediatric patients undergoing cardiac and thoracic surgery: A systematic review. Background: Chest X-rays are routinely obtained after the removal of chest drains in patients undergoing cardiac and thoracic surgical procedures. However, a lack of guidelines and evidence could question the practice. Routine chest X-rays increase exposure to ionizing radiation, increase health-care costs, and lead to overutilisation of available resources. This review aims to explore the evidence in the literature regarding the routine use of chest X-rays following the removal of chest drains. Materials & Method: A systematic literature search was conducted in PubMed, Medline via Ovid, Cochrane central register of control trials (CENTRAL), and ClinicalTrials. gov without any limit on the publication year. The references of the included studies are manually screened to identify potentially eligible studies. Results: A total of 375 studies were retrieved through the search and 18 studies were included in the review. Incidence of pneumothorax remains less than 10% across adult cardiac, and pediatric cardiac and thoracic surgical populations. The incidence may be as high as 50% in adult thoracic surgical patients. However, the reintervention rate remains less than 2% across the populations. Development of respiratory and cardiovascular symptoms can adequately guide for a chest X-ray following the drain removal. As an alternative, bedside ultrasound can be used to detect pneumothorax in the thorax after the removal of a chest drain without the need for ionizing radiation. Conclusion: A routine chest X-ray following chest drain removal in adult and pediatric patients undergoing cardiac and thoracic surgery is not necessary. It can be omitted without compromising patient safety. Obtaining a chest X-ray should be clinically guided. Alternatively, bedside ultrasound can be used for the same purpose without the need for radiation exposure. abstract_id: PUBMED:37244797 Patient rotation chest X-rays and the consequences of misinterpretation in paediatric radiology. Purpose: We aimed to demonstrate the consequences of rotation on neonatal chest radiographs and how it affects diagnosis. In addition, we demonstrate methods for determining the presence and direction of rotation. Background: Patient rotation is common in chest X-rays of neonates. Rotation is present in over half of chest X-rays from the ICU, contributed to by unwillingness of technologists to reposition new-borns for fear of dislodging lines and tubes. There are six main effects of rotation on supine paediatric chest X-rays: 1) unilateral hyperlucency of the side that the patient is rotated towards; 2) the side 'up' appears larger; 3) apparent deviation of the cardiomediastinal shadow in the direction that the chest is rotated towards; 4) apparent cardiomegaly; 5) distorted cardio-mediastinal configuration; and 6) reversed position of the tips of the umbilical artery and vein catheters with rotation to the left. These effects can cause diagnostic errors due to misinterpretation, including air-trapping, atelectasis, cardiomegaly, and pleural effusions, or disease may be masked. We demonstrate the methods of evaluating rotation with examples, including a 3D model of the bony thorax as a guide. In addition, multiple examples of the effects of rotation are provided including examples where disease was misinterpreted, underestimated or masked. Conclusion: Rotation is often unavoidable in neonatal chest X-rays, especially in the ICU. It is therefore important for physicians to recognise rotation and its effects, and to be aware that it can mimic or mask disease. abstract_id: PUBMED:32986183 Evaluation of dose reduction potential in scatter-corrected bedside chest radiography using U-net. Bedside radiography has increasingly attracted attention because it allows for immediate image diagnosis after X-ray imaging. Currently, wireless flat-panel detectors (FPDs) are used for digital radiography. However, adjustment of the X-ray tube and FPD alignment are extremely difficult tasks. Furthermore, to prevent a poor image quality caused by scattered X-rays, scatter removal grids are commonly used. In this study, we proposed a scatter-correction processing method to reduce the radiation dose when compared with that required by the X-ray grid for the segmentation of a mass region using deep learning during bedside chest radiography. A chest phantom and an acrylic cylinder simulating the mass were utilized to verify the image quality of the scatter-corrected chest X-rays with a low radiation dose. In addition, we used the peak signal-to-noise ratio and structural similarity to quantitatively assess the quality of the low radiation dose images compared with normal grid images. Furthermore, U-net was used to segment the mass region during the scatter-corrected chest X-ray with a low radiation dose. Our results showed that when scatter correction is used, an image with a quality equivalent to that obtained by grid radiography is produced, even when the imaging dose is reduced by approximately 20%. In addition, image contrast was improved using scatter radiation correction as opposed to using scatter removal grids. Our results can be utilized to further develop bedside chest radiography systems with reduced radiation doses. abstract_id: PUBMED:29708195 Can portable tomosynthesis improve the diagnostic value of bedside chest X-ray in the intensive care unit? A proof of concept study. Portable bedside chest X-ray (CXR) is an important and frequently used tool in the intensive care unit (ICU). Unfortunately, the diagnostic value of portable CXR is often low due to technical limitations and suboptimal patient positioning. Additionally, abnormalities in the chest may be hidden on the projection image by overlapping anatomy and devices such as endotracheal tubes, lines and catheters. Digital tomosynthesis (DTS) can solve the problem of anatomical overlap. In DTS, several low-dose X-ray images from different angles are acquired and subsequently used by a reconstruction algorithm to compute section images along planes parallel to the detector. However, a portable device to be used for portable bedside chest DTS is not on the market yet. In this work, we discuss modifications to a portable X-ray device to enable portable DTS and illustrate the potential of portable DTS to improve the diagnostic value of bedside CXR in the ICU. A simulation, based on computed tomography scans, is presented. Our experiments comparing portable DTS with conventional bedside CXR showed a substantially improved detection of pneumothorax and other abnormalities. abstract_id: PUBMED:35870303 Semantic segmentation of bone structures in chest X-rays including unhealthy radiographs: A robust and accurate approach. The chest X-ray is a widely used medical imaging technique for the diagnosis of several lung diseases. Some nodules or other pathologies present in the lungs are difficult to visualize on chest X-rays because they are obscured byoverlying boneshadows. Segmentation of bone structures and suppressing them assist medical professionals in reliable diagnosis and organ morphometry. But segmentation of bone structures is challenging due to fuzzy boundaries of organs and inconsistent shape and size of organs due to health issues, age, and gender. The existing bone segmentation methods do not report their performance on abnormal chest X-rays, where it is even more critical to segment the bones. This work presents a robust encoder-decoder network for semantic segmentation of bone structures on normal as well as abnormal chest X-rays. The novelty here lies in combining techniques from two existing networks (Deeplabv3+ and U-net) to achieve robust and superior performance. The fully connected layers of the pre-trained ResNet50 network have been replaced by an Atrous spatial pyramid pooling block for improving the quality of the embedding in the encoder module. The decoder module includes four times upsampling blocks to connect both low-level and high-level features information enabling us to retain both the edges and detail information of the objects. At each level, the up-sampled decoder features are concatenated with the encoder features at a similar level and further fine-tuned to refine the segmentation output. We construct a diverse chest X-ray dataset with ground truth binary masks of anterior ribs, posterior ribs, and clavicle bone for experimentation. The dataset includes 100 samples of chest X-rays belonging to healthy and confirmed patients of lung diseases to maintain the diversity and test the robustness of our method. We test our method using multiple standard metrics and experimental results indicate an excellent performance on both normal and abnormal chest X-rays. abstract_id: PUBMED:38106737 Clinicians' Interpretation of Unreported Chest Radiographs in Biologic Prescription Workup Service: A Comprehensive Analysis. Clinicians without a radiology specialization face difficulties when they attempt to interpret chest X-rays (CXRs), a crucial and extensively utilized diagnostic tool that plays a fundamental role in the detection of pulmonary and cardiovascular disorders. This cross-sectional study assessed the confidence and competence of clinicians, including junior specialty trainees, higher specialty trainees, and specialist nurses, in interpreting CXRs before starting biological treatment. An online survey was used to collect data from clinicians in various healthcare settings, focusing on their experience, training, confidence levels, and CXR interpretation proficiency. The survey uncovered clinicians' insufficient confidence in interpreting the pre-biological screening CXRs despite their clinical expertise. This uncertainty raises concerns about potential misinterpretations, affecting timely treatment decisions. A Kruskal-Wallis test indicated a significant difference between training levels required with a p-value of 0.001, rejecting the null hypothesis. Subsequently, a Dunn-Bonferroni test revealed that the higher specialty trainee-specialist nurse pair differed significantly, with the specialist nurse group requiring more training. This study highlighted the need for enhanced radiology education for clinicians involved in chest radiograph interpretation for pre-biological screening. Implementing a structured training program is essential to improve skills and ensure accurate interpretation of non-formally reported chest radiographs, ultimately enhancing patient outcomes and healthcare practices. abstract_id: PUBMED:34173055 Routine chest X-rays after pigtail chest tube removal rarely change management in children. Background: The need for chest X-rays (CXR) following large-bore chest tube removal has been questioned; however, the utility of CXRs following removal of small-bore pigtail chest tubes is unknown. We hypothesized that CXRs obtained following removal of pigtail chest tubes would not change management. Methods: Patients &lt; 18 years old with pigtail chest tubes placed 2014-2019 at a tertiary children's hospital were reviewed. Exclusion criteria were age &lt; 1 month, death or transfer with a chest tube in place, or pigtail chest tube replacement by large-bore chest tube. The primary outcome was chest tube reinsertion. Results: 111 patients underwent 123 pigtail chest tube insertions; 12 patients had bilateral chest tubes. The median age was 5.8 years old. Indications were pneumothorax (n = 53), pleural effusion (n = 54), chylothorax (n = 6), empyema (n = 5), and hemothorax (n = 3). Post-pull CXRs were obtained in 121/123 cases (98.4%). The two children without post-pull CXRs did not require chest tube reinsertion. Two patients required chest tube reinsertion (1.6%), both for re-accumulation of their chylothorax. Conclusions: Post-pull chest X-rays are done nearly universally following pigtail chest tube removal but rarely change management. Providers should obtain post-pull imaging based on symptoms and underlying diagnosis, with higher suspicion for recurrence in children with chylothorax. Answer: The interpretation of bedside chest X-rays in the ICU is a critical task, and the question of whether a radiologist is still needed for this can be informed by several studies. A retrospective trial compared the diagnostic accuracy of intensivists to radiologists in reading bedside chest X-rays, with computed tomography serving as the reference standard. The study found that senior intensivists and radiologists had comparable diagnostic accuracy, while resident intensivists might still benefit from an interdisciplinary dialogue with radiologists (PUBMED:26316460). This suggests that while experienced intensivists can interpret chest X-rays with a level of accuracy similar to radiologists, less experienced intensivists may still require the expertise of radiologists. Another study evaluated the accuracy of remote chest X-ray reading using Google Glass technology and found that it was less accurate than interpretation using a desktop computer or a mobile device (PUBMED:27262231). This indicates that while technology is advancing, the role of a radiologist in providing accurate interpretations remains important, especially when using less conventional methods. The efficacy of chest X-rays after drain removal in adult and pediatric patients undergoing cardiac and thoracic surgery was also reviewed, suggesting that routine chest X-rays may not be necessary and can be replaced by clinically guided imaging or bedside ultrasound (PUBMED:36335600). This could imply that the need for radiologist interpretation in such scenarios may be reduced if alternative imaging strategies are employed effectively. Additionally, the impact of patient rotation on the interpretation of neonatal chest radiographs was examined, highlighting the importance of recognizing rotation and its effects to avoid misinterpretation (PUBMED:37244797). This complexity in interpretation underscores the value of radiologists' expertise in neonatal and pediatric care settings. In conclusion, while certain scenarios may allow for non-radiologists to accurately interpret bedside chest X-rays, particularly with the aid of advanced imaging techniques and in the hands of experienced clinicians, the expertise of radiologists remains crucial in many instances to ensure accurate diagnosis and patient safety. The interdisciplinary dialogue between intensivists and radiologists, as well as the consideration of alternative imaging strategies, can optimize patient care in the ICU.
Instruction: Management of epistaxis in Rendu-Osler disease: is brachytherapy effective? Abstracts: abstract_id: PUBMED:30908653 Sclerotherapy and Topical Nasal Propranolol: An Effective and Safe Therapy for HHT-Epistaxis. Objectives: Epistaxis is the most frequent clinical manifestation of hereditary hemorrhagic telangiectasia (HHT). Several topical, systemic, and surgical treatments have been tried, but none have been completely effective. The aim of the present study is to evaluate whether a combined treatment sclerotherapy and topical therapy with propranolol 0.5% nasal formulation would reduce the epistaxis due to HHT and improve patient's quality of life. Methods: An observational cross-sectional study was carried out. The primary outcome measure was frequency and severity of epistaxis as measured by the epistaxis severity score (ESS) at baseline (4 weeks before therapy) and at least 4 weeks after the treatment was implemented. Quality of life was analyzed using EuroQol-5D (EQ-5D) scale and visual analogue (VAS) scale before and after treatment. Results: A total of 38 consecutive patients subjected to the combined treatment were evaluated (mean age: 57.2 years, standard deviation [SD] = 13.9; 60.5% women). The mean time of treatment was 37.1 weeks (SD = 14.9). Combined therapy significantly reduces frequency and severity of epistaxis, with an ESS improvement of 5 points from 6.9 ± 2.6 to 1.9 ± 1.3 (P &lt; 0.05); however, the EQ-5D scale increased from 0.66 ± 0.27 to 0.93 ± 0.12 (P &lt; 0.05). The difference in VAS means showed an increase from 44.6 ± 28.3 to 82.5 ± 12.5 (P &lt; 0.05). The increases in quality of life are in line with the drop in ESS. Conclusion: The study demonstrated that combined therapy (sclerotherapy and topical nasal propranolol) significantly reduced the epistaxis due to HHT and increased patients' quality of life. Level Of Evidence: 2c Laryngoscope, 129:2216-2223, 2019. abstract_id: PUBMED:23066781 Argon plasma coagulation is an effective treatment for hereditary hemorrhagic telangiectasia patients with severe nosebleeds. Conclusions: In contrast to the current trend according to which the treatment of hereditary hemorrhagic telangiectasia (HHT) epistaxis depends on clinical severity, argon plasma coagulation (APC) has also proven to be effective as a first-line procedure in patients with severe nosebleeds. Furthermore, with this approach patients are free from requirements for blood transfusions for a long time in the vast majority of cases. Objective: The aim of this study was to test the efficacy of APC treatment as a first-line procedure in HHT patients affected by severe epistaxis. Methods: From 1996 until 2011, 252 HHT patients were treated with APC in our clinic. We selected 26 patients with severe epistaxis for whom the need for blood transfusion had been recorded. This group of patients was asked to answer a questionnaire that aimed to evaluate the severity of epistaxis (defined by its intensity, frequency, and duration), to assess the duration of the benefit of treatment and to evaluate the number of blood transfusions required before and after treatment. Results: After APC treatment, a statistically significant decrease in all epistaxis parameters was recorded and most patients did not need blood transfusions for several years after the procedure. abstract_id: PUBMED:18031399 Rupture of hepatic aneurysm complicating hereditary hemorrhagic telangiectasia (Osler-Weber-Rendu disease) for which hepatic arterial coil embolization was effective. A 74-year-old woman was transferred to our hospital for further examinations because of abdominal fullness and abnormal levels of serum liver/biliary enzyme persisting for 3 weeks. She had anemia and dilatation of many capillary vessels in her fingers, palms, and tongue in addition to reporting frequent incidences of nasal bleeding in herself and her family. Abdominal ultrasonography detected a cystic lesion in the right hepatic lobe, connected to a dilated tortuous hepatic artery. A low-echoic hepatic phyma was also detected in the back of the cystic lesion. Abdominal computed tomography and magnetic resonance imaging indicated that the cystic lesion was an aneurysm and the low-echoic phyma was a hematoma. Hepatic arteriography confirmed a hepatic aneurysm, tortuous dilatation of the hepatic artery, and the complication of an arteriovenous shunt in the liver. Taking all of these findings into consideration, this case was diagnosed as hereditary hemorrhagic telangiectasia (HTT) complicated by a hepatic aneurysm causing intrahepatic hematoma. To prevent re-rupture of the aneurysm, we performed a hepatic arterial coil embolization. After therapy, no blood flow to the aneurysm was detected by ultrasonic color Doppler method and the hematoma gradually diminished. There have been no reports of a case in which hepatic arterial embolization was effective for HHT-associated hepatic aneurysm causing intrahepatic hematoma. This very rare case provides important clinical information regarding abdominal vascular complications of HTT and a less invasive treatment for them. abstract_id: PUBMED:8262830 Management of epistaxis in Rendu-Osler disease: is brachytherapy effective? Purpose: This paper reviews the results of intranasal brachytherapy for epistaxis in 43 patients with Rendu-Osler disease treated between 1971-1991 at Henri Mondor Hospital. Methods And Materials: 2-3 intranasal catheters were afterloaded with 192Ir sources. Computer dosimetry was performed and then the dose was prescribed to an isodose thought to cover the nasal mucosa. The dose rate ranged from 0.16 Gy/h-0.63 Gy/h with a median of 0.34 Gy/h. Dose at one application ranged from 15-35 Gy with a median of 30 Gy. The severity of epistaxis was graded 1 to 5. Results: The time to recurrence of significant epistaxis ranged from 6-178 months with a median of 24 months. The dose prescribed did not correlate with control rate. The only brachytherapy complication was septal perforation in 4 patients; in one this was a result of repeated nasal coagulation. Conclusion: We suggest that intranasal brachytherapy is a useful modality in the management of epistaxis in Rendu-Osler disease. abstract_id: PUBMED:27796042 Is bevacizumab effective for reducing epistaxis in hereditary hemorrhagic telangiectasia? N/A abstract_id: PUBMED:29451965 Pazopanib effective for bevacizumab-unresponsive epistaxis in hereditary hemorrhagic telangiectasia. Hereditary hemorrhagic telangiectasia (HHT) most commonly manifests with nasal mucosal telangiectasias, and vascular endothelial growth factor (VEGF) plays a significant role in this angiodysplasia. We describe a patient with HHT with epistaxis recalcitrant to several endonasal procedures and six cycles of intravenous bevacizumab, for which he was dependent on iron infusions and packed red blood cells transfusions. He then started pazopanib at 100 mg with dramatic improvements in epistaxis and normalization of hemoglobin and iron levels, without replenishment needs for 12 months. This is the first report on the efficacy of pazopanib with high selectivity for abrogating VEGF receptor-2 signaling in HHT, and needs to be explored further. Laryngoscope, 128:2234-2236, 2018. abstract_id: PUBMED:32584331 FID Score: an effective tool in Hereditary Haemorrhagic Telangiectasia - related epistaxis. Background: Hereditary haemorrhagic telangiectasia (HHT) is a rare disease characterized by a multisystemic vascular dysplasia and epistaxis, that is the most common cause of disability and social impairment. Patient management strictly depends on the severity of this symptom; therefore, it is of paramount importance for the clinicians to effectively grade epistaxis severity. The aim of this report was to validate the Frequency, Intensity and Duration score (FID) for grading epistaxis severity in patients with HHT; we studied repeatability and external validity comparing FID score with Epistaxis Severity Score (ESS). Methods: This is a descriptive, observational study that included 264 adult HHT patients with epistaxis. Diagnosis of HHT was established with Curacao criteria or positivity at genetic testing. Nosebleed severity was evaluated according to the FID score and the ESS. The first 30 patients were included in the validation of the FID score, which was graded on days 0, 1, 3 and 7. In the remaining 234 patients, a comparison between the ESS and FID score was performed. Results: The statistical analysis performed in order to validate the FID score showed very good agreement between scores calculated on different days; analysis comparing the FID score with the ESS revealed a high correlation between the two grading systems. Conclusions: The FID score is a quick, easy and precise tool for evaluating HHT-related epistaxis and could be a possible alternative to the ESS. The FID score meets the need for an intuitive and smart grading system that is easy to manage in clinicians’ hands. abstract_id: PUBMED:22138950 Low dose of bevacizumab is safe and effective in preventing bleeding episodes in hereditary hemorrhagic telangiectasia. N/A abstract_id: PUBMED:20824275 Anti-VEGF with 3-week intervals is effective on anemia in a patient with severe hereditary hemorrhagic telangiectasia. N/A abstract_id: PUBMED:21792959 A new method of nasal dermoplasty (MW grafting method) in patients with large septal perforation. Nasal dermoplasty is effective in controlling epistaxis in patients with hereditary hemorrhagic telangiectasia (HHT). Skin graft take failure occurs mostly in cases of large septal perforation. The MW method was developed as a modification of nasal dermoplasty designed for patients with HHT having a large septal perforation. It seems to be a safe and effective approach and should be tried for recurrent bleeders with septal perforation. Answer: Yes, brachytherapy appears to be an effective treatment for managing epistaxis in Rendu-Osler disease, also known as hereditary hemorrhagic telangiectasia (HHT). According to a study that reviewed the results of intranasal brachytherapy in 43 patients with Rendu-Osler disease treated between 1971-1991 at Henri Mondor Hospital, the time to recurrence of significant epistaxis ranged from 6-178 months with a median of 24 months. The prescribed dose did not correlate with the control rate, and the only complication noted was septal perforation in 4 patients, which in one case was a result of repeated nasal coagulation. The study suggests that intranasal brachytherapy is a useful modality in the management of epistaxis in Rendu-Osler disease (PUBMED:8262830).
Instruction: Does a brief training on suicide prevention among general hospital personnel impact their baseline attitudes towards suicidal behavior? Abstracts: abstract_id: PUBMED:17056123 Does a brief training on suicide prevention among general hospital personnel impact their baseline attitudes towards suicidal behavior? Objective: To evaluate a brief training program on suicide prevention for front-line general hospital personnel in terms of its impact on their attitudes and beliefs towards suicidality. Method: Forty non-clinical (e.g., security staff), and 102 clinical (e.g., nursing attendants) professionals employed in a university hospital in Brazil were evaluated with the Suicide Behavior Attitude Questionnaire [SBAQ] before the start and immediately after a 3-hour training on suicide prevention. Results: Surprisingly, there were no significant differences for the great majority of the SBAQ items (i.e., 20 of 21) between clinical and non-clinical staff both pre- and post-training. Furthermore, their attitudes and beliefs towards suicidality were significantly improved after training in the majority of SBAQ items, with p-values ranging from 0.01 to &lt;0.0001. Limitations: Relatively small sample size, and absence of a randomized controlled design and long-term follow-up. Conclusions: Our study suggests that attitudes and beliefs of clinical and non-clinical general hospital personnel towards suicidality were unexpectedly similar, and reinforces the need for suicide awareness training programs in the general hospital setting. Additionally, we have shown that it is feasible to provide such basic knowledge concerning suicidal behavior to this specific population. This is particularly important for the development of suicide prevention programs in under-resourced countries. However, our results are preliminary, and further studies are needed to address a number of important questions in the field. abstract_id: PUBMED:25296153 Impact of "+Contigo" training on the knowledge and attitudes of health care professionals about suicide. Objectives: to evaluate the results of "+Contigo" training, developed by nurses and directed at 66 health professionals of integrated school health teams in Primary Health Care. Method: quantitative with data collection through the Suicide Behavior Attitude Questionnaire, administered before and after the training. Results: significant increases were observed in suicide prevention knowledge and in changing attitudes of health professionals towards individuals with suicidal behavior. Conclusion: these results allow us to affirm that nurses hold scientific and pedagogical knowledge that grant them a privileged position in the health teams, to develop training aimed at health professionals involved in suicide prevention. abstract_id: PUBMED:32166827 Two-day assertive-case-management educational program for medical personnel to prevent suicide attempts: A multicenter pre-post observational study. Aim: Suicide attempters have a high risk of repeated suicide attempts and completed suicide. There is evidence that assertive case management can reduce the incidence of recurrent suicidal behavior among suicide attempters. This study evaluated the effect of an assertive-case-management training program. Methods: This multicenter, before-and-after study was conducted at 10 centers in Japan. Participants were 274 medical personnel. We used Japanese versions of the Attitudes to Suicide Prevention Scale, the Gatekeeper Self-Efficacy Scale, the Suicide Intervention Response Inventory (SIRI), and the Attitudes Toward Suicide Questionnaire. We evaluated the effects with one-sample t-tests, and examined prognosis factors with multivariable analysis. Results: There were significant improvements between pre-training and post-training in the Attitudes to Suicide Prevention Scale (mean: -3.07, 95% confidence interval [CI]: -3.57 to -2.57, P &lt; 0.001), the Gatekeeper Self-Efficacy Scale (mean: 10.40, 95%CI: 9.48 to 11.32, P &lt; 0.001), SIRI-1 (appropriate responses; mean: 1.15, 95%CI: 0.89 to 1.42, P &lt; 0.001), and SIRI-2 (different to the expert responses; mean: -4.78, 95%CI: -6.18 to -3.38, P &lt; 0.001). Significant improvements were found on all Attitudes Toward Suicide Questionnaire subscale scores, except Unjustified Behavior. The effect of training was influenced by experience of suicide-prevention training and experience of working with suicidal patients. Conclusion: The training program (which was developed to implement and disseminate evidence-based suicide-prevention measures) improved attitudes, self-efficacy, and skills for suicide prevention among medical personnel. Specialized suicide-prevention training and experience with suicidal patients are valuable for enhancing positive attitudes and self-efficacy; furthermore, age and clinical experience alone are insufficient for these purposes. abstract_id: PUBMED:36180366 Experience and attitudes of pharmacy teams towards suicide prevention: A cross-sectional survey. Background: Community pharmacists are increasingly recognized as integral members in suicide prevention programs, as part of a multidisciplinary and multifaceted approach. However, further research is required to understand then optimize the whole pharmacy teams' role across sectors. Objective: To explore pharmacy teams' experience of, and attitudes towards, suicide prevention in England. Methods: A cross-sectional survey was purposively distributed to pharmacy staff in England before accessing an optional suicide awareness raising video, hosted by Centre for Pharmacy Postgraduate Education (CPPE), in September 2019-March 2021. Questions included demographics and experience of, attitudes towards, and preparedness for, suicide prevention. The 14-item Attitudes to Suicide Prevention (ASP) scale was used (possible range 14-70 with lower scores representing positive attitudes). Descriptive and comparative statistics were reported. Free-text comments were invited to explore respondents' experience of suicide prevention and reflexive thematic analysis used. Results: Of 403 respondents, 82% were female; most were pharmacists (59%) or pharmacy technicians (21%), with the remainder having other roles. Eighty-five percent worked in community pharmacy. Eleven percent had prior suicide prevention training, and 71% reported interacting with at least one patient about suicide. Most often, suicidality was disclosed by the patient (40%), with 6% of pharmacy staff having directly asked a patient about suicidal behavior or plans. The aggregated ASP score was 31.51 (SD 6.23), and role did not affect experience or attitude. Pharmacy teams' experiences of suicide prevention can be summarized by three major themes i) Exposure to suicide; ii) Responsibility for action; and iii) Access to means of suicide. Conclusions: Pharmacy teams felt responsibility in caring for those at risk of suicide and had experience of this. Further training should include understanding of medicines means restriction and involve all roles and sectors of pharmacy. Pharmacy teams should be integrated into the 'circle of care' to access referral pathways. abstract_id: PUBMED:28335446 Knowledge, Self-Confidence and Attitudes towards Suicidal Patients at Emergency and Psychiatric Departments: A Randomised Controlled Trial of the Effects of an Educational Poster Campaign. Educational posters are used to enhance knowledge, attitudes and self-confidence of patients. Little is known on their effectiveness for educating health care professionals. As these professionals may play an important role in suicide prevention, the effects of a poster and accompanying evaluation and triage guide on knowledge, self-confidence and attitudes regarding suicidal thoughts and behaviours, were studied in a multicentre cluster randomised controlled trial, involving staff from 39 emergency and 38 psychiatric departments throughout Flanders (n = 1171). Structured self-report questionnaires assessed the knowledge, confidence and beliefs regarding suicidal behaviour management, and attitudes. Data were analysed through a Solomon four-group design, with random assignment to the different conditions. Baseline scores for knowledge and provider confidence were high. The poster and accompanying evaluation and triage guide did not have an effect on knowledge about suicide and self-confidence in suicidal behaviour management. However, the poster campaign appeared to be beneficial for attitudes towards suicidal patients, but only among staff from mental health departments that were assigned to the un-pretested condition. Given the limited effects of the poster campaign in the studied population with a relatively high baseline knowledge, the evaluation of this poster as part of a multimodal educational programme in a more heterogeneous sample of health care professionals is recommended. abstract_id: PUBMED:29393518 A brief peer gatekeeper suicide prevention training: Results of an open pilot trial. Objective: The purpose of the present study was to test a 1-hour peer suicide gatekeeper training for students from the broad college community in the context of an open pilot trial. Method: Two-hundred and thirty-one college students were recruited university-wide, Mage = 20.7, 65.4% female, and completed a peer suicide prevention gatekeeping training program. Assessments were completed at pre-training and post-training as well as 3-month follow-up. Results: This brief peer suicide gatekeeper training program was associated with increases in suicide prevention knowledge. It was also associated with an increase in the number of students who identified suicidal youth and made mental health referrals, as well as total number of referrals made, over the course of three months. Females reported greater improvement in suicide prevention skills and knowledge post-training than males. Conclusions: Offering peer suicide gatekeeper training to students from the general college population may hold promise in suicide prevention efforts. abstract_id: PUBMED:27230910 Attitudes towards suicide in urban and rural China: a population based, cross-sectional study. Background: Suicide intervention programs have been guided by findings that attitude towards suicide and suicidal behavior may be causally linked. These findings also make it imperative to identify the factors that influence attitudes towards suicide. However, there has been little research on attitudes towards suicide among the general population, especially in low-income and middle-income countries. This population-based, cross-sectional study investigated the associated factors of attitudes towards suicide among a representative sample of urban and rural adult residents in China. Methods: A multi-stage, stratified random sampling approach was implemented to select participants. Data were collected by a survey using the Scale of Public Attitudes about Suicide (SPAS). The survey also collected some socio-demographic factors and suicidal history of participants. Statistical tests were conducted to identify associated factors that account for variations in attitudes towards suicide. Results: The residents in China generally hold a neutral attitude towards suicide. Attitudes towards suicide among Chinese residents were associated with age, duration of formal education, marital status, job and suicidal ideation. Different attitudinal subscales seemed not to share the same risk factors. However, gender, ethnicity, religious belief, housing style and economic status might not influence residents' attitudes towards suicide. Attitudes towards suicide among Chinese urban and rural residents generally had no statistical difference with one notable exception: opinions on whether or not suicides and suicide attempts are different phenomena. Conclusions: Age, duration of formal education, marital status, job and suicidal ideation seem to have an impact on attitudes towards suicide among residents. Urban and rural residents have similar attitudes towards suicide with the only statistically significance difference being their opinions on whether or not suicides and suicide attempts are different phenomena. abstract_id: PUBMED:30155558 Confidence and attitudes of pharmacy students towards suicidal crises: patient simulation using people with a lived experience. Purpose: Health care professionals, including pharmacists, have the potential to recognise and assist those at risk of suicide. The primary aim of this study was to assess the impact of utilising people with a lived experience of mental illness as simulated patients on final year pharmacy students' attitudes toward and confidence in caring for people at risk of suicide after first receiving Mental Health First Aid (MHFA) training. Methods: A parallel group repeated measures design was used. People with a lived experience of mental illness enacted patients experiencing a mental health crisis, including possible suicidal ideation. Following MHFA training, the first group directly participated in the simulation, the second group observed, and the final group had no exposure to the simulation. Validated surveys measuring student attitudes and confidence were conducted at three time points; pre and post MHFA, and then at 2-4 weeks follow-up. Results: Full datasets of survey responses were received from 34/40 direct participants (85%), 104/146 observers (71%) and 50/66 comparison students (76%). Mean confidence scores significantly improved for all groups post MHFA training (p &lt; 0.05). At follow-up, all 8 confidence items for the direct participant and observer group maintained significance from baseline to post intervention (p &lt; 0.05). Mixed results in relation to attitudes towards suicide were evident at each time point and among each participant group. Conclusions: Utilising people with a lived experience of mental illness as simulated patients has a positive effect on sustaining pharmacy student confidence in discussing suicidal behaviour post MHFA training. The inconsistency in attitudes towards suicide suggests that attitudes are complex in nature, involving multiple dynamic influences. abstract_id: PUBMED:35703608 Attitudes of health professionals towards suicidal behavior: an intervention study. Objective: To evaluate the effect of an educational intervention on the attitudes of primary healthcare providers regarding patients with suicidal behavior. Methods: Clinical trial randomized by clusters, with a sample of 261 healthcare professionals, from 22 health units selected by stratified sampling, were chosen and randomly allocated, by drawing, into two groups: intervention (n = 87) and control (n = 174). The participants of the intervention group were exposed to a 20-hour training on suicidal behavior. All 261 participants were evaluated before and after the intervention; the groups were compared to evaluate their attitude towards suicidal behavior using the Suicide Behavior Attitude Questionnaire (SBAQ), an evaluation made by comparison of the means via t-Student test, for independent samples, and paired t-test, for dependent samples. Results: The intervention group, in comparison to their evaluation before and after training, as well as in the comparison with the evaluation of the control group, showed statistically significant differences in attitudes towards suicidal behavior, according to the differences presented in the scores for the domains: "perception of professional capacity," in all four items; "negative feeling," in six of the seven items; and in the "right to commit suicide" domain, in three of the five items. Conclusion: The brief training developed in primary health care was effective to improve the attitudes of the participants who were part of the intervention group regarding patients with suicidal behavior. abstract_id: PUBMED:36566696 Assessing knowledge, attitudes, and practices of emergency department staff towards patients with suicidal behaviors in Pakistan. The study aimed to assess knowledge, attitudes and practices of Emergency Department staff towards patients with suicidal behaviors at a tertiary care hospital in Karachi Pakistan. An online survey found that majority of respondents were unaware of the law regarding suicide and self-harm (SH) in Pakistan. Majority had not received any specific training in management of suicidal patients. About 50 % of doctors and 16 % of nurses felt that SH patients are treated 'less seriously than patients with medical problems.' Institutional policy on management and pathway to care for suicidal patients should be disseminated and monitored for its implications in clinical practice. Answer: Yes, a brief training on suicide prevention among general hospital personnel does impact their baseline attitudes towards suicidal behavior. Studies have shown that after such training, there are significant improvements in attitudes and beliefs towards suicidality. For example, a study evaluating a 3-hour training program on suicide prevention for front-line general hospital personnel in Brazil found that their attitudes and beliefs towards suicidality were significantly improved after training across the majority of items on the Suicide Behavior Attitude Questionnaire (SBAQ) (PUBMED:17056123). Similarly, the "+Contigo" training aimed at health professionals in Primary Health Care resulted in significant increases in suicide prevention knowledge and changes in attitudes towards individuals with suicidal behavior (PUBMED:25296153). Furthermore, a two-day assertive-case-management educational program for medical personnel in Japan led to significant improvements in attitudes to suicide prevention, self-efficacy, and skills for suicide prevention (PUBMED:32166827). These findings suggest that even brief interventions can have a positive impact on the attitudes of healthcare professionals towards suicide prevention, which is crucial for the development of effective suicide prevention programs.
Instruction: Isthmic-vaginal smear cytology in the follow-up after radical vaginal trachelectomy for early stage cervical cancer: is it safe? Abstracts: abstract_id: PUBMED:24478265 Isthmic-vaginal smear cytology in the follow-up after radical vaginal trachelectomy for early stage cervical cancer: is it safe? Background: Isthmic-vaginal cytology is a follow-up method in patients who have undergone radical vaginal trachelectomy (RVT) for early cervical cancer. However, to the authors' knowledge, little is known regarding its ability to monitor patients and diagnose disease recurrence. Herein, the authors report their experience with cytology after RVT compared with cytology in patients after cone biopsy and women undergoing screening. Methods: A database of 563 specimens from 303 patients was analyzed retrospectively (RVT in 361 specimens, conization in 102 specimens, and screening in 100 specimens). The following criteria were applied: Bethesda system, the presence of endocervical and metaplasia cells, regeneration criteria, vaginal flora, and morphological signs of human papillomavirus. The analysis was performed by 2 cytopathologists. Differences between the groups and correlation between the cytopathologists were analyzed. Results: Smears without endocervical and metaplasia cells were significantly less frequent among the patients who underwent RVT. There was no difference in regeneration signs, vaginal flora, and morphologic signs of human papillomavirus between the groups. After RVT, 26/23 smears (cytopathologist 1/cytopathologist 2) smears were diagnosed as abnormal. Biopsies revealed 7 cases of dysplasia and 1 case of disease recurrence. After conization, 1 patient was diagnosed with a low-grade lesion on cytology; follow-up cytology was normal. In the screening, 10/13 smears were diagnosed with lesions on cytology; biopsy revealed dysplasia in 2 cases. The correlation between both cytopathologists was high. Conclusions: After RVT, histological verification of cytology is frequently needed. The reasons might include alterations of anatomy, regeneration, and inflammation process after RVT. Cytopathologists should become familiar with the spectrum of changes in post-RVT cytology and communication between cytopathologists and clinicians should be improved. This might reduce false-positive results. abstract_id: PUBMED:19306425 Postradical vaginal trachelectomy follow-up by isthmic-vaginal smear cytology: a 13-year audit. Radical trachelectomy is a fertility preserving alternative for early cervical cancer patients. This audit assesses the role of isthmic-vaginal smear in postoperative follow-up. A total of 94 patients were identified generating 913 smears. The final surgical margin was at the lower uterine segment in 37 cases (39.4%) and significantly correlated with the presence of lower uterine segment endometrial cells (LUSEC) in smears (P = 0.035). The most common abnormal diagnoses in the presence of LUSEC were ASC-US and AGUS seen in 14.2% and 11.9% of positive smears, respectively. The most common follow-up pattern was initial positive smears, which converted to negative (45.7% of patients), showing that reactive changes are another potential overcall pitfall. The only 2 central recurrences were successfully diagnosed by smears. This study summarizes our experience, emphasizing the role of isthmic-vaginal smears for early detection of central recurrence and highlighting the role of LUSEC and reactive changes as potential overcall pitfalls. abstract_id: PUBMED:35111936 Radical Vaginal Trachelectomy. Recently, radical vaginal hysterectomy (RVH) has developed into laparoscopically assisted radical vaginal hysterectomy (LARVH), which is associated with the laparoscopical procedure, and it is applied as radical vaginal trachelectomy and semi-radical vaginal hysterectomy. LARVH is indicated for patients with stage IB1 and IIA1 cervical carcinoma, especially those with a tumor size of less than 2 cm, because the cardinal ligaments cannot be resected widely. Although RVH that is associated with laparoscopic pelvic lymphadenectomy is the most used surgical procedure, radical trachelectomy may be performed either abdominally or vaginally (laparoscopic or robotic). One report found that the pregnancy rate was higher in patients who underwent minimally invasive or radical vaginal trachelectomy than in those who underwent radical abdominal trachelectomy. abstract_id: PUBMED:23802711 The role of cytological follow-up after radical vaginal trachelectomy for early-stage cervical cancer. Objectives: To identify whether recurrences were picked up by cytology alone after radical vaginal trachelectomy and to determine the false-positive rate of abnormal cytology. Methods: Retrospective collection of patients from the cancer registry since radical vaginal trachelectomy was first performed in Bristol in 1999. All cytology results were collated and re-reviewed by a senior consultant cellular pathologist at the cytopathology centre in Southmead Hospital, Bristol. Cytology results and pathology and survival data are discussed, and any downgrading or upgrading of reports is reviewed. Results: Eighteen women were identified and 80 isthmic cytology samples were reviewed. Only one recurrence has occurred. Lower uterine segment sampling was apparent in 25 samples and other endometrial cells in 21 samples: thus 58% showed endometrial cell sampling. Odd metaplastic cells from the newly formed transformation zone were found in 25 samples (31%). Fifteen (19%) showed significant inflammation, two with actinomyces. After cytology review, seven of 80 reports were changed: two between negative and inadequate, two borderline changes in endocervical cells and one mild dyskaryosis were downgraded to negative, and two cases reported as ?glandular neoplasia were changed to squamous cell carcinoma and negative, respectively. Conclusions: Cytology reporting may be challenging after trachelectomy. Cytology in our series did not add to the diagnosis of recurrence in the one case in which it occurred. We propose a pragmatic follow-up regime, and discuss the importance of the centralization of cytology reporting in these patients. abstract_id: PUBMED:15056170 A review of post-trachelectomy isthmic and vaginal smear cytology. Currently in the UK cervical cancer has a peak incidence in women aged 35-39. Fertility-conserving surgical treatment by radical trachelectomy is established in the management of early disease. This study aimed at establishing the value of cytology in follow-up after trachelectomy. The cytological features of isthmic-vaginal smears post-trachelectomy for cervical cancer are presented together with a discussion of relevant clinical issues. One hundred and ninety seven smears from 32 women were reviewed. Two of the 32 patients developed pelvic recurrences. In both cases recurrence was detected cytologically long before development of a clinical or radiological abnormality. There is, however, a potential for overcall due to the presence of endometrial cells. These were present in large numbers and varying configurations in 58% of smears and led to a false positive report of malignancy in 2% of smears. The rate of referral for a cytologist opinion was significantly higher in smears containing endometrial cells (26%) than those without (13%). While all smears contained squamous cells, 41% contained squamous cells only and it is proposed that such smears should be reported as unsatisfactory in the first 2 years after surgery and negative thereafter, although the absence of glandular cells should be recorded. When an abnormality is reported, smear review and multidisciplinary discussion may avoid unnecessary investigations. abstract_id: PUBMED:27574342 Radical Vaginal Trachelectomy with Laparoscopic Pelvic Lymphadenectomy for Fertility Preservation in Young Women with Early-Stage Cervical Cancer. The primary objective of this study was to describe our experience with the conservative treatment of early-stage cervical cancer (stages IA1, IA2, and IB1) with radical vaginal trachelectomy (RVT) and laparoscopic pelvic lymphadenectomy. This retrospective observational case series included 36 patients with early cervical cancer. Radical trachelectomy and laparoscopic pelvic lymphadenectomy were performed as described by D. Dargent in 32 of these cases. Oncologic, reproductive, and obstetric outcomes were observed subsequently over a median period of 42 (24-96) weeks. A total of 32 RVTs were preformed with a mean operating time of 117 ± 22.8 (77-167) minutes and an average blood loss of 486 mL (150-800 mL). All obtained resection margins were negative for cancer. Lymphovascular space invasion was noted in 11 (30.55 %) of the cases. No recurrences occurred during the study period. Seven (17.8 %) patients were able to become pregnant postoperatively, five of whom delivered healthy infants near term. Radical vaginal trachelectomy with laparoscopic pelvic lymphadenectomy appears to be a safe therapeutic option for fertility preservation in young women with early cervical cancer. abstract_id: PUBMED:26518715 Laparoscopic Vaginal-Assisted Nerve-Sparing Radical Trachelectomy. Study Objective: To demonstrate a laparoscopic vaginal-assisted nerve-sparing radical trachelectomy. Design: An edited educational video, including a step-by-step description of the procedure. Setting: Radical trachelectomy is the main surgical indication for selected cases of initial cervical cancer with a fertility-sparing approach. Although transvaginal access is the most traditional route, this technique has not gained widespread acceptance because of the complexity of the ureteral dissection and the limited amount of resected parametrial tissue. This video describes a laparoscopic technique including an adequate parametrial resection with autonomic preservation (C1 level) and a standard laparoscopic ureteric dissection under direct visualization. All parametrial dissections were performed by laparoscopy using a nerve-sparing technique. The uterine vessels were well dissected and transected at their origin. The colpotomy was performed by laparoscopy with a 1-cm vaginal margin using a monopolar energy hook. To achieve an adequate endocervical margin and to avoid thermal injury to the endocervix, the cervical section was performed transvaginally with a cold knife. The specimen was then retrieved, and a segment of the remaining cervix was removed for frozen section analysis. If the margin was free of tumor, the cervical-vaginal anastomosis was performed transvaginally. An endocervical device was placed to avoid postoperative cervical stenosis. Interventions: Laparoscopic vaginal-assisted nerve-sparing radical trachelectomy. Conclusion: This video demonstrates a reproducible laparoscopic technique for radical trachelectomy with a nerve-sparing technique. The vaginal route was used at the moment of sectioning the cervix/isthmus to permit adequate margin evaluation. abstract_id: PUBMED:24596824 Successful pregnancy by direct intraperitoneal insemination in an infertile patient with failure of recanalization of isthmic stenosis after laparoscopic radical trachelectomy. Radical trachelectomy is a promising fertility-sparing treatment for patients with early stage cervical cancer who want to preserve their fertility. However, isthmic stenosis occurs frequently in patients who received radical trachelectomy and it is one of the causes of infertility following radical trachelectomy. Moreover, despite the treatment for recanalization of isthmic stenosis, recanalization can fail or isthmic stenosis can recur. Herein we report a successful pregnancy and birth by direct intraperitoneal insemination in an infertile woman with failure of recanalization of isthmic stenosis after laparoscopic radical trachelectomy. abstract_id: PUBMED:28717824 Follow-up after radical vaginal trachelectomy (RVT): patients' problems and physicians' difficulties. Purpose: Patients after radical vaginal trachelectomy (RVT) need specific follow-up treatment because their problems differ from those of other gyneco-oncologic patients. Anatomic changes after surgery complicate examinations. Recognition and treatment of these issues require physician's expertise. Patients And Methods: We evaluated the follow-up data of 70 patients who underwent RVT for early cervical cancer between 03/2010 and 12/2013. The follow-up interval in the first 2 years was 3 and 6 months in the following 2 years. We used a tailored protocol to describe the special problems after RVT. Results: Cervical stenosis was one of the central problems independent of time interval to RVT. Physicians' most significant problem was to locate the exact position of the neo-cervix and thus to receive valid pap smears. Conclusions: Follow-up of patients after RVT needs special expertise because the symptoms differ from those after hysterectomy and examinations ensuring oncologic safety require special attention. abstract_id: PUBMED:31927813 Vaginal-assisted laparoscopic nerve sparing radical trachelectomy Fertility-sparing surgery has gained popularity in the last three decades for the management of cervical cancer in women under 40 years of age. Radical trachelectomy is a fertility-sparing surgical technique for women who wish to retain their fertility. Vaginal-assisted laparoscopic radical trachelectomy is feasible in selected patients with early cervical cancer. The aim of this video is to present a nerve-sparing vaginal-assisted laparoscopic radical trachelectomy demonstrating pelvic anatomical structures. Answer: Isthmic-vaginal smear cytology is used as a follow-up method after radical vaginal trachelectomy (RVT) for early-stage cervical cancer. The safety and effectiveness of this method in monitoring patients and diagnosing disease recurrence have been explored in several studies. One study analyzed 563 specimens from 303 patients, including those who underwent RVT, conization, and routine screening. The results indicated that smears without endocervical and metaplasia cells were significantly less frequent in patients post-RVT. There were 26/23 abnormal smears diagnosed by two different cytopathologists after RVT, leading to the discovery of 7 cases of dysplasia and 1 case of disease recurrence. The study concluded that histological verification of cytology is often needed after RVT due to anatomical alterations, regeneration, and inflammation processes. It also suggested that cytopathologists should become familiar with the spectrum of changes in post-RVT cytology to reduce false-positive results (PUBMED:24478265). Another study, which was a 13-year audit, assessed the role of isthmic-vaginal smear in postoperative follow-up and found that the presence of lower uterine segment endometrial cells (LUSEC) in smears correlated with the final surgical margin at the lower uterine segment. The study highlighted the role of isthmic-vaginal smears for early detection of central recurrence and pointed out that reactive changes are potential overcall pitfalls (PUBMED:19306425). A retrospective collection of patients from the cancer registry since RVT was first performed in Bristol in 1999 showed that cytology did not add to the diagnosis of recurrence in the one case where it occurred. The study proposed a pragmatic follow-up regime and emphasized the importance of centralized cytology reporting (PUBMED:23802711). In summary, isthmic-vaginal smear cytology can be a safe and useful follow-up method after RVT for early-stage cervical cancer, but it requires careful interpretation due to the potential for false-positive results and the need for histological verification. Close communication between cytopathologists and clinicians, as well as specialized expertise in interpreting post-RVT cytology, are crucial for ensuring the safety and reliability of this follow-up method.
Instruction: Are health science students' beliefs about infant nutrition evidence-based? Abstracts: abstract_id: PUBMED:23537940 Are health science students' beliefs about infant nutrition evidence-based? Background: Globally, breastfeeding is a fundamental health promotion strategy, improving the health of mothers and infants, well beyond childhood. Healthcare professionals have the responsibility of providing breastfeeding education to families. Worldwide, most healthcare professionals do not receive sufficient evidence-based education to adequately support breastfeeding families. Objectives: (1) What experiences have university health science students had with breastfeeding? (2) What are university health science students' beliefs and attitudes toward breast and formula feeding of infants? (3) What are the perceptions of university health science students about how other important people in their lives regard breastfeeding? (4) What are the relationships between students' personal experiences with breastfeeding and their beliefs and attitudes about infant feeding choices? Design: A descriptive cross-sectional survey conceptualized using the Theory of Planned Behavior. Setting: The health science college within a major metropolitan research university in the United States. Participants: Health science undergraduate and graduate students (N=514), who were over the age of 18 and who were enrolled during the spring of 2011. Methods: Validated survey instruments were used to collect the data on the Theory of Planned Behavior variables. The request for participants was done by emailing all health science students. If students chose to participate, they filled out an anonymous on-line survey. Results: Most participants were not parents; however, the majority of the 95 (21.05%) students who were parents reported their child was breastfed. Significantly more positive attitudes and beliefs were found in graduate students (n=101; 20.10%) when compared to undergraduates (n=403; 89.9%). Conclusions: Health science students' beliefs and attitudes toward infant nutrition often were not evidence-based. However, all students were remarkably consistent in their responses concerning formula feeding. Incorporating adequate education about human lactation is an unmet responsibility of university health science programs. abstract_id: PUBMED:29568484 How frequent are non-evidence-based health care beliefs in chiropractic students and do they vary across the pre-professional educational years. Background: Evidence suggests that a students' beliefs already prior to entering a program may be important as a determinant in sustaining unsuitable health care beliefs. Our objectives were to investigate the proportion of Australian chiropractic students who hold non-evidence-based beliefs in the first year of study and the extent to which they may be involved in non-musculoskeletal health conditions. Finally, to see if this proportion varies over the course of the chiropractic program. Method: In 2016, students from two Australian chiropractic programs answered a questionnaire on how often they would give advice on five common health conditions in their future practices as well as their opinion on whether chiropractic spinal adjustments could prevent or help seven health-related conditions. Results: From a possible 831 students, 444 responded (53%). Students were highly likely to offer advice (often/quite often) on a range of non-musculoskeletal conditions. The proportions were lowest in first year and highest the final year. Also, high numbers of students held non-evidence-based beliefs about 'chiropractic spinal adjustments' which tended to occur in gradually decreasing in numbers in sequential years, except for fifth year when a reversal of the pattern occurred. Conclusions: New strategies are required for chiropractic educators if they are to produce graduates who understand and deliver evidence-based health care and able to be part of the mainstream health care system. abstract_id: PUBMED:33413336 Evidence-based practice beliefs and implementations: a cross-sectional study among undergraduate nursing students. Background: Integrating evidence-based practice (EBP) into the daily practice of healthcare professionals has the potential to improve the practice environment as well as patient outcomes. It is essential for nurses to build their body of knowledge, standardize practice, and improve patient outcomes. This study aims to explore nursing students' beliefs and implementations of EBP, to examine the differences in students' beliefs and implementations by prior training of EBP, and to examine the relationship between the same. Methods: A cross-sectional survey design was used with a convenience sample of 241 nursing students from two public universities. Students were asked to answer the questions in the Evidence-Based Practice Belief and Implementation scales. Results: This study revealed that the students reported a mean total belief score of 54.32 out of 80 (SD = 13.63). However, they reported a much lower implementation score of 25.34 out of 72 (SD = 12.37). Students who received EBP training reported significantly higher total belief and implementation scores than those who did not. Finally, there was no significant relationship between belief and implementation scores (p &gt; .05). Conclusion: To advance nursing science, enhance practice for future nurses, and improve patient outcomes, it is critical to teach nursing students not only the value of evidence-based knowledge, but also how to access this knowledge, appraise it, and apply it correctly as needed. abstract_id: PUBMED:33039062 Capability beliefs and the intention to adopt evidence-based practices in the future among nursing students: An international study. Background: Capability beliefs and the intention to adopt EBP in future nursing practice have been identified as strong antecedents of subsequent utilization of EBP. Despite this, a multicountry comparison may yield interesting findings that can be useful in designing empirically based and culturally tailored strategies to facilitate and enhance students' capability beliefs as well as their intentions to integrate evidence into nursing practice. Aims And Objectives: This report sought to compare capability beliefs and intentions to adopt EBP in the future among student nurses from four countries-specifically, Oman, India, Nigeria, and Saudi Arabia-and to identify specific variables that predict the intention to adopt EBP. Methods: This study employed a descriptive, cross-sectional, comparative research design to gather data from a convenience sample of 1387 nursing students from four countries using self-report questionnaires that captured their EBP capability beliefs and their intention to adopt EBP in clinical practice upon graduation. This study adhered to STROBE guideline for cross-sectional studies. Results: Overall, nursing students reported a high intent to adopt EBP in their future nursing practice; however, the exact rate varied according to the country of origin. Students' gender, student track, education, country of origin, access to the internet, and capability beliefs were seen as important in explaining the likelihood of future implementation of evidence in clinical nursing practice. Capability beliefs among students were moderate, with large differences observed between countries. Conclusions: Although the surveyed nursing students reported moderate capability beliefs, their intention to adopt and integrate EBP in their future nursing practice was relatively high. Relevance To Clinical Practice: The findings of this study highlight the critical role of nurse educators and nurse administrators in fostering EBP capability beliefs and future intentions to adopt among students through curricular review, implementation of evidence-based strategies, and the promotion of EBP culture within the clinical learning sites. abstract_id: PUBMED:38333784 What makes Indonesian government officials believe in and implement evidence-based policy: The mediating role of religion-science compatibility beliefs. This research is aimed at examining the relationship between religion-science compatibility belief (RSCB) and evidence-based policy (EBP) belief and implementation in Indonesia, a country with the biggest Muslim population in the world. A dataset containing responses to a questionnaire completed by 499 government officials in Indonesia was collected for the partial least square structural equation modeling (PLS-SEM). This study finds a relationship between RSCB and EBP belief and implementation. In addition, EBP belief also affected the implementation of EBP. The effect that RSCB has on EBP implementation was partially mediated by EBP belief. Studying how these beliefs relate to the attitude of policy makers toward science in a sociocultural context is important, considering that the focus of previous research is on different contexts pertaining to levels of education, industrialization, wealth, and democratization. This is important to encourage a more comprehensive understanding of the public about science globally. This study responds to the need for the literature to examine factors influencing EBP beliefs and implementation at the individual level in non-health contexts and developing countries. abstract_id: PUBMED:27610064 Evidence-Based Practice Beliefs and Implementation among the Nursing Bridge Program Students of a Saudi University. Objectives: This study aimed to measure the evidence-based practice (EBP) beliefs and implementation among Saudi nursing students enrolled in the bridge program. It also sought to identify the factors that influence EBP beliefs and implementation. Methods: A convenience sample of 188 nursing bridge program students at a university in Saudi Arabia was surveyed in this descriptive, cross-sectional study. The EBP Beliefs Scale (EBP-B) and the EBP Implementation Scale (EBPI) were employed to collect data regarding the respondents' beliefs and implementation of EBP, respectively. Descriptive and inferential statistics were used to analyze the data. Results: All of the items in the EBP beliefs scale received good endorsement rate (60.6 - 89.4%). A very low overall mean score of 22.57 was reported by the respondents in the implementation of EBP. The EBP beliefs of the respondents was primarily influenced by awareness on EBP (β = 0.26, p &lt;0.001), followed by gender (β = -0.20, p&lt;0.001), age (β = 0.17, p&lt;0.05), and attendance at EBP trainings/seminars (β = 0.16, p&lt;0.05). Attendance to EBP training and seminars (β = 0.19, p&lt;0.01), gender (β = 0.18, p&lt;0.05) and awareness on EBP (β = 0.15, p&lt;0.05) were identified as factors that influence EBP implementation. Conclusions: Despite the positive responses on the belief scale, the implementation of EBP by the respondents is very poor. Education and training courses should be initiated to increase the implementation of EBP among the nursing bridge program students. abstract_id: PUBMED:26538523 Comparison of health risk behavior, awareness, and health benefit beliefs of health science and non-health science students: An international study. This study determines the differences in health risk behavior, knowledge, and health benefit beliefs between health science and non-health science university students in 17 low and middle income countries. Anonymous questionnaire data were collected in a cross-sectional survey of 13,042 undergraduate university students (4,981 health science and 8,061 non-health science students) from 17 universities in 17 countries across Asia, Africa, and the Americas. Results indicate that overall, health science students had the same mean number of health risk behaviors as non-health science university students. Regarding addictive risk behavior, fewer health science students used tobacco, were binge drinkers, or gambled once a week or more. Health science students also had a greater awareness of health behavior risks (5.5) than non-health science students (4.6). Linear regression analysis found a strong association with poor or weak health benefit beliefs and the health risk behavior index. There was no association between risk awareness and health risk behavior among health science students and an inverse association among non-health science students. abstract_id: PUBMED:26577748 Health beliefs and their sources in Korean and Japanese nurses: A Q-methodology pilot study. Background: Many health beliefs do not have supporting scientific evidence, and are influenced by culture, gender, religion, social circumstance and popular media. Nurses may also hold non-evidenced-based beliefs that affect their own health behaviours and their practices. Objectives: Using Q-methodology, pilot Q-cards representing a concourse of health beliefs for Japanese and South Korean nurses and explain the content and sources of health beliefs. Design: Qualitative. Settings: Two university campuses, one each in Japan and Korea. Participants: A convenience sample of 30 was obtained, 14 clinical nurses and 16 academic nurses. Methods: Literature reviews and expert informants were used to develop two sets of 65 Q-cards which listed culturally appropriate health beliefs in both Japan and Korea. These beliefs were examined in four structured groups and five individual interviews in Japan, and five groups and two individual interviews in Korea. Results: Our unique study revealed six categories regarding sources of health beliefs that provide rich insights about how participants accessed, processed and transmitted health information. They were more certain about knowledge from their specialty area such as that from medical or nursing resources, but derived and distributed many general health beliefs from personal experience, family and mass media. They did not always pass on accurate information to students or those in their care, and often beliefs were not based on scientific evidence. Conclusion: Findings highlight the dangers of clinical and academic nurses relying on health belief advice of others and passing this on to patients, students or others, without mindfully examining the basis of their beliefs through scientific evidence. abstract_id: PUBMED:34516482 Connecting Evidence-Based Practice and Clinical Practice Guidelines With Bachelor of Science in Nursing Students. Abstract: Research has demonstrated the ability of evidence-based practice (EBP) to enhance quality and reliability of health care, improve health outcomes, and reduce cost and health disparities. Nursing curricula often lack best practices for teaching EBP, as well as actual EBP course content, objectives, and activities, to advance student understanding of EBP. The unfortunate results are nurse graduates who do not value or perceive that they can use EBP. This study implemented an EBP assignment assessing clinical practice guidelines as a means of improving EBP beliefs in senior-level nursing students. abstract_id: PUBMED:28388829 Evidence-Based Practice Beliefs and Implementation in Doctor of Nursing Practice Students. Background: Doctors of Nursing Practice focus on leadership in evidence-based practice (EBP). EBP is influenced by one's beliefs in and implementation of EBP. Little is known to date about the EBP beliefs and implementation of Doctor of Nursing Practice students and outcomes of Doctor of Nursing Practice education. Aims: Guided by the Advancing Research and Clinical practice through close Collaboration (ARCC) Model, the Evidence-Based Practice Beliefs (EBPB) and Implementation (EBPI) tools were used to assess the impact of EBP as a program pillar, curricular thread, and content area on EBPB and EBPI of Doctor of Nursing Practice-Family Nurse Practitioner students. Methods: Five cohorts who completed the same curriculum were studied. Fifty-four of the 89 students across the five cohorts began and completed the study. Results: Paired t-test for group effects showed statistical significance from pre- to post-measure in students overall EBPB, t = 4.4 (52), p &lt; .001, and EBPI, t = 8.4 (52), p &lt; .001. A large effect size of .75 standard deviation (SD) gain above the mean for EBPB, and a very large effect size of 1 SD gain above the mean for EBPI were observed. Repeated measures ANOVA showed that all cohorts made gains across the curriculum. Effect sizes for EBPB ranged from .25 to 1 SD above the mean, and .75 to 1.5 for EBPI. Linking Evidence To Action: DNP students who are educated to be EBP leaders must have a curriculum that supports them in the knowledge and skill-set needed to translate evidence into practice. The ARCC Model can guide faculty in EBP curriculum development. EBPB and EBPI are valid and reliable measures to assess for gains across a curriculum. Through educational outcomes, educators can assess desired student outcomes for EBP across a curriculum and can build an evidence base for ongoing curriculum development. Answer: Health science students' beliefs about infant nutrition are often not evidence-based. A study conducted within a major metropolitan research university in the United States found that while health science students were consistent in their responses concerning formula feeding, their attitudes and beliefs toward infant nutrition were not always aligned with evidence-based practices. The majority of the health science students who were parents reported that their child was breastfed, and graduate students exhibited significantly more positive attitudes and beliefs compared to undergraduates. However, the study concluded that incorporating adequate education about human lactation is an unmet responsibility of university health science programs, indicating a gap between current student beliefs and evidence-based recommendations (PUBMED:23537940).
Instruction: Do I misconstrue? Abstracts: abstract_id: PUBMED:23616314 A case of misconstrue proptosis. This case illustrates an overlooked metastatic lung carcinoma to the orbit. The rapid onset of symptoms and oedema mimicked a cellulitis. The patient was initially treated with a broad-spectrum topical and intravenous antibiotic therapy. The white blood cell count (WBC) and C reactive protein (CRP) were normal. Compressive optic neuropathy and serous retinal detachment ensued. The initial chest x-ray revealed lung shadows; however, this was not reviewed. Eventually, CT of the chest was organised and showed a lung mass. The management of such a case remains a challenge for the uninitiated, and high index of suspicion is required to ensure early appropriate management. Despite complete resolution of the swelling with sequential chemotherapy (cisplatin and etoposide) and radiotherapy, the visual prognosis remained grim. abstract_id: PUBMED:34669630 Constructed Measures and Causal Inference: Towards a New Model of Measurement for Psychosocial Constructs. Psychosocial constructs can only be assessed indirectly, and measures are typically formed by a combination of indicators that are thought to relate to the construct. Reflective and formative measurement models offer different conceptualizations of the relation between the indicators and what is sometimes conceived of as a univariate latent variable supposed to correspond in some way to the construct. I argue that the empirical implications of reflective and formative models will often be violated by data since the causally relevant constituents will generally be multivariate, not univariate. These empirical implications can be formally tested but factor analysis is not adequate to do so. I argue that formative models misconstrue the relation between the constructed measures and the underlying reality by which causal processes operate, but that reflective models misconstrue the nature of the underlying reality itself by typically presuming that the constituents of it that are causally efficacious are unidimensional. I discuss the ensuing problems arising from these misconstruals. I propose a causal interpretation of associations between constructed measures and various outcomes that is applicable to both reflective and formative models and is applicable even if the usual assumptions of these models are violated. I put forward an outline for a new model of the process of measure construction and discuss the practical implications of these observations and proposals for the provision of definitions, the selection of items, item-by-item analyses, the construction of measures, and the interpretation of the associations of these measures with subsequent outcomes. abstract_id: PUBMED:12728830 Mis-takes: mistakes in the nurse-person process. This column supports the global mantra to develop mechanisms that address the worldwide crisis in patient safety and demonstrates, using nursing as an example, that the mechanisms currently being developed are too limited. Patient safety initiatives must be expanded to acknowledge and make ways to minimize mis-takes, that is, misconceptions of meaning that emerge when healthcare professionals dismiss, misconstrue, guess, or undervalue patients' perspectives. Commitment to the human becoming school of thought as a guide for practice is suggested as one way to address mis-takes in the nurse-person process. abstract_id: PUBMED:25104796 The systems approach to medicine: controversy and misconceptions. The 'systems approach' to patient safety in healthcare has recently led to questions about its ethics and practical utility. In this viewpoint, we clarify the systems approach by examining two popular misunderstandings of it: (1) the systematisation and standardisation of practice, which reduces actor autonomy; (2) an approach that seeks explanations for success and failure outside of individual people. We argue that both giving people a procedure to follow and blaming the system when things go wrong misconstrue the systems approach. abstract_id: PUBMED:25948759 Misconstrual of EAPC's position paper on euthanasia. This is a response to Barutta and Vollmann's article 'Physician-assisted death with limited access to palliative care.' I show how they misconstrue a key empirical statement made by the European Association for Palliative Care regarding legalisation of euthanasia and physician-assisted suicide. Additionally, I include some further remarks on the relationship between euthanasia and palliative care. I read with interest the article, which delineate well several positions and gives a nice overview of arguments presented on either side. I also found the line of argument unprejudiced and clear, and am sure people working within palliative care would benefit from reading it. abstract_id: PUBMED:15535350 Giant aneurysm or pituitary macroadenoma: a diagnostical misconstrue. We describe a case of a giant thrombosed intrasellar internal carotid artery aneurysm initially diagnosed as pituitary macroadenoma. The differentiating neuroimaging features were discussed. abstract_id: PUBMED:10123607 Articles misconstrue Joint Commission's position on quality improvement. N/A abstract_id: PUBMED:17592170 Legal fallacies of antipsychotic drugs. Advances in the biological sciences have dramatically improved the understanding of schizophrenia and related psychotic illnesses. One of the most compelling findings is the substantial degree to which cognition is impaired in these illnesses and the remedial effects that antipsychotic drugs have in treating these cognitive impairments. Despite these promising discoveries, legal cases and scholarship remain replete with pejorative associations with antipsychotic drug action. References to antipsychotic medications as mind-altering drugs and their effects as "synthetic sanity" misconstrue the beneficial effects these medicines have on cognition. We review the prevailing legal attitude of antipsychotic medications and contrast these views with prevailing scientific knowledge. We conclude that legal opinion is misinformed about the effects of antipsychotic medications on cognition. abstract_id: PUBMED:18818445 Are comparisons the answer to understanding behavioral aspects of aging in racial and ethnic groups? The knowledge gained from studying diverse populations should help to address inequities and prepare us to deal with the needs of the increasing number of older minorities in this country. At the same time, research that is not properly conducted threatens to lead us astray and misconstrue relationships and outcomes related to behavioral aspects of aging. In this article, we propose that simple comparisons between groups are neither necessary nor sufficient to advance our understanding of ethnic minorities. We discuss common pitfalls conducted in group-differences research, including a specific treatment on the issue of statistical power issues. Our goal is to encourage the use of multiple methodological designs in the study of issues related to racial and ethnic minorities by demonstrating some of the advantages of lesser employed approaches. abstract_id: PUBMED:38087888 The Orphan Drug Act at 40: Legislative Triumph and the Challenges of Success. Policy Points The Orphan Drug Act (ODA) was the result of patient advocacy and by many measures has been strikingly successful. However, approximately 95% of the more than 7,000 known rare diseases still have no US Food and Drug Administration-approved treatment. The ODA's success led to sustained criticism of high drug prices, often for products that have orphan drug indications. Critics misconstrue the ODA's intent and propose reducing its incentives instead of pursuing policies focused on addressing broader prescription drug price challenges that exist in both the orphan and nonorphan drug market. Patients and their families will continue to defend the purpose and integrity of the ODA and to drive investments into rare disease research and clinical development. Answer: Based on the abstracts provided, the term "misconstrue" is used in various contexts related to medical and psychosocial fields. Misconstruing typically refers to misunderstanding or misinterpreting information, situations, or concepts. For instance, in the case of the patient with metastatic lung carcinoma to the orbit, the rapid onset of symptoms was initially misconstrued as cellulitis, leading to inappropriate treatment with antibiotics (PUBMED:23616314). In the field of psychosocial constructs, the author argues that both reflective and formative models misconstrue the nature of underlying reality or the relationship between constructed measures and causal processes (PUBMED:34669630). In nursing, misconceptions of meaning, or "mis-takes," occur when healthcare professionals misconstrue patients' perspectives (PUBMED:12728830). The systems approach to medicine is sometimes misconstrued as reducing actor autonomy or seeking explanations for failure outside of individual people (PUBMED:25104796). Misconstruals also occur in the interpretation of position papers on sensitive topics like euthanasia (PUBMED:25948759), in the diagnosis of medical conditions such as mistaking a giant aneurysm for a pituitary macroadenoma (PUBMED:15535350), and in the legal field where the beneficial effects of antipsychotic drugs on cognition are often misconstrued (PUBMED:17592170). Research on behavioral aspects of aging in racial and ethnic groups can lead to misconstrued relationships and outcomes if not properly conducted (PUBMED:18818445). Lastly, the intent of the Orphan Drug Act and its incentives are sometimes misconstrued by critics focusing on high drug prices (PUBMED:38087888). Whether you, as the user, misconstrue or not cannot be determined from these abstracts, as they do not provide information about your understanding or interpretation of a specific subject. Misconstrual is context-dependent and requires an assessment of the individual's comprehension in relation to the subject matter at hand.
Instruction: Are Retail Outlets Complying with National Legislation to Protect Children from Exposure to Tobacco Displays at Point of Sale? Abstracts: abstract_id: PUBMED:27019418 Are Retail Outlets Complying with National Legislation to Protect Children from Exposure to Tobacco Displays at Point of Sale? Results from the First Compliance Study in the UK. Background: From April 6th 2015, all small shops in the UK were required to cover up tobacco products at point of sale (POS) to protect children from exposure. As part of a larger 5-year study to measure the impact of the legislation in Scotland, an audit was conducted to assess level and nature of compliance with the ban immediately following its introduction. Materials And Methods: A discreet observational audit was conducted 7-14 days post implementation which took measures of physical changes made to cover products, server/assistant practices, tobacco signage and advertising, and communication of price information. The audit was conducted in all small retail outlets (n = 83) selling tobacco in four communities in Scotland selected to represent different levels of urbanisation and social deprivation. Data were analysed descriptively. Results: Compliance with the legislation was high, with 98% of shops removing tobacco from permanent display and non-compliance was restricted almost entirely to minor contraventions. The refurbishment of shops with new or adapted tobacco storage units resulted in the removal of nearly all commercial brand messages and images from POS, dropping from 51% to 4%. The majority of shops stored their tobacco in public-facing storage units (81%). Most shops also displayed at least one generic tobacco message (88%). Conclusions: Compliance with Scottish prohibitions on display of tobacco products in small retail outlets was high immediately after the legislation implementation date. However, although tobacco branding is no longer visible in retail outlets, tobacco storage units with generic tobacco messages are still prominent. This points towards a need to monitor how the space vacated by tobacco products is utilised and to better understand how the continuing presence of tobacco storage units influences people's awareness and understanding of tobacco and smoking. Countries with existing POS bans and who are considering such bans should pay particular attention to regulations regarding the use of generic signage and where within the retail setting tobacco stocks can be stored. abstract_id: PUBMED:27178132 Young people's exposure to point-of-sale tobacco products and promotions. Objectives: Point of sale (POS) displays are one of the most important forms of tobacco marketing still permitted in many countries. Reliable methods for measuring exposure to such displays are needed in order to assess their potential impact, particularly on smoking attitudes and uptake among young people. In this study we use a novel method for evaluating POS exposure based on young people's use of retail outlets and recall of tobacco displays and observational data on the characteristics of displays. Study Design: Observational audit of retail outlets (n = 96) and school-based pupil survey (n = 1482) in four Scottish communities reflecting different levels of social deprivation and urbanisation, conducted in 2013 before legislation to remove POS displays was implemented in supermarkets. Methods: Measures were taken of: visibility and placement of tobacco displays; internal and external advertising; display unit size, branding and design; visibility of pack warnings; proximity of tobacco products to products of potential interest to children and young people; pupils' self-reported frequency of visiting retail outlets; and pupils' recall of tobacco displays. Variation in POS exposure across social and demographic groups was assessed. Results: Displays were highly visible within outlets and, in over half the stores, from the public footway outside. Tobacco products were displayed in close proximity to products of interest to children (e.g. confectionery, in 70% of stores). Eighty percent of pupils recalled seeing tobacco displays, with those from deprived areas more likely to recall displays in small shops. When confectioners, tobacconists and newsagents (CTNs) and grocery/convenience stores (two of the outlet types most often visited by young people) were examined separately, average tobacco display unit sizes were significantly larger in those outlets in more deprived areas. Conclusions: POS displays remain a key vector in most countries for advertising tobacco products, and it is important to develop robust measures of exposure. The data reported in this paper provide a baseline measure for evaluating the efficacy of legislation prohibiting such displays. abstract_id: PUBMED:32548359 Removal of point-of-sale tobacco displays in Bogor city, Indonesia: A spatial analysis. Introduction: Indonesia contributes over 61 million smokers to global tobacco users, and the smoking prevalence is increasing among young people. In October 2017, Bogor city started the ban on tobacco displays at point-of-sale (POS), starting with modern cigarette retailers. This study aims to assess compliance with the ban and the visibility of POS with tobacco displays around educational facilities. Methods: We included 266 modern retailers surveyed throughout the city during November and December 2017. Compliance indicators included no tobacco product displays, advertisements, promotions, and sponsorship. We conducted spatial and quantitative analyses in ArcMap 10.6 and Stata 15.1, respectively. Results: Immediately following the ban, the compliance with all four criteria was high (83%). However, POS in areas with higher population density and poverty rates had significantly lower compliance. We also found that the ban reduced the visibility of tobacco displays around schools and universities. Conclusions: Compliance with the ban was high, which helped to reduce the visibility of tobacco displays around educational facilities. abstract_id: PUBMED:31516431 Pro-smoking responses and attitudes due to point-of-sale tobacco displays in never smokers: A cross-sectional study in Hong Kong. Introduction: Never smokers' responses to tobacco control policy are often overshadowed by the opposition from smokers and tobacco industry during policy advocacy and legislation. Very few studies have examined never smokers' exposure to point-of-sale (POS) tobacco displays and their effects. Therefore, we investigated the exposure, pro-smoking responses due to and attitudes towards such displays in never smokers in Hong Kong. Methods: We conducted two-stage, randomized cross-sectional telephone-based surveys in 2015 and 2016 of 1833 never-smoking adults. They were asked how often they noticed POS displays in the past 30 days (often, sometimes, never), whether they found POS displays attractive, felt encouraged to smoke, perceived POS displays as advertisements, and if they supported banning them. The distributions of the outcomes were analyzed by descriptive statistics with weighting to the general population. Risk ratios (RR) from Poisson regression models adjusted for sociodemographic characteristics were used to analyze the associations. Results: Our results showed that, in never smokers, the younger were more likely to often notice POS displays (RR=0.80, 95% CI: 0.72-0.89, p&lt;0.01). Finding POS displays attractive was associated with primary (RR=2.52, 95% CI: 1.51-4.22, p&lt;0.01) and secondary education (RR=1.68, 95% CI: 1.16- 2.44, p=0.01) versus tertiary education. Often noticing displays was associated with perceived attractiveness (RR=1.90, 95% CI: 1.32-2.75, p&lt;0.01). The positive association between often noticing displays and being encouraged to smoke was marginally significant (RR=4.05, 95% CI: 0.98-16.85, p=0.054). Respondents who often noticed POS displays (RR=0.87, 95% CI: 0.77-0.98, p=0.02) and did not perceive them as advertisements (RR=0.70, 95% CI: 0.61-0.98, p&lt;0.01) showed less support on banning them than those who did not notice them. Conclusions: Frequent exposure to POS displays was associated with greater perceived attractiveness and lower support for banning them. A total ban on POS displays is needed to eliminate the advertising and normalization effect of POS displays. abstract_id: PUBMED:28345846 Compliance with Tobacco Promotion and Sale Laws in School Neighbourhoods in India Objective: Regulation of sale of tobacco has given sufficient attention in India and little information exists about the impact of bans near schools. Our study aim was to check the levels of tobacco promotion, advertising and sales in school neighborhoods’ of Central Delhi. Methods: Using multistage random sampling 15 schools were selected in Central Delhi. Areas 100 meters around each were mapped using a map tool and screened using a self designed questionnaire consisting of 26 questions, both closed and open ended, to determine the details of outlets, sales of tobacco and tobacco products, advertising, promotions, school roles, and children seeking tobacco. The data were subjected to statistical analysis. Results: The response rate was 65%. Outlet licenses were present in only 6 (3.47%). The point sale of tobacco was most frequently in tea stalls and a total of 173 (41.2%) outlets had some form of tobacco sale. The brands of smokeless tobacco sold more were shikar (50%) and classic citrus (30%). Advertisement or promotion of sales was mainly in the form of signs and displays (53%). Major schools did not have any no tobacco boards displayed. Conclusion: Sale of tobacco continues in central Delhi with a lack of compliance with the rules of COPTA. The implications of this non compliance in the Capital region is of major significance for the rest of the country. abstract_id: PUBMED:25152505 The relationship between proximity of tobacco retail outlets to schools and tobacco use among school personnel in sub-Saharan Africa. Background: We investigated the impact of proximity of tobacco retail outlets to schools on tobacco-related behavior and attitudes of school personnel in sub-Saharan Africa. Methods: A pooled analysis of the 2006-2011 Global School Personnel Surveys from 19 sub-Saharan African countries was performed (n=12,201). Multivariate logistic regression analyses were applied to measure the relationship between proximity of tobacco outlets to school premises and school personnel's tobacco-related behavior and attitudes (p&lt;0.05). Results: Among all school personnel, there was a significant association between the presence of tobacco outlets on school premises and current cigarette smoking (adjusted odds ratio [aOR]=3.89; 95% confidence interval [CI]: 2.17-7.20), current smokeless tobacco use (aOR=2.63; 95%CI: 1.04-6.65), and permissiveness towards tobacco industry sponsorship activities in school (aOR=2.27; 95%CI: 1.37-3.78). Complete smoke-free school policies were associated with lower odds of current cigarette smoking (aOR=0.42; 95%CI: 0.23-0.74), but were not significantly associated with current smokeless tobacco use. Among lifetime any tobacco users, there was a significant association between the presence of tobacco outlets on school premises and use of combustible (aOR=2.03; 95%CI: 1.09-3.75) and smokeless (aOR=3.85; 95%CI: 1.51-9.81) tobacco products on campus within the past year. Conclusions: Policies limiting the distance of tobacco outlets from schools, coupled with complete tobacco-free school policies may reduce tobacco use among school personnel. abstract_id: PUBMED:22180327 Audit of tobacco retail outlets in Hangzhou, China. Objective: To determine the prevalence of tobacco advertisements and warning messages at points of sale as well as to examine the density of tobacco retail outlets in neighbourhoods and around schools in Hangzhou, China. Method: Tobacco retail outlets (n=1639) in all food and tobacco specialty stores were observed objectively by trained students. Tobacco advertisements and warning messages were assessed with an audit, and stores' addresses were recorded with Global Positioning System coordinates. The distances (1) between all pairs of tobacco retail outlets (2) between each tobacco retail outlet and 15 middle schools were calculated to assess the density of tobacco retail outlets in neighbourhoods and around schools. Results: Among the 1639 tobacco retail outlets, &lt;1% had 'no sales to minors' signs, 1.5% had tobacco warning messages, 28% had signs indicating tobacco sale and 12.4% had tobacco advertisements. For 48.7% of tobacco retail outlets, the nearest distances to other tobacco retail outlets were &lt;50 m. For 80% of schools, there was at least one tobacco retail outlets within a 100 m radius. Conclusion: Tobacco advertisement in retail outlets is prevalent and the density of tobacco retail outlets is high in Hangzhou, China. Signs indicating 'no sales to minors' and tobacco warning signs are almost non-existent. These findings point to an urgent need for the enforcement of regulations on display of 'no sales to minors' and a new density standard for tobacco retail outlets based on protecting the public's health. abstract_id: PUBMED:26191538 Public Health Effects of Restricting Retail Tobacco Product Displays and Ads. Objectives: To estimate the public health impact from restricting US retail point-of-sale (POS) tobacco product displays and advertising. Methods: Based on existing research, this paper estimates the effects on initiation and cessation rates from restricting POS tobacco product displays and ads in the US and uses the SimSmoke simulation model to project related smoking declines and health benefits. Results: New comprehensive POS restrictions are projected to reduce smoking prevalence by approximately 16% [range=3%-31%] relative to the status quo by 2065, preventing about 630,000 smoking-attributable deaths [range=108,000-1,225,000], 215,000 low birth weight births [range=33,000-421,000], 140,000 preterm births [range=22,000-271,000], and 1900 infant deaths from SIDSs [range=300-3800]. Conclusions: Federal, state, or local action to restrict POS tobacco product displays and ads would contribute to a substantial reduction in smoking-attributed death and disease. abstract_id: PUBMED:32411873 Adolescents notice fewer tobacco displays after implementation of the point-of-sale tobacco display ban in Finland. Introduction: Finland implemented a point-of-sale (POS) display ban for tobacco products and their trademarks in 2012, and for electronic cigarettes in 2016. In this study we examine whether noticing tobacco products changed among adolescents after the implementation of the display ban and describe the noticing of e-cigarette displays post-implementation. Methods: Repeated cross-sectional national survey data of 12-to 16-year-olds from years 2011, 2013, 2015 and 2017 were used. Measures included adolescent self-report on noticing displays of tobacco and e-cigarettes in shops, kiosks and service stations. Adjusting for age, gender, smoking and academic achievement, logistic regression analyses were conducted by retailer type to test the association between survey year and noticing POS tobacco displays. Results: The proportions of adolescents noticing tobacco displayed in shops, kiosks and service stations decreased from 2011 to 2013 (from over 80% to about 30%). Between 2013 and 2017, exposure to tobacco displays decreased for kiosks but slightly increased for shops and service stations. However, in all retailer types, rates in 2017 were still well below the pre-ban levels of 2011. In 2017, noticing e-cigarette displays by 12- to16-year-old adolescents was reported by 5.6% in shops, 20.5% in kiosks and 5.1% in service stations. Conclusions: The pronounced decline in adolescents' exposure to tobacco products at POS observed in the current study implies adherence to the ban, as intended by the law. We encourage more countries to enact POS tobacco advertising and display bans, and to enforce the POS regulations to all tobacco products and novel nicotine delivery systems. abstract_id: PUBMED:36866095 Policies regulating retail environment to reduce tobacco availability: A scoping review. Background: In 2005, the World Health Organization Framework Convention on Tobacco Control (WHO FCTC) entered into force. This treaty was developed in response to the global tobacco epidemic, and it includes measures to reduce both demand for and supply of tobacco. The measures related to demand reduction include raising tax, providing cessation services, promoting smoke free public places, banning advertising, and raising awareness. However, there are a limited number of measures for supply reduction, and these mainly include fighting illicit trade, banning sales to minors and providing alternatives to tobacco workers and growers. Unlike regulation of many other goods and services that have been subjected to retail restrictions, there is a lack of resources about restricting tobacco availability through regulation of tobacco retail environment. Considering the potential of retail environment regulations in reducing tobacco supply and consequently reducing tobacco use, this scoping review aims to identify relevant measures. Methods: This review examines interventions, policies, and legislations to regulate tobacco retail environment to reduce tobacco availability. This was done by searching the WHO FCTC and its Conference of Parties decisions, a gray literature search including tobacco control databases, a scoping communication with the Focal Points of the 182 WHO FCTC Parties, and a databases search in PubMed, EMBASE, Cochrane Library, Global Health, and Web of Science. Results: Themes of policies were identified to reduce tobacco availability by regulating retail environment: four WHO FCTC and twelve non-WHO FCTC policies. The WHO FCTC policies included requiring a license to sell tobacco, banning tobacco sale via vending machines, promoting economically alternative activities to individual sellers, and banning ways of sale that constitute a way of advertising, promotion, and sponsorships. The Non-WHO FCTC policies included banning tobacco home delivery, tray sale, tobacco retail outlets in or within a minimum distance from specific facilities, sale in specific retail outlets, and sale of tobacco or one or more of its products, in addition to restricting tobacco retail outlets per density of population and per geographic area, capping the tobacco amount allowed per purchase, limiting the number of hours or days in which tobacco can be sold, requiring a minimum distance between tobacco retailers, reducing tobacco products availability and proximity within a retail outlet, and restricting sale to government controlled outlets. Discussion And Conclusion: Studies show the effects of regulation of the retail environment in influencing overall tobacco purchases, and there is evidence that having fewer retails reduces the level of impulse purchasing of cigarettes and tobacco goods. The measures covered by WHO FCTC are much more implemented than ones not covered by it. Although not all widely implemented, many themes of limiting tobacco availability by regulating tobacco retail environment are available. Further studies to explore such measures and the adoption of the effective ones under the WHO FCTC decisions, could possibly increase their implementation globally to reduce tobacco availability. Answer: Yes, retail outlets are largely complying with national legislation to protect children from exposure to tobacco displays at point of sale. Compliance with the legislation was high in Scotland, with 98% of shops removing tobacco from permanent display immediately after the legislation implementation date. However, there were still some minor contraventions, and while tobacco branding was no longer visible, tobacco storage units with generic tobacco messages remained prominent (PUBMED:27019418). In Bogor city, Indonesia, compliance with the ban on tobacco displays at point of sale was also high at 83%, and the ban helped reduce the visibility of tobacco displays around educational facilities (PUBMED:32548359). In Finland, after the implementation of the point-of-sale tobacco display ban, there was a pronounced decline in adolescents' exposure to tobacco products at point of sale, with rates in 2017 still well below the pre-ban levels of 2011 (PUBMED:32411873). However, in India, there was a lack of compliance with the rules of COPTA, indicating that the sale of tobacco continues in central Delhi with a lack of compliance with the regulations (PUBMED:28345846). In Hangzhou, China, tobacco advertisement in retail outlets was prevalent, and the density of tobacco retail outlets was high, with signs indicating 'no sales to minors' and tobacco warning signs being almost non-existent, pointing to an urgent need for the enforcement of regulations (PUBMED:22180327). Overall, while there is a high level of compliance in some areas, there are still challenges in others, and continuous monitoring and enforcement are necessary to ensure that retail outlets comply with legislation to protect children from exposure to tobacco displays at point of sale.
Instruction: Is dysuria useful for the diagnosis of infection of the urinary tract? Abstracts: abstract_id: PUBMED:11149188 Is dysuria useful for the diagnosis of infection of the urinary tract? Objective: To assess the validity of criterion and predictive value of dysuria for the diagnosis of UTI. Design: Cross-sectional study to assess a symptom. Setting: Six general medicine clinics (four urban clinics and two rural clinics) in the 11th Health Area in Madrid. Patients: The sample consists of 232 patients aged above fourteen who consulted during six consecutive months (116 of them reported having dysuria and 116 were asymptomatic). Measurements And Main Results: The diagnosis of urinary tract infections (UTI) was achieved through positive urine cultures or bacteriuria and leukocyturia in the centrifuged urine sediment. The sensitivity (S) of dysuria analysis for the diagnosis of UTI, its specificity (E), its predictive value (VP), and its probability coefficient (CP) were considered. Average age of the sample was 54 years old (range 19-82); 73% of the patients were female. No statistically significative difference of sex and age was found between cases and non-cases (p &gt; 0.1). Dysuria showed a 96% of sensitivity (95% CI, 86-98%), a 69% of E (95% CI, 61-76%) and 3.1 of CPP (95% CI, 2.7-3.5) for UTI diagnosis. In the women subgroup there was 95% of sensitivity (95% CI, 84-99%) and 67% of E (95% CI, 58-75%). A positive predictive value of 30% (95% CI, 22-40) and a negative predictive value of 99% (95% CI, 95-100) were estimated for this symptom. Conclusions: The diagnosis of urinary tract infections is unlikely in the absence of dysuria, but to treat all dysuria patients as UTI entails a high rate of overtreatment. abstract_id: PUBMED:24063877 Female patient self-diagnosis compared with emergency physician diagnosis of urinary tract infection. Background: Urinary tract infection (UTI) is a common emergency department (ED) complaint and self-diagnosis may be possible. Objective: The objective was to compare female patient self-diagnosis compared with emergency physician (EP) diagnosis of UTI. Methods: This was a prospective cohort study in a single urban ED. Women aged 18-64 years with a chief complaint of UTI, urinary frequency, or dysuria who presented to an urban academic ED were enrolled in a convenience sample fashion. Patients completed a written four-question survey by an ED greeter before triage. Charts of respondents were reviewed for demographic, laboratory, and EP diagnosis. Results: Fifty women were enrolled; 100% of patients who were approached participated in the study. Mean age was 33.7 years (standard deviation 13.8). Forty-three patients (86%) had a history of UTI. Forty-one patients (82%) thought they had a UTI on the index visit. Thirty patients (60%) preferred to buy over-the-counter antibiotics for their symptoms instead of seeing a doctor. Fifteen patients (30%) identified a specific antibiotic they would take. Of the 41 patients who thought they had a UTI, 25 (61%) were given that diagnosis. Of the 30 patients who would have preferred over-the-counter antibiotics, 20 (67%) were actually prescribed them. Agreement between EP and women's final impressions was low (κ = 0.11). Conclusions: There was poor agreement between EP diagnosis and self-diagnosis of UTI. In our ED population, women should be encouraged to seek medical attention to confirm the diagnosis. abstract_id: PUBMED:6424853 Clinical judgment in the diagnosis and management of frequency and dysuria in general practice. In a study of 40 women with the urethral syndrome and 46 women with conventional urinary tract infection, none of whom was pregnant, general practitioners predicted the diagnosis correctly before the report on the midstream urine specimen was received, as evidenced by their management. They seemed to do this by balancing the symptom of dysuria with the psychological make up of the patient: patients with the urethral syndrome suffered appreciably less dysuria than patients with urinary tract infection; patients with the urethral syndrome suffered appreciably more psychological illness. This ability to distinguish between the two disorders has important clinical and economic implications. abstract_id: PUBMED:27610444 Paediatric urinary tract infections: Diagnosis and treatment. Background: Urinary tract infections (UTIs) commonly occur in children. An estimated 8% of girls and 2% of boys will have at least one episode by seven years of age. Of these children, 12-30% will experience recurrence within one year. Australian hospital admission records indicate that paediatric UTIs represent 12% of all UTI hospital admissions. Objective: The aim of this article is to review the pathogenesis, clinical assessment and management of UTIs, and prevention strategies in children. Discussion: Clinically, paediatric UTI presentations are challenging because symptoms are vague and variable. Young infants may present with sepsis or fever and lack specific symptoms, whereas older children present with classical features such as dysuria, frequency and loin pain. Early diagnosis with appropriate urine specimen collection techniques, investigations and treatment is necessary for prevention of renal damage and recurrence. Effective, evidence-based investigations and treatment options are available, and physicians should feel confident in identifying and managing paediatric UTIs. abstract_id: PUBMED:8411876 The modern diagnosis of stenosis and obstruction of the ureter Included in the study were 681 patients with stenosis and obstruction of the ureter (SOU), diagnosed at the Department of Urology, Clinical Center of Urology, University Aleksandrovska Hospital in Sofia for a period of 5 years. The diagnosis rested on clinical, laboratory, X-ray, radioisotopic, echographic, endoscopic and other investigations. Analysis of the clinical and laboratory data demonstrated an outstanding triad of clinical symptoms: pain (76.8 per cent), hematuria (74.0 per cent), dysuria (70.9 per cent). The laboratory data confirmed the standpoint of other authors that most common is the urinary tract infection, followed in incidence by microhematuria, leukocyturia and hemoglobin content lower than 10 mg %. The most common causative agents of infection were E. coli (29.8 per cent) and Proteus (26.28 per cent). The X-ray methods being used were excretory urography (76.8 per cent) and plain X-ray on kidney-ureter-bladder film (93.2 per cent of the patients). Excretory urography furnished information on the cause, degree and location of SOU and on the effect upon the kidney. It helped to determine the approach to treatment--conservative transureteral or operative. abstract_id: PUBMED:32661011 Effect of pre-existing conditions on bladder cancer stage at diagnosis: a cohort study using electronic primary care records in the UK. Background: Pre-existing concurrent medical conditions (multimorbidity) complicate cancer diagnosis when they provide plausible diagnostic alternatives for cancer symptoms. Aim: To investigate associations in bladder cancer between: first, pre-existing condition count and advanced-stage diagnosis; and, second, comorbidities that share symptoms with bladder cancer and advanced-stage diagnosis. Design And Setting: This observational UK cohort study was set in the Clinical Practice Research Datalink with Public Health England National Cancer Registration and Analysis Service linkage. Method: Included participants were aged ≥40 years with an incident diagnosis of bladder cancer between 1 January 2000 and 31 December 2015, and primary care records of attendance for haematuria, dysuria, or abdominal mass in the year before diagnosis. Stage at diagnosis (stage 1 or 2 versus stage 3 or 4) was the outcome variable. Putative explanatory variables using logistic regression were examined, including patient-level count of pre-existing conditions and 'alternative-explanations', indicating whether pre-existing condition(s) were plausible diagnostic alternatives for the index cancer symptom. Results: In total, 1468 patients (76.4% male) were studied, of which 399 (35.6%) males and 217 (62.5%) females had alternative explanations for their index cancer symptom, the most common being urinary tract infection with haematuria. Females were more likely than males to be diagnosed with advanced-stage cancer (adjusted odds ratio [aOR] 1.62; 95% confidence interval [CI] = 1.20 to 2.18; P = 0.001). Alternative explanations were strongly associated with advanced-stage diagnosis in both sexes (aOR 1.69; 95% CI = 1.20 to 2.39; P = 0.003). Conclusion: Alternative explanations were associated with advanced-stage diagnosis of bladder cancer. Females were more likely than males to be diagnosed with advanced-stage disease, but the effect was not driven entirely by alternative explanations. abstract_id: PUBMED:23299232 Initial presentation in psychiatry emergency room led to diagnosis of many urinary bladder stones in a male patient. The first case of man who presented to psychiatry emergency room for evaluation of abnormal behavior because of urinary stones was reported. Careful evaluation of patient led to a diagnosis of 37 urinary bladder stones in an Egyptian man with obstructive uropathy and metabolic defects in the form of hyperoxaluria and hypocitraturia. Knowledge of the differential diagnosis of metabolic defects can lead to successful outcome in preventing reformation of urinary tract stones after surgery. A 61-year-old Egyptian man presented to psychiatry emergency room because he was found lying on floor in bathroom to urinate by his wife who thought her husband needed psychiatric evaluation. Patient gave history of frequent urination and dysuria on and off for 3 years. In the last 3 months before his presentation to emergency room, he got into a habit of lying down on his left side when he went to bathroom to urinate because it was easier for him to pass urine. Renal consultation requested because of presence of red blood cells in urinalysis. Computed tomography of the abdomen and pelvis showed bilateral hydronephrosis and multiple bladder stones. Twenty-four-hour urine collection showed low urinary citrate and high oxalate. Patient underwent open vesicolithotomy and removal of 36 stones. Stone analysis showed 75% uric acid and 25% calcium oxalate. Patient did very well after surgery, and 1 month later, he underwent transuretheral resection of prostate without any complications. Now patient has no difficulty passing urine and he has no recent attack of urinary tract infection. Knowledge of the differential diagnosis of metabolic defects in men with urinary bladder stones would hopefully provide clinicians with the proper diagnostic tools to more specifically treat such patients with improved success in preventing reformation of urinary tract stones after surgery. abstract_id: PUBMED:10372584 Clinical diagnosis and the order of information. Background: Information order can influence judgment. However, it remains unclear whether the order of clinical data affects physicians' interpretations of these data when they are engaged in familiar diagnostic tasks. Methods: Of 400 randomly selected family physicians who were given a questionnaire involving a brief written scenario about a young woman with acute dysuria, 315 (79%) returned usable responses. The physicians had been randomized into two groups, and both groups had received the same clinical information but in different orders. After learning the patient's chief complaint, physicians received either the patient's history and physical examination results followed by the laboratory data (the H&amp;P-first group) or the laboratory data followed by the history and physical examination results (the H&amp;P-last group). The results of the history and physical examination were supportive of the diagnosis of UTI, while the laboratory data were not. All physicians judged the probability of a urinary tract infection (UTI) after each piece of information. Results: The two groups had similar mean estimates of the probability of a UTI after learning the chief complaint (67.4% vs 67.8%, p = 0.85). At the end of the scenario, the H&amp;P-first group judged UTI to be less likely than did the H&amp;P-last group (50.9% vs 59.1%, p = 0.03) despite having identical information. Comparison of the mean likelihood ratios attributed to the clinical information showed that the H&amp;P-first group gave less weight to the history and physical than did the H&amp;P-last group (p = 0.04). Conclusions: The order in which clinical information was presented influenced physicians' estimates of the probability of disease. The clinical history and physical examination were given more weight by physicians who received this information last. abstract_id: PUBMED:37521363 The importance of morphological changes in neutrophils in the diagnosis of bacterial infections in dogs with confirmed urinary tract infections in a Veterinary Care Service, Rio de Janeiro, Brazil. Neutrophils (PMNs) are cellular markers used for diagnosing inflammation and/or infections. In this study, the objective was to highlight the importance of recording the toxic morphological alterations of the PMNs as markers of infection in 10 cases, positive bacterial isolation by culture due to dysuria, hematuria and/or fetid urine, as manifestations of urinary tract disease. Smear observations were performed by immersion for counting and morphological evaluations of 3,000 leukocytes in smears and in leukocyte concentrate. One (10.0%) of the dogs had leukocytosis, and two (20.0%) dogs had leukopenia. All animals showed toxic PMNs with positive bacterial culture. None of the cases in the study showed any quantitative alterations in PMNs such as: neutropenia or neutrophilia, where 100% had nuclear displacement of the regenerative type of PMNs to the left. 100% cases had toxic morphological changes: 90.0% had PMNs with toxic granulations, 80.0% had giant rod neutrophils, 70.0% had target PMNs, in 50.0% of those with vacuolation in the cytoplasm, in 40.0% of the animals, the presence of giant PMNs, 10.0% with Döhle bodies, and another animal 10.0% with karyorrhexis. All case studies had at least one association of two types of toxic changes. Toxic morphological alterations observed in PMNs through cystoscopy proved to be more reliable and sensitive in evidencing the diagnosis of infections than the quantitative alterations of absolute values of total leukocytes; therefore, they were essential in the laboratory diagnosis by blood count in the course of infections in dogs. abstract_id: PUBMED:3043881 Protocol for diagnosis of urinary tract infection: reconsidering the criterion for significant bacteriuria. The traditional criterion of 10(5) colony-forming units (CFU) per milliliter of urine to diagnose urinary tract infection was based on studies of pregnant and nonpregnant women with asymptomatic bacteriuria or acute pyelonephritis. Recent studies of symptomatic women revealed that urine cultures in approximately one third of those with confirmed urinary tract infections grew only 10(2) to 10(4) CFU/mL. The major causes of acute dysuria among such women are urinary tract infection, sexually transmitted disease, and vaginitis. In most instances, it is possible to make the diagnosis based on clinical features. The major features of urinary tract infection are internal dysuria; frequency, urgency, and voiding of small volumes; abrupt onset; suprapubic pain; presence of pyuria. Presence of hematuria which occurs in about 50 percent of patients strongly suggests bacterial cystitis. Three to seven days of empiric antimicrobial therapy is indicated for these patients, with selection of a first-line antimicrobial agent that offers efficacy against Escherichia coli or Staphylococcus saprophyticus; reasonable cost; few side effects. Ampicillin is not recommended. Indications for culture include uncertain clinical features; history of previous infection within the past three weeks; duration of symptoms of more than seven days; recent hospitalization or catheterization; pregnancy; diabetes. To maximize the sensitivity and specificity of the urine culture in acutely symptomatic women, it is necessary to request the laboratory to report 10(2) to 10(4) CFU/mL. Answer: Yes, dysuria is useful for the diagnosis of urinary tract infections (UTIs). A cross-sectional study assessing the symptom of dysuria in patients consulting at general medicine clinics found that dysuria had a high sensitivity of 96% for the diagnosis of UTI, meaning that the absence of dysuria makes a UTI unlikely. However, the specificity was 69%, indicating that not all patients with dysuria have a UTI, which could lead to overtreatment if dysuria is used as the sole diagnostic criterion. The positive predictive value was 30%, and the negative predictive value was 99%, suggesting that while many patients with dysuria may not have a UTI, those without dysuria are very unlikely to have a UTI (PUBMED:11149188). In another study involving female patients in an emergency department, there was poor agreement between emergency physician diagnosis and self-diagnosis of UTI based on symptoms like dysuria. This suggests that while patients may recognize symptoms such as dysuria, it is not sufficient for a definitive diagnosis without medical confirmation (PUBMED:24063877). General practitioners have been observed to use clinical judgment effectively by considering both the symptom of dysuria and the psychological makeup of the patient to distinguish between the urethral syndrome and conventional UTI (PUBMED:6424853). In pediatric cases, dysuria is one of the classical features that older children may present with, alongside frequency and loin pain. However, the symptoms of UTI in children can be vague and variable, making early diagnosis with appropriate urine specimen collection techniques, investigations, and treatment necessary (PUBMED:27610444). In summary, dysuria is a useful symptom for the diagnosis of UTI, but it should not be the sole criterion for diagnosis. Other clinical features, patient history, and laboratory tests are important to confirm the presence of a UTI and to avoid overtreatment based on the symptom of dysuria alone.
Instruction: Visual estimates of nucleus-to-nucleus ratios: can we trust our eyes to use the Bethesda ASCUS and LSIL size criteria? Abstracts: abstract_id: PUBMED:18798538 Visual estimates of nucleus-to-nucleus ratios: can we trust our eyes to use the Bethesda ASCUS and LSIL size criteria? Background: Apart from several subjective criteria, ASCUS and LSIL are defined by nuclear enlargement of 2.5x to 3x and &gt; or = 3x the area of a normal intermediate squamous cell nucleus, respectively. The aim of this study was to assess the ability of observers with various degrees of experience to estimate nuclear area ratios. Methods: Forty-five participants (5 anatomic pathologists, 5 cytopathologists, 2 cytopathology fellows, 16 pathology residents, 8 cytotechnologists, 6 medical students, and 3 cytopreparatory staff members) judged the area ratios of pairs of squamous cell nuclei imaged at x100 objective magnification. For Test 1 (T1), participants chose between 5 preset area ratios (1.0x to 1.4x, 1.5x to 1.9x, 2.0x to 2.4x, 2.5x to 2.9x, 3.0x to 3.4x) for 15 pairs of cells with ratios falling in the middle of these intervals. One week after T1, an instructional tutorial was given and T1 was repeated in different order of cell pairs as Test 2 (T2). Results: The kappa values for all participants were 0.30 for T1 and 0.39 for T2. Accurate responses were given in 50.5% in T1 and 53.5% in T2. Both T1 and T2 responses failed to differentiate between area ratios corresponding to ASCUS and LSIL. There were no differences in accuracy according to sex, experience, or expertise in cytopathology. Conclusions: The overall accuracy of participants in estimating nuclear area ratios was low, especially for ratios in the range of ASCUS versus LSIL, with only minimal improvement after an instructional tutorial. Nuclear area ratio estimation may, therefore, contribute to the well-known substantial variability in ASCUS rates and ASC/squamous intraepithelial lesion ratios. abstract_id: PUBMED:12063442 Cervical cytologic reports of ASCUS and LSIL. Cyto-histological correlation and implication for management Background: The purpose of this retrospective study is to evaluate the appropriate management of ASCUS and LSIL pap smears by correlating the histological findings obtained by punch biopsy or excised specimens. Methods: The study group included 584 women with abnormal pap smear: 358 with ASCUS and 226 with LSIL cytological abnormalities. All patients underwent colposcopy and, if necessary, directed-punch biopsy. In case of biopsy-proven dysplasia a destructive or excisional treatment was performed, as indicated. Results: The prevalence of HPV-CIN histological lesions in ASCUS patients was 36.3% and in LSIL patients was 67.7%. High grade CIN was observed in 15.7 and 20.8% respectively. In one ASCUS patients an invasive lesion was diagnosed on punch biopsy and two LSIL patients showed stromal invasion on the final histopathologic report on excised specimens. Conclusions: Patients with ASCUS or LSIL pap smear exhibit a wide spectrum of histological findings ranging from no pathologic abnormality to frequent high grade CIN and invasive carcinoma in rare cases. Because of the histological assessment by directed-punch biopsy and its therapeutical indications, colposcopic examination is recommended for all women with a cytologic diagnosis of ASCUS and LSIL. abstract_id: PUBMED:24842156 Long-term HPV type-specific risks for ASCUS and LSIL: a 14-year follow-up of a randomized primary HPV screening trial. Human papillomavirus (HPV) infections result in a significant burden of low-grade cervical lesions. Between 1997 and 2000, our randomized trial of primary HPV screening enrolled 12,527 women participating in population-based screening. Women between 32 and 38 years of age (median: 34, interquartile range: 33-37) were randomized to HPV and cytology double testing (intervention arm, n = 6,257 enrolled, n = 5,888 followed-up) or to cytology, with samples frozen for future HPV testing (control arm, n = 6,270 enrolled, n = 5,795 followed-up). We estimated the HPV type-specific, long-term absolute risks (AR), and population attributable proportions (PAR) for cytological diagnoses of atypical squamous cells of undetermined significance (ASCUS) or low-grade squamous intraepithelial lesion (LSIL) and for histopathologically diagnosed cervical intraepithelial neoplasia grade 1 (CIN1). The women were followed using comprehensive, nationwide register-based follow-up. During a mean follow-up time of 11.07 years, 886 ASCUS and LSIL lesions were detected, 448 in the intervention arm and 438 in the control arm. Poisson regression estimated the incidence rate ratios (IRRs) of low-grade lesions by HPV type. The IRRs were strongly dependent on follow-up time. The IRRs for ASCUS/LSIL associated with high-risk HPV positivity were 18.6 (95% CI: 14.9-23.4) during the first screening round, 4.1 (95% CI: 2.8-6.2) during the second, 2.6 (95% CI: 1.7-4.1) during the third, and 1.1 (95% CI: 0.7-1.8) for &gt;9 years of follow-up, with similar declines seen for the individual types. Type 16 contributed consistently to the greatest proportion of ASCUS, LSIL, and CIN1 risk in the population (first screening round PAR: ASCUS: 15.5% (95% CI: 9.7-21.9), LSIL: 14.7% (95% CI: 8.0-20.9), and CIN1: 13.4% (95% CI: 3.2-22.5)), followed by type 31 [8.4% (95% CI: 4.2-12.5) for ASCUS to 17.3% (95% CI: 6.8-26.6) for CIN1]. In summary, most ASCUS/LSIL lesions associated with HPV infection are caused by new HPV infections and most lesions are found during the first screening round. abstract_id: PUBMED:24403090 HPV DNA testing improves CIN2+ risk stratification and detection of CIN2+ in delayed triage of ASCUS and LSIL. A population-based follow-up study from Western Norway. In Norway, Pap smears with atypical squamous cells of uncertain significance (ASCUS) and low-grade squamous intraepithelial lesions (LSIL) are triaged after 6 months. The aim of the study was to evaluate effects of implementing human papillomavirus (HPV) test (2005) in delayed triage of ASCUS and LSIL in a cohort of women from Western Norway. After a survey of 119,469 cervical Pap smears during 2005-2007, a total of 1055 women with an index ASCUS or LSIL were included in the study and followed up for 3-6 years with respect to progression into cervical intraepithelial neoplasia grade 2 or worse (CIN2+). Overall sensitivity for detection of CIN2+ with HPV testing and cytology was 96% and 72%, respectively. The sensitivity for detection of CIN2+ was not affected by age, but the specificity of the HPV test increased with age. Thus, for the age groups &lt;34 years, 34-50 years, and &gt;50 years, the specificity of a positive HPV test to detect CIN2+ was 47%, 71%, and 82%, respectively. Positive predictive values for CIN2+ in women with positive cytology, positive HPV test, negative cytology, negative HPV test, or negative HPV and cytology tests were 52%, 41%, 8%, 1.5%, and 0.4%, respectively. HPV testing resulted in a net 22% increased detection of CIN2+. Fifty-six percent of CIN2+ was detected at an earlier time point with HPV testing in triage. Implementation of HPV testing in delayed triage of ASCUS and LSIL improved the stratification of CIN2+ risk and increased CIN2+ detection and at an earlier time point than with triage by cytology alone. abstract_id: PUBMED:11453853 Comparative study of vitrectomy for dropped nucleus with and without the use of perfluorocarbon liquid. Clinical, electrophysiological and visual field outcomes. Purpose: To compare pars plana vitrectomy for dropped nucleus with and without perfluorocarbon liquid. Methods: 24 eyes with dropped nucleus were randomized to undergo PPV with perfluorodecalin, (study, n=12) and without it (control, n=12). Visual acuity, IOP, visual evoked response and electroretinography were noted at study entry and up to 3 months after PPV. Humphrey perimetry was done at 3 months. Results: Final visual acuity was &gt;20/60 in 75% of study eyes and 41.6% of controls. The two groups were statistically comparable with regard to the above parameters before PPV and during follow-up. The timing of vitrectomy did not significantly influence the final visual acuity. A higher rate of complications was seen in the control group. Conclusions: Perfluorodecalin facilitated the procedure and possibly minimized complications, and was found to be safe for intraoperative use in terms of recovery of visual acuity, stable ERG and VER, and normal visual fields. abstract_id: PUBMED:12722467 Nucleolus organizer regions (agnors) and Ki-67 antigens expression from the cervix and cervical canal smears with respect to changes in ascus and AGUS (atypical cells of undetermined significance) according to Bethesda Cytological System of Classification Objectives: The cytologic report written by a cytopathologist on the basis of smears from the cervix and cervical canal ought to give detailed and worthwhile information which allows making appropriate diagnostic and therapeutic decisions. The diagnosis of atypical cells of unspecified importance (ASCUS and AGUS) ought to have clinicians to carry out further diagnostic histopathologic procedures in order to explain the pathologic character of processes going on in the female reproductive organ. This is because such pathological states may as well spell could mean reactive, preinvasive as neoplastic changes of invasive character. Design: This problem has led to carry out a research in order to examine the proliferative potential of cells in cytological smears (Ki-67, AgNORs) and to answer whether this potential allows us to qualify atypical cells as changes of reactive or neoplastic type. Materials: In our study there were estimated 97 cases of cytological smears from the cervix and cervical canal. In this paper, immunohistochemical methods, and staining techniques developed by Ploton et al. in the detection of AgNORs and in the examination of Ki-67 antigen were used. Conclusions: 1. The average number of AgNORs in cell nucleus was highest in ASCUS cells and exceedingly higher in CIN 1 and CIN 3.2. The areal ratio of AgNORs to that of the cell nucleus was highest in HSIL and cancer cells, least in ASCUS. 3. In ASCUS and LSIL, AgNORs were minute and scattered whereas they were large and centrally located in HSIL and cancer cells. Results: We found that analysis of AgNORs makes easier the differentiation of ASCUS and LSIL cells from HSIL and neoplastic ones according to the classification system of Bethesda. abstract_id: PUBMED:31001477 Over Expressed TKTL1, CIP-2A, and B-MYB Proteins in Uterine Cervix Epithelium Scrapings as Potential Risk Predictive Biomarkers in HR-HPV-Infected LSIL/ASCUS Patients. High oncogenic risk human papillomaviruses (HR-HPVs) promote cervical carcinoma development, the fourth most common feminine cancer. A slow oncodevelopmental phase-defined histopathologically as Cervical Intraepithelial Neoplasia (CIN) grades 1-3, or cytologically as Low- or High-grade Squamous Intraepithelial Lesions (LSIL or HSIL)-precedes the malignancy. Cervical carcinoma screenings through HR-HPV genotyping and Pap smears are regularly performed in Western countries. Faulty cytology screening or genotyping or patients' non-compliance with follow-ups can let slip an oncoprogression diagnosis. Novel biomarker tests flanking HR-HPV genotyping and cytology could objectively predict the risk of disease progression thus helping triage LSIL/ASCUS patients. Here, anonymized leftovers of fresh cervical epithelium scrapings from twice (LSIL/ASCUS and HR-HPV DNA)-positive and twice (Pap smear- and HR-HPV DNA)-negative (control) patients in a proteome-preserving solution served to assess the biomarker worth of three cervical carcinoma-related proteins, i.e., B-MYB (or MYBL2), Cancerous Inhibitor of PP2A (CIP-2a), and transketolase-like1 (TKTL1). Leftovers anonymity was strictly kept and storage at -80°C, protein extraction, immunoblotting, and band densitometry were blindly performed. Only after tests completion, the anonymous yet code-corresponding HR-HPV-genotyping and cytology data allowed to assign each sample to the twice-positive or twice-negative group. Descriptive statistics showed that the three proteins levels significantly increased in the twice-positive vs. twice-negative scrapings. Diagnostic ROC curve analysis identified each protein's Optimal Decision Threshold (OTD) showing that TKTL1 and CIP-2a are stronger risk predictive biomarkers (Sensitivity, 0.91-0.93; Specificity, 0.77-0.83) than B-MYB. Logistic Regression coupled with Likelihood-Ratio Tests confirmed that a highly significant relation links increasing TKTL1/CIP-2a/B-MYB protein levels in twice-positive cervical scrapings to the risk of HR-HPV-driven oncoprogression. Finally, a 3 year clinical follow-up showed that 13 patients (50% of total) of the twice-positive group with biomarker values over OTDs compliantly underwent scheduled colposcopy and biopsy. Of these, 11 (i.e., 84.7%) received a positive histological diagnosis, i.e., CIN1 (n = 5; 38.5%) or CIN2/CIN2+ (n = 6; 46,2%). Therefore, TKTL1/CIP-2a/B-MYB protein levels could objectively predict oncoprogression risk in twice (HR-HPV- and Pap smear)-positive women. Further studies will assess the translatability of these findings into clinical settings. abstract_id: PUBMED:22630095 Visual outcome of pars plana vitrectomy for dropped nucleus after phacoemulsification. Objective: To determine the visual outcome of patients who underwent pars plana vitrectomy for dropped nucleus after phacoemulsification. Study Design: Interventional case series. Place And Duration Of Study: LRBT Free Base Eye Hospital, Karachi, from February 2008 to January 2011. Methodology: Forty-eight eyes of forty eight patients having history of dropped nucleus (soft remnant, half nucleus or complete nucleus) underwent 20 gauge pars plana vitrectomy within 24 days of phacoemulsification. After complete vitrectomy nucleus was lifted with the help of perfluorocarbon and removed either through a limbal incision or by using phacofragmenter, whereas small lens remnants were removed with a vitreous cutter. Intraocular lens was implanted at the end of surgery. Postoperative visual acuity, and any complications were assessed. Patients were followed for a period of 12 months. Results: Final visual acuity ranged from 6/9 to 6/18 in 34 eyes (70.83%), 6/24 to 6/36 in 8 eyes (16.66%) and 6/60 or less in 6 of 48 eyes (12.5%). Complications included raised intraocular pressure in 6 eyes (12.5%) and retinal detachment in 2 eyes (4.1%), corneal oedema and decompensation in 3 eyes (6.25%) and cystoids macular oedema in 4 cases (8.33%) out of 48 cases. Conclusion: The loss of crystalline lens in the vitreous during phacoemulsification is a severe complication, but appropriate and timely management can restore good visual outcome and minimize complications. abstract_id: PUBMED:8844638 The effect of ageing on neurones in the visual sector of the thalamic reticular nucleus. This paper studies the quantitative morphological changes occurring during ageing in neurones of the dorsocaudal or visual sector of the thalamic reticular nucleus. Male Wistar rats aged 3, 6, 18, 24 and 30 months were used in this study which applied morphometric methods. We have observed an increase in the size of neurones from this sector between the 3rd and 24th month and a decrease between the 24th and 30th month. In all the ages studied the majority of neurones are fusiform. abstract_id: PUBMED:15587914 Controversies in the Bethesda System. To diagnose atypical squamous cells of undetermined significance (ASCUS) or not to diagnose? The classification of pap smears according to the Bethesda System introduced to the cytologic diagnostics atypical squamous cells of undetermined significance (ASCUS). From the moment of the introduction this term arises many controversies because morphological criteria proposed for the determining ASCUS are very subjective. Even most experienced cytologists not always are able to correct diagnosis of ASCUS. There are also doubts in the clinical procedure connected with the recognition ASCUS. To diminish doubts to the minimum appeared several proposals categorization of ASCUS. The cytologic diagnosis of ASCUS is a serious dilemma both clinical and considerable economic problem. There are suggestions of the elimination from the Bethesda System recognition ASCUS. Many authors it maintains that there is no reason to qualify atypical cells as ASCUS because it can be an image of the occurrence cervical intraepithelial neoplasia (CIN). The hope is that the three-dimensional microscopy, HPV DNA testing, cellular biomarkers, involvement of the cytology to colposcopic research considerably will decrease the number of incorrect diagnoses of ASCUS with the advantage for the clinical procedure. In light of opinions introduced in this paper it appears that the diagnostics ASCUS is legitimate. It is necessary to perform precise microscopic analysis of morphology observed cellules. The recognition of ASCUS one ought to place only exceptionally remembering simultaneously about clinical consequences of such diagnosis. Answer: Based on the study presented in abstract PUBMED:18798538, it appears that visual estimates of nucleus-to-nucleus ratios are not entirely reliable when using the Bethesda ASCUS and LSIL size criteria. The study aimed to assess the ability of observers with various degrees of experience to estimate nuclear area ratios, which are part of the criteria for defining ASCUS and LSIL. The results showed that the overall accuracy of participants in estimating nuclear area ratios was low, with accurate responses given in only 50.5% of cases in Test 1 and 53.5% in Test 2. Furthermore, the responses failed to differentiate between area ratios corresponding to ASCUS and LSIL, and there was only minimal improvement after an instructional tutorial. These findings suggest that the subjective nature of visual estimation may contribute to the substantial variability in ASCUS rates and ASC/squamous intraepithelial lesion ratios. Therefore, it can be concluded that we cannot fully trust our eyes to use the Bethesda ASCUS and LSIL size criteria, and this may have implications for the diagnosis and management of cervical cytological abnormalities.
Instruction: Can cross-talk occur in human myelinated nerve fibers? Abstracts: abstract_id: PUBMED:27214607 Can cross-talk occur in human myelinated nerve fibers? Introduction: The possibility that impulse cross-talk can occur between myelinated human nerve fibers was explored. Methods: Instances of impulse conduction without decrement were found, and published recordings of compound action potentials of functionally homogeneous fibers were scrutinized. Results: Both analytical approaches yielded results consistent with cross-talk occurring in some nerves after electrical stimulation. Conclusions: The possible ionic current paths in and out of neighboring fibers, which could be responsible for the phenomenon, have been considered in the light of seminal work on unmyelinated single axons. Muscle Nerve 54: 361-365, 2016. abstract_id: PUBMED:11084448 Myelinated retinal nerve fibers. Review of the literature We describe the clinical findings and diagnostic particularities of myelinated retinal nerve fibers observed in a personal case and review the literature. An 8-year-old child was referred for bilateral papillary abnormalities. The diagnosis was myelinated retinal nerve fibers as white striated patches with feathery edges. Myelinated retinal nerve fibers are a developmental anomaly which may be continuous or discontinuous with the optic nerve head. They are reported to occur in 0.57% to 0.98% of ophthalmic patients. The pathogenesis has not been established. Myelinated retinal nerve fibers are clinically conspicuous. Although they are generally believed to be benign lesions, they occasionally are associated with ocular complications such as myopia. abstract_id: PUBMED:1506849 Development of myelinated and unmyelinated fibers of human vagus nerve during the first year of life. As an important component of cardiorespiratory control, the vagus nerve and its maturation were evaluated in normal infants in order to provide standards of comparison with infants with dysfunctional neural control mechanisms. Myelinated and unmyelinated fibers in the cervical vagus nerve were examined in 27 term infants. Number of fibers, axon diameters, and myelin thickness were compared among four age groups. The histograms of axon size exhibited a skewed distribution that persisted during the age range examined for both myelinated and unmyelinated vagus fibers. Fiber size distributions of myelinated fibers, however, already showed an incipient multimodal distribution after 3 months. No major increase was observed in the average axon size of myelinated and unmyelinated fibers. A significant increase was observed, however, in the average content of myelin in myelinated fibers expressed as myelin thickness, number of lamellae or g ratio (internal/external diameter). These results suggest an active myelination during the first 9 months of life without a major change in the axonal characteristics of the fibers. A marginal increase in the density of myelinated fibers with age, both in terms of total number and the relation to unmyelinated fibers, suggests a slow transition from unmyelinated to myelinated fibers during the first year of life, particularly during the first 3 months. The present morphometric parameters indicate an active deposition of myelin before the maturation of cross-axonal dimensions. The distribution of g ratios also suggests that optimal conduction velocity is compromised only in a fraction of all vagal myelinated fibers. abstract_id: PUBMED:1292373 Number and size-spectra of myelinated nerve fibers of human premolars. The primary objective of this study was to determine the number and size of myelinated nerve fibers at the subcervical, midroot and juxta-apical levels of human premolars. Sixty-seven healthy premolars extracted from adolescents were utilized. Root-discs were prepared from the three sites and processed for light and electron microscopy. The myelinated nerve fibers were counted from semithin sections using a sampling microscope. The measurements were taken from composite electron micrographs using an electronic image processing unit. A total of 1883 myelinated axons from seven mandibular second premolars was gauged. The 67 teeth had an average of 312 +/- 149 myelinated nerve fibers at the juxta-apical level (range 18 to 728). The contra- and ipsilateral differences in means among the four groups of premolars were not significant (P &gt; 0.05). The number of nerves increased significantly (P &lt; 0.05) toward midroot and subcervical (P &lt; 0.001) levels in all groups. The average neural diameter was 3.5 + 1.0 microns at the juxta-apical level, and the between-teeth difference in mean was found to be significant (P &lt; 0.01). There was no decline (P &gt; 0.05) in the diameter of myelinated nerve fibers toward midroot and subcervical levels. abstract_id: PUBMED:25976073 Morphometric analysis of the diameter and g-ratio of the myelinated nerve fibers of the human sciatic nerve during the aging process. Myelinated nerve fibers suffer from different degrees of atrophy with age. The success of subsequent regeneration varies. The aim of this research was to analyze myelinated fibers of the human sciatic nerve during the aging process. Morphometric analysis was performed on 17 cases with an age range from 9 to 93 years. The outer and inner diameter of 100 randomly selected nerve fibers was measured in each of the cases evaluated, and the g-ratio (axonal diameter/outer diameter of the whole nerve fiber) of each was calculated. Scatter plots of the diameters and g-ratios of the analyzed fibers were then analyzed. Nerve fibers of each case were classified into three groups according to the g-ratio values: group I (g-ratio lower than 0.6), group II (g-ratio from 0.6 to 0.7) and group III (g-ratio higher than 0.7). Afterwards, nerve fibers of group II were further classified into small and large subgroups. The percentages of each group of nerve fibers were computed for each case and these values were used for correlational and bivariate linear regression analysis. The percentage of myelinated nerve fibers with large diameter and optimal g-ratio of the sciatic nerve declines significantly with age. This is accompanied by a simultaneous significant increase in the percentage of small myelinated fibers with g-ratio values close to 1 that occupy the upper left quadrant of the scatter plot. It can be concluded that aging of the sciatic nerve is associated with significant atrophy of large myelinated fibers. Additionally, a significant increase in regenerated nerve fibers with thinner myelin sheath is observed with age, which, together with the large myelinated fiber atrophy, might be the cause of the age-related decline in conduction velocity. A better understanding of the changes in aging peripheral nerves might improve interpretation of their pathological changes, as well as comprehension of their regeneration in individuals of different age. abstract_id: PUBMED:7234927 Myelinated retinal nerve fibers. In a series of 3,968 consecutive autopsies, myelinated nerve fibers of the retina were presented in 39 (0.98%) cases and bilateral in three (7.7%) affected cases; thus, 42 (0.54%) of the 7,936 eyes examined were affected. Myelinated nerve fiber lesions appeared as white or gray-white striated patches corresponding in shape to the distribution of retinal nerve fibers and demonstrated frayed borders. Myelinated retinal nerve fibers were continuous with the optic nerve head in 14 (33%) and discontinuous with the optic nerve head in 28 (66%) affected eyes. By light microscopy and electron microscopy, myelinated retinal fibers were marked by a ganglion cell axon surrounded by concentric lipoprotein lamellae that formed the myelin sheath. Glial cells were often prominent near the myelin sheaths, but other components of the sensory retina were morphologically normal. Clinically, 32 patients with myelinated retinal nerve fibers had comparable overall features, visual field defects less extensive than expected on the basis of ophthalmoscopic appearance, and normal findings on fluorescein angiography. Four patients had a syndrome characterized by ipsilateral extensive myelinated retinal nerve fibers, anisometropic myopia, amblyopia, and strabismus. abstract_id: PUBMED:11915683 Myelinated nerve fibers associated with juxtapapillary haemorrhages The myelinated nerve fibers (MNF) are o congenital anomaly characterized by the extension of the myelinisation process anterior from the lamina cribosa. The incidence of myelinated nerve fibers is 0.3-0.6% in patients and 0.54% in eyes from autopsies. Although these represent benign lesion, rarely are associated with retinal abnormalities: vascular abnormality (retinal telangiectasis, cranial-facial lesions, coloboma of the iris, keratoconus, myopia/strabismus, amblyopia). Myelinated nerve fibers asSociated with juxtapapillary haemorrhages was not reported in the available literature. We describe the clinical findings and diagnostic particularities of myelinated retinal nerve fibers associated with juxtapapillary haemorrhages in a personal case. An 40-year-old woman was referred for unilateral papillary abnormalities. The diagnosis was myelinated retinal nerve fibers as white striated patches with feathery edges, associated with two juxtapapillary haemorrhages. The clinical and pathogenic features are discussed. Based on the association between the myelinated nerve fibers, the juxtapapillary haemorrhages there is a possible pathogenic correlation between these elements. The action of the myelinated nerve fibers can be explained by a mechanical compression, with the disruption of the retinal artery, a structural vulnerable artery. abstract_id: PUBMED:11591916 Myelinated nerve fibers associated with cilioretinal artery occlusion Although the myelinated nerve fibers are considered as benign, cases associated with retinal vascular abnormalities (telangiectasis, neovascularization, vascular occlusions) have recently been reported, suggesting a possible pathogenic correlation between these elements. Our observation presents a 44-year-old patient, with a sudden decrease of visual acuity in the right eye, discal and peridiscal myelinated nerve fibers, associated with cilioretinal artery occlusion. The general clinical and laboratory examination, except for a drug stabilized arterial hypertension, did not reveal other local or general embolic factors. One year later, the decrease in visual acuity had remained unchanged and the area of the myelinated nerve fibers had diminished. Based on the association between the myelinated nerve fibers, the cilioretinal artery occlusion, the young age of the patient, and the absence of other local or general embolic factors, we consider there is a possible pathogenic correlation between these elements. The action of the myelinated nerve fibers can be explained by a mechanical compression, with the disruption of the cilioretinal artery route, a structurally vulnerable artery. abstract_id: PUBMED:18401291 Retinal neovascularization with myelinated nerve fibers Introduction: The prevalence of myelinated retinal nerve fibers is 0.3%-0.6% of eyes. Although they often constitute benign lesions, they are rarely associated with retinal vascular abnormalities including preretinal neovascularization. Case Report: A 31-year-old patient, with no previous pathological antecedents, consulted us for myodesopsia in her left eye. The exam found a visual acuity of 10/10 and a normal anterior segment in the two eyes. At the ophthalmoscopic examination of the left eye, we noted myelinated nerve fibers in the inferotemporal quadrant that were associated with a temporal neovascular bouquet covering area of 1.5 optic disks. A small amount of intravitreal hemorrhage was found. The ophthalmoscopic examination of the right eye was normal. The treatment consisted in sector-based photocoagulation with argon laser and cryotherapy. Discussion: The source of neovascularization in the myelinated retinal nerve fibers was discussed. Underlying retinal ischemia is the most probable mechanism. The treatment was based on sector-based or panretinal photocoagulation. Conclusion: Preretinal neovascularization is a rare complication of myelinated nerve fibers. The earlier the care is given, the more complications can be avoided. abstract_id: PUBMED:34851788 Progressive Myelinated Retinal Nerve Fibers in Children With Craniosynostosis. The authors found the changes of myelinated retinal nerve fibers in one pair of identical twins with nonsyndromic craniosynostosis and two patients with Crouzon syndrome, who were at risk for impaired lamina cribrosa barrier function. This is the first report of progressive and presumed acquired myelinated retinal nerve fibers in craniosynostosis. [J Pediatr Ophthalmol Strabismus. 2021;58(6):e40-e43.]. Answer: Yes, cross-talk can occur in human myelinated nerve fibers. This is supported by a study that explored the possibility of impulse cross-talk between myelinated human nerve fibers. The study found instances of impulse conduction without decrement and scrutinized published recordings of compound action potentials of functionally homogeneous fibers, yielding results consistent with cross-talk occurring in some nerves after electrical stimulation (PUBMED:27214607).
Instruction: A Comparative Retrospective Analysis of Complications After Oncoplastic Breast Reduction and Breast Reduction for Benign Macromastia: Are These Procedures Equally Safe? Abstracts: abstract_id: PUBMED:24691307 A Comparative Retrospective Analysis of Complications After Oncoplastic Breast Reduction and Breast Reduction for Benign Macromastia: Are These Procedures Equally Safe? Introduction: Oncoplastic breast reduction has been shown to be an effective approach to breast conservation surgery in women with macromastia. Clear surgical margins can be achieved while simultaneously improving symptomatic macromastia and enhancing aesthetic outcomes. Little has been written about postoperative complications after this procedure, beyond the risk of locoregional recurrence. This study aimed to compare the complication profile for oncoplastic breast reduction versus reduction for benign macromastia. Methods: A retrospective review of our experience with oncoplastic breast reduction was performed. This represented a consecutive series of 118 patients undergoing bilateral breast reduction during the 7-year study period from March 2005 to March 2012. There were 64 patients identified who underwent oncoplastic breast reduction. Patients were determined to be a good candidate for breast conservation therapy if it was felt that clear surgical margins could be obtained without mastectomy. Postoperative complications (within 6 weeks of surgery) were compared to a control group of 56 patients undergoing reduction for benign macromastia. The associations between complications and potential risk factors were analyzed using logistic regression. Results: Patients undergoing oncoplastic breast reduction and reduction for benign macromastia had some key differences. In general, macromastia patients were younger (mean age, 42.3 vs 57.5 years; P &lt; 0.001) and had lower body mass index (mean, 26.1 vs 30.6 kg/m2; P &lt; 0.001) compared to those patients having oncoplastic reduction. Within the oncoplastic reduction group, 14 (21.9%) patients had a total of 16 complications; among the benign macromastia group, 9 (16.1%) patients had a total of 10 complications (P = 0.420). On univariate analysis, oncoplastic reduction was not predictive of having a perioperative complication (odds ratio, 1.462; 95% confidence interval, 0.579-3.696; P = 0.422). Body mass index was found to be predictive of having a complication after reduction for either indication (odds ratio, 1.108; 95% confidence interval, 1.018-1.206; P = 0.017). Within the oncoplastic reduction cohort at an average follow-up of 34.6 months (range, 0.3-90.3 months), 5 (7.9%) patients developed locoregional recurrence and 2 patients developed distant metastasis. Conclusions: Compared with reduction mammoplasty for benign macromastia, a widely accepted procedure, patients undergoing oncoplastic breast reduction were equally likely to have a postoperative complication. Elevated body mass index was shown to be a statistically significant predictor of having a complication after reduction for either indication. Overall complication rates were acceptably low for both procedures. abstract_id: PUBMED:35801636 Comparing oncoplastic breast reduction with immediate symmetry surgery to standard breast reduction surgery: Are postoperative complications worse? Introduction: Oncoplastic breast reduction mammoplasty (ORM) is an excellent treatment option for women with breast cancer and macromastia undergoing breast conservation therapy. Here, we aim to better understand the risks associated with ORM compared to standard reduction mammoplasty (SRM). Methods: A retrospective chart review was performed of patients undergoing ORM or SRM from 2015 to 2021. Primary outcomes included the occurrence of major or minor postoperative complications in the two groups and delays to adjuvant therapy (&gt;90 days) among the women undergoing ORM. Results: Women in the ORM group (n = 198) were significantly older (p &lt; 0.001) with a higher prevalence of smoking (p &lt; 0.001), diabetes mellitus (p &lt; 0.01), and a Charlson comorbidity index ≥ 3 (p &lt; 0.001) compared to women undergoing SRM (n = 177). After controlling for potential confounders, there were no significant between-group differences in the odds of developing postoperative complications (odds ratio = 0.80, 95% confidence interval: 0.36-1.69). Only 3% (n = 4) of the 150 women undergoing adjuvant radiation or chemotherapy experienced delays related to postoperative complications. Conclusion: ORM has a similar safety profile as SRM, despite the older age and higher number of comorbidities often seen in patients undergoing ORM, and is a safe option for achieving contralateral symmetry at the time of partial mastectomy without delays to adjuvant therapy. abstract_id: PUBMED:35006098 Comparing Outcomes after Oncoplastic Breast Reduction and Breast Reduction for Benign Macromastia. Background: Oncoplastic breast reconstruction improves cosmetic outcomes when compared to standard breast conservation therapy alone. The authors studied whether tailoring a breast reduction to a cancer resection affects complication rates by comparing (1) outcomes between oncoplastic and benign macromastia patients and (2) complication rates between the cancer side and the symmetrizing side of an oncoplastic reduction. Methods: A retrospective chart review was performed on female patients who underwent either oncoplastic or benign breast reduction over 9 years by a single surgeon. Patient demographics, intraoperative data, and postoperative outcomes were gathered from the electronic medical record. Chi-square and t tests were performed when appropriate to determine significance. Results: Of the 211 patients included in the study, 62 (29.4 percent) underwent oncoplastic breast reduction and 149 (70.6 percent) underwent breast reduction for benign macromastia. Total resection weight was greater in the benign group (p = 0.00). There was a higher rate of loss of nipple sensation in the oncoplastic group (p = 0.005) but no differences in any other complication. There was a higher complication rate in the oncologic breast when compared to the symmetrizing breast within the oncoplastic cohort (p = 0.039), but no differences in the rates of individual complications. Conclusions: Although the loss of nipple sensation was increased in patients undergoing oncoplastic breast reduction, all other outcomes were not significantly different between the two groups. The authors' findings indicate that oncoplastic breast reduction can be performed with a safety profile similar to that of a standard breast reduction. Clinical Question/level Of Evidence: Therapeutic, III. abstract_id: PUBMED:36735002 Aesthetic, Quality-of-Life, and Clinical Outcomes after Inferior Pedicle Oncoplastic Reduction Mammoplasty. Background: Oncoplastic breast surgery is more likely to achieve superior aesthetic outcomes compared to lumpectomy alone. Oncoplastic reduction mammoplasty (ORM) is a volume displacement oncoplastic technique that combines lumpectomy and reduction mammoplasty. Data on aesthetic and quality-of-life (QoL) outcomes after ORM are scarce in the literature. Based on a literature review, this present study reports outcomes on the largest group of ORM patients to date. Methods: A retrospective review was conducted of all patients who underwent ORM between 2011 and 2018 at a tertiary care centre. Patients were excluded if no pedicle information was available or did not undergo post-operative radiotherapy. All patients with available post-operative photographs were aesthetically evaluated by four blinded, independent investigators blinded based on breast symmetry, nipple symmetry, and overall appearance. The BREAST-Q (breast conserving module) was used to assess QoL outcomes. Results: Two-hundred-and-sixteen consecutive patients (223 breasts) were included. Macromastia (cup size D or higher) was present in 173 patients (80.1%). Inferior pedicle ORM was utilized in 179 (80.3%) breasts. Eighty-eight patients (40.7%) were aesthetically evaluated, of whom 69 patients (78.4%) had "good", "very good", or "excellent" grades in all aesthetic categories. Seventy-five patients (85.2%) had "good" or better grades in overall appearance. Preoperative ptosis grade, cup size, presence of post-operative complications, and breast specimen weight had no significant correlations with aesthetic grades. Inferior pedicle ORM was associated with a higher "satisfaction with breast" Q-score (p=0.017) compared to other pedicle approaches. Conclusion: Inferior pedicle ORM achieves objectively excellent aesthetic outcomes and high patient satisfaction with the reconstruction. Level Of Evidence Iv: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 . abstract_id: PUBMED:36516504 The no-vertical scar technique for oncoplastic breast reconstruction. Introduction: Oncoplastic reconstruction can optimize breast conserving therapy. Nipple loss is a concern in patients with significant ptosis and breast hypertrophy particularly with pedicle undermining during tumor resection. The modified Robertson technique (No-Vertical Scar reduction) has been previously described for breast reduction in large, ptotic patients using a wide, bell-shaped inferior pedicle with only inframammary fold and peri‑areolar incisions. The purpose of this study was to evaluate the No-Vertical Scar (NVS) technique applied to oncoplastic reconstruction. Methods: Women undergoing oncoplastic breast reduction using a NVS, Wise, or Vertical method were assessed. Predictive variables included patient demographics, comorbidities, and sternal notch to nipple (SNN) distance. Outcome variables were delayed wound healing, surgical site infection, seroma, fat necrosis, nipple necrosis, use of a free nipple graft, and time between surgery and adjuvant radiation. Results: Fifty patients met inclusion criteria using NVS (N = 15), Wise (N = 16), and Vertical (N = 19) methods. The NVS group had a significantly higher BMI (p=.009), greater sternal notch to nipple distance (p=&lt;0.001) and increased resection volume (p=&lt;0.001) as compared to Wise and Vertical groups. There was no significant difference in complications (p=.25). No nipple necrosis occurred, and no free nipple grafts were required. Conclusion: The NVS approach is a useful technique for oncoplastic reconstruction in select patients with macromastia and severe Grade II or Grade III ptosis. The wide, bell-shaped pedicle is versatile for obliterating a lumpectomy cavity and optimizing nipple perfusion if pedicle undermining occurs during resection. abstract_id: PUBMED:26358362 Oncoplastic reduction mammoplasty for breast cancer in women with macromastia: Oncological long-term outcomes. Objective: To evaluate the long-term results of tumorectomy and concomitant bilateral oncoplastic reduction mammoplasty (ORM) for early stage breast cancer patients with macromastia in terms of local disease control and long-term oncological results. Patients And Method: Data of 82 patients with macromastia undergoing ORM for breast cancer between 1996 and 2011 were retrospectively examined and evaluated with regard to oncological results. Results: The median age was 50 years. The median follow-up was 121 months (range 28-212 months). The median breast volume was 1402 cm3 and the median weight of excised breast material was 679 g. The median surgical margin was 16 mm. Ten-year local recurrence rate was 8.7%. The 10-year overall survival rate was 82.2% and the disease-free survival rate was 73.2%. Early and late complication rates were 12.2% and 14.6%, respectively. Conclusions: From the standpoint of local disease control and long-term observation, ORM can be considered a very safe and acceptable treatment for early stage breast cancer in women with macromastia. abstract_id: PUBMED:25121787 The oncoplastic reduction approach to breast conservation therapy: benefits for margin control. Background: Reduction mammaplasty during lumpectomy allows more generous resection and minimizes potential for poor cosmesis as compared with breast conservation therapy alone. Objectives: The authors assessed the benefits of oncoplastic reduction for margin status in patients with breast cancer by conducting a retrospective review of cases of tumor resection alone vs tumor resection with oncoplastic reduction. Methods: Patients with breast cancer who underwent lumpectomy performed by a single oncologic surgeon between 2009 and 2013 were included. Patients were stratified into 2 groups based on surgical procedure: tumor resection with oncoplastic reduction (group 1) vs tumor resection alone (group 2). Patient demographics including risk factors, diagnosis, cancer stage, and procedure type were recorded. Tumor size, specimen weight, width of narrowest margin, and receptor status were determined. Outcome variables included margin positivity (≤1 mm), need for re-excision, and conversion to completion mastectomy. Results: A total of 222 breasts from 207 patients were included in the study: 83 in group 1 and 139 in group 2. The patients in group 1 had a lower incidence of positive margins and wider free surgical margins, required re-excision less often, and went on to completion mastectomy less often. Patients in group 1 were younger and had cancer that was more advanced. When controlling for these variables on multivariate regression analysis, the oncoplastic technique was independently associated with fewer positive margins and fewer instances of re-excision. Conclusions: The oncoplastic reduction technique achieves wider free margins and less often necessitates re-excision or subsequent mastectomy. The long-term oncologic effect of this approach deserves further study. Level Of Evidence: 4. abstract_id: PUBMED:26816558 Bilateral Reduction Mammaplasty as an Oncoplastic Technique for the Management of Early-Stage Breast Cancer in Women with Macromastia. Objective: Lumpectomy may result in contour deformities or breast asymmetry in women with breast cancer and macromastia. This study investigates the use of bilateral reduction mammaplasty, with the tumor and margins included within the reduction specimen. Methods: Twenty-four patients who underwent lumpectomy with immediate bilateral reduction mammaplasty for unilateral breast cancer were included. Patient medical records were reviewed for demographic, oncological, and surgical characteristics. Results: Mean patient age was 57 years, and mean body mass index was 32.2 kg/m(2). Mean tumor size was 1.7 cm. All tumor margins were free of neoplastic involvement. No difference was noted between the ipsilateral and contralateral resection weights (P = .81). Adjuvant radiation therapy was delivered to 21 patients (88%). There were no significant differences in postoperative total (P = .36), major (P = .44), or minor (P = .71) complications between the tumor and nontumor sides. Only 1 patient required additional revision surgery following the initial lumpectomy with bilateral reduction mammaplasty. Conclusion: Lumpectomy with bilateral reduction mammaplasty did not compromise surgical margins. Lumpectomy with bilateral reduction mammaplasty may allow for adequate surgical treatment of breast cancer while avoiding significant breast asymmetry in women with macromastia. abstract_id: PUBMED:27647991 Oncoplastic reduction mammoplasty for breast cancer in women with macromastia: long term aesthetic, functional and satisfaction outcomes. Aim Of The Study: Despite the increase in studies concerning oncoplastic reduction mammoplasty (ORM), data showing long-term aesthetic and patient satisfaction for ORM in patients with macromastia remain limited. Therefore, this study evaluated the long-term results of tumorectomy and concomitant bilateral ORM for early-stage breast cancer patients with macromastia in terms of cosmesis, patient satisfaction, and functional outcomes. Material And Methods: Retrospective data of patients with macromastia undergoing ORM for breast cancer between 1996 and 2011 were examined and evaluated regarding the aesthetic results, patient satisfaction, and functional outcomes. Results: The median age of the 82 patients was 50 years. The median follow-up was 120 months (range: 28-212 months). The median breast volume was 1402 cm(3), and the median weight of the excised breast material was 679 g. A good or excellent evaluation of the cosmetic outcome was as follows: self-evaluation: 84.1% at the early-stage, 80.3% at the later stage; panel evaluation: 75.4% at the late-stage. Median patient satisfaction rates were 9.1% for early-stage disease and 8.8% for late-stage disease. Reduced mobility and intertrigo improved by three-fold during the post-operative period. Conclusions: ORM for early-stage breast cancer in women with macromastia results in good cosmesis in both the early-stage and long-term, and is quite acceptable for use in patients. Patients reacted favorably to the prospect of having their breast cancer and macromastia treated in a single session, and positive results continued over the long-term. abstract_id: PUBMED:28331682 The Role of Oncoplastic Breast Surgery in Breast Cancer Treatment. The aim of this study is to discuss indications, advantages, disadvantages, oncologic and aesthetic results of Oncoplastic Surgery (OBS). Pubmed and Medline database were searched for articles published between 1998 and 2014 for keywords: oncoplastic breast surgery, therapeutic mammoplasty, oncoplastic breast reduction, synchrenous reconstructions. Role of OBS in breast cancer surgery, its aspects to be considered, its value and results have been interpreted. This technique has advantages by providing more extensive tumourectomy, yielding better aesthetic results compared with breast conserving surgery, allowing oncoplastic reduction in breast cancer patients with macromastia, with higher patient satisfaction and quality of life and by being inexpensive due to single session practice. As for its disadvantages are: re-excision is more difficult, risk for mastectomy is higher, it is depent on the Surgeron's experience, it has a risk for delay in adjuvant therapies and its requirement for additional imaging studies during management. Main indications are patients with small tumour/breast volume, macromastia, multifocality, procedures which can disrupt breast cosmesis such as surgeries for upper inner breas tquadrient tumours. Contraindications are positive margin problems after wide excision, diffuse malign microcalsifications, inflammatory breast cancer, history of radiotherapy and patients' preferences. Despite low evidence level, Oncoplastic Breast Surgery seems to be both reliable and acceptable in terms of oncologic and aesthetic aspects. Oncoplastic Breast Surgery increase the application rate of breast conserving surgery by obviating practical limitations and improve the results of breast conserving surgery. Correct patient and technique choice in OBS is vital for optimization of post surgical. Answer: The comparative retrospective analysis of complications after oncoplastic breast reduction and breast reduction for benign macromastia aimed to determine if these procedures are equally safe. The study reviewed experiences with oncoplastic breast reduction over a 7-year period, comparing postoperative complications within 6 weeks of surgery to a control group undergoing reduction for benign macromastia. The results indicated that patients undergoing oncoplastic breast reduction were equally likely to have a postoperative complication compared to those undergoing reduction mammoplasty for benign macromastia. Elevated body mass index was a statistically significant predictor of having a complication after reduction for either indication. Overall complication rates were acceptably low for both procedures (PUBMED:24691307). Additional studies have supported these findings, showing that oncoplastic breast reduction has a similar safety profile to standard reduction mammoplasty (SRM), even though patients undergoing oncoplastic procedures are often older and have more comorbidities. The studies found no significant differences in the odds of developing postoperative complications between the two groups, and delays to adjuvant therapy were minimal (PUBMED:35801636). Another study found that while there was a higher rate of loss of nipple sensation in the oncoplastic group, there were no differences in any other complication when compared to the benign macromastia group (PUBMED:35006098). Furthermore, research has shown that oncoplastic reduction mammoplasty for breast cancer in women with macromastia can be considered a very safe and acceptable treatment in terms of local disease control and long-term oncological results, with low rates of early and late complications (PUBMED:26358362). The oncoplastic reduction approach to breast conservation therapy has also been associated with benefits for margin control, achieving wider free margins and less often necessitating re-excision or subsequent mastectomy (PUBMED:25121787). In conclusion, the evidence suggests that oncoplastic breast reduction and breast reduction for benign macromastia are equally safe procedures, with similar rates of postoperative complications and acceptable long-term outcomes.
Instruction: The CT scout view: does it need to be routinely reviewed as part of the CT interpretation? Abstracts: abstract_id: PUBMED:34075469 The CT scout view: complementary value added to abdominal CT interpretation. Computed tomography (CT) scout images, also known as CT localizer radiographs, topograms, or scanograms, are an important, albeit often overlooked part of the CT examination. Scout images may contain important findings outside of the scanned field of view on CT examinations of the abdomen and pelvis, such as unsuspected lung cancer at the lung bases. Alternatively, scout images can provide complementary information to findings within the scanned field of view, such as characterization of retained surgical foreign bodies. Assessment of scout images adds value and provides a complementary "opportunistic" review for interpretation of abdominopelvic CT examinations. Scout image review is a useful modern application of conventional abdominal radiograph interpretation that can help establish a diagnosis or narrow a differential diagnosis. This review discusses the primary purpose and intent of the CT scout images, addresses standard of care and bias related to scout image review, and presents a general systematic approach to assessing scout images with multiple illustrative examples, including potential pitfalls in interpreting scout images. abstract_id: PUBMED:24848822 The CT scout view: does it need to be routinely reviewed as part of the CT interpretation? Objective: The objective of our study was to determine whether the CT scout view should be routinely reviewed by comparing diagnostic information on the scout view with that provided by the correlative CT study. Materials And Methods: Two radiologists blinded to history and CT findings reviewed retrospectively 2032 scout views. All cases with major findings (defined as any abnormality that would prompt additional diagnostic tests or require management) were correlated with the CT study, other imaging study, or medical record when necessary by a third radiologist to determine the validity of the scout view finding and whether the finding was identifiable on the current CT study. Results: Major findings were identified in 257 (13%, reader 1) and 436 (23%, reader 2) of cases. Most major findings were confirmed (69-78%) or refuted (13-16%) by the CT study. However, 15 (6%, reader 1) and 48 (11%, reader 2) of the major findings were not included in the CT FOV, of which five (2%, reader 1) and 21 (5%, reader 2) constituted a missed pathologic finding. The most common one was cardiomegaly detected on a nonchest CT scout view. Additional pathologic findings included fracture, metastasis, avascular necrosis or subluxation of the humeral head, dilated bowel, and thoracic aortic dilatation. The most common false-positive finding was cardiomegaly. Conclusion: In a small percentage of cases, review of the CT scout view will disclose significant pathologic findings not included in the CT FOV. The results of this study support the routine inspection of the scout view, especially for the detection of pathologic findings in anatomic regions not imaged by CT. abstract_id: PUBMED:26295646 Reviewing CT Scout Images: Observations of an Expert Witness. Objective: The purpose of this article is to share the views of an experienced expert witness in medical malpractice cases on the use of CT scout images. Conclusion: The medical literature has resurrected issues regarding viewing of CT scout images. Scout images are an integral part of any CT examination and should be carefully reviewed for findings that may or may not be included in the FOV of the study. abstract_id: PUBMED:29755848 CT dose modulation using automatic exposure control in whole-body PET/CT: effects of scout imaging direction and arm positioning. Automatic exposure control (AEC) modulates tube current and consequently X-ray exposure in CT. We investigated the behavior of AEC systems in whole-body PET/CT. CT images of a whole-body phantom were acquired using AEC on two scanners from different manufactures. The effects of scout imaging direction and arm positioning on dose modulation were evaluated. Image noise was assessed in the chest and upper abdomen. On one scanner, AEC using two scout images in the posteroanterior (PA) and lateral (Lat) directions provided relatively constant image noise along the z-axis with the arms at the sides. Raising the arms increased tube current in the head and neck and decreased it in the body trunk. Image noise increased in the upper abdomen, suggesting excessive reduction in radiation exposure. AEC using the PA scout alone strikingly increased tube current and reduced image noise in the shoulder. Raising the arms did not substantially influence dose modulation and decreased noise in the abdomen. On the other scanner, AEC using the PA scout alone or Lat scout alone resulted in similar dose modulation. Raising the arms increased tube current in the head and neck and decreased it in the trunk. Image noise was higher in the upper abdomen than in the middle and lower chest, and was not influenced by arm positioning. CT dose modulation using AEC may vary greatly depending on scout direction. Raising the arms tended to decrease radiation exposure; however, the effect depends on scout direction and the AEC system. abstract_id: PUBMED:31788673 Feasibility study on ultra-low dose 3D scout of organ based CT scan planning. 3D volumetric CT images hold the potential to become a rich source of information for 3D organ segmentation and far exceed that made available through 2D radiograph images. Acquiring and generating 3D volumetric images for scan preparation purposes, i.e. 3D scout, while delivering radiation dose equivalent to conventional 2D radiograph is challenging. We explore various acquisition parameters and post-processing methods to reduce dose of a 3D scout while reducing the noise and maintaining the edge strength around the target organ. We demonstrated that similar edge strength and noise to the conventional dose CT scan can be achieved with 3D scout acquisition and post-processing while being dose neutral to a 2D scout acquisition. abstract_id: PUBMED:25227037 Automatic heart positioning method in computed tomography scout images. Computed tomography (CT) radiation dose can be reduced significantly by region of interest (ROI) CT scan. Automatically positioning the heart in CT scout images is an essential step to realize the ROI CT scan of the heart. This paper proposed a fully automatic heart positioning method in CT scout image, including the anteroposterior (A-P) scout image and lateral scout image. The key steps were to determine the feature points of the heart and obtaining part of the heart boundary on the A-P scout image, and then transform the part of the boundary into polar coordinate system and obtain the whole boundary of the heart using slant elliptic equation curve fitting. For heart positioning on the lateral image, the top and bottom boundary obtained from A-P image can be inherited. The proposed method was tested on a clinical routine dataset of 30 cases (30 A-P scout images and 30 lateral scout images). Experimental results show that 26 cases of the dataset have achieved a very good positioning result of the heart both in the A-P scout image and the lateral scout image. The method may be helpful for ROI CT scan of the heart. abstract_id: PUBMED:30415458 Clinical evaluation of CT radiation dose in whole-body 18F-FDG PET/CT in relation to scout imaging direction and arm position. Objective: Radiation exposure in CT is modulated by automatic exposure control (AEC) mainly based on scout images. We evaluated CT radiation dose in whole-body PET/CT in relation to scout imaging direction and arm position, and investigated the behavior of AEC. Methods: Eighty adult patients who underwent whole-body 18F-FDG PET/CT were divided into groups A, B, C, and D. The posteroanterior scout image alone (PA scout) was used for AEC-based dose modulation in groups A and B, while the posteroanterior and lateral scout images (PA + Lat scout) were used in groups C and D. Patients in groups A and C were imaged with their arms beside the head, while those in groups B and D were imaged with their arms at the sides of the trunk. Dose-length product provided by the scanner was recorded. The tube current value, a determinant of radiation dose, for each slice was plotted against slice location to produce a tube current modulation curve. The scan range was divided into seven anatomical regions, and regional tube current was defined as average tube current for each region. Effective dose was calculated for each region and then summed together. Results: Regional tube current was higher in the body trunk and proximal thigh using the PA scout than using the PA + Lat scout, resulting in higher dose-length product and effective dose using the PA scout. A marked dose increase was shown in the shoulder especially using the PA scout. Spike-like high current at the top of the head was often observed in tube current modulation curves using the PA scout but not using the PA + Lat scout. Raising the arms increased tube current in the head and neck and decreased it in the chest and abdomen. Although dose-length product did not differ significantly depending on arm position, raising the arms decreased effective dose significantly. Conclusions: AEC-based CT dose modulation in whole-body PET/CT is affected by scout imaging direction and arm position, which should be considered to determine an optimal imaging protocol for whole-body PET/CT. abstract_id: PUBMED:27933480 Alternatives to the baseline KUB for CTKUB-detected calculi: evaluation of CT scout and average and maximum intensity projection images. Purpose: A CT of the kidneys, ureters, and bladder (CTKUB) is the investigation of choice in suspected renal colic. Plain kidney, ureter, and bladder radiographs (KUB) can be used to monitor the progress of a stone if radiographically visible on a baseline KUB. This study aims to determine if a low-dose CT Scout, thick-slab average intensity projection (AIP), or maximum intensity projection (MIP) images are suitable as substitutes for a baseline KUB. Method: A retrospective review of patients from a tertiary adult institute that had a positive CTKUB and a KUB within 4 h of the CT was performed. Two consultant radiologists independently reviewed the KUB, CT Scout, AIP, and MIP for stone visibility and their sensitivities and agreement values were compared. Stone characteristics (size, location, and peak Hounsfield units) and patient thickness were recorded and examined for any association with discordant results. Results: 74 stones were evaluated for the study. KUB had a sensitivity of 66.2% (95% CI 54.3-76.8), CT Scout 47.3% (95% CI 35.6-59.3), AIP 55.4% (95% CI 43.4-67.0), and MIP 83.8% (95% CI 73.4-91.3). Fair agreement was found between the KUB and both CT Scout (κ=0.363, 95% CI 0.167-0.558) and AIP (κ=0.384, 95% CI 0.175-0.592). Moderate agreement was found between the KUB and MIP (κ=0.412, 95% CI 0.198-0.625). Neither any stone characteristic nor patient thickness had a significant association with discordant results. Conclusion: None of the possible substitutes for a baseline KUB showed strong agreement with the KUB. Low-dose CT Scouts have a similar sensitivity to the published literature for higher dose CT Scouts. abstract_id: PUBMED:33638579 The combined use of 2D scout and 3D axial CT images to accurately determine the catheter tips for high-dose-rate brachytherapy plans. Purpose: To develop a method combining CT scout images with axial images to improve the localization accuracy of catheter tips in high-dose-rate (HDR) brachytherapy treatments. Materials And Methods: CT scout images were utilized along with conventionally reconstructed axial images to aid the localization of catheter tips used during HDR treatment planning. A method was developed to take advantage of the finer image resolution of the scout images to more precisely identify the tip coordinates. The accuracies of this method were compared with the conventional method based on the axial CT images alone, for various slice thicknesses, in a computed tomography dose index (CTDI) head phantom. A clinical case which involved multiple interstitial catheters was also selected for the evaluation of this method. Locations of the catheter tips were reconstructed with the conventional CT-based method and this newly developed method, respectively. Location coordinates obtained via both methods were quantitatively compared. Results: Combination of the scout and axial CT images improved the accuracy of identification and reconstruction of catheter tips along the longitudinal direction (i.e., head-to-foot direction, more or less parallel to the catheter tracks), compared to relying on the axial CT images alone. The degree of improvement was dependent on CT slice thickness. For the clinical patient case, the coordinate differences of the reconstructed catheter tips were 2.6 mm ± 0.9 mm in the head-to-foot direction, 0.4 mm ± 0.2 mm in the left-to-right direction, and 0.6 mm ± 0.2 mm in the anterior-to-posterior direction, respectively. Conclusion: Combining CT scout and axial images demonstrates the ability to provide a more accurate identification and reconstruction of the interstitial catheter tips for HDR brachytherapy treatment, especially in the longitudinal direction. The method developed in this work has the potential to be implemented clinically together with automatic segmentation in modern brachytherapy treatment planning systems, in order to improve the reconstruction accuracy of HDR catheters. abstract_id: PUBMED:34153884 Using CT scout view to scan illicit drug carriers may reduce radiation exposure. Objective: The aim of this study was to present our experience on the evaluation of suspected illicit drug carriers via computerized tomography scout view (CTSV) with analysis of detectability and features such as content, form and density. Methods: A total of 120 individuals suspected of body packing were screened via CT in a university emergency department between January 2014 and December 2017. Results: 88.3% of 120 body packing cases examined in the study were male. The median (1st quartile-3rd quartile) age of the cases was 35.5 (30-41) years. We found that solid packs have a 4.573-fold higher likelihood of detection in CTSV screening than liquid ones (95%CI: 1.879-11.134). Additionally, the number of CT scans needed were significantly associated with capsule localization (stomach) (OR:7.178, 95%CI: 2.420-21.293) and solid form packs (OR: 8.705, 95%CI: 2.318-32.692) are associated with number of CT scans. Conclusion: CTSVs have a successful detection rate, especially in solid form packs. Our results suggest that conclusive CTSV imaging for body packing may be recommended to delay the application of the next CT scan until the passage of the first capsule; thereby preventing unnecessary radiation exposure. Answer: The CT scout view, also known as a CT localizer radiograph, topogram, or scanogram, is an integral part of the CT examination and should be carefully reviewed for findings that may or may not be included in the field of view (FOV) of the study (PUBMED:26295646). The review of CT scout views can reveal significant pathologic findings not included in the CT FOV, supporting the routine inspection of the scout view, especially for the detection of pathologic findings in anatomic regions not imaged by CT (PUBMED:24848822). Scout images may contain important findings outside of the scanned field of view on CT examinations of the abdomen and pelvis, such as unsuspected lung cancer at the lung bases, or provide complementary information to findings within the scanned field of view, such as characterization of retained surgical foreign bodies (PUBMED:34075469). Therefore, the assessment of scout images adds value and provides a complementary "opportunistic" review for interpretation of abdominopelvic CT examinations (PUBMED:34075469). In conclusion, the evidence supports the routine review of CT scout views as part of the CT interpretation process to ensure that significant findings are not overlooked, particularly those outside the primary FOV of the CT scan.
Instruction: Bipolar spectrum disorders in a clinical sample of patients with Internet addiction: hidden comorbidity or differential diagnosis? Abstracts: abstract_id: PUBMED:26132914 Bipolar spectrum disorders in a clinical sample of patients with Internet addiction: hidden comorbidity or differential diagnosis? Background And Aims: Behavioral addictions and bipolar disorders have a certain probability of co-occurrence. While the presence of a manic episode has been defined as an exclusion criterion for gambling disorder, no such exclusion has been formulated for Internet addiction. Methods: A clinical sample of 368 treatment seekers presenting with excessive to addictive Internet use was screened for bipolar spectrum disorders using the Mood Disorder Questionnaire. Psychopathology was assessed by the Symptom Checklist 90R and a clinical interview was administered to screen for comorbid disorders. Results: Comorbid bipolar disorders were more frequent in patients meeting criteria for Internet addiction (30.9%) than among the excessive users (5.6%). This subgroup showed heightened psychopathological symptoms, including substance use disorders, affective disorders and personality disorders. Further differences were found regarding frequency of Internet use regarding social networking sites and online-pornography. Discussion: Patients with Internet addiction have a heightened probability for meeting criteria of bipolar disorders. It is not possible to draw conclusions regarding the direction of this association but it is recommended to implement screening for bipolar disorders in patients presenting with Internet addiction. Conclusion: Similar to gambling disorder, it might prove necessary to subsume bipolar disorders as an exclusion criterion for the future criteria of Internet addiction. abstract_id: PUBMED:19285348 Pediatric bipolar disorder in a Spanish sample: features before and at the time of diagnosis. Introduction: Bipolar disorder (BD) often starts in childhood or adolescence. Diagnostic delay is common and may have a negative impact on treatment response and outcome. Objectives: To describe the clinical characteristics and symptoms of children with BD prior to their diagnosis and at the time of diagnosis in a sample in Spain. Methods: We retrospectively reviewed the medical records of all children and adolescents (N=38) with a DSM-IV diagnosis of BD evaluated in the Child &amp; Adolescent Psychiatry Unit, University of Navarra, over a 6-year period. We collected the DSM-IV symptoms of BD prior and at the time of diagnosis using the K-SADS-PL interview template. Results: BD was diagnosed in close to 4% of clinic patients. Thirty (79%) were boys and 8 (21%) were girls; 17 (44.7%) had BD-1, 2 (5.3%) BD-2, and 19 (49.9%) BD-NOS. Median age at diagnosis was 13.9 (10.6;15.9). Delay of diagnosis was 1.5 (0.7;3.4) years. Symptoms of BD were similar to those reported in U.S. samples with high rates of severe irritability (94.6%) and psychiatric comorbidity: 92.1% of the BD children had at least one comorbid disorder and 18.4% had three comorbidities, most frequently ADHD (21%) and substance abuse (18.4%). Conclusions: Clinical findings in this Spanish sample of children with BD closely resembles those described in U.S. clinics. Diagnostic delay, as in the U.S., and frequent misdiagnosis may explain low prevalence estimates found outside the U.S. abstract_id: PUBMED:11011829 Bipolar II disorder and comorbidity. The validity and reliability of the diagnosis of bipolar II disorder has been questioned by means of comorbidity with nonaffective disorders, including substance abuse, personality disorders, and anxiety disorders. This study examined the comorbid diagnosis of a sample of bipolar II patients, comparing patients with comorbidity and those with "pure" bipolar II disorder. Forty Research Diagnostic Criteria (RDC) bipolar II patients were assessed by means of the Schedule for Affective Disorders and Schizophrenia, Lifetime Version (SADS-L) and Structured Clinical Interview for DSM-III-R axis I (SCID-II) for personality disorders. Patients fulfilling RDC criteria for any psychiatric disorder (except personality disorders) or DSM-IV criteria for any personality disorder were compared with patients without comorbidity. For practical reasons, cyclothymia was not considered as a comorbid diagnosis. Half of the sample had lifetime comorbidity with other psychiatric disorders, mainly personality disorders (33%), substance abuse or dependence (21%), and anxiety disorders (8%). However, only the rates of suicidal ideation (74% v 24%, chi square [chi2] = 9.03, P = .003) and suicide attempts (45% v 5%, chi2 = 8.53, P = .003) were significantly different between patients with and without comorbidity. In summary, although the rates of comorbidity are relatively high in bipolar II disorder, most clinical and course variables are strikingly similar in patients with and without comorbidity except for suicidal behavior, suggesting that comorbidity does not reduce the validity of the diagnosis of bipolar II disorder. abstract_id: PUBMED:9674931 Bipolar disorder: recent issues in diagnosis and classification. Recent findings in the diagnosis, classification, and epidemiology of bipolar disorder are reviewed. Specific bipolar subtypes are delineated. A number of key diagnostic issues that have implications for correctly establishing the diagnosis or for determining optimal treatment approaches are discussed. The epidemiology of comorbid substance abuse and bipolar disorder is reviewed as is the significance of this comorbidity vis-à-vis presenting symptoms, treatment, and outcome. The differential diagnosis between so-called mixed states and agitated depressed is reviewed, and the potential significance of comorbid bipolar disorder and attention-deficit/hyperactivity disorder is discussed. abstract_id: PUBMED:16630705 Social anxiety disorder comorbidity in patients with bipolar disorder: a clinical replication. Background: The authors investigated frequency, clinical correlates and onset temporal relationship of social anxiety disorder (SAD) in adult patients with a diagnosis of bipolar I disorder. Methods: Subjects were 189 patients whose diagnoses were assessed by the Structured Clinical Interview for DSM-III-R-Patient Version. Results: Twenty-four patients (12.7%) met DSM-III-R criteria for lifetime SAD; of these, 19 (10.1% of entire sample) had SAD within the last month. Significantly more bipolar patients with comorbid SAD also had substance use disorders compared to those without. On the HSCL-90, levels of interpersonal sensitivity, obsessiveness, phobic anxiety and paranoid ideation were significantly higher in bipolar patients with SAD than in those without. Bipolar patients with comorbid SAD recalled separation anxiety problems (school refusal) more frequently during childhood than those without. Lifetime SAD comorbidity was associated with an earlier age at onset of syndromal bipolar disorder. Pre-existing OCD tended to delay the onset of bipolarity. Conclusions: Social anxiety disorder comorbidity is not rare among patients with bipolar disorder and is likely to affect age of onset and phenomenology of bipolar disorder. These findings may influence treatment planning and the possibility of discovering a pathophysiological relationship between SAD and bipolarity. abstract_id: PUBMED:35854247 Comorbidity of adult ADHD and substance use disorder in a sample of inpatients bipolar disorder in Iran. Backgrounds: The study of the relationship between adult Attention deficit hyperactivity disorder (ADHD) and bipolar disorder has received more attention in recent years and there is limited information in this area. On the other hand, there is a significant comorbidity between ADHD and bipolar disorder with substance use disorder. In this study, we investigated the prevalence of comorbidity of adult ADHD and substance use disorder among a group of bipolar patients admitted to a psychiatric hospital. Methods: One hundred fifty patients from a total of 200 consecutive patients who were referred to the emergency department of Roozbeh Psychiatric Hospital in Tehran, diagnosed with bipolar disorder based on the initial psychiatric interview and needed hospitalization, were evaluated again by an experienced faculty member psychiatrist by using a subsequent interview based on the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition(DSM-5). They were evaluated using the Structured Clinical Interview for DSM-5 (SCID-5) questionnaire to confirm the diagnosis of bipolar disorder and the comorbidity of adult ADHD and substance use disorder. Results: From 150 patients diagnosed with bipolar disorder, 106 patients (70.7%) had adult ADHD. 89 patients (59.3%) had substance use disorder and 58 patients (38.7%) had both of these comorbidities with bipolar disorder. Comorbidity of adult ADHD was associated with the earlier onset of the first mood episode in bipolar disorder (p value = 0.025). There was no statistically significant relationship between substance use disorder and age of onset of the first episode. (P value = 0.57). Conclusions: Due to the limitations of studies on adult ADHD comorbidity with bipolar disorder, especially in hospital settings, as well as the increased risk of association with substance use disorder, further multicenter studies in this area with larger sample sizes can increase awareness in this regard. abstract_id: PUBMED:36419713 Psychiatric comorbidity and quality of life in patients with bipolar disorder. Background: Psychiatric comorbidity in patients with bipolar disorder (BD) has been associated with an earlier onset, rapid cycling, worsening severity and outcome, and increased suicidality. Patients with BD have been reported to have poor quality of life (QOL) even during remission. Aims And Objectives: Estimate the prevalence of psychiatric comorbidity and assess the QOL in patients with BD, and find the associated sociodemographic and clinical variables. Materials And Methods: This cross-sectional study was done in a general hospital psychiatric unit on 100 patients (both inpatients and outpatients) with DSM-IV-TR BD in partial or complete remission. Patients were assessed using a semi-structured pro forma, Mini International Neuropsychiatric Interview 6.0 - Hindi version, Hamilton Rating Scale for Depression or Young Mania Rating Scale and World Health Organization QOL instrument, short-form (WHOQOL-BREF), Hindi Version. Results: Sixty-one percent of the sample had at least one psychiatric comorbidity, and the commonest comorbid disorders were substance use disorders (SUD) (30%) and anxiety disorders (AD) (28%). Male gender was identified as a predictor for comorbidity in BD. The mean QOL-BREF score was 85.96 ± 14.35. Poor QOL was associated with older age at onset of comorbidity, multiple comorbidities, mixed episodes, rapid cycling, partial remission, and increased severity of depression while patients with current hypomanic symptoms were found to have better QOL. BD with comorbidity had worse QOL than those without comorbidity. Conclusion: The majority of patients with BD had at least one comorbidity. SUD were the most common comorbid disorders. QOL, already poor in BD patients, was poorer in cases with comorbidity. abstract_id: PUBMED:21210177 Migraine and psychiatric comorbidity: a review of clinical findings. Migraine is an extremely common disorder. The underlying mechanisms of this chronic illness interspersed with acute symptoms appear to be increasingly complex. An important aspect of migraine heterogeneity is comorbidity with other neurological diseases, cardiovascular disorders, and psychiatric illnesses. Depressive disorders are among the leading causes of disability worldwide according to WHO estimation. In this review, we have mainly considered the findings from general population studies and studies on clinical samples, in adults and children, focusing on the association between migraine and psychiatric disorders (axis I of the DSM), carried over after the first classification of IHS (1988). Though not easily comparable due to differences in methodology to reach diagnosis, general population studies generally indicate an increased risk of affective and anxiety disorders in patients with migraine, compared to non-migrainous subjects. There would also be a trend towards an association of migraine with bipolar disorder, but not with substance abuse/dependence. With respect to migraine subtypes, comorbidity mainly involves migraine with aura. Patients suffering from migraine, however, show a decreased risk of developing affective and anxiety disorders compared to patients with daily chronic headache. It would also appear that psychiatric disorders prevail in patients with chronic headache and substance use than in patients with simple migraine. The mechanisms underlying migraine psychiatric comorbidity are presently poorly understood, but this topic remains a priority for future research. Psychiatric comorbidity indeed affects migraine evolution, may lead to chronic substance use, and may change treatment strategies, eventually modifying the outcome of this important disorder. abstract_id: PUBMED:28068464 Clinical Outcomes in Children and Adolescents With Bipolar Disorder and Substance Use Disorder Comorbidity. Objective: To assess the global functioning and clinical outcomes of children and adolescents with bipolar disorder, children and adolescents with bipolar disorder and substance use disorder (SUD) comorbidity and healthy controls. Methods: This study had a cross-sectional design. Participants were children and adolescents aged between 6 and 17 years, and data were collected between 2003 and 2015. Psychiatric diagnosis was established according to DSM-IV criteria using the Kiddie-SADS-Present and Lifetime Version or the Mini-International Neuropsychiatric Interview for Children and Adolescents. Global functioning was assessed using the Children's Global Assessment Scale. Depressive symptoms were assessed using the Children's Depression Rating Scale. Manic symptoms were measured using the Young Mania Rating Scale, and the severity of anxious symptoms was assessed using the Screen for Child Anxiety Related Disorders. Results: The sample included 187 children and adolescents with bipolar disorder, 29 with BD and SUD comorbidity, and 115 healthy controls. Children and adolescents with BD and SUD comorbidity presented later onset of mood disorder (P &lt; .001); higher rates of lifetime history of suicide attempt (P &lt; .001), lifetime history of psychosis (trend toward significance: P = .076), and lifetime hospitalization (P &lt; .001); and higher severity of depressive symptoms (trend toward significance: P = .080) as compared to those with BD without SUD comorbidity. In addition, both diagnosis groups presented higher rates of functional impairment when compared to controls (P &lt; .001). Moreover, BD and SUD comorbidity presented higher functional impairment, as compared to BD without SUD comorbidity (P = .020). Conclusions: Children and adolescents with bipolar disorder and substance use disorder comorbidity present a worse clinical course than those with bipolar disorder but without substance use disorder comorbidity. abstract_id: PUBMED:1541721 Differential diagnosis of bipolar disorder. The historical antecedents of the current diagnostic criteria for mania involve the German phenomenologic descriptions of the late 1800s, the introduction of lithium for treatment and prevention of mania (which broadened the definition of mania in this country), the attempts to subclassify bipolar disorder into at least two subtypes, and the differentiation of patients with mania and hypomania from those with depression alone. Current diagnostic criteria for bipolar disorder are delineated in DSM-III-R. The differential diagnosis of bipolar disorder includes other conditions that may have manic-like symptoms, including organic mood disorders such as endocrine or metabolic conditions, drug intoxications, and tumors. Mania occurring in the context of substance abuse would be called a secondary mania. In addition, schizoaffective disorder can be diagnosed if there is a manic syndrome superimposed in the context of schizophrenia. Because of the absence of duration criteria for mania in DSM-III-R, the differential diagnosis within the bipolar disorders is largely based on severity and duration of depression. A problem in studying mania at present is that the prototypic cases have largely disappeared from treatment centers because of the success of lithium maintenance treatment. Patients available for study at psychiatric treatment facilities are largely treatment resistant, atypical, and likely to have experienced considerable amounts of substance abuse in their histories. Among the changes being considered for DSM-IV are to include duration criteria for mania, to separate bipolar II patients (depression and hypomania) from bipolar not otherwise specified, to refine the criteria for hypomania, and to add rapid cycling to the list of parenthetical modifiers for bipolar disorder with mania and bipolar disorder with hypomania. Answer: The study described in abstract PUBMED:26132914 investigated the prevalence of bipolar spectrum disorders in a clinical sample of patients with Internet addiction. The results indicated that comorbid bipolar disorders were more frequent in patients meeting criteria for Internet addiction (30.9%) than among the excessive users (5.6%). This subgroup also showed heightened psychopathological symptoms, including substance use disorders, affective disorders, and personality disorders. The study suggests that there is a heightened probability for patients with Internet addiction to meet criteria for bipolar disorders. However, it is not possible to determine the direction of this association from the study. The authors recommend screening for bipolar disorders in patients presenting with Internet addiction and suggest that, similar to gambling disorder, bipolar disorders might need to be considered as an exclusion criterion for the future criteria of Internet addiction. This finding aligns with the broader literature on the comorbidity of bipolar disorder with other psychiatric conditions. For instance, studies have shown that bipolar disorder often co-occurs with substance use disorders (PUBMED:36419713, PUBMED:35854247, PUBMED:16630705), anxiety disorders (PUBMED:16630705), and attention-deficit/hyperactivity disorder (ADHD) (PUBMED:35854247). Additionally, comorbidity can affect the clinical outcomes and quality of life of patients with bipolar disorder (PUBMED:36419713, PUBMED:28068464), and it may influence the age of onset and phenomenology of bipolar disorder (PUBMED:16630705). In conclusion, the presence of bipolar spectrum disorders in patients with Internet addiction may represent a hidden comorbidity rather than a differential diagnosis. The high rate of comorbidity observed suggests that clinicians should be vigilant for the presence of bipolar disorders in patients with Internet addiction, as it may have significant implications for treatment and prognosis.
Instruction: Can serum mast cell tryptase help diagnose anaphylaxis? Abstracts: abstract_id: PUBMED:32034676 Using Baseline and Peak Serum Tryptase Levels to Diagnose Anaphylaxis: a Review. The diagnosis of anaphylaxis relies on a suggestive clinical history after exposure to a potential triggering factor. Serum tryptase concentrations increase on degranulation of mast cells and therefore serum tryptase levels are measured to diagnose anaphylaxis. There is no standardized method for assessing total serum mast cell tryptase (MCT) in anaphylaxis. The Working Conference in 2010 proposed a consensus equation (peak MCT should be &gt; 1.2x baseline tryptase + 2 ng/L) to diagnose acute mast cell activation (aMCA). Our objective was to narratively review the literature since the Working Conference in 2010, examining the use of the consensus equation and other equations comparing baseline and peak serum tryptase during anaphylaxis. Computerized bibliographic searches of PUBMED and EMBASE were supplemented with a manual search of reference lists. English-language studies were included. Eleven studies met our inclusion criteria with a total of 4551 participants. However, only four studies with 653 participants used the consensus equation. The other seven studies used other methods to compare peak and baseline serum tryptase concentrations. Measuring serum tryptase levels is valuable in the diagnosis of anaphylaxis but is unable to detect all anaphylactic reactions. Based on our current literature review, the consensus equation is underused in the diagnosis of anaphylaxis. There is also a need for exploration of other biomarkers which could be used in parallel to peak and baseline serum tryptase measurements for further diagnostic certainty. Serum tryptase is the most studied biomarker in anaphylaxis but is still far from being the ideal biomarker for this. There is a need to identify new potential useful biomarkers. Serum tryptase levels are valuable in the diagnosis of anaphylaxis, but are unable to detect all anaphylactic reactions. Additionally serial tryptase measurements are laborious in daily clinical practice. abstract_id: PUBMED:15239726 Can serum mast cell tryptase help diagnose anaphylaxis? Objective: We aimed to determine the utility of serum mast cell tryptase to diagnose anaphylaxis. Methods: As part of a venom immunotherapy trial, we performed 64 sting challenges. Blood samples were taken before the sting (baseline), and 15 min and 60 min after the sting. Tryptase was measured in baseline, 15 minute and 60 minute serum samples. Histamine was measured in baseline and 15 minute plasma samples. Eleven people had undisputed severe anaphylactic reactions; tryptase and histamine levels were assessed against this clinical gold standard diagnosis. Results: Excluding mild reactions from the analysis, peak tryptase readings had sensitivity of 0.36 and specificity of 0.93 using the recommended cut-off range (&lt; 12.0 microg/L). Receiver-operator curve analysis found a cut-off of 9.0 microg/L would improve diagnostic performance (sensitivity 0.55, specificity 0.93). Serial tryptase measurement was significantly more discriminatory; an increase in tryptase of 2.0 microg/L or greater had a sensitivity of 0.73 and specificity of 0.98. The addition of histamine measurements, defining a positive result by either a rise in tryptase or a rise in histamine, appeared to further increase sensitivity (0.90). Conclusions: Clinicians should use caution when using serum tryptase to refute or support a diagnosis of anaphylaxis. Serial tryptase measurement increases sensitivity and specificity. Further studies using serial tryptase determinations in general emergency department practice, perhaps supplemented by histamine determinations, are warranted. abstract_id: PUBMED:35597543 Incorporating Tryptase Genotyping Into the Workup and Diagnosis of Mast Cell Diseases and Reactions. The measurement of mast cell tryptase levels in serum has found utility in the diagnosis and management of both clonal mast cell disorders and severe mast cell-dependent systemic reactions in the form of anaphylaxis. A more recent discovery is that a majority of individuals with elevated basal serum tryptase levels have increased germline TPSAB1 gene copy number encoding α-tryptase. This genetic trait is referred to as hereditary α-tryptasemia (HαT) and affects nearly 6% of the general population. In clinical practice, the presence or absence of HαT should thus now be determined when defining what constitutes an abnormal serum tryptase level in the diagnosis of mastocytosis. Further, as rises in serum tryptase levels are used to support the diagnosis of systemic anaphylaxis, variability in baseline serum tryptase levels should be factored into how significant a rise in serum tryptase is required to confirm the diagnosis of a systemic allergic reaction. In practicality, this dictates that symptomatic individuals undergoing evaluation for a mast cell-associated disorder or reaction with a baseline serum tryptase level exceeding 6.5 ng/mL should be considered for tryptase genotyping in order to screen for HαT. This review provides detailed information on how to use the results of such testing in the diagnosis and management of both mastocytosis and anaphylaxis. abstract_id: PUBMED:32126554 Elevated Serum Tryptase in Non-Anaphylaxis Cases: A Concise Review. One of the most important blood tests in the field of allergy, mast cell tryptase has numerous diagnostic uses, particularly for anaphylactic reactions and for the diagnosis of mastocytosis. However, there are numerous other non-anaphylactic conditions where clinicians may see elevated serum tryptase (hypertryptasemia) and the practicing clinician ought to be aware of these important differential diagnoses. Such conditions include systemic mastocytosis, hematological malignancies, and chronic kidney disease. This article provides a comprehensive, updated summary on the variety of non-anaphylactic conditions where hypertryptasemia may be seen. abstract_id: PUBMED:24793463 Human mast cell tryptase in biology and medicine. The most abundant prestored enzyme of human mast cell secretory granules is the serine-protease tryptase. In humans, there are four tryptase isoforms, but only two of them, namely the alpha and beta tryptases, are known as medically important. Low levels of continuous tryptase production as an immature monomer makes up the major part of the baseline serum tryptase levels, while transient release of mature tetrameric tryptase upon mast cell degranulation accounts for the anaphylactic rise of serum tryptase levels. Serum tryptase determination contributes to the diagnosis or monitoring of mast cell disorders including mast cell activation - induced anaphylaxis, mastocytosis and a number of myeloproliferative conditions with mast cell lineage involvement. Baseline serum tryptase levels are predictive of the severity risk in some allergic conditions. abstract_id: PUBMED:27888720 Postmortem serum levels of IgE and mast cell tryptase in fatal asthma. The aim of this study was to characterize the concentrations of total and specific IgE as well as mast cell-derived tryptase in postmortem serum from femoral blood and evaluate the usefulness of their determination for diagnostic purposes in cases of fatal asthma attacks. 36 autopsy cases were retrospectively selected including cases of witnessed death in asthmatic subjects, deaths in asthmatics and atopic subjects with causes of death unrelated to asthma or allergic anaphylaxis as well as deaths in non-atopic, non-allergic individuals with causes of death unrelated to asthma or allergic anaphylaxis. Initial results indicated increased postmortem serum total IgE in atopic and asthmatic individuals, irrespective of the cause of death. Conversely, increased mast cell tryptase levels were noted exclusively in fatal asthma attacks. This suggests that mast cell tryptase measurement in cases of death potentially caused by severe acute bronchial asthma may be useful for diagnostic purposes. abstract_id: PUBMED:29094472 Serum mast cell tryptase measurements: Sensitivity and specificity for a diagnosis of anaphylaxis in emergency department patients with shock or hypoxaemia. Objective: Clinical diagnosis of anaphylaxis is principally based on symptoms and signs. However, particularly for patients with atypical symptoms, laboratory confirmation of anaphylaxis would be useful. This study investigated the utility of mast cell tryptase, an available clinical biomarker, for differentiating anaphylaxis from other causes of critical illness, which can also involve mast cell activation. Methods: Tryptase was measured (ImmunoCAP) in serum from patients with anaphylaxis and non-anaphylactic critical illness (controls) at ED arrival, and after 1-2, 3-4 and 12-24 h. Differences in both peak and delta (difference between highest and lowest) tryptase concentrations between groups were investigated using linear regression models, and diagnostic ability was analysed using Receiver Operating Characteristic curve analysis. Results: Peak tryptase was fourfold (95% CI: 2.9, 5.5) higher in anaphylaxis patients (n = 67) than controls (n = 120) (P &lt; 0.001). Delta-tryptase was 5.1-fold (95% CI: 2.9, 8.9) higher in anaphylaxis than controls (P &lt; 0.001). Optimal test characteristics (sensitivity: 72% [95% CI: 59, 82] and specificity: 72% [95%CI: 63, 80]) were observed when peak tryptase concentrations were &gt;11.4 ng/mL and/or delta-tryptase ≥2.0 ng/mL. For hypotensive patients, peak tryptase &gt;11.4 ng/mL had improved test characteristics (sensitivity: 85% [95% CI: 65, 96] and specificity: 92% [95% CI: 85, 97]); the use of delta-tryptase reduced test specificity. Conclusion: While peak and delta tryptase concentrations were higher in anaphylaxis than other forms of critical illness, the test lacks sufficient sensitivity and specificity. Therefore, mast cell tryptase values alone cannot be used to establish the diagnosis of anaphylaxis in the ED. In particular, tryptase has limited utility for differentiating anaphylactic from non-anaphylactic shock. abstract_id: PUBMED:29331682 Post mortem tryptase cut-off level for anaphylactic death. Serum mast cell tryptase is used to support the diagnosis of anaphylaxis. The recommended clinical cut-off for total tryptase (&lt;11.4μg/L) appears unsuitable in the post mortem setting due to largely unknown processes which result in significantly elevated levels in these samples. Consequently there is no widely accepted tryptase cut-off level for diagnosing an anaphylactic death. This 5-year retrospective study compared total tryptase levels in post mortem femoral blood in anaphylactic deaths and control. Univariate and multivariate analysis was used to assess the relative contribution of other factors (age, gender, post mortem interval, and presence of resuscitation) on post mortem tryptase levels. Nine anaphylactic deaths and 45 controls were identified. Receiver-operating characteristic (ROC) curve analysis identified an optimal cut-off of 53.8μg/L, with sensitivity of 89%, and specificity of 93%, for total post mortem tryptase in femoral blood to diagnosis anaphylaxis. No other factors showed any statistical significant contribution to post mortem tryptase elevation. Femoral total post mortem tryptase level of 53.8μg/L and above is a useful ancillary test in diagnosing an anaphylactic death. abstract_id: PUBMED:31585081 An Allosteric Anti-tryptase Antibody for the Treatment of Mast Cell-Mediated Severe Asthma. Severe asthma patients with low type 2 inflammation derive less clinical benefit from therapies targeting type 2 cytokines and represent an unmet need. We show that mast cell tryptase is elevated in severe asthma patients independent of type 2 biomarker status. Active β-tryptase allele count correlates with blood tryptase levels, and asthma patients carrying more active alleles benefit less from anti-IgE treatment. We generated a noncompetitive inhibitory antibody against human β-tryptase, which dissociates active tetramers into inactive monomers. A 2.15 Å crystal structure of a β-tryptase/antibody complex coupled with biochemical studies reveal the molecular basis for allosteric destabilization of small and large interfaces required for tetramerization. This anti-tryptase antibody potently blocks tryptase enzymatic activity in a humanized mouse model, reducing IgE-mediated systemic anaphylaxis, and inhibits airway tryptase in Ascaris-sensitized cynomolgus monkeys with favorable pharmacokinetics. These data provide a foundation for developing anti-tryptase as a clinical therapy for severe asthma. abstract_id: PUBMED:20395827 Effect of general anesthesia and orthopedic surgery on serum tryptase. Background: Mast cell tryptase is used clinically in the evaluation of anaphylaxis during anesthesia, because symptoms and signs of anaphylaxis are often masked by the effect of anesthesia. No larger studies have examined whether surgery and anesthesia affect serum tryptase. The aim of this study was to investigate the effect of anesthesia and surgery on serum tryptase in the absence of anaphylaxis. Methods: The study included 120 patients (median age, 54 yr; range, 19-94 yr) undergoing elective orthopedic surgery in general anesthesia. Exclusion criteria were allergic reactions during this or previous anesthesia, hematologic disease, or high-dose corticosteroid treatment. Blood samples for tryptase analysis (ImmunoCAP; Phadia, Uppsala, Sweden) were drawn shortly before anesthesia and after anesthesia and surgery. Results: Median duration of anesthesia was 105 min (range, 44-263 min). Median interval between blood samples was 139 min (range, 39-370 min). Mean tryptase before surgery was 5.01 microg/l, with a mean decrease of 0.55 microg/l (P &lt; 0.0001; 95% CI, 0.3-0.8) postoperatively. All patients received intravenous fluid (median value 750 ml; range, 200-2000 ml) perioperatively. There was no significant effect of gender, age, American Society of Anesthesiologist's physical status classification, or self-reported allergy on serum tryptase. Conclusions: Serum tryptase shows small intraindividual variation in the absence of anaphylaxis. A small decrease was observed postoperatively, likely due to dilution by intravenous fluid. On suspected anaphylaxis during anesthesia, tryptase values, even within the normal reference interval, should, when possible, be compared with the patient's own basal level taken more than 24 h after the reaction. Answer: Yes, serum mast cell tryptase can help diagnose anaphylaxis. Serum tryptase concentrations increase upon degranulation of mast cells, which is a characteristic feature of anaphylaxis, making the measurement of serum tryptase levels valuable in the diagnosis of this condition (PUBMED:32034676). However, it is important to note that serum tryptase levels alone may not detect all anaphylactic reactions, and serial tryptase measurements are more laborious in daily clinical practice (PUBMED:32034676). Studies have shown that peak tryptase readings have a certain level of sensitivity and specificity when using a recommended cut-off range, and that serial tryptase measurement can significantly increase the discriminatory power for diagnosing anaphylaxis. The addition of histamine measurements, defining a positive result by either a rise in tryptase or a rise in histamine, appeared to further increase sensitivity (PUBMED:15239726). Incorporating tryptase genotyping into the workup for mast cell diseases and reactions is also suggested, as variability in baseline serum tryptase levels should be factored into the significance of a rise in serum tryptase required to confirm the diagnosis of a systemic allergic reaction (PUBMED:35597543). It is also important to be aware of other non-anaphylactic conditions that may cause elevated serum tryptase (hypertryptasemia), such as systemic mastocytosis, hematological malignancies, and chronic kidney disease (PUBMED:32126554). In the context of emergency department patients with shock or hypoxaemia, while peak and delta tryptase concentrations were higher in anaphylaxis than other forms of critical illness, the test lacks sufficient sensitivity and specificity to be used alone to establish the diagnosis of anaphylaxis (PUBMED:29094472). In postmortem settings, a specific cut-off level for tryptase has been suggested to aid in diagnosing anaphylactic death, as postmortem tryptase levels can be significantly elevated due to largely unknown processes (PUBMED:29331682). In summary, serum mast cell tryptase is a useful biomarker for diagnosing anaphylaxis, but its interpretation should be done with caution and in the context of clinical presentation and other potential causes of elevated tryptase levels. Serial measurements and additional biomarkers like histamine may enhance diagnostic accuracy.
Instruction: Sexual intercourse, abuse and pregnancy among adolescent women: does sexual orientation make a difference? Abstracts: abstract_id: PUBMED:10379429 Sexual intercourse, abuse and pregnancy among adolescent women: does sexual orientation make a difference? Context: Although a limited amount of research has retrospectively explored the childhood and adolescent heterosexual experiences of lesbians, little is known about the prevalence of heterosexual behavior and related risk factors or about pregnancy histories among lesbian and bisexual teenagers. Methods: A secondary analysis was conducted using responses from a subsample of 3,816 students who completed the 1987 Minnesota Adolescent Health Survey. Behaviors, risk factors and pregnancy histories were compared among adolescents who identified themselves as lesbian or bisexual, as unsure of their sexual orientation and as heterosexual. Results: Overall, bisexual or lesbian respondents were about as likely as heterosexual women ever to have had intercourse (33% and 29%, respectively), but they had a significantly higher prevalence of pregnancy (12%) and physical or sexual abuse (19-22%) than heterosexual or unsure adolescents. Among sexually experienced respondents, bisexual or lesbian and heterosexual women reported greater use of ineffective contraceptives (12-15% of those who used a method) than unsure adolescents (9%); bisexual or lesbian respondents were the most likely to have frequent intercourse (22%, compared with 15-17% of the other groups). In the sample overall, among those who were sexually experienced and among those who had ever been pregnant, bisexual or lesbian women were the most likely to have engaged in prostitution during the previous year. Conclusions: Providers of reproductive health care and family planning services should not assume that pregnant teenagers are heterosexual or that adolescents who say they are bisexual, lesbian or unsure of their sexual orientation are not in need of family planning counseling. Further research should explore the interactions between adolescent sexual identity development and sexual risk behaviors. abstract_id: PUBMED:22727072 The relationship between sexual abuse and risky sexual behavior among adolescent boys: a meta-analysis. Purpose: Childhood and adolescent sexual abuse has been shown to lead to increased odds of sexual behaviors that lead to sexually transmitted infections and early pregnancy involvement. Research, meta-analyses, and interventions, however, have focused primarily on girls and young women who have experienced abuse, yet some adolescent boys are also sexually abused. We performed a meta-analysis of the existing studies to assess the magnitudes of the link between a history of sexual abuse and each of the three risky sexual behaviors among adolescent boys in North America. Methods: The three outcomes were (a) unprotected sexual intercourse, (b) multiple sexual partners, and (c) pregnancy involvement. Weighted mean effect sizes were computed from ten independent samples, from nine studies published between 1990 and 2011. Results: Sexually abused boys were significantly more likely than nonabused boys to report all three risky sexual behaviors. Weighted mean odds ratios were 1.91 for unprotected intercourse, 2.91 for multiple sexual partners, and 4.81 for pregnancy involvement. Conclusions: Our results indicate that childhood and adolescent sexual abuse can substantially influence sexual behavior in adolescence among male survivors. To improve sexual health for all adolescents, even young men, we should strengthen sexual abuse prevention initiatives, raise awareness about male sexual abuse survivors' existence and sexual health issues, improve sexual health promotion for abused young men, and screen all people, regardless of gender, for a history of sexual abuse. abstract_id: PUBMED:35234277 Incestuous childhood sexual abuse between siblings resulting in pregnancy without sexual intercourse. Victims of incest are usually women and children who are mostly girls. The abuse between siblings may remain hidden because of the family's fear of social stigma, though it may result in pregnancy. This article presents a case of incestuous childhood sexual abuse between siblings resulting in pregnancy without intercourse, an event rarely reported in the literature. An 11-year-old girl was brought to the hospital because of a stomachache and was detected pregnancy from examinations. Forensic medicine experts determined that the girl's hymen was still intact and non-elastic in the genital examination. Gynecology and Obstetrics Department confirmed the pregnant compatibility with 14 weeks, 4 days; after ultrasonographic monitoring. She was abused by her 14-year-old brother by rubbing for 1 year and this resulted in pregnancy. During the psychiatric examination, her attitude was negative and symptoms of post-traumatic stress disorder emerged. Medical abortion was applied after psychiatric and gynecological examinations. The girl's brother was confirmed as the biological father after the DNA paternity test was conducted on the fetus. Based on this case, this article aimed to examine the medical, social, and legal problems caused by incestuous childhood sexual abuse and contribute to the literature by emphasizing what forensic medicine and preventive medicine approaches should consider in such cases. abstract_id: PUBMED:34682377 How Can Unintended Pregnancies Be Prevented among Adolescents Who Engaged in Sexual Intercourse at Earlier Ages? The Role of Female Education and Partner Age Difference. Several studies have identified explicative factors for adolescents' sexual risk behaviors and related outcomes such as unintended pregnancy; however, less is known about the mechanisms through which such factors act. Our study explored the role of female education and partner age difference as explicative mechanisms of the association between age at first sexual intercourse (AFSI) and unintended pregnancy while controlling for the role of other contextual factors (i.e., socioeconomic status, ethnicity, religious beliefs, and place of residence) and sexual-related mechanisms (i.e., number of sexual partners) that are known to be associated with adolescent pregnancy. The sample consisted of 613 sexually experienced female adolescents who did not intend to become pregnant: 349 were pregnant for the first time, and 264 had never been pregnant. Mediation and moderation analyses were performed. An earlier AFSI was associated with unintended pregnancy 1-6 years after first sexual intercourse by increasing the adolescents' likelihood of having less education and being involved with partners older than themselves. There was no significant direct effect of AFSI on pregnancy occurrence after controlling for the mediators. Our findings bring to light nonsexual mechanisms that must be considered in public health interventions aimed at preventing unintended pregnancies among adolescents who engaged in sexual intercourse at early ages. Specific implications are discussed. abstract_id: PUBMED:12558093 Sexual intercourse and the age difference between adolescent females and their romantic partners. Context: The age difference between a female and her partner may influence relationship dynamics in ways that put the female at increased risk of unintended pregnancies and sexually transmitted diseases. Very little is known, however, about how romantic involvement progresses to intercourse, particularly among adolescent females with older male partners. Methods: Data from 1,975 female participants in the National Longitudinal Study of Adolescent Health were analyzed using logistic regression to determine whether the age difference between an adolescent female and her romantic partner is a risk factor for sexual intercourse. Results: Adolescent females involved with an older partner have higher odds of having intercourse with that partner than females with partners their own age, after adjustment for demographic covariates. The magnitude of this association is most dramatic among the youngest females--for example, the odds of intercourse among 13-year-old females with a partner six years older are more than six times the odds among 13-year-old females with a same-age partner (odds ratio, 6.4), while 17-year-old females with partners six years older have about twice the odds of intercourse when compared with those who have a same-age partner (2.1). Conclusions: Young adolescent females with substantially older partners are much more likely than their peers to have sex with their partner, which exposes them to the risks of pregnancy and sexually transmitted diseases. abstract_id: PUBMED:18091137 The challenges of studying childhood sexual abuse among adolescent mothers. Sexual abuse researchers are faced with many challenges. This article begins with a discussion of the theoretical underpinnings of topic sensitivity and describes difficulties specific to the study of childhood sexual abuse among adolescent mothers. In the last part of this article, suggestions for diminishing the difficulties and factors to consider when planning future studies are discussed. The aim of this article is to prepare novice researchers for the challenges that may occur while studying childhood sexual abuse among adolescent mothers and provide them with solutions to consider when they encounter such challenges. abstract_id: PUBMED:9763160 Sexual orientation, sexual behaviors, and pregnancy among American Indian adolescents. Purpose: A recent study found a disproportionate number of pregnancies among Euro-American lesbian and bisexual adolescents compared to heterosexual peers. American Indian adolescents have reported higher prevalence of gay/lesbian/bisexual orientations than Euro-Americans; do they also report higher prevalence of pregnancy? Methods: The study assessed prevalence of teen pregnancy and related factors by sexual orientation among sexually experienced, reservation-based American Indian adolescent males (n = 2056) and females (n = 1693) who participated in a national school-based survey in 1991. Self-reported orientation was classified as heterosexual, gay/lesbian/bisexual, and "unsure" of orientation. Results: Gay/bisexual males were more likely than other males to report early heterosexual intercourse (&lt;14 years), more consistent contraception, and a higher prevalence of abuse and running away (p &lt; 0.05 to p &lt; 0.0001). Likewise, lesbian/bisexual females were more likely to report early onset of heterosexual intercourse, more frequent intercourse, and running away. Sexual or physical abuse did not vary by orientation for females. Prevalence of pregnancy also did not vary by orientation (males, 18.6% gay/bisexual vs. 10.4% "unsure" vs. 11.8% heterosexual; females, 25.0% lesbian/bisexual vs. 22.1% "unsure" vs. 21.9% heterosexual). For lesbian/bisexual females, no variables were significantly associated with pregnancy history; for "unsure" females, pregnancy was associated with contraceptive frequency and early onset of heterosexual activity. For heterosexual females, age, intercourse frequency, and physical abuse were associated. For gay/bisexual males, intercourse frequency, ineffective contraception, and physical abuse were associated with involvement in a pregnancy; for "unsure" and heterosexual males, most items except ineffective contraception were related to pregnancy involvement history. Conclusions: Although prevalence of pregnancy is similar, findings show group differences in associated risk factors by sexual orientation. Interventions to reduce pregnancy among American Indian adolescents should include assessment of sexual orientation and behavioral risk factors. abstract_id: PUBMED:24894425 Stress symptoms and frequency of sexual intercourse among young women. Introduction: We have previously documented the relationships between stress and depression symptoms and adolescent women's nonuse and misuse of condoms and other contraceptive methods and on their unintended pregnancy rates. Aim: Here, we examine relationships between mental health symptoms and another understudied adolescent reproductive health behavior-frequency of sexual intercourse. Main Outcome Measure: Our outcome was weekly sexual intercourse activity. Methods: We used panel data from a longitudinal, population-based cohort study of 992 women ages 18-20. Weekly journals measured sociodemographic, relationship, reproductive, and mental health characteristics, sexual and contraceptive behaviors, and pregnancy history. We examined 27,130 surveys from 952 women during the first study year. Predictors of weekly sexual intercourse were moderate to severe stress (Perceived Stress Scale-4) and depression (Center for Epidemiologic Studies Depression Scale-5) symptoms measured at baseline. Multilevel, mixed-effects logistic regression models estimated the relationships between stress and depression symptoms and the weekly odds of sexual intercourse while adjusting for covariate fixed effects and random woman effects. Results: Nearly a quarter of the sample had moderate to severe stress (23%) and depression (24%) symptoms at baseline. Women reported sexual intercourse in 36% of weeks. Proportions of sexually active weeks were higher among women with stress (43%) and depression (40%) compared with those without symptoms (35% and 35%, respectively; P values&lt;0.001). Controlling for covariates, women with baseline stress symptoms had 1.6 times higher weekly odds of sexual intercourse compared with women without stress (adjusted odds ratio 1.6, confidence interval [1.1, 2.5]; P=0.04). Depression symptoms were not associated with sexual intercourse frequency in adjusted models. Conclusions: Stress symptoms were positively associated with sexual intercourse frequency among these young women. Research and practice efforts are needed to identify effective sexual health promotion and risk-reduction strategies, including contraceptive education and counseling, in the context of mental health symptoms and unintended pregnancy. abstract_id: PUBMED:9323495 Adolescent pregnancy and sexual risk-taking among sexually abused girls. Data on 3,128 girls in grades eight, 10 and 12 who participated in the 1992 Washington State Survey of Adolescent Health Behaviors were used to analyze the association of a self-reported history of sexual abuse with teenage pregnancy and with sexual behavior that increases the risk of adolescent pregnancy. In analyses adjusting for grade level, respondents who had been sexually abused were 3.1 times as likely as those who had not been abused to say they had ever been pregnant; in multivariate analyses, respondents who had experienced abuse were 2.3 times as likely as others to have had intercourse but were not more likely than other sexually active respondents to have been pregnant. However, those with a history of sexual abuse were more likely to report having had intercourse by age 15 (odds ratio, 2.1), not using birth control at last intercourse (2.0) and having had more than one sexual partner (1.4). Thus, an association between sexual abuse and teenage pregnancy appears to be the result of high-risk behavior exhibited by adolescent girls who have been abused. abstract_id: PUBMED:34245975 Factors associated with childhood sexual abuse and adolescent pregnancy. Background: People who experience childhood sexual abuse (CSA) have a higher rate of adolescent pregnancy than people who do not experience CSA. The purpose of this integrative review was to identify risk or protective factors that are associated with this group to help understand the high rate of adolescent pregnancy in people with CSA histories. Methods: This review was conducted using strategies described by Whittemore and Knafl (2005). Five research articles met the following criteria: written in English, published in peer-reviewed journals in the past 10 years, and included the examination of predictors of adolescent pregnancy in any domain of the social ecological model of individual, relationship, community, or societal factors present among girls with CSA histories. Results: Studies suggest that people who are abused in childhood through adolescence and are not believed when they report abuse may be at greater risk for pregnancy in adolescence. CSA was associated with a range of sexual risk taking behavior (e.g., ineffectual contraception use, drug and alcohol use prior to sex, multiple partners) which could lead to adolescent pregnancy. Individual-level behaviors where predominantly studied. There were no reports at the community or societal level of the model. Conclusions: Identifying additional risk or protective factors at the relationship, community, and societal level may prove helpful in developing strategies tailored to this population. The unique characteristics that lead to higher rates of sexual risk-taking behavior that can lead to adolescent pregnancy have not been well documented and deserve further study to guide design and prevention/intervention modalities. Answer: Sexual orientation does appear to make a difference in the context of sexual intercourse, abuse, and pregnancy among adolescent women. According to a study analyzing responses from the 1987 Minnesota Adolescent Health Survey, bisexual or lesbian respondents were about as likely as heterosexual women to have had intercourse, but they had a significantly higher prevalence of pregnancy and physical or sexual abuse than their heterosexual or unsure counterparts (PUBMED:10379429). This suggests that sexual orientation is associated with different experiences and risks regarding sexual health and victimization among adolescent women. Additionally, research has shown that American Indian adolescents with gay/lesbian/bisexual orientations did not report a higher prevalence of pregnancy compared to heterosexual peers, but there were group differences in associated risk factors by sexual orientation (PUBMED:9763160). This indicates that while the prevalence of pregnancy might be similar across sexual orientations, the underlying risk factors and behaviors associated with pregnancy can vary. Moreover, the relationship between sexual abuse and risky sexual behavior among adolescent boys has been studied, revealing that sexually abused boys were significantly more likely to engage in risky sexual behaviors, such as unprotected intercourse, multiple sexual partners, and pregnancy involvement (PUBMED:22727072). Although this study focuses on adolescent boys, it underscores the broader impact of sexual abuse on risky sexual behavior, which can be extrapolated to include adolescent women as well. In summary, sexual orientation does seem to influence the patterns of sexual intercourse, abuse, and pregnancy among adolescent women, with bisexual or lesbian adolescents experiencing higher rates of pregnancy and abuse. However, the prevalence of pregnancy may not differ significantly across orientations, but the risk factors and behaviors associated with it do vary. Providers of reproductive health care should be aware of these differences and not assume the sexual orientation of pregnant teenagers, ensuring that all adolescents receive appropriate family planning counseling and support (PUBMED:10379429).
Instruction: Expectations Among Academic Clinicians of Inpatient Imaging Turnaround Time: Does it Correlate with Satisfaction? Abstracts: abstract_id: PUBMED:26310727 Expectations Among Academic Clinicians of Inpatient Imaging Turnaround Time: Does it Correlate with Satisfaction? Rationale And Objectives: Imaging report turnaround time (RTAT) is an important measure of radiology performance and has become the leading priority in customer satisfaction surveys conducted among nonradiologists, who may not be familiar with the imaging workflow. Our aim was to assess physicians' expected RTAT for commonly ordered studies and determine if satisfaction correlates with met expectations. Materials And Methods: Retrospective review of inpatient imaging was conducted at a single academic institution, and RTAT for 18,414 studies was calculated. Examinations were grouped by study type, priority, and time of day. A cross-sectional survey instrument was completed by 48 internal medicine and surgery resident physicians with questions regarding RTAT and their level of satisfaction with various examinations. Results: Actual RTAT ranged from 1.6 to 26.0 hours, with chest radiographs and computed tomographies generally faster than magnetic resonance images and ultrasounds. Urgent (STAT) examinations and those ordered during business hours have shorter RTAT. The time for image interpretation largely contributed to the RTAT because of the lack of night-time radiology coverage. Referring physician expectations were consistently shorter than actual RTAT, ranging from 30 minutes to 24 hours. Overall satisfaction scores were inversely correlated with RTAT, with a strong correlation to the time from study order to imaging (r(2) = 0.63) and a weak correlation to the image interpretation time (r(2) = 0.17). Satisfaction scores did not correlate with whether the actual RTAT met expectations (r(2) = 0.06). Conclusions: Referring physician satisfaction is likely multifactorial. Although RTAT has been reported as a priority, shortening turnaround time alone may not directly improve clinician satisfaction. abstract_id: PUBMED:34299874 Work-Related Satisfaction among Clinicians Working at Inpatient Treatment Facilities for Substance Use Disorder: The Role of Recovery Orientation. Several psychosocial factors have been suggested as facilitators of change among inpatients treated for substance use disorder (SUD). Research suggests that staff members are also influenced by the practice in which they are involved, and by contextual psychosocial factors at their treatment facilities. This cross-sectional questionnaire survey study was conducted to investigate the role of recovery-orientated interventions in describing work-related satisfaction among clinicians at inpatient SUD treatment facilities. The respondents (n = 407) rated items indicating work-related satisfaction and the degree of recovery orientation at their treatment facilities. The main findings of two block regression analyses indicated that clinicians' work-related satisfaction was positively influenced by inpatients' opportunities to pursue their goals and choices, and negatively influenced by inpatient involvement. The change in clinicians' work-related satisfaction could not be described by the degree of individually tailored and varied interventions at the treatment facility. Clinicians should be supported and involved in the process of implementing measures to increase inpatient involvement in the treatment programmes, and treatment measures that enable inpatients to pursue their goals and choices should be enhanced. The findings of this and previous studies indicate that a recovery-oriented framework promotes clinicians' work-related satisfaction and has an enabling influence on both inpatients and clinicians. abstract_id: PUBMED:37213396 Proculturation shaped by social representations of academic migrants from Italy to the United States. Introduction: Existing literature has highlighted the phenomenon of academic migrants leaving Italy for the United States with the hope of finding institutions that offer more opportunities for growth and recognition based on merit, as opposed to corruption, nepotism, and excessive bureaucracy. Likely, these may be the expectations of Italian academic migrants, who seem to be thriving and flourishing in their careers. This paper discusses proculturation of academic migrants from Italy to the United States, in the light of their expectations related to self-concept as well as social representations of North American university instructors from transnational families. Methods: In this study, 173 participants volunteered to provide information in an online survey that included their demographic profile, family situation, language ability, recalled pre-migration expectations and preparations, satisfaction with life, self-perceived stress, self-rated health, free responses to questions about major successes, challenges, and goals, as well as self-identification. Results: The results have shown that participants were indeed thriving in their careers and lives (majority scored high in satisfaction with life, health, realistic expectations and helpful per-migration preparations, while low in stress, also indicating work-related accomplishments and successes), but somewhat struggled with proculturation-related issues, frequently mentioned among major challenges. abstract_id: PUBMED:29265935 Mental health and academic attitudes and expectations in university populations: results from the healthy minds study. Background: Mental health problems are highly prevalent in university populations and have been shown to impair academic performance. Yet little is known about the ways in which mental health influences academic outcomes in higher education. Aims: This study seeks to offer new insight into the relationship between mental health and academic performance, focusing on students' academic experience and expectations as interrelated mechanisms. Method: Data come from 3556 students at four campuses that participated in the Healthy Minds Study. We explore unadjusted and multivariable relationships between mental health and academic experiences, expectations and impairment. Results: We find significant differences by mental health status, including that one-in-four students with symptoms are dissatisfied with their academic experience, relative to one-in-ten without (p &lt; 0.001). Approximately 30% with symptoms doubt whether higher education is worth their time, money and effort, compared to 15% without (p &lt; 0.001). In multivariable models, mental health problems were a significant predictor of academic dissatisfaction and drop out intentions, while positive mental health was a significant predictor of satisfaction and persistence. Conclusions: This descriptive study offers further evidence of the importance of mental health for university success, identifying pathways related to students' experiences and expectations that may help to explain this relationship. abstract_id: PUBMED:37728727 Patient Expectations and Satisfaction in Pediatric Orthopedics. Purpose Of Review: The purpose of the current review is to analyze the current literature regarding the tools available to evaluate patient expectations and satisfaction. There have been an increasing number of tools that have been developed and validated for various orthopedic procedures. Despite the growing number of tools, there are a limited number of tools available for pediatric patients. Recent Findings: Several tools have been developed in orthopedics to evaluate patient expectations. However, there are no tools that have been validated in the pediatric population. In addition, pediatric patient expectations should be collected in conjunction with parent/caregiver expectations. Although not specifically validated for pediatric patients, there are several tools available that may pertain to pediatric patients including the HSS ACL Expectations Survey, HSS Shoulder Expectations Survey, HSS Knee Surgery Expectations Survey, HSS Foot and Ankle Surgery Expectation Survey, Sunnybrook Surgery Expectations Survey, Musculoskeletal Outcomes Data Evaluation and Management System (MODEMS) Instruments, Quick DASH, and DASH. In terms of patient satisfaction, there are even fewer tools available. Several tools have been developed to evaluate patient satisfaction and five additional tools within orthopedics. Of these tools, there are two that have been validated for pediatric patients: The Swedish Parents Satisfaction Questionnaire and the Scoliosis Research Society-22. There are a growing number of tools to evaluate patient's expectations and satisfaction in the orthopedic literature. Given most of these tools pertain to adult patients, there is a need for further development of tools specifically validated for pediatric patients and their parents/caregivers. Through the measurement of expectations and satisfaction, medical professionals can hope to improve satisfaction and outcomes. abstract_id: PUBMED:27872386 Transformational leadership in primary care: Clinicians' patterned approaches to care predict patient satisfaction and health expectations. Clinicians face the complex challenge of motivating their patients to achieve optimal health while also ensuring their satisfaction. Inspired by transformational leadership theory, we proposed that clinicians' motivational behaviors can be organized into three patient care styles (transformational, transactional, and passive-avoidant) and that these styles differentially predict patient health outcomes. In two studies using patient-reported data and observer ratings, we found that transformational patient care style positively predicted patients' satisfaction and health expectations above and beyond transactional and passive-avoidant patient care style. These findings provide initial support for the patient care style approach and suggest novel directions for the study of clinicians' motivational behaviors. abstract_id: PUBMED:34840538 Undergraduate nursing students' course expectations, actual experiences, and associated satisfaction levels: A mixed methods survey. Background: Student satisfaction is related to experiences and expectations. It is challenging for educational administrators to deliver quality experiences without student expectation insight. Without this insight, student satisfaction is at risk. Aim: This study aims to develop a better understanding of the gaps that exist between student expectations and student experiences in a nursing program to inform and potentially improve student satisfaction in undergraduate nursing programs. Methods: This research was a mixed methods survey undertaken at an Australian regional university. A total of 82 nursing students completed the survey, which contained open and closed questions addressing various domains of course experience. The questions were mostly categorical (respondents selected from predetermined ordinal options), with opportunities to elaborate on some questions. Findings: The survey responses showed that flexibility in subject requirements, the helpfulness of academic/teaching staff (lecturers and tutors), and clarity of teacher communications correlated strongly with satisfaction in those respective areas. Expectation-experience discrepancy correlated strongly with satisfaction in these areas: support to navigate university systems, the timing of subject availability, the helpfulness of academic staff, and timetable suitability. The strongest predictor of students' overall satisfaction with their course was the extent to which they felt "supported to navigate university systems." Conclusion: The findings indicate that a better understanding of student course expectations can lead to better student satisfaction and, as such, educational administrators should work to innovate methods to discover and address student expectations. abstract_id: PUBMED:34192320 Longitudinal Associations Between Expectations of Receiving Care in the Future and Life Satisfaction Among Older Adults in Korea. Objectives: Uncertainty about receiving care and assistance in the future has been increasing among older adults in Korea. This study examines whether expectations about receiving care from various sources (i.e., formal and/or filial caregivers) are related to life satisfaction among older adults in Korea. Methods: Using data from the Korean Longitudinal Study of Ageing (N = 3,607, aged 65 or older), this study estimated fixed effects regression models to investigate longitudinal within-person associations between future care expectations and life satisfaction. Results: The results of this study revealed that developing expectations of care from family caregivers is positively associated with life satisfaction. Beginning to expect care from nonfamily caregivers, however, is not associated with life satisfaction. When disaggregating different sources of care by family member type, expecting care from a spouse or daughter(s), but not son(s), is associated with higher life satisfaction. Gender-specific analyses showed that expecting care from daughter(s) is positively associated with life satisfaction among both men and women, whereas expectations of spousal care are associated with only men's life satisfaction. This study also found suggestive but not conclusive evidence that an association between care expectations from family caregivers and life satisfaction is stronger among older adults with lower education. Discussion: Reducing uncertainty about future care may improve older adults' subjective well-being. Policymakers may consider policies and programs that support family care of the aged, and more fundamentally, encourage family involvement in the lives of older people. abstract_id: PUBMED:34843748 Understanding Patient Expectations and Determinants of Satisfaction in an Outpatient Urology Clinic at an Academic Medical Center. Objective: To examine and better understand expectations and facilitators of satisfaction amongst patients presenting to an ambulatory urology clinic at an academic medical center. Methods: Patients completed an anonymous survey regarding expectations for their clinic visit. Patients were included in the investigation if they were aged 18-89 years and had the ability to complete informed consent. Chi-square analysis was then used to analyze the collected data. Results: A total of five hundred patients were enrolled in the study. Patients were predominantly white males and were older than 60 years of age. Most patients had at least a college education and drew an annual household income between $40,000-$99,999. Most enrollees were return patients (74.8%). Most expected to be seen within 3-7 days of referral and expected 16-30 minutes with their provider. Patients noted they would not be equally satisfied seeing a physician vs advanced practice provider on their initial visit but would on a return visit. About half (52%) of the cohort stated they would be dissatisfied with their clinic experience if their expectations were not met. Significance was found between variables including age, race, gender and type of visit and their survey responses. Conclusion: Patient satisfaction remains an important measure for the quality and safety of patient care. This investigation highlighted patient prioritization of time to be seen after referral and the provider that cares for them at both initial and follow-up visits. Future research is needed to enhance stakeholder understanding of precisely how expectations impact overall satisfaction. abstract_id: PUBMED:35600449 Perceived academic satisfaction level, psychological stress and academic risk among Indian students amidst COVID-19 pandemic. Background: The Covid-19 pandemic has a significant impact on education and mental health outcomes. This study attempts to analyze the factors associated with academic satisfaction level, psychological stress/anxiety, and future academic risk among Indian students of higher education in the wake of the Covid-19 pandemic. Methods: An online survey was conducted through a structured questionnaire among students of higher education. Multivariate ordered logistic regression models were performed to find out the predictors of perceived academic satisfaction level, psychological stress, and academic risk among the participants. Results: Among the 630 participants, the majority of the students (73%) had low to moderate levels of academic satisfaction. Over two-thirds of participants (68%) had a high level of stress and nearly two-fifths (38%) of the participants felt very high risk in their academic career. The multivariate logistic regression models show that the likelihood of psychological stress and academic risk was significantly higher among students aged above 25 years, researchers, and those who belong to broken families. Besides, the higher probability of satisfaction level is associated with female students, undergraduates, belonging to economically well-off families, and rural residents. Conclusion: Our study suggests that the Covid-19 pandemic leads to a range of psychological health problems. Therefore, increase students' satisfaction with online classes and it is essential to preserve the mental health of individuals and to develop psychological interventions that can improve the mental health of students during the Covid-19 pandemic. Answer: The study described in abstract PUBMED:26310727 aimed to assess physicians' expected imaging report turnaround time (RTAT) for commonly ordered studies and determine if satisfaction correlates with met expectations. The study found that actual RTAT ranged from 1.6 to 26.0 hours, with chest radiographs and computed tomographies generally faster than magnetic resonance images and ultrasounds. Referring physician expectations for RTAT were consistently shorter than the actual RTAT, ranging from 30 minutes to 24 hours. Satisfaction scores were inversely correlated with RTAT, with a strong correlation to the time from study order to imaging (r(2) = 0.63) and a weak correlation to the image interpretation time (r(2) = 0.17). However, satisfaction scores did not correlate with whether the actual RTAT met expectations (r(2) = 0.06). The study concluded that while RTAT is a priority, shortening turnaround time alone may not directly improve clinician satisfaction, suggesting that satisfaction is likely influenced by multiple factors.
Instruction: Is the "Glasgow effect" of cigarette smoking explained by socio-economic status? Abstracts: abstract_id: PUBMED:19615067 Is the "Glasgow effect" of cigarette smoking explained by socio-economic status?: a multilevel analysis. Background: The Glasgow area has elevated levels of deprivation and is known for its poor health and associated negative health-related behaviours, which are socially patterned. Of interest is whether high smoking rates are explained by the area's socio-economic profile. Methods: Data on age, sex, current/previous smoking status, area deprivation, social class, education, economic activity, postcode sector, and health board region were available from Scottish Health Surveys conducted in 1995, 1998 and 2003. Multilevel logistic regression models were applied by sex, unadjusted and adjusted for age, survey year, and socio-economic factors, accounting for geographical hierarchy and missing data. Results: Compared with the rest of Scotland, men living in Greater Glasgow were 30% and women 43% more likely to smoke [odds ratio (OR) = 1.30, (95% CI = 1.08-1.56) and (OR = 1.43, CI = 1.22-1.68), respectively] before adjustment. In adjusted results, the association between living in Greater Glasgow and current smoking was attenuated [OR = 0.92, CI = 0.78-1.09 for men, and OR = 1.08, CI = 0.94-1.23 for women; results based on multiply imputed data to account for missing values remained borderline significant for women]. Accounting for individuals who had been told to give up smoking by a medical person/excluding ex-smokers did not alter results. Conclusion: High levels of smoking in Greater Glasgow were attributable to its poorer socio-economic position and the strong social patterning of smoking. Tackling Glasgow's, and indeed Scotland's, poor health must involve policies to alleviate problems associated with poverty. abstract_id: PUBMED:29333764 Education is the strongest socio-economic predictor of smoking in pregnancy. Aims: To investigate socio-economic disparities in smoking in pregnancy (SIP) by the mother's education, occupational class and current economic conditions. Design: Cross-sectional analysis with linked survey and register data. Setting: South-western Finland. Participants: A total of 2667 pregnant women [70% of the original sample (n = 3808)] from FinnBrain, a prospective pregnancy cohort study. Measurements: The outcome was smoking during the first pregnancy trimester, measured from the Finnish Medical Birth Register. Education and occupational class were linked from population registers. Income support recipiency and subjective economic wellbeing were questionnaire-based measures of current economic conditions. These were adjusted for age, partnership status, residential area type, parental separation, parity, childhood socio-economic background, childhood adversities (the Trauma and Distressing Events During Childhood scale) and antenatal stress (Edinburgh Postnatal Depression Scale). Logistic regressions and attributable fractions (AF) were estimated. Findings: Mother's education was the strongest socio-economic predictor of SIP. Compared with university education, adjusted odds ratios (aORs) of SIP were: 2.2 [95% confidence interval (CI) = 1.2-3.9; P = 0.011] for tertiary vocational education, 4.4 (95% CI = 2.1-9.0; P &lt; 0.001) for combined general and vocational secondary education, 2.9 (95% CI = 1.4-6.1; P = 0.006) for general secondary education, 9.5 (95% CI 5.0-18.2; P &lt; 0.001) for vocational secondary education and 14.4 (95% CI = 6.3-33.0; P &lt; 0.001) for compulsory schooling. The total AF of education was 0.5. Adjusted for the other variables, occupational class and subjective economic wellbeing did not predict SIP. Income support recipiency was associated positively with SIP (aOR = 1.8; 95% CI = 1.1-3.1; P = 0.022). Antenatal stress predicted SIP (aOR = 2.0; 95% CI = 1.4-2.8; P &lt; 0.001), but did not attenuate its socio-economic disparities. Conclusions: In Finland, socio-economic disparities in smoking in pregnancy are attributable primarily to differences in the mother's educational level (low versus high) and orientation (vocational versus general). abstract_id: PUBMED:36564898 Socio-economic status moderates within-person associations of risk factors and smoking lapse in daily life. Background And Aims: Individuals of lower socio-economic status (SES) display a higher prevalence of smoking and have more diffxiculty quitting than higher SES groups. The current study investigates whether the within-person associations of key risk (e.g. stress) and protective (self-efficacy) factors with smoking lapse varies by facets of SES. Design And Setting: Observational study using ecological momentary assessment to collect data for a 28-day period following a smoking quit attempt. Multi-level mixed models (i.e. generalized linear mixed models) examined cross-level interactions between lapse risk and protective factors and indicators of SES on smoking lapse. Participants: A diverse sample of 330 adult US smokers who completed a larger study examining the effects of race/ethnicity and social/environmental influences on smoking cessation. Measurements: Risk factors included momentary urge, negative affect, stress; protective factors included positive affect, motivation, abstinence self-efficacy; SES measures: baseline measures of income and financial strain; the primary outcome was self-reported lapse. Findings: Participants provided 43 297 post-quit observations. Mixed models suggested that income and financial strain moderated the effect of some risk factors on smoking lapse. The within-person association of negative [odds ratio (OR) = 0.967, 95% CI= 0.945, 0.990, P &lt; 0.01] and positive affect (OR = 1.023, 95% CI = 1.003, 1.044, P &lt; 0.05) and abstinence self-efficacy (OR = 1.020, 95% CI = 1.003, 1.038, P &lt; 0.05) on lapse varied with financial strain. The within-person association of negative affect (OR = 1.005, 95% CI = 1.002, 1.008, P &lt; 0.01), motivation (OR = 0.995, 95% CI = 0.991, 0.999, P &lt; 0.05) and abstinence self-efficacy (OR = 0.996, 95% CI = 0.993, 0.999, P &lt; 0.01) on lapse varied by income. The positive association of negative affect with lapse was stronger among individuals with higher income and lower financial strain. The negative association between positive affect and abstinence self-efficacy with lapse was stronger among individuals with lower financial strain, and the negative association between motivation and abstinence self-efficacy with lapse was stronger among those with higher income. The data were insensitive to detect statistically significant moderating effects of income and financial strain on the association of urge or stress with lapse. Conclusion: Some risk factors (e.g. momentary negative affect) exert a weaker influence on smoking lapse among lower compared to higher socio-economic status groups. abstract_id: PUBMED:36203474 Lifestyle factors as mediators of area-level socio-economic differentials in cardiovascular disease risk factors. The Tromsø Study. Introduction: Cardiovascular disease (CVD) is a leading cause of death and disability and living in areas with low socio-economic status (SES) is associated with increased risk of CVD. Lifestyle factors such as smoking, physical inactivity, an unhealthy diet and harmful alcohol use are main risk factors that contribute to other modifiable risk factors, such as hypertension, raised blood cholesterol, obesity, and diabetes. The potential impact of area-level socio-economic status (ASES) on metabolic CVD risk factors via lifestyle behaviors independent of individual SES has not been investigated previously. Aims: To estimate associations of ASES with CVD risk factors and the mediating role of lifestyle behaviors independent of individual-level SES. Methods: In this cross-sectional study, we included 19,415 participants (52% women) from the seventh survey of the Tromsø Study (2015-2016) (Tromsø7). The exposure variable ASES was created by aggregating individual-level SES variables (education, income, housing ownership) at the geographical subdivision level. Individual-level SES data and geographical subdivision of Tromsø municipality (36 areas) were obtained from Statistics Norway. Variables from questionnaires and clinical examinations obtained from Tromsø7 were used as mediators (smoking, snuff, alcohol, and physical activity), while the outcome variables were body mass index (BMI), total/high-density lipoprotein (HDL) cholesterol ratio, waist circumference, hypertension, diabetes. Mediation and mediated moderation analysis were performed with age as a moderator, stratified by sex. Results: ASES was significantly associated with all outcome variables. CVD risk factor level declined with an increase in ASES. These associations were mediated by differences in smoking habits, alcohol use and physical activity. The associations of ASES with total/HDL cholesterol ratio and waist circumference (women) were moderated by age, and the moderating effects were mediated by smoking and physical activity in both sexes. The largest mediated effects were seen in the associations of ASES with total/HDL cholesterol ratio, with the mediators accounting for 43% of the observed effects. Conclusions: Living in lower SES areas is associated with increased CVD risk due to unhealthy lifestyle behaviors, such as smoking, alcohol use and physical inactivity. These associations were stronger in women and among older participants. abstract_id: PUBMED:29475107 Effect of socio-economic status, family smoking and mental health through social network on the substance use potential in adolescents: a mediation analysis. Objectives: Understanding pathways that influence substance use potential (SUP) can help with effective substance use prevention interventions among adolescents. The aim of the present study is to contribute to a better understanding of the SUP of adolescents by examining the mediating role of social network quality in the SUP of Iranian adolescents. Study Design: A cross-sectional study. Methods: Structural equation modeling was conducted to assess the hypothesized model that social network quality would mediate the association of family socio-economic status, a mental health disorder, and family smoking with addiction potential. Results: The model shows a good fit to the data. Social network quality mediated the effect of family smoking on the SUP for boys. A mental health disorder had a positive significant direct effect on addiction potential for both girls and boys. Conclusions: Social network quality mediates the effect of family smoking on boys' addiction potential in the context of Iran. Educational programs based on local societal ways and cultural norms are recommended to change tobacco smoking behavior among family members. In addition, to prevent subsequent substance use among adolescents, more effort is needed to improve their mental health. abstract_id: PUBMED:6958184 Socio-economic status as a coronary risk factor: the Oslo study. The association between socioeconomic status, measured by a combination of income and education, and CHD mortality has been studied in a cohort of 40-49 year old Oslo men. Socio-economic status was significantly associated with CHD mortality. However, the lowest CHD mortality was found in social status Group III (middle class) and this could not be explained by the risk factor gradients seen among those studied. Although the number of fatalities is small (68 CHD deaths during 4.5 years) socio-economic status seems to be independently associated with coronary risk after adjusting for serum cholesterol, systolic blood pressure and cigarette smoking. abstract_id: PUBMED:8740877 Effects on birthweights of maternal education, socio-economic status, and work-related characteristics. Birthweights of 3,451 infants of women registered for antenatal care in Uppsala County, Sweden, were analyzed using three different maternal socio-economic indicators; education, socio-economic status and work environment exposure characteristics. Other explanatory variables were maternal age, parity, height, smoking habits, and length of gestation. Mean birthweights increase with longer education and higher socio-economic status. No general pattern was seen for work environment characteristics. When smoking habits are controlled for, social differences in birthweight decrease to non-significant values. A regression model with a socio-economic indicator alone explains only a minor part, less than 1%, of the variation in birthweight. When smoking is included, adding a socio-economic indicator does not significantly improve the model. Practically all social differences in birthweight are related to the differences in maternal age, parity, height, and smoking habits. If a socio-economic indicator is to be included in the analysis of birthweights (for other reasons like international comparisons), we recommend education. abstract_id: PUBMED:27781078 Exposure to Hookah and Cigarette Smoke in Children and Adolescents According to Their Socio-Economic Status: The CASPIAN-IV Study. Background: Exposure to smoking or passive smoking is one of serious health problems especially in the pediatric age group. Objectives: To compare the prevalence and determinants of passive smoking in a nationally representative sample of Iranian children and adolescents according to their socioeconomic status (SES). Materials And Methods: This nationwide study was conducted in 2011 - 2012 among 14880 students aged 6 - 18 years, living in 30 provinces in Iran. Exposure to the smoke of hookah or cigarette was documented by using validated questionnaires. Possible influencing factors were determined and the frequency of passive smoking was compared according to the regional and familial SES. Results: Participants consisted of 13,486 children and adolescents including 49.2% girls and 75.6% urban inhabitants (90.6% participation rate). The mean age of participants was 12.47 ± 3.36 years. Overall, 43.87% of them (44.07% of boys and 43.66% of girls) were exposed to second hand smoke at home. Exposures to hookah or cigarette smoke at home were respectively reported in 21.46% and 34.49% of participants. The prevalence of passive smoking was lower in children of families with higher SES level, but higher in high SES regions of the country than in low SES ones, and ranged from 39.2% in the region with lowest SES to 49.05% in the highest SES region. Higher education levels of fathers and mothers were significantly associated with lower frequency of passive smoking. Conclusions: Exposure to second hand smoke is a major problem among Iranian children and adolescents. Low family SES and low parental education increased the frequency of passive smoking. Appropriate public health education and legislation for smoke free home as well as family-centered counseling should be strengthened. abstract_id: PUBMED:19540029 The role of behavioural factors in explaining socio-economic differences in adolescent health: a multilevel study in 33 countries. Attempts to describe and explain socio-economic differences in health have mainly focused on adults. Little is known about the mechanisms of the relationship between socio-economic status (SES) and health in adolescence including inconsistent findings between SES and health among young people. Data were derived from representative samples of 13 and 15-year-old students in 33 European and North American countries (n=97,721) as part of the Health Behaviour in School-aged Children (HBSC) study 2001/2002. Multilevel logistic regression models were used to investigate socio-economic differences in self-rated health among adolescents and the contribution of health-related behaviours to the explanation of such differences. Odds ratios of self-rated health by family affluence were calculated before and after adjustment for behavioural factors (tobacco smoking, physical activity, television use, breakfast intake, consumption of fruits and vegetables). On average, adolescents from low affluent families had an odds ratio for low self-rated health of 1.84 for boys and 1.80 for girls, compared to those from high affluent families. The majority of behavioural factors were significantly associated with family affluence in all countries and explained part of the relationship between self-rated health and family affluence. Smoking, physical activity and breakfast consumption showed the largest independent effect on health. The present study suggests that behavioural factors in early adolescence partly account for the association between self-rated health and socio-economic status. Prevention programmes should target unhealthy behaviours of adolescents from lower socio-economic groups to help prevent future life-course disadvantages in terms of health and social inequalities. abstract_id: PUBMED:27162105 Adolescent smoking and tertiary education: opposing pathways linking socio-economic background to alcohol consumption. Background And Aims: If socio-economic disadvantage is associated with more adolescent smoking, but less participation in tertiary education, and smoking and tertiary education are both associated with heavier drinking, these may represent opposing pathways to heavy drinking. This paper examines contextual variation in the magnitude and direction of these associations. Design: Comparing cohort studies. Setting: United Kingdom. Participants: Participants were from the 1958 National Child Development Study (NCDS58; n = 15 672), the British birth cohort study (BCS70; n = 12 735) and the West of Scotland Twenty-07 1970s cohort (T07; n = 1515). Measurements: Participants self-reported daily smoking and weekly drinking in adolescence (age 16 years) and heavy drinking (&gt; 14/21 units in past week) in early adulthood (ages 22-26 years). Parental occupational class (manual versus non-manual) indicated socio-economic background. Education beyond age 18 was coded as tertiary. Models were adjusted for parental smoking and drinking, family structure and adolescent psychiatric distress. Findings: Respondents from a manual class were more likely to smoke and less likely to enter tertiary education (e.g. in NCDS58, probit coefficients were 0.201 and -0.765, respectively; P &lt; 0.001 for both) than respondents from a non-manual class. Adolescent smokers were more likely to drink weekly in adolescence (0.346; P &lt; 0.001) and more likely to drink heavily in early adulthood (0.178; P &lt; 0.001) than adolescent non-smokers. Respondents who participated in tertiary education were more likely to drink heavily in early adulthood (0.110 for males, 0.182 for females; P &lt; 0.001 for both) than respondents with no tertiary education. With some variation in magnitude, these associations were consistent across all three cohorts. Conclusions: In Britain, young adults are more likely to drink heavily both if they smoke and participate in tertiary education (college and university) despite socio-economic background being associated in opposite directions with these risk factors. Answer: The "Glasgow effect" refers to the high levels of poor health and negative health-related behaviors, including elevated smoking rates, observed in the Glasgow area compared to the rest of Scotland. The question is whether this effect is explained by socio-economic status (SES). The study identified in abstract PUBMED:19615067 investigated this question by analyzing data from Scottish Health Surveys. The results showed that before adjusting for socio-economic factors, men and women living in Greater Glasgow were significantly more likely to smoke compared to the rest of Scotland. However, after adjusting for socio-economic factors, the association between living in Greater Glasgow and current smoking was attenuated and became non-significant for men and borderline significant for women. This suggests that high levels of smoking in Greater Glasgow were largely attributable to its poorer socio-economic position and the strong social patterning of smoking. The conclusion drawn was that policies to alleviate poverty could be crucial in tackling Glasgow's poor health and high smoking rates. In summary, the "Glasgow effect" of cigarette smoking appears to be explained, at least in part, by socio-economic status, as the elevated smoking rates were significantly attenuated after accounting for socio-economic factors. This indicates that socio-economic status is a key factor in understanding the high prevalence of smoking in the Glasgow area.
Instruction: Directly eroding tobacco industry power as a tobacco control strategy: lessons for New Zealand? Abstracts: abstract_id: PUBMED:16224507 Directly eroding tobacco industry power as a tobacco control strategy: lessons for New Zealand? Aims: To examine some recent examples of tobacco control policies used elsewhere that seek to directly erode tobacco industry power, and to consider the relevance of these to New Zealand. Methods: A literature search was supplemented with six key informant interviews, with World Health Organization (WHO) officials, and Canadian officials and advocates. Results: The Provincial Government of British Columbia (BC) from 1997 to 2001 had an explicit objective of 'denormalising' the tobacco industry. Legal action was started against the industry to recover healthcare costs. The Canadian Government has been involved in defending its comprehensive tobacco control legislation in court against the industry since 1988. The policies to directly erode industry power, of both Canada overall and at the province level (BC), have been temporally associated with significant declines in smoking prevalence. Since 1998, WHO has conducted a series of inquiries into tobacco industry influence within WHO, and at regional and national levels. Its research and publishing focus on the industry has supported the creation of the Framework Convention on Tobacco Control, which has sections with the potential to assist national governments in strengthening strategies to erode tobacco industry power. The limitations of such strategies, and the uncertainties with using these approaches in the New Zealand context, suggests the need for careful planning and ongoing evaluation. Conclusions: Recent experience (in several jurisdictions and organisations) suggests that policies to directly erode tobacco industry power may contribute to the effectiveness of comprehensive tobacco control programmes. Some of these lessons could be incorporated into New Zealand's tobacco-control strategy. abstract_id: PUBMED:25827185 Tobacco tax and the illicit trade in tobacco products in New Zealand. Objectives: To estimate the size of illegal tobacco trade and consumption and assess the impact of tobacco tax on the illicit tobacco market in New Zealand (NZ). Methods: Data on the import and seizure of legal and illegal tobacco in NZ was obtained from NZ Customs. Previous literature was used to calculate interception rates of illegal tobacco being smuggled and grown in NZ. Annual tobacco returns figures, obtained via the NZ Ministry of Health, were analysed to assess the market dynamics of legal tobacco products. Results: This study found that illicit tobacco constituted 1.8-3.9% of total national tobacco consumption in NZ in 2013. This represents a minor increase compared to previous estimates from 2007-09, suggesting that tax increases enacted by the NZ Government since 2010 have had a minimal impact on encouraging the use and procurement of illicit tobacco. Conclusions: The results highlight a slight rise in small-scale tobacco smuggling through ports and mail centres. However, tobacco returns figures show that current tobacco tax policy has forced manufacturers to focus on the production of cheap legal tobacco products, directly competing with and undercutting the demand for illicit tobacco products. At the same time, locally grown illicit tobacco continues to remain a small, isolated problem and, with recent cuts in duty free tobacco allowance, it is expected that overall illicit tobacco will remain a very small proportion of total tobacco consumption in NZ. abstract_id: PUBMED:24890485 Newspaper coverage of tobacco control in New Zealand. Objective: The New Zealand (NZ) government has proposed that the country be 'Smokefree' by 2025. This paper examines how NZ newspapers have portrayed tobacco control initiatives to achieve this goal. Method: We examined tobacco-related articles from NZ newspapers published between 1 November 2011 and 31 October 2012, using a coding frame to capture smoking themes, portrayal of actions and degree of support expressed for various pro- and anti- tobacco control objectives. Results: A total of 537 separate articles were obtained from national and regional newspapers. Six themes appeared in more than 5% of the total articles: Smokefree 2025; smokefree areas; pricing; tobacco industry; plain packaging; and cessation. Overall, articles on tobacco smoking were three times more likely to emphasise actions consonant with tobacco control objectives rather than against them, and to report them with a positive rather than negative tone. Conclusions: NZ tobacco control advocates can take heart from the nature and extent of coverage of tobacco control initiatives. Opportunities for further media advocacy are discussed. abstract_id: PUBMED:15843834 Rethinking the regulatory framework for tobacco control in New Zealand. Tobacco is a particularly unusual consumer product in that it is highly addictive, kills over half its long-term users, and is a major cause of premature death and health inequalities in New Zealand. We therefore examined the place of regulatory frameworks in advancing tobacco control, and suggest the formation of a Government Tobacco Authority. Such an authority could enable the Government to specify the design of tobacco products (to maximise harm reduction), to eliminate the marketing for profit of branded products, and to appropriately control the introduction of alternative nicotine delivery devices or less hazardous alternative tobacco products. As the authority could be funded through levies on the tobacco industry, it has the potential advantage of delivering major population health gains while costing the taxpayer nothing. abstract_id: PUBMED:29649194 Regulatory chills: tobacco industry legal threats and the politics of tobacco standardised packaging in New Zealand. Aims: To describe the process of enacting tobacco standardised packaging (SP) amidst tobacco industry legal threats in New Zealand. Methods: Relevant government and NGO documents, and media items were reviewed. Policymakers and health advocates in New Zealand were interviewed. The data were triangulated and thematically analysed. Results: In 2011, the New Zealand Government announced the goal of becoming a smokefree country (reducing smoking prevalence to 5%) by the year 2025, and considered adopting SP. In April 2012, the Government announced it would introduce SP, but tobacco companies threatened the Government with litigation in international courts for violating investment and intellectual property rights. In response, the Government adopted a 'wait and see' approach, waiting until two legal challenges against Australia's SP law were resolved before it enacted its legislation in September 2016. Health advocates, limited due to funding constraints, attempted to alter the Government's approach to the legal threats without success. Interviews with policymakers and health advocates confirmed these threats helped produce a regulatory chill, delaying the policymaking process by three years. Conclusion: The New Zealand case illustrates how the threat of a potential international lawsuit can create a chilling effect by helping delay the implementation of public health policies. abstract_id: PUBMED:27694401 Analysis of the logic and framing of a tobacco industry campaign opposing standardised packaging legislation in New Zealand. Background: The tobacco industry routinely opposes tobacco control policies, often using a standard repertoire of arguments. Following proposals to introduce standardised packaging in New Zealand (NZ), British American Tobacco New Zealand (BATNZ) launched the 'Agree-Disagree' mass media campaign, which coincided with the NZ government's standardised packaging consultations. This study examined the logic of the arguments presented and rhetorical strategies employed in the campaign. Methods: We analysed each advertisement to identify key messages, arguments and rhetorical devices, then examined the arguments' structure and assessed their logical soundness and validity. Results: All advertisements attempted to frame BATNZ as reasonable, and each contained flawed arguments that were either unsound or based on logical fallacies. Flawed arguments included misrepresenting the intent of the proposed legislation (straw man), claiming standardised packaging would harm all NZ brands (false dilemma), warning NZ not to adopt standardised packaging because of its Australian origins (an unsound argument) or using vague premises as a basis for claiming negative outcomes (equivocation). Conclusions: BATNZ's Agree-Disagree campaign relied on unsound arguments, logical fallacies and rhetorical devices. Given the industry's frequent recourse to these tactics, we propose strategies based on our study findings that can be used to assist the tobacco control community to counter industry opposition to standardised packaging. Greater recognition of logical fallacies and rhetorical devices employed by the tobacco industry will help maintain focus on the health benefits proposed policies will deliver. abstract_id: PUBMED:22535362 Support for a tobacco endgame and increased regulation of the tobacco industry among New Zealand smokers: results from a National Survey. Aim: To examine the prevalence of smoker support for a ban on cigarette sales in 10 years time and increased regulation of the tobacco industry and to investigate the independent associations of support for these measures. Methods: The authors surveyed opinions among adult smokers in two survey waves (N=1376 and N=923) from the New Zealand arm of the International Tobacco Control Policy Evaluation Survey during 2007-2009. The authors report prevalence of support stratified by age, gender and ethnicity. The authors carried out multivariate analyses to identify significant associations among potential determinants (demographics, socioeconomic status, mental health and smoking-related beliefs and behaviours) of support. Results: Most New Zealand smokers supported greater regulation of the tobacco industry (65%) and more government action on tobacco (59%). Around half (46%) supported banning sales of cigarettes in 10 years time, provided effective nicotine substitutes were available. In a fully adjusted model, significant associations with support for greater tobacco company regulation included Māori ethnicity, experience of financial stress and greater awareness about the harms of smoking. Significant associations with support for a ban on tobacco sales in 10 years time included increasing area-based deprivation level, increasing intention to quit and greater concern about the health effects of smoking. Conclusions: The findings suggest that most smokers will support stronger government action to control the tobacco industry and that many support radical 'endgame' approaches. Greater support among Māori, more deprived and possibly Pacific smokers, is an important finding, which could inform the design and implementation of new policies given the very high smoking prevalence among these groups and hence high priority for targeted tobacco control interventions. Perceived difficulties in gaining public support should not impede the introduction of rigorous tobacco control measures needed to achieve a tobacco-free New Zealand. abstract_id: PUBMED:21946961 A cross-sectional study of opinions related to the tobacco industry and their association with smoking status amongst 14-15 year old teenagers in New Zealand. Background: New Zealand has been at the forefront of tobacco control and can boast an impressive range of tobacco control intervention. To date, tobacco control policy and interventions have directed very little attention to the tobacco industry because they concentrate on reducing demand for tobacco. In addition, the tobacco industry does not have a bold profile in the mass media. Given this low profile of the tobacco industry and the predominance of measures to reduce demand we were interested in teenage perceptions of the tobacco industry in New Zealand. Methods: A cross-sectional sample of 31,459 Year 10 students was obtained in 2006. Attitudes towards the tobacco industry and smoking outcomes were analysed using multivariate logistic regression. Results: Thirty-six percent of students disagreed that tobacco companies are responsible for people starting to smoke and 34% agreed that tobacco companies have equal right to sell cigarettes as other companies to sell their products. Female, Māori and students from low decile schools, who are all more likely to be smoking or have tried smoking, were more likely to show greater acceptance of the tobacco industry. Intention to smoke was associated with the belief that tobacco industry is not responsible for smoking initiation (odds ratio 1.7, p&lt;0.001), and that the tobacco industry is legitimate and credible (odds ratio 2.9, p&lt;0.001). Tolerant attitudes towards the tobacco industry were strongly associated with intentions to smoke and current smoking amongst teenagers. abstract_id: PUBMED:29482539 Anti-tobacco control industry strategies in Turkey. Background: Transnational tobacco companies (TTCs) penetrated the Turkish cigarette market due to trade and investment liberalization in the post-1980 period and eventually secured full control. Despite tobacco control policies put in place in reaction to accelerating consumption, TTCs reinforced their market power through a variety of strategies. This paper explores industry strategies that counteract tobacco control policies in Turkey. Methods: The study employs both qualitative and quantitative analyses to explore industry strategies in Turkey. Besides the content analyses of industry and market reports, descriptive analyses were conducted for the sub-periods of 1999-2015. The analyses focus on the market strategies of product innovation, advertisement-promotion, cost management and pricing. Results: Rising sales of low tar, ultra-low tar, slim, super-slim and flavoured cigarettes indicate that product innovation served to sustain consumption. Besides, the tobacco industry, using its strong distribution channels, the Internet, and CSR projects, were found to have promoted smoking indirectly. The industry also rationalized manufacturing facilities and reduced the cost of tobacco, making Turkey a cigarette-manufacturing base. Tobacco manufacturers, moreover, offered cigarettes in different price segments and adjusted net prices both up and down according to price categories and market conditions. In response to the successful effect of shifts in price margins, the market share of mid-priced cigarettes expanded while those within the economy category maintained the highest market share. As a result of pricing strategies, net sales revenues increased. Aside from official cigarette sales, the upward trends in the registered and unregistered sales of cigarette substitutes indicate that the demand-side tobacco control efforts remain inadequate. Conclusions: The Turkish case reveals that the resilience of the tobacco industry vis-à-vis mainstream tobacco control efforts necessitates a new policy perspective. Rising market concentration by TTCs and the global nature of industry strategies require that the highly profitable manufacturing and trade of tobacco products should be discouraged on a basis of international collaboration. To reduce and eventually eradicate tobacco consumption, supply-side tobacco control measures are needed along with demand-side policies. abstract_id: PUBMED:27613924 Young Adult Perceptions of the British American Tobacco New Zealand Agree/Disagree Plain Packaging Counter-Campaign. Introduction: In 2012, British American Tobacco New Zealand (BATNZ) launched a mass media campaign branded "Agree/ Disagree" as a response to the New Zealand government's plans to introduce plain packaging. We examined young adult's views about the campaign to assist tobacco control policymakers in planning future interventions. Methods: Interviews with young adults living in the Auckland area were conducted. Interviews covered existing knowledge about plain packaging of tobacco, knowledge about the tobacco industry, and perceptions of specific advertisements included in the campaign. Interview data were analyzed to determine the dominant views about the campaign. Results: Perspectives from 12 interviews reflect the dominant discourse in New Zealand on the benefits of serving economic progress and international trade. Pervasive views about the campaigns reflected perceptions of the risk to the New Zealand image, trade, and economy. Similarly, participants expressed concern about the potential for plain packaging to creep toward other products, such as alcohol. Perceptions of the tobacco industry were mixed but reflect a tolerant view of the ethics of business if viewed to be beneficial to the economy. Discussion: Participants were broadly accepting of the campaign messages. We identified a tendency toward quasi-libertarian leanings when discussing economics and trade, commercial versus personal responsibility, and the value of freedom (commercial and social). These perspectives were often held simultaneously with conflicting socially responsive values and references to the hegemonic position of "big business." Conclusion: This study affirms the importance of closely monitoring public opinion about tobacco control and of broadening denormalization strategies to reflect the critical role the tobacco industry plays in prolonging the tobacco epidemic. Implications: To date there is little evidence on the perceptions young adults hold toward to the tobacco industry and their arguments regarding the introduction of plain packaged cigarettes. Our findings indicate that some young people are attuned to the social discourse regarding the importance of economics, individuality, choice, and freedom as expressed by the tobacco industry. Tobacco control initiatives could challenge core values such as fairness and justice as related to corporate behaviors to retain public support, particularly among young people. Answer: Directly eroding tobacco industry power has been considered as a tobacco control strategy in various jurisdictions and could potentially be applied in New Zealand. The Provincial Government of British Columbia (BC) in Canada had an explicit objective of 'denormalising' the tobacco industry and initiated legal action against the industry to recover healthcare costs. This approach, along with comprehensive tobacco control legislation defended by the Canadian Government, has been temporally associated with significant declines in smoking prevalence (PUBMED:16224507). The World Health Organization (WHO) has also conducted inquiries into tobacco industry influence and supported the creation of the Framework Convention on Tobacco Control, which includes sections that could assist national governments in eroding tobacco industry power (PUBMED:16224507). In New Zealand, there is evidence of support for increased regulation of the tobacco industry among smokers, with a majority supporting greater government action on tobacco and many supporting radical 'endgame' approaches, such as a ban on cigarette sales in 10 years time, provided effective nicotine substitutes are available (PUBMED:22535362). However, the tobacco industry has been known to employ various strategies to counteract tobacco control policies. For instance, British American Tobacco New Zealand (BATNZ) launched the 'Agree-Disagree' campaign to oppose standardised packaging legislation, using flawed arguments and logical fallacies to frame themselves as reasonable (PUBMED:27694401). The threat of litigation by tobacco companies can also create a regulatory chill, delaying the implementation of public health policies, as was the case with the delay in enacting tobacco standardised packaging in New Zealand (PUBMED:29649194). Given these challenges, it is important for New Zealand to consider the lessons from other jurisdictions and to carefully plan and evaluate strategies that directly target tobacco industry power. This includes considering the potential for legal action, the role of media coverage in portraying tobacco control initiatives (PUBMED:24890485), and the need to counter industry opposition and misinformation campaigns (PUBMED:27694401). Additionally, understanding public perceptions, especially among young adults, towards the tobacco industry and its tactics is crucial for maintaining public support for tobacco control measures (PUBMED:27613924).
Instruction: Use of different comorbidity scores for risk-adjustment in the evaluation of quality of colorectal cancer surgery: does it matter? Abstracts: abstract_id: PUBMED:22703758 Use of different comorbidity scores for risk-adjustment in the evaluation of quality of colorectal cancer surgery: does it matter? Aims: Comorbidity affects outcomes after colorectal cancer surgery. However, it's importance in risk adjustment is unclear and different measures are being used. This study aims to assess its impact on post-operative outcomes. Methods: All 2204 patients who were operated on for stage I-III colorectal cancer in the Midwestern region of the Netherlands between January 1, 2006 and December 31, 2008 were analyzed. A multivariate two-step enter-model was used to evaluate the effect of the American Society of Anaesthesiologists Physical Status classification (ASA) score, the sum of diseased organ systems (SDOS), the Charlson Comorbidity Index (CCI) and a combination of specific comorbidities on 30-day mortality, surgical complications and a prolonged length of stay (LOS). For each retrieved model, and for a model without comorbidity, a ROC curve was made. Results: High ASA score, SDOS, CCI, pulmonary disease and previous malignancy were all strongly associated with 30-day mortality and a prolonged LOS. High ASA score and gastro-intestinal comorbidity were risk factors for surgical complications. Predictive values for all comorbidity measures were similar with regard to all adverse post-operative outcomes. Omitting comorbidity only had a marginal effect on the predictive value of the model. Conclusion: Irrespective of the measure used, comorbidity is an independent risk factor for adverse outcome after colorectal surgery. However, the importance of comorbidity in risk-adjustment models is limited. Probably the work and costs of data collection for auditing can be reduced, without compromising risk-adjustment. abstract_id: PUBMED:26174550 Assessing measures of comorbidity and functional status for risk adjustment to compare hospital performance for colorectal cancer surgery: a retrospective data-linkage study. Background: Comparing outcomes between hospitals requires consideration of patient factors that could account for any observed differences. Adjusting for comorbid conditions is common when studying outcomes following cancer surgery, and a commonly used measure is the Charlson comorbidity index. Other measures of patient health include the ECOG performance status and the ASA physical status score. This study aimed to ascertain how frequently ECOG and ASA scores are recorded in population-based administrative data collections in New South Wales, Australia and to assess the contribution each makes in addition to the Charlson comorbidity index in risk adjustment models for comparative assessment of colorectal cancer surgery outcomes between hospitals. Methods: We used linked administrative data to identify 6964 patients receiving surgery for colorectal cancer in 2007 and 2008. We summarised the frequency of missing data for Charlson comorbidity index, ECOG and ASA scores, and compared patient characteristics between those with and without these measures. The performance of ASA and ECOG in risk adjustment models that also included Charlson index was assessed for three binary outcomes: 12-month mortality, extended length of stay and 28-day readmission. Patient outcomes were compared between hospital peer groups using multilevel logistic regression analysis. Results: The Charlson comorbidity index could be derived for all patients, ASA score was recorded for 78 % of patients and ECOG performance status recorded for only 24 % of eligible patients. Including ASA or ECOG improved the predictive ability of models, but there was no consistently best combination. The addition of ASA or ECOG did not substantially change parameter estimates for hospital peer group after adjusting for Charlson comorbidity index. Conclusions: While predictive ability of regression models is maximised by inclusion of one or both of ASA score and ECOG performance status, there is little to be gained by adding ASA or ECOG to models containing the Charlson comorbidity index to address confounding. The Charlson comorbidity index has good performance and is an appropriate measure to use in risk adjustment to compare outcomes between hospitals. abstract_id: PUBMED:31209671 Assessment of the Value of Comorbidity Indices for Risk Adjustment in Colorectal Surgery Patients. Background And Purpose: Comorbidity indices (CIs) are widely used in retrospective studies. We investigated the value of commonly used CIs in risk adjustment for postoperative complications after colorectal surgery. Methods: Patients undergoing colectomy without stoma for colonic neoplasia at a single institution from 2009 to 2014 were included. Four CIs were calculated or obtained for each patient, using administrative data: Charlson-Deyo (CCI-D), Charlson-Romano (CCI-R), Elixhauser Comorbidity Score, and American Society of Anesthesiologists classification. Outcomes of interest in the 90-day postoperative period were any surgical complication, surgical site infection (SSI), Clavien-Dindo (CD) grade 3 or higher complication, anastomotic leak or abscess, and nonroutine discharge. Base models were created for each outcome based on significant bivariate associations. Logistic regression models were constructed for each outcome using base models alone, and each index as an additional covariate. Models were also compared using the DeLong and Clarke-Pearson method for receiver operating characteristic (ROC) curves, with the CCI-D as the reference. Results: Overall, 1813 patients were included. Postoperative complications were reported in 756 (42%) patients. Only 9% of patients had a CD grade 3 or higher complication, and 22.8% of patients developed an SSI. Multivariable modeling showed equivalent performance of the base model and the base model augmented by the CIs for all outcomes. The ROC curves for the four indices were also similar. Conclusions: The inclusion of CIs added little to the base models, and all CIs performed similarly well. Our study suggests that CIs do not adequately risk-adjust for complications after colorectal surgery. abstract_id: PUBMED:25524216 Model for risk adjustment of postoperative mortality in patients with colorectal cancer. Background: A model was developed for risk adjustment of postoperative mortality in patients with colorectal cancer in order to make fair comparisons between healthcare providers. Previous models were derived in relatively small studies with the use of suboptimal modelling techniques. Methods: Data from adults included in a national study of major surgery for colorectal cancer were used to develop and validate a logistic regression model for 90-day mortality. The main risk factors were identified from a review of the literature. The association with age was modelled as a curved continuous relationship. Bootstrap resampling was used to select interactions between risk factors. Results: A model based on data from 62 314 adults was developed that was well calibrated (absolute differences between observed and predicted mortality always smaller than 0·75 per cent in deciles of predicted risk). It discriminated well between low- and high-risk patients (C-index 0·800, 95 per cent c.i. 0·793 to 0·807). An interaction between age and metastatic disease was included as metastatic disease was found to increase postoperative risk in young patients aged 50 years (odds ratio 3·53, 95 per cent c.i. 2·66 to 4·67) far more than in elderly patients aged 80 years (odds ratio 1·48, 1·32 to 1·66). Conclusion: Use of this model, estimated in the largest number of patients with colorectal cancer to date, is recommended when comparing postoperative mortality of major colorectal cancer surgery between hospitals, clinical teams or individual surgeons. abstract_id: PUBMED:28806301 Risk Adjustment in ALPPS Is Associated With a Dramatic Decrease in Early Mortality and Morbidity. Objective: To longitudinally assess whether risk adjustment in Associating Liver Partition and Portal Vein Ligation for Staged Hepatectomy (ALPPS) occurred over time and is associated with postoperative outcome. Background: ALPPS is a novel 2-stage hepatectomy enabling resection of extensive hepatic tumors. ALPPS has been criticized for its high mortality, which is reported beyond accepted standards in liver surgery. Therefore, adjustments in patient selection and technique have been performed but have not yet been studied over time in relation to outcome. Methods: ALPPS centers of the International ALPPS Registry having performed ≥10 cases over a period of ≥3 years were assessed for 90-day mortality and major interstage complications (≥3b) of the longitudinal study period from 2009 to 2015. The predicted prestage 1 and 2 mortality risks were calculated for each patient. In addition, questionnaires were sent to all centers exploring center-specific risk adjustment strategies. Results: Among 437 patients from 16 centers, a shift in indications toward colorectal liver metastases from 53% to 77% and a reverse trend in biliary tumors from 24% to 9% were observed. Over time, 90-day mortality decreased from initially 17% to 4% in 2015 (P = 0.002). Similarly, major interstage complications decreased from 10% to 3% (P = 0.011). The reduction of 90-day mortality was independently associated with a risk adjustment in patient selection (P &lt; 0.001; OR: 1.62; 95% CI: 1.36-1.93) and using less invasive techniques in stage-1 surgery (P = 0.019; OR: 0.39; 95% CI: 0.18-0.86). A survey indicated risk adjustment of patient selection in all centers and ALPPS technique in the majority (80%) of centers. Conclusions: Risk adjustment of patient selection and technique in ALPPS resulted in a continuous drop of early mortality and major postoperative morbidity, which has meanwhile reached standard outcome measures accepted for major liver surgery. abstract_id: PUBMED:28337579 Comorbidity and the risk of anastomotic leak in Chinese patients with colorectal cancer undergoing colorectal surgery. Purpose: Comorbidities had considerable effects on the prognosis in patients with colorectal cancer (CRC). The primary aim of the present study was to examine the influence of comorbidity on the risk of anastomotic leak (AL) in patients with CRC who underwent surgical resection. Methods: Using the electronic Hospitalization Summary Reports in the top-ranked public hospitals in China, we identified 11,397 patients with CRC undergoing resection surgery from 2013 through 2015. We estimated the risk of AL according to Charlson Comorbidity Index (CCI) score using logistic regression analysis, adjusting for age, sex, and geographic regions. Results: The incidence rate of AL in the study population was 1.8% (204/11,397). Multivariable analyses identified male sex and CCI score as independent risk factors for AL. The CCI score had a positive graded association with the risk of AL (P for trend = 0.006). The risk increased by an estimated 10.2% (odds ratio [OR], 1.10; 95% confidence interval [CI], 1.03-1.18) for each additional 1 point in the CCI score. After adjusting for potential confounders, patients with a CCI score ≥3 had 1.82 times (95% CI, 1.24-2.69) higher risk of AL compared with patients with a CCI score of 0. Conclusion: The findings suggested that CCI score was an independent risk factor for the development of AL in Chinese patients with CRC who underwent surgical resection. abstract_id: PUBMED:27621179 Clinical risk scores in the current era of neoadjuvant chemotherapy for colorectal liver metastases. Background: Clinical risk scores (CRS) within the context of neoadjuvant chemotherapy for colorectal liver metastases (CRLM) has not been validated. The predictive value of clinical risk scoring in patients administered neoadjuvant chemotherapy prior to liver surgery for CRLM is evaluated. Methods: A prospective database over a 15-year period (April 1999 to March 2014) was analysed. We identified two groups: A, neoadjuvant chemotherapy prior to CRLM surgery; and B, no neoadjuvant chemotherapy. Results: Overall median survival in groups A and B were 36 (2-137) months and 33 (2-137) months. In group A, nodal status, size, number of metastases and carcinoembryonic antigen levels were not found to be independent predictors of overall survival (OS). However, patients with a shorter disease-free interval of less than 12 months had an increased OS (P = 0.0001). Multivariate analysis of high- and low-risk scores compared against survival in group B (P &lt; 0.05) confirms the applicability of the scoring system in traditional settings. Conclusion: Traditional CRS are not a prognostic predictive tool when applied to patients receiving neoadjuvant chemotherapy for CRLM. Disease-free interval may be one independent variable for use in future risk score systems specifically developed for the neoadjuvant chemotherapy era. abstract_id: PUBMED:20931199 Risk adjustment as basis for rational benchmarking: the example of colon carcinoma. Background: The results of resection of colorectal carcinoma can vary greatly from one hospital to another. However, this does not necessarily reflect differences in the quality of treatment. The purpose of this study was to compare various tools for the risk-adjusted assessment of treatment results after resection of colorectal carcinoma within the context of hospital benchmarking. Methods: On the basis of a data pool provided by a multicentric observation study of patients with colon cancer, the postoperative in-hospital mortality rates at two high-volume hospitals ("A" and "B") were compared. After univariate comparison, risk-adjusted comparison of postoperative mortality was performed by logistic regression analysis (LReA), propensity-score analysis (PScA), and the CR-POSSUM score. Postoperative complications were compared by LReA and PScA. Results: Although postoperative mortality differed significantly (P = 0.041) in univariate comparison of hospitals A and B (2.9% vs. 6.4%), no significant difference was found by LReA or PScA. Similarly, the observed mortality at these did not differ significantly from the mortality estimated by the CR-POSSUM score (hospital A, 2.9%/4.9%, P = 0.298; hospital B, 6.4%/6.5%, P = 1.000). Significant differences were seen in risk-adjusted comparison of most postoperative complications (by both LReA and PScA), but there were no differences in the rates of relaparotomy or anastomotic leakage that required surgery. Conclusions: For the hard outcome variable "postoperative mortality," none of the three risk adjustment procedures showed any difference between the hospitals. The CR-POSSUM score can be regarded as the most practicable tool for risk-adjusted comparison of the outcome of colon-carcinoma resection in clinical benchmarking. abstract_id: PUBMED:22440055 Relevance of the c-statistic when evaluating risk-adjustment models in surgery. Background: The measurement of hospital quality based on outcomes requires risk adjustment. The c-statistic is a popular tool used to judge model performance, but can be limited, particularly when evaluating specific operations in focused populations. Our objectives were to examine the interpretation and relevance of the c-statistic when used in models with increasingly similar case mix and to consider an alternative perspective on model calibration based on a graphical depiction of model fit. Study Design: From the American College of Surgeons National Surgical Quality Improvement Program (2008-2009), patients were identified who underwent a general surgery procedure, and procedure groups were increasingly restricted: colorectal-all, colorectal-elective cases only, and colorectal-elective cancer cases only. Mortality and serious morbidity outcomes were evaluated using logistic regression-based risk adjustment, and model c-statistics and calibration curves were used to compare model performance. Results: During the study period, 323,427 general, 47,605 colorectal-all, 39,860 colorectal-elective, and 21,680 colorectal cancer patients were studied. Mortality ranged from 1.0% in general surgery to 4.1% in the colorectal-all group, and serious morbidity ranged from 3.9% in general surgery to 12.4% in the colorectal-all procedural group. As case mix was restricted, c-statistics progressively declined from the general to the colorectal cancer surgery cohorts for both mortality and serious morbidity (mortality: 0.949 to 0.866; serious morbidity: 0.861 to 0.668). Calibration was evaluated graphically by examining predicted vs observed number of events over risk deciles. For both mortality and serious morbidity, there was no qualitative difference in calibration identified between the procedure groups. Conclusions: In the present study, we demonstrate how the c-statistic can become less informative and, in certain circumstances, can lead to incorrect model-based conclusions, as case mix is restricted and patients become more homogenous. Although it remains an important tool, caution is advised when the c-statistic is advanced as the sole measure of a model performance. abstract_id: PUBMED:15585394 Evaluation of Charlson-Age Comorbidity Index as predictor of morbidity and mortality in patients with colorectal carcinoma. The Charlson-Age Comorbidity Index (CACI) is a validated tool used to predict patient outcome based on comorbid medical conditions. We wanted to determine if the CACI would predict morbidity and mortality outcomes in patients undergoing surgery for colorectal carcinoma. Records of 279 consecutive colorectal cancer patients who underwent laparotomy by a single surgical group between 1997 and 2001 were reviewed in a retrospective fashion for patient demographics, stage at diagnosis, operation, surgeon, perioperative complications, tumor characteristics, comorbid diseases, performance status, length of stay (LOS), disposition, and mortality. Using the preoperative history and physical, all patients were assigned a score for the CACI. Perioperative morbidity and mortality were recorded and graded to account for severity. The University Statistical Consulting Center and SPSS software were used to analyze the results. The patients were primarily white (97.1%) with a male-to-female ratio of 1:1.2 and a median age of 72 years. AJCC stage at presentation was stage 0 (3.2%), stage I (28.3%), stage II (24.4%), stage III (24.4%), or stage IV (19.7%). Median LOS was 7.0 days. Perioperative mortality was 17 of 279 (6.1%), and overall mortality was 32.6% at a median follow-up of 18.5 months. Higher CACI scores and AJCC stage at presentation correlated with longer LOS and overall mortality. Only the CACI correlated with perioperative mortality and disposition. No correlation was observed with location of tumor, type of surgery, or surgeon. Patients with higher cumulative number of weighted comorbid conditions as indicated by the CACI are at higher risk for perioperative and overall mortality. This simple scoring system is also a significant predictor of disposition (home versus extended care facility) and LOS. The CACI can be a useful preoperative tool to assess and counsel patients undergoing surgery for colorectal carcinoma. Answer: The use of different comorbidity scores for risk adjustment in the evaluation of quality of colorectal cancer surgery does matter, but the impact may be limited. Studies have shown that comorbidity is an independent risk factor for adverse outcomes after colorectal surgery, and different comorbidity scores such as the American Society of Anesthesiologists Physical Status classification (ASA) score, the sum of diseased organ systems (SDOS), the Charlson Comorbidity Index (CCI), and specific comorbidities are all strongly associated with 30-day mortality and a prolonged length of stay (LOS) (PUBMED:22703758). However, the predictive values for all comorbidity measures were similar with regard to adverse post-operative outcomes, and omitting comorbidity only had a marginal effect on the predictive value of the model (PUBMED:22703758). In another study, the inclusion of ASA or ECOG performance status improved the predictive ability of models, but there was no consistently best combination. The addition of ASA or ECOG did not substantially change parameter estimates for hospital peer group after adjusting for Charlson comorbidity index, suggesting that while predictive ability is maximized by inclusion of one or both of ASA score and ECOG performance status, there is little to be gained by adding these to models containing the Charlson comorbidity index to address confounding (PUBMED:26174550). Furthermore, an assessment of the value of comorbidity indices for risk adjustment in colorectal surgery patients found that the inclusion of comorbidity indices added little to the base models, and all comorbidity indices performed similarly well, suggesting that comorbidity indices do not adequately risk-adjust for complications after colorectal surgery (PUBMED:31209671). In summary, while comorbidity scores are important for risk adjustment in the evaluation of quality of colorectal cancer surgery, the choice of comorbidity measure may have a limited impact on the predictive value of risk-adjustment models. The Charlson comorbidity index, in particular, has been found to be an appropriate measure to use in risk adjustment to compare outcomes between hospitals (PUBMED:26174550).
Instruction: Do subfertile women adjust their habits when trying to conceive? Abstracts: abstract_id: PUBMED:27216564 Do subfertile women adjust their habits when trying to conceive? Aim: The aim of this study was to investigate lifestyle habits and lifestyle adjustments among subfertile women trying to conceive. Materials And Methods: Women (n = 747) were recruited consecutively at their first visit to fertility clinics in mid-Sweden. Participants completed a questionnaire. Data were analyzed using logistic regression, t tests, and chi-square tests. Results: The response rate was 62% (n = 466). Mean duration of infertility was 1.9 years. During this time 13.2% used tobacco daily, 13.6% drank more than three cups of coffee per day, and 11.6% consumed more than two glasses of alcohol weekly. In this sample, 23.9% of the women were overweight (body mass index, BMI 25-29.9 kg/m(2)), and 12.5% were obese (BMI ≥30 kg/m(2)). Obese women exercised more and changed to healthy diets more frequently than normal-weight women (odds ratio 7.43; 95% confidence interval 3.7-14.9). Six out of ten women (n = 266) took folic acid when they started trying to conceive, but 11% stopped taking folic acid after some time. Taking folic acid was associated with a higher level of education (p &lt; 0.001). Conclusions: Among subfertile women, one-third were overweight or obese, and some had other lifestyle factors with known adverse effects on fertility such as use of tobacco. Overweight and obese women adjusted their habits but did not reduce their body mass index. Women of fertile age would benefit from preconception counseling, and the treatment of infertility should routinely offer interventions for lifestyle changes. abstract_id: PUBMED:25162259 Nutritional and weight management behaviors in low-income women trying to conceive. Objective: To evaluate the nutritional habits and weight management strategies of women trying to conceive as compared with women not trying to conceive. Methods: This was a cross-sectional survey of health behaviors including nutritional habits and weight management strategies of women aged 16-40 years who were low income, racially diverse, (n=1,711), and attending reproductive health clinics. Multivariable logistic regression analyses were performed to examine the association between pregnancy intention and various health behaviors after adjusting for demographic variables, gravidity, and obesity status. Results: A total of 8.9% (n=153) of the participants stated they were trying to get pregnant. Women trying to conceive were more likely than those not trying to have participated in a number of unhealthy weight loss practices in the past year. These included taking diet pills, supplements, or herbs (13.5% compared with 8.8%; adjusted odds ratio [OR] 1.97, 95% confidence interval [CI] 1.11-3.49), using laxatives or diuretics or inducing vomiting (7.7% compared with 3.0%; adjusted OR 2.70, CI 1.23-5.91), and fasting for 24 hours (10.7% compared with 5.5%; adjusted OR 2.15, CI 1.03-4.51). There were no significant differences between the two groups in amount of exercise, current smoking status, or current alcohol consumption Furthermore, fruit, green salad and other vegetables, and intake of soda and fast food were unrelated to pregnancy intention. Conclusion: This study highlights that women trying to conceive are more likely to participate in unhealthy and potentially dangerous weight loss practices than women not trying to conceive. Level Of Evidence: II. abstract_id: PUBMED:31094573 Time to conception and the menstrual cycle: an observational study of fertility app users who conceived. The objective of this research was to evaluate the association between menstrual cycle characteristics (cycle length, cycle-length variability, and their interaction) and the amount of time it takes women to conceive using a robust multiple linear regression. Participants downloaded Ovia Fertility in 2015 indicated that they had just started trying to conceive, and reported conception within 12 months (n = 45,360, adjusted model n = 8835). The average time to conception among women in the adjusted model was 3.94 months (n = 8835). Women with normal cycle lengths (27-29 days) conceived more quickly than women with cycle lengths of 25-26 days (+0.41 months; p &lt; 0.001), 30-31 days (+0.27 months; p &lt; 0.01), 32-33 days (+0.44 months; p &lt; 0.001), and 34+ days (+0.75 months; p &lt; 0.001). Women with regular cycle-length variability (&lt;9 days between cycles) conceived more quickly than women with irregular variability (+0.72 months; p &lt; 0.001). Results of the interaction analysis indicated that, among women with regular cycle-length variability, those with normal cycle length had shorter time to conception than women with either short or long cycle length. The interaction between cycle length and cycle-length variability provided enhanced insights into the amount of time it takes to conceive, compared to either indicator alone. abstract_id: PUBMED:37625177 Predictors of Fertility-Awareness-Based Method Use Among Women Trying to Conceive and Women Contemplating Pregnancy. Background: Fertility-awareness-based methods have been linked to shorter time to conception; however, little is known about the predictors of fertility-awareness-based method use among women who are currently or will soon be trying to conceive. Objectives: The aim of this study was to identify predictors of fertility-awareness-based methods use among women trying to conceive or contemplating pregnancy within the next year. Methods: Women participating in the Nurses' Health Study 3 were asked if they were trying to become pregnant or contemplating pregnancy and whether they are using fertility-awareness-based methods. Multivariable negative binomial regression was used to identify predictors for a number of fertility-awareness-based methods used. Results: Among the 23,418 women asked about pregnancy intention since 2015, 955 were trying to conceive, and 2,282 were contemplating pregnancy within the next year. The three most used fertility-awareness-based methods among women trying to conceive were menstrual cycle tracking, ovulation prediction kits, and cervical mucus monitoring. Among women contemplating pregnancy, the three most commonly used methods were menstrual cycle tracking, cervical mucus monitoring, and basal body temperature monitoring. The ongoing duration of pregnancy attempts and gravidity were associated with the number of methods used among women actively trying to conceive. When compared with women who were trying for 2 months or less, the number of methods was 29% higher when trying for 3-5 months, 45% higher when trying for 6-12 months, and 38% higher when trying for more than 1 year. Compared with nulligravid women, the number of methods was lower for women with a history of two or more pregnancies. Among women contemplating pregnancy, those who were married or in a domestic partnership used more fertility-awareness-based methods than unpartnered women. No other significant predictors of fertility-awareness-based method use were identified. Discussion: Duration of ongoing pregnancy attempt and gravidity were the only significant predictors for the number of fertility-awareness-based methods used among women actively trying to conceive, whereas partnership was the only significant predictor of the number of fertility-awareness-based methods among women contemplating pregnancy. abstract_id: PUBMED:32609396 Technologies of time: women's practices of trying to conceive with ovulation biosensing. Ovulation biosensors are devices worn on or used with the body, which can help women detect ovulation. The manufacturers of such devices claim that if women know when they ovulate, couples can arrange heterosexual intercourse during this time, and thus increase their chances of conceiving. Within the contemporary UK context, in which becoming pregnant is presented in the popular media, and in medical discourses, as more difficult for women in their thirties and forties, manufacturers' claims are attractive for those trying to conceive. Yet few sociological studies have examined women's practices of ovulation biosensing. Drawing on women's accounts of tracking ovulation, this paper explores how such practices fit into their trajectories of trying to conceive. It examines why ovulation biosensing seemingly becomes helpful, relevant or important during this time. Ovulation biosensors, it argues, alter the landscape of trying to become pregnant by introducing new stages and materialities which seemingly place women closer to conception. Women engage in ovulation biosensing, not only to help them become pregnant, but also as a way to manage the complexities of fertility and the uncertainties of becoming pregnant in contemporary society. abstract_id: PUBMED:35151930 Endometrial hyperplasia in asymptomatic subfertile population. Objective: To establish the prevalence of endometrial hyperplasia in asymptomatic subfertile women, identify the clinical characteristics of the group at risk; predict the associated factors, and determine the predictive cut-off value of endometrial thickness MATERIAL &amp; METHODS: Prospective cross-sectional study of asymptomatic subfertile women recruited from university-affiliated tertiary hospital. Patients received hormonal treatment in the last three months, suffered from premature ovarian failure, ovarian cyst, thyroid malfunction, hyperprolactinaemia or underwent endometrial biopsy for abnormal uterine bleeding were all excluded. Endometrial biopsy was performed for the study group during the luteal phase as a screening test for endometrial hyperplasia RESULT (S): Among hundred and eleven patients underwent endometrial biopsy, endometrial hyperplasia was found in 12 cases (11%) and atypical hyperplasia was detected in one patient (0.9%). All cases of and endometrial hyperplasia with or without atypia were reported among women with polycystic ovary syndrome. The study showed that a patient age, BMI, estradiol levels, insulin resistance and endometrial thickness of a cut-off value of 7.5 mm, have a positive correlation with endometrial hyperplasia in subfertile women with polycystic ovary syndrome. The areas under the ROC curves for these parameters were greater than (0.5). Conclusion(s): In asymptomatic subfertile population, PCOS women at risk to develop endometrial hyperplasia &amp; atypical hyperplasia. Selective endometrial biopsy recommended with BMI ≥ 30, in presence of insulin resistance and with endometrial thickness ≥7.5 mm. abstract_id: PUBMED:23850303 Cervical mucus monitoring prevalence and associated fecundability in women trying to conceive. Objective: To assess the use of cervical mucus monitoring (CMM) in women trying to conceive and determine whether monitoring is associated with increased cycle-specific probability of conception (fecundability). Design: Time-to-pregnancy cohort study. Setting: Population-based cohort. Patient(s): Three hundred thirty-one women trying to conceive, ages 30 to 44 years, without known infertility. Intervention(s): None. Main Outcome Measure(s): CMM prevalence and fecundability. Result(s): During the first cycle of the study, CMM was performed consistently (checked on &gt;66% of pertinent cycle days) by 20 women (6%), inconsistently (34% to 66% of days) by 60 women (18%), infrequently (≤33% of days) by 73 women (22%), and not performed by 178 women (54%). Cycles in which CMM was consistently performed were statistically significantly more likely to result in conception after adjusting for age, race, previous pregnancy, body mass index, intercourse frequency, and urinary luteinizing hormone (LH) monitoring. Fecundability also increased with increasing consistency of CMM. Conclusion(s): Among women trying to conceive, CMM is uncommon, but our study suggests that CMM-a free, self-directed method to determine the fertile window-is associated with increased fecundability independent of intercourse frequency or use of urinary LH monitoring. abstract_id: PUBMED:33961806 An Integrative Review of Fertility Knowledge and Fertility-Awareness Practices Among Women Trying to Conceive. Objective: To synthesize the literature on fertility knowledge and fertility-awareness among women seeking pregnancy. Data Sources: The search terms "fertility-awareness OR fertility knowledge AND women AND subfertile OR infertile OR seeking pregnancy OR trying to conceive OR pre-conception OR conception NOT contraception NOT birth control" were used via CINAHL, PubMed, and Web of Science. Primary research studies were considered in the search parameters. Study Selection: Searches yielded 116 studies published between 1978 and 2020. After screening, 43 full-text studies were assessed for eligibility, 35 of which were omitted as not relevant. Nine quantitative studies using cross-sectional designs met this review's inclusion criteria. Data Extraction: Studies were reviewed for information on the relationship between fertility knowledge/fertility-awareness and pregnancy intention. Studies examining the results of fertility-awareness based method (FABM) efficacy, FABMs for contraception, and provider knowledge regarding FABMs were omitted. Data Synthesis: Analysis showed low knowledge regarding the identification of the fertile window in the menstrual cycle to optimize pregnancy. There was moderate general knowledge on fertility (e.g., infertility definition, age of fertility decline, etc.). Use of an FABM and education regarding FABMs were infrequent, yet participants recognized that it would be beneficial to use and learn when trying to conceive. Conclusion: Women seeking pregnancy have low to moderate fertility knowledge. More research is necessary on the relationship between fertility knowledge/fertility-awareness and unexplained infertility. Nurses, nurse practitioners, and nurse-midwives should provide education on FABMs to women when they first report difficulty achieving pregnancy. Clinicians' approaches toward FABM education for women and how to implement FABM education into the preconception visit are important areas for future research. abstract_id: PUBMED:18974061 Nutrition-related information-seeking behaviours of women trying to conceive and pregnant women: evidence for the life course perspective. Objectives: To examine whether preconception and pregnancy could be an occasion triggering women's interest, search and need for both general and pregnancy-specific nutrition-related information, in order to: (i) provide a greater understanding of the life course perspective (in this case on nutrition behaviours and pregnancy) and (ii) to contribute to the rationale of nutrition interventions aimed at women of child-bearing age. Methods: Data were collected in a cross-sectional study with the aid of a face-to-face interview. The sample consisted of four groups each of about 100 Dutch nulliparous women each: women trying to conceive and women in the first, second or third trimester of their first pregnancy. Results: Pregnant women were more interested in nutrition information than before preconception and before pregnancy, especially the first-trimester group. The frequency of search for nutrition information differed significantly between the groups: the first-trimester group searched for information most frequently. Women wanting to conceive and women in their third trimester of pregnancy found significantly more pregnancy-specific information topics than women in their first and second trimester of pregnancy. Women wanting to conceive had significantly higher needs for general and specific nutrition-related topics compared to pregnant women. Overall, the Internet, the midwife and books were the favourite information sources. Conclusions: The study provided indications that preconception and pregnancy are moments in life that lead to an increased interest, need and search for particularly pregnancy-specific nutrition-related information. This should be borne in mind when healthy nutrition promotion activities are being developed. abstract_id: PUBMED:35568191 Attempts to conceive and the COVID-19 pandemic: data from the Apple Women's Health Study. Background: Previous studies have suggested that emergent events may affect pregnancy planning decisions. However, few have investigated the effect of factors related to the COVID-19 pandemic on pregnancy planning, measured by attempting conception, and how attempting conception status may differ by individual-level factors, such as social status or educational level. Objective: This study aimed to examine the effects of factors related to the COVID-19 pandemic, until March 2021, on attempting conception status and to assess the effect measure modification by educational level and subjective social status. Study Design: We conducted a longitudinal analysis within a subgroup of 21,616 participants in the Apple Women's Health Study who enrolled from November 2019 to March 2021, who met the inclusion criteria, and who responded to the monthly status menstrual update question on attempting conception status (yes or no). Participants reporting hysterectomy, pregnancy, lactation, or menopause were excluded. We used generalized estimating equation methodology to fit logistic regression models that estimate odds ratios and 95% confidence intervals for the association between the proportion of participants attempting conception and the month of response (compared with a prepandemic reference month of February 2020) while accounting for longitudinal correlation and adjusting for age, race and ethnicity, and marital status. We stratified the analysis by social status and educational level. Results: We observed a trend of reduced odds of attempting conception, with an 18% reduction in the odds of attempting conception in August 2020 and October 2020 compared with the prepandemic month of February 2020 (August odds ratio: 0.82 [95% confidence interval, 0.70-0.97]; October odds ratio: 0.82 [95% confidence interval, 0.69-0.97). The participants with lower educational level (no college education) experienced a sustained reduction in the odds of attempting to conceive from June 2020 to March 2021 compared with February 2020, with up to a 24% reduction in the odds of attempting to conceive in October 2020 (odds ratio, 0.76; 95% confidence interval, 0.59-0.96). Among participants that were college educated, we observed an initial reduction in the odds of attempting to conceive starting in July 2020 (odds ratio 0.73; 95% confidence interval, 0.54-0.99) that returned near prepandemic odds. Moreover, we observed a reduction in the odds of attempting to conceive among those with low subjective social status, with a decline in the odds of attempting to conceive beginning in July 2020 (odds ratio, 0.83; 95% confidence interval, 0.63-1.10) and continuing until March 2021 (odds ratio, 0.79; 95% confidence interval, 0.59-1.06), with the greatest reduction in odds in October 2020 (odds ratio, 0.67; 95% confidence interval, 0.50-0.91). Conclusion: Among women in the Apple Women's Health Study cohort, our findings suggested a reduction in the odds of attempting to conceive during the COVID-19 pandemic, until March 2021, particularly among women of lower educational level and lower perceived social status. Answer: Yes, subfertile women do adjust their habits when trying to conceive. A study found that among subfertile women, one-third were overweight or obese, and some had other lifestyle factors with known adverse effects on fertility, such as the use of tobacco. Overweight and obese women in the study exercised more and changed to healthier diets more frequently than normal-weight women. However, they did not reduce their body mass index. Six out of ten women took folic acid when they started trying to conceive, but 11% stopped taking folic acid after some time. Taking folic acid was associated with a higher level of education (PUBMED:27216564). Another study highlighted that women trying to conceive were more likely to participate in unhealthy and potentially dangerous weight loss practices than women not trying to conceive. These practices included taking diet pills, supplements, or herbs; using laxatives or diuretics or inducing vomiting; and fasting for 24 hours. There were no significant differences between the two groups in the amount of exercise, current smoking status, or current alcohol consumption (PUBMED:25162259). Additionally, the use of fertility-awareness-based methods (FABMs) among women trying to conceive or contemplating pregnancy was studied. The ongoing duration of pregnancy attempts and gravidity were associated with the number of methods used among women actively trying to conceive. Women who had been trying for longer periods used more FABMs. Among women contemplating pregnancy, those who were married or in a domestic partnership used more FABMs than unpartnered women (PUBMED:37625177). In summary, subfertile women do make adjustments to their habits when trying to conceive, including taking folic acid, exercising more, changing to healthier diets, and using FABMs. However, some may also engage in unhealthy weight loss behaviors, and the extent of habit adjustments can vary based on factors such as obesity status, duration of trying to conceive, and partnership status.
Instruction: Does antenatal identification of small-for-gestational age fetuses significantly improve their outcome? Abstracts: abstract_id: PUBMED:15717289 Does antenatal identification of small-for-gestational age fetuses significantly improve their outcome? Objectives: Most obstetric clinics have a program for the identification of small-for-gestational age (SGA) fetuses because of the increased risk of fetal complications that they present. We have a structured model for the identification and follow-up of SGA pregnancies. We aimed to determine whether the recognition of SGA antepartum improves fetal outcome. Methods: All pregnancies at Malmö University Hospital from 1990 to 1998 (n = 26 968) were reviewed. SGA fetuses identified prior to delivery (n = 681) were compared with those not identified (n = 573). Also, all pregnancies with SGA fetuses were compared with those appropriate-for-gestational age (AGA) (n = 24 585). The risk of serious fetal complications (hypoxic encephalopathy grade 2 or 3, intracranial hemorrhage, Apgar score &lt;4 at 5 min, neonatal convulsions, umbilical pH &lt;7.0, cerebral palsy, mental retardation, stillbirth, intrapartum or infant death) was assessed with cross-tabulation and logistic regression analysis, adjusted for gestational age and degree of SGA. Results: When compared with SGA fetuses identified before delivery (54%), SGA fetuses not identified before delivery were characterized by a four-fold increased risk of adverse fetal outcome (odds ratio, 4.1; 95% CI, 2.5-6.8). Similarly, compared with AGA fetuses, SGA fetuses were associated with a four-fold increased risk of serious fetal complications. Conclusions: A structured antenatal surveillance program for fetuses identified as SGA results in a lower risk of adverse fetal outcome, compared with cases of SGA fetuses not identified antepartum. abstract_id: PUBMED:37875267 Importance of antenatal identification of small for gestational age fetuses on perinatal and childhood outcomes: A register-based cohort study. Introduction: Fetal growth restriction (FGR) is associated with increased risk for stillbirth, perinatal morbidity, cerebral palsy, neurodevelopmental disorders and cardiovascular disease later in life. Identifying small-for-gestational-age (SGA) fetuses is crucial for the diagnosis of FGR. The aim of this study was to investigate the association between antenatal identification of SGA fetuses and severe adverse perinatal and childhood outcome. Material And Methods: A register-based cohort study of all newborns delivered in Stockholm in 2014 and 2017. Inclusion Criteria: singleton pregnancies without chromosomal aberrations or structural abnormalities, with a gestational age at delivery between 22+0 and 43+0 (n = 48 843). Data from childbirth records were linked to data from nationwide Swedish registers. Pregnancy including offspring data were reviewed. Adverse outcomes for non-identified and identified SGA newborns were compared using logistic regression models. Primary outcome was a composite outcome called severe adverse outcome, defined as at least one of the following: stillbirth, severe newborn distress (Apgar score &lt;4 at 5 min, pH &lt;7 or resuscitation activities &gt;10 min), severe neonatal outcome (hypoxic ischemic encephalopathy 2-3, necrotizing enterocolitis, neonatal seizures, intraventricular hemorrhage grade 3-4, bronchopulmonary disease or death at &lt;1 year), severe childhood outcome (cognitive impairment or motor impairment or cerebral palsy or hearing impairment or visual impairment or death at 1-3 years old). Secondary outcomes were stillbirth, severe newborn distress, severe neonatal outcome, severe childhood outcome. Results: No association was found between antenatal identification of SGA fetuses and severe adverse outcome using the complete composite outcome (adjusted odds ratio [aOR] 1.19, 95% confidence interval [CI] 0.93-1.53). In subgroup analyses, non-identified SGA fetuses had an almost fivefold increased risk for stillbirth (aOR 4.79, 95% CI 2.63-8.72) and an increased risk for severe newborn distress (aOR 1.36, 95% CI 1.02-1.82), but a decreased risk for severe childhood outcome (aOR 0.63, 95% CI 0.40-0.99). No association was found between antenatal identification of SGA and severe neonatal outcome. Conclusions: Non-identified SGA fetuses have an increased risk for stillbirth and severe newborn distress. Conversely, identified SGA fetuses have an increased risk for severe childhood outcome. abstract_id: PUBMED:26552861 Antenatal corticosteroid treatment in singleton, small-for-gestational-age infants born at 24-31 weeks' gestation: a population-based study. Objective: To assess the impact of antenatal corticosteroid therapy on mortality and severe morbidities in preterm, small-for-gestational-age (SGA) neonates compared with preterm non-SGA neonates. Design: Population-based study. Setting/population: Israel National Very Low Birth Weight infant database from 1995-2012. Methods: Singleton infants of 24-31 weeks' gestation, without major malformations. Antenatal corticosteroids were considered either any treatment or no treatment. Main Outcome Measures: Univariate and multivariable logistic regression analyses were performed to assess the effect of antenatal corticosteroids on neonatal mortality and a composite adverse outcome of mortality or severe neonatal morbidity. Results: Among the 10 887 study infants, 1771 were SGA. Of these, 70.4% of SGA and 66.7% of non-SGA neonates were exposed to antenatal corticosteroids. Among SGA neonates, antenatal corticosteroids were associated with decreased mortality (32.2 versus 19.3%, P &lt; 0.0001) and composite adverse outcome (54.1 versus 43.4%, P &lt; 0.0001), similar to the effect in non-SGA neonates (mortality 26.7 versus 12.2%, P &lt; 0.0001; composite outcome 50.5 versus 34.6%, P &lt; 0.0001). Multivariable logistic regression analyses demonstrated a 50% reduction in mortality risk among SGA and 57% reduction in non-SGA neonates exposed to corticosteroids [OR = 0.50, 95% confidence interval (95% CI) 0.39-0.64 and OR = 0.43, 95% CI 0.38-0.47, respectively], P-value for interaction = 0.08. Composite adverse outcome risk was significantly reduced in SGA (OR = 0.67, 95% CI 0.54-0.83) and non-SGA infants (OR = 0.57, 95% CI 0.52-0.63), P-value for interaction = 0.04. Conclusions: Antenatal corticosteroids significantly reduced mortality and severe morbidities among preterm SGA neonates, with slightly a less pronounced effect compared with non-SGA preterm infants. Antenatal corticosteroids should be given to fetuses suspected of intrauterine growth retardation, at risk for preterm delivery, in order to improve perinatal outcome. Tweetable Abstract: Antenatal steroids reduced mortality and severe morbidities among singleton, preterm SGA neonates. abstract_id: PUBMED:35250380 Inadequate Timing Limits the Benefit of Antenatal Corticosteroids on Neonatal Outcome: Retrospective Analysis of a High-Risk Cohort of Preterm Infants in a Tertiary Center in Germany. Introduction A common problem in the treatment of threatened preterm birth is the timing and the unrestricted use of antenatal corticosteroids (ACS). This study was performed to evaluate the independent effects of the distinct timing of antenatal corticosteroids on neonatal outcome parameters in a cohort of very low (VLBW; 1000 - 1500 g) and extreme low birth weight infants (ELBW; &lt; 1000 g). We hypothesize that a prolonged ACS-to-delivery interval leads to an increase in respiratory complications. Materials and Methods Main data source was the prospectively collected single center data for the German nosocomial infection surveillance system (KISS) between 2015 and 2018. Multivariate regression analysis was performed to determine independent effects of the ACS-to-delivery interval on the need for ventilation, surfactant or the occurrence of bronchopulmonary dysplasia, neonatal sepsis or necrotizing enterocolitis. Subgroup analysis was performed for ELBW and VLBW neonates. Results A total of 239 neonates were included. We demonstrate a significantly increased risk of respiratory distress characterized by the need for ventilation (OR 1.045; CI 1.011 - 1.080) and surfactant administration (OR 1.050, CI 1.018 - 1.083) depending on the ACS-to-delivery interval irrespective of other confounders. Every additional day between ACS and delivery increased the risk for ventilation by 4.5% and for surfactant administration by 5%. Subgroup analysis revealed significant differences of respiratory complications in VLBW infants. Conclusions Our data strongly support the deliberate use and timing of antenatal corticosteroids in pregnancies with threatened preterm birth versus a liberal strategy. When given more than 7 days before birth, each day between application and delivery increases is relevant concerning major effects on the infant. Especially VLBW preterm neonates benefit from optimal timing. abstract_id: PUBMED:7982522 Amount of antenatal care and infant outcome. The connection between the amount of antenatal care and pregnancy outcome was studied using the 1987 Finnish Medical Birth Registry. A total of 57,108 women were included in the analysis. The timing of initiation of antenatal care and the relative number of antenatal visits (adjusted by gestation length), were used as measures of amount of antenatal care. Nine outcome variables measuring infant health and interventions were studied. Logistic regression was used to adjust for differences in maternal background characteristics. Women beginning antenatal care after the 16th week of gestation had the poorest outcome. Early attending multiparous women had a higher risk of low birthweight, premature infants, caesarean section and instrumental delivery than did those with average timing of their first attendance. For primigravidas, the increased risk was of prematurity only. A U-shaped curve was found for most of the outcome variables in regard to relative number of visits. The women with many visits had the poorest outcome, and also the highest rates of caesarean section and induction of labour. One reason for the unexpectedly high risks for early attenders may be connected with the content of antenatal care. In Finland, it might be possible to reduce the total number of antenatal visits without having any negative effect on infant health. abstract_id: PUBMED:35372176 A Timely Administration of Antenatal Steroids Is Highly Protective Against Intraventricular Hemorrhage: An Observational Multicenter Cohort Study of Very Low Birth Weight Infants. Aim: The aim of the study is to evaluate the influence of the timing of antenatal steroids (ANSs) on neonatal outcome of very low birth weight infants (VLBWI) born before 30 weeks of gestation in the German Neonatal Network. Methods: The German Neonatal Network is a large population-based cohort study enrolling VLBWIs since 2009. We included 672 neonates, who were born between January 1, 2009 and December 31, 2019 in our analysis in 10 selected centers. Infants were divided into four subgroups based on the interval between the first steroid administration and preterm birth: (I) two doses of betamethasone, ANS-birth interval: &gt;24 h to 7 days, n = 187, (II) only one dose of betamethasone, ANS-birth interval 0-24 h, n = 70, (III) two doses of betamethasone, ANS-birth interval &gt;7 days, n = 177, and (IV) no antenatal steroids, n = 238. Descriptive statistics and logistic regression analyses were performed for the main neonatal outcome parameters. Group IV (no ANS) was used as a reference. Results: An ANS-birth interval of 24 h to 7 days after the first dose was associated with a reduced risk for intraventricular hemorrhage (OR 0.17; 95% CI 0.09-0.31, p &lt; 0.001) and mechanical ventilation (OR 0.37; 95% CI 0.23-0.61, p &lt; 0.001), whereas the group of infants that only received a single dose of steroids reflected a subgroup at high risk for adverse neonatal outcomes; an ANS-birth interval of &gt;7 days was still associated with a lower risk for intraventricular hemorrhage (OR 0.43; 95% CI 0.25-0.72, p = 0.002) and the need for mechanical ventilation (OR 0.43; 95% CI 0.27-0.71, p = 0.001). Conclusion: Our observational data indicate that an ANS-birth interval of 24 h to 7 days is strongly associated with a reduced risk of intraventricular hemorrhage in VLBWIs. Further research is needed to improve the prediction of preterm birth in order to achieve a timely administration of antenatal steroids that may improve neonatal outcomes such as intraventricular hemorrhage. abstract_id: PUBMED:18939401 Gestational age at antenatal booking and delivery outcome. Antenatal care is a form of preventive medicine that pregnant women to allows maintain a state of good health throughout pregnancy, and to improve their chances of having a safe delivery of healthy infants. To achieve this aim, it is a widely held belief that pregnant women need to book early preferable before 14 weeks gestation. This is a retrospective study which reveals among others that late booking is still a common practice in the developing countries with average gestational age at booking being 23.59 (+/- 8.45) weeks, and only 14% of the women booked before the end of first trimester. Nulliparity or low parity was found to be the only factor that favoured early booking. However, gestational age at booking as a sole factor for predicting the pregnancy outcome was found to be insignificant as the outcome was same for early and late bookers. Unbooked pregnant women were found to be twice at risk of operative delivery, four times more likely to suffer delivery complications and twice likely to have low birthweight babies when compared to booked patients. In conclusion, findings of this study confirm the importance of antenatal care for better maternal and foetal outcome, however gestational age at booking as a sole factor is a poor predictor of pregnancy outcome. It is believed that this finding which is in tandem with the new WHO antenatal care protocol will generate divergent views among the health care givers and modify our current practice of Antenatal care to a more focused and effective risk assessment system. abstract_id: PUBMED:36655486 Serial plotting of symphysis-fundal height and estimated fetal weight to improve the antenatal detection of infants small for gestational age: A cluster randomised trial. Objective: To assess whether standardised longitudinal reporting of growth monitoring information improves antenatal detection of infants who are small for gestational age (SGA), compared with usual care. Design: Cluster-randomised controlled trial. Setting: Sixteen French level-3 units in 2018-2019. Population: Singleton pregnancies. Methods: The intervention consisted of the serial plotting of symphysis-fundal height (SFH) and estimated fetal weight (EFW) measurements on customised growth charts using a software program, compared with standard antenatal care. We estimated relative risks (RR) adjusted for known risk factors for fetal growth restriction (FGR). Main Outcome Measures: The primary outcome was antenatal detection of FGR among SGA births (with birthweights below the tenth centile of French customised curves), defined as the mention of suspected FGR in medical records and either referral ultrasounds for growth monitoring or indicated delivery for FGR. Secondary outcomes were false-positive rates, mode of delivery, perinatal morbidity and mortality, and number of antenatal visits and ultrasounds. Results: In total, seven intervention clusters (n = 4349) and eight control clusters (n = 4943) were analysed, after the exclusion of one intervention centre for a major deviation in protocol. SGA births represented 613 (14.1%) and 626 (12.7%) of all births, respectively. The rates of antenatal detection of FGR among SGA births were 40.0% in the intervention arm versus 37.1% in the control arm (crude RR 1.08, 95% CI 0.87-1.34; adj RR 1.09, 95% CI 0.88-1.35). No benefits of the intervention were detected in the analyses of secondary outcomes. Conclusions: Serial plotting of SFH and EFW measurements on customised growth charts did not improve the antenatal detection of FGR among SGA births. abstract_id: PUBMED:36397014 Antenatal care utilization and compliance with national and WHO guidelines in rural Ethiopia: a cohort study. Background: Antenatal health care utilization has the potential to influence maternal and new-born health. In this study, we assessed compliance of antenatal care utilization with national and World Health Organization (WHO) guidelines. We also examined association of antenatal care utilization with adverse pregnancy outcomes as secondary outcome. Methods: This was a community-based cross sectional study conducted from July 2016 to November 2017 in rural south-central Ethiopia. We described antenatal care received by pregnant women, whom we followed at three prescheduled visits during pregnancy and collected birth data at time of delivery. Extent of antenatal care content received, timing of antenatal care, place of antenatal care and place and mode of delivery were obtained and computed in accordance with national and WHO guidelines. For adverse pregnancy outcomes, computed as sum of low birth weight, preterm birth, intrauterine foetal death, and stillbirth, the exposure variable used was antenatal care utilization. Results: Seven hundred and four (704) women participated in the study, and 536 (76.1%) had attended at least one antenatal care visit. Among women who attended antenatal care visit, majority, 421 (79.3%), had done so at health centres and hospitals, while 110 (20.7%) attended at health post. Average number of antenatal care visits was 2.5, which is less than that recommended in national and WHO guidelines. Only 18 (2.6%) women had attended antenatal care in their first trimester, which is low in contrast to the expected 100% specified in the guidelines. Less than half (47%) of the women delivered in a health facility. This is in contrast to the 100% expected health institution deliveries. Low birth weight was 7.9% (n = 48), and preterm birth was 4.9% (n = 31). There were 12 twin pregnancies, three stillbirths, 11 spontaneous abortions, and two intrauterine foetal deaths. We did not find significant association between adverse pregnancy outcomes and antenatal care utilization (COR = 1.07, 95% CI 0.62, 1.86). Conclusion: This study showed that antenatal care service utilization in the study area was markedly low compared to that recommended in national and WHO guidelines. The obtained antenatal health care utilization was not associated with the registered adverse pregnancy outcomes. abstract_id: PUBMED:21432081 Improved perinatal health through qualified antenatal care in urban Phnom Penh, Cambodia. Objectives: The aim of this study is to examine the utilities of antenatal care with comprehensive health education qualified in Phnom Penh for the health of mothers and infants during perinatal and postpartum periods. Attention was given to the existing socioeconomic disparties among women in this urban area, and the utilities were discussed irrespective of socioeconomic status. Methods: A total of 436 pregnant women in an urban area in Phnom Penh were selected using a complete survey in randomly sampled villages and were followed up. Participating in antenatal care with comprehensive health education at least three time was regarded as the use of "qualified antenatal care" during pregnancy. In this study, we investigated the independent associations of the use of qualified antenatal care with the following outcome variables after the adjustment for the influence of socieconomic variables: postpartum maternal health knowledge, postpartum maternal anemia, low birth weight, and infant immunization. Results: Of the 314 subjects who completed the follow-up examination, 66.8% used qualified antenatal care during pregnancy. The use of qualified antenatal care was positively associated with postpartum maternal health knowledge (OR=2.38, 95% CI: 1.12-5.05). and reductions in the incidences of postpartum anemia (OR=0.22,95% CI: 0.05-0.95) and low birth weight (OR=0.05,95% CI: 0.01-0.39) after the adjustment of the influence of socioeconomic status. The infants born to mothers who used qualified antenatal care had significantly higher coverage of BCG, DPT(1), and DTP(3) immunizations (P&lt;0.001,P&lt;0.001, andP&lt;0.01, respectively), independent of their socioeconomic conditions. Conclusion: This study shows the solid utilities of qualified antenatal care in Phnom Penh for perinatal health. Answer: Yes, antenatal identification of small for gestational age (SGA) fetuses significantly improves their outcome. A structured antenatal surveillance program for fetuses identified as SGA results in a lower risk of adverse fetal outcome compared with cases of SGA fetuses not identified antepartum (PUBMED:15717289). This is supported by the finding that SGA fetuses not identified before delivery were characterized by a four-fold increased risk of adverse fetal outcome compared to those identified before delivery (PUBMED:15717289). Furthermore, non-identified SGA fetuses have an increased risk for stillbirth and severe newborn distress, while identified SGA fetuses have an increased risk for severe childhood outcome (PUBMED:37875267). This suggests that while identification of SGA fetuses can improve immediate perinatal outcomes, there may be long-term risks that require further investigation. Additionally, antenatal corticosteroid treatment in singleton, SGA infants born at 24-31 weeks' gestation is associated with decreased mortality and composite adverse outcome, similar to the effect in non-SGA neonates (PUBMED:26552861). This indicates that antenatal interventions such as corticosteroid treatment can improve outcomes for preterm SGA neonates. However, the timing of antenatal care and interventions is crucial. Inadequate timing of antenatal corticosteroids can limit their benefit on neonatal outcomes (PUBMED:35250380), and a timely administration of antenatal steroids is highly protective against intraventricular hemorrhage in very low birth weight infants (PUBMED:35372176). In summary, antenatal identification of SGA fetuses, along with appropriate timing and administration of antenatal care and interventions, significantly improves their outcome, reducing the risk of adverse fetal and neonatal complications.
Instruction: Is inguinal exploration for the impalpable testis an outdated operation? Abstracts: abstract_id: PUBMED:29132799 Surgical exploration for impalpable testis: Which should be first, inguinal exploration or laparoscopic abdominal exploration? Purpose: To discuss an optimal surgical approach for impalpable testis in children, our own treatment results and those reported in the literature were reviewed. Materials And Methods: Seventy-two impalpable testes were diagnosed in 68 patients: unilateral in 64 patients and bilateral in 4 patients. All patients underwent surgical exploration at the ages of 6 to 140months (median, 15months). The inguinal canal was initially explored, and abdominal exploration was performed with laparoscopy when an extra-abdominal testis was not identified. In addition, articles regarding surgical exploration for impalpable testis, published over the last 20years, were retrieved and the results were examined. Results: Testes were detected by inguinal exploration in 28 of 72 (39%) impalpable testes: intracanalicular in 22 testes and at the internal inguinal ring (peeping or low abdominal testis) in 6 testes. All these testes were treated by conventional inguinal orchidopexy. Laparoscopic exploration was performed in 44 (61%) impalpable testes, and 4 (5.6%) high abdominal testes were detected and treated by two-stage Fowler-Stephens orchidopexy. Vanishing or absent testis was the final diagnosis in the remaining 40 testes (55.6%). The literature review showed that the ratios of intra- and extra-abdominal testes were lower in the articles that reported the results of inguinal or scrotal exploration than in those of laparoscopic exploration, although the difference was not significant. Conclusions: Considering the relatively low incidence of high abdominal testis, we recommend to start with inguinal exploration for impalpable testis. When an extra-abdominal testis is not detected, transinguinal laparoscopic exploration should be indicated. Level Of Evidence: Treatment study, Level IV. abstract_id: PUBMED:8814856 Is inguinal exploration for the impalpable testis an outdated operation? Objective: To evaluate the reliability of open inguinal exploration for the investigation of the impalpable testis by reviewing laparoscopic findings after previous negative inguinal exploration. Patients And Methods: Laparoscopy was performed in 18 boys (with a total of 22 impalpable testes) in whom previous inguinal exploration in their referring hospitals had yielded negative or inconclusive findings. Subsequent management of the impalpable testes was based on laparoscopic findings. Results: Despite previous negative surgical findings at inguinal exploration, laparoscopy revealed that 13 of the 22 impalpable testes (59%) were present, 12 within the abdomen and one in the inguinal canal. Absence of the remaining nine testes was positively confirmed by visualizing confluent blind ending vas and vessels. Conclusion: Inguinal exploration is an unreliable method of investigating the impalpable testis, with an unacceptably high incidence of false-negative or inconclusive findings. Laparoscopy should be undertaken as the initial diagnostic manoeuvre of choice. abstract_id: PUBMED:26566489 Use of laparoscopy as the initial surgical approach of impalpable testes: 10-year experience. Aim: To review the experience in the management of impalpable testes using laparoscopy as the initial approach and the need for inguinal exploration. Methods: From January 2004 to June 2014, 339 patients with undescended testes underwent operation in our institute. Fifty patients (15%) had impalpable testes. All children with impalpable testes underwent initial laparoscopy. A retrospective review was conducted on this group of patients and the outcome was analyzed. Results: Forty children had unilateral impalpable testis. Ten children had bilateral impalpable testes. Thirty-one children (78%) in the unilateral group underwent subsequent inguinal exploration while 4 children (40%) in the bilateral group underwent inguinal exploration (P &lt; 0.05). Orchidopexy was performed in 16 children (40%) in the unilateral group and 9 children (90%) in the bilateral group (P &lt; 0.05). Regarding the 24 children with unilateral impalpable testis and underwent orchidectomy for testicular nubbin (n = 19) or atrophic testes (n = 2) or has vanishing testes (n = 3); contralateral testicular hypertrophy was noticed in 10 (41%). No intra-operative complication was encountered. Two children after staged Fowler-Stephens procedure and 1 child after inguinal orchidopexy had atrophic testes. Conclusion: The use of laparoscopy in children with impalpable testes is a safe procedure and can guide the need for subsequent inguinal exploration. Children with unilateral impalpable testis were associated with an increased need for inguinal exploration after laparoscopy. Orchidopexies could be performed successfully in 90% of children with bilateral impalpable testes. abstract_id: PUBMED:12703362 Impalpable testis: complete inguinal exploration by laparoscopy Objective: To prospectively evaluate an inguinal approach to surgery for impalpable testis possibly assisted by laparoscopy during the same operation. Material And Methods: From January 1996 to March 2002 each case of unilateral impalpable testis was operated via an inguinal incision and laparoscopy was performed during the same procedure whenever the inguinal canal was devoid of any testicular structures. Sixty boys were included in this study. Anatomical features were evaluated and testes were classified according to their trophicity and their site. We evaluated the number of laparoscopies and the factors justifying this procedure. Orchidopexy was performed via an inguinal approach with extensive retroperitoneal dissection to release spermatic vessels, when necessary. A one-stage Fowler-Stephens operation was performed when the vascular pedicle remained too short despite this dissection. The results of orchidopexy were evaluated by physical examination with a minimum follow-up of 4 months. Results: In this series of 60 cases, we identified 19 viable testes. Fourteen were situated in the abdomen, 3 in the inguinal canal, and 2 in a perineal ectopic position. The other cases were considered to present testicular atrophy. Laparoscopy was performed in 23 boys (38%) in whom no viable or atrophic testis was demonstrated via the inguinal incision. Sixteen of the 19 viable testes were descended by preserving the spermatic vessels with a good result. Two Fowler-Stephens operations gave a good result. One testis was removed. Conclusion: An inguinal incision, possibly completed by laparoscopy, appears to be a simple and reliable method of evaluation and treatment of impalpable testis. abstract_id: PUBMED:17853013 Impalpable testis: laparoscopy or inguinal canal exploration? Objective: There is great controversy regarding the best approach to impalpable testis. In this retrospective study we evaluated the accuracy of intraperitoneal laparoscopy, inguinal canal exploration and ultrasound in the diagnosis of impalpable testis. Material And Methods: Over a 2-year period (2004-06), 76 patients with a diagnosis of uni- or bilateral clinically impalpable testis were referred to our center. A preoperative ultrasound evaluation was done and all patients underwent intraperitoneal laparoscopy. An inguinal canal incision and exploration was done in all cases except for patients with high intra-abdominal testes (&gt; 2 cm above the internal ring), for whom laparoscopic dissection, mobilization and orchidopexy were necessary. Results: The mean age of the patients was 15.36 years (range 1-39 years). The undescended testis (UDT) was right-sided in 25% of patients, left-sided in 41% and bilateral in 34%. Intraperitoneal laparoscopy, inguinal canal exploration and ultrasound detected 70.6%, 78.4% and 15.6% of testes, respectively. Conclusions: Laparoscopy changed the management protocol of impalpable UDT in only 21% of cases. Inguinal canal exploration alone was sufficient in the majority of our cases. We recommend laparoscopy for those patients with impalpable UDT in whom an initial inguinal canal exploration proves negative. abstract_id: PUBMED:12809907 Role of laparoscopy in patients with previous negative exploration for impalpable testis. Objectives: To evaluate the impact of laparoscopy in the treatment of patients with previous negative inguinal exploration for impalpable undescended testes. Methods: A retrospective review was performed of patients who underwent laparoscopy after prior incomplete or "questionable" negative inguinal explorations for nonpalpable undescended testes. Results: Twenty-seven males aged 1 to 22 years old, with 30 impalpable testes, underwent laparoscopy after prior negative open exploration. The operative notes were available for all patients. In 9 patients (33%), blind-ending spermatic vessels, vas deferens, and a closed ring were found. However, in the remaining 18 patients (67%), viable intra-abdominal or inguinal gonads (9 patients) and intra-abdominal remnants (9 patients) were identified. In 1 patient, seminoma of an intra-abdominal testis was present. Conclusions: When a patient is referred with an inconclusive previous open exploration for an impalpable gonad, laparoscopy has an important role in establishing or refuting the diagnosis of an absent testis. Moreover, therapeutic laparoscopy provides definitive options to standard open techniques when a viable testis or remnant is encountered. In this study, laparoscopy proved more accurate than open exploration with operative notes available defining the nature of the exploration. abstract_id: PUBMED:24661449 Role of laparoscopy and ultrasound in the management of "impalpable testis" in children. Background: Undescended testes is one of the most common congenital abnormalities in boys. In cases of impalpable testes, ultrasound is often used to find the testis, which frequently provides false-negative results. Recently, laparoscopy has become popular in the management of impalpable testes. Methods: Retrospective study of all children with impalpable testes presenting for laparoscopy between August 2007 and July 2011 who had undergone ultrasound examinations without localizing the testes was conducted and the role of laparoscopy in diagnosing impalpable testes was evaluated. Results: Twenty-three patients presented with impalpable testes for laparoscopy. All patients underwent ultrasound examinations in which the testes could not be identified. Of the 23 patients, Five patients were found to have palpable testes in the superficial inguinal pouch under anesthesia and proceeded to conventional open exploration during which the testes were brought into the scrotum. Eighteen patients were found to have impalpable testes in an evaluation under anesthesia (EUA) and proceeded to laparoscopy. Twelve patients were found to have intra-abdominal testes and underwent laparoscopic-assisted orchidopexy. Three patients underwent a two-stage Fowler-Stephens procedure, and two patients with "vanishing" testes with the vas and atrophic vessels entering a closed internal ring proceeded to open exploration and orchidectomy for atrophic testes. In addition, a teenager with atrophic testes underwent laparoscopic orchidectomy. Conclusions: Laparoscopy is superior to ultrasound in the management of impalpable testes when high-resolution ultrasound is not available during the diagnostic process, with respect to both the sensitivity of localizing the testis and being more time and cost effective. abstract_id: PUBMED:30112376 Impalpable Testis: Evaluation of Diagnostic and Treatment Procedures and Our Treatment Protocol. Introduction: The aim of this study is to present our treatment protocol for impalpable testis. Material And Methods: In a retrospective study we analyzed clinical data including diagnostic procedures, intraoperative findings, final diagnosis, treatment modality, and outcome of patients with impalpable testis who underwent surgery from January 2010 until December 2015. Results: Ninety-one patients were admitted under the diagnosis of impalpable testis. In 39 patients ultrasound detected testis in the inguinal canal and orchidopexy was done. In 25 patients (48.08%) laparoscopy showed the entrance of the spermatic cord into the inguinal canal. Open exploration of the inguinal canal was done, testicular remnant removed, and appropriate testicular prosthesis implanted. Twenty patients (20/52) underwent orchidopexy of the abdominal testis (46.51%), 4 of which underwent Fowler-Stevens procedure in two stages, and in 16 patients deliberation of the testis and spermatic cord was sufficient to place the testis into the scrotum. Conclusions: Excision of the testicular nubbin is highly recommendable, as well as implantation of the testicular prosthesis at the time of orchiectomy. abstract_id: PUBMED:9361605 Laparoscopy for the impalpable testis. Background: Use of laparoscopy in the management of the impalpable testis remains controversial. Localization of the testis may help plan or obviate the need for groin exploration. This study reviews the need for inguinal exploration with respect to laparoscopic findings, particularly of vas and vessels entering a closed deep inguinal ring. Method: Case notes of boys undergoing laparoscopy for undescended testes were reviewed retrospectively. Results: Of 86 impalpable testes, 32 were intra-abdominal and ten were absent with intra-abdominal blind ending vas and vessels. In 17 instances the vas and vessels entered an open internal ring and in 26 a closed internal ring. In one boy neither vas, vessels nor testis were visualized. Of the 26 impalpable testes with a closed internal ring, excision of testicular remnants in 18 revealed no histological testicular parenchyma, one boy had bilateral perineal ectopic testes missed clinically and six were not explored. Conclusion: The laparoscopic finding of vas and vessels entering a closed deep inguinal ring should prompt a careful examination for an ectopic testis. If a palpable testis can be ruled out, inguinal exploration is not necessary, as viable testicular parenchyma is rarely found. Laparoscopy would have avoided negative exploration in 42 per cent of impalpable testes in this series. abstract_id: PUBMED:1684136 Management of an impalpable testis: the role of laparoscopy. The use of laparoscopy through a small infraumbilical incision permits noninvasive exploration of the pelvic fossae. The vas deferens as well as the spermatic vessels can be identified either with the testicle or as a blind ending. Twenty-six laparoscopies were performed on 19 patients with unilateral impalpable testis and 7 with bilateral impalpable testis. In this small series, laparoscopy was found to be the best method of locating the impalpable testis, and, depending on the laparoscopic findings, no further surgery or a simple inguinal exploration or abdominal incision is needed for the management of this condition. Answer: Inguinal exploration for the impalpable testis is not necessarily outdated, but its reliability and role in the management of impalpable testis have been questioned and compared to laparoscopic methods. Some studies suggest that inguinal exploration may have a high incidence of false-negative or inconclusive findings, leading to the recommendation that laparoscopy should be the initial diagnostic maneuver of choice (PUBMED:8814856). However, other studies and reviews have shown that inguinal exploration can still be a valuable first step, especially when combined with laparoscopy if needed (PUBMED:29132799; PUBMED:12703362). The literature presents a variety of approaches, with some advocating for initial laparoscopy due to its high diagnostic accuracy and ability to guide subsequent need for inguinal exploration (PUBMED:26566489; PUBMED:30112376). Laparoscopy has been found to be a safe procedure that can successfully perform orchidopexies in a significant number of cases, especially in bilateral impalpable testes (PUBMED:26566489). Additionally, laparoscopy has been shown to be superior to ultrasound in the management of impalpable testes when high-resolution ultrasound is not available (PUBMED:24661449). On the other hand, some studies have found that inguinal exploration alone was sufficient in the majority of cases, recommending laparoscopy only when initial inguinal exploration is negative (PUBMED:17853013). Furthermore, laparoscopy has been shown to be more accurate than open exploration in patients with previous negative inguinal explorations (PUBMED:12809907). In conclusion, while inguinal exploration for the impalpable testis is not completely outdated, its role has evolved. It may still be used as a first step in certain protocols, but laparoscopy has become an important tool that can either complement inguinal exploration or serve as the initial approach due to its diagnostic accuracy and therapeutic capabilities. The choice between the two methods may depend on the specific circumstances of the case, the surgeon's expertise, and the available resources.
Instruction: Does depression influence symptom severity in irritable bowel syndrome? Abstracts: abstract_id: PUBMED:14508041 Does depression influence symptom severity in irritable bowel syndrome? Case study of a patient with irritable bowel syndrome and bipolar disorder. Objective: Irritable bowel syndrome (IBS) is frequently associated with mood disorder. However, it is typically difficult to distinguish between disturbed mood as a causal agent and disturbed mood as a consequence of the experience of IBS. This report considers the association between mood and symptom severity in a patient with diarrhea-predominant IBS and stable, rapid cycling bipolar disorder with a predominantly depressive course. Such a case provides an important opportunity to determine the direction of the relationship between mood and IBS symptom severity because the fluctuations of mood in bipolar disorder are assumed to be driven largely by biological, rather than psychosocial, processes. Methods: The study was carried out prospectively, with ratings of mood and IBS symptom severity made daily by the patient for a period of almost 12 months. Results: The patient experienced regular and substantial changes in mood as well as fluctuations in the level of IBS symptoms during the study period. Contrary to expectation, the correlation between mood and IBS symptom severity on the same day suggested that the patient experienced less severe IBS symptoms during periods of more severe depression. However, time series analysis revealed no significant association between these two processes when serial dependence within each series was controlled for. Conclusions: The unusual co-occurrence of IBS with bipolar disorder provides direct evidence to indicate that depression does not necessarily lead to an increase in the reported severity of IBS, at least in the context of bipolar disorder, and may under certain circumstances actually be associated with a reduction in the severity of IBS symptoms. Factors that might moderate the relationship between depression and symptom severity are discussed. abstract_id: PUBMED:28104632 Visceral hypersensitivity is associated with GI symptom severity in functional GI disorders: consistent findings from five different patient cohorts. Objective: Our aim was to evaluate the association between visceral hypersensitivity and GI symptom severity in large cohorts of patients with functional GI disorder (FGID) and to adjust for psychological factors and general tendency to report symptoms. Design: We included five cohorts of patients with FGIDs (IBS or functional dyspepsia; n=1144), who had undergone visceral sensitivity testing using balloon distensions (gastric fundus, descending colon or rectum) and completed questionnaires to assess GI symptom severity, non-GI somatic symptoms, anxiety and depression. Subjects were divided into sensitivity tertiles based on pain/discomfort thresholds. GI symptom severity was compared between sensitivity tertiles in each cohort and corrected for somatisation, and anxiety and depression. Results: In all five cohorts, GI symptom severity increased gradually with increasing visceral sensitivity, with significant differences in GI symptom severity between the sensitivity tertiles (p&lt;0.0001), with small to medium effect sizes (partial η2: 0.047-0.11). The differences between sensitivity tertiles remained significant in all cohorts after correction for anxiety and depression, and also after correction for non-GI somatic symptom reporting in all of the cohorts (p&lt;0.05). Conclusions: A gradual increase in GI symptom severity with increasing GI sensitivity was demonstrated in IBS and functional dyspepsia, which was consistent across several large patient groups from different countries, different methods to assess sensitivity and assessments in different parts of the GI tract. This association was independent of tendency to report symptoms or anxiety/depression comorbidity. These findings confirm that visceral hypersensitivity is a contributor to GI symptom generation in FGIDs. abstract_id: PUBMED:20433401 Do psychological factors predict symptom severity in patients with subjective food hypersensitivity? Objective: We examined whether psychological factors such as general and gastrointestinal symptom-specific anxiety and depression could predict symptom severity in patients with unexplained, self-reported (subjective) food hypersensitivity. For the purpose, we translated and validated the Visceral Sensitivity Index (VSI). Material And Methods: Seventy consecutive patients completed questionnaires for Hospital Anxiety and Depression Scale, VSI, Irritable Bowel Syndrome Symptom Questionnaire, and Subjective Health Complaints Inventory. Relationship between scores on psychological factors and scores on somatic symptoms were studied by multiple regression analyses. Results: Most patients reported non-gastrointestinal symptoms in addition to their irritable bowel syndrome complaints, but general and symptom-specific anxiety, and depression could not explain a significant amount of the variance in somatic complaints. Gastrointestinal symptom-specific anxiety was a significant predictor of gastrointestinal complaints (p = 0.02), and age was the sole significant predictor of non-gastrointestinal complaints (p = 0.01). Approximately 90% of the total variance in symptom severity remained unexplained by the psychological factors. The Norwegian version of the VSI had satisfactory validity (Cronbach alfa = 0.93). Symptom-specific and general anxiety were significantly correlated (r = 0.48, p &lt; or = 0.0001). Conclusions: Psychological factors were not major predictors of symptom severity in patients with subjective food hypersensitivity. The Norwegian version of VSI had satisfactory validity. abstract_id: PUBMED:36740788 Gastrointestinal-Specific symptom anxiety in patients with gastroparesis: Relationships to symptom severity and quality of life. Background: Gastrointestinal (GI)-specific anxiety has been identified as a treatment target in irritable bowel syndrome. However, GI-specific anxiety has been understudied in other GI functional/motility disorders. Among adults with gastroparesis, we aimed to: (1) initially validate a measure of GI-specific anxiety, the Visceral Sensitivity Index (VSI); and (2) evaluate the relationship between GI-specific anxiety and gastroparesis symptom severity and quality of life, compared to measures of anxiety, depression, and somatization. Methods: Consecutive adult patients (N = 100) with gastroparesis presenting for initial consultation completed a series of self-report measures including the VSI. We conducted a confirmatory factor analysis of the VSI one-factor structure and tested internal consistency and convergent validity. We then performed hierarchical linear regression analyses to explore associations between VSI and gastroparesis symptom severity and overall quality of life. Key Results: Confirmatory factor analysis revealed that the original VSI one-factor structure overall fit well [χ2 (90) = 220.1, p &lt; 0.0001; SRMR = 0.08; RMSEA = 0.12; CFI = 0.96]. The VSI also had excellent internal consistency (α = 0.99) and convergent validity (r = 0.29-0.56; all p &lt; 0.01). Higher GI-specific anxiety was significantly associated with greater gastroparesis symptom severity, including nausea/vomiting, fullness/satiety, and upper abdominal pain scores beyond depression, anxiety, or somatization (all p = &lt;0.01-0.01). Additionally, higher GI-specific anxiety was significantly associated with lower mental health-related quality of life, beyond gastroparesis symptom severity, depression, anxiety, or somatization (p = 0.01). Conclusions & Inferences: The VSI is an adequate measure of GI-specific anxiety in patients with gastroparesis. Higher GI-specific anxiety was associated with increased patient-reported gastroparesis symptom severity and decreased quality of life, beyond depression/anxiety. abstract_id: PUBMED:16469592 Predictors of symptom severity in patients with chronic prostatitis and interstitial cystitis. Purpose: Numerous studies have been performed to identify potential risk factors for CP/CPPS and IC. However, few studies have been done to identify predictors of disease severity. Materials And Methods: A total of 174 men with CP/CPPS and 111 women with IC completed questionnaires to quantify symptom severity and identify demographic, medical and psychosocial characteristics. Symptom severity was assessed with the National Institutes of Health CPSI in men, and the O'Leary-Sant ICSI and problem index in women. Univariate and multivariate analyses were performed to identify characteristics predictive of worse symptoms. Results: The mean National Institutes of Health CPSI score in men was 15.32, and the mean O'Leary-Sant ICSI and problem index in women was 19.17. The most commonly reported comorbidities were allergies, sinusitis, erectile dysfunction and irritable bowel syndrome in men, and allergies, urinary incontinence, sinusitis and irritable bowel syndrome in women. In the 2 sexes self-reported urinary frequency and urgency, worse depression scores and lower education level were independent predictors of worse symptom severity. In men additional independent predictors were self-reported pelvic pain, fibromyalgia and previous heart attack, and in women an additional independent predictor was postmenopausal status. Conclusions: There are several common medical conditions associated with urological pelvic pain syndromes in men and women. Few of them were predictive of symptoms severity in this analysis. Self-reported pelvic pain symptoms, education and depression severity were the factors most strongly predictive of symptom severity in patients with CP/CPPS and IC. abstract_id: PUBMED:36511394 Symptom management needs of patients with irritable bowel syndrome and concurrent anxiety and/or depression: A qualitative study. Aims: To understand the experiences and needs of symptom management among individuals with irritable bowel syndrome and concurrent symptoms of anxiety and/or depression. Design: This study used a qualitative descriptive research design. Methods: Individuals with a diagnosis of irritable bowel syndrome and concurrent symptoms of anxiety and/or depression participated were recruited through an online ResearchMatch and a listserv. Semi-structured interviews focused on symptoms and experiences with symptom management interventions conducted from June to August 2020. Interviews were transcribed and data were analysed based on thematic analysis. Results: Twelve individuals participated in this study; all reported current irritable bowel syndrome and anxiety/depression symptoms. The data analysis cumulated with three themes related to symptom management: (a) irritable bowel syndrome negatively impacts physical and mental well-being; (b) a trial and error approach to symptom management; and (c) challenges with healthcare professionals supporting symptom management including negative interactions with healthcare professionals and lack of nutritional expertize and support. Conclusion: There is a need for individualized approaches which consider patients' current symptoms of anxiety and depression, previous experiences with the trial-and-error process and consideration for intervention delivery methods. Impact: There is a limited qualitative research focusing on the experiences of individuals with irritable bowel syndrome and concurrent symptoms of anxiety and/or depression. This research highlights the need for individualized approaches to enhance symptom management that acknowledges patients' psychological state and past negative experiences with providers and prior dietary regimens. abstract_id: PUBMED:25964226 Anxiety and Depression Increase in a Stepwise Manner in Parallel With Multiple FGIDs and Symptom Severity and Frequency. Objectives: Anxiety and depression occur frequently in patients with functional gastrointestinal disorders (FGIDs), but their precise prevalence is unknown. We addressed this issue in a large cohort of adult patients and determined the underlying factors. Methods: In total, 4,217 new outpatients attending 2 hospitals in Hamilton, Ontario, Canada completed questionnaires evaluating FGIDs and anxiety and depression (Hospital Anxiety and Depression scale). Chart review was performed in a random sample of 2,400 patients. Results: Seventy-six percent of patients fulfilled Rome III criteria for FGIDs, but only 57% were diagnosed with FGIDs after excluding organic diseases, and the latter group was considered for the analysis. Compared with patients not meeting the criteria, prevalence of anxiety (odds ratio (OR) 2.66, 95% confidence interval (CI): 1.62-4.37) or depression (OR 2.04, 95% CI: 1.03-4.02) was increased in patients with FGIDs. The risk was comparable to patients with organic disease (anxiety: OR 2.12, 95% CI: 1.24-3.61; depression: OR 2.48, 95% CI: 1.21-5.09). The lowest prevalence was observed in asymptomatic patients (OR 1.37; 95% CI 0.58-3.23 and 0.51; 95% CI 0.10-2.48; for both conditions, respectively). The prevalence of anxiety and depression increased in a stepwise manner with the number of co-existing FGIDs and frequency and/or severity of gastrointestinal (GI) symptoms. Psychiatric comorbidity was more common in females with FGIDs compared with males (anxiety OR 1.73; 95% CI 1.35-2.28; depression OR 1.52; 95% CI 1.04-2.21). Anxiety and depression were formally diagnosed by the consulting physician in only 22% and 9% of patients, respectively. Conclusions: Psychiatric comorbidity is common in patients referred to a secondary care center but is often unrecognized. The prevalence of both anxiety and depression is influenced by gender, presence of organic diseases, and FGIDs, and it increases with the number of coexistent FGIDs and frequency and severity of GI symptoms. abstract_id: PUBMED:33374779 Habitual FODMAP Intake in Relation to Symptom Severity and Pattern in Patients with Irritable Bowel Syndrome. Restricting intake of FODMAPs (Fermentable Oligo-, Di-, Monosaccharides and Polyols) is used as treatment for irritable bowel syndrome (IBS). However, whether habitual FODMAP consumption correlates to symptom severity, and if this relationship differs among IBS subtypes, is unclear. The aim was to study the relationship between habitual FODMAP intake and symptom severity. A total of 189 patients with IBS-IBS with constipation (IBS-C) n = 44 (22.3%), IBS with diarrhea (IBS-D) n = 54 (27.4%), mixed IBS (IBS-M) n = 46 (23.4%) and unsubtyped IBS (IBS-U) n = 46 (23.4%)-recorded food intake during four days. Symptom severity was measured with the IBS severity scoring system (IBS-SSS). For FODMAP intake, a lower lactose intake was noted among women with IBS-D, p = 0.009. In women, there was a statistically significant relationship between energy-adjusted FODMAP intake and IBS-SSS (r = 0.21, p = 0.003). This was mainly driven by the subtype IBS-U, where excess fructose intake accounted for 19.9% of explained variance in IBS-SSS (p = 0.007). This study demonstrates small differences in FODMAP intake among IBS patients with different subtypes. Association between IBS symptoms and FODMAP intake was most prominent in unsubtyped IBS. However, patients who are intolerant to certain FODMAPs may already have reduced their FODMAP intake, and this warrants future cohort or experimental studies to uncover. abstract_id: PUBMED:20629976 The Patient Health Questionnaire 12 Somatic Symptom scale as a predictor of symptom severity and consulting behaviour in patients with irritable bowel syndrome and symptomatic diverticular disease. Background: Anxiety, depression and nongastrointestinal symptoms are often prominent in irritable bowel syndrome (IBS), but their relative value in patient management has not been quantitatively assessed. We modified the Patient Health Questionnaire 15 (PHQ-15) by excluding three gastrointestinal items to create the PHQ-12 Somatic Symptom (PHQ-12 SS) scale. Aims: To compare the value of the PHQ-12 SS scale with the Hospital Anxiety and Depression (HAD) scale in predicting symptoms and patient behaviour in IBS and diverticular disease. Methods: We compared 151 healthy volunteers (HV), 319 IBS patients and 296 patients with diverticular disease (DD), 113 asymptomatic [ASYMPDD] and 173 symptomatic DD (SYMPDD). Results: Patient Health Questionnaire 12 SS scores for IBS and SYMPDD were significantly higher than HV. Receiver-operator curves showed a PHQ-12 SS &gt;6, gave a sensitivity for IBS of 66.4% with a specificity of 94.7% and a positive likelihood ratio (PLR) = 13.2, significantly better than that associated with an HAD anxiety score &gt;7, PLR = 3.0 and depression score &gt;7 PLR = 6.5. PHQ-12 SS correlated strongly with IBS severity scale and GP visits in both IBS and DD. Conclusion: The PHQ-12 SS scale is a useful clinical tool which correlates with patient behaviour in both IBS and symptomatic DD. abstract_id: PUBMED:31198507 Relations between Coping Skills, Symptom Severity, Psychological Symptoms, and Quality of Life in Patients with Irritable Bowel Syndrome. Background: Irritable bowel syndrome (IBS) is one of the most common functional gastrointestinal disorders with significant impact on quality of life (QOL). Considering the role of stress in the clinical course of IBS, we investigated associations between stress coping skills and symptoms and QOL in IBS patient. Methods: A cross-sectional study was conducted on 95 IBS patients referring to tertiary care centers. Coping skills (Jalowiec coping scale), IBS symptom severity scale, disease-specific QOL (IBS-QOL), and symptoms of depression and anxiety (Hospital Anxiety and Depression Scale [HADS]) were evaluated by questionnaires. Bivariate and multivariate analyses were performed to investigate association among these parameters. Results: Disease severity was positively correlated with emotive (r = 0.30) and fatalistic (r = 0.41) and negatively correlated with optimistic (r = -0.25) and confrontive (r = -0.24) coping strategies. Psychological dysfunction (total HADS score, B [95% (confidence interval) CI] = 2.61 [0.001-5.21]) and fatalistic coping (B [95% CI] = 35.27 [0.42-70.13]) were significant predictors of IBS severity. Conclusions: However, IBS patients involved in this study utilized adaptive coping strategies more frequently. Our study showed that use of maladaptive coping strategies had positive correlation with symptom severity and degree of anxiety and depression among patients, while implementation of optimistic strategies were found to be negatively correlated to severity of symptoms and also utilization of adaptive coping styles was associated with lesser degree of anxiety and depression. Answer: Depression does not necessarily lead to an increase in the reported severity of irritable bowel syndrome (IBS) symptoms. In a case study of a patient with diarrhea-predominant IBS and stable, rapid cycling bipolar disorder with a predominantly depressive course, it was found that the patient experienced less severe IBS symptoms during periods of more severe depression. Time series analysis revealed no significant association between mood and IBS symptom severity when serial dependence within each series was controlled for (PUBMED:14508041). However, the relationship between psychological factors and IBS symptom severity is complex. In a study examining the relationship between visceral hypersensitivity and GI symptom severity in functional GI disorders, it was found that GI symptom severity increased with increasing visceral sensitivity, and this association was independent of the tendency to report symptoms or anxiety/depression comorbidity (PUBMED:28104632). Another study found that psychological factors were not major predictors of symptom severity in patients with subjective food hypersensitivity, although gastrointestinal symptom-specific anxiety was a significant predictor of gastrointestinal complaints (PUBMED:20433401). In patients with gastroparesis, another GI functional/motility disorder, GI-specific anxiety was significantly associated with greater symptom severity and lower quality of life, beyond depression, anxiety, or somatization (PUBMED:36740788). Similarly, in patients with chronic prostatitis and interstitial cystitis, depression severity was predictive of symptom severity (PUBMED:16469592). A qualitative study on individuals with IBS and concurrent symptoms of anxiety and/or depression highlighted the need for individualized approaches to symptom management that consider patients' psychological state (PUBMED:36511394). Another study found that the prevalence of anxiety and depression increased in a stepwise manner with the number of co-existing FGIDs and frequency and/or severity of GI symptoms (PUBMED:25964226). In summary, while depression does not necessarily increase IBS symptom severity, psychological factors, including depression and anxiety, can be associated with symptom severity in IBS and other functional GI disorders. The relationship is influenced by various factors, including visceral hypersensitivity and coping strategies, and requires individualized management approaches.
Instruction: Are neuroendocrine cells of practical value as an independent prognostic parameter in prostate cancer? Abstracts: abstract_id: PUBMED:7613832 Are neuroendocrine cells of practical value as an independent prognostic parameter in prostate cancer? Objective: To assess whether the presence of neuroendocrine (NE) cells is of value as an independent indicator of poor prognosis in patients with prostate carcinoma. Patients And Methods: A series of 160 consecutive patients with prostate carcinoma was studied retrospectively. In 120 there was sufficient tissue for review and to perform immunoperoxidase stains for neuron specific enolase (NSE) and chromogranin A (CGA). All patients had a potential follow-up of at least 5 years. Results: Five-year survival was poorer for patients with a high tumour grade and stage at presentation compared to those with a lower grade and stage. NE cells were more common in higher grade and stage disease, but 5-year survival did not differ significantly between patients with NE cell positive and negative tumours. Conclusion: NE cells are of no practical value as an independent prognostic indicator in patients with prostatic adenocarcinoma. abstract_id: PUBMED:22767265 Neuroendocrine differentiation does not have independent prognostic value in conservatively treated prostate cancer. In vitro studies have implicated neuroendocrine differentiation in the development of hormone resistant prostate cancer following administration of androgen blockers. Studies on clinical material are equivocal. We wished to understand the significance of neuroendocrine differentiation in our large and well-characterised cohort of clinically localised prostate cancer, treated conservatively. Immunohistochemical expression of chromogranin-A was assessed semi-quantitatively on tissue samples of 806 patients in a tissue microarray approach. The correlation of expression with 10-year prostate cancer survival was examined. Multivariate analysis including contemporary Gleason score was performed and sub-group analysis of early hormone treated patients was also undertaken. Chromogranin-A expression correlated with high Gleason score (χ(2) = 28.35, p &lt; 0.001) and early prostate cancer death (HR = 1.61, 95 %CI = 1.15-2.27, p &lt; 0.001). In univariate analysis, NE differentiation correlated significantly with outcome (HR = 1.61, 95 % CI 1.15-2.27, p &lt; 0.001) However in multivariate analysis including Gleason score, chromogranin-A expression was not an independent predictor of survival (HR = 0.97, 95 %CI = 0.89-1.37, p = 0.87). Although chromogranin-A expression was higher in patients with early hormone therapy (χ(2) = 7.25, p = 0.007), there was no association with prostate cancer survival in this sub-group (p = 0.083). Determination of neuroendocrine differentiation does not appear to have any bearing on the outcome of prostatic carcinoma and does not add to the established prognostic model. abstract_id: PUBMED:9186347 The prognostic value of neuroendocrine differentiation in adenocarcinoma of the prostate in relation to progression of disease after endocrine therapy. Purpose: We evaluated the prognostic impact of neuroendocrine differentiation in prostate cancer with regard to the onset of endocrine therapy failure. Materials And Methods: A retrospective study was performed on 72 transurethral resection specimens from patients who subsequently underwent endocrine therapy for prostate cancer and were followed for 44 to 95 months. Progression-free interval was recorded. Distribution pattern and proportion of neuroendocrine cells were examined in transurethral resection specimens. Neuroendocrine cells were identified based on immunoreactivity for chromogranin A. Results: Of 32 patients with progressive disease 27 died of prostate cancer. Chromogranin A positive cells were found in 40 of the 72 prostate cancers (55%). In a Cox proportional hazards analysis neuroendocrine differentiation of the tumor showed a negative correlation with progression-free survival (p = 0.022), which proved to be independent of the Gleason score (p &lt; 0.001). Conclusions: Our results support the view that neuroendocrine differentiation in prostatic adenocarcinomas is a prognostic factor for progressive disease under subsequent endocrine therapy. This prognosticator acts independently of tumor grade. abstract_id: PUBMED:1913066 Neuro-endocrine cells--a new prognostic parameter in prostate cancer. Neuro-endocrine cells are a recognised component of prostatic ducts and acini. Half of all clinically manifest cancers show neuro-endocrine differentiation. Occult carcinomas have a lower incidence of such differentiation. Neuro-endocrine cells are of major prognostic importance and appear more reliable in predicting patients' survival than do conventional histological grading systems. abstract_id: PUBMED:7543077 The prognostic influence of neuroendocrine cells in prostate cancer: results of a long-term follow-up study with patients treated by radical prostatectomy. The distribution of immunohistochemically defined neuroendocrine (NE) cells in benign, pre-cancerous and neoplastic prostatic tissues and the prognostic value of these cells in prostate cancer were studied in the radical prostatectomy specimens of 90 patients from whom complete long-term follow-up data were available. The tissue blocks containing all the different Gleason patterns observed in a particular tumor were selected and immunostained. Since chromogranin B stained only a few cells compared to chromogranin A (CgA), NE cells were only defined by their reactivity with CgA. A semi-quantificative CgA score was assessed for all distinct pathological areas. Cox's regression model was used to analyze the influence of final TNM classification (TNM, 1992), Gleason sum score (GSS), age and CgA score on the probability of progression and tumor-specific death. NE cells were demonstrated in all normal prostatic tissues and in most hyperplastic and intra-epithelial neoplastic lesions. CgA staining was seen in 78% of the tumors. CgA scores were not related with Gleason growth patterns, GSS or TNM classification and had no prognostic value. The independent prognostic variables in Cox's regression model were: GSS and pT stage for progression and GSS for tumor-specific survival. Theoretically, NE cells could influence tumor behavior and this discrepancy suggests the need for experimental studies to investigate the role of NE cells in the normal and neoplastic prostate. abstract_id: PUBMED:24697032 Does neuroendocrine differentiation have prognostic value in prostate core needle biopsies? Aim: The biological behaviour of prostate cancer (PCa) varies significantly and cannot be, therefore, predicted. Better understanding of the mechanisms underpinning PCa oncogenesis and progression with its yet-to-be discovered poor prognostic factors is essential in order to optimise and tailor treatment to an individual patient. The aim of this paper was to investigate the association between the rate of focal PCa neuroendocrine activity, tumour cell proliferation index score, and the rate of PCa positive core needle biopsy results. Material And Methods: 92 men, with histologically confirmed PCa, which was clinically confined to the prostate and was graded with Gleason score &gt; or =7, had their core needle biopsies under transrectal ultrasonography guidance performed. The PCa neuroendocrine activity was immunohistochemically confirmed using antibodies against Chromogranin-A and neuron specific enolase. Results: The neuroendocrine activity was detected in 14 (13%) out of 92 PCa patients participating in the study. The proliferative index was not increased in non-cancerous prostate cells. There was no relationship between PCa neuroendocrine activity, the number and percentage of PCa positive biopsies, prostate volume, serum PSA concentration, and Gleason score found. Conclusions: No association between selected PCa prognostic factors and neuroendocrine activity could be found in patients with organ confined prostate cancer. abstract_id: PUBMED:9690663 Evaluation and clinical value of neuroendocrine differentiation in human prostatic tumors. Background: Prostate cancer, like other solid tumors, is a rather heterogeneous entity. More than 50% of all malignant prostatic tumors contain neuroendocrine-like cells, which cannot be attributed to small cell prostatic carcinoma or carcinoid-like tumors, which represent only 1-2% of all prostatic malignancies. Several investigators have reported that histopathologic determination of neuroendocrine differentiation in prostate carcinomas may have prognostic implications, while others have not confirmed these results. However, on the basis of experimental data, neuroendocrine-like cells appear to be involved in the emergence of androgen-independent cells and could be a target for new prostate cancer therapeutic strategies. Methods: The literature on the neuroendocrine phenotype of prostatic carcinoma is reviewed. This review summarizes most of the accumulated experimental and clinical data on the neuroendocrine phenotype in prostate cancer. We analyze the putative functions of neuroendocrine-like cells in prostate cancer progression and discuss the place of neuroendocrine phenotype biomarkers as diagnostic and prognostic factors in prostate cancer. Results: The fact that focal, patchy and heterogeneous clusters of neuroendocrine-like cells are frequently identified in organ-confined prostatic carcinoma probably accounts for the various evaluations of the predictive value of neuroendocrine histological patterns for the clinical outcome at this stage of the disease. The amount of neuroendocrine cells required to produce a detectable elevation in plasma chromogranin A has not yet been determined, but it is correlated with the number of chromogranin A-positive neuroendocrine (NE) cells. Despite the obvious current limitations of the application of neuropeptides as a serological test, this overview will try to more accurately define the possible roles of specific neuropeptides as prostatic cancer markers in diagnostic and monitoring protocols. The plasma chromogranin A level, in comparison with neuron-specific enolase (NSE), chromogranin B (CBG), pancreastatin, or secretogranin levels, appears to be the most useful neuroendocrine marker for determination of neuroendocrine differentiation of advanced prostatic adenocarcinoma. Conclusions: Future studies on neuroendocrine should confirm whether neuroendocrine biomarkers, especially the chromogranin family of peptides, can be used as prognostic markers during the course of prostate cancer or for the selection of patients suitable for evaluation of new antineoplastic drugs known to be active against specific and aggressive subpopulations of tumor cells. abstract_id: PUBMED:17440078 Androgen-independent growth and tumorigenesis of prostate cancer cells are enhanced by the presence of PKA-differentiated neuroendocrine cells. The neuroendocrine status of prostatic adenocarcinomas is considered a prognostic indicator for development of aggressive, androgen-independent disease. Neuroendocrine-like cells are thought to function by providing growth and survival signals to surrounding tumor cells, particularly following androgen ablation therapy. To test this hypothesis directly, LNCaP cells were engineered to inducibly express a constitutively activated form of the cyclic AMP-dependent protein kinase A catalytic subunit (caPKA), which was previously found upon transient transfection to be sufficient for acquisition of neuroendocrine-like characteristics and loss of mitotic activity. Clonal cells that inducibly expressed caPKA enhanced the growth of prostate tumor cells in anchorage-dependent and anchorage-independent in vitro assays as well as the growth of prostate tumor xenografts in vivo, with the greatest effects seen under conditions of androgen deprivation. These results suggest that neuroendocrine-like cells of prostatic tumors have the potential to enhance androgen-independent tumor growth in a paracrine manner, thereby contributing to progression of the disease. abstract_id: PUBMED:18392627 Ki67 staining index and neuroendocrine differentiation aggravate adverse prognostic parameters in prostate cancer and are characterized by negligible inter-observer variability. Introduction: This study aims to clarify whether neuroendocrine differentiation (NED) and/or proliferation activity assessed by means of Ki67 staining index (Ki67SI) might aggravate other established adverse prognostic parameters commonly used for predicting outcome in surgically treated prostate cancer, and to assess inter-observer variability in assigning NED and Ki67 SI. Material And Methods: A total of 528 patients surgically treated due to prostate cancer were evaluated in this study. Relevant data were retrospectively obtained by chart review. Immunostaining with antibodies directed against Chromogranin A and Ki67 was performed on archived surgical material, and was evaluated by two independent histopathologists blinded to the specimens. Surveying a median postsurgical follow-up of 46.4 months, postsurgical serum PSA-levels were regularly documented for identifying biochemical progress. Multivariate analysis was performed by means of the Cox regression hazards regression method to evaluate possible aggravations of established adverse prognostic parameters (nodal status, tumour stage, pretherapeutic PSA-level, and Gleason score) by NED and/or Ki67SI. Ki67 SI and NED were shown to significantly aggravate these established adverse prognostic parameters, and were found to be characterized by negligible inter-observer variability. Conclusion: Ki67 SI and NED should be advocated to be rendered by the histopathologist because both parameters can be immunohistochemically determined without much additional expense in time and cost involved. This concept is rewarded by an additional gain of prognostic accuracy in evaluating individual risk profile after surgery. abstract_id: PUBMED:9679888 Focal neuroendocrine differentiation lacks prognostic significance in prostate core needle biopsies. Purpose: The biological behavior of prostate cancer is highly variable and cannot sufficiently be predicted by histological criteria alone. New prognostic factors are needed in core needle biopsies before initial treatment decisions. We investigate the prognostic significance of focal neuroendocrine differentiation in core needle biopsies of prostate cancer. Materials And Methods: Core needle biopsies from 105 untreated patients (mean age 71 years) were immunohistochemically examined for focal neuroendocrine differentiation using an antibody against chromogranin A. Tumor cell proliferation was assessed with Ki-67 labeling index using MIB 1 antibody. The cause of death was determined by examination of records including autopsy reports. Results: Focal neuroendocrine differentiation was found in 25% of the tumors. There was no association between the presence of focal neuroendocrine differentiation and Gleason score or Ki-67 labeling index. Tumor specific survival analysis revealed that high Gleason score and high Ki-67 labeling index were predictors of tumor specific death, whereas focal neuroendocrine differentiation failed to provide prognostic information. There was a significant increase in frequency and density of neuroendocrine differentiation between initial core needle biopsies and later specimens of secondary hormone resistant prostate cancer in 15 patients. Conclusions: In contrast to high Gleason score and high Ki-67 labeling index, focal neuroendocrine differentiation is not a prognostic factor in core needle biopsies of prostate cancer. Focal neuroendocrine differentiation seems to appear more frequently and intensively in hormone resistant prostate cancer, supporting a role of neuroendocrine cells in the development of hormone refractory disease. Answer: The presence of neuroendocrine (NE) cells in prostate cancer has been studied to determine their value as an independent prognostic parameter, but the findings are mixed and somewhat contradictory. Some studies have found no practical value of NE cells as an independent prognostic indicator in patients with prostatic adenocarcinoma (PUBMED:7613832), while others have reported that NE differentiation does not have independent prognostic value in conservatively treated prostate cancer (PUBMED:22767265). Additionally, a study found that NE differentiation was not an independent predictor of survival when including Gleason score in the analysis (PUBMED:22767265). Conversely, other research has supported the view that NE differentiation in prostatic adenocarcinomas is a prognostic factor for progressive disease under subsequent endocrine therapy, acting independently of tumor grade (PUBMED:9186347). Some have even suggested that NE cells are of major prognostic importance and may be more reliable in predicting patients' survival than conventional histological grading systems (PUBMED:1913066). However, another study found that NE cells had no prognostic value and that the independent prognostic variables were Gleason sum score and pT stage for progression and tumor-specific survival (PUBMED:7543077). Further research indicated no association between selected prostate cancer prognostic factors and neuroendocrine activity in patients with organ-confined prostate cancer (PUBMED:24697032). Additionally, while neuroendocrine-like cells are thought to be involved in the emergence of androgen-independent cells and could be a target for new therapeutic strategies, the clinical value of neuroendocrine differentiation in human prostatic tumors remains uncertain (PUBMED:9690663). In summary, the evidence is inconclusive regarding the practical value of NE cells as an independent prognostic parameter in prostate cancer. While some studies suggest a potential role for NE differentiation as a prognostic factor, especially in the context of endocrine therapy failure, others have found no significant independent prognostic value when accounting for other established factors such as Gleason score. Further research is needed to clarify the prognostic significance of NE cells in prostate cancer.
Instruction: Are general practitioner hospitals cost-saving? Abstracts: abstract_id: PUBMED:9472375 Are general practitioner hospitals cost-saving? Evidence from a rural area of Norway. Objective: We aimed to determine whether general practitioner GP hospitals, compared with alternative modes of health care, are cost-saving. Methods: Based on a study of admissions (n = 415) to fifteen GP hospitals in the Finnmark county of Norway during 8 weeks in 1992, a full 1-year patient throughput in GP hospitals was estimated. The alternative modes of care (general hospital, nursing home or home care) were based on assessments by the GPs handling the individual patients. The funds transferred to finance GP hospitals were taken as the cost of GP hospitals, while the cost of alternative care was based on municipality and hospital accounts, and standard charges for patient transport. Results: The estimated total annual operating cost of GP hospitals was 32.2 million NOK (10 NOK = 1 Pound) while the cost of alternative care was in total 35.9 million NOK. Sensitivity analyses, under a range of assumptions, indicate that GP care in hospitals incurs the lowest costs to society. Conclusion: GP hospitals are likely to provide health care at lower costs than alternative modes of care. abstract_id: PUBMED:678798 Contribution of general practitioner hospitals in England and Wales. A survey of a one-in-seven sample of general practitioner hospitals in England and Wales, performed to determine the contribution they make to overall hospital work load and the attitudes of the general practitioners working in them, showed that 3% of acute hospital beds in England and Wales were in general practitioner hospitals, which provided initial hospital care for up to 20% of the population. Altogether 16% of general practitioners and 22% of consultants were on the staffs, and they coped with more than 13% of all casualties, 6% of operations, and 4% of x-ray examinations. Nearly a million casualties were treated at no cost to the National Health Service. Twenty new district general hospitals would be needed to cope with the work load currently dealt with by general practitioner hospitals. The results of this survey indicate that these smaller hospitals deal efficiently and cheaply with their work load, and that morale is high. General practitioner hospitals could have an important part to play in providing certain types of care, but there are no financial incentives to enable general practitioners to realise this potential fully. abstract_id: PUBMED:32722989 Economic impact and cost savings of teledermatology units compared to conventional monitoring at hospitals in southern Spain. Introduction: Asynchronous teledermatology (TD) has undergone exponential growth in the past decade, allowing better diagnosis. Moreover, it saves both cost and time and reduces the number of visits involving travel and opportunity cost of time spent on visits to the hospital. The present study performed a cost-saving analysis of TD units and assessed whether they offered a cheaper alternative to conventional monitoring (CM) in hospitals from the perspective of public health-care systems (PHS) and patients. Methods: This study was a retrospective assessment of 7030 patients. A cost-saving analysis comparing TD units to CM for patients at the Hospital de Poniente was performed over a period of one year. The TD network covered the Hospital de Poniente reference area (Spain) linked to 37 primary care (PC) centres that belonged to the Poniente Health District of Almeria. Results: We observed a significant cost saving for TD units compared to participants in the conventional follow-up group. From the perspective of a PHS, there was a cost saving of 31.68% in the TD group (€18.59 TD vs. €27.20 CM) during the follow-up period. The number of CM visits to the hospital reduced by 38.14%. From the patients' perspective, the costs were lower, and the cost saving was 73.53% (€5.45 TD vs. €20.58 CM). Discussion: The cost-saving analysis showed that the TD units appeared to be significantly cheaper compared to CM. abstract_id: PUBMED:6424859 Contribution of general practitioner hospitals in Scotland. The results of a survey of 64 Scottish general practitioner hospitals showed that in 1980 these hospitals contained 3.3% of available staffed beds in Scotland; 13.6% of the resident population had access for initial hospital care, and 14.5% of Scottish general practitioners were on their staffs. During the year of the survey they discharged 1.8% of all non-surgical patients, treated almost 100 000 patients for accidents and emergencies and 140 000 outpatients, and 4.4% of all deliveries in Scotland were carried out in the hospitals surveyed. Most communities which are served by general practitioner hospitals in Scotland are rural and on average are more than 30 miles from their nearest district general hospital. The contribution that these small hospitals make to the overall hospital workload has not previously been estimated. It has been shown nationally to be small but not inconsiderable . In terms of the contribution to the health care of the communities they serve it cannot and should not be underestimated. abstract_id: PUBMED:7576843 Use of general practitioner beds in Leicestershire community hospitals. Background: The shift in care from secondary to primary services is likely to place greater demands on community hospitals. Before changes in the provision of community hospitals can occur, baseline data are needed, outlining their current use. Aim: A study was undertaken to obtain baseline data describing the use of general practitioner beds in Leicestershire community hospitals. Method: A three-month prospective, observational study was carried out between February and May 1992 using data from a questionnaire completed by nurses and general practitioners and from patient hospital records. Study patients comprised all patients admitted to general practitioner beds in all eight Leicestershire community hospitals. Results: A 100% questionnaire response rate was obtained giving data on 685 hospital admissions. Around 70% of admissions were of patients aged 75 years and over. Of admissions, 35% were for acute care, 31% for respite care, 22% for rehabilitation, 7% for terminal/palliative care and 5% for other reasons. Fifteen per cent of patients had been transferred from a consultant bed. Of those not transferred, 91% were admitted by their usual general practitioner or practice partner and for 96% of these patients this was the general practitioner's first choice for care. There was significant variation in both the age mix and care category mix of patients between individual hospitals. Medical deterioration in an underlying condition and family pressure on the general practitioner or carers' inability to cope each contributed to around half of all admissions. Of all admissions, 38% lived alone, and 18% of carers were disabled. Incontinence was reported for 35% of patients, and 26% of all patients were of a high nursing dependency. There was low utilization of community services before admission and 33% received none. There was variation between individual hospitals in use of local and district general hospital investigations, specialist referral and types of therapy. Of 685 admissions 11% died during their stay. Of those discharged, 76% went to their own or a relative's home, 10% to a residential or nursing home and 9% were transferred to an acute bed. Nine percent of discharges were postponed and 10% were brought forward. On discharge to non-residential care, 26% of patients received no community services. Conclusion: Shifting resources from secondary to primary care is a priority for purchasers. Both the introduction of the National Health Service and community care act 1990, and acute units having increasing incentives for earlier discharge, are likely to place greater demands on community hospital beds. Not all general practitioners have the option of community hospital beds. Before access to general practitioner beds can be broadened, existing beds should be used appropriately and shown to be cost-effective. Purchasers therefore require criteria for the appropriateness of admissions to general practitioner beds, and the results of a general practitioner bed cost-benefit analysis. abstract_id: PUBMED:2322703 Contribution of a general practitioner hospital: a further study. Objective: To audit the workload of a general practitioner hospital and to compare the results with an earlier study. Design: Prospective recording of discharges from the general practitioner hospital plus outpatient and casualty attendances and of all outpatient referrals and discharges from other hospitals of patients from Brecon Medical Group Practice during one year (1 June 1986-31 May 1987). Setting: A large rural general group practice which staffs a general practitioner hospital in Brecon, mid-Wales. Patients: 20,000 Patients living in the Brecon area. Results: 1540 Patients were discharged from the general practitioner hospital during the study period. The hospital accounted for 78% (1242 out of 1594) of all hospital admissions of patients of the practice. There were 5835 new attendances at the casualty department and 1896 new outpatient attendances at consultant clinics at the hospital. Of all new outpatient attendances by patients of the practice, 71% (1358 out of 1896) were at clinics held at the general practitioner hospital. Since the previous study in 1971 discharges from the hospital have increased 37% (from 1125 to 1540) and new attendances at consultant clinics 30% (from 1450 to 1896). The average cost per inpatient day is lower at this hospital than at the local district general hospital (pounds 71.07 v pounds 88.06 respectively). Conclusions: The general practitioner hospital deals with a considerably larger proportion of admissions and outpatient attendances of patients in the practice than in 1971 and eases the burden on the local district general hospital at a reasonable cost. Implications: General practitioner hospitals should have a future role in the NHS. abstract_id: PUBMED:10258577 The place of general practitioner hospitals in the organization of hospital services. After many years in which general practitioner hospitals have had little encouragement towards constructive planning they now find themselves being viewed in a more favourable light. Health Authorities should attempt to assess the potential of such hospitals and make plans for their future. It will be necessary for the isolation of the hospitals from the mainstream of district planning to be yielded up in return for an assurance of a fully effective future role. abstract_id: PUBMED:3735224 Casualty and surgical services in Perthshire general practitioner hospitals 1954-84. The results are reported of a study of casualty and surgical services in five general practitioner hospitals in Perthshire - Aberfeldy, Auchterarder, Blairgowrie, Crieff and Pitlochry. Details of the total workload, the nature of the conditions treated and the referral rate to major hospitals are given. Figures for the Royal Infirmary, Perth, the main referral hospital for the county, are also given for comparison. The surgical service at one of the rural hospitals is described.Experience has demonstrated the usefulness of these hospitals in providing casualty and surgical services to both the local population and to visitors, and their superiority in providing these services over health centres because staff and beds are available 24 hours a day.Rural general practitioner hospitals merit a continuing share of resources and bed allocation as they spare major hospitals surgical and medical work. The general practitioners serving the hospitals studied here undertook almost 40% of the total accident and emergency workload in the Perth and Kinross area of Scotland. abstract_id: PUBMED:8650831 The role of the general practitioner hospital in inpatient care. The rationale of the general practitioner hospital continues to be questioned. A study of the services and case-mix of two of the four remaining general practitioner hospitals in Northern Ireland was undertaken to determine whether the nature and cost of inpatient care in these hospitals was comparable to the available alternatives. The case-notes of all non-maternity admissions (n = 509) were reviewed. The two hospitals provide acute medical care for a wide range of patients. The majority of patients appeared to require hospitalisation. It is likely that the beds at the two hospitals were mainly a substitute for district general hospital care. The general practitioner hospitals were estimated to be less costly than alternative forms of care, although it was doubtful whether they fulfilled all the structural criteria of quality generally regarded as important for hospitals of this type. abstract_id: PUBMED:15152972 The decomposition of cost efficiency: an empirical application of the shadow cost function model to Dutch general hospitals. This paper investigates the performance of the Dutch general hospital industry by a parametric method. In general, the parametric approaches find more difficulties in distinguishing between technical and allocative efficiencies than DEA. Only recently a class of models is developed based on shadow prices which have possibilities to distinguish between technical and allocative efficiency. However, these models cause some serious computational problems. This paper recommends an approach to overcome these problems by using an iterative two-stage estimation procedure. The estimation is conducted on a panel data set of Dutch general hospitals. Estimation shows that this method is effective. The parameter estimates are plausible, reliable and satisfy all theoretical requirements. In particular we find some reliable estimates for the individual hospitals' shadow prices. According to these shadow prices hospitals should reallocate their resources in favor of material supplies at the cost of other personnel and nursing personnel. The mean technical efficiency is about 86%, whereas the allocative efficiency is about 92%. The outcomes also show that technical progress is very small. Economies of scale are present only for small hospitals. Answer: Yes, general practitioner (GP) hospitals are considered cost-saving in various contexts. In a rural area of Norway, a study found that the estimated total annual operating cost of GP hospitals was lower than the cost of alternative care, such as general hospitals, nursing homes, or home care. Sensitivity analyses indicated that GP care in hospitals incurs the lowest costs to society, suggesting that GP hospitals are likely to provide health care at lower costs than alternative modes of care (PUBMED:9472375). In England and Wales, a survey showed that general practitioner hospitals provided initial hospital care for up to 20% of the population and coped with a significant percentage of all casualties, operations, and x-ray examinations. The survey indicated that these smaller hospitals deal efficiently and cheaply with their workload, and that morale is high among the staff. It was also noted that twenty new district general hospitals would be needed to cope with the workload currently managed by GP hospitals (PUBMED:678798). In Scotland, a survey of general practitioner hospitals revealed that they accounted for a small but not inconsiderable portion of the overall hospital workload. These hospitals were particularly important for the health care of the rural communities they served, which were often located at a significant distance from the nearest district general hospital (PUBMED:6424859). In Leicestershire, a study to obtain baseline data on the use of GP beds in community hospitals found that there was significant variation in the age mix and care category mix of patients between individual hospitals. The study concluded that shifting resources from secondary to primary care is a priority, and that existing GP beds should be used appropriately and shown to be cost-effective (PUBMED:7576843). Furthermore, a study in Brecon, mid-Wales, found that the general practitioner hospital accounted for a large proportion of all hospital admissions and outpatient attendances of patients in the practice, easing the burden on the local district general hospital at a reasonable cost (PUBMED:2322703). In Northern Ireland, a study of two general practitioner hospitals showed that they provided acute medical care for a wide range of patients and were estimated to be less costly than alternative forms of care (PUBMED:8650831). Overall, the evidence from these studies suggests that general practitioner hospitals are cost-saving and play an important role in providing efficient and cost-effective health care, particularly in rural and community settings.
Instruction: Is obesity associated with emotional and behavioural problems in children? Abstracts: abstract_id: PUBMED:27559321 Maternal Weight Predicts Children's Psychosocial Development via Parenting Stress and Emotional Availability. Introduction: Maternal obesity has been shown to be a risk factor for obesity in children and may also affect children's psychosocial outcomes. It is not yet clear whether there are also psycho-emotional mechanisms explaining the effects of maternal weight on young children's weight and psychosocial development. We aimed to evaluate whether maternal body mass index (BMI), mother-child emotional availability (EA), and maternal parenting stress are associated with children's weight and psychosocial development (i.e., internalizing/externalizing symptoms and social competence) and whether these predictors interact with each other. Methods: This longitudinal study included three assessment points (~11 months apart). The baseline sample consisted of N = 194 mothers and their children aged 5-47 months (M = 28.18, SD = 8.44, 99 girls). At t 1, we measured maternal weight and height to calculate maternal BMI. We videotaped mother-child interactions, coding them with the EA Scales (fourth edition). We assessed maternal parenting stress with the Parenting Stress Index (PSI) short form. At t 1 to t 3, we measured height and weight of children and calculated BMI-SDS scores. Children's externalizing and internalizing problems (t 1-t 3) and social competence (t 3, N = 118) were assessed using questionnaires: Child Behavior Checklist (CBCL 1.5-5), Strengths and Difficulties Questionnaire (SDQ: prosocial behavior), and a checklist for behavioral problems at preschool age (VBV 3-6: social-emotional competence). Results: By applying structural equation modeling (SEM) and a latent regression analysis, we found maternal BMI to predict higher BMI-SDS and a poorer psychosocial development (higher externalizing symptoms, lower social competence) in children. Higher parenting stress predicted higher levels of externalizing and internalizing symptoms and lower social competence. Better maternal EA was associated with higher social competence. We found parenting stress to serve as a mediator in the association between maternal weight and children's psychosocial outcomes. Moreover, children of mothers with an elevated BMI were at greater risk of lower social competence only when their mothers showed low levels of maternal EA (moderation). Conclusion: Interventional studies are needed that investigate the causal pathways between parenting stress, mother-child interaction quality and child outcomes. These aspects might be targets to improve the psychosocial development of the offspring of overweight or obese mothers. abstract_id: PUBMED:38350308 Effects of childhood obstructive sleep apnea with and without daytime sleepiness on behaviors and emotions. Objective: To investigate the relationship between obstructive sleep apnea (OSA) with and without excessive daytime sleepiness (EDS) and behavioral and emotional outcomes in non-obese prepubertal children. Methods: This was a retrospective analysis of children aged 5-11 years who presented to our unit for assessment of their sleep-related complaints. All children underwent polysomnography (PSG). They also completed the Pediatric Daytime Sleepiness Scale (PDSS) and a sleep diary. OSA was diagnosed if the obstructive apnea-hypopnea index (OAHI) was ≥1 event/hour. EDS was defined as PDSS &gt;15. Behavioral and emotional outcomes were assessed using the Child Behavioral Checklist (CBCL). Results: Data from 391 children (mean age of 8.6 ± 1.7 years; 67 % male) were analyzed. Seventy children did not have OSA or EDS, 137 had OSA, 50 had reported having EDS but without OSA, and 134 children had both OSA and EDS. There were significantly higher CBCL total problems score in the combined group (61 ± 9) compared to the non-OSA/EDS group (54 ± 10), and the OSA-only group (54 ± 10) (p &lt; 0.001). The presence of EDS was significantly associated with higher CBCL T score and higher odds for clinically significant behavioral problems (T score ≥65) after adjusting for age, sex, BMI z-score and average sleep duration (p &lt; 0.001). Conclusion: Excessive daytime sleepiness is an important contributory factor associated with suboptimal behavioral and emotional outcomes in children with OSA. abstract_id: PUBMED:35657543 The effect of air pollution and emotional and behavioral problems on preschoolers' overweight and obesity. Childhood overweight and obesity (OWO) has risen dramatically in both developed and developing countries over the past few decades, creating a huge burden of disease. Ambient air pollution and emotional and behavioral problems are important influencing factors of OWO in preschoolers, but few studies have evaluated the impact of air pollution and emotional and behavioral problems on OWO of preschoolers in rural areas and their potential interactions. This study selected 3802 preschool children from 26 kindergartens in 4 rural areas of Anhui Province for a cross-sectional study. A total of 3636 individuals were included in the final analysis. In this study, outdoor air pollutants (PM2.5 and O3) were derived from the China Air Pollution Tracking (TAP) data set, matching preschoolers' external air pollution exposure according to their kindergarten address codes to neighborhoods or administrative villages. OWO were assessed based on WHO Child Growth and Development Standards. Generalized linear model (GLM) and interplot model were used to evaluate the separate effects and potential interactions of air pollutants and emotional and behavioral problems on preschoolers' OWO. In the separate analysis, we found a significant positive association between air pollution and emotional and behavioral problems and OWO among preschoolers. In the interaction analysis, air pollution could enhance the positive effect of emotional and behavioral problems on OWO in preschoolers. In addition, the effect of air pollution and emotional and behavioral problems on overweight and obesity was stronger in preschoolers aged 5 to 6 years. Finally, we also found a stronger positive association between emotional and behavioral problems among girls, macrosomia, non-left-behind children, and preschoolers without eating problems. This study provided a scientific basis for the control of air pollution and overweight and obesity among preschool children in Anhui Province. abstract_id: PUBMED:21114457 Is obesity associated with emotional and behavioural problems in children? Findings from the Millennium Cohort Study. Objectives: We examined cross-sectional and longitudinal associations between obesity and emotional and behavioural problems in a nationally representative sample of young children. Methods: Data were available from 11 202 children (50% boys) participating in the UK's Millennium Cohort Study. Height and weight were measured at 3 and 5 years and children defined as obese using IOTF cut-offs for body mass index (BMI). Emotional and behavioural problems were parentally assessed using the Strengths and Difficulties Questionnaire. Adjusted linear and multinomial regression analyses were conducted separately for boys and girls. Results: At age 3, obese boys had more conduct problems, and obese girls had more prosocial behaviours, than their normal weight counterparts. At age 5, obese boys had more conduct problems, hyperactivity and inattention problems, peer relationship problems and total difficulties. Obese girls only had more peer relationship problems. Obese 3-year-olds were not at increased risk of abnormal scores; in contrast, obese 5-year-old boys were significantly more likely to have abnormal scores for conduct problems, hyperactivity and inattention problems, peer relationship problems, prosocial behaviours and total difficulties. Obesity, at age 3, was also predictive of peer relationship problems at age 5 in boys (95% CI: 0.26 [0.01, 0.52]). Conclusions: Childhood obesity is associated with emotional and behavioural problems from a very young age. Obese boys are at particular risk. Further research is required to examine effect modifiers and mediating factors in these associations. Recognition and response to these mental health problems should be a goal of pediatric obesity interventions and policies. abstract_id: PUBMED:23056854 Correlation of obesity and overweight with emotional-behavioral problems in primary school age girls in tabriz, iran. Objective: Obesity and overweight have shown an increasing trend in most developing countries. Childhood obesity would impose numerous health-related problems. This study was conducted to determine the correlation of obesity and overweight with emotional-behavioral problems in primary school age girls. Methods: In a cross-sectional study, 300 primary school girls (aged 7-11 years) were selected using a multi-staged sampling method, including randomized cluster and stratified method. For all students body mass index was measured and then based on BMI for age and sex, from each grade (1-5), 20 students were selected for each group of normal weight, overweight and obese. The emotional-behavioral problems were evaluated using child behavior checklist that consists of 113 items for childhood behavioral problems. Findings: Total behavioral problems were seen in 17%, 27%, and 2% in obese, overweight, and normal weight children, respectively. Internalizing problems (including Anxious/Depressed, Withdrawn, and Somatic Complaints) were seen in 11%, 15%, and 2% and externalizing problems (including Aggressive and Delinquent Behaviors) were observed in 8%, 17%, and 2% in obese, overweight, and normal weight children, respectively. The mean scores in all scales were higher in obese and overweight children in comparison with normal weight children and the emotional-behavioral problems had significant positive correlation with obesity and overweight (P&lt;0.01). Conclusion: Despite the cultural differences between east and west, yet there are similarities in the most of the emotional-behavioral problems related to overweight and obesity. Also, it seems that the risk of behavioral-emotional problems in overweight girls is more than in obese girls. Overweight and Obesity prevention may be a primary preventive step for these problems in children. abstract_id: PUBMED:23027138 Relation of emotional and behavioral problems with body mass index in preschool children: the Generation R study. Objective: Although problem behavior in children and adolescents has frequently been associated with overweight, it is unclear whether this relationship is already present in early childhood. We hypothesized that problem behavior is positively related to body mass index (BMI) in children of preschool age and that eating behavior explains part of this relation. Methods: The study was embedded in the Generation R Study, a population-based cohort with data available on BMI and problem behavior for 3137 children aged 3 to 4 years. Problem behavior was measured with the child behavior checklist (CBCI), and eating behavior was assessed using the child eating behaviour questionnaire (CEBQ). Linear regression analyses were conducted to assess the association between the CBCI (expressed as z-scores). CEBQ, and BMI standard deviation scores (BMI-SDS), Bootstrapping was used to formally test mediation. Results: Children with higher levels of emotional problems had a lower BMI-SDS after adjustment for relevant covariates (e.g., β [95% confidence interval {Cl}] for mother report of emotional problems = -0.04 [-0.07, -0.001], father report = -0.04 [-0.08, -0.001]). Behavioral problems were not associated with BMI. Emotional and behavioral problems were not associated with underweight or overweight if studied categorically. The effect estimate for the relation of emotional problems with BMI-SDS attenuated to nonsignificance after adjustment for specific eating behaviors, i.e., they were accounted for by satiety responsiveness, fussiness, and emotional undereating. Conclusion: In this population-based study, emotional problems in preschoolers were negatively related to BMI, and this relation was fully explained by food avoidant eating behaviors. abstract_id: PUBMED:26837399 Correlation between obesity and emotional, social, and behavioral problems associated with physical limitation among children and adolescents in Western Saudi Arabia. Objectives: To evaluate the impact of body mass index (BMI) on limitation of physical activity and the associated effect on behavioral, school, and social problems among obese children in Western Saudi Arabia. Methods: A cross-sectional obesity survey was conducted in Jeddah, Saudi Arabia. Data were collected between August 2014 and February 2015 from 281 obese children aged between 2-18 (girls: 130, boys: 151). Participants were selected randomly to represent different economic status, level of education, and family structure. Results: The mean age of participants was 10.5 years for girls and 10.6 years for boys. Higher BMI was associated with physical activity problems in walking (50.6%), running (55.8%), and exercise (44.3%), with school problems in attention (37.4%) and follow-up duties fade (31.3%), and with social problems in doing things other children could do (42.8%), playing with others (46%), and continuing play (33.4%). There was no significant relationship between BMI and emotional problems and school attendance. Conclusion: Higher BMI significantly increased physical limitation, problems maintaining attention in school, and social problems among obese children. However, emotional problems and school attendance did not show a significant correlation with BMI. abstract_id: PUBMED:23680342 Health status, emotional/behavioral problems, health care use, and expenditures in overweight/obese US children/adolescents. Objective: To examine the association of overweight/obesity with health, health care utilization, and expenditures in a national sample of 10- to 17-year-old children and adolescents. Methods: Secondary analysis of 2005 to 2009 Medical Expenditure Panel Survey data (n = 17,224). Outcome measures included suboptimal health, emotional/behavioral problems, health care utilization, and expenditures. Results: Overweight and obese children and adolescents had greater risk of suboptimal health (adjusted risk ratio [ARR], 1.4 and 1.7; P &lt; .01), use of prescriptions (ARR, both 1.1; P = .01), and emergency department visits (ARR, 1.2 and 1.1; P = .01); overweight children/adolescents had lower mean out-of-pocket expenditures (∼$100, P &lt; .01); and obese children/adolescents had greater risk of emotional/behavioral problems (ARR, 1.2; P &lt; .01) and specialist visits (ARR, 1.1; P = .01). The most common specialty referral among obese children/adolescents was psychiatry. Overweight and obesity were not associated with office visits or total expenditures. A greater proportions of children and adolescents with suboptimal health and emotional/behavioral problems had health care expenditures, and those with suboptimal health were more likely to have out-of-pocket expenditures. Conclusions: Pediatric overweight and obesity affect child and adolescent health status, emotional/behavioral problems, and specific domains of health care utilization, but do not appear to be associated with total health care expenditures. Out-of-pocket expenditures are lower among overweight children and adolescents. These findings highlight the need for early intervention in overweight children/adolescents, when health care expenditures may not be greater, and suggest that it may prove useful to pay special attention to the health status and emotional/behavioral problems of overweight and obese children/adolescents in weight-management interventions. abstract_id: PUBMED:32004246 Physical Activity, Sleep, Body Mass Index, and Associated Risk of Behavioral and Emotional Problems in Childhood. Objective: Cross-sectional data indicate that physical activity, longer sleep duration, and normal body weight are associated with better mental health in childhood. It is less clear whether these factors protect against future emotional and behavioral problems. We investigated whether physical activity, sleep duration, and body mass index (BMI) at the age of 7 years are associated with emotional and behavioral problems at the age of 11 years. Methods: Children born to European mothers enrolled in the prospective longitudinal Auckland Birthweight Collaborative Study (N = 871) were assessed at birth and ages 7 and 11 years. Physical activity and sleep duration were measured using accelerometer. BMI was calculated from height and weight measurement. Outcome variables assessed at the age of 11 years were parent and child self-report Strengths and Difficulties Questionnaire and parent and teacher Conners' Rating Scale scores. Results: Physical activity, sleep duration, and BMI at the age of 7 years were not significantly associated with emotional and behavioral problems at the age of 11 years. In cross-sectional analysis at the age of 11 years, there was no significant association between physical activity or sleep duration and emotional and behavioral problems. Children with a BMI in the overweight or obese range were significantly more likely to have teacher-rated behavior problems and parent-rated emotional or behavioral problems after adjustment for potential confounders. Conclusion: Although physical activity and sleep have physical health benefits, they may not be protective against future emotional and behavioral problems in childhood in the general population. BMI in the obese or overweight range was significantly associated with current emotional and behavioral problems at the age of 11 years. abstract_id: PUBMED:34197885 Physical health, behavioral and emotional functioning in children of gulf war veterans. Objective: We examined whether the prevalence of medical and behavioral conditions is higher in children of deployed veterans (DVs) versus non-deployed veterans (NDVs) after the 1991 Gulf War. Methods: We examined 1387 children of 737 veterans. Children ages 2-18 had physical exams and parental reports of physical history and behavior. Results: Physical health was analyzed using GEE models. Behavioral health [total, internalizing, and externalizing behavior problems (TBP, IBP, EBP)] was analyzed with mixed-effects regression models. Analyses were conducted by age group (2-3, 4-11, 12-18), and gender (ages 4-11, 12-18). Children of DVs ages 2-3 had significantly worse dentition (13.9% vs. 4.8%, P = 0.03) and more EBP {least square means (lsmeans) 54.31 vs. 47.59, P = 0.02}. Children of DVs ages 4-11 had significantly more obesity (18.8% vs. 12.7%, P = 0.02). Among children 4-11, male children of DVs had significantly more TBP (lsmeans 70.68 vs. 57.34, P = 0.003), IBP (lsmeans 63.59 vs. 56.16, P = 0.002) and EBP (lsmeans 61.60 vs. 52.93, P = 0.03), but female children did not. For children ages 12-18, male children of DVs had more EBP (lsmeans 63.73 vs. 43.51, P = 0.008), while female children of DVs had fewer EBP (lsmeans 45.50 vs. 50.48, P = 0.02). Veteran military characteristics and mental health, and children's social status and health, including obesity, predicted children's TBP for one or more age groups. Conclusions: Children of DVs experienced worse dentition, greater obesity, and more behavioral problems compared to NDV children, suggesting adverse health effects associated with parental deployment in need of further exploration. Answer: Yes, obesity is associated with emotional and behavioral problems in children. Several studies have found a correlation between obesity and a range of psychosocial issues in children. For instance, a study found that maternal obesity could predict poorer psychosocial development in children, including higher externalizing symptoms and lower social competence. Parenting stress was identified as a mediator in the association between maternal weight and children's psychosocial outcomes (PUBMED:27559321). Another study reported that obesity in young children is associated with emotional and behavioral problems from a very young age, with obese boys being at particular risk (PUBMED:21114457). Additional research has shown that obese and overweight children have higher rates of emotional-behavioral problems compared to their normal-weight peers (PUBMED:23056854). In preschool children, emotional problems were negatively related to BMI, and this relation was fully explained by food avoidant eating behaviors (PUBMED:23027138). Moreover, higher BMI in children was associated with physical activity problems, school problems in attention, and social problems (PUBMED:26837399). Furthermore, overweight and obese children and adolescents were found to have a greater risk of suboptimal health, use of prescriptions, emergency department visits, and emotional/behavioral problems (PUBMED:23680342). However, one study noted that while physical activity and sleep have physical health benefits, they may not be protective against future emotional and behavioral problems in childhood in the general population (PUBMED:32004246). In summary, the evidence suggests that there is a significant association between obesity and emotional and behavioral problems in children, which underscores the importance of addressing obesity as part of a comprehensive approach to improving children's overall well-being.
Instruction: Can common-type atrial flutter be a sign of an arrhythmogenic substrate in paroxysmal atrial fibrillation? Abstracts: abstract_id: PUBMED:18040030 Can common-type atrial flutter be a sign of an arrhythmogenic substrate in paroxysmal atrial fibrillation? Clinical and ablative consequences in patients with coexistent paroxysmal atrial fibrillation/atrial flutter. Background: The coexistence of atrial fibrillation (AF) and atrial flutter (AFL) is well recognized. AF precedes the onset of AFL in almost all instances. We evaluated the effect of 2 ablation strategies in patients with paroxysmal AF (PAF) and AFL. Methods And Results: Ninety-eight patients with PAF/AFL were prospectively recruited to undergo pulmonary vein cryoisolation (PVI). Those with at least 1 episode of sustained common-type AFL were assigned to cavotricuspid isthmus cryoablation followed by a 6-week monitoring period and a subsequent PVI (n=36; group I). Patients with PAF only underwent PVI (n=62; group II). The study included 76 men with a mean age of 50+/-10 years. Most patients (76 [78%]) had no structural heart disease. When the 2 groups were compared, residual AF after a blanking period of 3 months after PVI occurred in 24 patients (67%) in group I versus 7 (11%) in group II (P&lt;0.05). Conclusions: In patients with PAF and no documented common-type AFL, PVI alone prevented the occurrence of AF in 82%, whereas in patients with AFL/PAF, cavotricuspid isthmus cryoablation and PVI were used successfully to treat sustained common-type AFL but appeared to be insufficient to prevent recurrences of AF. In this population, AFL can be a sign that non-pulmonary vein triggers are the culprit behind AF or that sufficient electrical remodeling has already occurred in both atria, and thus a strategy that includes substrate modification may be required. abstract_id: PUBMED:30534429 Clinical Study Regarding Arrhythmogenic Risk Factors and Oxidative Stress Inductibility in Young People. Rhythm disorders in young people are often reported and when they are persistent, repetitive or with a severity degree, an ethiopathgenical assessment of arrhythmogenic risk factors and their implications is required. Purpose: Arrhythmogenic risk factors evaluation and the possibility of determining oxidative stress in the arrhythmic pathology in young people. Material And Methods: The study was conducted on 184 young subjects, aged 16-26 years old; the points of interest were: the presence or absence of cardiac dysrhythmias, the existence of proarrhythmogenic risk factors and determination of oxidative stress status modifications. Results: Of the studied young subjects, 39% presented heart rhythm disturbances, repetitive or persistent (atrial extrasystolic arrhythmia, sinus tachycardia, ventricular extrasystolic arrhythmia, paroxysmal atrial fibrillation, paroxysmal supraventricular tachycardia, associated dysrhythmias, atrial flutter, sinus bradycardia), which have been associated with the following risk factors: coffee consumption 82%, stress 80%, physical effort 72%, energy drinks consumption 72%, hyperlipidic diet 69%, familial predisposition 69%, alcohol intake 53%, frequent sleep deprivation 50%, smoking 31%, overweight 31%. The observed risk factors may be involved in the increasing of oxidative stress level, and, for this reason, the determination of oxidative stress biomarkers is required. The association of arrhythmogenic risk factors, with the expression of oxidative stress markers and the existence of enzymatic genetic polymorphism of redox systems, requires proper monitoring for the further risk of endothelial lesions induction, leading to aterosclerosis. Conclusions: Arrhythmogenic risk factors and biomarkers of oxidative stress are important, especially in young people cases, for monitoring the cardiovascular risk, for primary prevention and early treatment. abstract_id: PUBMED:37956774 Risk of atrial arrhythmias in patients with ventricular tachycardia in arrhythmogenic right ventricular cardiomyopathy. Background: In arrhythmogenic right ventricular cardiomyopathy (ARVC), risk of atrial arrhythmias (AAs) persists after ventricular tachycardia (VT) ablation. Objective: The purpose of this study was to determine the type, prevalence, outcome, and risk correlates of AA in ARVC in patients undergoing VT ablation. Methods: Prospectively collected procedural and clinical data on ARVC patients undergoing VT ablation were analyzed. Risk score for typical atrial flutter was determined from univariate logistic regression analysis. Results: Of 119 consecutive patients with ARVC and VT ablation, 40 (34%) had AA: atrial fibrillation (AF) in 31, typical isthmus-dependent atrial flutter (AFL) in 27, and atrial tachycardia/atypical flutter (AT) in 10. Seventeen patients (43%) with AA experienced inappropriate defibrillator therapy, with 15 patients experiencing shocks. Ablation was performed for typical AFL in 21 (53%), AT in 5 (13%), and pulmonary vein isolation for AF in 4 (10%) patients and prevented AA in 78% and all AFL during additional mean follow-up of 65 months. Risk score for typical flutter included age &gt;40 years (1 point), ≥moderate right ventricular dysfunction (2 points), ≥moderate tricuspid regurgitation (2 points), ≥moderate right atrial dilation (2 points), and right ventricular volume &gt;250 cc (3points), with score &gt;4 identifying 50% prevalence of typical flutter. Conclusion: AAs are common in patients with ARVC and VT, can result in inappropriate implantable cardioverter-defibrillator shocks, and typically are controlled with atrial ablation. A risk score can be used to identify patients at high risk for typical AFL who may be considered for isthmus ablation at the time of VT ablation. abstract_id: PUBMED:31310380 Atrial arrhythmias in patients with arrhythmogenic right ventricular cardiomyopathy: Prevalence, echocardiographic predictors, and treatment. Introduction: The clinical role of atrial arrhythmias (AA) in arrhythmogenic right ventricular cardiomyopathy (ARVC) and the echocardiographic variables that predict them are not well defined. We describe the prevalence, types, echocardiographic predictors, and management of AA in patients with ARVC. Methods: We retrospectively evaluated medical records of 117 patients with definite ARVC (2010 Task Force Criteria) from two tertiary care centers. We identified those patients with sustained AA (&gt;30 seconds), including atrial fibrillation (AF), atrial flutter (AFL), and atrial tachycardia (AT). We collected demographic, genetic, and clinical data. The median follow-up was 3.4 years (interquartile range = 2.0-5.7). Results: Total 26 patients (22%) had one or more types of AA: AF (n = 19), AFL (n = 9), and AT (n = 8). We performed genetic testing on 84 patients with ARVC (71.8%). Two patients with AA (8%) had peripheral emboli, and one patient (4%) suffered inappropriate implantable cardioverter-defibrillator shock. We performed catheter ablation of AA in eight patients (31%), with no procedural complications. Right atrial area and left atrial volume index were independently associated with increased odds of AA; odds ratio (OR), 1.1 (95% confidence interval [CI]:1.02-1.16) (P = .01) and OR, 1.1 (95% CI:1.03-1.15) (P = .003), respectively. An increase in tricuspid annular plane peak systolic excursion was independently associated with reduced odds; OR, 0.3 (95% CI: 0.1-0.94) (P = .003). Conclusions: Atrial arrhythmias (AA) are common in patients with arrhythmogenic right ventricular cardiomyopathy (ARVC). Inappropriate shocks and systemic emboli may be associated with AA. Atrial size and right ventricular dysfunction may help identify patients with ARVC at increased odds of AA. abstract_id: PUBMED:27378141 Atrial Remodeling and Atrial Tachyarrhythmias in Arrhythmogenic Right Ventricular Cardiomyopathy. Less is known about atrial remodeling and atrial tachyarrhythmias (ATa) in arrhythmogenic right ventricular cardiomyopathy (ARVC); this cross-sectional study aimed to determine the prevalence, characterization, and predictors of atrial remodeling and ATa in a large series of patients with ARVC. From February 2004 to September 2014, 294 consecutive patients who met the task force criteria for ARVC were enrolled. The prevalence, characterization, and predictors of atrial dilation and ATa were investigated. Right atrium (RA) dilation was identified in 160 patients (54.4%) and left atrium dilation in 66 patients (22.4%). Both RA and left atrium dilation were found in 44 patients (15.0%). Twenty-five patients (8.5%) had atrial fibrillation (AF), whereas 19 patients (6.5%) had atrial flutter (AFL). Of which, 7 patients (2.4%) had both AF and AFL. Multivariate analysis showed that AFL (odds ratio [OR] 10.309; 95% confidence interval [CI] 2.770 to 38.462; p &lt;0.001), hypertension (OR 9.174; 95% CI 2.364 to 35.714; p = 0.001), and RA dilation (OR 6.993; 95% CI 1.623 to 30.303; p = 0.009) were associated with increased risk for AF. AF (OR 10.526; 95% CI 2.786 to 40.000; p = 0.001) increased the risk of AFL. In conclusion, atrial remodeling and ATa were common in patients with ARVC. abstract_id: PUBMED:32243340 Prevalence of atrial arrhythmia in patients with arrhythmogenic right ventricular cardiomyopathy: a systematic review and meta-analysis. Background/objectives: Little is known about atrial involvement in patients with arrhythmogenic right ventricular cardiomyopathy (ARVC). Recent studies have suggested that atrial arrhythmia, including atrial fibrillation, atrial flutter (AFL), and atrial tachycardia, was common among these patients although the reported prevalence varied considerably across the studies. The current systematic review and meta-analysis was conducted with the aim of comprehensively investigating the prevalence of overall atrial arrhythmia and each atrial arrhythmia subtype in the setting of ARVC by identifying all relevant studies and combining their results together. Methods: A comprehensive literature review was conducted by searching for published articles indexed in MEDLINE and EMBASE databases from inception through to 22 September 2019 to identify cohort studies of patients with ARVC that described the prevalence of atrial arrhythmia among the participants. The pooled prevalence across studies was calculated using a random-effect, generic inverse variance method of DerSimonian and Laird with a double arcsine transformation. Results: A total of 16 cohort studies with 1986 patients with ARVC were included in this meta-analysis. The pooled prevalence of overall atrial arrhythmia among patients with ARVC was 17.9% [95% confidence interval (CI), 13.0-24.0%; I 88%], the pooled prevalence of atrial fibrillation of 12.9% (95% CI, 9.6-17.0%; I 78%), the pooled prevalence of AFL of 5.9% (95% CI, 3.7-9.2%; I 70%), and the pooled prevalence of atrial tachycardia of 7.1% (95% CI, 3.7-13.0%; I 49%). Conclusion: Atrial arrhythmia is common among patients with ARVC with the pooled prevalence of approximately 18%, which is substantially higher than the reported prevalence of atrial arrhythmia in the general population. abstract_id: PUBMED:34650309 Arrhythmogenic right ventricular cardiomyopathy associated with arrhythmia-induced cardiomyopathy: A case report Arrhythmogenic right ventricular cardiomyopathy (ARVC) is a kind of inherited cardio-myopathy, which is characterized by fibro-fatty replacement of right ventricular myocardium, leading to ventricular arrhythmia. However, rapid atrial arrhythmias are also common, including atrial fibrillation, atrial flutter and atrial tachycardia. Long term rapid atrial arrhythmia can lead to further deterioration of cardiac function. This case is a 51-year-old male. He was admitted to Department of Cardiology, Peking University Third Hospital with palpitation and fatigue after exercise. Electrocardiogram showed incessant atrial tachycardia. Echocardiography revealed dilation of all his four chambers, especially the right ventricle, with the left ventricular ejection fraction of 40% and the right ventricular hypokinesis. Cardiac magnetic resonance imaging found that the right ventricle was significantly enlarged, and the right ventricular aneurysm had formed; the right ventricular ejection fraction was as low as 8%, and the left ventricular ejection fraction was 35%. The patients met the diagnostic criteria of ARVC, and both left and right ventricles were involved. His physical activities were restricted, and metoprolol, digoxin, spironolactone and ramipril were given. Rivaroxaban was also given because atrial tachycardia could cause left atrial thrombosis and embolism. His atrial tachycardia converted spontaneously to normal sinus rhythm after these treatments. Since the patient had severe right ventricular dysfunction, frequent premature ventricular beats and non-sustained ventricular tachycardia on Holter monitoring, indicating a high risk of sudden death, implantable cardioverter defibrillator (ICD) was implanted. After discharge from hospital, physical activity restriction and the above medicines were continued. As rapid atrial arrhythmia could lead to inappropriate ICD shocks, amiodarone was added to prevent the recurrence of atrial tachycardia, and also control ventricular arrhythmia. After 6 months, echocardiography was repeated and showed that the left ventricle diameter was reduced significantly, and the left ventricular ejection fraction increased to 60%, while the size of right ventricle and right atrium decreased slightly. According to the clinical manifestations and outcomes, he was diagnosed with ARVC associated with arrhythmia induced cardiomyopathy. According to the results of his cardiac magnetic resonance imaging, the patient had left ventricular involvement caused by ARVC, and the persistent atrial tachycardia led to left ventricular systolic dysfunction. abstract_id: PUBMED:36618840 Right-to-left shunt via iatrogenic atrial septal defect requiring emergency surgical repair following cryoballoon atrial fibrillation ablation in a patient with arrhythmogenic right ventricular cardiomyopathy. An 81-year-old woman with arrhythmogenic right ventricular cardiomyopathy underwent catheter ablation for atrial fibrillation and atrial flutter. Hypoxemia refractory to the administration of oxygen was seen after transseptal puncture. Transthoracic echocardiography revealed right to left shunt via an iatrogenic atrial septal defect (IASD) that was increased by tricuspid regurgitation flow. Her hypoxemia improved after IASD occlusion with the inflation of a venogram balloon catheter. Emergent surgical IASD closure was successfully performed. IASD after transseptal puncture for atrial fibrillation ablation infrequently causes severe complications that require emergent repair. Learning Objective: Some cases requiring iatrogenic atrial septal defect (IASD) closure after atrial fibrillation (AF) ablation have been reported. We describe the case of an arrhythmogenic right ventricular cardiomyopathy patient with right to left shunt via an IASD which required emergent surgical repair after AF ablation. Right to left shunt after trans-septal puncture is rare, however it can be an emergent life-threatening complication. IASD occlusion with venogram balloon catheter is helpful for the diagnosis and the short-term solution. abstract_id: PUBMED:29897149 Atrial involvement in arrhythmogenic right ventricular cardiomyopathy patients referred for ventricular arrhythmias ablation. Background: Arrhythmogenic right ventricular cardiomyopathy (ARVC) is a heritable myocardium disorder that predominantly affects the ventricle. Little is known about atrial involvement. This study aimed to assess atrial involvement, especially the role of genotype on atrium in ARVC. Methods: The incidence, characterization and predictors of atrial involvement were investigated. Nine known ARVC-causing genes were screened and the correlation between genotype and atrial involvement was assessed. Results: Right atrium (RA) dilation, left atrium (LA) dilation, and sustained atrial tachyarrhythmias (ATa) were found in 45, 16 and 3 patients, respectively. Gene mutations were identified in 64 (64.0%) patients. Mutation carriers showed more RA dilation than noncarriers (54.7% vs. 27.8%, P = 0.009), and no difference in LA dilation and ATa. Multivariate analysis showed tricuspid regurgitation (OR: 18.867; 95% CI: 1.466-250.000; P = 0.024) increased the risk of RA dilation and decreased left ventricular ejection fraction (LVEF) (OR: 1.134; 95% CI: 1.002-1.272; P = 0.031) correlated with LA dilation, whereas genotype showed no significant effect. At a median follow-up time of 91 months, 7 patients died and 1 patient accepted heart transplantation. New-onset RA dilation, LA dilation, and sustained ATa were found in 8, 7, and 6 patients, respectively. Atrial involvement was not associated with the long-term survival. Despite mutation carriers showing more RA dilation, Kaplan-Meier analysis showed genotype was not associated with atrial involvement. Conclusion: Atrial involvement was common in ARVC. Tricuspid regurgitation and decreased LVEF increased the risk for atrial dilation. Genotype was not associated with atrial involvement. abstract_id: PUBMED:27572111 Influence of Genotype on Structural Atrial Abnormalities and Atrial Fibrillation or Flutter in Arrhythmogenic Right Ventricular Dysplasia/Cardiomyopathy. Introduction: Arrhythmogenic right ventricular dysplasia/cardiomyopathy (ARVD/C) is associated with desmosomal mutations. Although desmosomal disruption affects both ventricles and atria, little is known about atrial involvement in ARVD/C. Objective: To describe the extent and clinical significance of structural atrial involvement and atrial arrhythmias (AA) in ARVD/C stratified by genotype. Methods: We included 71 patients who met ARVD/C Task Force Criteria and underwent cardiac magnetic resonance (CMR) imaging and molecular genetic analysis. Indexed atrial end-diastolic volume and area-length-ejection-fraction (ALEF) were evaluated on CMR and compared to controls with idiopathic right ventricular outflow tract tachycardia (n = 40). The primary outcome was occurrence of AA (atrial fibrillation or atrial flutter) during follow-up, recorded by 12-lead ECG, Holter monitoring or implantable cardioverter defibrillator (ICD) interrogation. Results: Patients harbored a desmosomal plakophilin-2 (PKP2) (n = 37) or nondesmosomal phospholamban (PLN) (n = 14) mutation. In 20 subjects, no pathogenic mutation was identified. Compared to controls, right atrial (RA) volumes were reduced in PKP2 (P = 0.002) and comparable in PLN (P = 0.441) mutation carriers. In patients with no mutation identified, RA (P = 0.011) and left atrial (P = 0.034) volumes were increased. Bi-atrial ALEF showed no significant difference between the groups. AA were experienced by 27% of patients and occurred equally among PKP2 (30%) and no mutation identified patients (30%), but less among PLN mutation carriers (14%). Conclusion: Genotype influences atrial volume and occurrence of AA in ARVD/C. While the incidence of AA is similar in PKP2 mutation carriers and patients with no mutation identified, PKP2 mutation carriers have significantly smaller atria. This suggests a different arrhythmogenic mechanism. Answer: Yes, common-type atrial flutter can be a sign of an arrhythmogenic substrate in paroxysmal atrial fibrillation (PAF). In a study involving patients with coexistent PAF and atrial flutter (AFL), it was found that in patients with PAF and no documented common-type AFL, pulmonary vein isolation (PVI) alone prevented the occurrence of AF in a significant majority. However, in patients with AFL/PAF, cavotricuspid isthmus cryoablation and PVI were used successfully to treat sustained common-type AFL but appeared to be insufficient to prevent recurrences of AF. This suggests that in this population, AFL can indicate that non-pulmonary vein triggers are responsible for AF or that sufficient electrical remodeling has already occurred in both atria, necessitating a strategy that includes substrate modification (PUBMED:18040030). Moreover, the presence of AFL in patients with arrhythmogenic right ventricular cardiomyopathy (ARVC) and ventricular tachycardia (VT) is associated with a risk of atrial arrhythmias (AAs) even after VT ablation. A risk score for typical AFL was determined, which included factors such as age, right ventricular dysfunction, tricuspid regurgitation, right atrial dilation, and right ventricular volume. This score can help identify patients at high risk for typical AFL who may benefit from isthmus ablation at the time of VT ablation (PUBMED:37956774). Additionally, atrial arrhythmias are common in patients with ARVC, and atrial size and right ventricular dysfunction may help identify patients at increased odds of AA (PUBMED:31310380). Atrial remodeling and atrial tachyarrhythmias are also common in patients with ARVC, with AFL being associated with an increased risk for AF (PUBMED:27378141). In summary, common-type AFL can indeed be indicative of an arrhythmogenic substrate in PAF, and its presence, particularly in the context of ARVC, suggests a need for a comprehensive ablation strategy that addresses the underlying substrate to prevent recurrence of atrial arrhythmias.
Instruction: A focal marked hypoechogenicity within an isoechoic thyroid nodule: is it a focal malignancy or not? Abstracts: abstract_id: PUBMED:24938659 A focal marked hypoechogenicity within an isoechoic thyroid nodule: is it a focal malignancy or not? Background: A marked hypoechogenicity is a reliable criterion for a malignant nodule, whereas isoechogenicity is considered to be a benign ultrasonographic (US) finding. However, there have been no reports to demonstrate whether a focal marked hypoechogenicity within an isoechoic nodule is a focal malignancy. Purpose: To evaluate whether a focal marked hypoechogenicity within an isoechoic nodule on US examination is a focal malignancy and to correlate US histologic findings using surgical or core needle biopsy (CNB) specimens. Material And Methods: From January 2010 to August 2011, 28 consecutive patients (25 women, 3 men; mean age, 52.2 years) with 29 thyroid nodules, which showed a focal marked hypoechogenicity within an isoechoic nodule on US, underwent US-guided CNB for a suspicion of a focal malignancy. All nodules were confirmed by surgery (n = 6) or more than two consecutive US-guided biopsies (n = 23). We evaluated the risk of malignancy, and analyzed histological components which correlated with a focal marked hypoechogenic area within an isoechoic nodule on US. Statistical analysis was performed using the Mann-Whitney U test and Fisher's exact test. Results: Of the 29 nodules, 25 were benign (86.2%) and four were malignant (13.8%). The peripheral location of focal marked hypoechogenicity within an isoechoic nodule was the only significant US finding that suggested malignancy (P = 0.03). All benign focal marked hypoechogenicities were histologically composed of severe fibrosis with a few follicular cells with varying frequencies of hemosiderin, calcification, infarction, and granulation tissue. Conclusion: Although a focal marked hypoechogenicity within an isoechoic nodule is usually benign, the peripheral location of a focal marked hypoechogenicity may be a malignancy. The histologic information obtained by CNB specimen may reduce the needs for repetitive fine needle aspirations or diagnostic surgery. abstract_id: PUBMED:29760786 Ultrasound criteria for risk stratification of thyroid nodules in the previously iodine deficient area of Austria - a single centre, retrospective analysis. Background: We aimed to study the validity of six published ultrasound criteria for risk stratification of thyroid nodules in the former severely iodine deficient population of Austria. Methods: Retrospective, single centre, observer blinded study design. All patients with a history of thyroidectomy due to nodules seen in the centre between 2004 and 2014 with preoperative in-house sonography and documented postoperative histology were analyzed (n = 195). A board of five experienced thyroidologists evaluated the images of 45 papillary carcinomas, 8 follicular carcinomas, and 142 benign nodules regarding the following criteria: mild hypoechogenicity, marked hypoechogenicity, microlobulated or irregular margins, microcalcifications, taller than wide shape, missing thin halo. Results: All criteria but mild hypoechogenicity were significantly more frequent in thyroid cancer than in benign nodules. The number of positive criteria was significantly higher in cancer (2.79 ± 1.35) than in benign nodules (1.73 ± 1.18; p &lt; 0.001). Thus, with a cut-off of two or more positive criteria, a sensitivity of 85% and a specificity of 45% were reached to predict malignancy in this sample of thyroid nodules. As expected, the findings were even more pronounced in papillary cancer only (2.98 ± 1.32 vs. 1.73 ± 1.18, p &lt; 0.001). The six ultrasound criteria could not identify follicular cancer. Conclusion: Our findings support the recently published EU-TIRADS score. Apart from mild hypoechogenicity, the analyzed ultrasound criteria can be applied for risk stratification of thyroid nodules in the previously severely iodine deficient population of Austria. abstract_id: PUBMED:31732777 Ultrasound malignancy risk stratification of thyroid nodules based on the degree of hypoechogenicity and echotexture. Objectives: The ultrasound (US) lexicon of nodule echogenicity and echotexture is one of the major differences among various risk stratification systems of thyroid nodules. This study aimed to stratify the US malignancy risk of thyroid nodules based on their degree of hypoechogenicity and echotexture. Material And Methods: This retrospective study included a total of 2255 consecutive thyroid nodules (≥ 1 cm) with final diagnoses (malignancy rate, 13%) from 2011 to 2016. Thyroid nodules were stratified according to the US degree of hypoechogenicity (mild, moderate, or marked hypoechogenicity) and echotexture (homogeneous vs. heterogeneous). The calculated malignancy risk was compared between each category. Results: There was no significant difference of malignancy risk between the homogeneous markedly hypoechoic and moderately hypoechoic nodules (p ≥ .18). However, the malignancy risks of markedly and moderately hypoechoic nodules were significantly higher than those of mildly hypoechoic nodules (p &lt; .001). Heterogeneous predominantly hypoechoic thyroid nodules showed a significantly higher malignancy risk than predominantly iso- or hyperechoic thyroid nodules (p &lt; .001). There were no significant differences of malignancy risk between heterogeneous predominantly hypoechoic and homogeneous hypoechoic nodules according to the degree of hypoechogenicity (p ≥ .12) and between heterogeneous predominantly iso- or hyperechoic nodules and homogeneous iso- or hyperechoic thyroid nodules (p = .36). Conclusions: The malignancy risk of nodule hypoechogenicity is stratified as mild vs. moderate to marked hypoechogenicity, and the malignancy risk of nodules with heterogeneous echotexture is stratified by the predominant echogenicity of the nodules. Key Points: • Thyroid nodule echogenicity is categorized as marked, moderate, or mild hypoechogenicity and iso- or hyperechogenicity with the reference standard of adjacent thyroid tissue and anterior neck muscles. • The malignancy risk of thyroid nodule echogenicity is stratified as iso- or hyperechoic vs. mild vs. moderate or marked hypoechogenicity. • The malignancy risk of nodules with heterogeneous echotexture is stratified by the predominant echogenicity. abstract_id: PUBMED:36198861 Malignancy risk stratification of thyroid nodules according to echotexture and degree of hypoechogenicity: a retrospective multicenter validation study. Various risk stratification systems show discrepancies in the ultrasound lexicon of nodule echotexture and hypoechogenicity. This study aimed to determine the malignancy risk of thyroid nodules according to their echotexture and degree of hypoechogenicity. From June to September 2015, we retrospectively evaluated 5601 thyroid nodules with final diagnoses from 26 institutions. Nodules were stratified according to the echotexture (homogeneous vs. heterogeneous) and degree of hypoechogenicity (mild, moderate, or marked). We calculated the malignancy risk according to composition and suspicious features. Heterogeneous hypoechoic nodules showed a significantly higher malignancy risk than heterogeneous isoechoic nodules (P ≤ 0.017), except in partially cystic nodules. Malignancy risks were not significantly different between homogeneous versus heterogeneous nodules in both hypoechoic (P ≥ 0.086) and iso- hyperechoic nodules (P ≥ 0.05). Heterogeneous iso-hyperechoic nodules without suspicious features showed a low malignancy risk. The malignancy risks of markedly and moderately hypoechoic nodules were not significantly different in all subgroups (P ≥ 0.48). Marked or moderately hypoechoic nodules showed a significantly higher risk than mild hypoechoic (P ≤ 0.016) nodules. The predominant echogenicity effectively stratifies the malignancy risk of nodules with heterogeneous echotexture. The degree of hypoechogenicity could be stratified as mild versus moderate to marked hypoechogenicity. abstract_id: PUBMED:37252697 Does a three-degree hypoechogenicity grading improve ultrasound thyroid nodule risk stratification and affect the TI-RADS 4 category? A retrospective observational study. Objective: The aim of this study was to determine whether classifying hypoechogenicity in three degrees (mild, moderate, and marked) could improve the distinction between benign and malignant nodules and whether such an approach could influence Category 4 of the Thyroid Imaging Reporting and Data System (TI-RADS). Materials And Methods: In total, 2,574 nodules submitted to fine needle aspiration, classified by the Bethesda System, were retrospectively assessed. Further, a subanalysis considering solid nodules without any additional suspicious findings (n = 565) was performed with the purpose of evaluating mainly TI-RADS 4 nodules. Results: Mild hypoechogenicity was significantly less related to malignancy (odds ratio [OR]: 1.409; CI: 1.086-1.829; p = 0.01), compared to moderate (OR: 4.775; CI: 3.700-6.163; p &lt; 0.001) and marked hypoechogenicity (OR: 8.540; CI: 6.355-11.445; p &lt; 0.001). In addition, mild hypoechogenicity (20.7%) and iso-hyperechogenicity (20.5%) presented a similar rate in the malignant sample. Regarding the subanalysis, no significant association was found between mildly hypoechoic solid nodules and cancer. Conclusion: Stratifying hypoechogenicity into three degrees influences the confidence in the assessment of the rate of malignancy, indicating that mild hypoechogenicity has a unique low-risk biological behavior that resembles iso-hyperechogenicity, but with minor malignant potential when compared to moderate and marked hypoechogenicity, with special influence on the TI-RADS 4 category. abstract_id: PUBMED:34771743 Impact of the Hypoechogenicity Criteria on Thyroid Nodule Malignancy Risk Stratification Performance by Different TIRADS Systems. Background: Various Thyroid Imaging and Reporting data systems (TIRADS) are used worldwide for risk stratification of thyroid nodules. Their sensitivity is high, while the specificity is suboptimal. The aim of the study was to compare several TIRADS systems and evaluate the effect of hypoechogenicity as a sign of risk of malignancy on the overall assessment of diagnostic accuracy. Methods: The prospective study includes 274 patients with 289 thyroid nodules to whom US and risk of malignancy were assessed according to four TIRADS systems-European (EU-TIRADS), Korean (K-TIRADS), TIRADS by American College of Radiology (ACR TIRADS), and modified Kwak et al. TIRADS (L-TIRADS) systems, in which mild hypoechogenicity is not included in malignancy risk suggestive signs. For all thyroid nodules, a fine needle aspiration (FNA) biopsy was performed and evaluated according to the Bethesda system. For all systems, diagnostic accuracy was calculated. Results: Assessing the echogenicity of the thyroid nodules: from 81 of isoechogenic nodules, 2 were malignant (2.1%), from 151 mild hypoechogenic, 18 (12%) were malignant, and from 48 marked hypoechogenic nodules, 16 (33%) were malignant. In 80 thyroid nodules, mild hypoechogenicity was the only sign of malignancy and none appeared malignant. Assessing various TIRADS systems on the same cohort, sensitivity, specificity, PPV, NPV, and accuracy, firstly for EU-TIRADS, they were 97.2%; 39.9%; 18.7%; 99.0%, and 73.3%, respectively; for K-TIRADS they were 97.2%; 46.6%; 20.6%; 99.2%, and 53.9%; for ACR-TIRADS they were 97.2%; 41.1%, 19.0%; 99.0%, and 48.0%, respectively; finally, for L-TIRADS they were 80.6%; 72.7%; 29.6%; 96.3%, and 73.3%. Conclusions: This comparative research has highlighted that applying different TIRADS systems can alter the number of necessary biopsies by re-categorization of the thyroid nodules. The main pattern that affected differences was inconsistent hypoechogenicity interpretation, giving the accuracy superiority to the systems that raise the malignancy risk with marked hypoechogenicity, at the same time with minor compensation for sensitivity. abstract_id: PUBMED:27683740 Focal Nodular Hashimoto's Thyroiditis: Comparison of Ultrasonographic Features with Malignant and Other Benign Nodules. Introduction: Hashimoto's thyroiditis (HT) can present as focal nodular disease. This study aimed to determine the distinguishing sonographic features of nodules in biopsy-proven focal HT. Materials And Methods: The study included 388 thyroid nodules from 310 patients who underwent ultrasound-guided fine-needle aspiration biopsy (FNAB). There were 28 focal HT, 27 malignant and 333 other benign nodules. Sonographic features of focal HT nodules on prebiopsy ultrasound were compared with malignant nodules and other benign nodules using multinomial logistic regression adjusting for the correlation between multiple nodules obtained from the same patient. Results: Most focal HT nodules were purely solid (92.8%), iso-hyperechoic (70.4%), had regular margins (75.0%) and central vascularity (85.7%). Hypoechogenicity (29.6% vs 42.3%; P = 0.017) and microcalcifications (3.6% vs 44.4%; P = 0.003) were significantly less common in focal HT than malignant nodules. None of the focal HT nodules demonstrated marked hypoechogenicity, irregular margins or cervical lymphadenopathy, which are traditionally associated with malignancy. Compared to other benign nodules, focal HT nodules were significantly more likely to be purely solid (92.8% vs 49.0%; P = 0.016), ill-defined (25.0% vs 7.0%; P = 0.004) and lack comet-tail artefacts (92.9% vs 66.1%; P = 0.012), which in combination were 17.9% sensitive and 94.6% specific for focal HT. Conclusion: Awareness of the above-described sonographic appearances of focal HT may aid in differentiating them from malignant nodules and risk-stratify for FNAB. While there is substantial overlap with other benign nodules, a combination of the above-mentioned 3 ultrasound features is highly specific for focal HT and can prompt further serological evaluation in clinically unsuspected HT. abstract_id: PUBMED:22608398 "Focal thyroid inferno" on color Doppler ultrasonography: a specific feature of focal Hashimoto's thyroiditis. Purpose: To evaluate color-Doppler features predictive of focal Hashimoto's thyroiditis. Materials And Methods: A total of 521 patients with 561 thyroid nodules that underwent surgeries or gun biopsies were included in this study. These nodules were divided into three groups: focal Hashimoto's thyroiditis (104 nodules in 101 patients), benignity other than focal Hashimoto's thyroiditis (73 nodules in 70 patients), and malignancy (358 nodules in 350 patients). On color Doppler sonography, four vascularity types were determined as: hypovascularity, marked internal flow, marked peripheral flow and focal thyroid inferno. The χ2 test was performed to seek the potential vascularity type with the predictive ability of certain thyroid pathology. Furthermore, the gray-scale features of each nodule were also studied. Results: The vascularity type I (hypovascularity) was more often seen in focal Hashimoto's thyroiditis than other benignity and malignancy (46% vs. 20.5% and 19%). While the type II (marked internal flow) showed the opposite tendency (26.9% [focal Hashimoto's thyroiditis] vs. 45.2% [other benignity] and 52.8% [malignancy]). However, type III (marked peripheral flow) was unable to predict any thyroid pathology. Importantly, type IV (focal thyroid inferno) was exclusive to focal Hashimoto's thyroiditis. All 8 type IV nodules appeared to be solid, hypoechoic, and well-defined. Using "focal thyroid inferno" as an indicator of FHT, the diagnostic sensitivity and specificity were 7.7% and 100% respectively. Conclusions: The vascularity type of "focal thyroid inferno" is specific for focal Hashimoto thyroiditis. Recognition of this particular feature may avoid unnecessary interventional procedures for some solid hypoechoic thyroid nodules suspicious of malignancy. abstract_id: PUBMED:35208581 Non-Marked Hypoechogenic Nodules: Multicenter Study on the Thyroid Malignancy Risk Stratification and Accuracy Based on TIRADS Systems Comparison. Background and Objectives: The aim of the study was to evaluate the predictive value of the ultrasound criterion "non-marked hypoechogenicity" for malignancy and to determine whether classification of these nodules as TIRADS 3 could improve the overall accuracy of consequently adjusted M-TIRADS score. Materials and Methods: A total of 767 patients with 795 thyroid nodules were subject to ultrasonography examination and ultrasound-guided fine needle aspiration biopsy. Nodules were classified by Kwak TIRADS and modified (M-TIRADS) categories 4A, 4B, and 5 according to number of suspicious US features (marked hypoechogenicity, microlobulated or irregular margins, microcalcifications, taller-than-wide shape, metastatic lymph nodes). Non-marked hypoechoic nodules were classified as TIRADS 3. Results: Thyroid nodules were classified as TIRADS 2, 3, 4A, 4B, and 5 in 14.5, 57.5, 14.2, 8.1, and 5.7%, respectively. Only histopathologic results (125 nodules underwent surgery) and highly specific cytology results (Bethesda II, VI) were accepted as a standard of reference, forming a sub-cohort of 562/795 nodules (70.7%). Malignancy was found in 7.7%. Overall, M-TIRADS showed sensitivity/specificity of 93.02/81.31%, and for PPV/NPV, these were 29.2/99.29%, respectively (OR-18.62). Irregular margins showed the highest sensitivity and specificity (75.68/93.74%, respectively). In TIRADS 3 category, 37.2% nodules were isoechoic, 6.6% hyperechoic, and 52.2% hypoechoic (there was no difference of malignancy risk in hypoechoic nodules between M-TIRADS and Kwak systems-0.9 vs. 0.8, respectively). Accuracy of M-TIRADS classification in this cohort was 78.26% vs. 48.11% for Kwak. Conclusions: The non-marked hypoechoic nodule pattern correlated with low risk of malignancy; classification of these nodules as TIRADS 3 significantly improved the predictive value and overall accuracy of the proposed M-TIRADS scoring with malignancy risk increase in TIRADS 4 categories by 20%; and no significant alteration of malignancy risk in TIRADS 3 could contribute to reducing overdiagnosis, obviating the need for FNA. abstract_id: PUBMED:26639939 Hyalinizing trabecular tumor of the thyroid: diagnosis of a rare tumor using ultrasonography, cytology, and intraoperative frozen sections. Purpose: The goal of this study was to evaluate the clinicopathological and imaging features of thyroid nodules surgically diagnosed as hyaline trabecular tumor (HTT), and to assess the role of cytology and frozen sections (FS) in the diagnosis of HTT. Methods: This study included 21 thyroid nodules in 21 patients treated from August 2005 to March 2015 (mean age, 53.3 years) who were either diagnosed as HTT or had HTT suggested as a possible diagnosis based on cytology, FS, or the final pathology report. Patients' medical records were retrospectively reviewed for cytopathologic results and outcomes during the course of follow-up. Sonograms were reviewed and categorized. Results: Twelve nodules from 12 patients were surgically confirmed as HTT. Ultrasonography (US)-guided fine needle aspiration (FNA) was performed on 11 nodules, of which six (54.5%) were papillary thyroid carcinoma (PTC) or suspicious for PTC and three (27.3%) were HTT or suspicious for HTT. Intraoperative FS suggested the possibility of HTT in seven nodules, of which four (57.1%) were confirmed as HTT. US-FNA suggested the diagnosis of HTT in 10 nodules, of which three (30.0%) were confirmed as HTT. Common US features of the 12 pathologically confirmed cases of HTT were hypoechogenicity or marked hypoechogenicity (83.4%), absence of calcifications (91.7%), parallel shape (100.0%), presence of vascularity (75.0%), and probable benignity (58.3%). Conclusion: HTT should be included in the differential diagnosis of solid tumors with hypoechogenicity or marked hypoechogenicity and otherwise benign US features that have been diagnosed as PTC through cytology. Answer: The presence of a focal marked hypoechogenicity within an isoechoic thyroid nodule on ultrasound examination raises the question of whether it is indicative of a focal malignancy. According to a study by PUBMED:24938659, although a focal marked hypoechogenicity within an isoechoic nodule is usually benign, the peripheral location of such a hypoechogenicity may suggest malignancy. In their study, 86.2% of the nodules with focal marked hypoechogenicity were benign, while 13.8% were malignant. The benign focal marked hypoechogenicities were histologically composed of severe fibrosis with a few follicular cells and varying frequencies of hemosiderin, calcification, infarction, and granulation tissue. The study concluded that the histologic information obtained by core needle biopsy (CNB) specimen may reduce the need for repetitive fine needle aspirations or diagnostic surgery. In the context of thyroid nodule evaluation, hypoechogenicity is a significant ultrasonographic feature used for risk stratification. Different degrees of hypoechogenicity (mild, moderate, or marked) have been associated with varying risks of malignancy. For instance, PUBMED:31732777 found that the malignancy risks of markedly and moderately hypoechoic nodules were significantly higher than those of mildly hypoechoic nodules. Similarly, PUBMED:36198861 reported that marked or moderately hypoechoic nodules showed a significantly higher risk than mild hypoechoic nodules, and the degree of hypoechogenicity could be stratified as mild versus moderate to marked hypoechogenicity. Therefore, while a focal marked hypoechogenicity within an isoechoic thyroid nodule is not definitively indicative of a focal malignancy, it is a feature that warrants further investigation due to the increased risk associated with such a finding. The peripheral location of the hypoechogenicity and the degree of hypoechogenicity are important factors to consider in the risk assessment and management of these nodules.
Instruction: Does it make a difference how and when you take your calcium? Abstracts: abstract_id: PUBMED:11522557 Does it make a difference how and when you take your calcium? The acute effects of calcium on calcium and bone metabolism. Background: Calcium supplements are widely used to prevent osteoporosis. However, little is known about the metabolic effects of different dosages and of the timing of the dosages. Objective: The aim was to study the effects of the timing of the dose (study 1), the effects of the size of the dose (study 2), and the effects of small repetitive doses (study 3) of calcium on calcium and bone metabolism in women. Design: The investigation was conducted in 3 parts, each with 10 participants. In study 1, calcium loads (0 and 25 mg/kg body wt) were taken at 0900 and 2100. In study 2, calcium loads of 0, 250, and 1000 mg were taken at 0900. In study 3, calcium loads of 0 and 200 mg were taken 4 times/d. Markers of calcium and bone metabolism were followed. Results: There was no significant difference in the response of serum parathyroid hormone (PTH) to the calcium load taken at 0900 and that at 2100. There was a significant dose-response effect of the calcium load on serum ionized calcium (P = 0.00005) and serum PTH (P = 0.0003). Small calcium doses (200 mg) taken 4 times/d kept the PTH secretion at a lower level than during the control day (P = 0.016). None of the doses caused significant changes in the markers of bone formation and resorption measured. Conclusions: The calcium loads had no significant effect on the markers of bone formation and resorption measured, although even small calcium doses decreased serum PTH and increased serum ionized calcium concentrations rapidly. The effect was similar whether calcium was taken in the morning or in the evening. abstract_id: PUBMED:30687423 Differences in Loading and Morphological Structure of the Take-off and Non-take-off Limb in Athletics Jumping Events. The objective of the study was to assess differences between the take-off and non-take-off limbs of athletes in track-and-field jumping events based on a segmental analysis of body composition as well as kinetic analysis. The research included 19 participants (10 males, 9 females) with an average age of 18.1 ± 2.8 years. We measured body height, body mass, body composition (body fat, fat free mass, bone mineral content and bone density) and segmental distribution of these variables. To assess strength of the lower limbs, we performed reaction force analysis during take-off and run symmetry. The difference in the representation of soft tissues between the take-off and non-take-off limbs was not significant; the differences were 0.06%, 0 kg in body fat and 0.01 kg in fat free mass. The differences in the values of bone matter were significant. The bone mineral content was 0.05 kg higher in the take-off limb (p &lt; 0.001), and bone density was 0.07 g/cm2 higher (p &lt; 0.001); the practical significance of the difference was intermediate (d = 0.5). Kinetic analysis showed that athletes exerted greater force on the pad with the take-off limb than the non-take-off limb when taking off while using arms in the first peak of the vertical force. The difference determined in this type of take-off was statistically significant (p &lt; 0.05); the practical significance of the difference was medium (d = 0.7). The difference in the second peak of the vertical reaction force in the take-off arm was not statistically significant. The differences were reflected in the different bone matter compositions. abstract_id: PUBMED:34455650 Improving and evaluating the adhesion and stability of make-up by enhancing the affinity between skin/make-up layer. Purpose: Make-up clumps, bumps and collapses are the three factors that determine how well make-up has been performed. The purpose of this study is to reduce the three factors mentioned above by using amphiphilic substances to increase the affinity between the skin and the make-up layer. In addition, it aims to evaluate the improvement of the make-up layer by developing an objective make-up layer evaluation method. Methods: Experiments were performed in an attempt to increase the affinity between the skin and the make-up layer by minimizing the difference in surface energy between the two. Multiple types of artificial skin (leather and bio-skin) were used and treated to form the liquid foundation layer. Qualitative evaluation of the make-up layer was conducted by analyzing the surface, cross-section, and fracture area of the make-up layer, using the evaluation method proposed in this study. Results: After applying this method and taking measurements by 3D surface analysis, the surface roughness of the make-up layer reduced by 46%, and the maximum thickness of the make-up layer reduced by about 50% in comparison with the control group (method not applied). In the case of the make-up layer to which this method was applied, two-dimensional cross-sectional Scanning Electron Microscope (SEM) image analysis confirmed that agglomeration was reduced, and the thickness of the make-up layer was also reduced by an average of 54%. According to this result, the technique of increasing the affinity between the skin and the make-up layer reduces the level of aggregation of make-up and encourages the formation of a uniform and thin make-up layer. Also, the fracture area after motion simulation was reduced by 33%. These results indicate that the method of increasing the affinity between skin/make-up membranes positively affects the formation of a uniform make-up layer. Conclusion: Increasing the affinity by reducing the surface energy between the skin and the make-up layer plays an important role in forming a thin and uniform make-up layer by improving the problems of lifting, agglomeration, and collapse of the make-up. In addition, it has been confirmed that through this method, the quality of consumer experience related to make-up satisfaction can be improved. The results show that objective analyses of make-up help the understanding of the quality of consumer experience on make-up. abstract_id: PUBMED:23590385 Investigation on the use of expired make-up and microbiological contamination of mascaras. Objectives: This study investigated the habits of female students regarding make-up use, and quantifies the microbiological contamination of mascaras worn by this population. Methods: To this end, 44 students answered a structured questionnaire to evaluate the use of expired make-up, shared usage and reports of adverse effects. Subsequently, make-up samples were collected to check the manufacturing registration and the expiration date and its visibility on the label. The total counts of microorganisms and identification of Pseudomonas aeruginosa and Staphylococcus aureus in mascara samples collected were performed as described in the Brazilian Pharmacopea (4th edition) Results: According to the results obtained, 97.9 % (43/44) of participants reported that they use or have previously used make-up after the expiration date, with mascara being the most frequently mentioned product. It was observed that on the sample collection date, 70.5% (31/44) of the students had some type of expired make-up. The microbiological analysis of 40 mascara samples revealed 2.54 ± 1.76 10(4) UFC mL(-1) bacteria and 2.55 ± 1.54 10(4) UFC mL(-1) fungi. Analysis revealed the presence of S. aureus in 79% of samples and of P. aeruginosa in 13%. Conclusion: The results are interesting because they show that women tend to continue to use make-up beyond the expiry date. Frequently, these products have a high level of contamination with pathogenic microorganisms. abstract_id: PUBMED:35882496 Frailty on the acute medical take. Frailty is a prevalent condition in urgent care settings associated with an increased risk of adverse events. Frailty commonly presents on the acute medical take in the form of geriatric syndromes, which include falls, delirium and immobility. Comprehensive geriatric assessment is the evidence-based holistic approach to assessing and managing people with frailty. This multidimensional and interdisciplinary process is generally specialist led, however, acute medical teams can make important contributions through early identification and grading of frailty, and proactive management of geriatric syndromes. abstract_id: PUBMED:36397838 Scientific counterfactuals as make-believe. Counterfactuals abound in science, especially when reasoning about and with models. This often requires entertaining counterfactual conditionals with nomologically or metaphysically impossible antecedents, namely, counternomics or counterpossibles. In this paper I defend the make-believe view of scientific counterfactuals, a naturalised fiction-based account of counterfactuals in science which provides a means to evaluate their meanings independently of the possibility of the states of affairs their antecedents describe, and under which they have non-trivial truth-values. Fiction is here understood as imagination (in contrast with its most typical association with falsity), characterised as a propositional attitude of pretense or 'make-believe' (Walton 1990). The application of this theory to scientific counterfactuals makes their evaluation a game of make-believe: a counterfactual is (fictionally) true iff its antecedent and the rules of the game prescribe the imagining of its consequent (Kimpton-Nye 2020). The result is a practice-based account of counterfactuals and counterfactual reasoning in science which incorporates insights from theoretical and experimental analytic philosophy as well as cognitive science. This way, the make-believe view of scientific counterfactuals shows that the evaluation of scientific counterfactuals is none other than a question of scientific representation in disguise. abstract_id: PUBMED:38467070 Landing and take-off capabilities of bioinspiredaerial vehicles: a review. Bioinspired Flapping-Wing Micro Aerial Vehicles (FWMAVs) have&amp;#xD;emerged over the last two decades as a promising new type of robot. Their high thrustto-&amp;#xD;weight ratio, versatility, safety, and maneuverability, especially at small scales,&amp;#xD;could make them more suitable than fixed-wing and multi-rotor vehicles for various&amp;#xD;applications, especially in cluttered, confined environments and in close proximity to&amp;#xD;humans, flora, and fauna. Unlike natural flyers, however, most FWMAVs currently&amp;#xD;have limited take-off and landing capabilities. Natural flyers are able to take off&amp;#xD;and land effortlessly from a wide variety of surfaces and in complex environments.&amp;#xD;Mimicking such capabilities on flapping-wing robots would considerably enhance their&amp;#xD;practical usage. This review presents an overview of take-off and landing techniques for&amp;#xD;FWMAVs, covering different approaches and mechanism designs, as well as dynamics&amp;#xD;and control aspects. The special case of perching is also included. As well as discussing&amp;#xD;solutions investigated for FWMAVs specifically, we also present solutions that have&amp;#xD;been developed for different types of robots but may be applicable to flapping-wing&amp;#xD;ones. Different approaches are compared and their suitability for different applications&amp;#xD;and types of robots is assessed. Moreover, research and technology gaps are identified,&amp;#xD;and promising future work directions are identified. abstract_id: PUBMED:36154857 Microplastics in take-out food: Are we over taking it? Take-out food has become increasingly prevalent due to the fast pace of people's life. However, few study has been done on microplastics in take-out food. Contacting with disposable plastic containers, take-out food may be contaminated with microplastics. In the present study, abundance and characteristics of microplastics in total of 146 take-out food samples including solid food samples and beverage samples (bubble tea and coffee) were determined and identified. The mean abundance of microplastics in take-out food was 639 items kg-1, with the highest value in rice and the lowest value in coffee. Fragments shape, transparent color and sizes ≤ 500 μm were the main characteristics of microplastics in those food, and polyethylene was the main polymer type. Our results indicated that microplastics in take-out food was influenced by food categories and cooking methods, as well as food packaging materials. Approximately 170-638 items of microplastics may be consumed by people who order take-out food 1-2 times weekly. abstract_id: PUBMED:20368857 Evaluation of skin graft take following post-burn raw area in normovolaemic anaemia. Background: Traditional wisdom is that wound healing is directly related to haemoglobin level in the blood; therefore blood transfusion is given in anaemic patients to raise the haemoglobin level for better wound healing. Methods: Evaluation of wound healing in the form of split thickness skin graft take was done in 35 normovolaemic anaemic patients (haemoglobin level of &lt; 10 gm/ dl) and compared with control group (patients with haemoglobin level of 10 or &gt; 10 gm/ dl). Results: There was no statistically significant difference in mean graft take between the two groups. Conclusion: It is not mandatory to keep haemoglobin level at or &gt;10 g/dL or PCV value at or &gt;30% for skin graft take, as mild to moderate anaemia per se does not cause any deleterious effect on wound healing; provided perfusion is maintained by adequate circulatory volume. Prophylactic transfusion to increase the oxygen carrying capacity of the blood for the purpose of wound healing is not indicated in asymptomatic normovolemic anaemic patients (with haemoglobin levels greater than 6g/dL) without significant cardiovascular or pulmonary disease. abstract_id: PUBMED:34623152 A Loser-Take-All DNA Circuit. DNA-based neural networks are a type of DNA circuit capable of molecular pattern recognition tasks. Winner-take-all DNA networks have been developed to scale up the complexity of molecular pattern recognition with a simple molecular implementation. This simplicity was achieved by replacing negative weights in individual neurons with lateral inhibition and competition across neurons, eliminating the need for dual-rail representation. Here we introduce a new type of DNA circuit that is called loser-take-all: an output signal is ON if and only if the corresponding input has the smallest analog value among all inputs. We develop a DNA strand-displacement implementation of loser-take-all circuits that is cascadable without dual-rail representation, maintaining the simplicity desired for scalability. We characterize the impact of effective signal concentrations and reaction rates on the circuit performance, and derive solutions for compensating undesired signal loss and rate differences. Using these approaches, we successfully demonstrate a three-input loser-take-all circuit with nine unique input combinations. Complementary to winner-take-all, loser-take-all DNA circuits could be used for recognition of molecular patterns based on their least similarities to a set of memories, allowing classification decisions for patterns that are extremely noisy. Moreover, the design principle of loser-take-all could be more generally applied in other DNA circuit implementations including k-winner-take-all. Answer: According to the study described in abstract PUBMED:11522557, it does not make a significant difference in the timing of calcium intake with respect to the effects on serum parathyroid hormone (PTH) levels and serum ionized calcium concentrations. The study investigated the effects of calcium dose timing (morning at 0900 vs. evening at 2100), size of the dose, and frequency of small doses on calcium and bone metabolism in women. The results indicated that there was no significant difference in the response of serum PTH to the calcium load taken in the morning compared to the evening. However, the study did find a significant dose-response effect of the calcium load on serum ionized calcium and serum PTH, with even small calcium doses (200 mg) taken four times a day keeping the PTH secretion at a lower level than during the control day. Despite these changes in serum PTH and ionized calcium, none of the doses caused significant changes in the markers of bone formation and resorption measured. Therefore, while the timing of calcium intake (morning vs. evening) may not be critical, the size and frequency of the doses can have an impact on calcium and bone metabolism.
Instruction: Haemodialysis-unresponsive blood pressure: cardiovascular mortality predictor? Abstracts: abstract_id: PUBMED:16221091 Haemodialysis-unresponsive blood pressure: cardiovascular mortality predictor? Aim: The importance of 'conventional' cardiovascular risk factors in haemodialysis (HD) patients has been questioned following evidence that pre-HD blood pressure (BP) might be inversely related to mortality in contrast to post-HD BP. To evaluate this reverse BP epidemiology in HD patients, HD-induced BP changes were compared with aortic pulse wave velocity (PWV), an independent predictor of cardiovascular mortality. Method: Aortic PWV was evaluated in a limited care HD cohort, all of whom were asymptomatic of cardiovascular disease. Results: Of 47 limited care patients, 45% were classified as HD responsive, with a 17% fall in mean arterial pressure compared with a 6% increase in the HD-unresponsive group. While there were no significant differences between the two groups in traditional vascular disease risk factors or interdialytic weight loss, PWV was significantly elevated in the HD-unresponsive group (12.9 +/- 2.7 compared with 10.8 +/- 2.9; P &lt; 0.05). Furthermore, there was a positive correlation between the change in BP during HD and PWV in all subjects (r = 0.500; P &lt; 0.001 for systolic blood pressure (SBP), a correlation that also existed for diastolic blood pressure (DBP) (P &lt; 0.01). Conclusion: This study suggests that patients with HD-unresponsive BP are more likely to have vascular disease and this association between PWV and HD-induced BP changes might partly explain the apparent paradox of pre- and post-HD BP with mortality. It is proposed that a population with elevated post-HD BP is more likely to be composed of subjects with vascular disease (overt or occult), in contrast to a group with high pre-HD BP, which will be more heterogeneous. abstract_id: PUBMED:22470403 Effect of inflammation on the relationship of pulse pressure and mortality in haemodialysis. Background/aim: The effect of hypertension on mortality in haemodialysis patients is controversial and can be confounded by non-traditional risk factors like systemic inflammation. This study examined the effect of systolic blood pressure (SBP), diastolic blood pressure (DBP), and pulse pressure (PP) on mortality in haemodialysis patients, separately with and without markers of systemic inflammation. Methods: Data from the United States Renal Data System were analysed for 9,862 patients receiving haemodialysis on December 31, 1993, followed through May 2005. Results: In Cox regression analysis, increased age, diabetes, low albumin, high white blood count, low cholesterol, low haemoglobin, high phosphate, low DBP, and cardiovascular comorbidity were associated with high mortality, but SBP was not. Elevated PP adjusted for SBP, age, diabetes, haemoglobin, albumin, cholesterol, calcium, phosphate, parathyroid hormone, and white blood count was associated with higher mortality [adjusted hazard ratio, PP 1.006 (95% confidence interval, CI, 1.002-1.010); SBP 0.993 (95% CI 0.990-0.996)]. In dual models, PP adjusted for SBP then DBP was associated with higher mortality [PP 1.029 (95% CI 1.027-1.032); SBP 0.981 (95% CI 0.979-0.983); PP 1.010 (95% CI 1.008-1.011); DBP 0.981 (95% CI 0.979-0.983)]. Increasing PP deciles &gt;70 mm Hg were associated with increasing mortality in the absence of markers of systemic inflammation (white blood count &gt;10 × 10(9)/l, albumin &lt;3.5 g/dl, diabetes), but not in their presence. Conclusion: PP was a better indicator of adverse outcome than DBP or SBP. Inflammation-associated injury may mask the effect of PP on mortality in haemodialysis patients. abstract_id: PUBMED:34300188 The Impact of Age on Mortality in Chronic Haemodialysis Popu-Lation with COVID-19. Age and chronic kidney disease have been described as mortality risk factors for coronavirus disease 2019 (COVID-19). Currently, an important percentage of patients in haemodialysis are elderly. Herein, we investigated the impact of age on mortality among haemodialysis patients with COVID-19. Data was obtained from the Spanish COVID-19 chronic kidney disease (CKD) Working Group Registry. From 18 March 2020 to 27 August 2020, 930 patients on haemodialysis affected by COVID-19 were included in the Registry. A total of 254 patients were under 65 years old and 676 were 65 years or older (elderly group). Mortality was 25.1% higher (95% CI: 22.2-28.0%) in the elderly as compared to the non-elderly group. Death from COVID-19 was increased 6.2-fold in haemodialysis patients as compared to the mortality in the general population in a similar time frame. In the multivariate Cox regression analysis, age (hazard ratio (HR) 1.59, 95% CI: 1.31-1.93), dyspnea at presentation (HR 1.51, 95% CI: 1.11-2.04), pneumonia (HR 1.74, 95% CI: 1.10-2.73) and admission to hospital (HR 4.00, 95% CI: 1.83-8.70) were identified as independent mortality risk factors in the elderly haemodialysis population. Treatment with glucocorticoids reduced the risk of death (HR 0.68, 95% CI: 0.48-0.96). In conclusion, mortality is dramatically increased in elderly haemodialysis patients with COVID-19. Our results suggest that this high risk population should be prioritized in terms of protection and vaccination. abstract_id: PUBMED:36865003 Neutrophil-to-lymphocyte and platelet-to-lymphocyte ratios: associations with mortality in a haemodialysis cohort. Background: Lymphocyte ratios reflect inflammation and have been associated with adverse outcomes in a range of diseases. We sought to determine any association between neutrophil-to-lymphocyte ratio (NLR) and platelet-to-lymphocyte ratio (PLR) and mortality in a haemodialysis cohort, including a coronavirus disease 2019 (COVID-19) infection subpopulation. Methods: A retrospective analysis was performed of adults commencing hospital haemodialysis in the West of Scotland during 2010-21. NLR and PLR were calculated from routine samples around haemodialysis initiation. Kaplan-Meier and Cox proportional hazards analyses were used to assess mortality associations. Results: In 1720 haemodialysis patients over a median of 21.9 (interquartile range 9.1-42.9) months, there were 840 all-cause deaths. NLR but not PLR was associated with all-cause mortality after multivariable adjustment [adjusted hazard ratio (aHR) for in participants with baseline NLR in quartile 4 (NLR ≥8.23) versus quartile 1 (NLR &lt;3.12) 1.63, 95% confidence interval (CI) 1.32-2.00]. The association was stronger for cardiovascular death (NLR quartile 4 versus 1 aHR 3.06, 95% CI 1.53-6.09) than for non-cardiovascular death (NLR quartile 4 versus 1 aHR 1.85, 95% CI 1.34-2.56). In the COVID-19 subpopulation, both NLR and PLR at haemodialysis initiation were associated with risk of COVID-19-related death after adjustment for age and sex (NLR: aHR 4.69, 95% CI 1.48-14.92 and PLR: aHR 3.40, 95% CI 1.02-11.36; for highest vs lowest quartiles). Conclusions: NLR is strongly associated with mortality in haemodialysis patients while the association between PLR and adverse outcomes is weaker. NLR is an inexpensive, readily available biomarker with potential utility in risk stratification of haemodialysis patients. abstract_id: PUBMED:31994850 Fetuin-A Thr256Ser gene polymorphism as a mortality predictor in patients with chronic kidney disease on maintenance haemodialysis in Medan, Indonesia. Aim To investigate an impact of Fetuin-A Thr256Ser gene polymorphism on the mortality rate of chronic kidney disease on maintenance haemodialysis patients in Indonesia. Methods This is an analytic-longitudinal observational study using survival analysis with nine-month follow up on 106 maintenance haemodialysis patients. The PCR-RFLP is used to determine Fetuin-A Thr256Ser gene polymorphism and Fetuin-A serum level measured by using ELISA methods. We use time-independent cox regression analysis to investigate factors that contribute to patient survival. Results The mean survival time of this study is 8.49±1.53 months, with a median survival of 9 months (range 1-9 months). Among 12 (11.3%) deceased patients, most of them carried GG genotype with 8.87 times risk of mortality compared to those with CC+CG genotype (p=0.005). The group of patients with IL-6 level ≥86.9 pg/mL had higher mortality with 3.64 times greater risk compared to those with IL-6 level, &lt;86.9 pg/mL (p=0.03). Conclusion This study revealed a significant dominance independent impact of the Fetuin-A Thr256Ser gene polymorphism on the survival rate of maintenance haemodialysis patients. These results suggest that genotype variation of Fetuin-A gene could be a potential marker to identify high mortality risk in Indonesia's maintenance haemodialysis patients, especially in Medan. abstract_id: PUBMED:23341833 Relative Change in NT-proBNP Level: An Important Risk Predictor of Cardiovascular Congestion in Haemodialysis Patients. Background: Cross-sectional studies have shown that B-type natriuretic peptide (BNP) and its N-terminal fragment (NT-proBNP) are predictive of cardiovascular death in haemodialysis (HD) patients. In the present study, we tested the hypothesis that monitoring NT-proBNP measurements adds further prognostic information, i.e. predicts congestive heart failure (CHF) events. Methods: In a prospective cohort of 236 HD patients, NT-proBNP levels were measured monthly during 18 months. Patients were divided according to the occurrence of CHF events. In a nested case-control study, we assessed the evolution of NT-proBNP levels. Results: On average, the 236 HD patients were followed up for 12.5 months, a period during which 44 patients developed a CHF event (half requiring hospitalisation). At baseline, patients who developed a CHF event had significantly more dilated cardiomyopathy and/or altered left ventricular ejection fraction and higher NT-proBNP levels compared with patients who did not develop a CHF event. During follow-up, we observed a significant increase in NT-proBNP levels preceding the CHF event. At a 20% relative increase of NT-proBNP, the sensitivity of NT-proBNP as a predictor of CHF events was 0.57 and the specificity 0.77. Conclusion: The relative change in NT-proBNP levels is a significant risk predictor of a CHF event. abstract_id: PUBMED:26985382 Early changes in body weight and blood pressure are associated with mortality in incident dialysis patients. Background: While much research is devoted to identifying novel biomarkers, addressing the prognostic value of routinely measured clinical parameters is of great interest. We studied early blood pressure (BP) and body weight (BW) trajectories in incident haemodialysis patients and their association with all-cause mortality. Methods: In a cohort of 357 incident patients, we obtained all records of BP and BW during the first 90 days on dialysis (over 12 800 observations) and analysed trajectories using penalized B-splines and mixed linear regression models. Baseline comorbidities and all-cause mortality (median follow-up: 2.2 years) were obtained from the French Renal Epidemiology and Information Network (REIN) registry, and the association with mortality was assessed by Cox models adjusting for baseline comorbidities. Results: During the initial 90 days on dialysis, there were non-linear decreases in BP and BW, with milder slopes after 15 days [systolic BP (SBP)] or 30 days [diastolic BP (DBP) and BW]. SBP or DBP levels at dialysis initiation and changes in BW occurring in the first month or during the following 2 months were significantly associated with survival. In multivariate models adjusting for baseline comorbidities and prescriptions, higher SBP value and BW slopes were independently associated with a lower risk of mortality. Hazard ratios of mortality and 95% confidence intervals were 0.92 (0.85-0.99) for a 10 mmHg higher SBP and 0.76 (0.66-0.88) for a 1 kg/month higher BW change on Days 30-90. Conclusions: BW loss in the first weeks on dialysis is a strong and independent predictor of mortality. Low BP is also associated with mortality and is probably the consequence of underlying cardiovascular diseases. These early markers appear to be valuable prognostic factors. abstract_id: PUBMED:32439186 Intradialytic hypotension definitions with mortality prediction capacity in a cohort of haemodialysis patients. Introduction: Intradialytic hypotension (IDH) is a common complication and is associated with higher morbidity and mortality in patients on haemodialysis. However, there is a lack of uniformity in definitions of IDH. The main objective of this study is to analyse clinical and dialysis related factors with several IDH definitions, and its relationship with morbidity and mortality in a cohort of haemodialysis patients. Methodology: Observational study with a 30-month follow-up period that includes 68 prevalent patients on haemodialysis with at least six months of treatment. We analysed 18 non-consecutive dialysis sessions (first three of each month of a six-month period), and different definitions of IDH were recorded. A positive event of IDH was defined if any definition occurred in more than 25% of the sessions studied. Using survival analysis, we analysed the prediction capacity of each IDH definition (Nadir90, Nadir100, Fall20, Fall30, Fall20Nadir90, Fall30Nadir90, KDOQI, HEMO). The relationship with non-fatal cardiovascular disease and global mortality was estimated using different Cox proportional models. Results: We found IDH definitions that occurred significantly more frequently (Nadir100: 339.8/1,000 sessions, Nadir90: 172.3/1,000 sessions) than others (KDOQI: 98/1,000 sessions, HEMO 129.9/1,000 sessions). We registered 13 fatal events with a mean follow-up of 27.12±6.84 months. A greater number of sessions with IDH according to the Nadir90 definition was a predictive factor of mortality (Log rank 5.02, p=0.025), independent according to adjusted models (HR: 3.23 [95% CI: 1.08-9.6], p=0.035). The definitions Nadir100 (HR: 4.54 [95% CI: 1.25-16.4], p=0.02) and Fall30Nadir90 (HR: 3.08 [95% CI: 1.07-8.8], p=0.03) were independent predictors of non-fatal cardiovascular disease in adjusted models. Conclusions: Intradialytic hypotension, even asymptomatic, is a predictor of mortality and non-fatal cardiovascular disease in prevalent patients on haemodialysis. abstract_id: PUBMED:30184172 The association of interdialytic blood pressure variability with cardiovascular events and all-cause mortality in haemodialysis patients. Background: Long-term pre-dialysis blood pressure variability (BPV) in haemodialysis patients is associated with increased cardiovascular risk. The association of the main haemodynamic culprit in dialysis, that is, short-term BPV, with outcomes has not been investigated. We examine the prognostic role of short-term BPV for mortality and cardiovascular events in this population. Methods: A total of 227 haemodialysis patients underwent 44-h ambulatory monitoring during a standard interval and were followed-up for 30.17 ± 17.70 months. We calculated SD, weighted SD (wSD), coefficient of variation (CV) and average real variability (ARV) of BP with validated formulas. The primary endpoint was first occurrence of all-cause death, non-fatal myocardial infarction or non-fatal stroke. Secondary endpoints were: (i) all-cause mortality, (ii) cardiovascular mortality and (iii) a combination of cardiovascular events. Results: Cumulative freedom from the primary endpoint was similar for quartiles of pre-dialysis and 44-h systolic BP (SBP), but was progressively longer for increasing quartiles of 44-h SBP-SD (P = 0.014), wSD (P = 0.007), CV (P = 0.031) and ARV (83.9, 71.9, 70.2 and 43.9% for quartiles 1-4; P &lt; 0.001). Higher quartiles of 44-h SBP-ARV were associated with higher risk of all studied outcomes. Among diastolic BPV indices, 44-h diastolic BP (DBP)-CV and 44-h DBP-ARV were associated with increased risk for the composite cardiovascular outcome. In Cox regression analysis, SBP-BPV was related to the primary endpoint, independently of SBP levels and interdialytic weight gain [ARV: hazard ratio (HR) 1.115, 95% confidence interval (95% CI) 1.048-1.185]. This association become insignificant after adjustment for pulse wave velocity (PWV; HR 1.061, 95% CI 0.989-1.137), and further attenuated after additional adjustment for age, dialysis vintage, gender, comorbidities and prevalent cardiovascular disease (HR 1.031, 95% CI 0.946-1.122). Conclusions: Increased BPV during the interdialytic interval is associated with higher risk of death and cardiovascular events, whereas ambulatory BP levels are not. This association was not independent after adjustment for PWV, other risk factors and prevalent cardiovascular disease. Short-term BPV could be a mediator promoting the adverse cardiovascular profile of haemodialysis patients. abstract_id: PUBMED:26656402 Hyponatraemia, mortality and haemodialysis: An unexplained association. Background: As in the general population, in patients on haemodialysis (HD) hyponatraemia is associated with higher mortality risk. The objective of this article was to study the relationship between predialysis serum sodium (sNa) and mortality in an HD population. We also intended to define hyponatraemia and determine the characteristics of hyponatraemic patients in terms of anthropometric data, analytical features, dialysis measurements and hydration (bioimpedance). Methods: Observational, descriptive study of a cohort of HD incident patients. The independent variable was the mean of each patient's sNa analysed during their first 6 months on HD. Results: A total of 4,153 patients were included in the study. Mean age was 64.7 years; 65.2% of the patients were male and 35% were diabetics. Mean follow-up time was 21.48 (SD) (1.31) months. sNa had a normal distribution, with a mean (SD)=138.46 (2.7) mEq/l. Body weight, diabetes mellitus, systolic blood pressure, interdialytic weight gain, total ultrafiltration, serum glucose, albumin and creatinine, vascular access and haemodialysis type, acquire significant differences between sodium quartiles. Lean tissue index (LTI) in patients with low serum sodium, Q1 (135 mEq/l), was significantly lower than the LTI of patients from the other serum sodium quartiles. Patients with sNa&lt;136 mEq/l had a higher independent mortality risk (OR=1.62) (Cox regression analysis). Conclusions: HD patients with hyponatraemia patients have a poor prognosis and present malnutrition or fluid overload. Answer: The study by McIntyre et al. (PUBMED:16221091) suggests that haemodialysis (HD)-unresponsive blood pressure (BP) may be a predictor of cardiovascular mortality in HD patients. The study found that patients with HD-unresponsive BP, who did not experience a significant fall in mean arterial pressure during HD, had significantly elevated aortic pulse wave velocity (PWV), which is an independent predictor of cardiovascular mortality. There was also a positive correlation between the change in BP during HD and PWV, indicating that patients with less responsive BP during HD are more likely to have vascular disease. This association might partly explain the paradox of pre- and post-HD BP with mortality, suggesting that elevated post-HD BP could be indicative of a population with vascular disease, as opposed to high pre-HD BP which could be more heterogeneous. Additionally, the study by Park et al. (PUBMED:22470403) found that pulse pressure (PP) was a better indicator of adverse outcomes than diastolic blood pressure (DBP) or systolic blood pressure (SBP) in haemodialysis patients. The study showed that PP was associated with higher mortality, and this association was more pronounced in the absence of markers of systemic inflammation. This suggests that PP could be a useful predictor of mortality in HD patients, particularly when inflammation is not a confounding factor. In summary, both studies provide evidence that certain BP-related parameters, such as HD-unresponsive BP and elevated PP, can be predictors of cardiovascular mortality in patients undergoing haemodialysis. These findings highlight the importance of monitoring BP changes during HD and considering them in the context of cardiovascular risk assessment for these patients.
Instruction: Does the incidence and outcome of brain metastases in locally advanced non-small cell lung cancer justify prophylactic cranial irradiation or early detection? Abstracts: abstract_id: PUBMED:15949596 Does the incidence and outcome of brain metastases in locally advanced non-small cell lung cancer justify prophylactic cranial irradiation or early detection? Objective: The radical treatment of locally advanced non-small cell lung cancer (LA-NSCLC) currently involves combined modality therapy (CMT) with the use of chemotherapy in addition to radiation therapy and/or surgery. Chemotherapy has been shown to improve survival, but does not alter brain relapse. We reviewed the outcomes of Stage IIIA and IIIB LA-NSCLC patients treated with CMT at our institution. We assessed the incidence of brain metastases and the management and outcome of these patients. Methods: Using our radiation-planning database (RSTS), we identified 230 consecutive patients from the years 1999 and 2000 who received radical radiation therapy to the lung. Extracting data from the chart, we identified 83 patients who were treated radically with chemotherapy, radiation and possibly surgery. These patients form the basis of this study. Results: At 2 years, the actuarial rates for any brain failure, first failure in the brain and sole failure in the brain were 34.2%, 24.6% and 11.0%, respectively. Age was the only factor among sex, histology, stage, weight loss and the timing of chemotherapy and radiation that predicted for an increased risk of first failure in the brain. Patients less than age 60 had a risk of 25.6% versus 11.4% for those greater than 60 (p = 0.022). Among the patients who failed first in the brain, those who had aggressive management of their brain metastases with surgical resection in addition to whole brain radiotherapy had a median survival of 26.3 months compared with 3.3 months for those treated with palliative whole brain radiotherapy alone. Conclusion: Brain metastases are common in patients with LA-NSCLC treated with CMT. These patients may benefit from either prophylactic cranial irradiation or early detection and aggressive treatment of brain metastases. abstract_id: PUBMED:27396646 Prophylactic cranial irradiation for patients with lung cancer. The incidence of brain metastases in patients with lung cancer has increased as a result of improved local and systemic control and better diagnosis from advances in brain imaging. Because brain metastases are responsible for life-threatening symptoms and serious impairment of quality of life, resulting in shortened survival, prophylactic cranial irradiation has been proposed in both small-cell lung cancer (SCLC) and non-small-cell lung cancer (NSCLC) to try to improve incidence of brain metastasis, survival, and eventually quality of life. Findings from randomised controlled trials and a meta-analysis have shown that prophylactic cranial irradiation not only reduces the incidence of brain metastases in patients with SCLC and with non-metastatic NSCLC, but also improves overall survival in patients with SCLC who respond to first-line treatment. Although prophylactic cranial irradiation is potentially associated with neurocognitive decline, this risk needs to be balanced against the potential benefit in terms of brain metastases incidence and survival. Several strategies to reduce neurotoxicity are being investigated. abstract_id: PUBMED:25806190 Potential toxicities of prophylactic cranial irradiation. Prophylactic cranial irradiation (PCI) with total doses of 20-30 Gy reduces the incidence of brain metastasis (BM) and increases survival of patients with limited and extensive-disease small-cell lung cancer (SCLC) that showed any response to chemotherapy. PCI is currently not applied in non-small-cell lung cancer (NSCLC) since it has not proven to significantly improve OS rates in stage IIIA/B, although novel data suggest that subgroups that could benefit may exist. Here we briefly review potential toxicities of PCI which have to be considered before prescribing PCI. They are mostly difficult to delineate from pre-existing risk factors which include preceding chemotherapy, patient age, paraneoplasia, as well as smoking or atherosclerosis. On the long run, this will force radiation oncologists to evaluate each patient separately and to estimate the individual risk. Where PCI is then considered to be of benefit, novel concepts, such as intensity-modulated radiotherapy and/or neuroprotective drugs with potential to lower the rates of side effects will eventually be superior to conventional therapy. This in turn will lead to a re-evaluation whether benefits might then outweigh the (lowered) risks. abstract_id: PUBMED:29277750 The Role of Prophylactic Cranial Irradiation for Non-small Cell Lung Cancer. Background: The use of prophylactic cranial irradiation (PCI) to treat brain metastases (BM) in non-small cell lung cancer (NSCLC) is restricted due to the potential associated toxicity and lack of survival benefit. BM can have a negative impact on neurocognitive function (NF) and quality of life (QOL). The aim of this review was to assess the impact of PCI on disease-specific and NF and QOL outcomes. Materials And Methods: An electronic database literature search was completed to identify relevant studies. Results: Fourteen published articles were included. PCI significantly reduced the incidence of BM, but no significant survival advantage was found. NF decline was reported in one trial. No significant difference in QOL with PCI was reported. PCI was well tolerated by the majority of patients with NSCLC and associated with a relatively low toxicity. Conclusion: PCI reduces the incidence of BM without any significant survival advantage. PCI has the potential to be beneficial in practice for certain patients with locally advanced NSCLC, based on disease factors and patient preference. abstract_id: PUBMED:25055130 Prophylactic cranial irradiation: recent outcomes and innovations. Brain metastases represent a frequent problem in several malignancies. They can shorten survival while causing significant morbidity and impairment in the patient's quality of life. Prophylactic cranial irradiation (PCI) has become an integral part of the standard of care in small cell lung cancer (SCLC), yet its role in other malignancies remains the subject of significant discussion. Its role has been extensively investigated in non-small cell lung cancer and less so for breast cancer and other malignancies. Improvements in medical care as well as in whole brain radiotherapy (WBRT) techniques may improve the risk-benefit ratio of this therapy so as to expand its role in cancer care. The use of memantine in WBRT patients as well as the use of hippocampal avoidance techniques are of particular interest in this effort. Herein, we review the history of PCI, its current use, and areas of investigation in the application of PCI. abstract_id: PUBMED:23306141 Present role of prophylactic cranial irradiation Prophylactic cranial irradiation (PCI) plays a role in the management of lung cancer patients, especially small cell lung cancer (SCLC) patients. As multimodality treatments are now able to ensure better local control and a lower rate of extracranial metastases, brain relapse has become a major concern in lung cancer. As survival is poor after development of brain metastases (BM) in spite of specific treatment, PCI has been introduced in the 1970's. PCI has been evaluated in randomized trials in both SCLC and non-small cell lung cancer (NSCLC) to reduce the incidence of BM and possibly increase survival. PCI reduces significantly the BM rate in both limited disease (LD) and extensive disease (ED) SCLC and in non-metastatic NSCLC. Considering SCLC, PCI significantly improves overall survival in LD (from 15 to 20% at 3 years) and ED (from 13 to 27% at 1 year) in patients who respond to first-line treatment; it should thus be part of the standard treatment in all responders in ED and in good responders in LD. No dose-effect relationship for PCI was demonstrated in LD SCLC patients so that the recommended dose is 25 Gy in 10 fractions. In NSCLC, even if the risk of brain dissemination is lower than in SCLC, it has become a challenging issue. Studies have identified subgroups at higher risk of brain failure. There are more local treatment possibilities for BM related to NSCLC, but most BM will eventually recur so that PCI should be reconsidered. Few randomized trials have been performed. Most of them could demonstrate a decreased incidence of BM in patients with PCI, but they were not able to show an effect on survival as they were underpowered. New trials are needed. Among long-term survivors, neuro-cognitive toxicity may be observed. Several approaches are being evaluated to reduce this possible toxicity. PCI has no place for other solid tumours at risk such as HER2+ breast cancer patients. abstract_id: PUBMED:10964996 Prophylactic cranial irradiation in small-cell lung cancer. Prophylactic cranial irradiation is now known to improve survival to a significant degree in small-cell lung cancer (SCLC) patients; this is in addition to its established role in preventing the disabling symptoms of brain metastases. New information indicates that it confers a survival benefit for limited or extensive stage SCLC patients gaining a complete response in the chest. A review of causes of cerebral dysfunction as a complication indicates that such problems can be due to suboptimal radiation fractionation, chemotherapy, or an inappropriate combination of prophylactic brain irradiation with chemotherapy. Optimum treatment with prophylactic brain irradiation has been shown not to cause adverse effects with detailed psychometric testing. Several additional sources of information can be drawn together to suggest a dose-response pattern for prophylactic brain irradiation, leading to the recommendation that a dose of 25-36 Gy is optimal, delivered in 2-3 Gy daily fractions after the completion of chest irradiation and chemotherapy. This will be better defined in future clinical trials. abstract_id: PUBMED:16258477 The role of prophylactic cranial radiation in the treatment of non-small-cell lung cancer. Patients with lung cancer face a substantial risk of developing brain metastases. Prophylactic cranial irradiation therapy has been demonstrated to significantly decrease the incidence of central nervous system metastasis and improve overall survival in patients with small-cell lung cancer. In non-small-cell lung cancer, the central nervous system is a common site of systemic failure and frequently represents the only focus of extra-thoracic disease. However, the role of prophylactic cranial irradiation in patients with non-small-cell lung cancer is less well defined. As treatment modalities improve and patients live longer their risk of developing central nervous system recurrence increases, and methods to prevent this demand exploration. abstract_id: PUBMED:17005429 Prophylactic cranial irradiation in non-small cell lung cancer Prophylactic cranial irradiation (PCI) has become part of the standard treatment in patients with small cell lung cancer (SCLC) in complete remission. Not only does it decrease the risk of brain recurrence by almost 50%, it has a significant positive effect on survival (5.4 percent increase at 3 years). As the prognosis of patients with locally advanced non-small cell lung cancer (NSCLC) has improved with combined modality treatment, brain metastases have also become an important cause of failure (10 to 30%, approaching 50% in certain studies as in SCLC). Survival after treatment of brain metastases is poor and impact on quality of life of patients is important. As in SCLC, 4 randomised evaluating PCI in NSCLC have been carried out in the seventies and early eighties. If 3 out of 4 trials have shown a significant decrease of brain metastases, none of them demonstrated any impact on survival. Thus PCI cannot be recommended as standard treatment in NSCLC, however new trials would be needed. abstract_id: PUBMED:32761788 Treatment and toxic effects of prophylactic cranial irradiation in stage II-III non-small cell lung cancer: A meta-analysis. Objective: To determine the role of prophylactic cranial irradiation (PCI) in non-small cell lung cancer (NSCLC) patients using meta-analysis. Methods: PubMed, Embase, the Cochrane Database of Systematic Review and the China National Knowledge Infrastructure databases were systematically searched for studies published between 1 January 1980 and 31 March 2019. Search terms included "non-small cell lung cancer," "prophylactic cranial irradiation" and "clinical trials." The research data extracted from above studies was analyzed by Review Manager 5.3 and Stata12.0 software. The outcomes included development of brain metastases (BMs), overall survival (OS), disease-free survival (DFS), BMs for different diagnoses, toxicity, quality of life (QoL). Results: Fifteen trials (nine RCTs and six non-RCTs) involving 2418 NSCLC patients met the inclusion criteria. There was a significant reduction in the risk of developing BM in patients who received PCI compared with those who did not (95% CI, 0.20-0.37; P &lt; 0.00001). PCI significantly reduced the BM of squamous cell carcinoma (P = 0.02), but not for adenocarcinoma (P = 0.07) and other pathological types (P = 0.29). There was a significant increase in DFS for the PCI compared to the non-PCI group (P = 0.006); however, OS did not significantly differ (P = 0.15). In addition, fatigue significantly increased in the PCI group (P = 0.0002). Cognitive disturbance showed no significant difference between PCI and non-PCI groups (P = 0.06). Conclusion: This study showed that, compared with non-PCI, PCI significantly decreased the incidence of NSCLC BM and improved the DFS of patients, and reduced the BM rate from squamous cell carcinoma. However, it showed no effect on OS and the BM rate of adenocarcinoma and other pathological types of tumors. There were limited data concerning PCI-related toxicity and QoL. Answer: The incidence and outcome of brain metastases (BM) in locally advanced non-small cell lung cancer (LA-NSCLC) have been a significant concern, as brain metastases are common in patients treated with combined modality therapy (CMT) and can lead to serious impairment of quality of life and shortened survival (PUBMED:15949596; PUBMED:27396646). Studies have shown that prophylactic cranial irradiation (PCI) significantly reduces the incidence of BM in LA-NSCLC patients (PUBMED:27396646; PUBMED:23306141; PUBMED:17005429). However, the use of PCI in NSCLC has been restricted due to potential associated toxicity and lack of survival benefit (PUBMED:29277750). In terms of survival, while PCI has been shown to improve overall survival in small-cell lung cancer (SCLC) patients who respond to first-line treatment, its impact on overall survival in NSCLC patients is less clear. Some studies have not found a significant survival advantage with the use of PCI in NSCLC (PUBMED:29277750; PUBMED:17005429), while others suggest that PCI may improve disease-free survival (DFS) but not overall survival (OS) (PUBMED:32761788). The potential toxicities associated with PCI, such as neurocognitive decline, must be balanced against the benefits in terms of reduced BM incidence and survival (PUBMED:27396646; PUBMED:25806190). Strategies to reduce neurotoxicity, such as intensity-modulated radiotherapy and neuroprotective drugs, are being investigated (PUBMED:25806190; PUBMED:25055130). Given the high incidence of BM in LA-NSCLC and the potential benefits of PCI in reducing this incidence, as well as the possibility of improved DFS, PCI may be justified for certain patient subgroups based on disease factors and patient preference (PUBMED:29277750). However, the decision to use PCI should be individualized, considering the potential risks and benefits, and new trials are needed to better define its role in NSCLC (PUBMED:23306141; PUBMED:17005429). In conclusion, while PCI reduces the incidence of BM in LA-NSCLC, its justification must be carefully weighed against the lack of a clear survival benefit and the potential for significant toxicities.
Instruction: Do Dual-Mobility Cups Reduce the Risk of Dislocation in Total Hip Arthroplasty for Fractured Neck of Femur in Patients Aged Older Than 75 Years? Abstracts: abstract_id: PUBMED:26718778 Do Dual-Mobility Cups Reduce the Risk of Dislocation in Total Hip Arthroplasty for Fractured Neck of Femur in Patients Aged Older Than 75 Years? Background: Total hip arthroplasty (THA) for intracapsular neck of femur (NOF) fracture remains debatable as it is associated with higher rates of dislocation, notably in the older part of the population. We hypothesized this risk could be limited using dual-mobility cups (DMCs). Methods: Eighty-two patients (83 hips) aged older than 75 years underwent DMC-THA using a posterolateral approach for an intracapsular NOF fracture. Results: Clinical data were collected in 45 patients at a mean of 23.8 ± 9.4 months (12.1-42 months). The mortality rates were 19% (16 patients) and 36.5% (30 patients) at 1 year postoperatively and at the last follow-up, respectively. Postoperatively, there were 2 dislocations of the large articulation (4.4%) and one intraprosthetic dislocation (2.2%), all related to technical errors. Functional results were rated at least good in 71% cases, whereas the Parker and Devane scores were stable, indicating optimal restoration of autonomy and physical activity. Conclusion: Although technically demanding, DMC-THA may prevent dislocation in intracapsular NOF fracture in elderly patients, while consistently limiting the risk of loss of independence. abstract_id: PUBMED:35454367 Dual Mobility Cups as the Routine Choice in Total Hip Arthroplasty. Background and Objectives: Total hip arthroplasty (THA) is considered the most successful surgical procedure in orthopedics. However, dislocation remains the main indication for surgical revision. New designs of dual mobility cups (DMC) have lowered the classical complications and have extended the indications of DMC in elective surgeries. Our aim is to assess the trend of DMC indications in THA as well as the incidence of their dislocation. Materials and Methods: We retrospectively reviewed all patients undergoing THA with DMC during the years 2015 and 2021. The original indication for DMC included patients sustaining neck of femur fractures (NOF#) and associated risk factors for dislocations. Five years later, DMC was considered our standard of care in total hip arthroplasty. The approach (anterolateral or posterolateral) was chosen by the surgeon according to his/her preferences, as was the implant. Data collected included patients' demographics, diagnosis, admission time, surgical approach, cup models, and inclination and complications. Patients sustaining a hip dislocation were prospectively reviewed and assessed for treatment received, new dislocations, and need for surgical revision. Two groups were created for the analysis according to the presence or absence of dislocation during follow-up. Results: In the analysis, 531 arthroplasties were included (mean age 72.2 years) with a mean follow-up of 2.86 years. The trend of indications for DMC increased from 16% of THA in 2015 to 78% of THA in 2021. We found a total of 8 dislocations (1.5%), none of them associated with elective surgery. Closed reduction was unsatisfactory in four cases (50%). There was one case of intraprosthetic dislocation. Dislocations were associated to smaller heads (22 mm) (1.5% vs. 25%, p = 0.008) and cups (51.2 mm vs. 48.7 mm, p = 0.038) and posterior approach (62.5% vs. 37.5%, p = 0.011). Conclusion: Dual mobility cups are a great option to reduce the risk of dislocation after a THA both in the neck of femur fractures and elective cases. The use of an anterolateral approach in THA after a neck or femur fracture might considerably decrease the risk of dislocation. abstract_id: PUBMED:25035820 Dual mobility cups in total hip arthroplasty. Total hip arthroplasty (THA) is considered one of the most successful surgical procedures in orthopaedics. With the increase in the number of THAs performed in the world in the next decades, reducing or preventing medical and mechanical complications such as post-operative THA instability will be of paramount importance, particularly in an emerging health care environment based on quality control and patient outcome. Dual mobility acetabular component (also known as unconstrained tripolar implant) was introduced in France at the end of the 1970s as an alternative to standard sockets, to reduce the risk of THA dislocation in patients undergoing primary THA in France. Dual mobility cups have recently gained wider attention in the United States as an alternative option in the prevention and treatment of instability in both primary and revision THA and offer the benefit of increased stability without compromising clinical outcomes and implant longevity. In this article, we review the use of dual mobility cup in total hip arthroplasty in terms of its history, biomechanics, outcomes and complications based on more than 20 years of medical literature. abstract_id: PUBMED:28276753 Dual mobility acetabular cup for total hip arthroplasty: use with caution. Introduction: Dual-mobility cups were introduced to reduce the incidence of dislocation after total hip arthroplasty (THA). The biomechanics of dual-mobility cups allow for greater range of motion, jump distance and better stability when compared to conventional cups in THA. These cups can be very beneficial in THA particularly when the hip instability is a concern such as elderly patients, obese, hip fracture patients, those with neuromuscular diseases, or patients undergoing revision THA for instability. Areas covered: An extensive search is conducted in PubMed and Google scholar to find past five years publications relevant to dual-mobility cups in THA. 225 articles were found which many of those were excluded because of: written in other languages, nearly same cohort in different journals, unavailable full text, and were not covering main fields of this article. We categorized the articles to four different aspects of the biomechanics, outcomes in primary, revision, and after hip fracture THA. Expert commentary: While the important role of the dual-mobility in addressing recurrent instability and reduction of instability in high risk patients undergoing primary THA has been demonstrated, the concern regarding the release of metal ion release, particularly in the younger and active patients, and prosthetic malfunction are worrisome. abstract_id: PUBMED:32773273 Monolithic Dual Mobility Cup Total Hip Arthroplasty Has High Complication Rates With Surgical Fixation in Elderly With Femur Neck Fracture. Background: This study is prospectively conducted to evaluate surgical complications of monolithic dual mobility cup total hip arthroplasty (THA) in elderly patients with fractured neck of the femur. Methods: Ninety-seven patients (97 hips) with displaced femoral neck fracture who gave informed consent for participation were prospectively enrolled. Their mean age was 76.6 years (range, 60-95 years), and the mean bone mineral density T-score of neck of the femur was -2.8 (range, -1.2 to -5.5). All patients underwent THA with monolithic dual mobility cup, and computed tomography scans were obtained to evaluate radiographic parameters including anteversion, inclination, and loosening of acetabular cups, and periprosthetic acetabular fractures. Results: With regard to cup orientation, mean inclination angle was 40.2° (range, 23.5°-63°) and mean anteversion was 32.6° (range, 7°-66.2°). The proportion of surgical outliers was 10.3% (10/97) in inclination and 35.1% (34/97) in anteversion. Early cup loosening within 2 weeks was detected in 2 hips. Periprosthetic acetabular fractures were identified in 6 hips (6/97, 6.2%). Of the 6 fractures, 5 nondisplaced fractures were healed with conservative management, but 1 fracture with displacement eventually led to cup loosening and the patient underwent revision surgery. Reoperation rate of the monolithic dual mobility cup was 4.1% (4/97). Conclusion: The use of the monolithic dual mobility was associated with improper cup fixation and periprosthetic acetabular fractures in the elderly with poor bone stock, although the dual mobility cup lowered the risk of early dislocation after THA. abstract_id: PUBMED:30612171 A comparative study about the incidence of dislocation and peri-prosthetic fracture between dual mobility versus standard cups after primary total hip arthroplasty. Purpose: Dislocation and peri-prosthetic fracture (PPF) are major reasons for revision THA (total hip arthroplasty). The main advantage of dual mobility (DM) cups is to minimize the incidence of dislocation compared to single mobility (SM) cups. We hypothesized that the use of DM would lead to an increased risk of PPF because of its greater stability. In contrast, standard cups would be at higher risk of dislocation. Methods: A retrospective comparative study was performed in our institution including 126 revision THAs between January 2013 and December 2017. Collected data included gender, age, BMI, Parker score, ASA score, the etiology for primary THA, type of cup (SM or DM), cortical index, Noble index, and the stem fixation. Results: Overall, 53 standard and 73 DM cups were included for study. In the dislocation group, 29 had standard cups (83%) and 6 had DM cups (17%). Dislocation was 12-fold higher in SM cups (p &lt; 0.001). In the PPF group, 24 had standard cups (26%) and 67 had DM cups (74%). PFF was 12-fold higher for DM cups (p &lt; 0.001). A higher Parker score and a higher cortical index had a protective effect on the risk of PPF (OR = 0.76 (p = 0.03), OR = 0.57 (p = 0.048)). Conclusion: The use of DM increased hip stability but led to a higher rate of PPF by load transfer on the femur. Further studies with larger cohort and follow-up are needed to confirm these findings and measure the incidence of these complications. abstract_id: PUBMED:32513027 Outcomes of total hip arthroplasty using dual-mobility cups for femoral neck fractures: a systematic review and meta-analysis. Objectives: Femoral neck fractures (FNFs), with up to 15% mortality, are prominent orthopaedic emergencies. After treating FNFs, dislocation is another challenge increasing morbidity, mortality and treatment costs substantially. The emerging dual-mobility cup (DMC) may decrease dislocation rates following total hip arthroplasty (THA) for FNFs. We performed a systematic review of literature reporting dislocation and mortality rates with DMC-THA for the treatment of FNFs. Methods: 2 authors independently searched PubMed (MEDLINE), Google Scholar and Cochrane library for studies reporting dislocation and mortality rates for FNFs treated with DMC-THA since inception up to January 2019. Data on outcomes of interest was extracted from all studies and assessed for eligibility for a meta-analysis. Results: Out of 522 search results, 18 studies were included in the systematic review and 4 in the meta-analysis. The mean rate of dislocation following DMC-THA for FNFs was found to be 1.87% ± 2.11, with a 1-year mortality rate of 14.0% ± 10.55. Results of meta-analysis showed that dislocation and 1-year postoperative mortality rates were significantly lower for DMC-THA with a risk ratio 0.31 (95% CI, 0.16-0.59; I2 = 0%, p = 0.0003) and 0.55 (0.40, 0.77; I2 = 0%, p = 0.003) respectively when compared to biploar hemiathroplasty (BHA). Conclusions: The mean dislocation and mortality rates in DMC-THA are lower than previously reported rates for THA with single cup and comparable to unipolar and bipolar hemiarthroplasty. Further research involving randomised control trials to assess differences in outcomes, longevity and cost-effectiveness needs to be conducted to make recommendations for the use of DMC in treating FNFs. abstract_id: PUBMED:38146519 Dual Mobility Cup in Fractures of the Femoral Neck in Neuromuscular Disorders and Cognitive Dysfunction Patients above 60 years-old. Objectives: Dislocation rate of total hip arthroplasty (THA) can be as high as 20% for patients with fracture neck of femur, which is a disastrous complication in these vulnerable patients. Numerous techniques, including bipolar arthroplasty and constrained liner, have been adopted to minimize the risk of dislocation. We aimed to evaluate the role of dual mobility Cups in treating patients with fractures of the femoral neck with high risk of postoperative dislocation due to neuromuscular instability disorders. Methods: A prospective cohort study was conducted (place is blinded as asked during submission), between 2016 and 2019, with a post-operative follow up period of two years. We included skeletally mature patients with femoral neck fractures having neuromuscular disorders and cognitive dysfunction who are candidates for THA above 60 years. Patients were then followed up clinically and radiographically at the clinic using Harris Hip Score (HHS) and x-rays at six weeks, six months, one year and two years postoperatively. Results: Twenty patients (20 hips) with femoral neck fractures with high risk of postoperative dislocation due to neuromuscular instability disorders undergoing dual mobility cup were included. The mean age of patients was 70.5 ±6.42 years. There is highly significant difference between HHS preoperatively and postoperatively (six weeks, six months and one, two years) p&lt;0.001.Infection occurred in one case (5 %), sciatic nerve injury occurred in one case (5%), and none of the patients had postoperative dislocation. Conclusion: Dual mobility cup is effective in preventing early dislocation in patients suffered from fracture neck of femur with muscle weakness due to neurologic disorders. abstract_id: PUBMED:35154999 Dislocation of Total Hip Replacement in Femoral Neck Fracture: Do Surgical Approach and Dual Mobility Implant Matter? Introduction: Total hip replacement (THR) in the neck of femur fracture in the elderly is associated with a higher risk of dislocation compared to hemiarthroplasty of hip or total hip replacement in the native hip. There is uncertainty regarding combining surgical approach, femoral head size, and the usage of single bearing or dual mobility to reduce the risk of dislocation. This study looks into the bearing of the prosthesis for posterior or lateral surgical hip approach as well as their head size to give a stable hip to these vulnerable groups of patients. Methods: Initial data were collected retrospectively from February 2017 till May 2019 from the electronic records database and clinical notes. Patients included in the study had a femoral neck fracture (age &gt;60 years) who underwent a total hip replacement. Subsequent data were collected prospectively from June 2019 to July 2020. Results: High rate of dislocation was found with posterior approach and single bearing prosthesis. However, if dual mobility prosthesis was used while using the posterior approach the dislocation rate was very low. Also, with lateral approach and single bearing prosthesis using large femoral head size, the dislocation rate was negligible. Conclusions: We recommend a dual mobility prosthesis for posterior approach THR and lateral approach with single-bearing hip replacement with large size femoral head. The dislocation rate is low using this principle irrespective of the surgical approach. abstract_id: PUBMED:34862038 Dual mobility cups do not reduce the revision risk for patients with acute femoral neck fracture: A matched cohort study from the Swedish Arthroplasty Register. Introduction: Dual mobility cups (DMC) are gaining in popularity, as a method to reduce the risk of dislocation after fracture-related hip arthroplasty. Our aim was to compare revision rate in general, as well as due to dislocation and infection, after DMC and conventional THA due to femoral neck fracture, for each of the two groups of lateral and posterior approach. Materials And Methods: This observational cohort study based on the Swedish Arthroplasty Register (SAR) compared 2242 patients with dual mobility cups (DMC) and 6726 with conventional total hip arthroplasty (cTHA), all due to acute fracture 2005-2019. This was after propensity score matching of 1:3 (age, gender, ASA). Kaplan-Meier survival analysis was used to investigate the 5-year revision rate after DMC and cTHA in the posterior and lateral approach groups. Results: The rates of revision in general, and due to dislocation or infection, were similar for DMC and cTHA in the different approach groups during the follow-up. The cumulative revision rate after posterior approach was 4.7% (95% CI 3.1-6.2) for DMC and 4.8% (3.8-5.7) for cTHA at 5 years. For lateral approach, the result was 2.3% (CI 1.3-3.3) for DMC and 3.7% (CI 3.0-4.4) for cTHA. The 5-year results for cumulative revision rate due to dislocation after posterior approach were 1.3% (0.4-2.2) for DMC and 2.2% (1.5-2.8) for cTHA. Corresponding results for lateral approach was 0.4% (0-0.8) for DMC and 0.7% (0.4-1) for cTHA. Discussion: With implant revision surgery as outcome, we could not detect any differences between DMC and cTHA. Further studies with high external validity are needed regarding total dislocation rate, patient-reported outcome, cost-effectiveness and learning curve, to make a definitive recommendation on the use of DMC for fracture cases. Conclusions: As treatment of acute femoral neck fractures, total hip arthroplasty with a dual mobility cup have similar outcome in terms of revisions in general, and due to dislocation or infections specifically, as one with conventional bearing. The similar outcome is regardless of surgical approach. Answer: Yes, dual mobility cups (DMCs) have been shown to reduce the risk of dislocation in total hip arthroplasty (THA) for fractured neck of femur in patients aged older than 75 years. A study reported that DMC-THA may prevent dislocation in intracapsular neck of femur (NOF) fracture in elderly patients, with a dislocation rate of 4.4% for the large articulation and 2.2% for intraprosthetic dislocation, all related to technical errors (PUBMED:26718778). Another study found that the use of DMCs in THA led to a dislocation rate of 1.5%, with none of the dislocations associated with elective surgery (PUBMED:35454367). Additionally, a systematic review and meta-analysis reported that the mean rate of dislocation following DMC-THA for femoral neck fractures (FNFs) was 1.87%, with a 1-year mortality rate of 14.0%, which is lower than previously reported rates for THA with single cup (PUBMED:32513027). Furthermore, a study on patients with femoral neck fractures with high risk of postoperative dislocation due to neuromuscular instability disorders found no postoperative dislocation after using dual mobility cups (PUBMED:38146519). However, it is important to note that while DMCs can reduce the risk of dislocation, they should be used with caution as there are concerns regarding metal ion release, particularly in younger and active patients, and prosthetic malfunction (PUBMED:28276753). Additionally, the use of monolithic dual mobility cups was associated with improper cup fixation and periprosthetic acetabular fractures in the elderly with poor bone stock, although they lowered the risk of early dislocation after THA (PUBMED:32773273). Therefore, while DMCs are effective in reducing dislocation risk in elderly patients undergoing THA for fractured neck of femur, careful consideration of patient-specific factors and surgical technique is necessary.